Adaptive chosen-ciphertext attack
https://en.wikipedia.org/wiki/Adaptive%20chosen-ciphertext%20attack

An adaptive chosen-ciphertext attack (abbreviated as CCA2) is an interactive form of chosen-ciphertext attack in which an attacker first sends a number of adaptively chosen ciphertexts to be decrypted, then uses the results to distinguish a target ciphertext without consulting the oracle on the challenge ciphertext itself. In an adaptive attack, the attacker is further allowed to ask adaptive queries after the target is revealed (but the target query itself is disallowed). This extends the indifferent (non-adaptive) chosen-ciphertext attack (CCA1), in which the second stage of adaptive queries is not allowed. Charles Rackoff and Dan Simon defined CCA2 and suggested a system building on the non-adaptive CCA1 definition and system of Moni Naor and Moti Yung (which was the first treatment of chosen-ciphertext attack immunity for public-key systems).
In certain practical settings, the goal of this attack is to gradually reveal information about an encrypted message, or about the decryption key itself. For public-key systems, adaptive chosen-ciphertext attacks are generally applicable only when the system has the property of ciphertext malleability — that is, a ciphertext can be modified in specific ways that have a predictable effect on the decryption of that message.
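As a concrete illustration of malleability, textbook RSA without padding is multiplicatively malleable: multiplying a ciphertext by s^e mod N multiplies the hidden plaintext by s. A minimal Python sketch, with deliberately tiny parameters chosen for the demo rather than taken from any real system:

```python
# Toy illustration of RSA ciphertext malleability (textbook RSA, no padding).
# Multiplying a ciphertext by s^e mod N multiplies the hidden plaintext by s:
# exactly the kind of predictable effect an adaptive CCA attacker exploits.
p, q, e = 61, 53, 17                        # tiny, insecure demo parameters
N = p * q                                   # 3233
d = pow(e, -1, (p - 1) * (q - 1))           # private exponent

m = 123                                     # victim's plaintext
c = pow(m, e, N)                            # victim's ciphertext

s = 2
c_mauled = (c * pow(s, e, N)) % N           # attacker mauls c without knowing m
assert pow(c_mauled, d, N) == (m * s) % N   # it decrypts, predictably, to s*m
```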
Practical attacks
Adaptive-chosen-ciphertext attacks were largely considered a theoretical concern, not manifested in practice, until 1998, when Daniel Bleichenbacher, then of Bell Laboratories, demonstrated a practical attack against systems using RSA encryption in concert with the PKCS #1 v1 encoding function, including a version of the Secure Sockets Layer (SSL) protocol used by thousands of web servers at the time.
The Bleichenbacher attack, also known as the million-message attack, took advantage of flaws within the PKCS #1 function to gradually reveal the content of an RSA-encrypted message. Doing this requires sending several million test ciphertexts to the decryption device (e.g., an SSL-equipped web server). In practical terms, this means that an SSL session key could be exposed in a reasonable amount of time, perhaps a day or less.
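The mechanism can be sketched with a simplified padding oracle. A real PKCS #1 v1.5 server leaks whether a decryption begins with the bytes 0x00 0x02; the toy below models conformance as membership in a band [2B, 3B) and brute-forces the surviving candidates, which is feasible only because the parameters are tiny assumptions for the demo. It shows the leak Bleichenbacher exploited, not his full interval-narrowing algorithm:

```python
# Simplified sketch of the leak behind the million-message attack: an oracle
# that only answers "is the padding conforming?" still lets an attacker who
# mauls the ciphertext (c * s^e mod N) winnow the possible plaintexts.
p, q, e = 61, 53, 17                     # toy RSA key, insecure by design
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))
B = 256                                  # toy "conforming" band: [2B, 3B)

def oracle(c: int) -> bool:
    """Pretend server: reveals only whether the decryption is conforming."""
    return 2 * B <= pow(c, d, N) < 3 * B

m = 600                                  # secret, conforming plaintext
c = pow(m, e, N)                         # ciphertext the attacker captured

candidates = set(range(2 * B, 3 * B))    # every conforming plaintext
s = 2
while len(candidates) > 1:
    if oracle((c * pow(s, e, N)) % N):   # one adaptive chosen-ciphertext query
        candidates = {x for x in candidates if 2 * B <= (x * s) % N < 3 * B}
    s += 1

print(candidates)                        # only the true plaintext remains: {600}
```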
With slight variations this vulnerability still exists in many modern servers, under the new name "Return Of Bleichenbacher's Oracle Threat" (ROBOT).
Preventing attacks
In order to prevent adaptive-chosen-ciphertext attacks, it is necessary to use an encryption or encoding scheme that limits ciphertext malleability, together with a proof of security of the system. After the theoretical and foundational development of CCA-secure systems, a number of schemes were proposed in the random oracle model. The most common standard for RSA encryption is Optimal Asymmetric Encryption Padding (OAEP). Unlike improvised schemes such as the padding used in early versions of PKCS #1, OAEP has been proven secure in the random oracle model. OAEP was incorporated into PKCS #1 as of version 2.0, published in 1998, as the now-recommended encoding scheme; the older scheme is still supported but not recommended for new applications. However, the gold standard for security is to show the system secure without relying on the random-oracle idealization.
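For orientation, OAEP's core is a two-round Feistel construction over the message and a random seed, so the padded block is randomized and any tampering scrambles it unpredictably. The sketch below shows only that structure; it omits the length checks, label hashing and exact MGF1 of PKCS #1 v2, and the mask generator G is a simplified stand-in:

```python
# Minimal sketch of the OAEP two-round Feistel structure (conceptual only;
# not the full PKCS #1 v2 construction and not safe to deploy).
import hashlib
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def G(seed: bytes, n: int) -> bytes:
    """Simplified mask generator (an MGF1-like counter construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def oaep_pad(m: bytes) -> bytes:
    r = os.urandom(32)               # fresh random seed per encryption
    X = xor(m, G(r, len(m)))         # round 1: mask the message with the seed
    Y = xor(r, G(X, 32))             # round 2: mask the seed with the result
    return X + Y                     # block handed to the raw RSA operation

def oaep_unpad(block: bytes, mlen: int) -> bytes:
    X, Y = block[:mlen], block[mlen:]
    r = xor(Y, G(X, 32))             # undo round 2
    return xor(X, G(r, mlen))        # undo round 1

msg = b"attack at dawn"
assert oaep_unpad(oaep_pad(msg), len(msg)) == msg
```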
Mathematical model
In complexity-theoretic cryptography, security against adaptive chosen-ciphertext attacks is commonly modeled using ciphertext indistinguishability (IND-CCA2).
Henryk Zygalski
https://en.wikipedia.org/wiki/Henryk%20Zygalski

Henryk Zygalski (15 July 1908 – 30 August 1978) was a Polish mathematician and cryptologist who worked at breaking German Enigma ciphers before and during World War II.
Life
Zygalski was born on 15 July 1908 in Posen, German Empire (now Poznań, Poland). He was, from September 1932, a civilian cryptologist with the Polish General Staff's Biuro Szyfrów (Cipher Bureau), housed in the Saxon Palace in Warsaw. He worked there with fellow Poznań University alumni and Cipher Bureau cryptology-course graduates Marian Rejewski and Jerzy Różycki. Together they developed methods and equipment for breaking Enigma messages.
In late 1938, in response to growing complexities in German encryption procedures, Zygalski designed the "perforated sheets," also known as "Zygalski sheets," a manual device for finding Enigma settings. This scheme, like the earlier "card catalog," was independent of the number of connections being used in the Enigma's plugboard, or commutator.
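The underlying principle can be modelled as set intersection: each observed "female" indicator corresponds to one perforated sheet, and stacking the sheets over a light source keeps only the rotor positions where every sheet has a hole. A toy model, with an invented grid size and hole density standing in for the real sheets:

```python
# Toy model of stacking Zygalski sheets. Each sheet is the set of rotor
# positions ("holes") consistent with one observed "female"; superimposing
# sheets intersects those sets, and light shines through only at positions
# consistent with every observation. Grid size and hole density here are
# illustrative assumptions, not the real sheets' dimensions.
import random

random.seed(1)
grid = {(x, y) for x in range(26) for y in range(26)}
true_setting = (7, 19)

def sheet_for_female() -> set:
    """Holes: the true setting plus coincidentally consistent positions."""
    return {true_setting} | set(random.sample(sorted(grid), 150))

candidates = grid
for _ in range(10):                   # roughly a dozen females sufficed
    candidates &= sheet_for_female()  # stack another sheet

print(candidates)                     # shrinks to {true_setting}
```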
After the war, he remained in exile in the United Kingdom and worked, until his retirement, as a lecturer in mathematical statistics at the University of Surrey. During this period he was prevented by the Official Secrets Act from speaking of his achievements in cryptology.
He died on 30 August 1978 in Liss, England. He was cremated, and his ashes were taken to London.
Recognition
Shortly before his death, Zygalski was honored by the Polish University in Exile with an honorary doctorate for his role in breaking Enigma.
In 2000 he was posthumously awarded by President Aleksander Kwaśniewski the Grand Cross of the Order of Polonia Restituta for his "outstanding contributions to the Republic of Poland".
In 2009 the Polish Post issued a commemorative stamp featuring Henryk Zygalski alongside fellow cryptologists Marian Rejewski and Jerzy Różycki.
In 2021 the Enigma Cipher Center, an educational and scientific institution dedicated to the Polish mathematicians who broke the Enigma cipher, including Henryk Zygalski, opened in Poznań.
See also
Cryptanalysis of the Enigma
List of cryptographers
List of Polish mathematicians
References
Władysław Kozaczuk, Enigma: How the German Machine Cipher Was Broken, and How It Was Read by the Allies in World War II, edited and translated by Christopher Kasparek, Frederick, MD: University Publications of America, 1984.
Whitfield Diffie
https://en.wikipedia.org/wiki/Whitfield%20Diffie

Bailey Whitfield 'Whit' Diffie (born June 5, 1944), ForMemRS, is an American cryptographer and one of the pioneers of public-key cryptography, along with Martin Hellman and Ralph Merkle. Diffie and Hellman's 1976 paper New Directions in Cryptography introduced a radically new method of distributing cryptographic keys that helped solve key distribution, a fundamental problem in cryptography. Their technique became known as Diffie–Hellman key exchange. The article stimulated the almost immediate public development of a new class of encryption algorithms, the asymmetric key algorithms.
After a long career at Sun Microsystems, where he became a Sun Fellow, Diffie served for two and a half years as Vice President for Information Security and Cryptography at the Internet Corporation for Assigned Names and Numbers (2010–2012). He has also served as a visiting scholar (2009–2010) and affiliate (2010–2012) at the Freeman Spogli Institute's Center for International Security and Cooperation at Stanford University, where he is currently a consulting scholar.
Education and early life
Diffie was born in Washington, D.C., the son of Justine Louise (Whitfield), a writer and scholar, and Bailey Wallys Diffie, who taught Iberian history and culture at City College of New York. His interest in cryptography began at "age 10 when his father, a professor, brought home the entire crypto shelf of the City College Library in New York."
At Jamaica High School in Queens, New York, Diffie "performed competently" but "never did apply himself to the degree his father hoped." Although he graduated with a local diploma, he did not take the statewide Regents examinations that would have awarded him an academic diploma because he had previously secured admission to Massachusetts Institute of Technology on the basis of "stratospheric scores on standardized tests." While he received a B.S. in mathematics from the institution in 1965, he remained unengaged and seriously considered transferring to the University of California, Berkeley (which he perceived as a more hospitable academic environment) during the first two years of his undergraduate studies. At MIT, he began to program computers (in an effort to cultivate a practical skill set) while continuing to perceive the devices "as very low class... I thought of myself as a pure mathematician and was very interested in partial differential equations and topology and things like that."
Career and research
From 1965 to 1969, he remained in Greater Boston as a research assistant for the MITRE Corporation in Bedford, Massachusetts. As MITRE was a defense contractor, this position enabled Diffie (a pacifist who opposed the Vietnam War) to avoid the draft. During this period, he helped to develop MATHLAB (an early symbolic manipulation system that served as the basis for Macsyma) and other non-military applications.
In November 1969, Diffie became a research programmer at the Stanford Artificial Intelligence Laboratory, where he worked on LISP 1.6 (widely distributed to PDP-10 systems running the TOPS-10 operating system) and correctness problems while cultivating interests in cryptography and computer security under the aegis of John McCarthy.
Diffie left SAIL to pursue independent research in cryptography in May 1973. As most of the then-current research in the field fell under the classified oversight of the National Security Agency, Diffie "went around doing one of the things I am good at, which is digging up rare manuscripts in libraries, driving around, visiting friends at universities." He was assisted by his new girlfriend and future wife, Mary Fischer.
In the summer of 1974, Diffie and Fischer met with a friend at the Thomas J. Watson Research Center (headquarters of IBM Research) in Yorktown Heights, New York, which housed one of the only nongovernmental cryptographic research groups in the United States. While group director Alan Konheim "couldn't tell [Diffie] very much because of a secrecy order," he advised him to meet with Martin Hellman, a young electrical engineering professor at Stanford University who was also pursuing a cryptographic research program. A planned half-hour meeting between Diffie and Hellman extended over many hours as they shared ideas and information.
Hellman then hired Diffie as a grant-funded part-time research programmer for the 1975 spring term. Under Hellman's sponsorship, Diffie also enrolled as a doctoral student in electrical engineering at Stanford in June 1975; however, he was once again unable to acclimate to "homework assignments [and] the structure" and eventually dropped out after failing to complete a required physical examination: "I didn't feel like doing it, I didn't get around to it." Although it is unclear when he dropped out, Diffie remained employed in Hellman's lab as a research assistant through June 1978.
In 1975–76, Diffie and Hellman criticized the NBS proposed Data Encryption Standard, largely because its 56-bit key length was too short to prevent brute-force attack. An audio recording survives of their review of DES at Stanford in 1976 with Dennis Branstad of NBS and representatives of the National Security Agency. Their concern was well-founded: subsequent history has shown not only that NSA actively intervened with IBM and NBS to shorten the key size, but also that the short key size enabled exactly the kind of massively parallel key crackers that Hellman and Diffie sketched out. When these were ultimately built outside the classified world (EFF DES cracker), they made it clear that DES was insecure and obsolete.
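The arithmetic behind the concern is short: a 56-bit key allows about 7.2 x 10^16 candidates, so a machine testing on the order of tens of billions of keys per second, roughly the rate commonly cited for the 1998 EFF DES cracker, exhausts the space in days:

```python
# Back-of-envelope keyspace arithmetic behind the critique of DES's 56-bit key.
# The search rate is the figure commonly cited for the 1998 EFF DES cracker.
keys = 2 ** 56
rate = 90e9                                            # ~90 billion keys/second
print(f"{keys:.2e} candidate keys")                    # ~7.21e+16
print(f"full search: {keys / rate / 86400:.1f} days")  # ~9.3 days, half on average
```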
From 1978 to 1991, Diffie was Manager of Secure Systems Research for Northern Telecom in Mountain View, California, where he designed the key management architecture for the PDSO security system for X.25 networks.
In 1991, he joined Sun Microsystems Laboratories in Menlo Park, California as a Distinguished Engineer, working primarily on public policy aspects of cryptography. Diffie remained with Sun, serving as its Chief Security Officer and as a Vice President until November 2009. He was also a Sun Fellow.
Diffie has also been a visiting professor at the Information Security Group based at Royal Holloway, University of London.
In May 2010, Diffie joined the Internet Corporation for Assigned Names and Numbers (ICANN) as Vice President for Information Security and Cryptography, a position he left in October 2012.
Diffie is a member of the technical advisory boards of BlackRidge Technology, and Cryptomathic where he collaborates with researchers such as Vincent Rijmen, Ivan Damgård and Peter Landrock.
In 2018, he joined Zhejiang University, China, as a visiting professor, where Cryptic Labs ran a two-month course.
Public key cryptography
In the early 1970s, Diffie worked with Martin Hellman to develop the fundamental ideas of dual-key, or public-key, cryptography. They published their results in 1976, solving one of the fundamental problems of cryptography, key distribution, and essentially breaking the monopoly under which government entities had controlled cryptographic technology and the terms on which other individuals could have access to it. "From the moment Diffie and Hellman published their findings..., the National Security Agency's crypto monopoly was effectively terminated. ... Every company, every citizen now had routine access to the sorts of cryptographic technology that not many years ago ranked alongside the atom bomb as a source of power." (Steven Levy, "Battle of the Clipper Chip", New York Times Magazine, July 12, 1994, pp. 44–51.)
The solution has become known as Diffie–Hellman key exchange.
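The exchange itself fits in a few lines. The toy sketch below uses the familiar textbook group (p = 23, g = 5), purely for illustration; real deployments use primes of thousands of bits or elliptic-curve groups:

```python
# Toy Diffie-Hellman key exchange over a tiny group (illustrative only).
import random

p, g = 23, 5                          # public: small prime and generator

a = random.randrange(2, p - 1)        # Alice's private exponent
b = random.randrange(2, p - 1)        # Bob's private exponent

A = pow(g, a, p)                      # Alice sends g^a mod p in the clear
B = pow(g, b, p)                      # Bob sends g^b mod p in the clear

# Each side combines its own secret with the other's public value; both
# arrive at g^(ab) mod p, which an eavesdropper who sees only A and B
# cannot feasibly compute for realistically sized groups.
assert pow(B, a, p) == pow(A, b, p)
shared_secret = pow(B, a, p)
```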
Publications
Privacy on the Line, with Susan Landau, 1998. An updated and expanded edition was published in 2007.
New Directions in Cryptography, with Martin Hellman, 1976.
Awards and honors
Together with Martin Hellman, Diffie won the 2015 Turing Award, widely considered the most prestigious award in the field of computer science. The citation for the award was: "For fundamental contributions to modern cryptography. Diffie and Hellman's groundbreaking 1976 paper, 'New Directions in Cryptography', introduced the ideas of public-key cryptography and digital signatures, which are the foundation for most regularly-used security protocols on the internet today."
Diffie received an honorary doctorate from the Swiss Federal Institute of Technology in 1992. He is also a fellow of the Marconi Foundation and visiting fellow of the Isaac Newton Institute. He has received various awards from other organisations. In July 2008, he was also awarded a Degree of Doctor of Science (Honoris Causa) by Royal Holloway, University of London.
He was also awarded the IEEE Donald G. Fink Prize Paper Award in 1981 (together with Martin E. Hellman), The Franklin Institute's Louis E. Levy Medal in 1997, a Golden Jubilee Award for Technological Innovation from the IEEE Information Theory Society in 1998, and the IEEE Richard W. Hamming Medal in 2010. In 2011, Diffie was inducted into the National Inventors Hall of Fame and named a Fellow of the Computer History Museum "for his work, with Martin Hellman and Ralph Merkle, on public key cryptography." Diffie was elected a Foreign Member of the Royal Society (ForMemRS) in 2017. Diffie was also elected a member of the National Academy of Engineering in 2017 for the invention of public key cryptography and for broader contributions to privacy.
Personal life
Diffie self-identifies as an iconoclast. He has stated that he "was always concerned about individuals, an individual's privacy as opposed to government secrecy."
Further reading
Steven Levy, Crypto: How the Code Rebels Beat the Government — Saving Privacy in the Digital Age, 2001.
Oral history interview with Martin Hellman Oral history interview 2004, Palo Alto, California. Charles Babbage Institute, University of Minnesota, Minneapolis. Hellman describes his invention of public key cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University in the mid-1970s. He also relates his subsequent work in cryptography with Steve Pohlig (the Pohlig–Hellman algorithm) and others. Hellman addresses the National Security Agency's (NSA) early efforts to contain and discourage academic work in the field, the Department of Commerce's encryption export restrictions, and key escrow (the so-called Clipper chip). He also touches on the commercialization of cryptography with RSA Data Security and VeriSign.
Wired Magazine biography of Whitfield Diffie
Crypto dream team Diffie & Hellman wins 2015 "Nobel Prize of Computing", Network World.
External links
Cranky Geeks Episode 133
Interview with Whitfield Diffie on Chaosradio Express International
Cranky Geeks Episode 71
Risking Communications Security: Potential Hazards of the Protect America Act
RSA Conference 2010 USA: The Cryptographers Panel 1/6, video with Diffie participating on the Cryptographer's Panel, April 21, 2009, Moscone Center, San Francisco
StuffIt
https://en.wikipedia.org/wiki/StuffIt

StuffIt was a family of computer software utilities for archiving and compressing files. Originally produced for the Macintosh, versions for Microsoft Windows, Linux (x86), and Sun Solaris were later created. The proprietary compression format used by the StuffIt utilities is also termed StuffIt.
In December 2019, Smith Micro Software, the product's most recent owner and developer, officially announced that StuffIt had reached its end of life and that StuffIt products would no longer be developed. One last update came out in December 2020, after the launch of the Apple M1 architecture, supporting both M1 and Intel Mac systems through a universal binary.
Overview
StuffIt was originally developed in the summer of 1987 by Raymond Lau, who was then a student at Stuyvesant High School in New York City. It combined the fork-combining capabilities of utilities such as MacBinary with newer compression algorithms similar to those used in ZIP. Compared to existing utilities on the Mac, notably PackIt, StuffIt offered "one step" operation and higher compression ratios. By the fall of 1987 StuffIt had largely replaced PackIt in the Mac world, with many software sites even going so far as to convert existing PackIt archives to save more space.
StuffIt soon became very popular and Aladdin Systems was formed to market it (the last shareware release by Lau was version 1.5.1). They split the product line in two, offering StuffIt Classic in shareware and StuffIt Deluxe as a commercial package. Deluxe added a variety of additional functions, including additional compression methods and integration into the Mac Finder to allow files to be compressed from a "Magic Menu", or seamlessly browse inside and edit compressed files without expanding them using "True Finder Integration".
StuffIt was upgraded several times, and Lau removed himself from direct development as major upgrades to the "internal machinery" were rare. Because new features and techniques appeared regularly on the Macintosh platform, the shareware utility Compact Pro emerged as a competitor to StuffIt in the early 1990s.
A major competitive upgrade followed, accompanied by the release of the freeware StuffIt Expander, to make the format more universally readable, as well as the shareware StuffIt Lite which made it easier to produce. Prior to this anyone attempting to use the format needed to buy StuffIt, making Compact Pro more attractive. This move was a success, and Compact Pro subsequently fell out of use.
Several other Mac compression utilities appeared and disappeared during the 1990s, but none became a real threat to StuffIt's dominance. The only ones to see any widespread use were special-purpose "disk expanders" like DiskDoubler and SuperDisk!, which served a different niche. Apparently as a side-effect, StuffIt once again saw few upgrades. The file format changed in a number of major revisions, leading to incompatible updates. PC-based formats long surpassed the original StuffIt format in terms of compression, notably newer systems like RAR and 7z. These had little impact on the Mac market, as most of these never appeared in an easy-to-use program on the Mac.
With the introduction of Mac OS X, newer Mac software no longer used resource forks and needed nothing beyond the built-in Unix utilities such as gzip and tar. Numerous programs "wrapping" these utilities were distributed, and since these files could be opened on any machine, they were considerably more practical than StuffIt in an era when most data is cross-platform. With the release of OS X Public Beta, Aladdin Systems released StuffIt 6.0, which runs under OS X.
Although it was late to market, Aladdin Systems introduced the completely new StuffIt X format in September 2002 with StuffIt Deluxe 7.0 for Macintosh. It was designed to be extensible, support more compression methods, support long file names, and support Unix and Windows file attributes. StuffIt X improves on the original StuffIt format and its descendants by adding multiple compression algorithms, such as PPM and BWT, alongside LZW-type compression. It also added a "block mode" option, error-correcting "redundancy" options to protect against data loss, and several encryption options. In January 2005, JPEG compression was added as a StuffIt X compression option (see the related 'SIF Format' below).
From the mid-1990s until the 2005 acquisition by Smith Micro Software, coinciding with the release of Mac OS X v10.4 "Tiger," StuffIt Expander came bundled with the Macintosh operating system.
Although Mac files generally did not use filename extensions, one of StuffIt's primary uses was to allow Mac files to be stored on non-Mac systems where extensions were required.
So, StuffIt-compressed files save the resource forks of the Macintosh files inside them, and typically have the extension .sit. Newer (non-backwards-compatible) StuffIt X-compressed files carry the file extension .sitx. Encrypted StuffIt archives created with the now-discontinued Private File utility, and StuffIt-compressed ShrinkWrap disk images, carry their own extensions. However, a Classic Mac OS version of StuffIt is needed to mount the images or convert them to a newer format readable in macOS.
Smith Micro Software offers free downloads of StuffIt Expander for Mac and Windows, which expands (uncompresses) files compressed using the StuffIt and StuffIt X format, as well as many other compressed, encoded, encrypted and segmented formats. The shareware application DropStuff permits the compressing of files into the StuffIt X format.
The StuffIt and StuffIt X formats remain, unlike some other file compression formats, proprietary, and Smith Micro Software charges license fees for their use in other programs. Given this, few alternative programs support the format.
There was also a "self-expanding" variant of StuffIt files, with a .sea extension, that runs as an executable. A utility exists to turn such an executable into a vanilla .sit file.
Derivative products
StuffIt Image Format (SIF)
Early in 2005, a new JPEG compression system was released that regularly obtained compression in the order of 25% (meaning a compressed file size 75% of the original file size) without any further loss of image quality and with the ability to rebuild the original file, not just the original image. (ZIP-like programs typically achieve JPEG compression rates in the order of 1 to 3%. Programs that optimize JPEGs without regard for the original file, only the original image, obtain compression rates from 3 to 10% (depending on the efficiency of the original JPEG). Programs that use the rarely implemented arithmetic coding option available to the JPEG standard typically achieve rates around 12%.)
The new technique was implemented as a StuffIt X format option in their StuffIt Deluxe product. They have also proposed a new image format known as SIF, which simply consists of a single JPEG file compressed using this new technique.
Pending the filing of their patent, the details of this algorithm are being kept as a trade secret. Some details have been disclosed: the high JPEG recompression rate is achieved by undoing the last step of the JPEG compression itself (the Huffman encoding of quantized transform coefficients). The transform coefficients are instead compressed by a more efficient algorithm (a predictive model based on the DC coefficients of neighboring blocks). Similar techniques are also applied to other image file formats such as GIF and TIFF, and even to the MP3 music file format. By means of decomposition, the relatively high compression rates for individual file formats can also be achieved for container file formats such as PDF, PSD and even ZIP.
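A toy calculation makes the disclosed DC-prediction idea concrete: neighbouring blocks' DC coefficients are strongly correlated in natural images, so coding prediction residuals takes far fewer bits than coding the coefficients themselves. The sketch below uses synthetic data and zeroth-order entropy as a stand-in for StuffIt's undisclosed coder:

```python
# Why predicting DC coefficients from neighbours pays off: the residuals of
# a smooth signal have far lower entropy than the raw values. Synthetic data
# and zeroth-order entropy stand in for real JPEG blocks and a real coder.
import math
import random
from collections import Counter

def total_entropy_bits(symbols) -> float:
    """Zeroth-order entropy of the whole stream, in bits."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values()) * n

random.seed(0)
dc = [100]                                   # smoothly varying DC coefficients
for _ in range(4999):
    dc.append(dc[-1] + random.randint(-3, 3))

residuals = [dc[0]] + [dc[i] - dc[i - 1] for i in range(1, len(dc))]

print(f"raw DC stream: {total_entropy_bits(dc):,.0f} bits")
print(f"DC residuals:  {total_entropy_bits(residuals):,.0f} bits")  # far fewer
```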
StuffIt Wireless
On July 5, 2005, Smith Micro Software announced their acquisition and intention to expand the new JPEG recompression technique to wireless platforms and other file formats. The initial press release and preliminary information saw the first use of the title “StuffIt Wireless.”
StuffIt Expander
StuffIt Expander is a proprietary, freeware, closed source, decompression software utility developed by Allume Systems (a subsidiary of Smith Micro Software formerly known as Aladdin Systems). It runs on the classic Mac OS, macOS, and Microsoft Windows. Prior to 2011, a Linux version had also been available for download.
Notable features
Duplicate Folding
Duplicate Folding is a feature which saves even more space by only keeping one copy of a duplicate file in an archive.
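A minimal sketch of how such deduplication might work, using content hashing; the fold_duplicates helper and its manifest layout are illustrative assumptions, not StuffIt's actual on-disk format:

```python
# Sketch of "duplicate folding": keep one stored copy per distinct file
# content, and let every duplicate entry in the archive reference that copy.
import hashlib
from pathlib import Path

def fold_duplicates(paths: list[Path]) -> tuple[dict, dict]:
    store: dict[str, bytes] = {}    # content hash -> the single stored copy
    manifest: dict[str, str] = {}   # archive entry name -> hash it references
    for path in paths:
        data = path.read_bytes()
        digest = hashlib.sha256(data).hexdigest()
        store.setdefault(digest, data)   # only the first copy is kept
        manifest[str(path)] = digest
    return store, manifest
```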
Issues
Backwards compatibility
Changes to the Stuffit compression format render previous versions of Stuffit or software using its API unable to decompress newer archives, necessitating installation of new versions. This incompatibility can be inconvenient for work flows where timely execution is of importance, or where the intended recipient's system is not capable of running newer versions of Stuffit. Though users are able to create archives in a legacy format, this functionality is not clearly exposed.
Alternatives
macOS includes Archive Utility, which decompresses the legacy open formats ZIP, GZIP, and BZIP2, and creates ZIP archives. Since Mac OS X 10.3 (Panther), it preserves resource forks in the ZIP format, so StuffIt is no longer a requirement for Mac file compression. ZIP is also a de facto standard, making it more widely accepted for archives and sharing.
While StuffIt used to be a standard way of packaging Mac software for download, macOS native compressed disk images (DMG) have largely replaced this practice.
StuffIt might still be used in situations where its specific features are required (archive editing/browsing, better compression, JPEG compression, encryption, old packages). An open-source alternative is The Unarchiver, although it does not support the latest versions of the StuffIt file formats. Some third-party software, such as the Macintosh Finder replacement Path Finder, uses the licensed StuffIt SDK to gain all the features of StuffIt.
See also
List of archive formats
List of file archivers
Comparison of file archivers
DiskDoubler
External links
StuffIt official website
Raymond Lau's home page
JPEG Compression Test
Conversation with Raymond Lau
Stuffit Method 15 compression format (Arsenic)
Blind signature
https://en.wikipedia.org/wiki/Blind%20signature

In cryptography a blind signature, as introduced by David Chaum, is a form of digital signature in which the content of a message is disguised (blinded) before it is signed. The resulting blind signature can be publicly verified against the original, unblinded message in the manner of a regular digital signature. Blind signatures are typically employed in privacy-related protocols where the signer and message author are different parties. Examples include cryptographic election systems and digital cash schemes.
An often-used analogy to the cryptographic blind signature is the physical act of a voter enclosing a completed anonymous ballot in a special carbon paper lined envelope that has the voter's credentials pre-printed on the outside. An official verifies the credentials and signs the envelope, thereby transferring his signature to the ballot inside via the carbon paper. Once signed, the package is given back to the voter, who transfers the now signed ballot to a new unmarked normal envelope. Thus, the signer does not view the message content, but a third party can later verify the signature and know that the signature is valid within the limitations of the underlying signature scheme.
Blind signatures can also be used to provide unlinkability, which prevents the signer from linking the blinded message it signs to a later un-blinded version that it may be called upon to verify. In this case, the signer's response is first "un-blinded" prior to verification in such a way that the signature remains valid for the un-blinded message. This can be useful in schemes where anonymity is required.
Blind signature schemes can be implemented using a number of common public key signing schemes, for instance RSA and DSA. To perform such a signature, the message is first "blinded", typically by combining it in some way with a random "blinding factor". The blinded message is passed to a signer, who then signs it using a standard signing algorithm. The resulting message, along with the blinding factor, can be later verified against the signer's public key. In some blind signature schemes, such as RSA, it is even possible to remove the blinding factor from the signature before it is verified. In these schemes, the final output (message/signature) of the blind signature scheme is identical to that of the normal signing protocol.
Uses
Blind signature schemes see a great deal of use in applications where sender privacy is important. This includes various "digital cash" schemes and voting protocols.
For example, the integrity of some electronic voting system may require that each ballot be certified by an election authority before it can be accepted for counting; this allows the authority to check the credentials of the voter to ensure that they are allowed to vote, and that they are not submitting more than one ballot. Simultaneously, it is important that this authority does not learn the voter's selections. An unlinkable blind signature provides this guarantee, as the authority will not see the contents of any ballot it signs, and will be unable to link the blinded ballots it signs back to the un-blinded ballots it receives for counting.
Blind signature schemes
Blind signature schemes exist for many public key signing protocols. More formally, a blind signature scheme is a cryptographic protocol that involves two parties: a user Alice who wants to obtain signatures on her messages, and a signer Bob who is in possession of his secret signing key. At the end of the protocol Alice obtains Bob's signature on a message m without Bob learning anything about the message. This intuition of not learning anything is hard to capture in mathematical terms. The usual approach is to show that for every (adversarial) signer, there exists a simulator that can output the same information as the signer. This is similar to the way zero-knowledge is defined in zero-knowledge proof systems.
Blind RSA signatures
One of the simplest blind signature schemes is based on RSA signing. A traditional RSA signature is computed by raising the message m to the secret exponent d modulo the public modulus N. The blind version uses a random value r, such that r is relatively prime to N (i.e. gcd(r, N) = 1). r is raised to the public exponent e modulo N, and the resulting value $r^e \bmod N$ is used as a blinding factor. The author of the message computes the product of the message and blinding factor, i.e.

$$m' \equiv m \cdot r^e \pmod{N},$$

and sends the resulting value $m'$ to the signing authority. Because r is a random value and the mapping $r \mapsto r^e \bmod N$ is a permutation, it follows that $r^e \bmod N$ is random too. This implies that $m'$ does not leak any information about m. The signing authority then calculates the blinded signature $s'$ as:

$$s' \equiv (m')^d \pmod{N}.$$

$s'$ is sent back to the author of the message, who can then remove the blinding factor to reveal s, the valid RSA signature of m:

$$s \equiv s' \cdot r^{-1} \pmod{N}.$$

This works because RSA keys satisfy the equation $r^{ed} \equiv r \pmod{N}$, and thus

$$s \equiv s' \cdot r^{-1} \equiv (m \cdot r^e)^d \cdot r^{-1} \equiv m^d \cdot r^{ed} \cdot r^{-1} \equiv m^d \cdot r \cdot r^{-1} \equiv m^d \pmod{N},$$

hence s is indeed the signature of m.
In practice, the property that signing one blinded message produces at most one valid signed message is usually desired. This means one vote per signed ballot in elections, for example. This property does not hold for the simple scheme described above: the original message and the unblinded signature are valid, but so are the blinded message and the blind signature, and possibly other combinations given a clever attacker. A solution to this is to blind-sign a cryptographic hash of the message, not the message itself.
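The whole protocol can be traced end to end with toy numbers. The sketch below uses an insecure textbook key purely for illustration (and blinds the raw message rather than a padded hash, which, as noted above, real schemes must not do):

```python
# Toy blind RSA signature: blind, sign, unblind, verify (insecure textbook
# parameters, for illustration only).
from math import gcd

p, q, e = 61, 53, 17
N = p * q                                # 3233
d = pow(e, -1, (p - 1) * (q - 1))        # signer's secret exponent

m = 42                                   # message, as an integer < N
r = 19                                   # blinding factor
assert gcd(r, N) == 1

blinded = (m * pow(r, e, N)) % N         # author: m' = m * r^e mod N
s_blind = pow(blinded, d, N)             # signer: s' = (m')^d mod N, sees only m'
s = (s_blind * pow(r, -1, N)) % N        # author: s = s' * r^-1 mod N

assert pow(s, e, N) == m                 # s verifies as an RSA signature on m
```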
Dangers of RSA blind signing
RSA is subject to the RSA blinding attack, through which it is possible to be tricked into decrypting a message by blind-signing another message. Since the signing process is equivalent to decrypting with the signer's secret key, an attacker can provide a blinded version of a message encrypted with the signer's public key for them to sign. The encrypted message would usually be some secret information which the attacker observed being sent encrypted under the signer's public key, and which the attacker wants to learn more about. Write the observed ciphertext as $c \equiv m^e \pmod{N}$, where $c$ is the encrypted version of the message $m$. The attacker blinds it as

$$c' \equiv c \cdot r^e \pmod{N}$$

and submits $c'$ for signing. When the signer returns $s' \equiv (c')^d \pmod{N}$ and the attacker removes the blinding, the cleartext is easily extracted:

$$s' \cdot r^{-1} \equiv (c \cdot r^e)^d \cdot r^{-1} \equiv c^d \cdot r^{ed} \cdot r^{-1} \equiv m^{ed} \cdot r \cdot r^{-1} \equiv m \pmod{N},$$

since $ed \equiv 1 \pmod{\varphi(N)}$, where $\varphi$ refers to Euler's totient function. The message is now easily obtained.
This attack works because in this blind signature scheme the signer signs the message directly. By contrast, in an unblinded signature scheme the signer would typically use a padding scheme (e.g. by instead signing the result of a cryptographic hash function applied to the message, instead of signing the message itself), however since the signer does not know the actual message, any padding scheme would produce an incorrect value when unblinded. Due to this multiplicative property of RSA, the same key should never be used for both encryption and signing purposes.
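Run with the same style of toy key, the attack is simply the blinding protocol pointed at a captured ciphertext instead of a message (again, illustrative textbook parameters):

```python
# Toy run of the RSA blinding attack: the signer is turned into a decryption
# oracle for a ciphertext c = m^e mod N (textbook RSA, no padding).
p, q, e = 61, 53, 17
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))        # the signer's secret exponent

secret = 1234
c = pow(secret, e, N)                    # ciphertext the attacker observed

r = 29                                   # attacker's blinding factor
blinded = (c * pow(r, e, N)) % N         # looks unrelated to c to the signer
signed = pow(blinded, d, N)              # signer "signs", i.e. decrypts, it

recovered = (signed * pow(r, -1, N)) % N
assert recovered == secret               # the attacker learns the plaintext
```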
See also
Dining cryptographers protocol
Electronic money
External links
Security of Blind Signatures Under Aborts
Implementation of Blind Signature in Java
Janus Friis
https://en.wikipedia.org/wiki/Janus%20Friis

Janus Friis (born 26 June 1976) is a Danish entrepreneur best known for co-founding the file-sharing application Kazaa and the peer-to-peer telephony application Skype. In September 2005, he and his business partner Niklas Zennström sold Skype to eBay for $2.6 billion. Friis maintained an ownership interest in Skype through Silver Lake Partners, which sold Skype to Microsoft for $8.5 billion in May 2011.
Friis and Zennström also developed Joost—an interactive software application for distributing TV shows and other forms of video content over the Web. The assets of this service were sold to Adconion Media Group in November 2009. Independently, Friis founded video streaming startup Vdio in 2011.
Friis and Ahti Heinla founded Starship Technologies in 2014, to develop small self-driving delivery robots.
Career
Friis had no formal higher education, dropping out of high school before starting a job at the help desk of CyberCity, one of Denmark's first Internet service providers. He met Zennström in 1996. At that time, Zennström headed Tele2 in Denmark, and Friis was hired to run its customer support. Friis and Zennström worked together at Tele2 to launch get2net, another Danish ISP, and the portal everyday.com.
After this, the partners decided to leave Tele2. Friis moved into Zennström's small apartment in Amsterdam in January 2000 where they started developing KaZaA, the company responsible for the most popular software for use with the FastTrack file sharing network protocol. Janus Friis and Niklas Zennström developed the FastTrack protocol in 2001.
From the success of KaZaA's peer-to-peer technology the duo co-founded Joltid, a software company developing and marketing peer-to-peer solutions and peer-to-peer traffic optimization technologies to companies.
Friis is also co-founder of Altnet, a network that sells commercial music to KaZaA users.
Friis founded the online music streaming service Rdio with Zennström in 2010. It filed for bankruptcy in November 2015 and announced the sale of assets to Pandora Radio for $75 million.
In 2012, Friis co-founded Wire, a secure collaboration platform that uses end-to-end encryption to protect digital assets.
Friis and Ahti Heinla founded Starship Technologies in 2014, to develop small self-driving delivery robots. In September 2016, the robots took to the streets in San Francisco in a test authorized by the city.
Awards
Friis was named in Time Magazine's list of 100 most influential people in 2006.
In 2006 Janus Friis received the prestigious award "IT-prisen" ("The IT Prize") in his home country, given by the Danish IT industry and IDG, for his work and innovation.
He and Zennström were also the co-recipients of the 2006 Wharton Infosys Business Transformation Award, given to business and individuals who have used information technology in a way that changed an industry or society as a whole.
Personal life
He was engaged to Danish recording artist Aura Dione, but the couple split up in April 2015.
External links
Atomico Ventures - Friis' venture capital firm
The Skype Guys: The 2006 TIME 100
Wire - new project by Janus Friis
British Council
https://en.wikipedia.org/wiki/British%20Council

The British Council is a British organisation specialising in international cultural and educational opportunities. It works in over 100 countries: promoting a wider knowledge of the United Kingdom and the English language (and the Welsh language in Argentina); encouraging cultural, scientific, technological and educational co-operation with the United Kingdom. The organisation has been called a soft power extension of UK foreign policy, as well as a tool for propaganda.
The British Council is governed by a Royal Charter. It is also a public corporation and an executive nondepartmental public body (NDPB), sponsored by the Foreign, Commonwealth and Development Office. Its headquarters are in Stratford, London. Its Chairman is Stevie Spring and its Chief Executive is Scott McDonald.
History
1934: British Foreign Office officials created the "British Committee for Relations with Other Countries" to support English education abroad, promote British culture and fight the rise of fascism. The name quickly became British Council for Relations with Other Countries.
1936: The organisation's name was officially shortened to the British Council.
1938: The British Council opened its first four offices, in Bucharest (Romania), Cairo (Egypt), Lisbon (Portugal) and Warsaw (Poland). The offices in Portugal are currently the oldest in continuous operation in the world.
1940: King George VI granted the British Council a Royal Charter for promoting "a wider knowledge of [the United Kingdom] and the English language abroad and developing closer cultural relations between [the UK] and other countries".
1942: The British Council undertook a promotion of British culture overseas. The music section of the project was a recording of significant recent compositions by British composers: E.J. Moeran's Symphony in G minor was the first work to be recorded under this initiative, followed by recordings of Walton's Belshazzar's Feast, Bliss's Piano Concerto, Bax's Third Symphony, and Elgar's The Dream of Gerontius.
1944: In August, after the liberation of Paris, Austin Gill was sent by the council to reestablish the Paris office, which soon had tours by the Old Vic company, Julian Huxley and T. S. Eliot.
1946: The British Council collected handicraft products from crafts that were being practised in the British countryside for an "Exhibition of Rural Handicrafts from Great Britain" that travelled to Australia and New Zealand. The majority of the collection was sold to the Museum of English Rural Life in 1960 and 1961.
2007: The Russian Foreign Ministry ordered the British Council to close its offices outside Moscow. The Ministry alleged that it had violated Russian tax regulations, a move that British officials claimed was a retaliation over the British expulsion of Russian diplomats allegedly involved with the poisoning of Alexander Litvinenko. This caused the British Council to cease carrying out all English-language examinations in Russia from January 2008. In early 2009, a Russian arbitration court ruled that the majority of the tax claims, valued at $6.6 million, were unjustified.
2011: On 19 August, a group of armed men attacked the British Council office in Kabul, the capital of Afghanistan, killing at least 12 people – none of them British – and temporarily took over the compound. All the attackers were killed in counter-attacks by forces guarding the compound. The British Council office was relocated to the British Embassy compound, as the British Council compound was destroyed in the suicide attack.
2013: The British Council in Tripoli, Libya, was targeted by a car bomb on the morning of 23 April. Diplomatic sources were reported as saying that "the bombers were foiled as they were preparing to park a rigged vehicle in front of the compound gate". The attempted attack was simultaneous with the attack on the French Embassy in Tripoli the same day, which injured two French security guards, one severely, and wounded several residents in neighbouring houses. A jihadist group calling itself the Mujahedeen Brigade, possibly linked to Al-Qaeda in the Islamic Maghreb, was suspected.
Organisation
The British Council is a charity governed by Royal Charter. It is also a public corporation and an executive nondepartmental public body (NDPB), sponsored by the Foreign, Commonwealth and Development Office. Its headquarters are in Stratford, London. Its chair is Stevie Spring, and its CEO Scott McDonald.
The British Council's total income in 2014–15 was £973 million, principally made up of £154.9 million in grant-in-aid received from the Foreign, Commonwealth and Development Office; £637 million in income from fees and teaching and examinations services; and £164 million from contracts.
The British Council works in more than 100 countries: promoting a wider knowledge of the UK and the English language; encouraging cultural, scientific, technological and educational understanding and co-operation; changing people's lives through access to UK education, skills, qualifications, culture and society; and attracting people who matter to the future of the UK and engaging them with the UK's culture, educational opportunities and its diverse, modern, open society.
In 2014–15 the British Council spent: £489 million developing a wider knowledge of the English language; £238 million encouraging educational co-operation and promoting the advancement of education; £155 million building capacity for social change; £80 million encouraging cultural, scientific and technological co-operation; and £10 million on governance, tax and trading expenses.
Notable activity
English and examinations
The British Council offers face-to-face teaching in more than 80 teaching centres in more than 50 countries.
Three million candidates took UK examinations with the British Council in more than 850 towns and cities in 2014–15.
The British Council jointly runs the global IELTS English-language standardised test with University of Cambridge ESOL Examinations and IDP Education Australia. Over 2.5 million IELTS tests were delivered in 2014–15.
Massive Open Online Course (MOOC)
In 2014, the British Council launched its first MOOC, Exploring English: Language and Culture, on the UK social learning platform FutureLearn. This was accessed by over 230,000 people.
English for peace
"Peacekeeping English" is a collaboration between the British Council, the Foreign, Commonwealth and Development Office and the Ministry of Defence to improve the English-language skills of military personnel through the Peacekeeping English Project (PEP). PEP is helping train approximately 50,000 military and police service personnel in 28 countries, amongst them Libya, Ethiopia and Georgia.
Mobility programmes
Education UK
In 2013, the British Council relaunched the global website Education UK for international students interested in a UK education. The site receives 2.2 million visitors per year and includes a search tool for UK courses and scholarships, advice and articles about living and studying in the UK.
Erasmus+
From 2014 to 2020, the British Council and Ecorys UK jointly administered almost €1 billion of the €14.7 billion Erasmus+ programme, offering education, training, youth and sport opportunities for young people in the UK. It was expected that nearly 250,000 people would have undertaken activities abroad with the programme.
Schools
Connecting Classrooms
Over 16,000 schools have taken part in an international school partnership or benefited from teacher training through the British Council Connecting Classrooms programmes.
Arts and culture
ACCELERATE
ACCELERATE was a leadership programme for Aboriginal and Torres Strait Islander people in the creative arts, run jointly by the British Council and the Australia Council in partnership with Australian state arts agencies, between 2009 and 2016. During that time, 35 people participated in the programme, with many alumni going on to excel in their fields.
UK-India Year of Culture
Her Majesty Queen Elizabeth II hosted the official launch of the UK-India Year of Culture on 27 February 2017 at Buckingham Palace, with Indian Finance Minister Arun Jaitley representing Prime Minister Narendra Modi. The British Council worked with the Palace and British-Indian start-up Studio Carrom to project a peacock, India's national bird, onto the facade of Buckingham Palace.
fiveFilms4freedom
In 2015, the British Council launched fiveFilms4freedom a free, online, 10-day LGBT film festival with the British Film Institute supported by the UN Free & Equal campaign. It was the first global online LGBT film festival. The festival runs a 24-hour campaign to ask people to watch a movie and show that love is a human right. In 2016, films were viewed by over 1.5m people in 179 countries.
Shakespeare Lives
In October 2015 the British Council announced a global programme with the BBC, British Film Institute, the National Theatre, the Royal Shakespeare Company, the Shakespeare 400 consortium, the Shakespeare Birthplace Trust and Shakespeare's Globe to celebrate Shakespeare's life and work on the 400th anniversary of his death.
Selector Radio
Selector Radio is a weekly two-hour radio show, produced by Folded Wing for the British Council. Originally launched in 2001, the show is now broadcast in more than 30 countries around the world, connecting a global audience to a wide range of music the United Kingdom has to offer, covering a variety of genres from grime, indie, soul, dance and more. The show features interviews, guest DJ mixes and exclusive live sessions from some of the UK's most exciting artists. It avoids many mainstream acts, in favour of emerging talent and underground styles. It has an estimated listenership of over four million people. The show is hosted in the UK by Jamz Supernova – many countries take the English language version of the show and create a new show from the tracks and features, translating the 'links' into local language.
Cultural and educational exchange with North Korea
The British Council has been running a teacher training programme in North Korea since 2001. In July 2014 the British Council signed a Memorandum of Understanding with the Democratic People's Republic of Korea (DPRK) for cultural and educational exchange.
Other activities
Love's Labours Lost
The British Council-supported production of Love's Labours Lost in 2005 was the first performance of a Shakespeare play in Afghanistan in more than 17 years. The play was performed in the Afghan language of Dari.
Young Creative Entrepreneur Awards
The British Council Young Creative Entrepreneurs identify and support talented people from across the creative industries such as the International Young Publisher of the Year, International Young Design Entrepreneur of the Year, International Young Music Entrepreneur of the Year and British Council West Africa Arts Program ~ Creative Entrepreneurs 2018 awards.
Controversies
Expenses
In 2010, Conservative MP Mark Lancaster, the then Lord Commissioner of HM Treasury, the then Speaker of the House of Commons Michael Martin, and other MPs were involved in rows over expenses incurred on undisclosed taxpayer-funded British Council trips. The British Council's then Chief Executive, Martin Davidson, also faced press criticism for expenses claimed in apparent breach of the British Council's own internal rules for overnight stays in London.
Closure in Russia
In 2007, the Russian government accused the British Council of illegal operation by breaking Russian tax laws and ordered the organisation to close two of its offices. Many believed that the council had become the victim of a diplomatic row between the UK and Russia. In 2018, Russia expelled 23 British diplomats and closed down the British Council, citing a lack of regulation of its activities, along with the general consulate in St Petersburg. The move was reported to be retaliation against the UK's actions toward Russia over the poisoning of Sergei and Yulia Skripal.
Israel and Palestine
The British Council has been a primary partner of the Palestine Festival of Literature since the Festival's beginning in 2008. In 2009, the Israeli police, acting on a court order, closed down the venue scheduled to host the Festival's closing event since there was Palestinian Authority involvement, but the British Council stepped in and the evening was relocated to its grounds.
The British Council supports the festival, also known as PalFest. A controversial issue arose in 2012, because PalFest's website states that they endorse the "2004 Palestinian call for the academic and cultural boycott of Israel". Susanna Nicklin, the council's director of literature said in response: "The British Council is a non-political organisation, and we believe that international cultural exchange makes a powerful contribution to a more peaceful, tolerant and prosperous world. Therefore, the British Council does not support cultural or academic boycotts."
Dissident Chinese writers
In April 2012, the British Council faced a storm of protest over the exclusion of dissident Chinese writers from The London Book Fair in 2012. Critics included English PEN and journalist Nick Cohen writing in The Observer, as well as Alastair Niven, a former Literature Director of The British Council itself.
Cuts
In March 2007, the British Council announced its "intention to increase its investment in the Middle East, North Africa and Central and Southern Asia". In June 2007, MPs were told of further closures in Tel Aviv and East Jerusalem (where there had been a British Council Library since 1946). The British Council libraries in Athens and in Belgrade are also to close. Similarly in India, the British Council Libraries at Bhopal and Trivandrum were closed despite protests from library users as part of the Council's policy to "reduce its physical presence" in the country and to divert funds to mega projects in the fields of culture, education, science and research.
British Council libraries and offices have also been closed in a number of other countries judged by the British Council to be of little strategic or commercial importance, as it refocused its activities on China and the Persian Gulf area. Council offices were closed in Lesotho, Swaziland, Ecuador and provincial Länder in Germany in 2000–2001, as well as Belarus, prompting Parliamentary criticism. Subsequent promises by British Council Chair Neil Kinnock to a conference in Edinburgh that the Belarus closure would hopefully prove to be just a "temporary" withdrawal proved illusory. The British Council office in Peru also closed in September 2006 as part of a rethink of its strategy in Latin America. In Italy, the British Council closed its offices in Turin and Bologna, and reduced the size of its offices in Milan and Rome (with the closure of the library in the latter).
Charles Arnold-Baker, author of the Companion to British History said of the British Council's shift in priorities: "This whole policy is misconstrued from top to bottom. We are going somewhere where we can't succeed and neglecting our friends in Europe who wish us well. The only people who are going to read our books in Beirut or Baghdad are converts already."
The same press coverage also points out that the Alliance française and the Goethe-Institut, unlike the British Council, are both expanding and replenishing libraries Europe-wide. France opened its new library in Tel Aviv in 2007, just a few months after the British Council closed there and shut down the British Council library in West Jerusalem. In Gaza, the Institut français supports the Gaza municipal library in partnership with the local authority and a municipal twinning link between Gaza City and the French port of Dunkerque. In Oslo the British Council informs Norwegian callers that "our office is not open to the public and we do not have an enquiry service". The Goethe-Institut also has a more visible presence in Glasgow than the British Council. There is now, in contrast, only one British Council office left in Germany, in Berlin.
Accountability
Parliamentary questions about the British Council are formally referred by the UK Parliamentary Table Office to its sponsoring department, the Foreign, Commonwealth and Development Office.
The effectiveness of British Council efforts to promote higher education in China was examined in the UK by the House of Commons Select Committee on Education and Skills in a report issued in August 2007. It expressed concern that in terms of joint educational programmes involving Chinese universities, the UK lagged behind Australia, USA, Hong Kong, Canada and France. In its evidence to this committee, the British Council had argued that "UK degrees are highly valued by international students for their global recognition. International students adopt an essentially utilitarian view of higher education which is likely to increasingly involve consideration of value for money, including opting for programmes at least partly delivered offshore". As their preferred marketing 'model', the British Council gave the example of India where their UK India Education and Research Initiative is being 'championed' by British multinational oil companies such as BP and Shell, the pharmaceutical giant GSK and arms company BAE Systems.
Criticism of British Council marketing efforts in this area has also come from Scotland, where The Sunday Herald obtained documents under the Freedom of Information Act showing that the British Council's marketing co-ordinator in the USA had referred to the University of Stirling as 'The University of Sterling' (sic), and also documenting 'tensions' between Scottish Executive civil servants and the British Council in India and China over the overseas promotion of universities in Scotland, where education is a devolved responsibility. The Sunday Herald reported that these turf wars were undermining the Scottish Executive's key Fresh Talent policy.
Some of the activities of the British Council were examined in 2007/08 by the National Audit Office (NAO). The NAO's report, The British Council: Achieving Impact, concluded "that the British Council's performance is strong and valued by its customers and stakeholders". It also concluded, however, that its English classes are elitist and enjoy unfair advantages over commercial providers, and it questioned the thousands of unanswered phone calls and e-mails to British Council offices.
As part of its examination of the Foreign, Commonwealth and Development Office Annual Report, the Foreign Affairs Committee spends an hour each year examining witnesses from the British Council, but even this level of scrutiny is undermined by a Commons ruling exempting MPs from the requirement to declare overseas trips paid for by the British Council.
Two members of the Public Accounts Committee (Nigel Griffiths MP and Ian Davidson MP) were office-bearers in the British Council Associate Parliamentary Group. Nigel Griffiths MP was Vice-Chair of this British Council lobby group until stepping down as an MP.
In 2008 the British Council was called before the Public Accounts Committee (PAC) following the earlier publication of a National Audit Office report. The subsequent PAC report confirmed that Nigel Griffiths MP, Vice-Chair of the British Council Associate Parliamentary Group, was among the small number of PAC members who approved this report on the British Council despite not being recorded as present at the evidence session in June 2008 at which the British Council's Chief Executive was cross-examined. Griffiths had earlier travelled to Russia and spoken favourably of British Council activities there in January 1998, around the time that its man in St Petersburg (Stephen Kinnock) was expelled.
In April 2009 the British Council was told to clean up its act by the Information Commissioner after losing staff data that included details of employees' trade union affiliations and misstating the encryption status of the lost computer disc.
Following the accusations made against the British Council in Russia (see above) Trevor Royle, the experienced Diplomatic Editor of The Sunday Herald quoted a 'British diplomatic source' admitting: "There is a widespread assumption that The British Council is a wing of our Secret Intelligence Services, however minor. Officially it is no such thing but there are connections. Why should it be otherwise because all information is invaluable? After all, the British Council also deals with trade missions and inevitably that involves low-grade intelligence-gathering."
In 2005, along with the Alliance française, the Società Dante Alighieri, the Goethe-Institut, the Instituto Cervantes, and the Instituto Camões, the British Council shared in the Prince of Asturias Award for the outstanding achievements of Western Europe's national cultural agencies in communications and the humanities. At the time of this joint award the full extent of The British Council's closure policies in Europe was not yet public knowledge.
In literature
Royle also notes that the novel The Russia House by John le Carré (former consular official David Cornwell) opens with a reference to the British Council. The organisation's "first ever audio fair for the teaching of the English language and the spread of British culture" is "grinding to its excruciating end", and one of its officials is packing away his stuff when he is approached by an attractive Russian woman who asks him to deliver clandestinely a manuscript, which she claims is a novel, to an English publisher she says is "her friend".
It is also featured in one of the scenes in Graham Greene's The Third Man – the character Crabbin, played by Wilfrid Hyde-White in the film, worked for The British Council. In 1946, the writer George Orwell advised serious authors not to work for it as a day-job arguing that "the effort [of writing] is too much to make if one has already squandered one's energies on semi-creative work such as teaching, broadcasting or composing propaganda for bodies such as the British Council". In her autobiography, Dame Stella Rimington, the first woman head of MI5, mentions working for British Council in India prior to joining the British Intelligence Services.
The British Council (and its man on station, Goole) is referred to, frequently in a humorous way, by Lawrence Durrell in Antrobus Complete, his collection of anecdotes about a diplomat's life on foreign postings for the Foreign, Commonwealth and Development Office.
In the six Olivia Manning novels that make up The Balkan Trilogy and The Levant Trilogy, Guy Pringle is an employee of the British Council, and Council politics make up several of the plot points. The books portray Eastern Europe and the Middle East in the opening years of World War Two.
Burma
The role of the British Council in Burma in 1947 came under scrutiny with the release of classified documents to a BBC investigation by journalist Fergal Keane into the role of dissident British colonial officials in the assassination of the Burmese independence leader Aung San (father of Aung San Suu Kyi). The BBC programme quoted from a 1948 document sent by the Chief of Police in Rangoon to the British Ambassador stating the police's belief that there had been British involvement in the assassination of Aung San and his cabinet, for which one of his political opponents was hanged, and that "the go-between" had been a British Council official named in the programme.
Libya
In August 2011 a journalist from The Irish Times discovered a certificate dated 2007, issued by the British Council in Tripoli to a daughter of Libyan leader Muammar Gaddafi who had previously been said to have been killed in a US raid on Gaddafi's residence in 1986.
English and examinations
In July 2011 the Hong Kong edition of China Daily reported on the flourishing "ghost-writing" industry that critics suggest has sprung up around the British Council IELTS tests in China.
A major IELTS corruption scandal in Western Australia resulted in prosecutions in November 2011.
Connecting Classrooms
In January 2012 the press in Pakistan reported that the Federal Investigations Agency was investigating a visa scam associated with the British Council's "Connecting Classrooms" programme.
Chairs
The Council has been chaired by:
1934–37 Lord Tyrrell
1937–41 Lord Lloyd
1941–45 Sir Malcolm Robertson
1946–55 Sir Ronald Adam
1955–59 Sir David Kelly
1959–67 Lord Bridges
1968–71 Lord Fulton
1971–72 Sir Leslie Rowan
1972–76 Lord Ballantrae
1977–84 Sir Charles Troughton
1985–92 Sir David Orr
1992–98 Sir Martin Jacomb
1998–2004 Baroness Kennedy of The Shaws
2004–09 Lord Kinnock
2010–16 Sir Vernon Ellis
2016–19 Christopher Rodrigues
2019–present Stevie Spring
Trade unions
Some staff at the British Council are members of unions. UK staff are represented by the Public and Commercial Services Union. Some employees in Japan belong to the General Union.
Publications
From 1967 to 1989 the British Council published the journal Media in Education and Development.
History
Initially titled CETO news, ISSN 0574-9409, it became Educational Television International: a journal of the Centre for Educational Television Overseas, ISSN 0424-6128, in March 1967 (volume 1, issue 1). The journal changed its name again, in March 1971, to Educational Broadcasting International: a journal of the Centre for Educational Development Overseas, ISSN 0013-1970 (volume 5, issue 1). Its final name change was to Media in Education and Development, ISSN 0262-0251, in December 1981 (volume 14, issue 4). The final issue went to print in 1989 (volume 22).
British Council Partnership
English UK
List of British Council Approved Centres
British Study Centres
Annex
Locations
The British Council is organised into seven regions, with offices in each: the Americas; Asia Pacific; the European Union; the Middle East and North Africa (including Bahrain); the Indian Subcontinent; Sub-Saharan Africa; and Wider Europe.
See also
EUNIC
Teaching English as a Foreign Language (TEFL)
Cultural diplomacy
Public diplomacy
References
External links
British Council Film directory
Royal Charter of the British Council (1993).
Catalogue of the British Council Whitley Council Staff/Trade Union Side archives, held at the Modern Records Centre, University of Warwick
1934 establishments in the United Kingdom
Cultural organisations based in the United Kingdom
Cultural promotion organizations
English-language education
English as a global language
Foreign, Commonwealth and Development Office
Foreign Office during World War II
Funding bodies in the United Kingdom
Non-departmental public bodies of the United Kingdom government
British propaganda organisations
Organisations based in the London Borough of Newham
Organizations established in 1934
Stratford, London |
430503 | https://en.wikipedia.org/wiki/Werner%20Koch | Werner Koch | Werner Koch (born July 11, 1961) is a German free software developer. He is best known as the principal author of the GNU Privacy Guard (GnuPG or GPG). He was also Head of Office and German Vice-Chancellor of the Free Software Foundation Europe. He is the winner of Award for the Advancement of Free Software in 2015 for founding GnuPG.
Journalists and security professionals rely on GnuPG, and Edward Snowden used it to evade monitoring whilst he leaked classified information from the U.S. National Security Agency.
Life and work
Koch lives in Erkrath, near Düsseldorf, Germany. He began writing GNU Privacy Guard in 1997, inspired by attending a talk by Richard Stallman who made a call for someone to write a replacement for Phil Zimmermann's Pretty Good Privacy (PGP) which was subject to U.S. export restrictions. The first release of GNU Privacy Guard was in 1999 and it went on to become the basis for most of the popular email encryption programs: GPGTools, Enigmail, and Koch's own Gpg4win, the primary free encryption program for Microsoft Windows.
In 1999 Koch, via the German Unix User Group, on whose board he served, received a grant of 318,000 marks (about US$170,000) from the German Federal Ministry of Economics and Technology to make GnuPG compatible with Microsoft Windows. In 2005 he received a contract from the German government to support the development of S/MIME.
Despite GnuPG's popularity, Koch has struggled to survive financially, earning about $25,000 per year since 2001, and considered abandoning the project for a better-paying programming job. However, after Snowden's leaked documents showed the extent of NSA surveillance, Koch continued. In 2014 he held a funding drive and received $137,000 in donations from the public, and Facebook and Stripe each pledged to donate $50,000 annually to GnuPG development. Separately, in 2015 Koch was awarded a one-time grant of $60,000 from the Linux Foundation's Core Infrastructure Initiative.
References
External links
1961 births
Cypherpunks
Living people
Free software programmers
German computer programmers
German cryptographers
Modern cryptographers
People from Mettmann (district)
Privacy activists
Public-key cryptographers |
430699 | https://en.wikipedia.org/wiki/START%20I | START I | START I (Strategic Arms Reduction Treaty) was a bilateral treaty between the United States and the Soviet Union on the reduction and the limitation of strategic offensive arms. The treaty was signed on 31 July 1991 and entered into force on 5 December 1994. The treaty barred its signatories from deploying more than 6,000 nuclear warheads and a total of 1,600 intercontinental ballistic missiles (ICBMs) and bombers.
START was the largest and most complex arms control treaty in history, and its final implementation in late 2001 resulted in the removal of about 80% of all strategic nuclear weapons then in existence. Proposed by US President Ronald Reagan, it was renamed START I after negotiations began on START II.
The treaty expired on 5 December 2009.
On 8 April 2010, the replacement New START Treaty was signed in Prague by US President Barack Obama and Russian President Dmitry Medvedev. Following its ratification by the US Senate and the Federal Assembly of Russia, the treaty went into force on 26 January 2011, extending deep reductions of American and Russian strategic nuclear weapons through February 2026.
Proposal
The START proposal was first announced by US President Ronald Reagan in a commencement address at his alma mater, Eureka College, on 9 May 1982, and presented by Reagan in Geneva on 29 June 1982. He proposed a dramatic reduction in strategic forces in two phases, which he referred to as SALT III.
The first phase would reduce overall warhead counts on any missile type to 5,000, with an additional limit of 2,500 on ICBMs. Additionally, a total of 850 ICBMs would be allowed, with a limit of 110 "heavy throw" missiles like the SS-18 and additional limits on the total "throw weight" of the missiles.
The second phase introduced similar limits on heavy bombers and their warheads, as well as other strategic systems.
The US then had a commanding lead in strategic bombers. The aging B-52 force remained a credible strategic threat but, to compensate for Soviet air defense improvements in the early 1980s, was armed with AGM-86 cruise missiles beginning in 1982. The US had also begun to introduce the new B-1B Lancer quasi-stealth bomber and was secretly developing the Advanced Technology Bomber (ATB) project, which would eventually result in the B-2 Spirit stealth bomber.
The Soviet bomber force, on the other hand, posed little threat to the US, as it was tasked almost entirely with attacking US convoys in the Atlantic and land targets on the Eurasian landmass. Although the Soviets had 1,200 medium and heavy bombers, only 150 of them (Tupolev Tu-95s and Myasishchev M-4s) could reach North America, the latter only with in-flight refueling, and they faced difficult problems in penetrating US airspace. The smaller Soviet intercontinental bomber force was partly offset by the fact that US bombers had to penetrate Soviet airspace, which was much larger and more heavily defended.
That changed in 1984, when new Tu-95MS and Tu-160 bombers appeared and were equipped with the first Soviet AS-15 cruise missiles. By limiting the phasing in, it was proposed that the US would be left with a strategic advantage for a time.
As Time magazine put it, "Under Reagan's ceilings, the US would have to make considerably less of an adjustment in its strategic forces than would the Soviet Union. That feature of the proposal will almost certainly prompt the Soviets to charge that it is unfair and one-sided. No doubt some American arms-control advocates will agree, accusing the Administration of making the Kremlin an offer it cannot possibly accept—a deceptively equal-looking, deliberately nonnegotiable proposal that is part of what some suspect is the hardliners' secret agenda of sabotaging disarmament so that the US can get on with the business of rearmament." However, Time pointed out, "The Soviets' monstrous ICBMs have given them a nearly 3-to-1 advantage over the US in 'throw weight'—the cumulative power to 'throw' megatons of death and destruction at the other nation."
Costs
Three institutes ran studies of the estimated costs that the US government would have to pay to implement START I: the Congressional Budget Office (CBO), the US Senate Foreign Relations Committee (SFRC), and the Institute for Defense Analyses (IDA). The CBO estimates assumed a one-time full-implementation cost of $410 million to $1,830 million and continuing annual costs of $100 million to $390 million.
The SFRC estimated $200 million to $1,000 million for one-time costs, with total inspection costs over the 15-year period of the treaty of $1,250 million to $2,050 million.
Finally, the IDA estimated only in regards to the verification costs, which it claimed to be around $760 million.
In addition to the costs of implementing the treaty, the US also provided aid to the former Soviet republics through the Cooperative Threat Reduction Program (Nunn–Lugar Program), which added $591 million to the cost of implementing START I in the former Soviet Union, almost doubling the cost of the program for the US.
After the implementation of the treaty, the former Soviet Union's stock of nuclear weapons would fall from 12,000 to 3,500. The US would also save money, since it would no longer need to fund the same level of upkeep of, and innovation in, its own nuclear forces. The CBO estimated that this would amount to total savings of $46 billion in the first five years of the treaty and around $130 billion up to 2010, which would pay for the cost of implementing the treaty about twenty times over.
The other risk associated with START was a failure of compliance on the Russian side. The US Senate Defence Committee expressed concerns that Russia could covertly produce missiles and report false warhead numbers, and that cruise missiles would be difficult to monitor.
The Joint Chiefs of Staff assessment of those situations determined the risk of a significant violation of the treaty to be within acceptable limits. Another risk would be the ability for Russia to perform espionage during the inspection of US bases and military facilities. The risk was also determined to be an acceptable factor by the assessment.
Considering the potential savings from the implementation of START I and its relatively-low risk factor, Reagan and the US government deemed it a reasonable plan of action towards the goal of disarmament.
Negotiations
Negotiations for START I began in May 1982, but continued negotiation of the START process was delayed several times because US agreement terms were considered non-negotiable by pre-Gorbachev Soviet rulers. Reagan's introduction of the Strategic Defense Initiative (SDI) program in 1983 was viewed as a threat by the Soviets, who withdrew from setting a timetable for further negotiations. In January 1985, however, US Secretary of State George Shultz and Soviet Foreign Minister Andrei Gromyko discussed a formula for a three-part negotiation strategy that included intermediate-range forces, strategic defense, and missile defense. During the Reykjavík Summit between Reagan and Gorbachev in October 1986, negotiations towards the implementation of the START Program were accelerated and turned towards the reduction of strategic weapons after the Intermediate-Range Nuclear Forces Treaty was signed in December 1987.
However, a dramatic nuclear arms race proceeded through the 1980s; it effectively ended in 1991 with nuclear parity preserved at about 10,000 strategic warheads on each side.
Verification tools
The verification regimes in arms control treaties contain many tools that enable the parties to hold one another accountable for their actions and for violations of their treaty agreements. The START Treaty's verification provisions were the most complicated and demanding of any agreement at the time, providing twelve different types of inspection. Data exchanges and declarations between the parties became required and included exact quantities, technical characteristics, locations, movements, and the status of all offensive nuclear threats. The national technical means of verification (NTM) provision protected satellites and other information-gathering systems controlled by the verifying side, as they helped to verify adherence to international treaties. The international technical means of verification provision protected the multilateral technical systems specified in other treaties. Co-operative measures were established to facilitate verification by NTM and included displaying items in plain sight and not hiding them from detection. The new on-site inspection (OSI) and Perimeter and Portal Continuous Monitoring (PPCM) provisions helped to maintain the treaty's integrity by providing a regulatory system manned by a representative of the verifying side at all times. In addition, access to telemetry from ballistic missile flight tests was required, including exchanges of tapes, and encryption and encapsulation of that telemetry were banned for both parties.
Signing
Negotiations that led to the signing of the treaty began in May 1982. In November 1983, the Soviet Union "discontinued" communication with the US, which had deployed intermediate-range missiles in Europe. In January 1985, US Secretary of State George Shultz and Soviet Foreign Minister Andrei Gromyko negotiated a three-part plan covering strategic weapons, intermediate missiles, and missile defense. The plan received a great deal of attention at the Reykjavik Summit between Ronald Reagan and Mikhail Gorbachev and ultimately led to the signing of the Intermediate-Range Nuclear Forces Treaty in December 1987. Talks on a comprehensive strategic arms reduction continued, and the START Treaty was officially signed by US President George H. W. Bush and Soviet General Secretary Gorbachev on 31 July 1991.
Implementation
A total of 375 B-52s were flown to the Aerospace Maintenance and Regeneration Center at Davis-Monthan Air Force Base in Arizona. The bombers were stripped of all usable parts and chopped into five pieces by a 13,000-pound steel blade dropped from a crane. The guillotine sliced four times on each plane, severing the wings and leaving the fuselage in three pieces. The dissected B-52s remained in place for three months so that Russian satellites could confirm that the bombers had been destroyed, after which they were sold for scrap.
After the collapse of the Soviet Union, treaty obligations passed to twelve Soviet successor states. Of those, Turkmenistan and Uzbekistan each eliminated its one nuclear-related site, and on-site inspections there were discontinued. Inspections continued in Belarus, Kazakhstan, the Russian Federation, and Ukraine. Belarus, Kazakhstan, and Ukraine became non-nuclear weapons states under the 1968 Treaty on the Non-Proliferation of Nuclear Weapons and committed themselves to it under the Lisbon Protocol (Protocol to the Treaty Between the United States of America and the Union of Soviet Socialist Republics on the Reduction and Limitation of Strategic Offensive Arms) after they became independent nations in the wake of the dissolution of the Soviet Union.
Efficacy
Belarus, Kazakhstan, and Ukraine have disposed of all their nuclear weapons or transferred them to Russia. The US and Russia have reduced the capacity of delivery vehicles to 1,600 each, with no more than 6,000 warheads.
A report by the US State Department, "Adherence to and Compliance With Arms Control, Nonproliferation and Disarmament Agreements and Commitments," was released on 28 July 2010 and stated that Russia was not in full compliance with the treaty when it expired on 5 December 2009. The report did not specifically identify Russia's compliance issues.
One incident involving a Russian violation of the START I Treaty occurred in 1994. Arms Control and Disarmament Agency Director John Holum announced in congressional testimony that Russia had converted its SS-19 ICBM into a space-launch vehicle without notifying the appropriate parties. Russia argued that it did not have to follow all of START's reporting requirements for missiles that had been converted into space-launch vehicles. In addition to the SS-19, Russia was also reportedly using SS-25 missiles to assemble space-launch vehicles. The US concern was that these violations left it without accurate numbers and locations of Russian ICBMs. The dispute was resolved in 1995.
Expiration and renewal
START I expired on 5 December 2009, but both sides agreed to keep observing the terms of the treaty until a new agreement was reached. There were proposals to renew and expand the treaty, supported by US President Barack Obama. Sergei Rogov, director of the Institute of the U.S. and Canada, said: "Obama supports sharp reductions in nuclear arsenals and I believe that Russia and the U.S. may sign in the summer or fall of 2009 a new treaty that would replace START-1." He added that a new deal would happen only if Washington abandoned plans to place elements of a missile shield in Central Europe. Russia expressed willingness "to make new steps in the sphere of disarmament" but said that it was waiting for the US to abandon attempts to "surround Russia with a missile defense ring", in reference to the placement of ten interceptor missiles in Poland and an accompanying radar in the Czech Republic.
Russian President Dmitri Medvedev said the day after the US elections, in his first State of the Nation address, that Russia would move to deploy short-range Iskander missile systems in the western exclave of Kaliningrad "to neutralize if necessary the anti-ballistic missile system in Europe." Russia insisted that any movement towards New START take the form of a legally binding document that set lower ceilings on the number of nuclear warheads and their delivery vehicles.
On 17 March 2009, Medvedev signaled that Russia would begin "large-scale" rearmament and renewal of Russia's nuclear arsenal. He accused NATO of pushing ahead with expansion near Russian borders and ordered rearmament to commence in 2011 with increased army, naval, and nuclear capabilities. Also, the head of Russia's strategic missile forces, Nikolai Solovtsov, told news agencies that Russia would start deploying its next-generation RS-24 missiles after the 5 December expiry of START I. Russia hoped for a new treaty. The increased tensions came despite the warming of relations between the US and Russia after Obama took office.
On 4 May 2009, the US and Russia began the process of renegotiating START and of counting both nuclear warheads and their delivery vehicles in making a new agreement. While setting aside problematic issues between the two countries, both sides agreed to make further cuts in the number of deployed warheads, to around 1,000 to 1,500 each. The US said that it was open to a Russian proposal to use a radar in Azerbaijan, rather than in Eastern Europe, for the proposed missile system. The George W. Bush administration had insisted that the Eastern Europe defense system was intended as a deterrent for Iran, but Russia feared that it could be used against itself. The flexibility shown by both sides in making compromises was expected to lead to a new phase of arms reduction.
A "Joint understanding for a follow-on agreement to START-1" was signed by Obama and Medvedev in Moscow on 6 July 2009 to reduce the number of deployed warheads on each side to 1,500–1,675 on 500–1,100 delivery systems. A new treaty was to be signed before START-1 expired in December 2009, with reductions to be achieved within seven years. After many months of negotiations, Obama and Medvedev signed the successor treaty, Measures to Further Reduction and Limitation of Strategic Offensive Arms, in Prague, Czech Republic, on 8 April 2010.
New START Treaty
The New START Treaty imposed still lower limits on the United States and Russia, requiring significant further reductions in strategic arms within seven years of its entry into force. Organized into three tiers, the new treaty consists of the treaty itself, a protocol that contains additional rights and obligations regarding the treaty provisions, and technical annexes to the protocol.
The limits were based on a stringent analysis conducted by Department of Defense planners in support of the 2010 Nuclear Posture Review. The aggregate limits allow 1,550 nuclear warheads, counting warheads on deployed intercontinental ballistic missiles (ICBMs), warheads on deployed submarine-launched ballistic missiles (SLBMs), and each deployed heavy bomber equipped for nuclear armaments. That is 74% below the limit set in the 1991 treaty and 30% below the limit of the 2002 Treaty of Moscow. Both parties are also limited to a combined total of 800 deployed and non-deployed ICBM launchers, SLBM launchers, and heavy bombers equipped for nuclear armaments. A separate limit of 700 applies to deployed ICBMs, deployed SLBMs, and deployed heavy bombers equipped for nuclear armaments, less than half the corresponding strategic nuclear delivery vehicle limit of the previous treaty. Although the new restrictions have been set, the treaty contains no limitations on the testing, development, or deployment of current or planned US missile defense programs or long-range conventional strike capabilities.
The duration of the new treaty is ten years, and it can be extended for periods of no more than five years at a time. It includes a standard withdrawal clause like most other arms control agreements.
Memorandum of Understanding data
See also
Strategic Arms Limitation Talks
START II
START III
RS-24
New START
References and notes
Further reading
Polen, Stuart. "START I: A Retrospective." Illini Journal of International Security 3.1 (2017): 21-36 online.
Tachibana, Seiitsu. "Bush Administration’s Nuclear Weapons Policy: New Obstacles to Nuclear Disarmament." Hiroshima Peace Science 24 (2002): 105-133. online
Woolf, Amy F. Nuclear Arms Control: The Strategic Offensive Reductions Treaty (DIANE Publishing, 2010). online
External links
START1 treaty text, from US State Department
Engineer Memoirs - Lieutenant General Edward L. Rowny, ambassador for the Strategic Arms Limitation Talks (START)
Atomwaffen A-Z Glossareintrag zu START-I-Vertrag
1991 in the Soviet Union
1991 in the United States
Arms control treaties
Cold War treaties
History of the Soviet Union
Nuclear technology treaties
Nuclear weapons governance
Perestroika
Presidency of George H. W. Bush
Mikhail Gorbachev
Soviet Union–United States treaties
Treaties concluded in 1991
Treaties entered into force in 1994
July 1991 events
Treaties of Belarus
Treaties of Kazakhstan
Treaties of Turkmenistan
Treaties of Ukraine
Treaties of Uzbekistan |
432989 | https://en.wikipedia.org/wiki/Needham%E2%80%93Schroeder%20protocol | Needham–Schroeder protocol | The Needham–Schroeder protocol is one of the two key transport protocols intended for use over an insecure network, both proposed by Roger Needham and Michael Schroeder. These are:
The Needham–Schroeder Symmetric Key Protocol, based on a symmetric encryption algorithm. It forms the basis for the Kerberos protocol. This protocol aims to establish a session key between two parties on a network, typically to protect further communication.
The Needham–Schroeder Public-Key Protocol, based on public-key cryptography. This protocol is intended to provide mutual authentication between two parties communicating on a network, but in its proposed form is insecure.
The symmetric protocol
Here, Alice (A) initiates the communication to Bob (B). S is a server trusted by both parties. In the communication:
A and B are the identities of Alice and Bob respectively
K_AS is a symmetric key known only to A and S
K_BS is a symmetric key known only to B and S
N_A and N_B are nonces generated by A and B respectively
K_AB is a symmetric, generated key, which will be the session key of the session between A and B
The protocol can be specified as follows in security protocol notation, where {X}K denotes the message X encrypted under the key K:
1. A → S: A, B, N_A
2. S → A: {N_A, K_AB, B, {K_AB, A}K_BS}K_AS
3. A → B: {K_AB, A}K_BS
4. B → A: {N_B}K_AB
5. A → B: {N_B - 1}K_AB
Alice sends a message to the server identifying herself and Bob, telling the server she wants to communicate with Bob.
The server generates K_AB and sends back to Alice a copy encrypted under K_BS for Alice to forward to Bob and also a copy for Alice. Since Alice may be requesting keys for several different people, the nonce N_A assures Alice that the message is fresh and that the server is replying to that particular message, and the inclusion of Bob's name tells Alice with whom she is to share this key.
Alice forwards the key to Bob, who can decrypt it with the key he shares with the server, thus authenticating the data.
Bob sends Alice the nonce N_B encrypted under K_AB to show that he has the key.
Alice performs a simple operation on the nonce (here, subtracting 1), re-encrypts it and sends it back, verifying that she is still alive and that she holds the key.
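The run above can be traced in code. The following Python sketch is a minimal model using a toy symbolic cipher (a tagged tuple standing in for real encryption), so it illustrates the message structure and the freshness checks rather than actual cryptography; all names and values are illustrative.

import os

def enc(key, *payload):
    # Toy symbolic encryption: a ciphertext is just a tagged tuple.
    # A real implementation would use an authenticated cipher instead.
    return ("enc", key, payload)

def dec(key, ciphertext):
    tag, k, payload = ciphertext
    assert tag == "enc" and k == key, "wrong key"
    return payload

# Long-term keys shared with the server S
K_AS, K_BS = os.urandom(16), os.urandom(16)

# 1. A -> S: A, B, N_A
N_A = os.urandom(8)

# 2. S -> A: {N_A, K_AB, B, {K_AB, A}K_BS}K_AS
K_AB = os.urandom(16)
ticket = enc(K_BS, K_AB, "A")
msg2 = enc(K_AS, N_A, K_AB, "B", ticket)

# Alice checks freshness (her nonce came back) and the peer name
n_a, k_ab, peer, ticket_for_b = dec(K_AS, msg2)
assert n_a == N_A and peer == "B"

# 3. A -> B: {K_AB, A}K_BS  (Bob recovers the session key)
k_ab_b, claimed_sender = dec(K_BS, ticket_for_b)

# 4. B -> A: {N_B}K_AB and 5. A -> B: {N_B - 1}K_AB
N_B = int.from_bytes(os.urandom(8), "big")
challenge = enc(k_ab_b, N_B)
(n_b,) = dec(k_ab, challenge)
response = enc(k_ab, n_b - 1)
assert dec(k_ab_b, response) == (N_B - 1,)
print("handshake complete; session key established")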
Attacks on the protocol
The protocol is vulnerable to a replay attack (as identified by Denning and Sacco). If an attacker uses an older, compromised value for K_AB, he can then replay the message {K_AB, A}K_BS to Bob, who will accept it, being unable to tell that the key is not fresh.
Fixing the attack
This flaw is fixed in the Kerberos protocol by the inclusion of a timestamp. It can also be fixed with the use of nonces as described below. At the beginning of the protocol:
Alice sends Bob a request: A → B: A
Bob responds with a nonce encrypted under his key with the server: B → A: {A, N_B'}K_BS
Alice sends a message to the server identifying herself and Bob, telling the server she wants to communicate with Bob: A → S: A, B, N_A, {A, N_B'}K_BS
Note the inclusion of the nonce N_B'.
The protocol then continues through the final three steps as described in the original protocol above, except that the server now binds Bob's nonce into the ticket, so message 3 becomes A → B: {K_AB, A, N_B'}K_BS. Note that N_B' is a different nonce from N_B. The inclusion of this new nonce prevents the replaying of a compromised version of {K_AB, A}K_BS, since such a message would need to be of the form {K_AB, A, N_B'}K_BS, which the attacker cannot forge since she does not have K_BS.
The public-key protocol
This assumes the use of a public-key encryption algorithm.
Here, Alice (A) and Bob (B) use a trusted server (S) to distribute public keys on request. These keys are:
K_PA and K_SA, respectively the public and private halves of an encryption key-pair belonging to A (S here stands for "secret key")
K_PB and K_SB, similar, belonging to B
K_PS and K_SS, similar, belonging to S. (Note that this key-pair will be used for digital signatures, i.e., K_SS is used for signing a message and K_PS is used for verification. K_PS must be known to A and B before the protocol starts.)
The protocol runs as follows:
1. A → S: A, B (A requests B's public keys from S)
2. S → A: {K_PB, B}K_SS (S responds with B's public key alongside B's identity, signed by the server for authentication purposes)
3. A → B: {N_A, A}K_PB (A chooses a random N_A and sends it to B)
4. B → S: B, A (B now knows A wants to communicate, so B requests A's public keys)
5. S → B: {K_PA, A}K_SS (the server responds)
6. B → A: {N_A, N_B}K_PA (B chooses a random N_B and sends it to A along with N_A to prove the ability to decrypt with K_SB)
7. A → B: {N_B}K_PB (A confirms N_B to B, to prove the ability to decrypt with K_SA)
At the end of the protocol, A and B know each other's identities and both know N_A and N_B. These nonces are not known to eavesdroppers.
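An honest run of the message exchange can be sketched as follows, again with a toy stand-in for public-key encryption; the server steps (1, 2, 4 and 5) are omitted since they only distribute keys, and everything here is illustrative rather than a real implementation.

import os

def keypair():
    # Toy key-pair: the public and private halves share a random identifier.
    kid = os.urandom(8)
    return ("pub", kid), ("priv", kid)

def penc(pub, *payload):
    return ("penc", pub[1], payload)  # "encrypt" under the public half

def pdec(priv, ciphertext):
    tag, kid, payload = ciphertext
    assert tag == "penc" and kid == priv[1], "wrong private key"
    return payload

KPA, KSA = keypair()  # Alice's key-pair
KPB, KSB = keypair()  # Bob's key-pair

# 3. A -> B: {N_A, A}K_PB
N_A = os.urandom(8)
msg3 = penc(KPB, N_A, "A")

# 6. B -> A: {N_A, N_B}K_PA
n_a, sender = pdec(KSB, msg3)
N_B = os.urandom(8)
msg6 = penc(KPA, n_a, N_B)

# 7. A -> B: {N_B}K_PB
got_na, got_nb = pdec(KSA, msg6)
assert got_na == N_A  # Alice checks that her own nonce came back
msg7 = penc(KPB, got_nb)

(final_nb,) = pdec(KSB, msg7)
assert final_nb == N_B
print("mutual authentication completed (in the absence of an attacker)")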
An attack on the protocol
This protocol is vulnerable to a man-in-the-middle attack. If an impostor I can persuade A to initiate a session with them, they can relay the messages to B and convince B that he is communicating with A.
Ignoring the traffic to and from S, which is unchanged, the attack runs as follows:
1. A → I: {N_A, A}K_PI (A sends N_A to I, who decrypts the message with K_SI)
2. I → B: {N_A, A}K_PB (I relays the message to B, pretending that A is communicating with B)
3. B → A: {N_A, N_B}K_PA (B sends N_B)
4. I → A: {N_A, N_B}K_PA (I relays it to A)
5. A → I: {N_B}K_PI (A decrypts N_B and confirms it to I, who learns it)
6. I → B: {N_B}K_PB (I re-encrypts N_B, and convinces B that she has decrypted it)
At the end of the attack, B falsely believes that A is communicating with him, and that N_A and N_B are known only to A and B.
The following example illustrates the attack. Alice (A) would like to contact her bank (B). We assume that an impostor (I) successfully convinces A that they are the bank. As a consequence, A uses the public key of I instead of the public key of B to encrypt the messages she intends to send to her bank. A therefore sends I her nonce encrypted with the public key of I. I decrypts the message using their private key and contacts B, sending it the nonce of A encrypted with the public key of B. B has no way to know that this message was actually sent by I. B responds with its own nonce and encrypts the message with the public key of A. Since I is not in possession of the private key of A, they have to relay the message to A without knowing its content. A decrypts the message with her private key and responds with the nonce of B encrypted with the public key of I. I decrypts the message using their private key and is now in possession of the nonces of both A and B. Therefore, they can now impersonate the bank and the client respectively.
Fixing the man-in-the-middle attack
The attack was first described in a 1995 paper by Gavin Lowe.
The paper also describes a fixed version of the scheme, referred to as the Needham–Schroeder–Lowe protocol. The fix involves the modification of message six to include the responder's identity; that is, we replace:
B → A: {N_A, N_B}K_PA
with the fixed version:
B → A: {N_A, N_B, B}K_PA
and the intruder cannot successfully replay the message, because A is expecting a message containing the identity of I, whereas the message will contain the identity of B.
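A small simulation makes the repair concrete: with the responder's identity bound into message six, the relayed reply arrives carrying B's name while Alice expects I's, so she aborts. The toy cipher is the same symbolic stand-in used in the sketches above, and all names are illustrative.

import os

def keypair():
    kid = os.urandom(8)
    return ("pub", kid), ("priv", kid)

def penc(pub, *payload):
    return ("penc", pub[1], payload)  # toy cipher, see earlier sketches

def pdec(priv, ciphertext):
    tag, kid, payload = ciphertext
    assert tag == "penc" and kid == priv[1]
    return payload

KPA, KSA = keypair()  # Alice
KPB, KSB = keypair()  # Bob
KPI, KSI = keypair()  # the impostor I

# Alice believes she is talking to I, so she encrypts for I:
N_A = os.urandom(8)
msg_to_I = penc(KPI, N_A, "A")

# I decrypts and relays to B, pretending to be A:
n_a, _ = pdec(KSI, msg_to_I)
relayed = penc(KPB, n_a, "A")

# Bob answers with the fixed message six: {N_A, N_B, B}K_PA
n_a_b, _ = pdec(KSB, relayed)
N_B = os.urandom(8)
msg6 = penc(KPA, n_a_b, N_B, "B")

# I cannot decrypt msg6 (it lacks K_SA) and must relay it unchanged.
# Alice's check under the Lowe fix:
got_na, got_nb, responder = pdec(KSA, msg6)
expected_peer = "I"  # whom Alice believes she is talking to
if responder != expected_peer:
    print("identity mismatch:", responder, "- attack detected, abort")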
See also
Kerberos
Otway–Rees protocol
Yahalom
Wide Mouth Frog protocol
Neuman–Stubblebine protocol
References
External links
Explanation of man-in-the-middle attack by Computerphile.
Authentication protocols
Key transport protocols
Symmetric-key cryptography
Computer access control protocols |
433034 | https://en.wikipedia.org/wiki/Wide%20Mouth%20Frog%20protocol | Wide Mouth Frog protocol | The Wide-Mouth Frog protocol is a computer network authentication protocol designed for use on insecure networks (the Internet for example). It allows individuals communicating over a network to prove their identity to each other while also preventing eavesdropping or replay attacks, and provides for detection of modification and the prevention of unauthorized reading. This can be proven using Degano.
The protocol was first described under the name "The Wide-mouthed-frog Protocol" in the paper "A Logic of Authentication" (1990), which introduced Burrows–Abadi–Needham logic, and in which it was an "unpublished protocol ... proposed by" coauthor Michael Burrows. The paper gives no rationale for the protocol's whimsical name.
The protocol can be specified as follows in security protocol notation, where {X}K denotes the message X encrypted under the key K:
A → S: A, {T_A, B, K_AB}K_AS
S → B: {T_S, A, K_AB}K_BS
A, B, and S are the identities of Alice, Bob, and the trusted server respectively
T_A and T_S are timestamps generated by A and S respectively
K_AS is a symmetric key known only to A and S
K_AB is a generated symmetric key, which will be the session key of the session between A and B
K_BS is a symmetric key known only to B and S
Note that to prevent active attacks, some form of authenticated encryption (or message authentication) must be used.
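A minimal sketch of the two messages, with illustrative timestamp handling, is shown below. The tagged-tuple "cipher" is a symbolic stand-in (its key check loosely plays the role of the message authentication noted above), and the five-second validity window is an assumed value.

import os, time

def enc(key, *payload):
    return ("enc", key, payload)  # toy symbolic cipher, not real crypto

def dec(key, ciphertext):
    # The key check loosely stands in for message authentication.
    tag, k, payload = ciphertext
    assert tag == "enc" and k == key
    return payload

WINDOW = 5.0  # assumed timestamp validity period, in seconds

K_AS, K_BS = os.urandom(16), os.urandom(16)
K_AB = os.urandom(16)  # chosen entirely by A, as the protocol requires

# 1. A -> S: A, {T_A, B, K_AB}K_AS
msg1 = ("A", enc(K_AS, time.time(), "B", K_AB))

# S checks A's timestamp, then re-stamps and forwards to B:
sender, body = msg1
t_a, dest, k_ab = dec(K_AS, body)
assert abs(time.time() - t_a) < WINDOW, "stale message to server"

# 2. S -> B: {T_S, A, K_AB}K_BS
msg2 = enc(K_BS, time.time(), sender, k_ab)

t_s, peer, session_key = dec(K_BS, msg2)
assert abs(time.time() - t_s) < WINDOW, "stale message to Bob"
print("B accepts a session key from", peer)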
The protocol has several problems:
A global clock is required.
The server S has access to all keys.
The value of the session key is completely determined by A, who must be competent enough to generate good keys.
An attacker can replay messages within the period when the timestamp is still valid.
A is not assured that B exists.
The protocol is stateful. This is usually undesired because it requires more functionality and capability from the server. For example, S must be able to deal with situations in which B is unavailable.
See also
Alice and Bob
Kerberos (protocol)
Needham–Schroeder protocol
Neuman–Stubblebine protocol
Otway–Rees protocol
Yahalom (protocol)
References
Computer access control protocols |
433890 | https://en.wikipedia.org/wiki/Nordic%20Mobile%20Telephone | Nordic Mobile Telephone | NMT (Nordisk MobilTelefoni or Nordiska MobilTelefoni-gruppen, Nordic Mobile Telephony in English) is an automatic cellular phone system specified by Nordic telecommunications administrations (PTTs) and opened for service on 1 October 1981 as a response to the increasing congestion and heavy requirements of the manual mobile phone networks: ARP (150 MHz) in Finland, MTD (450 MHz) in Sweden and Denmark, and OLT in Norway.
NMT is based on analogue technology (first generation or 1G) and two variants exist: NMT-450 and NMT-900. The numbers indicate the frequency bands used. NMT-900 was introduced in 1986 and carries more channels than the older NMT-450 network.
The NMT specifications were free and open, allowing many companies to produce NMT hardware and pushing prices down. The success of NMT was important to Nokia (then Mobira) and Ericsson. The first Danish implementers were Storno (then owned by General Electric, later taken over by Motorola) and AP (later taken over by Philips). Initial NMT phones were designed to mount in the trunk of a car, with a keyboard/display unit at the driver's seat. "Portable" versions existed, though they were still bulky and battery life was a big problem. Later models, such as Benefon's, were far smaller and weighed only about 100 grams.
History
The NMT network was opened in Sweden and Norway in 1981, and in Denmark and Finland in 1982. Iceland joined in 1986. However, Ericsson introduced the first commercial service in Saudi Arabia on 1 September 1981 to 1,200 users, as a pilot test project, one month before they did the same in Sweden. By 1985 the network had grown to 110,000 subscribers in Scandinavia and Finland, 63,300 in Norway alone, which made it the world's largest mobile network at the time.
The NMT network has mainly been used in the Nordic countries, the Baltic countries, Switzerland, France, the Netherlands, Hungary, Poland, Bulgaria, Romania, the Czech Republic, Slovakia, Slovenia, Serbia, Turkey, Croatia, Bosnia, Russia, Ukraine and in Asia. The introduction of digital mobile networks such as GSM reduced the popularity of NMT, and the Nordic countries have since shut down their NMT networks. In Estonia the NMT network was shut down in December 2000. In Finland TeliaSonera's NMT network was shut down on 31 December 2002, in Norway on 31 December 2004, and in Sweden on 31 December 2007. The NMT-450 network, however, has one big advantage over GSM: range. This advantage is valuable in large but sparsely populated countries such as Iceland, where the GSM network reaches 98% of the population but only a small proportion of the land area. The NMT system reached most of the country and much of the surrounding waters, so the network was popular with fishermen and those travelling in the vast, empty interior. In Iceland the NMT service ended on 1 September 2010, when Síminn closed down its NMT network.
In Denmark, Norway and Sweden the NMT-450 frequencies were auctioned off to the Swedish company Nordisk Mobiltelefon, which later became Ice.net and was renamed Net 1, and which built a digital network using CDMA 450. During 2015, that network was migrated to 4G.
Permission for TeliaSonera to continue operation of the NMT-450 frequencies ended on 31 December 2007.
In Russia Uralwestcom shut down their NMT network on 1 September 2006 and Sibirtelecom on 10 January 2008.
Skylink, a subsidiary of TELE2 Russia, still operated an NMT-450 network as of 2016 in Arkhangelsk Oblast and Perm Krai. These networks serve sparsely populated areas where long distances must be covered. Its license for the provision of services ran until 2021.
Technology
The cell sizes in an NMT network range from 2 km to 30 km. With smaller ranges the network can serve more simultaneous callers; in a city, for example, the range can be kept short for better service. NMT used full duplex transmission, allowing simultaneous receiving and transmission of voice. Car phone versions of NMT used transmission power of up to 15 watts (NMT-450) and 6 watts (NMT-900), handsets up to 1 watt. NMT had automatic switching (dialing) and handover of the call built into the standard from the beginning, which was not the case with most preceding car phone services, such as the Finnish ARP. Additionally, the NMT standard specified billing as well as national and international roaming.
Signaling
The NMT voice channel is transmitted with FM (frequency modulation), and NMT signaling transfer speeds vary between 600 and 1,200 bits per second, using FFSK (Fast Frequency Shift Keying) modulation. Signaling between the base station and the mobile station was implemented using the same RF channel that was used for audio, using the 1,200 bit/s FFSK modem. This caused the periodic short noise bursts, e.g. during handover, that were uniquely characteristic of the NMT sound.
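A phase-continuous FSK burst of the kind described can be approximated in a few lines of Python with NumPy. The 1,200/1,800 Hz tone pair below is an illustrative choice for a 1,200 bit/s FFSK-style modem (it gives the minimum-shift index h = 0.5), not a quotation of the NMT specification.

import numpy as np

FS = 48_000                       # sample rate, Hz
BAUD = 1_200                      # bit rate, bits per second
F_MARK, F_SPACE = 1_200, 1_800    # illustrative FFSK tone pair

def ffsk(bits):
    # Phase-continuous FSK: integrate the instantaneous frequency so the
    # phase never jumps at bit boundaries (the "fast"/minimum-shift property).
    samples_per_bit = FS // BAUD
    freqs = np.repeat([F_MARK if b else F_SPACE for b in bits],
                      samples_per_bit).astype(float)
    phase = 2 * np.pi * np.cumsum(freqs) / FS
    return np.sin(phase)

burst = ffsk([1, 0, 1, 1, 0, 0, 1, 0])
print(len(burst), "samples,", 1000 * len(burst) / FS, "ms")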
Security
A disadvantage of the original NMT specification is that voice traffic was not encrypted, so it was possible to listen in on calls using e.g. a scanner. As a result, some scanners have had the NMT bands blocked so they could not be accessed. Later versions of the NMT specifications defined optional analog scrambling based on two-band audio frequency inversion. If both the base station and the mobile station supported scrambling, they could agree to use it when initiating a phone call. Also, if two users had mobile stations supporting scrambling, they could turn it on during a conversation even if the base stations did not support it; in this case, audio would be scrambled all the way between the two mobile stations. While the scrambling method was not nearly as strong as the encryption of current digital phones, such as GSM or CDMA, it did prevent casual listening with scanners. Scrambling is defined in NMT Doc 450-1: System Description (1999-03-23) and in NMT Doc 450-3 and 900-3: Technical Specification for the Mobile Station (1995-10-04), Annex 26 v.1.1: Mobile Station with Speech Scrambling – Split Inversion Method (Optional) (1998-01-27).
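The general idea of audio frequency inversion can be demonstrated with plain single-band inversion, a simpler relative of the split-inversion method cited above: multiplying sampled audio by an alternating +1/-1 sequence mirrors the spectrum within the audio band, and applying the same operation again restores the original. The sketch below is illustrative only and does not reproduce the NMT Doc 450 parameters.

import numpy as np

FS = 8_000  # sample rate, Hz

def invert(audio):
    # Multiplying by (-1)^n shifts the spectrum by FS/2, mapping a
    # component at f to FS/2 - f: full-band frequency inversion.
    sign = np.where(np.arange(len(audio)) % 2 == 0, 1.0, -1.0)
    return audio * sign

t = np.arange(FS) / FS
tone = np.sin(2 * np.pi * 500 * t)  # 500 Hz test tone, 1 second long

scrambled = invert(tone)      # energy now sits at 3,500 Hz
restored = invert(scrambled)  # inversion is its own inverse

peak = np.argmax(np.abs(np.fft.rfft(scrambled)))  # 1 Hz per FFT bin here
print("scrambled peak at", peak, "Hz; restored matches:",
      bool(np.allclose(restored, tone)))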
Data transfer
NMT also supported a simple but robust integrated data transfer mode called DMS (Data and Messaging Service) or NMT-Text, which used the network's signaling channel for data transfer. Using DMS, text messaging was also possible between two NMT handsets before SMS service started in GSM, but this feature was never commercially available except in Russian, Polish and Bulgarian NMT networks. Another data transfer method was called NMT Mobidigi with transfer speeds of 380 bits per second. It required external equipment.
List of NMT operators
References
First generation mobile telecommunications
Products introduced in 1981
Mobile radio telephone systems |
435217 | https://en.wikipedia.org/wiki/Enigmail | Enigmail | Enigmail is a data encryption and decryption extension for Mozilla Thunderbird and the Postbox that provides OpenPGP public key e-mail encryption and signing. Enigmail works under Microsoft Windows, Unix-like, and Mac OS X operating systems. Enigmail can operate with other mail clients compatible with PGP/MIME and inline PGP such as: Microsoft Outlook with Gpg4win package installed, Gnome Evolution, KMail, Claws Mail, Gnus, Mutt. Its cryptographic functionality is handled by GNU Privacy Guard.
In their default configuration, Thunderbird and SeaMonkey provide e-mail encryption and signing using S/MIME, which relies on X.509 keys provided by a centralised certificate authority. Enigmail adds an alternative mechanism where cooperating users can instead use keys provided by a web of trust, which relies on multiple users to endorse the authenticity of the sender's and recipient's credentials. In principle this enhances security, since it does not rely on a centralised entity which might be compromised by security failures or engage in malpractice due to commercial interests or pressure from the jurisdiction in which it resides.
Enigmail was first released in 2001 by Ramalingam Saravanan, and since 2003 maintained by Patrick Brunschwig. Both Enigmail and GNU Privacy Guard are free, open-source software. Enigmail with Thunderbird is now the most popular PGP setup.
Enigmail announced its support for the new "pretty Easy privacy" (p≡p) encryption scheme in a joint Thunderbird extension to be released in December 2015. As of June 2016, the FAQ noted that it would be available in Q3 2016.
Enigmail also supports Autocrypt exchange of cryptographic keys since version 2.0.
In October 2019, the developers of Thunderbird announced built-in support for encryption and signing based on OpenPGP in Thunderbird 78 to replace the Enigmail add-on. The background is a change in the code base of Thunderbird that removes support for legacy add-ons. Since this would require a rewrite of Enigmail from scratch, Patrick Brunschwig instead supports the Thunderbird team in a native implementation in Thunderbird. Enigmail will be maintained for Thunderbird 68 until 6 months after the release of Thunderbird 78. Enigmail's support for Postbox is unaffected.
See also
GNU Privacy Guard
OpenPGP
References
External links
Cryptographic software
Thunderbird WebExtensions
OpenPGP
Free email software
MacOS security software
Windows security software
Unix security software
MacOS Internet software
Windows Internet software
Unix Internet software
Cross-platform free software |
435975 | https://en.wikipedia.org/wiki/TC | TC | TC, T.C., Tc, Tc, tc, tC, or .tc may refer to:
Arts and entertainment
Film and television
Theodore "T.C." Calvin, a character on the TV series Magnum, P.I. and its reboot
Tom Caron, American television host for New England Sports Network
Top Cat, an animated sitcom named after the protagonist
BBC Television Centre, a studio and office complex whose name is abbreviated to TC, and whose studios are numbered TC0-TC12
Music
TC Smith, American singer, for TCR
Tom Constanten (born 1944), American musician from the Grateful Dead
Top Combine, a Mandopop boy band
TC (musician), a British drum and bass producer and DJ
Games
Transcendental Chess
X3: Terran Conflict, a PC game
T.C., an era designation in the Xenosaga game series
Organizations
TC Electronic, a Danish manufacturer of studio equipment and guitar effects
Air Tanzania (IATA code TC)
Teachers College, Columbia University, a graduate school of education in New York City
Telecom Cambodia, a telecom company in Cambodia
Tierra Comunera, a Spanish political party
Thompson Creek Metals Company Inc (NYSE: TC), a diversified mining company
Transport Canada, a Canadian federal government department
Trinity College (Connecticut), an American liberal arts college
Tomball College, now Lone Star College-Tomball
People
T. C. Brister (1907–1976), Louisiana state legislator
TC Clements, Michigan state representative
Tony Currie (footballer) (born 1950), English footballer
Places
The Republic of Turkey, from the Turkish name, Türkiye Cumhuriyeti
Traverse City, Michigan, US
Turks and Caicos Islands (ISO 3166-1 country code TC)
Science and technology
Biology and medicine
Testicular cancer
Therapeutic community, a treatment program for addictions, personality disorder or other mental problems
Cytotoxic T-cells
Total cholesterol, a measure of the total amount of cholesterol in the blood, not differentiated by density fraction (LDL, HDL, etc.)
Chemistry and physics
Technetium (symbol Tc), a chemical element
Temperature coefficient
Critical temperature (Tc)
Convective temperature (Tc)
Curie temperature (Tc)
Teracoulomb, an SI unit for electric charge equal to 10^12 coulombs
Tesla coil, a category of high-voltage discharge coils
Tonnes of Carbon (tC)
Computing
Turing complete
.tc, the Internet country code top-level domain (ccTLD) for Turks and Caicos Islands
TC (complexity), a complexity class
Tc (Linux), a command-line utility used to configure network traffic control in the Linux kernel
Take Command (command line interpreter), a command line interpreter by JP Software
Telecine (copying), a form of digital transfer using a Telecine machine
Teleconference
TrueCrypt, a disk encryption software
Trusted Computing, a scheme for adding additional controls on what computers may or may not do into hardware or software
Transportation
TC, a Mazda piston engine
Chrysler TC by Maserati, an automobile sold by Chrysler from 1989-1991
Scion tC, a touring coupe made by Scion that's based on the Toyota Avensis
Traction control system, an anti-slip automobile subsystem
Turn coordinator, an aircraft instrument
Transit center, a major transport hub for trains and buses
Other uses in science and technology
Timecode, in video and audio production
Tropical cyclone, also known as a hurricane, tropical storm, tropical depression, or typhoon
Turbocharger, turbine-driven device that increases an internal combustion engine's power output
Sports
TC (mascot), the mascot for the Minnesota Twins baseball team
TC 2000, a series of races for touring cars which is held each year in Argentina
TC Panther, the mascot of the Northern Iowa Panthers
Total chances, a baseball fielding term
Turismo Carretera, a popular stock car category in Argentina
Other uses
Total Communication, a form of sign language
Total cost, in economics and accounting
Traffic collision, an automobile accident
Trinity Cross, the highest national award in Trinidad and Tobago
Type certificate, an aircraft document
Traditional Chinese characters, Chinese characters in any character set that does not contain newly created characters or character substitutions performed after 1946
United States Tax Court, a U.S. Article I federal trial court |
436166 | https://en.wikipedia.org/wiki/Variable-frequency%20oscillator | Variable-frequency oscillator | A variable frequency oscillator (VFO) in electronics is an oscillator whose frequency can be tuned (i.e., varied) over some range. It is a necessary component in any tunable radio transmitter or receiver that works by the superheterodyne principle, and controls the frequency to which the apparatus is tuned.
Purpose
In a simple superheterodyne receiver, the incoming radio frequency signal (at frequency f_RF) from the antenna is mixed with the VFO output signal tuned to f_LO, producing an intermediate frequency (IF) signal that can be processed downstream to extract the modulated information. Depending on the receiver design, the IF signal frequency f_IF is chosen to be either the sum of the two frequencies at the mixer inputs (up-conversion), f_IF = f_LO + f_RF, or more commonly, the difference frequency (down-conversion), f_IF = |f_LO - f_RF|.
In addition to the desired IF signal and its unwanted image (the mixing product of the opposite sign above), the mixer output will also contain the two original frequencies, f_RF and f_LO, and various harmonic combinations of the input signals. These undesired signals are rejected by the IF filter. If a double balanced mixer is employed, the input signals appearing at the mixer outputs are greatly attenuated, reducing the required complexity of the IF filter.
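The mixer products can be tabulated directly. The short sketch below lists low-order combinations for an assumed down-converting receiver with a 455 kHz IF; the frequencies and the product order are illustrative.

def mixer_products(f_rf, f_lo, order=2):
    # An ideal multiplying mixer produces |m*f_lo +/- n*f_rf| for small
    # integers m and n; a double balanced mixer suppresses the terms with
    # m = 0 or n = 0 (the original inputs), easing the IF filter's job.
    products = set()
    for m in range(order + 1):
        for n in range(order + 1):
            if m == n == 0:
                continue
            products.add(abs(m * f_lo + n * f_rf))
            products.add(abs(m * f_lo - n * f_rf))
    return sorted(products)

f_rf, f_lo = 10.0e6, 10.455e6  # illustrative: 10 MHz signal, 455 kHz IF
print("IF (difference):", abs(f_lo - f_rf))  # 455000.0 Hz
print("image frequency:", 2 * f_lo - f_rf)   # 10.91 MHz
print("low-order products:", mixer_products(f_rf, f_lo))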
The advantage of using a VFO as a heterodyning oscillator is that only a small portion of the radio receiver (the sections before the mixer such as the preamplifier) need to have a wide bandwidth. The rest of the receiver can be finely tuned to the IF frequency.
In a direct-conversion receiver, the VFO is tuned to the same frequency as the incoming radio frequency and f_IF = 0 Hz. Demodulation takes place at baseband using low-pass filters and amplifiers.
In a radio frequency (RF) transmitter, VFOs are often used to tune the frequency of the output signal, often indirectly through a heterodyning process similar to that described above. Other uses include chirp generators for radar systems where the VFO is swept rapidly through a range of frequencies, timing signal generation for oscilloscopes and time domain reflectometers, and variable frequency audio generators used in musical instruments and audio test equipment.
Types
There are two main types of VFO in use: analog and digital.
Analog VFOs
An analog VFO is an electronic oscillator where the value of at least one of the passive components is adjustable under user control so as to alter its output frequency.
The passive component whose value is adjustable is usually a capacitor, but could be a variable inductor.
Tuning capacitor
The variable capacitor is a mechanical device in which the separation of a series of interleaved metal plates is physically altered to vary its capacitance. Adjustment of this capacitor is sometimes facilitated by a mechanical step-down gearbox to achieve fine tuning.
Varactor
A reverse-biased semiconductor diode exhibits capacitance. Since the width of its non-conducting depletion region depends on the magnitude of the reverse bias voltage, this voltage can be used to control the junction capacitance. The varactor bias voltage may be generated in a number of ways, and the final design may need no significant moving parts.
Varactors have a number of disadvantages including temperature drift and aging, electronic noise, low Q factor and non-linearity.
Digital VFOs
Modern radio receivers and transmitters usually use some form of digital frequency synthesis to generate their VFO signal.
The advantages include smaller designs, lack of moving parts, the higher stability of set frequency reference oscillators, and the ease with which preset frequencies can be stored and manipulated in the digital computer that is usually embedded in the design in any case.
It is also possible for the radio to become extremely frequency-agile in that the control computer could alter the radio's tuned frequency many tens, thousands or even millions of times a second.
This capability allows communications receivers effectively to monitor many channels at once, perhaps using digital selective calling (DSC) techniques to decide when to open an audio output channel and alert users to incoming communications.
Pre-programmed frequency agility also forms the basis of some military radio encryption and stealth techniques.
Extreme frequency agility lies at the heart of spread spectrum techniques that have gained mainstream acceptance in computer wireless networking such as Wi-Fi.
There are disadvantages to digital synthesis such as the inability of a digital synthesiser to tune smoothly through all frequencies, but with the channelisation of many radio bands, this can also be seen as an advantage in that it prevents radios from operating in between two recognised channels.
Digital frequency synthesis relies on stable crystal controlled reference frequency sources. Crystal controlled oscillators are more stable than inductively and capacitively controlled oscillators. Their disadvantage is that changing frequency (more than a small amount) requires changing the crystal, but frequency synthesizer techniques have made this unnecessary in modern designs.
Digital frequency synthesis
The electronic and digital techniques involved in this include:
Direct digital synthesis (DDS) Enough data points for a mathematical sine function are stored in digital memory. These are recalled at the right speed and fed to a digital-to-analog converter where the required sine wave is built up (a code sketch of this technique appears after this list).
Direct frequency synthesis Early channelized communication radios had multiple crystals - one for each channel on which they could operate. After a while this thinking was combined with the basic ideas of heterodyning and mixing described under purpose above. Multiple crystals can be mixed in various combinations to produce various output frequencies.
Phase locked loop (PLL) Using a varactor-controlled or voltage-controlled oscillator (VCO) (described above in varactor under analog VFO techniques) and a phase detector, a control loop can be set up so that the VCO's output is frequency-locked to a crystal-controlled reference oscillator. The phase detector's comparison is made between the outputs of the two oscillators after frequency division by different divisors. Then, by altering the frequency-division divisor(s) under computer control, a variety of actual (undivided) VCO output frequencies can be generated. The PLL technique dominates most radio VFO designs today (a sketch of the divider arithmetic also appears below).
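The arithmetic behind the last two techniques is simple enough to sketch. First, direct digital synthesis: a phase accumulator steps through a precomputed sine table, and the step size sets the output frequency. The table size, clock rate and output frequency below are illustrative assumptions, not values from any particular chip.

import math

# DDS sketch: a phase accumulator indexes a precomputed sine table.
TABLE_SIZE = 256  # hypothetical table length
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def dds_samples(f_out, f_clock, n):
    """Yield n output samples of a sine wave at f_out for sample clock f_clock."""
    step = f_out * TABLE_SIZE / f_clock  # phase increment per clock tick
    phase = 0.0
    for _ in range(n):
        yield SINE_TABLE[int(phase) % TABLE_SIZE]  # value fed to the DAC
        phase += step

one_cycle = list(dds_samples(f_out=1000.0, f_clock=48000.0, n=48))  # 1 kHz tone

Second, the divider arithmetic of a PLL synthesizer: the loop settles when the divided VCO output matches the divided reference, so the locked output is f_ref × N / M. The divider values below are again hypothetical.

# PLL-synthesizer arithmetic: lock occurs when f_ref/M == f_vco/N,
# giving f_vco = f_ref * N / M. All values are hypothetical.
def pll_output(f_ref, n, m):
    return f_ref * n / m

f_ref = 10.0e6                   # 10 MHz crystal reference
m = 1000                         # reference divider -> 10 kHz comparison frequency
for n in (14500, 14501, 14502):  # stepping N tunes the VCO in 10 kHz channels
    print(pll_output(f_ref, n, m) / 1e6)  # 145.0, 145.01, 145.02 (MHz)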
Performance
The quality metrics for a VFO include frequency stability, phase noise and spectral purity. All of these factors tend to be inversely proportional to the tuning circuit's Q factor. Since in general the tuning range is also inversely proportional to Q, these performance factors generally degrade as the VFO's frequency range is increased.
Stability
Stability is the measure of how far a VFO's output frequency drifts with time and temperature. To mitigate this problem, VFOs are generally "phase locked" to a stable reference oscillator. PLLs use negative feedback to correct for the frequency drift of the VFO allowing for both wide tuning range and good frequency stability.
Repeatability
Ideally, for the same control input to the VFO, the oscillator should generate exactly the same frequency. A change in the calibration of the VFO can change receiver tuning calibration; periodic re-alignment of a receiver may be needed. VFOs used as part of a phase-locked loop frequency synthesizer have less stringent requirements, since the system is as stable as the crystal-controlled reference frequency.
Purity
A plot of a VFO's amplitude vs. frequency may show several peaks, probably harmonically related. Each of these peaks can potentially mix with some other incoming signal and produce a spurious response. These spurii (sometimes spelled spuriae) can result in increased noise or two signals detected where there should only be one. Additional components can be added to a VFO to suppress high-frequency parasitic oscillations, should these be present.
In a transmitter, these spurious signals are generated along with the one desired signal. Filtering may be required to ensure the transmitted signal meets regulations for bandwidth and spurious emissions.
Phase noise
When examined with very sensitive equipment, the pure sine-wave peak in a VFO's frequency graph will most likely turn out not to be sitting on a flat noise-floor. Slight random 'jitters' in the signal's timing will mean that the peak is sitting on 'skirts' of phase noise at frequencies either side of the desired one.
These are also troublesome in crowded bands. They allow through unwanted signals that are fairly close to the expected one, but because of the random quality of these phase-noise 'skirts', the signals are usually unintelligible, appearing just as extra noise in the received signal. The effect is that what should be a clean signal in a crowded band can appear to be a very noisy signal, because of the effects of strong signals nearby.
The effect of VFO phase noise on a transmitter is that random noise is actually transmitted either side of the required signal. Again, this must be avoided for legal reasons in many cases.
Frequency reference
Digital or digitally controlled oscillators typically rely on constant single frequency references, which can be made to a higher standard than semiconductor and LC circuit-based alternatives. Most commonly a quartz crystal based oscillator is used, although in high accuracy applications such as TDMA cellular networks, atomic clocks such as the Rubidium standard are as of 2018 also common.
Because of the stability of the reference used, digital oscillators themselves tend to be more stable and more repeatable in the long term. This in part explains their huge popularity in low-cost and computer-controlled VFOs. In the shorter term the imperfections introduced by digital frequency division and multiplication (jitter), and the susceptibility of the common quartz standard to acoustic shocks, temperature variation, aging, and even radiation, limit the applicability of a naïve digital oscillator.
This is why higher-end VFOs, such as RF transmitters locked to atomic time, tend to combine multiple different references, and in complex ways. Some references like rubidium or cesium clocks provide higher long-term stability, while others like hydrogen masers yield lower short-term phase noise. Lower-frequency (and so lower-cost) oscillators phase-locked to a digitally divided version of the master clock then deliver the eventual VFO output, smoothing out the noise induced by the division algorithms. Such an arrangement can give all of the longer-term stability and repeatability of an exact reference, the benefits of exact digital frequency selection, and good short-term stability, imparted even onto an arbitrary-frequency analogue waveform: the best of all worlds.
See also
Numerically controlled oscillator
Resonance
Tuner (radio)
References
Electronic oscillators
Communication circuits
Radio electronics
Electronic design
Wireless tuning and filtering |
437052 | https://en.wikipedia.org/wiki/Positional%20notation | Positional notation | Positional notation (or place-value notation, or positional numeral system) usually denotes the extension to any base of the Hindu–Arabic numeral system (or decimal system). More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the value of the digit multiplied by a factor determined by the position of the digit. In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred (however, the value may be negated if placed before another digit). In modern positional systems, such as the decimal system, the position of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different positions in the digit string.
The Babylonian numeral system, base 60, was the first positional system to be developed, and its influence is present today in the way time and angles are counted in tallies related to 60, such as 60 minutes in an hour and 360 degrees in a circle. Today, the Hindu–Arabic numeral system (base ten) is the most commonly used system globally. However, the binary numeral system (base two) is used in almost all computers and electronic devices because it is easier to implement efficiently in electronic circuits.
Systems with negative base, complex base or negative digits have been described. Most of them do not require a minus sign for designating negative numbers.
The use of a radix point (decimal point in base ten), extends to include fractions and allows representing any real number with arbitrary accuracy. With positional notation, arithmetical computations are much simpler than with any older numeral system; this led to the rapid spread of the notation when it was introduced in western Europe.
History
Today, the base-10 (decimal) system, which is presumably motivated by counting with the ten fingers, is ubiquitous. Other bases have been used in the past, and some continue to be used today. For example, the Babylonian numeral system, credited as the first positional numeral system, was base-60. However, it lacked a real zero. Initially inferred only from context, later, by about 700 BC, zero came to be indicated by a "space" or a "punctuation symbol" (such as two slanted wedges) between numerals. It was a placeholder rather than a true zero because it was not used alone or at the end of a number. Numbers like 2 and 120 (2×60) looked the same because the larger number lacked a final placeholder. Only context could differentiate them.
The polymath Archimedes (ca. 287–212 BC) invented a decimal positional system in his Sand Reckoner which was based on 10⁸ and later led the German mathematician Carl Friedrich Gauss to lament what heights science would have already reached in his days if Archimedes had fully realized the potential of his ingenious discovery.
Before positional notation became standard, simple additive systems (sign-value notation) such as Roman numerals were used, and accountants in ancient Rome and during the Middle Ages used the abacus or stone counters to do arithmetic.
Counting rods and most abacuses have been used to represent numbers in a positional numeral system. With counting rods or abacus to perform arithmetic operations, the writing of the starting, intermediate and final values of a calculation could easily be done with a simple additive system in each position or column. This approach required no memorization of tables (as does positional notation) and could produce practical results quickly.
The oldest extant positional notation system is that of Chinese rod numerals, used from at least the early 8th century.
It isn't clear whether this system was introduced from India or whether it was developed locally.
Indian numerals originate with the Brahmi numerals of about the 3rd century BC, which symbols were, at the time, not used positionally.
Medieval Indian numerals are positional, as are the derived Arabic numerals, recorded from the 10th century.
After the French Revolution (1789–1799), the new French government promoted the extension of the decimal system.
Some of those pro-decimal efforts—such as decimal time and the decimal calendar—were unsuccessful.
Other French pro-decimal efforts—currency decimalisation and the metrication of weights and measures—spread widely out of France to almost the whole world.
History of positional fractions
J. Lennart Berggren notes that positional decimal fractions were used for the first time by Arab mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century. The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350, but did not develop any notation to represent them. The Persian mathematician Jamshīd al-Kāshī made the same discovery of decimal fractions in the 15th century. Al Khwarizmi introduced fractions to Islamic countries in the early 9th century; his fraction presentation was similar to the traditional Chinese mathematical fractions from Sunzi Suanjing. This form of fraction with numerator on top and denominator at bottom without a horizontal bar was also used by 10th century Abu'l-Hasan al-Uqlidisi and 15th century Jamshīd al-Kāshī's work "Arithmetic Key".
The adoption of the decimal representation of numbers less than one, a fraction, is often credited to Simon Stevin through his textbook De Thiende; but both Stevin and E. J. Dijksterhuis indicate that Regiomontanus contributed to the European adoption of general decimals:
European mathematicians, when taking over from the Hindus, via the Arabs, the idea of positional value for integers, neglected to extend this idea to fractions. For some centuries they confined themselves to using common and sexagesimal fractions... This half-heartedness has never been completely overcome, and sexagesimal fractions still form the basis of our trigonometry, astronomy and measurement of time. ¶ ... Mathematicians sought to avoid fractions by taking the radius R equal to a number of units of length of the form 10n and then assuming for n so great an integral value that all occurring quantities could be expressed with sufficient accuracy by integers. ¶ The first to apply this method was the German astronomer Regiomontanus. To the extent that he expressed goniometrical line-segments in a unit R/10n, Regiomontanus may be called an anticipator of the doctrine of decimal positional fractions.
In the estimation of Dijksterhuis, "after the publication of De Thiende only a small advance was required to establish the complete system of decimal positional fractions, and this step was taken promptly by a number of writers ... next to Stevin the most important figure in this development was Regiomontanus." Dijksterhuis noted that [Stevin] "gives full credit to Regiomontanus for his prior contribution, saying that the trigonometric tables of the German astronomer actually contain the whole theory of 'numbers of the tenth progress'."
Issues
A key argument against the positional system was its susceptibility to easy fraud by simply putting a number at the beginning or end of a quantity, thereby changing (e.g.) 100 into 5100, or 100 into 1000. Modern cheques require a natural language spelling of an amount, as well as the decimal amount itself, to prevent such fraud. For the same reason the Chinese also use natural language numerals, for example 100 is written as 壹佰, which can never be forged into 壹仟(1000) or 伍仟壹佰(5100).
Many of the advantages claimed for the metric system could be realized by any consistent positional notation.
Dozenal advocates say duodecimal has several advantages over decimal, although the switching cost appears to be high.
Mathematics
Base of the numeral system
In mathematical numeral systems the radix is usually the number of unique digits, including zero, that a positional numeral system uses to represent numbers. In the interesting cases the radix is the absolute value |b| of the base b, which may also be negative. For example, for the decimal system the radix (and base) is ten, because it uses the ten digits from 0 through 9. When a number "hits" 9, the next number will not be another different symbol, but a "1" followed by a "0". In binary, the radix is two, since after it hits "1", instead of "2" or another written symbol, it jumps straight to "10", followed by "11" and "100".
The highest symbol of a positional numeral system usually has the value one less than the value of the radix of that numeral system. The standard positional numeral systems differ from one another only in the base they use.
The radix is an integer that is greater than 1, since a radix of zero would not have any digits, and a radix of 1 would only have the zero digit. Negative bases are rarely used. In a system with more than |b| unique digits, numbers may have many different possible representations.
It is important that the radix be finite, from which it follows that the number of digits is quite low. Otherwise, the length of a numeral would not necessarily be logarithmic in its size.
(In certain non-standard positional numeral systems, including bijective numeration, the definition of the base or the allowed digits deviates from the above.)
In standard base-ten (decimal) positional notation, there are ten decimal digits and, for example, the number
2506 = (2 × 10³) + (5 × 10²) + (0 × 10¹) + (6 × 10⁰).
In standard base-sixteen (hexadecimal), there are the sixteen hexadecimal digits (0–9 and A–F) and, for example, the number
19B₁₆ = (1 × 16²) + (9 × 16¹) + (11 × 16⁰) (= 411₁₀),
where B represents the number eleven as a single symbol.
In general, in base-b, there are b digits forming a set D and the number
(a₃a₂a₁a₀)_b = (a₃ × b³) + (a₂ × b²) + (a₁ × b¹) + (a₀ × b⁰)
has each of the digits a₃, a₂, a₁, a₀ in D. Note that a₃a₂a₁a₀ represents a sequence of digits, not multiplication.
Notation
When describing base in mathematical notation, the letter b is generally used as a symbol for this concept, so, for a binary system, b equals 2. Another common way of expressing the base is writing it as a decimal subscript after the number that is being represented (this notation is used in this article). 1111011₂ implies that the number 1111011 is a base-2 number, equal to 123₁₀ (a decimal notation representation), 173₈ (octal) and 7B₁₆ (hexadecimal). In books and articles, when using initially the written abbreviations of number bases, the base is not subsequently printed: it is assumed that binary 1111011 is the same as 1111011₂.
The base b may also be indicated by the phrase "base-b". So binary numbers are "base-2"; octal numbers are "base-8"; decimal numbers are "base-10"; and so on.
To a given radix b the set of digits {0, 1, ..., b−2, b−1} is called the standard set of digits. Thus, binary numbers have digits {0, 1}; decimal numbers have digits {0, 1, 2, ..., 8, 9}; and so on. Therefore, the following are notational errors: 52₂, 2₂, 1A₉. (In all cases, one or more digits is not in the set of allowed digits for the given base.)
Exponentiation
Positional numeral systems work using exponentiation of the base. A digit's value is the digit multiplied by the value of its place. Place values are the number of the base raised to the nth power, where n is the number of other digits between a given digit and the radix point. If a given digit is on the left hand side of the radix point (i.e. its value is an integer) then n is positive or zero; if the digit is on the right hand side of the radix point (i.e., its value is fractional) then n is negative.
As an example of usage, the number 465 in its respective base b (which must be at least base 7 because the highest digit in it is 6) is equal to:
465_b = (4 × b²) + (6 × b¹) + (5 × b⁰)
If the number 465 was in base-10, then it would equal:
(4 × 10²) + (6 × 10¹) + (5 × 10⁰) = 465 (465₁₀ = 465₁₀)
If however, the number were in base 7, then it would equal:
(4 × 7²) + (6 × 7¹) + (5 × 7⁰) = 243 (465₇ = 243₁₀)
10_b = b for any base b, since 10_b = (1 × b¹) + (0 × b⁰). For example, 10₂ = 2; 10₃ = 3; 10₁₆ = 16₁₀. Note that the last "16" is indicated to be in base 10. The base makes no difference for one-digit numerals.
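The evaluation rule just described is a one-line computation. The following Python sketch (with an illustrative function name, taking the digit values most significant first) reproduces the 465 examples above.

# Sketch: evaluate digit values in a given base by summing digit * base**n.
def positional_value(digits, base):
    """digits: digit values, most significant first, e.g. [4, 6, 5]."""
    n = len(digits) - 1  # exponent of the leading digit
    return sum(d * base ** (n - i) for i, d in enumerate(digits))

print(positional_value([4, 6, 5], 10))  # 465
print(positional_value([4, 6, 5], 7))   # 243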
This concept can be demonstrated using a diagram. One object represents one unit. When the number of objects is equal to or greater than the base b, then a group of objects is created with b objects. When the number of these groups exceeds b, then a group of these groups of objects is created with b groups of b objects; and so on. Thus the same number in different bases will have different values:
241 in base 5:
2 groups of 52 (25) 4 groups of 5 1 group of 1
ooooo ooooo
ooooo ooooo ooooo ooooo
ooooo ooooo + + o
ooooo ooooo ooooo ooooo
ooooo ooooo
241 in base 8:
2 groups of 82 (64) 4 groups of 8 1 group of 1
oooooooo oooooooo
oooooooo oooooooo
oooooooo oooooooo oooooooo oooooooo
oooooooo oooooooo + + o
oooooooo oooooooo
oooooooo oooooooo oooooooo oooooooo
oooooooo oooooooo
oooooooo oooooooo
The notation can be further augmented by allowing a leading minus sign. This allows the representation of negative numbers. For a given base, every representation corresponds to exactly one real number and every real number has at least one representation. The representations of rational numbers are those representations that are finite, use the bar notation, or end with an infinitely repeating cycle of digits.
Digits and numerals
A digit is a symbol that is used for positional notation, and a numeral consists of one or more digits used for representing a number with positional notation. Today's most common digits are the decimal digits "0", "1", "2", "3", "4", "5", "6", "7", "8", and "9". The distinction between a digit and a numeral is most pronounced in the context of a number base.
A non-zero numeral with more than one digit position will mean a different number in a different number base, but in general, the digits will mean the same. For example, the base-8 numeral 23₈ contains two digits, "2" and "3", and with a base number (subscripted) "8". When converted to base-10, the 23₈ is equivalent to 19₁₀, i.e. 23₈ = 19₁₀. In our notation here, the subscript "8" of the numeral 23₈ is part of the numeral, but this may not always be the case.
Imagine the numeral "23" as having an ambiguous base number. Then "23" could likely be any base, from base-4 up. In base-4, the "23" means 11₁₀, i.e. 23₄ = 11₁₀. In base-60, the "23" means the number 123₁₀, i.e. 23₆₀ = 123₁₀. The numeral "23" then, in this case, corresponds to the set of base-10 numbers {11, 13, 15, 17, 19, 21, 23, ..., 121, 123} while its digits "2" and "3" always retain their original meaning: the "2" means "two of", and the "3" means "three of".
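The set of values just described can be generated mechanically; a two-line Python sketch:

# "23" read in every base from 4 to 60 evaluates to 2*b + 3 for each base b.
values = [2 * b + 3 for b in range(4, 61)]
print(values)  # [11, 13, 15, ..., 121, 123]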
In certain applications when a numeral with a fixed number of positions needs to represent a greater number, a higher number-base with more digits per position can be used. A three-digit, decimal numeral can represent only up to 999. But if the number-base is increased to 11, say, by adding the digit "A", then the same three positions, maximized to "AAA", can represent a number as great as 1330. We could increase the number base again and assign "B" to 11, and so on (but there is also a possible encryption between number and digit in the number-digit-numeral hierarchy). A three-digit numeral "ZZZ" in base-60 could mean 215 999 (that is, 59 × 60² + 59 × 60 + 59). If we use the entire collection of our alphanumerics we could ultimately serve a base-62 numeral system, but we remove two digits, uppercase "I" and uppercase "O", to reduce confusion with digits "1" and "0".
We are left with a base-60, or sexagesimal numeral system utilizing 60 of the 62 standard alphanumerics. (But see Sexagesimal system below.) In general, the number of possible values that can be represented by a d-digit number in base b is bᵈ.
The common numeral systems in computer science are binary (radix 2), octal (radix 8), and hexadecimal (radix 16). In binary only digits "0" and "1" are in the numerals. In the octal numerals, are the eight digits 0–7. Hex is 0–9 A–F, where the ten numerics retain their usual meaning, and the alphabetics correspond to values 10–15, for a total of sixteen digits. The numeral "10" is binary numeral "2", octal numeral "8", or hexadecimal numeral "16".
Radix point
The notation can be extended into the negative exponents of the base b. Thereby the so-called radix point, mostly ».«, is used as separator of the positions with non-negative from those with negative exponent.
Numbers that are not integers use places beyond the radix point. For every position behind this point (and thus after the units digit), the exponent n of the power bⁿ decreases by 1 and the power approaches 0. For example, the number 2.35 is equal to:
2.35 = (2 × 10⁰) + (3 × 10⁻¹) + (5 × 10⁻²)
Sign
If the base and all the digits in the set of digits are non-negative, negative numbers cannot be expressed. To overcome this, a minus sign, here »-«, is added to the numeral system. In the usual notation it is prepended to the string of digits representing the otherwise non-negative number.
Base conversion
The conversion to a base b₂ of an integer represented in base b₁ can be done by a succession of Euclidean divisions by b₂: the right-most digit in base b₂ is the remainder of the division of the integer by b₂; the second right-most digit is the remainder of the division of the quotient by b₂; and so on. The left-most digit is the last quotient. In general, the kth digit from the right is the remainder of the division by b₂ of the (k−1)th quotient.
For example: converting 0xA10B (hexadecimal) to decimal (41227):
0xA10B/10 = 0x101A R: 7 (ones place)
0x101A/10 = 0x19C R: 2 (tens place)
0x19C/10 = 0x29 R: 2 (hundreds place)
0x29/10 = 0x4 R: 1 (thousands place)
0x4/10 = 0x0 R: 4 (ten-thousands place)
When converting to a larger base (such as from binary to decimal), the remainder is represented as a single digit, using digits from the larger base. For example: converting 0b11111001 (binary) to 249 (decimal):
0b11111001/10 = 0b11000 R: 0b1001 (0b1001 = "9" for ones place)
0b11000/10 = 0b10 R: 0b100 (0b100 = "4" for tens)
0b10/10 = 0b0 R: 0b10 (0b10 = "2" for hundreds)
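A compact Python sketch of this repeated-division procedure follows; the digit alphabet (letters A–Z beyond 9) and the function name are assumptions of this example, not part of the algorithm itself.

# Convert a non-negative integer to base b by repeated Euclidean division.
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # assumed digit alphabet

def to_base(n, b):
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, b)    # the remainder r is the next digit from the right
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(to_base(0xA10B, 10))      # 41227
print(to_base(0b11111001, 10))  # 249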
For the fractional part, conversion can be done by taking the digits after the radix point (the numerator) and dividing them by the implied denominator in the target radix. Approximation may be needed due to the possibility of non-terminating digits if the reduced fraction's denominator has a prime factor other than any of the base's prime factor(s). For example, 0.1 in decimal (1/10) is 0b1/0b1010 in binary; dividing this in that radix gives 0b0.00011 (with the block 0011 repeating), because one of the prime factors of 10 is 5. For more general fractions and bases see the algorithm for positive bases.
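The fractional-part procedure can likewise be sketched: repeatedly multiply by the target base, emit the integer part as the next digit, and stop after a fixed number of digits because the expansion may not terminate. The digit cutoff below is an arbitrary choice of this example.

from fractions import Fraction

# Expand a fraction in base b by repeated multiplication, truncating
# after max_digits since the expansion may not terminate.
def frac_to_base(frac, b, max_digits=8):
    digits = []
    for _ in range(max_digits):
        frac *= b
        d = int(frac)          # the integer part is the next digit
        digits.append(str(d))
        frac -= d
        if frac == 0:
            break
    return "0." + "".join(digits)

print(frac_to_base(Fraction(1, 10), 2))  # 0.00011001 (decimal 0.1 repeats in binary)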
In practice, Horner's method is more efficient than the repeated division required above. A number in positional notation can be thought of as a polynomial, where each digit is a coefficient. Coefficients can be larger than one digit, so an efficient way to convert bases is to convert each digit, then evaluate the polynomial via Horner's method within the target base. Converting each digit is a simple lookup table, removing the need for expensive division or modulus operations; and multiplication by x becomes left-shifting. However, other polynomial evaluation algorithms would work as well, like repeated squaring for single or sparse digits. Example:
Convert 0xA10B to 41227
A10B = (10*16^3) + (1*16^2) + (0*16^1) + (11*16^0)
Lookup table:
0x0 = 0
0x1 = 1
...
0x9 = 9
0xA = 10
0xB = 11
0xC = 12
0xD = 13
0xE = 14
0xF = 15
Therefore 0xA10B's decimal digits are 10, 1, 0, and 11.
Lay the digits out like this. The most significant digit (10) is "dropped":
10 1 0 11 <- Digits of 0xA10B
---------------
10
Then we multiply the bottom number by the source base (16); the product is placed under the next digit of the source value, and then added:
10 1 0 11
160
---------------
10 161
Repeat until the final addition is performed:
10 1 0 11
160 2576 41216
---------------
10 161 2576 41227
and that is 41227 in decimal.
Convert 0b11111001 to 249
Lookup table:
0b0 = 0
0b1 = 1
Result:
1 1 1 1 1 0 0 1 <- Digits of 0b11111001
2 6 14 30 62 124 248
-------------------------
1 3 7 15 31 62 124 249
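In code, the whole tabular procedure above collapses to one accumulator update per digit; a Python sketch (digit values supplied most significant first, as in the examples):

# Horner's method: multiply the accumulator by the source base, add the digit.
def horner_value(digits, base):
    acc = 0
    for d in digits:
        acc = acc * base + d
    return acc

print(horner_value([10, 1, 0, 11], 16))           # 41227, i.e. 0xA10B
print(horner_value([1, 1, 1, 1, 1, 0, 0, 1], 2))  # 249, i.e. 0b11111001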
Terminating fractions
The numbers which have a finite representation form the semiring
ℤ/b^∞ := ⋃_{i ≥ 0} b^(−i) ℤ_{≥0}.
More explicitly, if b = p₁^(ν₁) ⋯ p_k^(ν_k) is a factorization of b into the primes p₁, …, p_k with exponents ν₁, …, ν_k, then with the non-empty set of denominators S := {p₁, …, p_k} we have
ℤ/b^∞ = ℤ_S := ⟨S⟩^(−1) ℤ,
where ⟨S⟩ is the multiplicative group generated by the p_i and ℤ_S is the so-called localization of ℤ with respect to S.
The denominator of an element of ℤ_S contains, if reduced to lowest terms, only prime factors out of S.
This ring of all terminating fractions to base b is dense in the field of rational numbers ℚ. Its completion for the usual (Archimedean) metric is the same as for ℚ, namely the real numbers ℝ. So, if S = {p} for a single prime p, then ℤ_S is not to be confused with ℤ_(p), the discrete valuation ring for the prime p, which is equal to the localization ℤ_T with T the set of all primes other than p.
If b divides b′, then ℤ/b^∞ ⊆ ℤ/b′^∞: every fraction that terminates in base b also terminates in base b′.
Infinite representations
Rational numbers
The representation of non-integers can be extended to allow an infinite string of digits beyond the point. For example, 1.12112111211112 ... base-3 represents the sum of the infinite series:
1 + (1 × 3⁻¹) + (2 × 3⁻²) + (1 × 3⁻³) + (1 × 3⁻⁴) + (2 × 3⁻⁵) + …
Since a complete infinite string of digits cannot be explicitly written, the trailing ellipsis (...) designates the omitted digits, which may or may not follow a pattern of some kind. One common pattern is when a finite sequence of digits repeats infinitely. This is designated by drawing a vinculum across the repeating block, for example writing 2.42314₅ with a vinculum over the block "314" to denote 2.42314314314…₅.
This is the repeating decimal notation (to which there does not exist a single universally accepted notation or phrasing).
For base 10 it is called a repeating decimal or recurring decimal.
An irrational number has an infinite non-repeating representation in all integer bases. Whether a rational number has a finite representation or requires an infinite repeating representation depends on the base. For example, one third can be represented by the finite representation 0.1₃ in base 3, or, with the base implied, by the infinite repeating representation 0.333… in base 10
(see also 0.999...)
For integers p and q with gcd(p, q) = 1, the fraction p/q has a finite representation in base b if and only if each prime factor of q is also a prime factor of b.
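This criterion is easy to test mechanically: strip from q every prime factor it shares with b and check whether anything remains. A Python sketch (the function name is an illustration, not a standard library call):

from math import gcd

# p/q terminates in base b exactly when q (in lowest terms) divides some power of b.
def terminates(p, q, b):
    q //= gcd(p, q)        # reduce the fraction to lowest terms
    g = gcd(q, b)
    while g > 1:
        while q % g == 0:
            q //= g        # strip prime factors shared with the base
        g = gcd(q, b)
    return q == 1

print(terminates(1, 3, 10))  # False: 1/3 = 0.333... in base 10
print(terminates(1, 3, 3))   # True:  1/3 = 0.1 in base 3
print(terminates(1, 10, 2))  # False: decimal 0.1 repeats in binary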
For a given base, any number that can be represented by a finite number of digits (without using the bar notation) will have multiple representations, including one or two infinite representations:
1. A finite or infinite number of zeroes can be appended: for example, 8.32 = 8.320 = 8.32000000…
2. The last non-zero digit can be reduced by one and an infinite string of digits, each corresponding to one less than the base, appended (or replacing any following zero digits): for example, in base ten, 8.32 = 8.31999999…
(see also 0.999...)
Irrational numbers
A (real) irrational number has an infinite non-repeating representation in all integer bases.
Examples are the non-solvable nth roots
ⁿ√m (with m ≥ 2 and n ≥ 2), numbers which are called algebraic, or numbers like π and e,
which are transcendental. The number of transcendentals is uncountable and the sole way to write them down with a finite number of symbols is to give them a symbol or a finite sequence of symbols.
Applications
Decimal system
In the decimal (base-10) Hindu–Arabic numeral system, each position starting from the right is a higher power of 10. The first position represents 10⁰ (1), the second position 10¹ (10), the third position 10² (10 × 10 or 100), the fourth position 10³ (10 × 10 × 10 or 1000), and so on.
Fractional values are indicated by a separator, which can vary in different locations. Usually this separator is a period or full stop, or a comma. Digits to the right of it are multiplied by 10 raised to a negative power or exponent. The first position to the right of the separator indicates 10⁻¹ (0.1), the second position 10⁻² (0.01), and so on for each successive position.
As an example, the number 2674 in a base-10 numeral system is:
(2 × 10³) + (6 × 10²) + (7 × 10¹) + (4 × 10⁰)
or
(2 × 1000) + (6 × 100) + (7 × 10) + (4 × 1).
Sexagesimal system
The sexagesimal or base-60 system was used for the integral and fractional portions of Babylonian numerals and other Mesopotamian systems, by Hellenistic astronomers using Greek numerals for the fractional portion only, and is still used for modern time and angles, but only for minutes and seconds. However, not all of these uses were positional.
Modern time separates each position by a colon or a prime symbol. For example, the time might be 10:25:59 (10 hours 25 minutes 59 seconds). Angles use similar notation. For example, an angle might be 10°25′59″ (10 degrees 25 minutes 59 seconds). In both cases, only minutes and seconds use sexagesimal notation; angular degrees can be larger than 59 (one rotation around a circle is 360°, two rotations are 720°, etc.), and both time and angles use decimal fractions of a second. This contrasts with the numbers used by Hellenistic and Renaissance astronomers, who used thirds, fourths, etc. for finer increments. Where we might write 10°25′59.392″, they would have written 10°25′59″23‴31⁗12 (that is, 23 thirds, 31 fourths and 12 fifths).
Using a digit set of 60 digits drawn from 0–9 plus upper- and lowercase letters allows short notation for sexagesimal numbers, e.g. 10:25:59 becomes 'ARz' (by omitting I and O, but not i and o), which is useful for use in URLs, etc., but it is not very intelligible to humans.
In the 1930s, Otto Neugebauer introduced a modern notational system for Babylonian and Hellenistic numbers that substitutes modern decimal notation from 0 to 59 in each position, while using a semicolon (;) to separate the integral and fractional portions of the number and using a comma (,) to separate the positions within each portion. For example, the mean synodic month used by both Babylonian and Hellenistic astronomers and still used in the Hebrew calendar is 29;31,50,8,20 days, and the angle used in the example above would be written 10;25,59,23,31,12 degrees.
Computing
In computing, the binary (base-2), octal (base-8) and hexadecimal (base-16) bases are most commonly used. Computers, at the most basic level, deal only with sequences of conventional zeroes and ones, thus it is easier in this sense to deal with powers of two. The hexadecimal system is used as "shorthand" for binary—every 4 binary digits (bits) relate to one and only one hexadecimal digit. In hexadecimal, the six digits after 9 are denoted by A, B, C, D, E, and F (and sometimes a, b, c, d, e, and f).
The octal numbering system is also used as another way to represent binary numbers. In this case the base is 8 and therefore only digits 0, 1, 2, 3, 4, 5, 6, and 7 are used. When converting from binary to octal every 3 bits relate to one and only one octal digit.
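Python's built-in base formatting makes the bit-grouping correspondence described in the last two paragraphs easy to see; a short sketch:

# The binary digits of 123 regroup directly into octal (3 bits per
# digit) and hexadecimal (4 bits per digit).
n = 0b1111011            # 123 in decimal
print(format(n, "b"))    # 1111011
print(format(n, "o"))    # 173  <- groups of 3 bits: 1 111 011
print(format(n, "x"))    # 7b   <- groups of 4 bits: 111 1011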
Hexadecimal, decimal, octal, and a wide variety of other bases have been used for binary-to-text encoding, implementations of arbitrary-precision arithmetic, and other applications.
For a list of bases and their applications, see list of numeral systems.
Other bases in human language
Base-12 systems (duodecimal or dozenal) have been popular because multiplication and division are easier than in base-10, with addition and subtraction being just as easy. Twelve is a useful base because it has many factors. It is the smallest common multiple of one, two, three, four and six. There is still a special word for "dozen" in English, and by analogy with the word for 102, hundred, commerce developed a word for 122, gross. The standard 12-hour clock and common use of 12 in English units emphasize the utility of the base. In addition, prior to its conversion to decimal, the old British currency Pound Sterling (GBP) partially used base-12; there were 12 pence (d) in a shilling (s), 20 shillings in a pound (£), and therefore 240 pence in a pound. Hence the term LSD or, more properly, £sd.
The Maya civilization and other civilizations of pre-Columbian Mesoamerica used base-20 (vigesimal), as did several North American tribes (two being in southern California). Evidence of base-20 counting systems is also found in the languages of central and western Africa.
Remnants of a Gaulish base-20 system also exist in French, as seen today in the names of the numbers from 60 through 99. For example, sixty-five is soixante-cinq (literally, "sixty [and] five"), while seventy-five is soixante-quinze (literally, "sixty [and] fifteen"). Furthermore, for any number between 80 and 99, the "tens-column" number is expressed as a multiple of twenty. For example, eighty-two is quatre-vingt-deux (literally, four twenty[s] [and] two), while ninety-two is quatre-vingt-douze (literally, four twenty[s] [and] twelve). In Old French, forty was expressed as two twenties and sixty was three twenties, so that fifty-three was expressed as two twenties [and] thirteen, and so on.
In English the same base-20 counting appears in the use of "scores". Although mostly historical, it is occasionally used colloquially. Verse 10 of Psalm 90 in the King James Version of the Bible starts: "The days of our years are threescore years and ten; and if by reason of strength they be fourscore years, yet is their strength labour and sorrow". The Gettysburg Address starts: "Four score and seven years ago".
The Irish language also used base-20 in the past, twenty being fichid, forty dhá fhichid, sixty trí fhichid and eighty ceithre fhichid. A remnant of this system may be seen in the modern word for 40, daichead.
The Welsh language continues to use a base-20 counting system, particularly for the age of people, dates and in common phrases. 15 is also important, with 16–19 being "one on 15", "two on 15" etc. 18 is normally "two nines". A decimal system is commonly used.
The Inuit languages use a base-20 counting system. Students from Kaktovik, Alaska invented a base-20 numeral system in 1994.
Danish numerals display a similar base-20 structure.
The Māori language of New Zealand also has evidence of an underlying base-20 system as seen in the terms Te Hokowhitu a Tu referring to a war party (literally "the seven 20s of Tu") and Tama-hokotahi, referring to a great warrior ("the one man equal to 20").
The binary system was used in the Egyptian Old Kingdom, 3000 BC to 2050 BC. It was cursive by rounding off rational numbers smaller than 1 to 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64, with a 1/64 term thrown away (the system was called the Eye of Horus).
A number of Australian Aboriginal languages employ binary or binary-like counting systems. For example, in Kala Lagaw Ya, the numbers one through six are urapon, ukasar, ukasar-urapon, ukasar-ukasar, ukasar-ukasar-urapon, ukasar-ukasar-ukasar.
North and Central American natives used base-4 (quaternary) to represent the four cardinal directions. Mesoamericans tended to add a second base-5 system to create a modified base-20 system.
A base-5 system (quinary) has been used in many cultures for counting. Plainly it is based on the number of digits on a human hand. It may also be regarded as a sub-base of other bases, such as base-10, base-20, and base-60.
A base-8 system (octal) was devised by the Yuki tribe of Northern California, who used the spaces between the fingers to count, corresponding to the digits one through eight. There is also linguistic evidence which suggests that the Bronze Age Proto-Indo Europeans (from whom most European and Indic languages descend) might have replaced a base-8 system (or a system which could only count up to 8) with a base-10 system. The evidence is that the word for 9, newm, is suggested by some to derive from the word for "new", newo-, suggesting that the number 9 had been recently invented and called the "new number".
Many ancient counting systems use five as a primary base, almost surely coming from the number of fingers on a person's hand. Often these systems are supplemented with a secondary base, sometimes ten, sometimes twenty. In some African languages the word for five is the same as "hand" or "fist" (Dyola language of Guinea-Bissau, Banda language of Central Africa). Counting continues by adding 1, 2, 3, or 4 to combinations of 5, until the secondary base is reached. In the case of twenty, this word often means "man complete". This system is referred to as quinquavigesimal. It is found in many languages of the Sudan region.
The Telefol language, spoken in Papua New Guinea, is notable for possessing a base-27 numeral system.
Non-standard positional numeral systems
Interesting properties exist when the base is not fixed or positive and when the digit symbol sets denote negative values. There are many more variations. These systems are of practical and theoretic value to computer scientists.
Balanced ternary uses a base of 3 but the digit set is {−1, 0, 1} instead of {0, 1, 2}; the digit −1 is conventionally written as a 1 with a vinculum, and is written here as T. The negation of a number is easily formed by swapping the 1s and the Ts. This system can be used to solve the balance problem, which requires finding a minimal set of known counter-weights to determine an unknown weight. Weights of 1, 3, 9, ... 3ⁿ known units can be used to determine any unknown weight up to 1 + 3 + ... + 3ⁿ units. A weight can be used on either side of the balance or not at all. Weights used on the balance pan with the unknown weight are designated with T, with 1 if used on the empty pan, and with 0 if not used. If an unknown weight W is balanced with 3 (3¹) on its pan and 1 and 27 (3⁰ and 3³) on the other, then its weight in decimal is 25, or 10T1 in balanced base-3.
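A conversion sketch in Python, writing 'T' for the −1 digit as above (the letter is an arbitrary notational choice):

# Convert an integer to balanced ternary; 'T' denotes the digit -1.
def to_balanced_ternary(n):
    if n == 0:
        return "0"
    out = []
    while n != 0:
        r = n % 3
        if r == 2:             # a remainder of 2 becomes -1 plus a carry
            out.append("T")
            n += 1
        else:
            out.append(str(r))
        n //= 3
    return "".join(reversed(out))

print(to_balanced_ternary(25))  # 10T1: 27 + 0 - 3 + 1 = 25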
The factorial number system uses a varying radix, giving factorials as place values; they are related to Chinese remainder theorem and residue number system enumerations. This system effectively enumerates permutations. A derivative of this uses the Towers of Hanoi puzzle configuration as a counting system. The configuration of the towers can be put into 1-to-1 correspondence with the decimal count of the step at which the configuration occurs and vice versa.
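A sketch of the factorial number system in Python: the place values are 1!, 2!, 3!, …, and the digit in the k! place may range from 0 to k, which is what makes the representation unique. The function name is illustrative.

# Factorial-base representation: divide by 2, 3, 4, ... collecting remainders.
def to_factorial_base(n):
    digits, k = [], 2
    while n > 0:
        n, r = divmod(n, k)    # r is the digit in the (k-1)! place
        digits.append(r)
        k += 1
    return list(reversed(digits)) or [0]

print(to_factorial_base(463))  # [3, 4, 1, 0, 1] = 3*5! + 4*4! + 1*3! + 0*2! + 1*1!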
Non-positional positions
Each position does not need to be positional itself. Babylonian sexagesimal numerals were positional, but in each position were groups of two kinds of wedges representing ones and tens (a narrow vertical wedge ( | ) and an open left pointing wedge (<))—up to 14 symbols per position (5 tens (<<<<<) and 9 ones ( ||||||||| ) grouped into one or two near squares containing up to three tiers of symbols, or a place holder (\\) for the lack of a position). Hellenistic astronomers used one or two alphabetic Greek numerals for each position (one chosen from 5 letters representing 10–50 and/or one chosen from 9 letters representing 1–9, or a zero symbol).
See also
Examples:
List of numeral systems
:Category:Positional numeral systems
Related topics:
Algorism
Hindu–Arabic numeral system
Mixed radix
Non-standard positional numeral systems
Numeral system
Scientific notation
Other:
Significant figures
Notes
References
External links
Accurate Base Conversion
The Development of Hindu Arabic and Traditional Chinese Arithmetics
Implementation of Base Conversion at cut-the-knot
Learn to count other bases on your fingers
Online Arbitrary Precision Base Converter
Mathematical notation
Articles containing proofs |
438022 | https://en.wikipedia.org/wiki/Nintendo%20DS | Nintendo DS | The is a handheld game console produced by Nintendo, released globally across 2004 and 2005. The DS, an initialism for "Developers' System" or "Dual Screen", introduced distinctive new features to handheld games: two LCD screens working in tandem (the bottom one being a touchscreen), a built-in microphone and support for wireless connectivity. Both screens are encompassed within a clamshell design similar to the Game Boy Advance SP. The Nintendo DS also features the ability for multiple DS consoles to directly interact with each other over Wi-Fi within a short range without the need to connect to an existing wireless network. Alternatively, they could interact online using the now-defunct Nintendo Wi-Fi Connection service. Its main competitor was Sony's PlayStation Portable during the seventh generation of video game consoles.
Prior to its release, the Nintendo DS was marketed as an experimental "third pillar" in Nintendo's console lineup, meant to complement the Game Boy Advance family and GameCube. However, backward compatibility with Game Boy Advance titles and strong sales ultimately established it as the successor to the Game Boy series. On March 2, 2006, Nintendo launched the Nintendo DS Lite, a slimmer and lighter redesign of the original Nintendo DS with brighter screens and a longer lasting battery. On November 1, 2008, Nintendo released the Nintendo DSi, another redesign with several hardware improvements and new features, although it lost backwards compatibility for Game Boy Advance titles and a few DS games that used the GBA slot. On November 21, 2009, Nintendo released the Nintendo DSi XL, a larger version of the DSi.
All Nintendo DS models combined have sold 154.02 million units, making it the best-selling Nintendo system, the best-selling handheld game console to date, and the second best-selling video game console of all time, overall, behind Sony's PlayStation 2. The Nintendo DS line was succeeded by the Nintendo 3DS family in February 2011, which maintains backward compatibility with nearly all Nintendo DS and DSi software except for some software that requires the GBA slot for use.
History
Development
Development on the Nintendo DS began around mid-2002, following an original idea from former Nintendo president Hiroshi Yamauchi about a dual-screened console. On November 13, 2003, Nintendo announced that it would be releasing a new game product in 2004. The company did not provide many details, but stated it would not succeed the Game Boy Advance or GameCube. On January 20, 2004, the console was announced under the codename "Nintendo DS". Nintendo released only a few details at that time, saying that the console would have two separate, 3-inch TFT LCD display panels, separate processors, and up to 1 gigabit (128 megabytes) of semiconductor memory. Nintendo's president at the time, Satoru Iwata, said, "We have developed Nintendo DS based upon a completely different concept from existing game devices in order to provide players with a unique entertainment experience for the 21st century." He also expressed optimism that the DS would help put Nintendo back at the forefront of innovation and move away from the conservative image the company had acquired in years past. In March 2004, a document containing most of the console's technical specifications was leaked, also revealing its internal development name, "Nitro". In May 2004, the console was shown in prototype form at E3 2004, still under the name "Nintendo DS". On July 28, 2004, Nintendo revealed a new design that was described as "sleeker and more elegant" than the one shown at E3 and announced Nintendo DS as the device's official name. Following lukewarm GameCube sales, Hiroshi Yamauchi stressed the importance of the new console's success to the company's future, in a statement which can be translated from Japanese as, "If the DS succeeds, we will rise to heaven, but if it fails we will sink to hell."
Launch
President Iwata referred to Nintendo DS as "Nintendo's first hardware launch in support of the basic strategy 'Gaming Population Expansion'" because the touch-based device "allows users to play intuitively". On September 20, 2004, Nintendo announced that the Nintendo DS would be released in North America on November 21, 2004, for US$149.99. It was set to release on December 2, 2004, in Japan for JP¥15,000; on February 24, 2005, in Australia for A$199.95; and on March 11, 2005, in Europe for €149.99 (£99.99 in the United Kingdom). The console was released in North America with a midnight launch event at Universal CityWalk EB Games in Los Angeles, California. The console was launched quietly in Japan compared to the North America launch; one source cited the cold weather as the reason.
North America and Japan
The Nintendo DS was launched in North America for US$149.99 on November 21, 2004; in Japan for JP¥15,000 on December 2 in the color "Titanium". Well over three million preorders were taken in North America and Japan; preorders at online stores were launched on November 3 and ended the same day as merchants had already sold their allotment. Initially, Nintendo planned to deliver one million units combined at the North American and Japanese launches; when it saw the preorder numbers, it brought another factory online to ramp up production. Nintendo originally slated 300,000 units for the U.S. debut; 550,000 were shipped, and just over 500,000 of those sold through in the first week. Later in 2005, the manufacturer suggested retail price for the Nintendo DS was dropped to US$129.99.
Both launches proved to be successful, but Nintendo chose to release the DS in North America prior to Japan, a first for a hardware launch from the Kyoto-based company. This choice was made to get the DS out for the largest shopping day of the year in the U.S. (the day after Thanksgiving, also known as "Black Friday"). Perhaps partly due to the release date, the DS met unexpectedly high demand in the United States, selling 1 million units by December 21, 2004. By the end of December, the total number shipped worldwide was 2.8 million, about 800,000 more than Nintendo's original forecast. At least 1.2 million of them were sold in the U.S. Some industry reporters referred to it as "the Tickle Me Elmo of 2004". In June 2005, Nintendo informed the press that a total of 6.65 million units had been sold worldwide.
As is normal for electronics, some units were reported as having problems with stuck pixels in either of the two screens. Return policies for LCD displays vary between manufacturers and regions; however, in North America, Nintendo chose to replace a system with stuck pixels only if the owner claimed that they interfered with the gaming experience. There were two exchange programs in place for North America. In the first, the owner of the defective DS would provide a valid credit card number, after which Nintendo would ship a new DS system to the owner along with shipping supplies for returning the defective system. In the second, the owner would ship the defective system to Nintendo for inspection, after which Nintendo technicians would either ship a replacement system or repair the defective one. The first option allowed the owner to have a new DS in 3–5 business days.
Multiple games were released alongside the DS during its North American launch on November 21, 2004. At launch there was one pack-in demo, in addition to the built-in PictoChat program: Metroid Prime Hunters: First Hunt, published by Nintendo as a demo for Metroid Prime Hunters, a game released in March 2006. At the time of the "Electric Blue" DS launch in June 2005, Nintendo bundled the system with Super Mario 64 DS.
In Japan, the games were released at the same time as the system's first release (December 2, 2004). In the launch period, The Prince of Tennis 2005 -Crystal Drive- (Konami) and Puyo Puyo Fever (Puyo Pop Fever) (Sega) were released.
Europe
The DS was released in Europe on March 11, 2005, for €149. A small supply of units was available prior to this in a package with a promotional "VIP" T-shirt, Metroid Prime Hunters - First Hunt, a WarioWare: Touched! demo and a pre-release version of Super Mario 64 DS, through the Nintendo Stars Catalogue; the bundle was priced at £129.99 for the UK and €189.99 for the rest of Europe, plus 1,000 of Nintendo's "star" loyalty points (to cover postage). One million DS units were sold in Europe shortly after launch, setting a sales record for a handheld console.
The European release of the DS, like the U.S., was originally packaged with a Metroid Prime Hunters: First Hunt demo. The European packaging for the console is noticeably more "aggressive" than that of the U.S./Japanese release. The European game cases are additionally about 1/4 inch thicker than their North American counterparts and transparent rather than solid black. Inside the case, there is room for one Game Boy Advance game pak and a DS card with the instructions on the left side of the case.
Australia and New Zealand
The DS launched in Australia and New Zealand on February 24, 2005. It retailed in Australia for AU$199 and in New Zealand for NZ$249. Like the North American launch, it includes the Metroid Prime Hunters - First Hunt demo. The first week of sales for the system broke Australian launch sales records for a console, with 19,191 units sold by the 27th.
China
"iQue DS", the official name of the Chinese Nintendo DS, was released in China on June 15, 2005. The price of the iQue DS was 980 RMB (roughly US$130) as of April 2006. This version of the DS includes updated firmware to block out the use of the PassMe device, along with the new Red DS. Chinese launch games were Zhi Gan Yi Bi (Polarium) (Nintendo/iQue) and Momo Waliou Zhizao (WarioWare: Touched!) (Nintendo/iQue). The iQue was also the name of the device that China received instead of the Nintendo 64.
Games available on launch
Promotion
The system's promotional slogans revolve around the word "Touch" in almost all countries, with the North American slogan being "Touching is good."
The Nintendo DS was seen by many analysts to be in the same market as Sony's PlayStation Portable, although representatives from both companies stated that each system targeted a different audience. Time magazine awarded the DS a Gadget of the Week award.
At the time of its release in the United States, the Nintendo DS retailed for US$149.99. The price dropped to US$129.99 on August 21, 2005, one day before the releases of Nintendogs and Advance Wars: Dual Strike.
Nine official colors of the Nintendo DS were available through standard retailers. Titanium-colored units were available worldwide, Electric Blue was exclusive to North and Latin America. There was also a red version which was bundled with the game Mario Kart DS. Graphite Black, Pure White, Turquoise Blue, and Candy Pink were available in Japan. Mystic Pink and Cosmic Blue were available in Australia and New Zealand. Japan's Candy Pink and Australia's Cosmic Blue were also available in Europe and North America through a Nintendogs bundle, although the colors are just referred to as pink and blue; however, these colors were available only for the original style Nintendo DS; a different and more-limited set of colors were used for the Nintendo DS Lite.
Sales
As of March 31, 2016, all Nintendo DS models combined have sold 154.02 million units, making it the best-selling handheld game console to date, and the second best-selling video game console of all time.
Legacy
The success of the Nintendo DS introduced touchscreen controls and wireless online gaming to a wide audience. According to Damien McFerran of Nintendo Life, the "DS was the first encounter many people had with touch-based tech, and it left an indelible impression."
The DS established a large casual gaming market, attracting large non-gamer audiences and establishing touchscreens as the standard controls for future portable gaming devices. According to Jeremy Parish, writing for Polygon, the Nintendo DS laid the foundations for touchscreen mobile gaming on smartphones. He stated that the DS "had basically primed the entire world for" the iPhone, released in 2007, and that the DS paved the way for iPhone gaming mobile apps. However, the success of the iPhone "effectively caused the DS market to implode" by the early 2010s, according to Parish.
The success of the DS paved the way for its successor, the Nintendo 3DS, a handheld gaming console with a similar dual-screen setup that can display images on the top screen in stereoscopic 3D.
On January 29, 2014, Nintendo announced that Nintendo DS games would be added to the Wii U's Virtual Console, with the first game, Brain Age: Train Your Brain in Minutes a Day!, being released in Japan on June 3, 2014.
Hardware
The Nintendo DS design resembles that of the multi-screen games from the Game & Watch line, such as Donkey Kong and Zelda, which were also made by Nintendo.
The lower display of the Nintendo DS is overlaid with a resistive touchscreen designed to accept input from the included stylus, the user's fingers, or a curved plastic tab attached to the optional wrist strap. The touchscreen lets users interact with in-game elements more directly than by pressing buttons; for example, in the included chatting software, PictoChat, the stylus is used to write messages or draw.
The handheld features four lettered buttons (X, Y, A, B), a directional pad, and Start, Select, and Power buttons. On the top of the device are two shoulder buttons, a game card slot, a stylus holder, and a power cable input. The bottom features the Game Boy Advance cartridge slot. The overall button layout resembles that of the Super Nintendo Entertainment System controller. When using backward compatibility mode on the DS, buttons X and Y and the touchscreen are not used, as the Game Boy Advance line of systems does not feature these controls.
It also has stereo speakers providing virtual surround sound (depending on the software) located on either side of the upper display screen. This was a first for a Nintendo handheld, as the Game Boy line of systems had only supported stereo sound through the use of headphones or external speakers. A built-in microphone is located below the left side of the bottom screen. It has been used for a variety of purposes, including speech recognition, chatting online between and during gameplay sessions, and minigames that require the player to blow or shout into it.
Technical specifications
The system's 3D hardware consists of a rendering engine and a geometry engine that perform transform and lighting, transparency auto-sorting, transparency effects, texture-matrix effects, 2D billboards, texture streaming, texture-coordinate transformation, perspective-correct texture mapping, per-pixel alpha test, per-primitive alpha blending, texture blending, Gouraud shading, cel shading, z-buffering, w-buffering, a 1-bit stencil buffer, per-vertex directional lighting and simulated point lighting, depth test, stencil test, render-to-texture, lightmapping, environment mapping, shadow volumes, shadow mapping, distance fog, edge marking, fade-in/fade-out, and edge anti-aliasing. Sprite special effects include scrolling, scaling, rotation, stretching, and shearing. However, the hardware uses point (nearest-neighbor) texture filtering, giving some titles a blocky appearance. Unlike most 3D hardware, it has a set limit on the number of triangles it can render as part of a single scene: about 6144 vertices, or 2048 triangles, per frame. The 3D hardware is designed to render to a single screen at a time, so rendering 3D to both screens is difficult and decreases performance significantly. The DS is generally more limited by its polygon budget than by its pixel fill rate. There are also 512 kilobytes of texture memory, and the maximum texture size is 1024 × 1024 pixels.
The system has 656 kilobytes of video memory and two 2D engines (one per screen). These are similar to (but more powerful than) the Game Boy Advance's single 2D engine.
The Nintendo DS is compatible with Wi-Fi (IEEE 802.11, legacy mode). Wi-Fi can be used to access the Nintendo Wi-Fi Connection, to compete with other users playing the same Wi-Fi-compatible game, to use PictoChat, or, with a special cartridge and RAM extension, to browse the Internet.
Nintendo claims the battery lasts a maximum of 10 hours under ideal conditions on a full four-hour charge. Battery life is affected by multiple factors, including speaker volume, use of one or both screens, use of wireless connectivity, and use of the backlight, which can be turned on or off in selected games such as Super Mario 64 DS. The battery is user-replaceable using only a Phillips-head screwdriver. After about 500 charges, the battery's capacity begins to drop.
Users can close the Nintendo DS system to trigger its 'sleep' mode, which pauses the game being played and saves battery life by turning off the screens, speakers, and wireless communications. However, closing the system while playing a Game Boy Advance game will not put the Nintendo DS into sleep mode, and the game will continue to run normally. Certain DS games (such as Animal Crossing: Wild World) also will not pause, although the backlight, screens, and speakers still turn off. Additionally, when saving the game in certain titles, the DS will not go into sleep mode. Some games, such as The Legend of Zelda: Phantom Hourglass, even use the closing motion needed to enter sleep mode as an unorthodox way of solving puzzles, and Looney Tunes: Duck Amuck has a game mode in which the player must close the DS to help Daffy Duck hunt a monster with the shoulder buttons.
Accessories
Although the secondary port on the Nintendo DS does accept and support Game Boy Advance cartridges (but not Game Boy or Game Boy Color cartridges), Nintendo emphasized that the main intention for its inclusion was to allow a wide variety of accessories to be released for the system.
Due to the lack of a second port on the Nintendo DSi, it is not compatible with any accessory that uses it.
Rumble Pak
The Rumble Pak was the first official expansion slot accessory. In the form of a Game Boy Advance cartridge, the Rumble Pak vibrates to reflect the action in compatible games, such as when the player bumps into an obstacle or loses a life. It was released in North America and Japan in 2005 bundled with Metroid Prime Pinball. In Europe, it was first available with the game Actionloop, and later Metroid Prime Pinball. The Rumble Pak was also released separately in those regions.
Headset
The Nintendo DS Headset is the official headset for the Nintendo DS. It plugs into the headset port (which is a combination of a standard 3.5 mm (1/8 in) headphone connector and a proprietary microphone connector) on the bottom of the system. It features one earphone and a microphone, and is compatible with all games that use the internal microphone. It was released alongside Pokémon Diamond and Pearl in Japan, North America, and Australia.
Browser
On February 15, 2006, Nintendo announced a version of the cross-platform web browser Opera for the DS system. The browser can use one screen as an overview, a zoomed portion of which appears on the other screen, or both screens together to present a single tall view of the page. The browser went on sale in Japan and Europe in 2006, and in North America on June 4, 2007. Browser operation requires that the included memory expansion pak be inserted into the GBA slot. The DSi has an Internet browser available for download from the Nintendo DSi Shop free of charge.
Wi-Fi USB Connector
This USB-flash-disk-sized accessory plugs into a PC's USB port and creates a miniature hotspot/wireless access point, allowing a Wii and up to five Nintendo DS units to access the Nintendo Wi-Fi Connection service through the host computer's Internet connection. Under Linux and Mac, it acts as a regular wireless adapter that connects to wireless networks; an LED blinks when data is being transferred. There is also a hacked driver for Windows XP/Vista/7/8/10 that makes it function the same way. The Wi-Fi USB Connector was discontinued from retail stores.
MP3 Player
The Nintendo MP3 Player (a modified version of the device known as the Play-Yan in Japan) was released on December 8, 2006, by Nintendo of Europe at a retail price of £29.99/€30. The add-on uses removable SD cards to store MP3 audio files and can be used in any device that has a Game Boy Advance cartridge slot. Because it runs from that slot, however, its user interface and functionality are limited: it cannot use both screens of the DS simultaneously, nor does it make use of the touchscreen. It is not compatible with the DSi, which lacks the GBA slot, but the DSi includes its own music player with SD card support. Although the box states that it is only compatible with the Game Boy Micro, Nintendo DS, and Nintendo DS Lite, it also works with the Game Boy Advance SP and Game Boy Advance.
Guitar grip controller
The Guitar Grip controller comes packaged with the game Guitar Hero: On Tour and plugs into the GBA game slot. It features four colored buttons like those found on regular Guitar Hero guitar controllers for home consoles, though it lacks the fifth orange button found on those controllers. The DS Guitar Hero controller comes with a small "pick-stylus" (shaped like a guitar pick, as the name suggests) that can be stored in a small slot on the controller, and it also features a hand strap. The game works with both the DS Lite and the original Nintendo DS, as it comes with an adapter for the original DS. The Guitar Grip also works with its sequels, Guitar Hero On Tour: Decades, Guitar Hero On Tour: Modern Hits, and Band Hero.
Later models
Nintendo DS Lite
The Nintendo DS Lite is the first redesign of the Nintendo DS. While retaining the original model's basic characteristics, it features a sleeker appearance, a larger stylus, a longer-lasting battery, and brighter screens. Nintendo considered a larger model of the Nintendo DS Lite for release, but decided against it as sales of the original redesign were still strong. It was the final DS to have backward compatibility with Game Boy Advance games. As of March 31, 2014, shipments of the DS Lite had reached 93.86 million units worldwide, according to Nintendo.
Nintendo DSi
The Nintendo DSi is the second redesign of the Nintendo DS. It is based on the unreleased larger Nintendo DS Lite model. While similar to the previous DS redesign, new features include two 0.3-megapixel digital cameras (one inner and one outer), a larger 3.25-inch display, internal and external content storage, compatibility with WPA wireless encryption, and connectivity to the Nintendo DSi Shop.
The Nintendo DSi XL (DSi LL in Japan) is a larger design of the Nintendo DSi, and the first model of the Nintendo DS family of consoles to be a size variation of a previous one. It features larger screens with wider view angles, improved battery life, and a greater overall size than the original DSi. While the original DSi was specifically designed for individual use, Nintendo president Satoru Iwata suggested that DSi XL buyers give the console a "steady place on a table in the living room", so that it might be shared by multiple household members.
Software and features
Nintendo Wi-Fi Connection
Nintendo Wi-Fi Connection was a free online game service run by Nintendo. Players with a compatible Nintendo DS game could connect to the service via a Wi-Fi network using a Nintendo Wi-Fi USB Connector or a wireless router. The service was launched in North America, Australia, Japan, and Europe throughout November 2005, and an online-compatible Nintendo DS game was released the same day in each region.
Additional Nintendo DS Wi-Fi Connection games and a dedicated Nintendo DS web browser were released afterwards. Nintendo later believed that the online platform's success directly propelled the commercial success of the entire Nintendo DS platform. The Nintendo Wi-Fi Connection served as part of the basis of what would become the Wii. Most functions (for games on both the DS and Wii consoles) were discontinued worldwide on May 20, 2014.
Download Play
With Download Play, it is possible for users to play multiplayer games with other Nintendo DS systems, and later Nintendo 3DS systems, using only one game card. Players must have their systems within wireless range (up to approximately 65 feet) of each other for the guest system to download the necessary data from the host system. Only certain games supported this feature and usually played with much more limited features than the full game allowed.
Download Play is also utilized to migrate Pokémon from fourth generation games into the fifth generation Pokémon Black and White, an example of a task requiring two different game cards, two handheld units, but only one player.
Some Nintendo DS retailers featured DS Download Stations that allowed users to download demos of current and upcoming DS games; however, due to memory limitations, the downloads were erased once the system was powered off. A Download Station was made up of 1 to 8 standard retail DS units, with a standard DS card containing the demo data. On May 7, 2008, Nintendo released the Nintendo Channel for download on the Wii. The Nintendo Channel used WiiConnect24 to deliver Nintendo DS demos: a user selects the demo they wish to play and, much like at the retail Download Stations, downloads it to their DS and plays it until the system is powered off.
Multi-Card Play
Multi-Card Play, like Download Play, allows users to play multiplayer games with other Nintendo DS systems. In this case, each system requires a game card. This mode is accessed from an in-game menu, rather than the normal DS menu.
PictoChat
PictoChat allows users to communicate with other Nintendo DS users within local wireless range. Users can enter text (via an on-screen keyboard), handwrite messages, or draw pictures (via the stylus and touchscreen). There are four chat rooms (A, B, C, D), and up to sixteen people can connect in any one room.
On Nintendo DS and Nintendo DS Lite systems, users can only write messages in black. On the DSi and DSi XL, the pen color continuously cycles through the spectrum as the user writes, meaning a specific color cannot be chosen.
PictoChat was not available for the subsequent Nintendo 3DS series of systems.
Firmware
Nintendo's own firmware boots the system. A health and safety warning is displayed first, then the main menu is loaded. The main menu presents the player with four main options: play a DS game, use PictoChat, initiate DS Download Play, or play a Game Boy Advance game. The main menu also has secondary options such as turning the backlight on or off, the system settings, and an alarm.
The firmware also features a clock, several options for customization (such as boot priority for when games are inserted and GBA screen preferences), and the ability to input user information and preferences (such as name, birthday, favorite color, etc.) that can be used in games.
It supports the following languages: English, Japanese, Spanish, French, German, and Italian.
Games
Compatibility
The Nintendo DS is backward compatible with Game Boy Advance (GBA) cartridges. The smaller Nintendo DS game cards fit into a slot on the top of the system, while Game Boy Advance games fit into a slot on the bottom. Like the Game Boy Micro, the Nintendo DS is not backward compatible with games made for the original Game Boy and Game Boy Color, because it lacks the Sharp Z80-compatible processor and its slot is physically incompatible with Game Boy and Game Boy Color cartridges. The original Game Boy sound processor, however, is still included to maintain compatibility with GBA games that use the older sound hardware.
The handheld does not have a port for the Game Boy Advance Link Cable, so multiplayer and GameCube–Game Boy Advance link-up modes are not available in Game Boy Advance titles. Only single-player mode is supported on the Nintendo DS, as is the case with Game Boy Advance games played via the Virtual Console on the Nintendo 3DS (Ambassadors only) and Wii U.
The Nintendo DS only uses one screen when playing Game Boy Advance games. The user can configure the system to use either the top or bottom screen by default. The games are displayed within a black border on the screen, due to the slightly different screen resolution between the two systems (256 × 192 px for the Nintendo DS, and 240 × 160 px for the Game Boy Advance).
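The border size follows directly from those two resolutions; a quick arithmetic sketch (nothing here is DS-specific beyond the stated numbers):

```python
# A 240x160 GBA frame centered on the 256x192 DS screen leaves black margins.
ds_w, ds_h = 256, 192
gba_w, gba_h = 240, 160

print((ds_w - gba_w) // 2)  # 8-pixel border on the left and on the right
print((ds_h - gba_h) // 2)  # 16-pixel border on the top and on the bottom
```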
Nintendo DS games inserted into the top slot are able to detect the presence of specific Game Boy Advance games in the bottom slot. In many such games, either stated in-game during gameplay or explained in the instruction manual, extra content can be unlocked or added by starting the Nintendo DS game with the appropriate Game Boy Advance game inserted. Among those games were the popular Pokémon Diamond and Pearl and Pokémon Platinum, which allowed the player to find more (or exclusive) Pokémon in the wild if a suitable Game Boy Advance cartridge was inserted. Some of the content can stay permanently, even after the GBA game has been removed.
Additionally, the GBA slot can be used to house expansion paks, such as the Rumble Pak, Nintendo DS Memory Expansion Pak, and Guitar Grips for the Guitar Hero: On Tour series. The Nintendo DSi and the DSi XL have an SD card slot instead of a second cartridge slot and cannot play Game Boy Advance games or Guitar Hero: On Tour.
Regional division
The Nintendo DS is region-free in the sense that any console will run a Nintendo DS game purchased anywhere in the world. However, the Chinese iQue DS games cannot be played on other versions of the original DS, whose firmware chip does not contain the required Chinese character glyphs; this restriction is removed on Nintendo DSi and 3DS systems. Although a Nintendo DS from another region cannot play the Chinese games, the iQue DS can play games from other regions. Also, as with Game Boy games, some games that require both players to have a Nintendo DS game card for multiplayer play will not necessarily work together if the cards are from different regions (a Japanese Nintendo DS game may not work with a North American copy, though some titles, such as Mario Kart DS and the Pokémon Diamond and Pearl versions, are mutually compatible). With the addition of the Nintendo Wi-Fi Connection, certain games can be played over the Internet with users of a different-region game.
Some Wi-Fi enabled games (e.g. Mario Kart DS) allow the selection of opponents by region. The options are "Regional" ("Continent" in Europe) and "Worldwide", as well as two non-location specific settings. This allows the player to limit competitors to only those opponents based in the same geographical area. This is based on the region code of the game in use.
The Nintendo DSi, however, has a region lock for DSiWare downloadable games, as well as DSi-specific cartridges. It still runs normal DS games of any region, however.
Media specifications
Nintendo DS games use a proprietary solid-state mask ROM in their game cards. The mask ROM chips are manufactured by Macronix and have an access time of 150 ns. Cards range from 8 to 512 MiB (64 Mibit to 4 Gibit) in size, although data on the maximum capacity has not been released. Larger cards have a 25% slower data transfer rate than the more common smaller cards. The cards usually have a small amount of flash memory or an EEPROM to save user data such as game progress or high scores, though a few games, such as Electroplankton, have no save memory. The game cards are about half the width and depth of Game Boy Advance cartridges and weigh around 3.5 g (0.12 oz).
Hacking and homebrew
Since the release of the Nintendo DS, a great deal of hacking has occurred involving the DS's fully rewritable firmware, Wi-Fi connection, game cards that allow SD storage, and software use. There are now many emulators that run on the DS, emulating systems such as the NES, SNES, Sega Master System, Sega Mega Drive, Neo Geo Pocket, Neo Geo MVS (arcade), and older handheld consoles like the Game Boy Color.
There are a number of cards that either have built-in flash memory or a slot that can accept an SD or MicroSD card (such as the DSTT, R4, and EZ-Flash V/Vi). These cards typically enable DS console gamers to use their console to play MP3s and videos, and to perform other non-gaming functions traditionally reserved for separate devices.
In South Korea, many video game consumers use illegal copies of video games, including for the Nintendo DS. In 2007, 500,000 copies of DS games were sold, while sales of DS hardware units reached 800,000.
Another modification device, the Action Replay, manufactured by Datel, allows the user to input cheat codes that hack games, granting the player infinite health, power-ups, access to any part of the game, infinite in-game currency, the ability to walk through walls, and various other abilities depending on the game and code used.
See also
List of Nintendo DS and 3DS flash cartridges
Card counting

Card counting is a blackjack strategy used to determine whether the player or the dealer has an advantage on the next hand. Card counters are advantage players who try to overcome the casino house edge by keeping a running count of high and low valued cards dealt. They generally bet more when they have an advantage and less when the dealer has an advantage. They also change playing decisions based on the composition of the deck.
Basics
Card counting is based on statistical evidence that high cards (aces, 10s, and 9s) benefit the player, while low cards (2s, 3s, 4s, 5s, 6s, and 7s) benefit the dealer. High cards benefit the player in the following ways:
They increase the player's probability of hitting a natural, which usually pays out at 3 to 2 odds.
Doubling down increases expected value. The elevated ratio of tens and aces improves the probability that doubling down will succeed.
They provide additional splitting opportunities for the player.
They can make the insurance bet profitable, since a high concentration of tens increases the probability of dealer blackjack.
They also increase the probability the dealer will bust. This also increases the odds of the player busting, but the player can choose to stand on lower totals based on the count.
On the other hand, low cards benefit the dealer. The rules require the dealer to hit stiff hands (12–16 total), and low cards are less likely to bust these totals. A dealer holding a stiff hand will bust if the next card is a 10.
Card counters do not need unusual mental abilities; they do not track or memorize specific cards. Instead, card counters assign a point score to each card that estimates the value of that card. They track the sum of these values with a "running count". The myth that counters keep track of every card was portrayed in the 1988 film Rain Man, in which the savant character Raymond Babbitt counts through six decks with ease and a casino employee comments that it is impossible to do so.
Systems
Basic card counting systems assign a positive, negative, or zero value to each card. When a card is dealt, the count adjusts by that card's counting value. Low cards increase the count, since their removal raises the percentage of high cards in the remaining deck; high cards decrease it for the opposite reason. For example, the Hi-Lo system subtracts one for each 10, jack, queen, king, or ace and adds one for any card between 2 and 6; 7s, 8s, and 9s count as zero and do not affect the count.
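A minimal sketch of the Hi-Lo tally described above (the card labels and the example hand are illustrative):

```python
# Hi-Lo tags: 2-6 count +1, 7-9 count 0, and 10/J/Q/K/A count -1.
HI_LO = {r: +1 for r in "23456"}
HI_LO.update({r: 0 for r in "789"})
HI_LO.update({r: -1 for r in ["10", "J", "Q", "K", "A"]})

def running_count(cards_seen):
    """Sum the Hi-Lo tags of every card seen so far."""
    return sum(HI_LO[card] for card in cards_seen)

print(running_count(["5", "K", "2", "9", "A"]))  # +1 -1 +1 +0 -1 = 0
```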
A card counting system aims to assign point values roughly correlating to a card's effect of removal (EOR). The EOR is the estimated effect of removing a given card from play. Counters gauge the effect of removal for all cards dealt and how that affects the current house edge. Larger ratios between point values create better correlations to actual EOR, increasing the efficiency of a system. Such systems are classified as level 1, level 2, level 3, and so on. The level corresponds to the ratio between values.
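One way to make "correlating to EOR" concrete is to compare a system's tag values against per-card EORs numerically. The EOR figures below are rough single-deck estimates used purely for illustration, and the comparison ignores the fact that ten-valued cards are four times as common as other ranks:

```python
from statistics import correlation  # requires Python 3.10+

#        2     3     4     5     6     7     8     9    10s     A
tags = [ 1,    1,    1,    1,    1,    0,    0,    0,    -1,    -1]  # Hi-Lo
eors = [0.38, 0.44, 0.55, 0.69, 0.46, 0.28, 0.00, -0.18, -0.51, -0.61]

# A tag vector that tracks the EORs closely yields a correlation near 1.
print(f"correlation ~ {correlation(tags, eors):.2f}")
```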
The Hi-Lo system is a level-1 count; the running count never increases or decreases by more than one. A multilevel count, such as Zen Count, Wong Halves, or Hi-Opt II, further distinguishes card values to increase accuracy. An advanced count includes values such as +2 and −2, or +0.5 and -0.5. Advanced players might also keep a side count of specific cards like aces. This is done where betting accuracy differs from playing accuracy.
Many side count techniques exist, including special-purpose counts used for games with nonstandard profitable-play options such as an over/under side bet.
Keeping track of more data with higher level counts can hurt speed and accuracy. Some counters earn more money playing a simple count quickly than by playing a complex count slowly.
The table below illustrates an example counting system, using the Hi-Lo values described above (other systems assign different, and sometimes fractional, values):

Card:          2   3   4   5   6   7   8   9   10/J/Q/K   A
Hi-Lo value:  +1  +1  +1  +1  +1   0   0   0      −1     −1
Design and selection of systems
The primary goal of a card counting system is to assign point values to each card that roughly correlate to the card's "effect of removal" or EOR (that is, the effect a single card has on the house advantage once removed from play), thus enabling the player to gauge the house advantage based on the composition of cards still to be dealt. Larger ratios between point values can better correlate to actual EOR, but add complexity to the system. Counting systems may be referred to as "level 1", "level 2", etc., corresponding to the number of different point values the system calls for.
The ideal system is one that the player can use reliably and that offers the highest average dollar return per period of time when cards are dealt at a fixed rate. With this in mind, systems aim to achieve a balance of efficiency in three categories:
Betting correlation (BC)
When the composition of the undealt cards offers a positive expectation to a player using optimal playing strategy, there is a positive expectation for a player placing a bet. A system's BC gauges how effective the system is at informing the user of this situation.
Playing efficiency (PE)
A portion of the expected profit comes from modifying playing strategy based on the known altered composition of cards. For this reason, a system's PE gauges how effectively it informs the player to modify strategy according to the actual composition of undealt cards. A system's PE is important when the effect of PE has a large impact on the total gain, as in single- and double-deck games.
Insurance correlation (IC)
A portion of expected gain from counting cards comes from taking the insurance bet, which becomes profitable at high counts. An increase in IC will offer additional value to a card counting system.
Some strategies count the ace (ace-reckoned strategies) and some do not (ace-neutral strategies). Including aces in the count improves betting correlation since the ace is the most valuable card in the deck for betting purposes. However, since the ace can either be counted as one or eleven, including an ace in the count decreases the accuracy of playing efficiency. Since PE is more important in single- and double-deck games, and BC is more important in shoe games, counting the ace is more important in shoe games.
One way to deal with such tradeoffs is to ignore the ace in the main count, yielding a higher PE, while keeping a side count of aces; the side count detects an additional change in expected value, which the player uses to spot betting opportunities that the primary card counting system would not ordinarily indicate.
The most common side-counted card is the ace, since it is the most important card for achieving a balance of BC and PE. In theory, a player could keep a side count of every card and achieve a near-100% PE; however, methods involving additional side counts grow more complex at an exponential rate as side counts are added, and the human mind is quickly overtasked and unable to make the necessary computations. Without any side counts, PE can approach 70%.
Because a card counting system can place an overtaxing demand on the human mind, another important design consideration is ease of use. Higher-level systems and systems with side counts are naturally more difficult to use. In an attempt to make systems easier, unbalanced counts eliminate the need for the player to keep track of the number of cards or decks that have already entered play, typically at the expense of a lower PE.
Running counts versus true counts in balanced counting systems
The running count is the running total of each card's assigned value. When using a balanced count (such as the Hi-Lo system), the running count is converted into a "true count", which takes into consideration the number of decks used. With Hi-Lo, the true count is the running count divided by the number of decks that have not yet been dealt; this can be calculated by division or approximated with an average card count per round times the number of rounds dealt. However, many variations of the true count calculation exist.
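A sketch of the Hi-Lo running-to-true conversion described above, using the simple division variant of deck estimation:

```python
def true_count(running_count, cards_seen, total_decks=6):
    """Divide the running count by the number of decks not yet dealt."""
    decks_remaining = total_decks - cards_seen / 52
    return running_count / decks_remaining

# A running count of +8 with 2.5 of 6 decks dealt gives a true count of ~2.3.
print(round(true_count(8, 130, 6), 1))
```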
Back-counting
Back-counting, or "Wonging", consists of standing behind a blackjack table and counting the cards as they are dealt. Stanford Wong first proposed the idea of back-counting, hence the name.
The player will enter or "Wong in" to the game when the count reaches a point at which the player has an advantage. The player may then raise their bets as their advantage increases, or lower their bets as their advantage goes down. Some back-counters prefer to flat-bet, and only bet the same amount once they have entered the game. Some players will stay at the table until the game is shuffled, or they may "Wong out" or leave when the count reaches a level at which they no longer have an advantage.
Back-counting is generally done on shoe games of 4, 6, or 8 decks, although it can be done on pitch games of 1 or 2 decks. The reason for this is that the count is more stable in a shoe game, so a player will be less likely to sit down for one or two hands and then have to get up. In addition, many casinos do not allow "mid-shoe entry" in single- or double-deck games, which makes Wonging impossible. Another reason is that many casinos put more effort into thwarting card counters on their pitch games than on their shoe games, as a counter has a smaller advantage in an average shoe game than in a pitch game.
Advantages
Back-counting differs from traditional card-counting in that the player does not play every hand they see. This offers several advantages. For one, the player does not play hands without a statistical advantage. This increases the total advantage of the player. Another advantage is that the player does not have to change their bet size as much (or at all). Large variations in bet size are one way that casinos detect card counters.
Disadvantages
Back-counting has disadvantages, too. One is that the player frequently does not stay at the table long enough to earn comps. Another disadvantage is that some players may become irritated with players who enter in the middle of a game. They believe that this interrupts the "flow" of the cards. Their resentment may not merely be superstition, though, as this practice will negatively impact the other players at the table; with one fewer player at the table when the card composition becomes unfavorable, the other players will play through more hands under those conditions as they will use up fewer cards per hand. Similarly, they will play fewer hands in the rest of the shoe if the advantage player slips in during the middle of the shoe, when the cards become favorable; with one more player, more of those favorable cards will be used up per hand. This negatively impacts the other players whether they are counting cards or not. Also, a player who hops in and out of games may attract unwanted attention from casino personnel and may be detected as a card-counter.
Group counting
While a single player can maintain their own advantage with back-counting, card counting is most often used by teams of players to maximize their advantage. In such a team, some players called "spotters" will sit at a table and play the game at the table minimum, while keeping a count (basically doing the back "counting"). When the count is significantly high, the spotter will discreetly signal another player, known as a "big player", that the count is high (the table is "hot"). The big player will then "Wong in" and wager vastly higher sums (up to the table maximum) while the count is high. When the count "cools off" or the shoe is shuffled (resetting the count), the big player will "Wong out" and look for other counters who are signaling a high count. This was the system used by the MIT Blackjack Team, whose story was in turn the inspiration for the Canadian movie The Last Casino which was later re-made into the Hollywood version 21.
The main advantage of group play is that the team can count several tables while a single back-counting player can usually only track one table. This allows big players to move from table to table, maintaining the high-count advantage without being out of action very long. It also allows redundancy while the big player is seated as both the counter and big player can keep the count (as in the movie 21, the spotter can communicate the count to the big player discreetly as they sit down). The disadvantages include requiring multiple spotters who can keep an accurate count, splitting the "take" among all members of the team, requiring spotters to play a table regardless of the count (using only basic strategy, these players will lose money long-term), and requiring signals, which can alert pit bosses.
A simple variation removes the loss of having spotters play; the spotters simply watch the table instead of playing and signal big players to Wong in and out as normal. The disadvantages of this variation are reduced ability of the spotter and big player to communicate, reduced comps as the spotters are not sitting down, and vastly increased suspicion, as blackjack is not generally considered a spectator sport in casinos except among those actually playing (unlike craps, roulette, and wheels of fortune which have larger displays and so tend to attract more spectators).
Ranging bet sizes and the Kelly criterion
A mathematical principle called the Kelly criterion indicates that bet increases should be proportional to the player's advantage. In practice, this means that the higher the count, the more a player should bet to take advantage of the player's edge. Using this principle, a counter can vary bet sizes in proportion to the advantage dictated by a count. This creates a "bet ramp" according to the principles of the Kelly criterion. A bet ramp is a betting plan with a specific bet size tied to each true count value in such a way that the player wagers proportionally to the player's advantage to maximize bankroll growth. Taken to its conclusion, the Kelly criterion demands that a player not bet anything when the deck does not offer a positive expectation; "Wonging" implements this.
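A toy bet ramp in the spirit of the Kelly reasoning above; the half-percent-per-true-count edge slope, the base house edge, and the per-unit variance are illustrative assumptions, not recommendations:

```python
def kelly_bet(bankroll, true_count, edge_per_count=0.005,
              base_house_edge=0.005, variance=1.3):
    """Bet the Kelly fraction (roughly edge/variance) of the bankroll."""
    edge = true_count * edge_per_count - base_house_edge
    if edge <= 0:
        return 0  # no advantage: bet nothing (or "Wong out")
    return bankroll * edge / variance

for tc in range(-1, 6):
    print(tc, round(kelly_bet(10_000, tc)))  # bet size rises with the count
```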
Expected profit
Historically, blackjack played with perfect basic strategy offered a house edge of less than 0.5%. As more casinos have switched to games where the dealer hits soft 17 and blackjack pays 6:5, the average house edge in Nevada has increased to 1%. A typical card counter who ranges bets appropriately in a six-deck game will have an advantage of approximately 1% over the casino. Advantages of up to 2.5% are possible at normal penetrations from counting six-deck Spanish 21 (for the S17 or H17-with-redoubling games). The advantage varies with the counter's skill level, the penetration (the fraction of the pack dealt before the shuffle, i.e., one minus the fraction cut off), and the betting spread (the player's maximum bet divided by the minimum bet). The variance in blackjack is high, so generating a sizable profit can take hundreds of hours of play. The deck will have a count positive enough for the player to raise bets only 10%-35% of the time, depending on rules, penetration, and strategy.
At a table where a player makes a $100 average bet, a 1% advantage means a player will win an average of $1 per round. This translates into an average hourly winning of $50 if the player is dealt 50 hands per hour.
Under one set of circumstances, a player with a 1-15 unit bet spread and only one deck cut off of a six-deck game will enjoy an advantage of as much as 1.2%, with a standard deviation of 3.5 units on a 2.1-unit average bet. Therefore, it is highly advisable for counters to set aside a large dedicated bankroll; one popular rule of thumb dictates a bankroll of 100 times the maximum bet per hand.
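Plugging the example figures above into the usual mean/variance model shows why such a bankroll is needed; the 50-rounds-per-hour rate and 40-hour horizon are assumed for illustration:

```python
import math

edge = 0.012          # 1.2% advantage, from the example above
avg_bet = 2.1         # average bet in units
sd_per_round = 3.5    # standard deviation per round, in units
rounds = 50 * 40      # assumed: 50 rounds/hour over 40 hours of play

ev = edge * avg_bet * rounds
sd = sd_per_round * math.sqrt(rounds)
print(f"EV ~ {ev:.0f} units; one standard deviation ~ {sd:.0f} units")
```

Even over 40 hours, the one-standard-deviation band (about ±157 units) dwarfs the roughly 50-unit expected win, which is exactly the high variance the text describes.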
Another aspect of the probability of card counting is that, at higher counts, the player's probability of winning a hand is only slightly changed and still below 50%. The player's edge over the house on such hands does not come from the player's probability of winning the hands. Instead, it comes from the increased probability of blackjacks, increased gain and benefits from doubling, splitting, and surrender, and the insurance side bet, which becomes profitable at high counts.
Many factors affect expected profit, including:
The overall efficiency of a card counting system at detecting player advantage; this affects how often the player will actually play a hand at an advantage per period of time.
The overall efficiency at creating player advantage as a whole; a system may indicate a small advantage when in fact the advantage is much larger, which reduces the overall ROI of the system while in play.
The rules of the game.
Penetration almost directly affects the magnitude of exploitable player advantage and the rate at which hands are dealt while the player has an advantage.
The number of players seated at a table will slow the game pace, and reduce the number of hands a player will be able to play in a given time frame.
Game speed: tables with side bets are dealt at a slower pace than tables without them, which reduces the number of hands dealt over time.
The use of an automatic shuffle machine, or in rare cases a dealer dedicated solely to shuffling a new shoe while another is in play, eliminates the need for the dealer to shuffle the shoe before dealing a new one, increasing game speed.
Devices
Card counting devices are available but are illegal in most U.S. casinos. Card counting with the mind is legal, although U.S. casinos reserve the right to remove suspected counters.
Legal status
Card counting is not illegal under British law, nor is it under federal, state, or local laws in the United States provided that no external card counting device or person assists the player in counting cards. Still, casinos object to the practice, and try to prevent it, banning players believed to be counters. In their pursuit to identify card counters, casinos sometimes misidentify and ban players suspected of counting cards even if they do not.
Atlantic City casinos in the US state of New Jersey are forbidden from barring card counters as a result of a New Jersey Supreme Court decision. In 1979, Ken Uston, a Blackjack Hall of Fame inductee, filed a lawsuit against an Atlantic City casino, claiming that casinos did not have the right to ban skilled players. The New Jersey Supreme Court agreed, ruling that "the state's control of Atlantic City's casinos is so complete that only the New Jersey Casino Control Commission has the power to make rules to exclude skillful players." The Commission has made no regulation on card counting, so Atlantic City casinos are not allowed to ban card counters. As they are unable to ban counters even when identified, Atlantic City casinos have increased the use of countermeasures.
Macau, the only legal gambling location in China, does not technically prohibit card counting but casinos reserve the right to expel or ban any customers, as is the case in the US and Britain. The use of electronic devices to aid such strategies, however, is strictly prohibited and can lead to arrest.
Casino reactions
Detection
Monitoring player behavior to assist with detecting the card counters falls into the hands of the on-floor casino personnel ("pit bosses") and casino-surveillance personnel, who may use video surveillance ("the eye in the sky") as well as computer analysis, to try to spot playing behavior indicative of card counting. Early counter-strategies featured the dealers learning to count the cards themselves to recognize the patterns in the players. Many casino chains keep databases of players that they consider undesirable. Casinos can also subscribe to databases of advantage players offered by agencies like Griffin Investigations, Biometrica, and OSN (Oregon Surveillance Network). Griffin Investigations filed for Chapter 11 bankruptcy protection in 2005 after losing a libel lawsuit filed by professional gamblers. In 2008 all Chapter 11 payments were said to be up to date and all requirements met, and information was being supplied using data encryption and secure servers. If a player is found to be in such a database, they will almost certainly be stopped from play and asked to leave regardless of their table play. For successful card counters, therefore, skill at "cover" behavior, to hide counting and avoid "drawing heat" and possibly being barred, may be just as important as playing skill.
Detection is confirmed after a player is first suspected of counting cards. When seeking card counters, casino employees, whatever their position, can be alerted by behaviors that are common among card counters but not among other players. These include:
Large buy-ins
Dramatic bet variation especially with larger bets being placed only at the end of a shoe
Playing only a small number of hands during a shoe
Refusal to play rated
Table hopping
Playing multiple hands
Lifetime winnings
Card counters may make unique playing strategy deviations not normally used by non-counters. Plays such as splitting tens, doubling soft 18/19/20, standing on 15/16, and surrendering on 14, when basic strategy says otherwise, may be a sign of a card counter.
Extremely aggressive plays such as splitting tens and doubling soft 19 and 20 are often called out to the pit because they are telltale signs not only of card counting but also of hole carding.
Technology for detecting card counters
Several semi-automated systems have been designed to aid the detection of card counters. The MindPlay system (now discontinued) scanned card values as the cards were dealt. The Shuffle Master Intelligent Shoe system also scans card values as cards exit the shoe. Software called Bloodhound and Protec 21 allows voice input of card and bet values, in an attempt to determine the player edge. A more recent innovation is the use of RFID signatures embedded within the casino chips so that the table can automatically track bet amounts.
Automated card-reading technology has known abuse potential in that it can be used to simplify the practice of preferential shuffling—having the dealer reshuffle the cards whenever the odds favor the players. To comply with licensing regulations, some blackjack protection systems have been designed to delay access to real-time data on the remaining cards in the shoe. Other vendors consider real-time notification to surveillance that a shoe is "hot" to be an important product feature.
With card values, play decisions, and bet decisions conveniently accessible, the casino can analyze bet variation, play accuracy, and play variation.
Bet variation. The simplest way a card counter makes money is to bet more when they have an edge. While playing back the tapes of a recent session of play, the software can generate a scatter plot of the amount bet versus the count at the time the bet was made and find the trendline that best fits the scattered points. If the player is not counting cards, there will be no trend; their bet variation and the count variation will not consistently correlate. If the player is counting and varying bets according to the count, there will be a trend whose slope reflects the player's average edge from this technique.
Play variation. When card counters vary from basic strategy, they do so in response to the count, to gain an additional edge. The software can verify whether there is a pattern to play variation. Of particular interest is whether the player sometimes (when the count is positive) takes insurance and stands on 16 versus a dealer 10, but plays differently when the count is negative.
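A sketch of the bet-variation analysis described above: regress bet size on the count recorded when each bet was made and inspect the slope of the trendline (the session data here is invented for illustration):

```python
from statistics import linear_regression  # requires Python 3.10+

counts = [-2, -1, 0, 1, 2, 3, 4]        # true count when each bet was placed
bets = [25, 25, 25, 50, 100, 150, 200]  # dollars wagered at those counts

slope, intercept = linear_regression(counts, bets)
print(f"slope ~ {slope:.0f} $/count")   # strong positive slope flags a counter
```

A non-counter's bets produce a slope near zero; a counter ranging bets with the count produces a consistently positive one.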
Countermeasures
Casinos have spent a great amount of effort and money in trying to thwart card counters. Countermeasures used to prevent card counters from profiting at blackjack include:
Decreasing penetration, the number of cards dealt before a shuffle. This reduces the advantage of card counting.
Banning known counters from playing blackjack, all games, or entering casino property (trespassing).
Shuffling when a player increases their wager or when the casino feels the remaining cards are advantageous to the player (preferential shuffling).
Changing rules for splitting, doubling down, or playing multiple hands. This also includes changing a table's stakes.
Not allowing entry into a game until a shuffle occurs (no mid-shoe entry).
Flat betting a player or making it so they cannot change the amount they bet during a shoe.
Canceling comps earned by counters.
Confiscation of chips.
Detention (back rooming).
Some jurisdictions (e.g. Nevada) have few legal restrictions placed on these countermeasures. Other jurisdictions such as New Jersey limit the countermeasures a casino can take against skilled players.
Some countermeasures result in disadvantages for the casino. Frequent or complex shuffling, for example, reduces the amount of playing time and consequently the house winnings. Some casinos use automatic shuffling machines to counter the loss of time, with some models of machines shuffling one set of cards while another is in play. Others, known as continuous shuffle machines (CSMs), allow the dealer to simply return used cards to a single shoe to allow playing with no interruption. Because CSMs essentially force minimal penetration, they greatly reduce the advantage of traditional counting techniques. In most online casinos the deck is shuffled at the start of each new round, ensuring the house always has the advantage.
History
American mathematician Edward O. Thorp is the father of card counting. His 1962 book, Beat the Dealer, outlines betting and playing strategies for optimal play. Although mathematically sound, some of the techniques no longer apply, as casinos took countermeasures (such as no longer dealing to the last card). The counting system in Beat the Dealer, the 10-count, is harder to use and less profitable than later systems. A history of how counting developed can be seen in David Layton's documentary film The Hot Shoe.
Before Beat the Dealer, a small number of professional card counters were beating blackjack games in Vegas and elsewhere. One was Jess Marcum, who developed the first full-fledged point-count system. Another pre-Thorp card counter was professional gambler Joe Bernstein. He is described in 1961's I Want To Quit Winners by Reno casino owner Harold Smith as an ace counter feared throughout Nevada. And in the 1957 book, Playing Blackjack to Win, Roger Baldwin, Wilbert Cantey, Herbert Maisel, and James McDermott (known as "The Four Horsemen") published the first accurate blackjack basic strategy and a rudimentary card counting system, devised solely with the aid of crude mechanical calculators — what used to be called "adding machines".
From the early days, some counters have succeeded, including Al Francesco, the inventor of blackjack team play and the man who taught Ken Uston how to count cards, and Tommy Hyland, manager of the longest-running blackjack team in history. Ken Uston, perhaps the most famous card counter through his 60 Minutes television appearance and his books, tended to overstate his winnings, as documented by players who worked with him, including Al Francesco and team member Darryl Purpose.
In the 1970s and 1980s, as computing power grew, more advanced and harder card counting systems came into favor. Many card counters agree, however, that a simpler and less advantageous system that can be played flawlessly for hours earns an overall higher return than a more complex system prone to user error.
Teams
In the 1970s Ken Uston was the first to write about a tactic of card counting he called the Big Player Team. The book was based on his experiences working as a "big player" (BP) on Al Francesco's teams. In big-player blackjack teams a number of card counters, called "spotters", are dispatched to tables around a casino, where their responsibility is to keep track of the count and signal to the big player when the count indicates a player advantage. The big player then joins the game at that table, placing maximum bets at a player advantage. When the spotter indicates that the count has dropped, they again signal the BP to leave the table. By jumping from table to table as called in by spotters, BP avoids all play at a disadvantage. In addition, since BP's play appears random and irrational, they avoid detection by the casinos. The spotters, who are doing the actual counting, are not themselves changing their bet size or strategy, so they are relatively inconspicuous.
With this style of play, a number of blackjack teams have cleared millions of dollars through the years. Well-known blackjack teams with documented earnings in the millions include those run by Al Francesco, Ken Uston, Tommy Hyland, various groups from the Massachusetts Institute of Technology (MIT), and, most recently, a team called "The Greeks". Ken Uston wrote about blackjack team play in Million Dollar Blackjack, although many of the experiences he represents as his own in his books actually happened to other players, especially Bill Erb, a BP Uston worked with on Al Francesco's team. Ben Mezrich also covers team play in his book Bringing Down The House, which describes how MIT students used it with great success. See also the Canadian movie The Last Casino and the American movie 21, which was based on Mezrich's book.
The publication of Ken Uston's books and his landmark lawsuits against the casinos both stimulated the growth of blackjack teams (Hyland's team and the first MIT team were formed in Atlantic City shortly after the publication of Million Dollar Blackjack) and increased casino awareness of the methods of blackjack teams, making it more difficult for such teams to operate. Hyland and Francesco soon switched to a form of shuffle tracking called "ace sequencing". Also referred to as "cutting to the ace", this technique involves various methods designed to spot the bottom card during a shuffle (ideally an ace) and expertly cut the deck and play future hands to force the player to receive the ace. This made it more difficult for casinos to detect when team members were playing with an advantage. In 1994, members of the Hyland team were arrested for ace sequencing and blackjack team play at Casino Windsor in Windsor, Ontario, Canada. It was documented in court that Nevada casinos with ownership stakes in the Windsor casino were instrumental in the decision to prosecute team members on cheating charges. However, the judge ruled that the players' conduct was not cheating, but merely the use of intelligent strategy.
Shuffling machines
Automatic shuffling machines (ASMs or batch shufflers), that randomly shuffle decks, interfere with the shuffle tracking variation of card counting by hiding the shuffle. Continuous shuffling machines (CSMs), that partially shuffle used cards back into the "shoe" after every hand, interfere with card counting. CSMs result in shallow penetration (number of seen cards), reducing the effectiveness of card counting.
See also
Advantage gambling
Blackjack Hall of Fame
OpenSSL

OpenSSL is a software library for applications that secure communications over computer networks against eavesdropping or need to identify the party at the other end. It is widely used by Internet servers, including the majority of HTTPS websites.
OpenSSL contains an open-source implementation of the SSL and TLS protocols. The core library, written in the C programming language, implements basic cryptographic functions and provides various utility functions. Wrappers allowing the use of the OpenSSL library in a variety of computer languages are available.
The OpenSSL Software Foundation (OSF) represents the OpenSSL project in most legal capacities, including contributor license agreements, managing donations, and so on. OpenSSL Software Services (OSS) also represents the OpenSSL project for support contracts.
OpenSSL is available for most Unix-like operating systems (including Linux, macOS, and BSD) and Microsoft Windows.
Project history
The OpenSSL project was founded in 1998 to provide a free set of encryption tools for the code used on the Internet. It is based on a fork of SSLeay by Eric Andrew Young and Tim Hudson, which unofficially ended development on December 17, 1998, when Young and Hudson both went to work for RSA Security. The initial founding members were Mark Cox, Ralf Engelschall, Stephen Henson, Ben Laurie, and Paul Sutton.
As of 2019, the OpenSSL management committee consisted of 7 people, and there were 17 developers with commit access (many of whom are also part of the OpenSSL management committee).
There are only two full-time employees (fellows) and the remainder are volunteers.
The project has a budget of less than one million USD per year and relies primarily on donations. Development of TLS 1.3 is sponsored by Akamai.
Algorithms
OpenSSL supports a number of different cryptographic algorithms:
Ciphers
AES, Blowfish, Camellia, Chacha20, Poly1305, SEED, CAST-128, DES, IDEA, RC2, RC4, RC5, Triple DES, GOST 28147-89, SM4
Cryptographic hash functions
MD5, MD4, MD2, SHA-1, SHA-2, SHA-3, RIPEMD-160, MDC-2, GOST R 34.11-94, BLAKE2, Whirlpool, SM3
Public-key cryptography
RSA, DSA, Diffie–Hellman key exchange, Elliptic curve, X25519, Ed25519, X448, Ed448, GOST R 34.10-2001, SM2
(Perfect forward secrecy is supported using elliptic curve Diffie–Hellman since version 1.0.)
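Many language runtimes expose these implementations directly; for instance, CPython's ssl and hashlib modules are typically built against OpenSSL (the exact output depends on the local build):

```python
import hashlib
import ssl

# Report the OpenSSL build the interpreter is linked against.
print(ssl.OPENSSL_VERSION)

# hashlib delegates most digests (SHA-2, SHA-3, BLAKE2, ...) to that library.
print(hashlib.sha256(b"hello").hexdigest())
print(sorted(hashlib.algorithms_available)[:5])
```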
FIPS 140 validation
FIPS 140 is a U.S. Federal program for the testing and certification of cryptographic modules. An early FIPS 140-1 certificate for OpenSSL's FOM 1.0 was revoked in July 2006 "when questions were raised about the validated module's interaction with outside software." The module was re-certified in February 2007 before giving way to FIPS 140-2. OpenSSL 1.0.2 supported the use of the OpenSSL FIPS Object Module (FOM), which was built to deliver FIPS approved algorithms in a FIPS 140-2 validated environment. OpenSSL controversially decided to categorize the 1.0.2 architecture as 'End of Life' or 'EOL', effective December 31, 2019, despite objections that it was the only version of OpenSSL that was currently available with support for FIPS mode. As a result of the EOL, many users were unable to properly deploy the FOM 2.0 and fell out of compliance because they did not secure Extended Support for the 1.0.2 architecture, although the FOM itself remained validated for eight months further.
The FIPS Object Module 2.0 remained FIPS 140-2 validated in several formats until September 1, 2020, when NIST deprecated the usage of FIPS 186-2 for Digital Signature Standard and designated all non-compliant modules as 'Historical'. This designation includes a caution to Federal Agencies that they should not include the module in any new procurements. All three of the OpenSSL validations were included in the deprecation - the OpenSSL FIPS Object Module (certificate #1747), OpenSSL FIPS Object Module SE (certificate #2398), and OpenSSL FIPS Object Module RE (certificate #2473). Many 'Private Label' OpenSSL-based validations and clones created by consultants were also moved to the Historical List, although some FIPS validated modules with replacement compatibility avoided the deprecation, such as BoringCrypto from Google and CryptoComply from SafeLogic.
OpenSSL 3.0 restored FIPS mode and underwent FIPS 140-2 testing, but with significant delays: the effort was first kicked off in 2016 with support from SafeLogic, and with further support from Oracle in 2017, but the process has been challenging.
On October 20, 2020, the OpenSSL FIPS Provider 3.0 was added to the CMVP Implementation Under Test List, which reflected an official engagement with a testing lab to proceed with a FIPS 140-2 validation. This resulted in a slew of certifications in the following months.
Licensing
OpenSSL was dual-licensed under the OpenSSL License and the SSLeay License, meaning that the terms of either license can be used. The OpenSSL License is Apache License 1.0, and the SSLeay License bears some similarity to a 4-clause BSD License.
As the OpenSSL License was Apache License 1.0, but not Apache License 2.0, it requires the phrase "this product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit" to appear in advertising material and any redistributions (Sections 3 and 6 of the OpenSSL License). Due to this restriction, the OpenSSL License and the Apache License 1.0 are incompatible with the GNU GPL.
Some GPL developers have added an OpenSSL exception to their licenses that specifically permits using OpenSSL with their system. GNU Wget and climm both use such exceptions. Some packages (like Deluge) explicitly modify the GPL license by adding an extra section at the beginning of the license documenting the exception. Other packages use the LGPL-licensed GnuTLS, BSD-licensed Botan, or MPL-licensed NSS, which perform the same task.
OpenSSL announced in August 2015 that it would require most contributors to sign a Contributor License Agreement (CLA), and that OpenSSL would eventually be relicensed under the terms of Apache License 2.0. This process commenced in March 2017, and was complete in 2018.
On 7 September 2021, OpenSSL 3.0.0 was released under the Apache License 2.0.
Notable vulnerabilities
Denial of service: ASN.1 parsing
OpenSSL 0.9.6k had a bug, discovered on November 4, 2003, in which certain ASN.1 sequences triggered deep recursion on Windows machines. Windows could not handle such recursion correctly, so an attacker able to send arbitrarily many ASN.1 sequences could crash OpenSSL remotely.
OCSP stapling vulnerability
When creating a handshake, the client could send an incorrectly formatted ClientHello message, leading OpenSSL to parse past the end of the message. Assigned a CVE identifier, this affected all OpenSSL versions 0.9.8h to 0.9.8q and OpenSSL 1.0.0 to 1.0.0c. Since the parsing could lead to a read from an incorrect memory address, it was possible for an attacker to cause a denial of service. It was also possible that some applications exposed the contents of parsed OCSP extensions, allowing an attacker to read the contents of memory that came after the ClientHello.
ASN.1 BIO vulnerability
When using Basic Input/Output (BIO) or FILE based functions to read untrusted DER format data, OpenSSL is vulnerable. This vulnerability was discovered on April 19, 2012, and was assigned a CVE identifier. While not directly affecting the SSL/TLS code of OpenSSL, any application using ASN.1 functions (particularly d2i_X509 and d2i_PKCS12) was also affected.
SSL, TLS and DTLS plaintext recovery attack
In handling CBC cipher-suites in SSL, TLS, and DTLS, OpenSSL was found vulnerable to a timing attack during MAC processing. Nadhem Alfardan and Kenny Paterson discovered the problem and published their findings on February 5, 2013. The vulnerability was assigned a CVE identifier.
Predictable private keys (Debian-specific)
OpenSSL's pseudo-random number generator acquires entropy from a variety of sources, including the contents of uninitialized memory, which caused the Valgrind analysis tool to issue warnings. To silence these warnings, a maintainer of the Debian distribution applied a patch to Debian's variant of the OpenSSL suite, which inadvertently broke its random number generator by limiting the overall number of private keys it could generate to 32,768. The broken version was included in the Debian release of September 17, 2006 (version 0.9.8c-1), also compromising other Debian-based distributions, for example Ubuntu. Ready-to-use exploits are easily available.
The error was reported by Debian on May 13, 2008. On the Debian 4.0 distribution (etch), these problems were fixed in version 0.9.8c-4etch3, while fixes for the Debian 5.0 distribution (lenny) were provided in version 0.9.8g-9.
Heartbleed
OpenSSL versions 1.0.1 through 1.0.1f have a severe memory handling bug in their implementation of the TLS Heartbeat Extension that could be used to reveal up to 64 KB of the application's memory with every heartbeat. By reading the memory of the web server, attackers could access sensitive data, including the server's private key. This could allow attackers to decode earlier eavesdropped communications if the encryption protocol used does not ensure perfect forward secrecy. Knowledge of the private key could also allow an attacker to mount a man-in-the-middle attack against any future communications. The vulnerability might also reveal unencrypted parts of other users' sensitive requests and responses, including session cookies and passwords, which might allow attackers to hijack the identity of another user of the service.
At its disclosure on April 7, 2014, around 17% (about half a million) of the Internet's secure web servers certified by trusted authorities were believed to be vulnerable to the attack. Moreover, Heartbleed can affect both servers and clients.
CCS injection vulnerability
The CCS Injection Vulnerability is a security bypass vulnerability that results from a weakness in OpenSSL methods used for keying material.
This vulnerability can be exploited through the use of a man-in-the-middle attack, where an attacker may be able to decrypt and modify traffic in transit. A remote unauthenticated attacker could exploit this vulnerability by using a specially crafted handshake to force the use of weak keying material. Successful exploitation could lead to a security bypass condition where an attacker could gain access to potentially sensitive information. The attack can only be performed between a vulnerable client and server.
OpenSSL clients are vulnerable in all versions of OpenSSL before the versions 0.9.8za, 1.0.0m and 1.0.1h. Servers are only known to be vulnerable in OpenSSL 1.0.1 and 1.0.2-beta1. Users of OpenSSL servers earlier than 1.0.1 are advised to upgrade as a precaution.
ClientHello sigalgs DoS
This vulnerability allows an attacker to crash a client or server with a specially crafted handshake. If a client connects to an OpenSSL 1.0.2 server and renegotiates with an invalid signature algorithms extension, a null-pointer dereference occurs. This can cause a DoS attack against the server.
A Stanford Security researcher, David Ramos, had a private exploit and presented it to the OpenSSL team, which then patched the issue.
OpenSSL classified the bug as a high-severity issue, noting version 1.0.2 was found vulnerable.
Key recovery attack on Diffie–Hellman small subgroups
This vulnerability allows an attacker, when particular circumstances are met, to recover the OpenSSL server's private Diffie–Hellman key. An Adobe Systems security researcher, Antonio Sanso, privately reported the vulnerability.
OpenSSL classified the bug as a high-severity issue, noting only version 1.0.2 was found vulnerable.
Forks
Agglomerated SSL
In 2009, after frustrations with the original OpenSSL API, Marco Peereboom, an OpenBSD developer at the time, forked the original API by creating Agglomerated SSL (assl), which reuses the OpenSSL API under the hood but provides a much simpler external interface. It has since been deprecated in light of the LibreSSL fork circa 2016.
LibreSSL
In April 2014, in the wake of Heartbleed, members of the OpenBSD project forked OpenSSL starting with the 1.0.1g branch, to create a project named LibreSSL. In the first week of pruning OpenSSL's codebase, more than 90,000 lines of C code were removed from the fork.
BoringSSL
In June 2014, Google announced its own fork of OpenSSL dubbed BoringSSL. Google plans to co-operate with OpenSSL and LibreSSL developers. Google has since developed a new library, Tink, based on BoringSSL.
See also
Comparison of TLS implementations
Comparison of cryptography libraries
List of free and open-source software packages
POSSE project
LibreSSL
External links
OpenSSL Manpages
OpenSSL Programming Guide (archived)
The OpenSSL License and the GPL by Mark McLoughlin
OpenSSL programming tutorial
OpenSSL Community Wiki
Commitment scheme
A commitment scheme is a cryptographic primitive that allows one to commit to a chosen value (or chosen statement) while keeping it hidden to others, with the ability to reveal the committed value later. Commitment schemes are designed so that a party cannot change the value or statement after they have committed to it: that is, commitment schemes are binding. Commitment schemes have important applications in a number of cryptographic protocols including secure coin flipping, zero-knowledge proofs, and secure computation.
A way to visualize a commitment scheme is to think of a sender as putting a message in a locked box, and giving the box to a receiver. The message in the box is hidden from the receiver, who cannot open the lock themselves. Since the receiver has the box, the message inside cannot be changed—merely revealed if the sender chooses to give them the key at some later time.
Interactions in a commitment scheme take place in two phases:
the commit phase during which a value is chosen and committed to
the reveal phase during which the value is revealed by the sender, then the receiver verifies its authenticity
In the above metaphor, the commit phase is the sender putting the message in the box, and locking it. The reveal phase is the sender giving the key to the receiver, who uses it to open the box and verify its contents. The locked box is the commitment, and the key is the proof.
In simple protocols, the commit phase consists of a single message from the sender to the receiver. This message is called the commitment. It is essential that the specific value chosen cannot be known by the receiver at that time (this is called the hiding property). A simple reveal phase would consist of a single message, the opening, from the sender to the receiver, followed by a check performed by the receiver. The value chosen during the commit phase must be the only one that the sender can compute and that validates during the reveal phase (this is called the binding property).
The concept of commitment schemes was perhaps first formalized by Gilles Brassard, David Chaum, and Claude Crépeau in 1988, as part of various zero-knowledge protocols for NP, based on various types of commitment schemes. But the concept was used prior to that without being treated formally. The notion of commitments appeared earliest in works by Manuel Blum, Shimon Even, and Shamir et al. The terminology seems to have originated with Blum, although commitment schemes can be interchangeably called bit commitment schemes, a name sometimes reserved for the special case where the committed value is a bit. Before that, commitment via one-way hash functions was considered, e.g., as part of the Lamport signature, the original one-time one-bit signature scheme.
Applications
Coin flipping
Suppose Alice and Bob want to resolve some dispute via coin flipping. If they are physically in the same place, a typical procedure might be:
Alice "calls" the coin flip
Bob flips the coin
If Alice's call is correct, she wins, otherwise Bob wins
If Alice and Bob are not in the same place a problem arises. Once Alice has "called" the coin flip, Bob can stipulate the flip "results" to be whatever is most desirable for him. Similarly, if Alice doesn't announce her "call" to Bob, after Bob flips the coin and announces the result, Alice can report that she called whatever result is most desirable for her. Alice and Bob can use commitments in a procedure that will allow both to trust the outcome:
Alice "calls" the coin flip but only tells Bob a commitment to her call,
Bob flips the coin and reports the result,
Alice reveals what she committed to,
Bob verifies that Alice's call matches her commitment,
If Alice's revelation matches the coin result Bob reported, Alice wins
For Bob to be able to skew the results to his favor, he must be able to understand the call hidden in Alice's commitment. If the commitment scheme is a good one, Bob cannot skew the results. Similarly, Alice cannot affect the result if she cannot change the value she commits to.
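A minimal sketch of this procedure in Python, instantiating the commitment with a salted SHA-256 hash (one common construction; the function names and the 16-byte nonce are illustrative choices, not part of the protocol description above):

```python
import hashlib
import secrets

def commit(call: str) -> tuple[bytes, bytes]:
    # A random nonce hides Alice's call (hiding property).
    nonce = secrets.token_bytes(16)
    return hashlib.sha256(nonce + call.encode()).digest(), nonce

def verify(commitment: bytes, call: str, nonce: bytes) -> bool:
    # Bob checks the opened value against the earlier commitment (binding).
    return hashlib.sha256(nonce + call.encode()).digest() == commitment

commitment, nonce = commit("heads")          # 1. Alice commits to her call
flip = secrets.choice(["heads", "tails"])    # 2. Bob flips and reports
assert verify(commitment, "heads", nonce)    # 3-4. Alice reveals; Bob verifies
print("Alice wins" if flip == "heads" else "Bob wins")
```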
A real-life analogue of this problem exists: people (often in the media) commit to a decision or give an answer in a "sealed envelope", which is then opened later. "Let's find out if that's what the candidate answered", for example on a game show, can serve as a model of this system.
Zero-knowledge proofs
One particular motivating example is the use of commitment schemes in zero-knowledge proofs. Commitments are used in zero-knowledge proofs for two main purposes: first, to allow the prover to participate in "cut and choose" proofs where the verifier will be presented with a choice of what to learn, and the prover will reveal only what corresponds to the verifier's choice. Commitment schemes allow the prover to specify all the information in advance, and only reveal what should be revealed later in the proof.
Second, commitments are also used in zero-knowledge proofs by the verifier, who will often specify their choices ahead of time in a commitment. This allows zero-knowledge proofs to be composed in parallel without revealing additional information to the prover.
Signature schemes
The Lamport signature scheme is a digital signature system that relies on maintaining two sets of secret data packets, publishing verifiable hashes of the data packets, and then selectively revealing partial secret data packets in a manner that conforms specifically to the data to be signed. In this way, the prior public commitment to the secret values becomes a critical part of the functioning of the system.
Because the Lamport signature system cannot be used more than once (see the relevant article for details), a system to combine many Lamport key-sets under a single public value that can be tied to a person and verified by others was developed. This system uses trees of hashes to compress many published Lamport key-commitment sets into a single hash value that can be associated with the prospective author of later verified data.
Verifiable secret sharing
Another important application of commitments is in verifiable secret sharing, a critical building block of secure multiparty computation. In a secret sharing scheme, each of several parties receive "shares" of a value that is meant to be hidden from everyone. If enough parties get together, their shares can be used to reconstruct the secret, but even a malicious cabal of insufficient size should learn nothing. Secret sharing is at the root of many protocols for secure computation: in order to securely compute a function of some shared input, the secret shares are manipulated instead. However, if shares are to be generated by malicious parties, it may be important that those shares can be checked for correctness. In a verifiable secret sharing scheme, the distribution of a secret is accompanied by commitments to the individual shares. The commitments reveal nothing that can help a dishonest cabal, but the shares allow each individual party to check to see if their shares are correct.
Defining the security
Formal definitions of commitment schemes vary strongly in notation and in flavour. The first such flavour is whether the commitment scheme provides perfect or computational security with respect to the hiding or binding properties. Another such flavour is whether the commitment is interactive, i.e. whether both the commit phase and the reveal phase can be seen as being executed by a cryptographic protocol or whether they are non-interactive, consisting of two algorithms Commit and CheckReveal. In the latter case CheckReveal can often be seen as a derandomised version of Commit, with the randomness used by Commit constituting the opening information.
If the commitment C to a value x is computed as C := Commit(x, open), with open the randomness used for computing the commitment, then CheckReveal(C, x, open) simply consists in verifying the equation C = Commit(x, open).
Using this notation and some knowledge about mathematical functions and probability theory we formalise different versions of the binding and hiding properties of commitments. The two most important combinations of these properties are perfectly binding and computationally hiding commitment schemes and computationally binding and perfectly hiding commitment schemes. Note that no commitment scheme can be at the same time perfectly binding and perfectly hiding – a computationally unbounded adversary can simply generate Commit(x,open) for every value of x and open until finding a pair that outputs C, and in a perfectly binding scheme this uniquely identifies x.
Computational binding
Let open be chosen from a set of size 2^k, i.e., it can be represented as a k-bit string, and let Commit be the corresponding commitment scheme. As the size of k determines the security of the commitment scheme it is called the security parameter.
Then for all non-uniform probabilistic polynomial time algorithms that output x, x′ and open, open′ of increasing length k, the probability that x ≠ x′ and Commit(x, open) = Commit(x′, open′) is a negligible function in k.
This is a form of asymptotic analysis. It is also possible to state the same requirement using concrete security: a commitment scheme Commit is (t, ε)-secure, if for all algorithms that run in time t and output x, x′, open, open′, the probability that x ≠ x′ and Commit(x, open) = Commit(x′, open′) is at most ε.
Perfect, statistical, and computational hiding
Let U_k be the uniform distribution over the 2^k opening values for security parameter k. A commitment scheme is respectively perfectly, statistically, or computationally hiding, if for all x ≠ x′ the probability ensembles {Commit(x, U_k)} and {Commit(x′, U_k)} are equal, statistically close, or computationally indistinguishable.
Impossibility of universally composable commitment schemes
It is impossible to realize commitment schemes in the universal composability (UC) framework. The reason is that UC commitment has to be extractable, as shown by Canetti and Fischlin and explained below.
The ideal commitment functionality, denoted here by F, works roughly as follows. Committer C sends value m to F, which stores it and sends "receipt" to receiver R. Later, C sends "open" to F, which sends m to R.
Now, assume we have a protocol π that realizes this functionality. Suppose that the committer C is corrupted. In the UC framework, that essentially means that C is now controlled by the environment, which attempts to distinguish protocol execution from the ideal process. Consider an environment that chooses a message m and then tells C to act as prescribed by π, as if it has committed to m. Note here that in order to realize F, the receiver must, after receiving a commitment, output a message "receipt". After the environment sees this message, it tells C to open the commitment.
The protocol is only secure if this scenario is indistinguishable from the ideal case, where the functionality interacts with a simulator S. Here, S has control of C. In particular, whenever R outputs "receipt", F has to do likewise. The only way to do that is for S to tell C to send a value to F. However, note that by this point, m is not known to S. Hence, when the commitment is opened during protocol execution, it is unlikely that F will open to m, unless S can extract m from the messages it received from the environment before R outputs the receipt.
However a protocol that is extractable in this sense cannot be statistically hiding. Suppose such a simulator S exists. Now consider an environment that, instead of corrupting C, corrupts R instead. Additionally it runs a copy of S. Messages received from C are fed into S, and replies from S are forwarded to C.
The environment initially tells C to commit to a message m. At some point in the interaction, S will commit to a value m′; this message is handed to R, who outputs m′. Note that by assumption we have m' = m with high probability. Now in the ideal process the simulator has to come up with m. But this is impossible, because at this point the commitment has not been opened yet, so the only message R can have received in the ideal process is a "receipt" message. We thus have a contradiction.
Construction
A commitment scheme can either be perfectly binding (it is impossible for Alice to alter her commitment after she has made it, even if she has unbounded computational resources); or perfectly concealing (it is impossible for Bob to find out the committed value without Alice revealing it, even if he has unbounded computational resources); or formulated as an instance-dependent commitment scheme, which is either hiding or binding depending on the solution to another problem. A commitment scheme cannot be both perfectly hiding and perfectly binding at the same time.
Bit-commitment in the random oracle model
Bit-commitment schemes are trivial to construct in the random oracle model. Given a hash function H with a 3k-bit output, to commit the k-bit message m, Alice generates a random k-bit string R and sends Bob H(R||m). The probability that any R′, m′ exist with m′ ≠ m such that H(R′||m′) = H(R||m) is ≈ 2^−k, but to test any guess at the message m Bob will need to make 2^k (for an incorrect guess) or 2^(k−1) (on average, for a correct guess) queries to the random oracle. Earlier schemes based on hash functions can essentially be thought of as schemes based on the idealization of those hash functions as a random oracle.
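A sketch of this construction with k = 128, using SHA-384 in place of the random oracle (its 384-bit output matches the 3k bits called for above); treating a concrete hash as a random oracle is, of course, only a heuristic:

```python
import hashlib
import secrets

K = 16  # k = 128 bits, so the oracle's 3k-bit output is SHA-384's 384 bits

def commit(m: bytes) -> tuple[bytes, bytes]:
    assert len(m) == K
    r = secrets.token_bytes(K)                # the random k-bit string R
    return hashlib.sha384(r + m).digest(), r  # commitment H(R||m)

def check_reveal(c: bytes, m: bytes, r: bytes) -> bool:
    return hashlib.sha384(r + m).digest() == c

c, r = commit(b"sixteen byte msg")
assert check_reveal(c, b"sixteen byte msg", r)
```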
Bit-commitment from any one-way permutation
One can create a bit-commitment scheme from any one-way function that is injective. The scheme relies on the fact that every one-way function can be modified (via the Goldreich–Levin theorem) to possess a computationally hard-core predicate (while retaining the injective property). Let f be an injective one-way function, with h a hard-core predicate. Then to commit to a bit b Alice picks a random input x and sends the triple
(h, f(x), h(x) ⊕ b)
to Bob, where ⊕ denotes XOR, i.e., bitwise addition modulo 2. To decommit, Alice simply sends x to Bob. Bob verifies by computing f(x) and comparing to the committed value. This scheme is concealing because for Bob to recover b he must recover h(x). Since h is a computationally hard-core predicate, recovering h(x) from f(x) with probability greater than one-half is as hard as inverting f. Perfect binding follows from the fact that f is injective and thus f(x) has exactly one preimage.
Bit-commitment from a pseudo-random generator
Note that since we do not know how to construct a one-way permutation from any one-way function, this section reduces the strength of the cryptographic assumption necessary to construct a bit-commitment protocol.
In 1991 Moni Naor showed how to create a bit-commitment scheme from a cryptographically secure pseudorandom number generator. The construction is as follows: if G is a pseudo-random generator such that G takes n bits to 3n bits, then if Alice wants to commit to a bit b:
Bob selects a random 3n-bit vector R and sends R to Alice.
Alice selects a random n-bit vector Y and computes the 3n-bit vector G(Y).
If b=1 Alice sends G(Y) to Bob, otherwise she sends the bitwise exclusive-or of G(Y) and R to Bob.
To decommit, Alice sends Y to Bob, who can then check whether he initially received G(Y) or G(Y) ⊕ R.
This scheme is statistically binding, meaning that even if Alice is computationally unbounded she cannot cheat with probability greater than 2^−n. For Alice to cheat, she would need to find a Y′ such that G(Y′) = G(Y) ⊕ R. If she could find such a value, she could decommit by sending the truth and Y, or send the opposite answer and Y′. However, G(Y) and G(Y′) can each take only 2^n possible values (2^(2n) pairs in total), while R is picked out of 2^(3n) values. She does not pick R, so there is a 2^(2n)/2^(3n) = 2^−n probability that a Y′ satisfying the equation required to cheat will exist.
The concealing property follows from a standard reduction, if Bob can tell whether Alice committed to a zero or one, he can also distinguish the output of the pseudo-random generator G from true-random, which contradicts the cryptographic security of G.
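A sketch of Naor's protocol, with SHA-256 in counter mode standing in for the pseudorandom generator G (a real instantiation would use a provably secure PRG; the parameter n = 16 bytes is an illustrative choice):

```python
import hashlib
import secrets

N = 16  # n bytes of seed; G stretches n bytes to 3n bytes

def G(seed: bytes) -> bytes:
    # Stand-in PRG: concatenate SHA-256(seed || counter) blocks.
    return b"".join(
        hashlib.sha256(seed + bytes([i])).digest() for i in range(3)
    )[: 3 * N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

R = secrets.token_bytes(3 * N)       # Bob's random 3n-byte vector

bit = 0                              # Alice's secret bit
Y = secrets.token_bytes(N)
commitment = G(Y) if bit == 1 else xor(G(Y), R)

# Decommit: Alice sends Y; Bob checks which of the two values he received.
opened_as_one = commitment == G(Y)
opened_as_zero = commitment == xor(G(Y), R)
assert (opened_as_one, opened_as_zero) == (bit == 1, bit == 0)
```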
A perfectly binding scheme based on the discrete log problem and beyond
Alice chooses a group of prime order p, with multiplicative generator g.
Alice randomly picks a secret value x from 0 to p − 1 to commit to, calculates c = g^x, and publishes c. The discrete logarithm problem dictates that from c, it is computationally infeasible to compute x, so under this assumption, Bob cannot compute x. On the other hand, Alice cannot compute an x′ ≠ x such that g^x′ = c, so the scheme is binding.
This scheme isn't perfectly concealing as someone could find the commitment if he manages to solve the discrete logarithm problem. In fact, this scheme isn't hiding at all with respect to the standard hiding game, where an adversary should be unable to guess which of two messages he chose were committed to - similar to the IND-CPA game. One consequence of this is that if the space of possible values of x is small, then an attacker could simply try them all and the commitment would not be hiding.
A better example of a perfectly binding commitment scheme is one where the commitment is the encryption of x under a semantically secure, public-key encryption scheme with perfect completeness, and the decommitment is the string of random bits used to encrypt x. An example of an information-theoretically hiding commitment scheme is the Pedersen commitment scheme, which is binding under the discrete logarithm assumption. Additionally to the scheme above, it uses another generator h of the prime group and a random number r. The commitment is set as C = g^x · h^r.
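A toy Pedersen commitment in Python, using a deliberately small safe-prime group so the arithmetic is visible; the parameters are illustrative only, and a real deployment would use a large standardized group in which nobody knows log_g(h):

```python
import secrets

# p = 2q + 1 with q prime; g and h generate the order-q subgroup of squares.
p, q = 2039, 1019
g, h = 4, 9   # toy generators; in practice h must have an unknown dlog base g

def commit(x: int) -> tuple[int, int]:
    r = secrets.randbelow(q)                       # blinding randomness
    return (pow(g, x, p) * pow(h, r, p)) % p, r    # C = g^x * h^r mod p

def check(c: int, x: int, r: int) -> bool:
    return c == (pow(g, x, p) * pow(h, r, p)) % p

c, r = commit(42)
assert check(c, 42, r)        # opens correctly
assert not check(c, 43, r)    # a different value fails with the same opening
```

The randomness r makes the commitment perfectly hiding: for any candidate value x′ there exists an r′ giving the same C, so even an unbounded Bob learns nothing about x.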
These constructions are tightly related to and based on the algebraic properties of the underlying groups, and the notion originally seemed to be very much related to the algebra. However, it was shown that basing statistically binding commitment schemes on general unstructured assumptions is possible, via the notion of interactive hashing for commitments from general complexity assumptions (specifically and originally, based on any one-way permutation).
Partial reveal
Some commitment schemes permit a proof to be given of only a portion of the committed value. In these schemes, the secret value is a vector of many individually separable values.
The commitment C is computed from the vector x in the commit phase. Normally, in the reveal phase, the prover would reveal all of x and some additional proof data (such as in simple bit-commitment). Instead, the prover is able to reveal any single value x_i from the vector, and create an efficient proof that it is the authentic i-th element of the original vector that created the commitment C. The proof does not require any values of x other than x_i to be revealed, and it is impossible to create valid proofs that reveal different values for any of the x_i than the true ones.
Vector hashing
Vector hashing is a naive vector-commitment partial reveal scheme based on bit-commitment. Random blinding values y_1, …, y_m are chosen, and individual commitments h_i = H(x_i || y_i) are created by hashing. The overall commitment is computed as
C = H(h_1 || h_2 || … || h_m)
In order to prove one element x_i of the vector x, the prover reveals the values
(i, y_i, x_i, h_1, …, h_{i−1}, h_{i+1}, …, h_m)
The verifier is able to compute h_i from x_i and y_i, and then is able to verify that the hash of all h values is the commitment C. This scheme is inefficient since the proof is O(m) in size and verification time. Alternately, if C is the set of all the h_i values, then the commitment is O(m) in size, and the proof is O(1) in size and verification time. Either way, the commitment or the proof scales with O(m).
Merkle tree
A common example of a practical partial reveal scheme is a Merkle tree, in which a binary hash tree is created of the elements of x. This scheme creates commitments that are O(1) in size, and proofs that are O(log m) in size and verification time. The root hash of the tree is the commitment C. To prove that a revealed x_i is part of the original tree, only log_2(m) hash values from the tree, one from each level, must be revealed as the proof. The verifier is able to follow the path from the claimed leaf node all the way up to the root, hashing in the sibling nodes at each level, and eventually arriving at a root node value that must equal C.
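A compact Python sketch of this scheme (function names are illustrative, and the leaf count is assumed to be a power of two for brevity):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(leaves: list[bytes]) -> list[list[bytes]]:
    # All levels of the tree, leaf hashes first, root level last.
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels: list[list[bytes]], index: int) -> list[bytes]:
    # One sibling hash per level, from leaf to root.
    proof = []
    for level in levels[:-1]:
        proof.append(level[index ^ 1])   # sibling node at this level
        index //= 2
    return proof

def verify(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [b"a", b"b", b"c", b"d"]
levels = merkle_levels(leaves)
root = levels[-1][0]                      # the commitment C
assert verify(root, b"c", 2, prove(levels, 2))
```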
KZG commitment
A Kate–Zaverucha–Goldberg commitment uses pairing-based cryptography to build a partial reveal scheme with O(1) commitment sizes, O(1) proof sizes, and O(1) proof verification time. In other words, as m, the number of values in x, increases, the commitments and proofs do not get larger, and the proofs do not take any more effort to verify.
A KZG commitment requires a predetermined set of parameters to create a pairing, and a trusted trapdoor element. For example, a Tate pairing can be used. Assume that G1 and G2 are the additive groups, and GT is the multiplicative group of the pairing. In other words, the pairing is the map e : G1 × G2 → GT. Let t be the trapdoor element (drawn from the field F_p, where p is the prime order of G1 and G2), and let G and H be the generators of G1 and G2 respectively. As part of the parameter setup, we assume that G·t^i and H·t^i are known and shared values for arbitrarily many positive integer values of i, while the trapdoor value t itself is discarded and known to no one.
Commit
A KZG commitment reformulates the vector of values to be committed as a polynomial. First, we calculate a polynomial p(x) such that p(i) = x_i for all values of i in our vector. Lagrange interpolation allows us to compute that polynomial:
p(x) = Σ_i x_i · Π_{j ≠ i} (x − j)/(i − j)
Under this formulation, the polynomial now encodes the vector, where p(0) = x_0, p(1) = x_1, and so on. Let p_0, p_1, …, p_{m−1} be the coefficients of p, such that p(x) = Σ_i p_i · x^i. The commitment is calculated as
C = Σ_i p_i · t^i · G
This is computed simply as a dot product between the predetermined values t^i·G and the polynomial coefficients p_i. Since G1 is an additive group with associativity and commutativity, C is equal to simply p(t)·G, since all the additions and multiplications with G can be distributed out of the evaluation. Since the trapdoor value t is unknown, the commitment C is essentially the polynomial evaluated at a number known to no one, with the outcome obfuscated into an opaque element of G1.
Reveal
A KZG proof must demonstrate that the revealed data is the authentic value of x_i from when C was computed. Let y = x_i, the revealed value we must prove. Since the vector of values was reformulated into a polynomial, we really need to prove that the polynomial p, when evaluated at i, takes on the value y. Simply, we just need to prove that p(i) = y. We will do this by demonstrating that subtracting y from p yields a zero at i. Define the polynomial q as
q(x) = (p(x) − y) / (x − i)
This polynomial is itself the proof that p(i) = y, because if q exists, then p(x) − y is divisible by x − i, meaning it has a root at i, so p(i) − y = 0 (or, in other words, p(i) = y). The KZG proof will demonstrate that q exists and has this property.
The prover computes q through the above polynomial division, then calculates the KZG proof value
π = Σ_i q_i · t^i · G
This is equal to q(t)·G, as above. In other words, the proof value is the polynomial q again evaluated at the trapdoor value t, hidden in the generator G of G1.
This computation is only possible if the above polynomials were evenly divisible, because in that case the quotient q is a polynomial, not a rational function. Due to the construction of the trapdoor, it is not possible to evaluate a rational function at the trapdoor value, only to evaluate a polynomial using linear combinations of the precomputed known constants t^i·G. This is why it is impossible to create a proof for an incorrect value of x_i.
Verify
To verify the proof, the bilinear map of the pairing is used to show that the proof value summarizes a real polynomial that demonstrates the desired property, which is that p(x) − y was evenly divided by x − i. The verification computation checks the equality
e(π, t·H − i·H) = e(C − y·G, H)
where e is the bilinear map function as above; t·H is a precomputed constant, and i·H is computed based on i.
By rewriting the computation in the pairing group GT, substituting in π = q(t)·G and C = p(t)·G, and letting τ(x) = e(G, H)^x be a helper function for lifting into the pairing group, the proof verification is more clear:
τ(q(t) · (t − i)) = τ(p(t) − y)
Assuming that the bilinear map is validly constructed, this demonstrates that q(t)·(t − i) = p(t) − y, without the validator knowing what p or q are. The validator can be assured of this because if τ(q(t)·(t − i)) = τ(p(t) − y), then the polynomials q(x)·(x − i) and p(x) − y evaluate to the same output at the trapdoor value t. This demonstrates the polynomials are identical, because, if the parameters were validly constructed, the trapdoor value is known to no one, meaning that engineering a polynomial to have a specific value at the trapdoor is impossible (according to the Schwartz–Zippel lemma). If q(x)·(x − i) = p(x) − y is now verified to be true, then q is verified to exist, therefore p(x) − y must be polynomial-divisible by (x − i), so p(i) = y due to the factor theorem. This proves that the i-th value of the committed vector must have equaled y, since that is the output of evaluating the committed polynomial at i.
Additionally, a KZG commitment can be extended to prove the values of any arbitrary k values of x (not just one value), with the proof size remaining O(1), but the proof verification time scaling with O(k). The proof is the same, but instead of subtracting a constant y, we subtract a polynomial that causes multiple roots, at all the locations we want to prove, and instead of dividing by x − i we divide by Π_j (x − i_j) for those same locations.
Quantum bit commitment
It is an interesting question in quantum cryptography whether unconditionally secure bit commitment protocols exist on the quantum level, that is, protocols which are (at least asymptotically) binding and concealing even if there are no restrictions on the computational resources. One could hope that there might be a way to exploit the intrinsic properties of quantum mechanics, as in the protocols for unconditionally secure key distribution.
However, this is impossible, as Dominic Mayers showed in 1996. Any such protocol can be reduced to a protocol where the system is in one of two pure states after the commitment phase, depending on the bit Alice wants to commit. If the protocol is unconditionally concealing, then Alice can unitarily transform these states into each other using the properties of the Schmidt decomposition, effectively defeating the binding property.
One subtle assumption of the proof is that the commit phase must be finished at some point in time. This leaves room for protocols that require a continuing information flow until the bit is unveiled or the protocol is cancelled, in which case it is not binding anymore. More generally, Mayers' proof applies only to protocols that exploit quantum physics but not special relativity. Kent has shown that there exist unconditionally secure protocols for bit commitment that exploit the principle of special relativity stating that information cannot travel faster than light.
Commitments based on physical unclonable functions
Physical unclonable functions (PUFs) rely on the use of a physical key with internal randomness, which is hard to clone or to emulate. Electronic, optical and other types of PUFs have been discussed extensively in the literature, in connection with their potential cryptographic applications including commitment schemes.
See also
Oblivious transfer
Accumulator (cryptography)
Key signing party
Web of trust
Zerocoin
External links
Quantum bit commitment on arxiv.org
Kate-Zaverucha-Goldberg (KZG) Constant-Sized Polynomial Commitments - Alin Tomescu
Kate polynomial commitments
Cryptographic hash function
A cryptographic hash function (CHF) is a mathematical algorithm that maps data of an arbitrary size (often called the "message") to a bit array of a fixed size (the "hash value", "hash", or "message digest"). It is a one-way function, that is, a function for which it is practically infeasible to invert or reverse the computation. Ideally, the only way to find a message that produces a given hash is to attempt a brute-force search of possible inputs to see if they produce a match, or use a rainbow table of matched hashes. Cryptographic hash functions are a basic tool of modern cryptography.
A cryptographic hash function must be deterministic, meaning that the same message always results in the same hash. Ideally it should also have the following properties:
it is quick to compute the hash value for any given message
it is infeasible to generate a message that yields a given hash value (i.e. to reverse the process that generated the given hash value)
it is infeasible to find two different messages with the same hash value
a small change to a message should change the hash value so extensively that a new hash value appears uncorrelated with the old hash value (avalanche effect)
Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication. They can also be used as ordinary hash functions, to index data in hash tables, for fingerprinting, to detect duplicate data or uniquely identify files, and as checksums to detect accidental data corruption. Indeed, in information-security contexts, cryptographic hash values are sometimes called (digital) fingerprints, checksums, or just hash values, even though all these terms stand for more general functions with rather different properties and purposes.
Properties
Most cryptographic hash functions are designed to take a string of any length as input and produce a fixed-length hash value.
A cryptographic hash function must be able to withstand all known types of cryptanalytic attack. In theoretical cryptography, the security level of a cryptographic hash function has been defined using the following properties:
Pre-image resistance: Given a hash value h, it should be difficult to find any message m such that h = hash(m). This concept is related to that of a one-way function. Functions that lack this property are vulnerable to preimage attacks.
Second pre-image resistance: Given an input m1, it should be difficult to find a different input m2 such that hash(m1) = hash(m2). This property is sometimes referred to as weak collision resistance. Functions that lack this property are vulnerable to second-preimage attacks.
Collision resistance: It should be difficult to find two different messages m1 and m2 such that hash(m1) = hash(m2). Such a pair is called a cryptographic hash collision. This property is sometimes referred to as strong collision resistance. It requires a hash value at least twice as long as that required for pre-image resistance; otherwise collisions may be found by a birthday attack.
Collision resistance implies second pre-image resistance but does not imply pre-image resistance. The weaker assumption is always preferred in theoretical cryptography, but in practice, a hash-function which is only second pre-image resistant is considered insecure and is therefore not recommended for real applications.
Informally, these properties mean that a malicious adversary cannot replace or modify the input data without changing its digest. Thus, if two strings have the same digest, one can be very confident that they are identical. Second pre-image resistance prevents an attacker from crafting a document with the same hash as a document the attacker cannot control. Collision resistance prevents an attacker from creating two distinct documents with the same hash.
A function meeting these criteria may still have undesirable properties. Currently, popular cryptographic hash functions are vulnerable to length-extension attacks: given hash(m) and len(m) but not m, by choosing a suitable m′ an attacker can calculate hash(m ∥ m′), where ∥ denotes concatenation. This property can be used to break naive authentication schemes based on hash functions. The HMAC construction works around these problems.
In practice, collision resistance is insufficient for many practical uses. In addition to collision resistance, it should be impossible for an adversary to find two messages with substantially similar digests; or to infer any useful information about the data, given only its digest. In particular, a hash function should behave as much as possible like a random function (often called a random oracle in proofs of security) while still being deterministic and efficiently computable. This rules out functions like the SWIFFT function, which can be rigorously proven to be collision-resistant assuming that certain problems on ideal lattices are computationally difficult, but, as a linear function, does not satisfy these additional properties.
Checksum algorithms, such as CRC32 and other cyclic redundancy checks, are designed to meet much weaker requirements and are generally unsuitable as cryptographic hash functions. For example, a CRC was used for message integrity in the WEP encryption standard, but an attack was readily discovered, which exploited the linearity of the checksum.
Degree of difficulty
In cryptographic practice, "difficult" generally means "almost certainly beyond the reach of any adversary who must be prevented from breaking the system for as long as the security of the system is deemed important". The meaning of the term is therefore somewhat dependent on the application since the effort that a malicious agent may put into the task is usually proportional to their expected gain. However, since the needed effort usually multiplies with the digest length, even a thousand-fold advantage in processing power can be neutralized by adding a few dozen bits to the latter.
For messages selected from a limited set of messages, for example passwords or other short messages, it can be feasible to invert a hash by trying all possible messages in the set. Because cryptographic hash functions are typically designed to be computed quickly, special key derivation functions that require greater computing resources have been developed that make such brute-force attacks more difficult.
In some theoretical analyses "difficult" has a specific mathematical meaning, such as "not solvable in asymptotic polynomial time". Such interpretations of difficulty are important in the study of provably secure cryptographic hash functions but do not usually have a strong connection to practical security. For example, an exponential-time algorithm can sometimes still be fast enough to make a feasible attack. Conversely, a polynomial-time algorithm (e.g., one that requires n^20 steps for n-digit keys) may be too slow for any practical use.
Illustration
An illustration of the potential use of a cryptographic hash is as follows: Alice poses a tough math problem to Bob and claims that she has solved it. Bob would like to try it himself, but would yet like to be sure that Alice is not bluffing. Therefore, Alice writes down her solution, computes its hash, and tells Bob the hash value (whilst keeping the solution secret). Then, when Bob comes up with the solution himself a few days later, Alice can prove that she had the solution earlier by revealing it and having Bob hash it and check that it matches the hash value given to him before. (This is an example of a simple commitment scheme; in actual practice, Alice and Bob will often be computer programs, and the secret would be something less easily spoofed than a claimed puzzle solution.)
Applications
Verifying the integrity of messages and files
An important application of secure hashes is the verification of message integrity. Comparing message digests (hash digests over the message) calculated before, and after, transmission can determine whether any changes have been made to the message or file.
MD5, SHA-1, or SHA-2 hash digests are sometimes published on websites or forums to allow verification of integrity for downloaded files, including files retrieved using file sharing such as mirroring. This practice establishes a chain of trust as long as the hashes are posted on a trusted site – usually the originating site – authenticated by HTTPS. Using a cryptographic hash and a chain of trust detects malicious changes to the file. Non-cryptographic error-detecting codes such as cyclic redundancy checks only protect against non-malicious alterations of the file, since an intentional spoof can readily be crafted to have the colliding code value.
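For illustration, a short Python helper that computes a file's digest by streaming; the file name and the published digest value are placeholders:

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha256") -> str:
    # Stream in chunks so large files never have to fit in memory.
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

published = "..."  # digest copied from the trusted (HTTPS) download page
if file_digest("downloaded.iso") == published:
    print("digest matches; the download was not corrupted or tampered with")
```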
Signature generation and verification
Almost all digital signature schemes require a cryptographic hash to be calculated over the message. This allows the signature calculation to be performed on the relatively small, statically sized hash digest. The message is considered authentic if the signature verification succeeds given the signature and recalculated hash digest over the message. So the message integrity property of the cryptographic hash is used to create secure and efficient digital signature schemes.
Password verification
Password verification commonly relies on cryptographic hashes. Storing all user passwords as cleartext can result in a massive security breach if the password file is compromised. One way to reduce this danger is to only store the hash digest of each password. To authenticate a user, the password presented by the user is hashed and compared with the stored hash. A password reset method is required when password hashing is performed; original passwords cannot be recalculated from the stored hash value.
Standard cryptographic hash functions are designed to be computed quickly, and, as a result, it is possible to try guessed passwords at high rates. Common graphics processing units can try billions of possible passwords each second. Password hash functions that perform key stretching – such as PBKDF2, scrypt or Argon2 – commonly use repeated invocations of a cryptographic hash to increase the time (and in some cases computer memory) required to perform brute-force attacks on stored password hash digests. A password hash requires the use of a large random, non-secret salt value which can be stored with the password hash. The salt randomizes the output of the password hash, making it impossible for an adversary to store tables of passwords and precomputed hash values to which the password hash digest can be compared.
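A sketch of salted password verification using PBKDF2 from Python's standard library; the iteration count and salt length are illustrative choices, and real deployments should tune them to current guidance:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; raise as hardware improves

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # large random, non-secret salt stored with the hash
    return salt, hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```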
The output of a password hash function can also be used as a cryptographic key. Password hashes are therefore also known as password-based key derivation functions (PBKDFs).
Proof-of-work
A proof-of-work system (or protocol, or function) is an economic measure to deter denial-of-service attacks and other service abuses such as spam on a network by requiring some work from the service requester, usually meaning processing time by a computer. A key feature of these schemes is their asymmetry: the work must be moderately hard (but feasible) on the requester side but easy to check for the service provider. One popular system – used in Bitcoin mining and Hashcash – uses partial hash inversions to prove that work was done, to unlock a mining reward in Bitcoin, and as a good-will token to send an e-mail in Hashcash. The sender is required to find a message whose hash value begins with a number of zero bits. The average work that the sender needs to perform in order to find a valid message is exponential in the number of zero bits required in the hash value, while the recipient can verify the validity of the message by executing a single hash function. For instance, in Hashcash, a sender is asked to generate a header whose 160-bit SHA-1 hash value has the first 20 bits as zeros. The sender will, on average, have to try about 2^20 candidates to find a valid header.
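The partial hash inversion idea in a few lines of Python; zero_bits = 16 keeps this demo fast (Hashcash's 20 bits would take roughly 16 times longer on average), and the header format is a placeholder:

```python
import hashlib
from itertools import count

def solve(header: bytes, zero_bits: int) -> int:
    # Moderately hard: try nonces until the SHA-1 digest starts with zero bits.
    target = 1 << (160 - zero_bits)
    for nonce in count():
        digest = hashlib.sha1(header + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def check(header: bytes, nonce: int, zero_bits: int) -> bool:
    # Easy to verify: a single hash evaluation.
    digest = hashlib.sha1(header + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (160 - zero_bits))

nonce = solve(b"resource:example@example.com:", zero_bits=16)
assert check(b"resource:example@example.com:", nonce, zero_bits=16)
```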
File or data identifier
A message digest can also serve as a means of reliably identifying a file; several source code management systems, including Git, Mercurial and Monotone, use the sha1sum of various types of content (file content, directory trees, ancestry information, etc.) to uniquely identify them. Hashes are used to identify files on peer-to-peer filesharing networks. For example, in an ed2k link, an MD4-variant hash is combined with the file size, providing sufficient information for locating file sources, downloading the file, and verifying its contents. Magnet links are another example. Such file hashes are often the top hash of a hash list or a hash tree which allows for additional benefits.
One of the main applications of a hash function is to allow the fast look-up of data in a hash table. Being hash functions of a particular kind, cryptographic hash functions lend themselves well to this application too.
However, compared with standard hash functions, cryptographic hash functions tend to be much more expensive computationally. For this reason, they tend to be used in contexts where it is necessary for users to protect themselves against the possibility of forgery (the creation of data with the same digest as the expected data) by potentially malicious participants.
Hash functions based on block ciphers
There are several methods to use a block cipher to build a cryptographic hash function, specifically a one-way compression function.
The methods resemble the block cipher modes of operation usually used for encryption. Many well-known hash functions, including MD4, MD5, SHA-1 and SHA-2, are built from block-cipher-like components designed for the purpose, with feedback to ensure that the resulting function is not invertible. SHA-3 finalists included functions with block-cipher-like components (e.g., Skein, BLAKE) though the function finally selected, Keccak, was built on a cryptographic sponge instead.
A standard block cipher such as AES can be used in place of these custom block ciphers; that might be useful when an embedded system needs to implement both encryption and hashing with minimal code size or hardware area. However, that approach can have costs in efficiency and security. The ciphers in hash functions are built for hashing: they use large keys and blocks, can efficiently change keys every block, and have been designed and vetted for resistance to related-key attacks. General-purpose ciphers tend to have different design goals. In particular, AES has key and block sizes that make it nontrivial to use to generate long hash values; AES encryption becomes less efficient when the key changes each block; and related-key attacks make it potentially less secure for use in a hash function than for encryption.
Hash function design
Merkle–Damgård construction
A hash function must be able to process an arbitrary-length message into a fixed-length output. This can be achieved by breaking the input up into a series of equally sized blocks, and operating on them in sequence using a one-way compression function. The compression function can either be specially designed for hashing or be built from a block cipher. A hash function built with the Merkle–Damgård construction is as resistant to collisions as is its compression function; any collision for the full hash function can be traced back to a collision in the compression function.
The last block processed should also be unambiguously length padded; this is crucial to the security of this construction. This construction is called the Merkle–Damgård construction. Most common classical hash functions, including SHA-1 and MD5, take this form.
Wide pipe versus narrow pipe
A straightforward application of the Merkle–Damgård construction, where the size of hash output is equal to the internal state size (between each compression step), results in a narrow-pipe hash design. This design causes many inherent flaws, including length-extension, multicollisions, long message attacks, generate-and-paste attacks, and also cannot be parallelized. As a result, modern hash functions are built on wide-pipe constructions that have a larger internal state size – which range from tweaks of the Merkle–Damgård construction to new constructions such as the sponge construction and HAIFA construction. None of the entrants in the NIST hash function competition use a classical Merkle–Damgård construction.
Meanwhile, truncating the output of a longer hash, such as used in SHA-512/256, also defeats many of these attacks.
Use in building other cryptographic primitives
Hash functions can be used to build other cryptographic primitives. For these other primitives to be cryptographically secure, care must be taken to build them correctly.
Message authentication codes (MACs) (also called keyed hash functions) are often built from hash functions. HMAC is such a MAC.
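For example, using Python's hmac module (the key and message here are placeholders):

```python
import hashlib
import hmac

key = b"shared secret key"
message = b"wire transfer: 100 to account 42"

# Sender computes the tag and transmits it alongside the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

# Receiver recomputes the tag; compare_digest avoids timing side channels.
assert hmac.compare_digest(hmac.new(key, message, hashlib.sha256).digest(), tag)
```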
Just as block ciphers can be used to build hash functions, hash functions can be used to build block ciphers. Luby-Rackoff constructions using hash functions can be provably secure if the underlying hash function is secure. Also, many hash functions (including SHA-1 and SHA-2) are built by using a special-purpose block cipher in a Davies–Meyer or other construction. That cipher can also be used in a conventional mode of operation, without the same security guarantees. See SHACAL, BEAR and LION.
Pseudorandom number generators (PRNGs) can be built using hash functions. This is done by combining a (secret) random seed with a counter and hashing it.
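A toy sketch of the seed-plus-counter idea; this is not a standards-grade DRBG, which would additionally handle reseeding and state separation:

```python
import hashlib

def hash_stream(seed: bytes, n_bytes: int) -> bytes:
    # Hash the secret seed together with an incrementing counter.
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n_bytes])

keystream = hash_stream(b"secret random seed", 48)  # 48 pseudorandom bytes
```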
Some hash functions, such as Skein, Keccak, and RadioGatún, output an arbitrarily long stream and can be used as a stream cipher, and stream ciphers can also be built from fixed-length digest hash functions. Often this is done by first building a cryptographically secure pseudorandom number generator and then using its stream of random bytes as keystream. SEAL is a stream cipher that uses SHA-1 to generate internal tables, which are then used in a keystream generator more or less unrelated to the hash algorithm. SEAL is not guaranteed to be as strong (or weak) as SHA-1. Similarly, the key expansion of the HC-128 and HC-256 stream ciphers makes heavy use of the SHA-256 hash function.
Concatenation
Concatenating outputs from multiple hash functions provides collision resistance as good as the strongest of the algorithms included in the concatenated result. For example, older versions of Transport Layer Security (TLS) and Secure Sockets Layer (SSL) used concatenated MD5 and SHA-1 sums. This ensures that a method to find collisions in one of the hash functions does not defeat data protected by both hash functions.
For Merkle–Damgård construction hash functions, the concatenated function is as collision-resistant as its strongest component, but not more collision-resistant. Antoine Joux observed that 2-collisions lead to n-collisions: if it is feasible for an attacker to find two messages with the same MD5 hash, then they can find as many additional messages with that same MD5 hash as they desire, with no greater difficulty. Among those messages with the same MD5 hash, there is likely to be a collision in SHA-1. The additional work needed to find the SHA-1 collision (beyond the exponential birthday search) requires only polynomial time.
Cryptographic hash algorithms
There are many cryptographic hash algorithms; this section lists a few algorithms that are referenced relatively often. A more extensive list can be found on the page containing a comparison of cryptographic hash functions.
MD5
MD5 was designed by Ronald Rivest in 1991 to replace an earlier hash function, MD4, and was specified in 1992 as RFC 1321. Collisions against MD5 can be calculated within seconds which makes the algorithm unsuitable for most use cases where a cryptographic hash is required. MD5 produces a digest of 128 bits (16 bytes).
SHA-1
SHA-1 was developed as part of the U.S. Government's Capstone project. The original specification – now commonly called SHA-0 – of the algorithm was published in 1993 under the title Secure Hash Standard, FIPS PUB 180, by U.S. government standards agency NIST (National Institute of Standards and Technology). It was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designated SHA-1. Collisions against the full SHA-1 algorithm can be produced using the SHAttered attack, and the hash function should be considered broken. SHA-1 produces a hash digest of 160 bits (20 bytes).
Documents may refer to SHA-1 as just "SHA", even though this may conflict with the other Secure Hash Algorithms such as SHA-0, SHA-2, and SHA-3.
RIPEMD-160
RIPEMD (RACE Integrity Primitives Evaluation Message Digest) is a family of cryptographic hash functions developed in Leuven, Belgium, by Hans Dobbertin, Antoon Bosselaers, and Bart Preneel at the COSIC research group at the Katholieke Universiteit Leuven, and first published in 1996. RIPEMD was based upon the design principles used in MD4 and is similar in performance to the more popular SHA-1. RIPEMD-160 has, however, not been broken. As the name implies, RIPEMD-160 produces a hash digest of 160 bits (20 bytes).
Whirlpool
Whirlpool is a cryptographic hash function designed by Vincent Rijmen and Paulo S. L. M. Barreto, who first described it in 2000. Whirlpool is based on a substantially modified version of the Advanced Encryption Standard (AES). Whirlpool produces a hash digest of 512 bits (64 bytes).
SHA-2
SHA-2 (Secure Hash Algorithm 2) is a set of cryptographic hash functions designed by the United States National Security Agency (NSA), first published in 2001. They are built using the Merkle–Damgård structure, from a one-way compression function itself built using the Davies–Meyer structure from a (classified) specialized block cipher.
SHA-2 basically consists of two hash algorithms: SHA-256 and SHA-512. SHA-224 is a variant of SHA-256 with different starting values and truncated output. SHA-384 and the lesser-known SHA-512/224 and SHA-512/256 are all variants of SHA-512. SHA-512 is more secure than SHA-256 and is commonly faster than SHA-256 on 64-bit machines such as AMD64.
The output size in bits is given by the extension to the "SHA" name, so SHA-224 has an output size of 224 bits (28 bytes); SHA-256, 32 bytes; SHA-384, 48 bytes; and SHA-512, 64 bytes.
SHA-3
SHA-3 (Secure Hash Algorithm 3) was released by NIST on August 5, 2015. SHA-3 is a subset of the broader cryptographic primitive family Keccak. The Keccak algorithm is the work of Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche. Keccak is based on a sponge construction which can also be used to build other cryptographic primitives such as a stream cipher. SHA-3 provides the same output sizes as SHA-2: 224, 256, 384, and 512 bits.
Configurable output sizes can also be obtained using the SHAKE-128 and SHAKE-256 functions. Here the -128 and -256 extensions to the name imply the security strength of the function rather than the output size in bits.
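A brief sketch of the extendable-output behaviour using Python's hashlib:

```python
import hashlib

# SHAKE-128 as an extendable-output function: the same input can yield
# outputs of any requested length, and shorter outputs are prefixes of
# longer ones.
x = hashlib.shake_128(b"hello")
print(x.hexdigest(16))  # 16-byte (128-bit) output
print(x.hexdigest(32))  # 32-byte output; its first 16 bytes match the line above
```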
BLAKE2
BLAKE2, an improved version of BLAKE, was announced on December 21, 2012. It was created by Jean-Philippe Aumasson, Samuel Neves, Zooko Wilcox-O'Hearn, and Christian Winnerlein with the goal of replacing the widely used but broken MD5 and SHA-1 algorithms. When run on 64-bit x64 and ARM architectures, BLAKE2b is faster than SHA-3, SHA-2, SHA-1, and MD5. Although BLAKE and BLAKE2 have not been standardized as SHA-3 has, BLAKE2 has been used in many protocols including the Argon2 password hash, for the high efficiency that it offers on modern CPUs. As BLAKE was a candidate for SHA-3, BLAKE and BLAKE2 both offer the same output sizes as SHA-3 – including a configurable output size.
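A short sketch of BLAKE2b's configurable digest size and built-in keyed mode, as exposed by Python's hashlib:

```python
import hashlib

# Caller-chosen digest size, from 1 to 64 bytes for BLAKE2b.
h = hashlib.blake2b(b"message", digest_size=32)
print(h.hexdigest())

# Keyed hashing: a simpler alternative to HMAC for message authentication.
mac = hashlib.blake2b(b"message", key=b"secret-key", digest_size=16)
print(mac.hexdigest())
```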
BLAKE3
BLAKE3, an improved version of BLAKE2, was announced on January 9, 2020. It was created by Jack O'Connor, Jean-Philippe Aumasson, Samuel Neves, and Zooko Wilcox-O'Hearn. BLAKE3 is a single algorithm, in contrast to BLAKE and BLAKE2, which are algorithm families with multiple variants. The BLAKE3 compression function is closely based on that of BLAKE2s, with the biggest difference being that the number of rounds is reduced from 10 to 7. Internally, BLAKE3 is a Merkle tree, and it supports higher degrees of parallelism than BLAKE2.
Attacks on cryptographic hash algorithms
There is a long list of cryptographic hash functions, but many have been found to be vulnerable and should not be used. For instance, NIST selected 51 hash functions as candidates for round 1 of the SHA-3 hash competition, of which 10 were considered broken and 16 showed significant weaknesses and therefore did not make it to the next round; more information can be found on the main article about the NIST hash function competition.
Even if a hash function has never been broken, a successful attack against a weakened variant may undermine the experts' confidence. For instance, in August 2004 collisions were found in several then-popular hash functions, including MD5. These weaknesses called into question the security of stronger algorithms derived from the weak hash functions – in particular, SHA-1 (a strengthened version of SHA-0), RIPEMD-128, and RIPEMD-160 (both strengthened versions of RIPEMD).
On August 12, 2004, Joux, Carribault, Lemuet, and Jalby announced a collision for the full SHA-0 algorithm. Joux et al. accomplished this using a generalization of the Chabaud and Joux attack. They found that the collision had complexity 2⁵¹ and took about 80,000 CPU hours on a supercomputer with 256 Itanium 2 processors – equivalent to 13 days of full-time use of the supercomputer.
In February 2005, an attack on SHA-1 was reported that would find collisions in about 2⁶⁹ hashing operations, rather than the 2⁸⁰ expected for a 160-bit hash function. In August 2005, another attack on SHA-1 was reported that would find collisions in 2⁶³ operations. Other theoretical weaknesses of SHA-1 had been known, and in February 2017 Google announced a collision in SHA-1. Security researchers recommend that new applications avoid these problems by using later members of the SHA family, such as SHA-2, or using techniques such as randomized hashing that do not require collision resistance.
A successful, practical attack broke MD5 used within certificates for Transport Layer Security in 2008.
Many cryptographic hashes are based on the Merkle–Damgård construction. All cryptographic hashes that directly use the full output of a Merkle–Damgård construction are vulnerable to length extension attacks. This makes the MD5, SHA-1, RIPEMD-160, Whirlpool, and the SHA-256 / SHA-512 hash algorithms all vulnerable to this specific attack. SHA-3, BLAKE2, BLAKE3, and the truncated SHA-2 variants are not vulnerable to this type of attack.
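To make the attack concrete, here is a toy Merkle–Damgård hash in Python; the 16-byte block, the compression function, and all names are simplifications of ours, not any real algorithm's parameters:

```python
import hashlib

BLOCK = 16
IV = b"\x00" * BLOCK

def md_padding(msg_len: int) -> bytes:
    """MD-strengthening padding for a message of msg_len bytes."""
    zeros = (BLOCK - (msg_len + 9) % BLOCK) % BLOCK
    return b"\x80" + b"\x00" * zeros + (msg_len * 8).to_bytes(8, "big")

def compress(state: bytes, block: bytes) -> bytes:
    # Toy compression function (not secure; for illustration only).
    return hashlib.sha256(state + block).digest()[:BLOCK]

def toy_md_hash(msg: bytes, state: bytes = IV, prefix_len: int = 0) -> bytes:
    """Merkle–Damgård over BLOCK-byte blocks; state/prefix_len allow resuming."""
    data = msg + md_padding(prefix_len + len(msg))
    for i in range(0, len(data), BLOCK):
        state = compress(state, data[i:i + BLOCK])
    return state

# The victim hashes secret-prefixed data and publishes only the digest.
secret, msg = b"key12345", b"amount=100"
digest = toy_md_hash(secret + msg)

# The attacker, knowing only len(secret + msg) and the digest, extends it.
suffix = b"&amount=999999"
glue = md_padding(len(secret) + len(msg))
forged_msg = msg + glue + suffix
forged_digest = toy_md_hash(suffix, state=digest,
                            prefix_len=len(secret) + len(msg) + len(glue))

assert toy_md_hash(secret + forged_msg) == forged_digest  # extension succeeds
```

The forgery works because the published digest equals the full internal state, so hashing can simply be resumed. Sponge-based SHA-3, BLAKE2's different finalization, and the truncated SHA-2 variants all withhold part of the state from the output, which is why they resist this attack.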
Attacks on hashed passwords
A common use of hashes is to store password authentication data. Rather than store the plaintext of user passwords, a controlled access system stores the hash of each user's password in a file or database. When someone requests access, the password they submit is hashed and compared with the stored value. If the database is stolen (an all too frequent occurrence), the thief will only have the hash values, not the passwords.
However, most people choose passwords in predictable ways. Lists of common passwords are widely circulated and many passwords are short enough that all possible combinations can be tested if fast hashes are used.
The use of cryptographic salt prevents some attacks, such as building files of precomputed hash values, e.g. rainbow tables. But searches on the order of 100 billion tests per second are possible with high-end graphics processors, making direct attacks possible even with salt.
The United States National Institute of Standards and Technology recommends storing passwords using special hashes called key derivation functions (KDFs) that have been created to slow brute force searches. Slow hashes include PBKDF2, bcrypt, scrypt, Argon2, Balloon and some recent modes of Unix crypt. For KDFs that perform multiple hashes to slow execution, NIST recommends an iteration count of 10,000 or more.
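A minimal sketch of salted password storage with a standard-library KDF; the iteration count and the helper names hash_password/verify_password are illustrative choices of ours:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # well above NIST's 10,000 minimum; tune to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) for storage; never store the password itself."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(key, stored_key)  # constant-time comparison

salt, key = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, key)
assert not verify_password("password123", salt, key)
```

Real deployments typically also store the iteration count alongside the salt so that it can be raised over time.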
See also
Avalanche effect
Comparison of cryptographic hash functions
Cryptographic agility
CRYPTREC
File fixity
HMAC
Hash chain
Length extension attack
MD5CRK
Message authentication code
NESSIE
PGP word list
Random oracle
Security of cryptographic hash functions
SHA-3
Universal one-way hash function
References
Citations
Sources
External links
(companion web site contains online cryptography course that covers hash functions)
Open source python based application with GUI used to verify downloads.
Cryptography
Cryptographic primitives
Hashing
Windows Media
https://en.wikipedia.org/wiki/Windows%20Media
Windows Media is a discontinued multimedia framework for media creation and distribution for Microsoft Windows. It consists of a software development kit (SDK) with several application programming interfaces (API) and a number of prebuilt technologies, and is the replacement of NetShow technologies.
The Windows Media SDK was replaced by Media Foundation when Windows Vista was released.
Software
Windows Media Center
Windows Media Player
Windows Media Encoder
Windows Media Services
Windows Movie Maker
Formats
Advanced Systems Format (ASF)
Advanced Stream Redirector (ASX)
Windows Media Audio (WMA)
Windows Media Playlist (WPL)
Windows Media Video (WMV) and VC-1
Windows Media Station (NSC)
WMV HD (Windows Media Video High Definition), the branding name for high-definition (HD) media content encoded using Windows Media codecs. WMV HD is not a separate codec.
HD Photo (formerly Windows Media Photo, standardized as JPEG XR)
DVR-MS, the recording format used by Windows Media Center
SAMI, the closed caption format developed by Microsoft. It can be used to synchronize captions and audio descriptions with online video.
Protocols
Media Stream Broadcast (MSB), for multicast distribution of Advanced Systems Format content over a network
Media Transfer Protocol (MTP), for transferring and synchronizing media on portable devices
Microsoft Media Services (MMS), the streaming transport protocol
Windows Media DRM, an implementation of digital rights management
Website
WindowsMedia.com
See also
QuickTime - Apple Computer's multimedia framework
Silverlight
External links
Official website
Description of the algorithm used for WMA encryption
Microsoft Windows multimedia technology
Multimedia frameworks
Visual cryptography
https://en.wikipedia.org/wiki/Visual%20cryptography
Visual cryptography is a cryptographic technique which allows visual information (pictures, text, etc.) to be encrypted in such a way that the decrypted information appears as a visual image.
One of the best-known techniques has been credited to Moni Naor and Adi Shamir, who developed it in 1994. They demonstrated a visual secret sharing scheme, where an image was broken up into n shares so that only someone with all n shares could decrypt the image, while any n − 1 shares revealed no information about the original image. Each share was printed on a separate transparency, and decryption was performed by overlaying the shares. When all n shares were overlaid, the original image would appear. There are several generalizations of the basic scheme, including k-out-of-n visual cryptography, and using opaque sheets but illuminating them by multiple sets of identical illumination patterns under the recording of only one single-pixel detector.
Using a similar idea, transparencies can be used to implement a one-time pad encryption, where one transparency is a shared random pad, and another transparency acts as the ciphertext. Normally, there is an expansion of space requirement in visual cryptography. But if one of the two shares is structured recursively, the efficiency of visual cryptography can be increased to 100%.
Some antecedents of visual cryptography are in patents from the 1960s. Other antecedents are in the work on perception and secure communication.
Visual cryptography can be used to protect biometric templates in which decryption does not require any complex computations.
Example
In this example, the image has been split into two component images. Each component image has a pair of pixels for every pixel in the original image. These pixel pairs are shaded black or white according to the following rule: if the original image pixel was black, the pixel pairs in the component images must be complementary; randomly shade one ■□, and the other □■. When these complementary pairs are overlapped, they will appear dark gray. On the other hand, if the original image pixel was white, the pixel pairs in the component images must match: both ■□ or both □■. When these matching pairs are overlapped, they will appear light gray.
So, when the two component images are superimposed, the original image appears. However, without the other component, a component image reveals no information about the original image; it is indistinguishable from a random pattern of ■□ / □■ pairs. Moreover, if you have one component image, you can use the shading rules above to produce a counterfeit component image that combines with it to produce any image at all.
(2, N) Visual Cryptography Sharing Case
Sharing a secret with an arbitrary number of people N such that at least 2 of them are required to decode the secret is one form of the visual secret sharing scheme presented by Moni Naor and Adi Shamir in 1994. In this scheme we have a secret image which is encoded into N shares printed on transparencies. The shares appear random and contain no decipherable information about the underlying secret image, however if any 2 of the shares are stacked on top of one another the secret image becomes decipherable by the human eye.
Every pixel from the secret image is encoded into multiple subpixels in each share image using a matrix to determine the color of the pixels.
In the (2, N) case a white pixel in the secret image is encoded using a matrix from the following set, where each row gives the subpixel pattern for one of the shares:

{all permutations of the columns of the N × N matrix whose every row is (1 0 0 … 0)}

While a black pixel in the secret image is encoded using a matrix from the following set:

{all permutations of the columns of the N × N identity matrix, in which row i has a 1 only in column i}
For instance in the (2,2) sharing case (the secret is split into 2 shares and both shares are required to decode the secret) we use complementary matrices to share a black pixel and identical matrices to share a white pixel. Stacking the shares we have all the subpixels associated with the black pixel now black while 50% of the subpixels associated with the white pixel remain white.
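A minimal Python sketch of the (2,2) case on a single row of pixels; the helper names are ours, and subpixels are encoded as 1 = black, 0 = transparent:

```python
import secrets

PATTERNS = [(1, 0), (0, 1)]  # the two possible subpixel pairs

def make_shares(secret_row):
    """secret_row: bits (1 = black, 0 = white). Returns two share rows."""
    share1, share2 = [], []
    for pixel in secret_row:
        p = PATTERNS[secrets.randbelow(2)]            # random pattern per pixel
        q = tuple(1 - b for b in p) if pixel else p   # complementary / identical
        share1.extend(p)
        share2.extend(q)
    return share1, share2

def overlay(a, b):
    """Stacking transparencies: black wins wherever either share is black."""
    return [x | y for x, y in zip(a, b)]

row = [0, 1, 1, 0]   # white, black, black, white
s1, s2 = make_shares(row)
print(overlay(s1, s2))  # black pixels -> (1, 1); white pixels -> one black subpixel
```

Each share on its own is a uniformly random sequence of ■□/□■ pairs, which is exactly why a single share reveals nothing about the secret.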
Cheating the (2,N) Visual Secret Sharing Scheme
Horng et al. proposed a method that allows N − 1 colluding parties to cheat an honest party in visual cryptography. They take advantage of knowing the underlying distribution of the pixels in the shares to create new shares that combine with existing shares to form a new secret message of the cheaters' choosing.
We know that 2 shares are enough to decode the secret image using the human visual system. But examining two shares also gives some information about the 3rd share. For instance, colluding participants may examine their shares to determine when they both have black pixels and use that information to determine that another participant will also have a black pixel in that location. Knowing where black pixels exist in another party's share allows them to create a new share that will combine with the predicted share to form a new secret message. In this way a set of colluding parties that have enough shares to access the secret code can cheat other honest parties.
In popular culture
In "Do Not Forsake Me Oh My Darling", a 1967 episode of TV series The Prisoner, the protagonist uses a visual cryptography overlay of multiple transparencies to reveal a secret message – the location of a scientist friend who had gone into hiding.
See also
Grille (cryptography)
Steganography
References
External links
Python implementation of Visual Cryptography
Visual Cryptography on Cipher Machines & Cryptology
Doug Stinson's visual cryptography page
Cryptography
StuffIt Expander
https://en.wikipedia.org/wiki/StuffIt%20Expander
StuffIt Expander is a proprietary, freeware, closed source, decompression software utility developed by Allume Systems (a subsidiary of Smith Micro Software, formerly known as Aladdin Systems). It runs on the classic Mac OS, macOS, and Microsoft Windows. Prior to 2011, a Linux version had also been available for download.
The latest version for each Mac platform is as follows:
16.0.5 for Mac OS X 10.8+ (as of January 2019);
15.0.7 (2011) for Mac OS X 10.6.8+;
15.0.4 (2011) for Mac OS X 10.5+;
14.0.1 (2010) for Mac OS X 10.4+;
10.0.2 for Mac OS X 10.3+;
8.0.2 for Mac OS X 10.0+;
7.0.3 for Mac OS 8.6+;
6.0.1 for Mac OS 8.1+ (PowerPC only);
5.5.1 for System 7.1+ (68020 and up, PowerPC);
4.5 for System 6+ (compatible with all 68k processors).
StuffIt has been a target of criticism and dissatisfaction from Mac users in the past as the file format changes frequently, notably during the introduction of StuffIt version 5.0. Expander 5.0 contained many bugs, and its file format was not readable by the earlier version 4.5, leaving Mac users of the time without a viable compression utility.
The latest stand-alone version for Windows is 2009 (13.0). Unlike the version before it (12.0), which was only able to decompress the newer StuffIt X (and ZIP) archives, version 2009 claims to be able to decompress over 30 formats, some listed below. The executables require both the .NET v2.0 framework and the MSVC 2008 (9.0) runtimes. The previous stand-alone version able to decompress StuffIt and other classic Mac OS-specific archives was 7.02, distributed with StuffIt v7.0.x for Windows.
From versions 7.5.x to 11 the Expander capabilities were actually performed by the StuffIt Standard Edition, which allowed decompression even after the end of the trial period. To start StuffIt in Expander mode, a set of command-line switches was used; in that mode the registration reminder dialogue box is not shown. With older versions of StuffIt Expander on the classic Mac OS platform, such as StuffIt Expander 3.5, it was possible to enhance the capabilities of StuffIt Expander and to add support for decompressing additional archive formats by means of the shareware DropStuff with Expander Enhancer software from Aladdin Systems.
There is also a command line DOS application called UNSTUFF v1.1 that allows decompression of StuffIt files.
StuffIt Expander 2009 decompresses files in the following formats:
7-Zip
AppleSingle
Arc
ARJ
BinHex, all versions
BTOA
bzip2
CAB
Compact Pro
gzip
LHA
LZMA
MacBinary, all versions
MIME/Base 64
Private File, Aladdin's encryption file format
RAR, including segmented archives
SpaceSaver, the StuffIt compression format used in versions prior to 5.x
StuffIt v1.5.1 to 8.0.x, including encrypted, segmented and self-extracting archives (Classic Mac OS file type code 'SIT!')
tar
Unix Compress
UU, a PC/Unix 8-bit to 7-bit encoding similar to BinHex
yEncode
ZIP, including encrypted, Zip64, segmented and self-extracting archives
References
External links
Current Stuffit homepage, with links to download Mac and Windows versions
Classic Mac OS software
MacOS archivers and compression-related utilities
Data compression software
Freeware
Inter-Asterisk eXchange
https://en.wikipedia.org/wiki/Inter-Asterisk%20eXchange
Inter-Asterisk eXchange (IAX) is a communications protocol native to the Asterisk private branch exchange (PBX) software, and is supported by a few other softswitches, PBX systems, and softphones. It is used for transporting VoIP telephony sessions between servers and to terminal devices.
The original IAX protocol is deprecated and has been superseded by a second version, commonly called IAX2. The IAX2 protocol was published as an informational (non-standards-track) RFC 5456 by discretion of the RFC Editor in February 2010.
Basic properties
IAX is a VoIP protocol that can be used for any type of streaming media including video, but is mainly designed for IP voice calls.
IAX uses a single User Datagram Protocol (UDP) data stream between endpoints for both the session signaling and the media payloads. Thus it uses only a single UDP port number, typically 4569. This feature provides benefits for traversing network address translators on network boundaries, as it simplifies firewall configuration. Other VoIP protocols typically use independent streams for signaling and media, such as the Session Initiation Protocol (SIP), H.323, and the Media Gateway Control Protocol (MGCP), which carry media with the Real-time Transport Protocol (RTP).
IAX is a binary-encoded protocol. New extension features must have a new numeric code allocated. Historically, this was modeled after the internal data passing of Asterisk modules.
IAX supports trunking, multiplexing channels over a single link. When trunking, data from multiple sessions are merged into a single stream of packets between two endpoints, reducing the IP overhead without creating additional latency. This is advantageous in VoIP transmissions, in which IP headers use a large percentage of bandwidth.
IAX2 supports native encryption of both control and media streams using AES-128.
Origin
Both versions of the IAX protocol were created by Mark Spencer and much of the development was carried out in the Asterisk open-source community.
Goals
The primary goals for IAX are to minimize bandwidth used in media transmissions, with particular attention to individual voice calls, and to provide native network address translation (NAT) transparency. It was intended to be easy to use behind firewalls.
Drawbacks
Awkward extensibility: Due to the lack of a generic extension mechanism, new features have to be added in the protocol specification, which makes the protocol less flexible than H.323, SIP or MGCP.
Vulnerability: Older implementations of IAX2 were vulnerable to resource-exhaustion denial-of-service attacks for which exploit code is publicly available. While no complete solutions existed for these issues, best practice included limiting UDP port access to specific trusted IP addresses. Internet-facing IAX2 ports are considered vulnerable and should be monitored closely. The fuzzer used to detect these application vulnerabilities was posted on milw0rm and is included in the VoIPer development tree. These issues were briefly mentioned in the IAX RFC 5456 on page 94. This flaw does not exist in up-to-date installations of Asterisk or other PBXes.
See also
SIP connection (aka SIP trunk)
References
External links
IAX: Inter-Asterisk eXchange Version 2
IANA Registration for Enumservice 'iax'
VoIP protocols
Asterisk (PBX)
Application layer protocols
Locally cyclic group
https://en.wikipedia.org/wiki/Locally%20cyclic%20group
In mathematics, a locally cyclic group is a group (G, *) in which every finitely generated subgroup is cyclic.
Some facts
Every cyclic group is locally cyclic, and every locally cyclic group is abelian.
Every finitely-generated locally cyclic group is cyclic.
Every subgroup and quotient group of a locally cyclic group is locally cyclic.
Every homomorphic image of a locally cyclic group is locally cyclic.
A group is locally cyclic if and only if every pair of elements in the group generates a cyclic group.
A group is locally cyclic if and only if its lattice of subgroups is distributive (a theorem of Ore).
The torsion-free rank of a locally cyclic group is 0 or 1.
The endomorphism ring of a locally cyclic group is commutative.
Examples of locally cyclic groups that are not cyclic
The additive group of rational numbers (Q, +) is locally cyclic: any finite set of rationals lies in the cyclic subgroup generated by the reciprocal of a common denominator, yet the whole group is not cyclic.
The Prüfer p-group Z(p^∞) is locally cyclic: every finite subset lies in one of its finite cyclic subgroups, but the group itself is not cyclic.
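As a worked instance of the pairwise criterion above, in (Q, +) the subgroup generated by two rationals is again cyclic; with the illustrative generators 1/6 and 1/10:

```latex
\left\langle \tfrac{1}{6}, \tfrac{1}{10} \right\rangle
  = \left\{ \tfrac{a}{6} + \tfrac{b}{10} : a, b \in \mathbb{Z} \right\}
  = \left\{ \tfrac{5a + 3b}{30} : a, b \in \mathbb{Z} \right\}
  = \tfrac{1}{30}\mathbb{Z},
\qquad \text{since } \gcd(5, 3) = 1.
```

The same common-denominator argument works for any finite set of rationals, which is why (Q, +) is locally cyclic.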
Examples of abelian groups that are not locally cyclic
The additive group of real numbers (R, +); the subgroup generated by 1 and √2 (comprising all numbers of the form a + b√2) is isomorphic to the direct sum Z + Z, which is not cyclic.
References
Abelian group theory
Properties of groups
Comparison of computer viruses
https://en.wikipedia.org/wiki/Comparison%20of%20computer%20viruses
The compilation of a unified list of computer viruses is made difficult because of naming. To aid the fight against computer viruses and other types of malicious software, many security advisory organizations and developers of anti-virus software compile and publish lists of viruses. When a new virus appears, the rush begins to identify and understand it as well as develop appropriate counter-measures to stop its propagation. Along the way, a name is attached to the virus. As the developers of anti-virus software compete partly based on how quickly they react to the new threat, they usually study and name the viruses independently. By the time the virus is identified, many names denote the same virus.
Another source of ambiguity in names is that sometimes a virus initially identified as a completely new virus is found to be a variation of an earlier known virus, in which cases, it is often renamed. For example, the second variation of the Sobig worm was initially called "Palyh" but later renamed "Sobig.b". Again, depending on how quickly this happens, the old name may persist.
Scope
In terms of scope, there are two major variants: the list of "in-the-wild" viruses, which lists viruses in active circulation, and lists of all known viruses, which also contain viruses believed not to be in active circulation (also called "zoo viruses"). The sizes are vastly different: in-the-wild lists contain about a hundred viruses, while full lists contain tens of thousands.
Comparison of viruses and related programs
{|class="wikitable sortable" border="1"
!Virus
!Alias(es)
!Types
!Subtype
!Isolation Date
!Isolation
!Origin
!Author
!Notes
|-
|1260
|V2Px
|DOS
|Polymorphic
|1990
|
|
|
|First virus family to use polymorphic encryption
|-
|4K
|4096
|DOS
|
|1990-01
|
|
|
|The first known MS-DOS-file-infector to use stealth
|-
|5lo
|
|DOS
|
|1992-10
|
|
|
|Infects .EXE files only
|-
|Abraxas
|Abraxas5
|DOS,Windows 95, 98
|
|1993-04
|Europe
|
|ARCV group
|Infects COM file. Disk directory listing will be set to the system date and time when infection occurred.
|-
|Acid
|Acid.670, Acid.670a, Avatar.Acid.670, Keeper.Acid.670
|DOS,Windows 95, 98
|
|1992
|
|
|Corp-$MZU
|Infects COM file. Disk directory listing will not be altered.
|-
|Acme
|
|DOS,Windows 95 DOS
|
|1992
|
|
|
|Upon executing infected EXE, this infects another EXE in current directory by making a hidden COM file with same base name.
|-
|ABC
|ABC-2378, ABC.2378, ABC.2905
|DOS
|
|1992-10
|
|
|
|ABC causes keystrokes on the compromised machine to be repeated.
|-
|Actifed
|
|DOS
|
|
|
|
|
|
|-
|Ada
|
|DOS
|
|1991-10
|
|Argentina
|
|The Ada virus mainly targets .COM files, specifically COMMAND.COM.
|-
|AGI-Plan
|Month 4-6
|DOS
|
|
|Mülheim
|
|
|AGI-Plan is notable for reappearing in South Africa in what appeared to be an intentional re-release.
|-
|AI
|
|DOS
|
|
|
|
|
|
|-
|AIDS
|AIDSB, Hahaha, Taunt
|DOS
|
|1990
|
|
|
|AIDS is the first virus known to exploit the DOS "corresponding file" vulnerability.
|-
|AIDS II
|
|DOS
|
|circa 1990
|
|
|
|
|-
|Alabama
|Alabama.B
|DOS
|
|1989-10
|
|Hebrew University, Jerusalem
|
|Files infected by Alabama increase in size by 1,560 bytes.
|-
|Alcon
|RSY, Kendesm, Ken&Desmond, Ether
|DOS
|
|1997-12
|
|
|
|Overwrites random information on disk causing damage over time.
|-
|Ambulance
|
|DOS
|
|1990-06
|
|
|
|
|-
|Anna Kournikova
|
|E-MailVBScript
|
|2001-02-11
|
|Sneek, Netherlands
|Jan de Wit
|A Dutch court stated that US$166,000 in damages was caused by the worm.
|-
|ANTI
|ANTI-A, ANTI-ANGE, ANTI-B, Anti-Variant
|Classic Mac OS
|
|1989-02
|France
|
|
|The first Mac OS virus not to create additional resources; instead, it patches existing CODE resources.
|-
|AntiCMOS
|
|DOS
|
|January 1994 – 1995
|
|
|
|Due to a bug in the virus code, the virus fails to erase CMOS information as intended.
|-
|ARCV-n
|
|DOS
|
|1992-10/1992-11
|
|England, United Kingdom
|ARCV Group
|ARCV-n is a term for a large family of viruses written by the ARCV group.
|-
|Alureon
|TDL-4, TDL-1, TDL-2, TDL-3, TDL-TDSS
|Windows
|Botnet
|2007
|
|Estonia
|JD virus
|
|-
|Autostart
|Autostart.A—D
|Classic Mac OS
|
|1998
|Hong Kong
|China
|
|
|-
|Bomber
|CommanderBomber
|DOS
|
|
|
|Bulgaria
|
|Polymorphic virus which infects systems by inserting fragments of its code randomly into executable files.
|-
|Brain
|Pakistani flu
|DOS
|Boot sector virus
|1986-01
|
|Lahore, Pakistan
|Basit and Amjad Farooq Alvi
|Considered to be the first computer virus for the PC
|-
|Byte Bandit
|
|Amiga
|Boot sector virus
|1988-01
|
|
|Swiss Cracking Association
|It was one of the most feared Amiga viruses until the infamous Lamer Exterminator.
|-
|CDEF
|
|Classic Mac OS
|
|1990-08
|
|Ithaca, New York
|
|Cdef arrives on a system from an infected Desktop file on removable media. It does not infect any Macintosh systems beyond OS6.
|-
|Christmas Tree
|
|
|Worm
|1987-12
|
|Germany
|
|
|-
|CIH
|Chernobyl, Spacefiller
|Windows 95, 98, Me
|
|1998-06
|Taiwan
|Taiwan
|Chen ing-Hau
|Activates on April 26, in which it destroys partition tables, and tries to overwrite the BIOS.
|-
|Commwarrior
|
|Symbian Bluetooth worm
|
|
|
|
|
|Famous for being the first worm to spread via MMS and Bluetooth.
|-
|Creeper
|
|TENEX operating system
|Worm
|1971
|
|
|Bob Thomas
|An experimental self-replicating program which gained access via the ARPANET and copied itself to the remote system.
|-
|Eliza
|
|DOS
|
|1991-12
|
|
|
|
|-
|Elk Cloner
|
|Apple II
|
|1982
|Mt. Lebanon, Pennsylvania
|Mt. Lebanon, Pennsylvania
|Rich Skrenta
|The first virus observed "in the wild"
|-
|Esperanto
|
|DOS, MS Windows, Classic Mac OS
|
|1997-11
|Spain
|Spain
|Mister Sandman
|First multi-processor virus. The virus is capable of infecting files on computers running Microsoft Windows and DOS on the x86 processor and MacOS, whether they are on a Motorola or PowerPC processor.
|-
|Form
|
|DOS
|
|1990
|Switzerland
|
|
|A very common boot virus, triggers on the 18th of any month.
|-
|Fun
|
|Windows
|
|2008
|
|
|
| It registers itself as a Windows system process then periodically sends mail with spreading attachments as a response to any unopened emails in Outlook Express
|-
|Graybird
|Backdoor.GrayBird, BackDoor-ARR
|Windows
|Trojan Horse
|2003-02-04
|
|
|
|
|-
|Hare
|
|DOS,Windows 95, Windows 98
|
|1996-08
|
|
|
|Famous for press coverage which blew its destructiveness out of proportion
|-
|ILOVEYOU
|
|Microsoft
|Worm
|2000-05-05
|
|Manila, Philippines
|Michael Buen, Onel de Guzman
|Computer worm that attacked tens of millions of Windows personal computers
|-
|INIT 1984
|
|Classic Mac OS
|
|1992-03-13
|Ireland
|
|
|Malicious, triggered on Friday the 13th. Init1984 works on Classic Mac OS System 6 and 7.
|-
|Jerusalem
|
|DOS
|
|1987-10
|
|
|
|Jerusalem was initially very common and spawned a large number of variants.
|-
|Kama Sutra
|Blackworm, Nyxem, and Blackmal
|
|
|2006-01-16
|
|
|
|Designed to destroy common files such as Microsoft Word, Excel, and PowerPoint documents.
|-
|Koko
|
|DOS
|
|1991-03
|
|
|
|The payload of this virus activates on July 29 and February 15 and may erase data on the users hard drive
|-
|Lamer Exterminator
|
|Amiga
|Boot sector virus
|1989-10
|
|Germany
|
|Random encryption, fills random sector with "LAMER"
|-
|MacMag
|Drew, Bradow, Aldus, Peace
|Classic Mac OS
|
|1987-12
|
|United States
|
|Products (not necessarily the Classic Mac OS) were infected with the first actual virus.
|-
|MDEF
|Garfield, Top Cat
|Classic Mac OS
|
|1990-05-15
|
|Ithaca, New York
|
|Infects menu definition resource fork files. Mdef infects all Classic Mac OS versions from 4.1 to 6.
|-
|Melissa
|Mailissa, Simpsons, Kwyjibo, Kwejeebo
|Microsoft Word macro virus
|
|1999-03-26
|
|New Jersey
|David L. Smith
|Part macro virus and part worm. Melissa, a MS Word-based macro that replicates itself through e-mail.
|-
|Mirai
|
|Internet of Things
|DDoS
|2016
|
|
|
|
|-
|Michelangelo
|
|DOS
|
|1991-02-04
|Australia
|
|
|Ran March 6 (Michelangelo's birthday)
|-
|Mydoom
|Novarg, Mimail, Shimgapi
|Windows
|Worm
|2004-01-26
|World
|Russia
|
|Mydoom was the world's fastest spreading computer worm to date, surpassing Sobig, and the ILOVEYOU computer worms, yet it was used to DDoS servers.
|-
|Navidad
|
|Windows
|Mass-mailer worm
|2000-12
|
|South America
|
|
|-
|Natas
|Natas.4740, Natas.4744, Natas.4774, Natas.4988
|DOS
|Multipartite, stealth, polymorphic
|1994-06
|Mexico City
|United States
|Priest (AKA Little Loc)
|
|-
|nVIR
|MODM, nCAM, nFLU, kOOL, Hpat, Jude, Mev#, nVIR.B
|Classic Mac OS
|
|1987-12
|
|United States
|
|nVIR has been known to 'hybridize' with different variants of nVIR on the same machine.
|-
|Oompa
|Leap
|Mac OSX
|Worm
|2006-02-10
|
|
|
|First worm for Mac OSX. It propagates through iChat, an instant message client for Macintosh operating systems. Whether Oompa is a worm has been controversial. Some believe it is a trojan.
|-
|OneHalf
|Slovak Bomber, Freelove or Explosion-II
|DOS
|
|1994
|
|Slovakia
|Vyvojar
|It is also known as one of the first viruses to implement a technique of "patchy infection"
|-
|Ontario.1024
|
|
|
|
|
|
|
|
|-
|Ontario.2048
|
|
|
|
|
|
|
|
|-
|Ontario
|SBC
|DOS
|
|1990-07
|
|Ontario
|"Death Angel"
|
|-
|Petya
|GoldenEye, NotPetya
|Windows
|Trojan horse
|2016
|Ukraine
|Russia
|
|Total damages brought about by NotPetya amounted to more than $10 billion.
|-
|Pikachu virus
|
|
|
|2000-06-28
|
|Asia
|
|The Pikachu virus is believed to be the first computer virus geared at children.
|-
|Ping-pong
| Boot, Bouncing Ball, Bouncing Dot, Italian, Italian-A, VeraCruz
| DOS
|Boot sector virus
|1988-03
|
|Turin
|
| Harmless to most computers
|-
|RavMonE.exe
|RJump.A, Rajump, Jisx
|Worm
|
|2006-06-20
|
|
|
|Once distributed in Apple iPods, but a Windows-only virus
|-
|SCA
|
|Amiga
|Boot sector virus
|1987-11
|
|Switzerland
|Swiss Cracking Association
|Puts a message on screen. Harmless except it might destroy a legitimate non-standard boot block.
|-
|Scores
|Eric, Vult, NASA, San Jose Flu
|Classic Mac OS
|
|1988-04
|United States
|Fort Worth, Texas
|Donald D. Burleson
|Designed to attack two specific applications which were never released.
|-
|Scott's Valley
|
|DOS
|
|1990-09
|Scotts Valley, California
|
|
|Infected files will contain the seemingly meaningless hex string 5E8BDE909081C63200B912082E.
|-
|SevenDust
|666, MDEF, 9806, Graphics Accelerator, SevenD, SevenDust.B—G
|Classic Mac OS
|Polymorphic
|1989-06
|
|
|
|
|-
|Marker
|Shankar's Virus, Marker.C, Marker.O, Marker.Q, Marker.X, Marker.AQ, Marker.BN, Marker.BO, Marker.DD, Marker.GR, W97M.Marker
|MS Word
|Polymorphic, Macro virus
|1999-06-03
|
|
|Sam Rogers
|Infects Word Documents
|-
|Simile
|Etap, MetaPHOR
|Windows
|Polymorphic
|
|
|
|The Mental Driller
|The metamorphic code accounts for around 90% of the virus' code
|-
|SMEG engine
|
|DOS
|Polymorphic
|1994
|
|United Kingdom
|The Black Baron
|Two viruses were created using the engine: Pathogen and Queeg.
|-
|Stoned
|
|DOS
|Boot sector virus
|1987
|Wellington
|
|
|One of the earliest and most prevalent boot sector viruses
|-
|Jerusalem
|Sunday, Jerusalem-113, Jeruspain, Suriv, Sat13, FuManchu
|DOS
|File virus
|1987-10
|Seattle
|
|
|Virus coders created many variants of the virus, making Jerusalem one of the largest families of viruses ever created. It even includes many sub-variants and a few sub-sub-variants.
|-
|WannaCry
|Wanna, Cryptor
|Windows
|Ransomware Cryptoworm
|2017-05
|World
|North Korea
|
|
|-
|WDEF
|WDEF A
|Classic Mac OS
|
|1989-12-15
|
|
|
|Given the unique nature of the virus, its origin is uncertain.
|-
|Whale
|
|DOS
|Polymorphic
|1990-07-01
|
|Hamburg
|R Homer
|At 9216 bytes, was for its time the largest virus ever discovered.
|-
|ZMist
|ZMistfall, ZombieMistfall
|Windows
|
|2001
|
|Russia
|Z0mbie
|It was the first virus to use a technique known as "code integration".
|-
|Xafecopy
|
|Android
|Trojan
|2017
|
|
|
|
|-
|Zuc
|Zuc.A., Zuc.B, Zuc.C
|Classic Mac OS
|
|1990-03
|Italy
|Italy
|
|
|-
|}
Related lists
List of computer worms
Timeline of computer viruses and worms
Unusual subtypes
Palm OS viruses
HyperCard viruses
Linux malware
Notable instances
Conficker
Creeper virus - The first malware that ran on ARPANET
ILOVEYOU
Leap - Mac OS X Trojan horse
Shamoon - a wiper virus with stolen digital certificates that destroyed over 35,000 computers owned by Saudi Aramco.
Storm Worm - A Windows trojan horse that forms the Storm botnet
Stuxnet - First destructive ICS-targeting Trojan, which destroyed part of Iran's nuclear program. The virus destroyed centrifuge components, making it impossible to enrich uranium to weapons grade.
Similar software
Adware
Malware
Spamming
Spyware
Computer worm
Trojan horse
Security topics
Antivirus software
Computer insecurity
Cryptovirology
Security through obscurity
Cyberwarfare
See also
Computer worm
Spyware
Virus hoax
Zombie computer
References
External links
The WildList, by WildList Organization International
List of Computer Viruses - listing of the latest viruses by Symantec.
List of all viruses All viruses cataloged in Panda Security's Collective Intelligence servers.
Computer viruses
Viruses
Simon Singh
https://en.wikipedia.org/wiki/Simon%20Singh
Simon Lehna Singh (born 19 September 1964) is a British popular science author, theoretical and particle physicist whose works largely contain a strong mathematical element. His written works include Fermat's Last Theorem (in the United States titled Fermat's Enigma: The Epic Quest to Solve the World's Greatest Mathematical Problem), The Code Book (about cryptography and its history), Big Bang (about the Big Bang theory and the origins of the universe), Trick or Treatment? Alternative Medicine on Trial (about complementary and alternative medicine, co-written by Edzard Ernst) and The Simpsons and Their Mathematical Secrets (about mathematical ideas and theorems hidden in episodes of The Simpsons and Futurama). In 2012 Singh founded the Good Thinking Society, through which he created the website "Parallel" to help students learn mathematics.
Singh has also produced documentaries and works for television to accompany his books, is a trustee of the National Museum of Science and Industry, a patron of Humanists UK, founder of the Good Thinking Society, and co-founder of the Undergraduate Ambassadors Scheme.
Early life and education
Singh's parents emigrated from Punjab, India to Britain in 1950. He is the youngest of three brothers, his eldest brother being Tom Singh, the founder of the UK New Look chain of stores. Singh grew up in Wellington, Somerset, attending Wellington School, and went on to Imperial College London, where he studied physics. He was active in the student union, becoming President of the Royal College of Science Union. Later he completed a PhD in particle physics at the University of Cambridge as a postgraduate student of Emmanuel College, Cambridge while working at CERN, Geneva.
Career
In 1983, he was part of the UA2 experiment at CERN.
In 1987, Singh taught science at The Doon School, an independent all-boys' boarding school in India. In 1990 Singh returned to England and joined the BBC's Science and Features Department, where he was a producer and director working on programmes such as Tomorrow's World and Horizon. Singh was introduced to Richard Wiseman through their collaboration on Tomorrow's World. At Wiseman's suggestion, Singh directed a segment about politicians lying in different mediums, and getting the public's opinion on whether the person was lying or not.
After attending some of Wiseman's lectures, Singh came up with the idea to create a show together, and Theatre of Science was born. It was a way to deliver science to normal people in an entertaining manner. Richard Wiseman has influenced Singh in such a way that Singh states:
In 1996, Singh directed his BAFTA award-winning documentary Fermat's Last Theorem, about the world's most notorious mathematical problem. The film was memorable for its opening shot of a middle-aged mathematician, Andrew Wiles, holding back tears as he recalled the moment when he finally realised how to resolve the fundamental error in his proof of Fermat's Last Theorem. The documentary was originally transmitted in January 1996 as an edition of the BBC Horizon series. It was also aired in America as part of the NOVA series. The Proof, as it was re-titled, was nominated for an Emmy Award.
The story of this celebrated mathematical problem was also the subject of Singh's first book, Fermat's Last Theorem. In 1997, he began working on his second book, The Code Book, a history of codes and codebreaking. As well as explaining the science of codes and describing the impact of cryptography on history, the book also contends that cryptography is more important today than ever before. The Code Book has resulted in a return to television for him. He presented The Science of Secrecy, a five-part series for Channel 4. The stories in the series range from the cipher that sealed the fate of Mary, Queen of Scots, to the coded Zimmermann Telegram that changed the course of the First World War. Other programmes discuss how two great 19th-century geniuses raced to decipher Egyptian hieroglyphs and how modern encryption can guarantee privacy on the Internet.
On his activities as author he said in an interview to Imperial College London:
In October 2004, Singh published a book entitled Big Bang, which tells the history of the universe. It is told in his trademark style, by following the remarkable stories of the people who put the pieces together.
He made headlines in 2005 when he criticised the Katie Melua song "Nine Million Bicycles" for inaccurate lyrics referring to the size of the observable universe. Singh proposed corrected lyrics, though he used the value of 13.7 billion light years; accounting for expansion of the universe, the comoving distance to the edge of the observable universe is 46.5 billion light years. BBC Radio 4's Today programme brought Melua and Singh together in a radio studio where Melua recorded a tongue-in-cheek version of the song that had been written by Singh.
Singh was part of an investigation about homeopathy in 2006. This investigation was made by the organization Sense About Science.
In the investigation, a student asked ten homeopaths for an alternative to her preventive malaria medication. All ten homeopaths recommended homeopathy as a substitute.
This investigation was reported by the BBC.
Singh is a member of the Advisory Council for the Campaign for Science and Engineering.
Singh has continued to be involved in television and radio programmes, including A Further Five Numbers (BBC Radio 4, 2005).
Honorary degrees
In 2003 Singh was awarded an honorary degree of Doctor of Letters (honoris causa) by Loughborough University, and in 2005 was given an honorary degree in Mathematics by the University of Southampton.
In 2006, he was awarded an honorary Doctor of Design degree by the University of the West of England "in recognition of Simon Singh's outstanding contribution to the public understanding of science, in particular in the promotion of science, engineering and mathematics in schools and in the building of links between universities and schools". This was followed up by his receipt of the Kelvin Medal from the Institute of Physics in 2008, for his achievements in promoting Physics to the general public. In July 2008, he was also awarded a degree of Doctor of Science (Honoris Causa) by Royal Holloway, University of London.
In July 2011, he was awarded another degree of Doctor of Science (Honoris Causa) by the University of Kent at Canterbury for services to Science. In June 2012, Singh was awarded the Honorary Degree of Doctor of Science (honoris causa) for his contribution to science communication, education and academic freedom by The University of St Andrews.
Other awards and honours
In 2003, Singh was made a Member of the Order of the British Empire (MBE) for services to science, technology and engineering in education and science communication.
In 2010 he became the inaugural recipient of the Lilavati Award.
In February 2011 he was elected as a Fellow of the Committee for Skeptical Inquiry.
Chiropractic lawsuit
On 19 April 2008, The Guardian published Singh's column "Beware the Spinal Trap", an article that was critical of the practice of chiropractic and which resulted in Singh being sued for libel by the British Chiropractic Association (BCA).
The article developed the theme of the book that Singh and Edzard Ernst had published, Trick or Treatment? Alternative Medicine on Trial, and made various statements about the lack of usefulness of chiropractic "for such problems as ear infections and infant colic":
When the case was brought against him, The Guardian supported him and funded his legal advice, as well as offering to pay the BCA's legal costs in an out-of-court settlement if Singh chose to settle.
A "furious backlash" to the lawsuit resulted in the filing of formal complaints of false advertising against more than 500 individual chiropractors within one 24-hour period, with one national chiropractic organisation ordering its members to take down their websites, and Nature Medicine noting that the case had gathered wide support for Singh, as well as prompting calls for the reform of English libel laws. On 1 April 2010, Simon Singh won his court appeal for the right to rely on the defence of fair comment. On 15 April 2010, the BCA officially withdrew its lawsuit, ending the case.
To defend himself for the libel suit, Singh's out-of-pocket legal costs were tens of thousands of pounds. The trial acted as a catalyst. The outrage over the initial ruling brought together several groups to support Singh and acted as a focus for libel reform campaigners, resulting in all major parties in the 2010 general election making manifesto commitments to libel reform.
On 25 April 2013 the Defamation Act 2013 received Queen Elizabeth II’s Royal Assent and became law. The purpose of the reformed law of defamation is to 'ensure that a fair balance is struck between the right to freedom of expression and the protection of reputation'. Under the new law, claimants must show that they suffer serious harm before the court will accept the case. Additional protection for website operators, defence of 'responsible publication on matters of public interest' and new statutory defences of truth and honest opinion are also part of the key areas of the new law.
Publications
Fermat's Last Theorem (1997) – the theorem's initial conjecture and eventual proof
The Code Book (1999) – a history of cryptography –
Big Bang (2004) – discusses models for the origin of the universe –
Trick or Treatment?: Alternative Medicine on Trial (2008) (with Edzard Ernst) – examines various types of alternative medicine, finds lack of evidence –
The Simpsons and Their Mathematical Secrets (2013) – highlights mathematical references in The Simpsons –
Personal life
Singh married journalist and broadcaster Anita Anand in 2007. The couple have two sons and live in Richmond, London.
References
External links
Official website
1964 births
Living people
Alumni of Emmanuel College, Cambridge
Alumni of Imperial College London
Critics of alternative medicine
The Doon School faculty
English people of Indian descent
English people of Punjabi descent
British writers of Indian descent
English humanists
English sceptics
English science writers
Mathematics popularizers
Mathematics writers
Members of the Order of the British Empire
People associated with CERN
People associated with The Institute for Cultural Research
People educated at Wellington School, Somerset
People from Wellington, Somerset
Recreational cryptographers
Beale ciphers
https://en.wikipedia.org/wiki/Beale%20ciphers
The Beale ciphers are a set of three ciphertexts, one of which allegedly states the location of a buried treasure of gold, silver and jewels estimated to be worth over US$43 million. Of the three ciphertexts, the first (unsolved) text describes the location, the second (solved) ciphertext describes the content of the treasure, and the third (unsolved) lists the names of the treasure's owners and their next of kin.
The story of the three ciphertexts originates from an 1885 pamphlet called The Beale Papers, detailing treasure being buried by a man named Thomas J. Beale in a secret location in Bedford County, Virginia, in about 1820. Beale entrusted a box containing the encrypted messages to a local innkeeper named Robert Morriss and then disappeared, never to be seen again. According to the story, the innkeeper opened the box 23 years later, and then decades after that gave the three encrypted ciphertexts to a friend before he died. The friend then spent the next twenty years of his life trying to decode the messages, and was able to solve only one of them, which gave details of the treasure buried and the general location of the treasure. The unnamed friend then published all three ciphertexts in a pamphlet which was advertised for sale in the 1880s.
Since the publication of the pamphlet, a number of attempts have been made to decode the two remaining ciphertexts and to locate the treasure, but all efforts have resulted in failure.
There are many arguments that the entire story is a hoax, including the 1980 article "A Dissenting Opinion" by cryptographer Jim Gillogly, and a 1982 scholarly analysis of the Beale Papers and their related story by Joe Nickell, using historical records that cast doubt on the existence of Thomas J. Beale. Nickell also presents linguistic evidence demonstrating that the documents could not have been written at the time alleged (words such as "stampeding", for instance, are of later vintage). His analysis of the writing style showed that Beale was almost certainly James B. Ward, whose 1885 pamphlet brought the Beale Papers to light. Nickell argues that the tale is thus a work of fiction; specifically, a "secret vault" allegory of the Freemasons; James B. Ward was a Mason himself.
Background
A pamphlet published in 1885, entitled The Beale Papers, is the source of this story. The treasure was said to have been obtained by an American named Thomas J. Beale in the early 1800s, from a mine to the north of Nuevo México (New Mexico), at that time in the Spanish province of Santa Fe de Nuevo México (an area that today would most likely be part of Colorado). According to the pamphlet, Beale was the leader of a group of 30 gentlemen adventurers from Virginia who stumbled upon the rich mine of gold and silver while hunting buffalo. They spent 18 months mining thousands of pounds of precious metals, which they then charged Beale with transporting to Virginia and burying in a secure location. After Beale made multiple trips to stock the hiding place, he then encrypted three messages: the location, a description of the treasure, and the names of its owners and their relatives. The treasure location is traditionally linked to Montvale in Bedford County, Virginia.
Beale placed the ciphertexts and some other papers in an iron box. In 1822 he entrusted the box to a Lynchburg innkeeper named Robert Morriss. Beale told Morriss not to open the box unless he or one of his men failed to return from their journey within 10 years. Sending a letter from St. Louis a few months later, Beale promised Morriss that a friend in St. Louis would mail the key to the cryptograms; however, it never arrived. It was not until 1845 that Morriss opened the box. Inside he found two plaintext letters from Beale, and several pages of ciphertext separated into Papers "1", "2", and "3". Morriss had no luck in solving the ciphers, and decades later left the box and its contents to an unnamed friend.
The friend, then using an edition of the United States Declaration of Independence as the key for a modified book cipher, successfully deciphered the second ciphertext which gave a description of the buried treasure. Unable to solve the other two ciphertexts, the friend ultimately made the letters and ciphertexts public in a pamphlet entitled The Beale Papers, which was published by yet another friend, James B. Ward, in 1885.
Ward is thus not "the friend". Ward himself is almost untraceable in local records, except that a man with that name owned the home in which a Sarah Morriss, identified as the spouse of Robert Morriss, died at age 77, in 1863. He also is recorded as becoming a Master Mason in 1863.
The pamphlet printed the three ciphertexts with their original line breaks; in the second cryptogram the original cipher errors were marked, and they are listed below.
Deciphered message
The plaintext of paper number 2 reads:
I have deposited in the county of Bedford, about four miles from Buford's, in an excavation or vault, six feet below the surface of the ground, the following articles, belonging jointly to the parties whose names are given in number three, herewith:
The first deposit consisted of ten hundred and fourteen pounds of gold, and thirty-eight hundred and twelve pounds of silver, deposited Nov. eighteen nineteen. The second was made Dec. eighteen twenty-one, and consisted of nineteen hundred and seven pounds of gold, and twelve hundred and eighty-eight of silver; also jewels, obtained in St. Louis in exchange to save transportation, and valued at thirteen thousand dollars.
The above is securely packed in iron pots, with iron covers. The vault is roughly lined with stone, and the vessels rest on solid stone, and are covered with others. Paper number one describes the exact locality of the vault, so that no difficulty will be had in finding it.
The second cipher can be decrypted fairly easily using a modified copy of the United States Declaration of Independence, but some editing is necessary. To decrypt it, one finds the word corresponding to the number (e.g., the first number is 115, and the 115th word in the Declaration of Independence is "instituted"), and takes the first letter of that word (in the case of the example, "I").
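A minimal Python sketch of this decoding step; the short key text and the cipher numbers are a toy illustration of ours, not the actual Declaration of Independence numbering (which needs the corrections listed below):

```python
def decode(numbers, key_text):
    """Beale-style book cipher: each number selects a word in the key text,
    and the first letter of that word is the next plaintext character."""
    words = key_text.lower().split()
    return "".join(words[n - 1][0] for n in numbers)  # numbers are 1-indexed

key = "we hold these truths to be self evident that all men are created equal"
print(decode([6, 10, 4], key))  # -> "bat": be (6), all (10), truths (4)
```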
Beale used a version of the United States Declaration of Independence slightly different from the original, and made mistakes in numbering it. To extract the hidden message, the following five modifications must be applied to the original text:
after word 154 ("institute") and before word 157 ("laying") one word must be added. The pamphlet handles this by inserting "a" before "new government".
after word 240 ("invariably") and before word 246 ("design") one word must be removed (probably "a"). The pamphlet's numbering has eleven words between the labels for 240 and 250.
after word 466 ("houses") and before word 495 ("be") ten words must be removed (probably "He has refused for a long time after such dissolutions"). The pamphlet has two labels for 480.
after word 630 ("eat") and before word 654 ("to") one word must be removed (probably "the"). The pamphlet's numbering has eleven words between the labels for 630 and 640.
after word 677 ("foreign") and before word 819 ("valuable") one word must be removed (probably "their"). The pamphlet's numbering has eleven words between the labels for 670 and 680.
Furthermore:
Words 78 and 79 ("self-evident"), shown hyphenated, are counted as 2 words.
The first letter of word 95 ("inalienable") is always used as a "u" ("unalienable").
Words 509 and 510 of the modified text ("mean time") are counted as two words, despite being shown as one word.
The first letter of the 811th word of the modified text ("fundamentally") is always used by Beale as a "y".
The first letter of the 1005th word of the modified text ("have") is always used by Beale as an "x".
Finally, in the decoded text there are six errors, probably due to wrong transcription of the original paper:
... 84, 57, 540, 217, 115, 71, 29, 84 (should be 85), 63, ... consistcd ("consisted").
... 53 (should be 54), 20, 125, 371, 38, 36, 10, 52, ... rhousand ("thousand").
... 2, 108 (should be 10, 8), 220, 106, 353, ... itron ("in iron").
... 440 (should be 40), 370, 643, 466, ... uith ("with").
... 14, 73, 84 (should be 85), ... thc ("the").
... 807, 81, 96 (should be 95), 405, 41, ... varlt ("vault").
Additional Declaration differences affect paper number 1: word 210 of the modified text ("more") is shown as "now"; words 919 and 920 of the modified text ("fellow citizens") are shown hyphenated (also affects paper number 3); two extra words ("made" and "the") are shown in modified text positions 1058 and 1188; a word is removed ("of") after modified text position 1125. The other slight changes probably have no consequences.
Many versions of the Declaration of Independence have been printed, with various adjustments to paragraphing, word inclusion, word changing, spelling, capitalization, and punctuation.
The lack of clear images of the original ciphers, combined with the large quantity of numerals, has led to numerals being misprinted or omitted in many sources.
The Beale Papers text, on pages 20 to 21, gives an alleged translation of the second ciphertext, but it has nine differences from the actual one. The differences are shown here as {alleged decipherment | actual decipherment}:
I have deposited, in the county of Bedford, about four miles from Buford's, in an excavation or vault, six feet below the surface of the ground, the following articles, belonging jointly to the parties whose names are given in number {“3,” | three} herewith:
The first deposit consisted of {one thousand | ten hundred} and fourteen pounds of gold, and {three thousand | thirty-} eight hundred and twelve pounds of silver, deposited {November, 1819 | Nov. eighteen nineteen}. The second was made {December, 1821 | Dec. eighteen twenty-one}, and consisted of nineteen hundred and seven pounds of gold, and twelve hundred and eighty-eight {pounds | } of silver; also jewels, obtained in St. Louis in exchange {for silver | } to save transportation, and valued at {$13,000 | thirteen thousand dollars}.
The above is securely packed in iron pots, with iron covers. The vault is roughly lined with stone, and the vessels rest on solid stone, and are covered with others. Paper number {“1” | one} describes the exact locality of the vault, so that no difficulty will be had in finding it.
A translation of the Cipher from the actual Declaration of Independence shows in fact very poor spelling:
"I haie deposoted in the copntt ol bedoort aboup four miles from bulords in an epcaiation or iault six fest below the surlact of thh gtound ths fotlowing articiss beaonging joiotlt to the partfes whosl namfs ate giiet in number thrff httewith.."
Value
The treasure's total weight is about 3 tons, as described in the inventory of the second cryptogram. This includes approximately 35,052 troy oz of gold and 61,200 troy oz of silver (worth about US$42 million and US$1 million, respectively, in January 2017), as well as jewels worth around US$220,000 in 2017.
Authenticity
There has been considerable debate over whether the remaining two ciphertexts are real or hoaxes. An early researcher, Carl Hammer of Sperry UNIVAC, used supercomputers of the late 1960s to analyze the ciphers and found that while the ciphers were poorly encoded, the two undeciphered ones did not show the patterns one would expect of randomly chosen numbers and probably encoded an intelligible text. Other questions remain about the authenticity of the pamphlet's account. In the words of one researcher, "To me, the pamphlet story has all the earmarks of a fake ... [There was] no evidence save the word of the unknown author of the pamphlet that he ever had the papers."
The pamphlet's background story has several implausibilities, and is based almost entirely on circumstantial evidence and hearsay.
Later cryptographers have claimed that the two remaining ciphertexts have statistical characteristics which suggest that they are not actually encryptions of an English plaintext. The near-alphabetical letter sequences produced when the first cipher is decoded with the Declaration key are both non-random, as indicated by Carl Hammer, and not words in English.
Others have also questioned why Beale would have bothered writing three different ciphertexts (with at least two keys, if not ciphers) for what is essentially a single message in the first place, particularly if he wanted to ensure that the next of kin received their share (as it is, with the treasure described, there is no incentive to decode the third cipher).
Analysis of the language used by the author of the pamphlet (the uses of punctuation, relative clauses, infinitives, conjunctives, and so on) has detected significant correlations between it and the writing style of Beale's letters, including the plaintext of the second cipher, suggesting that they may have been written by the same person.
The letters also contain several English words, such as "improvise", not otherwise recorded before the 1820s in English but used from French from 1786 in the New Orleans area, and stampede (Spanish) "an uproar". Beale's "stampeding" apparently first appears in print in the English language in 1832 but was used from 1786 to 1823 in New Orleans in French and Spanish.
The second message, describing the treasure, has been deciphered, but the others have not, suggesting a deliberate ploy to encourage interest in deciphering the other two texts, only to discover that they are hoaxes. In addition, the original sale price of the pamphlet, 50 cents, was a high price for the time, and the author writes that he expects "a wide circulation".
The third cipher appears to be too short to list thirty individuals' next of kin.
If the modified Declaration of Independence is used as a key for the first cipher, it yields long strings of letters in nearly alphabetical order, among other anomalies. According to the American Cryptogram Association, the chances of such sequences appearing multiple times in the one ciphertext by chance are less than one in a hundred million million. Although it is conceivable that the first cipher was intended as a proof of concept, letting decoders know that they were "on the right track" for one or more of the subsequent ciphers, such a proof would be redundant, as the success of the key with respect to the second document would provide the same evidence on its own.
Robert Morriss, as represented in the pamphlet, says he was running the Washington Hotel in 1820. Yet contemporary records show he did not start in that position until at least 1823.
In fact, the story of finding a lost treasure map is a common literary device, appearing in Edgar Allan Poe's "The Gold-Bug", Robert Louis Stevenson's Treasure Island, and Milton Caniff's Terry and the Pirates.
There have been many attempts to break the remaining cipher(s). Most attempts have tried other historical texts as keys (e.g., Magna Carta, various books of the Bible, the U.S. Constitution, and the Virginia Royal Charter), assuming the ciphertexts were produced with some book cipher, but none have been recognized as successful to date. Breaking the cipher(s) may depend on random chance (as, for instance, stumbling upon a book key if the two remaining ciphertexts are actually book ciphers); so far, even the most skilled cryptanalysts who have attempted them have been defeated. Of course, Beale could have used a document that he had written himself for either or both of the remaining keys, or randomly selected characters for the third cipher, in either case rendering any further attempts to crack the codes useless.
Existence of Thomas J. Beale
A survey of U.S. Census records in 1810 shows two persons named Thomas Beale, in Connecticut and New Hampshire. However, the population schedules from the 1810 U.S. Census are completely missing for seven states, one territory, the District of Columbia, and 18 of the counties of Virginia. The 1820 U.S. Census lists two persons named Thomas Beale: Captain Thomas Beale, a veteran of the 1815 Battle of New Orleans, living in Louisiana but originally from the Fincastle area of Botetourt County, Virginia (about 12 miles from Bedford County), and another in Tennessee; it also lists a Thomas K. Beale in Virginia. However, the population schedules are completely missing for three states and one territory.
Before 1850 the U.S. Census recorded the names of only the heads of households; others in the household were only counted. Beale, if he existed, may have been living in someone else's household.
In addition, a man named "Thomas Beall" appears in the customer lists of the St. Louis Post Department in 1820. According to the pamphlet, Beale sent a letter from St. Louis in 1822.
Additionally, a Cheyenne legend exists about gold and silver being taken from the West and buried in mountains in the East, dating from roughly 1820.
Poe's alleged authorship
Edgar Allan Poe has been suggested as the pamphlet's real author because he had an interest in cryptography. It was well known that he placed notices of his abilities in the Philadelphia paper Alexander's Weekly (Express) Messenger, inviting submissions of ciphers, which he proceeded to solve. In 1843 he used a cryptogram as a plot device in his short story "The Gold-Bug". From 1820 he was also living in Richmond, Virginia, at the time of Beale's alleged encounters with Morriss. In February 1826 Poe enrolled as a student at the University of Virginia in Charlottesville. But with mounting debts, Poe left for Boston in April 1827.
However, research and facts debunk Poe's authorship. He died in 1849, well before The Beale Papers were first published in 1885, and the pamphlet mentions the American Civil War, which started in 1861. William Poundstone, an American author and skeptic, had stylometric analysis performed on the pamphlet for his 1983 book Big Secrets, and found that Poe's prose is significantly different from the grammatical structure used by the author who wrote The Beale Papers.
Statistical analysis
Another method to check the validity of the ciphers is to investigate some statistical aspects in different number bases. For example, one can investigate the frequency of the last digit in each number in the ciphers. These frequencies are not uniformly distributed – some digits are more common than others. This is true for all three ciphers.
However, if one considers a base that is relatively prime to 10, then the last digits of the numbers in the unsolved ciphers become uniform: each digit is equally common. The frequency distribution of the solved cipher stays non-uniform. This indicates a complex behaviour in the solved cipher, as one might expect from an encoded message, while the unsolved ciphers behave more simply.
Humans have limited abilities when it comes to generating random numbers out of thin air. One explanation for the difference between base 10 and the other bases is that the numbers were produced by a human working in base 10, which would mean that the ciphers are fraudulent.
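A minimal sketch of this last-digit test in an arbitrary base, assuming the cipher is available as a list of integers (variable names are illustrative):

    from collections import Counter

    def last_digit_distribution(numbers, base):
        # Frequency of the least-significant digit of each number in the given base.
        counts = Counter(n % base for n in numbers)
        return {d: counts.get(d, 0) / len(numbers) for d in range(base)}

    # Hypothetical usage: cipher1 would hold the numbers of the first cipher.
    # print(last_digit_distribution(cipher1, 7))   # base 7 is relatively prime to 10

A distribution close to uniform in bases relatively prime to 10, but skewed in base 10, is the pattern described above for the unsolved ciphers.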
Search attempts
Despite the Beale Papers' unproven veracity, treasure hunters have not been deterred from trying to find the vault. The "information" that there is buried treasure in Bedford County has stimulated many expeditions with shovels and other implements of discovery, looking for likely spots. For more than a hundred years, people have been arrested for trespassing and unauthorized digging, some of them in groups, as in the case of people from Pennsylvania in the 1990s.
Several digs were completed at the top of Porter's Mountain, one in the late 1980s with the landowner's permission, as long as any treasure found was split 50/50. However, the treasure hunters only found Civil War artifacts. As the value of these artifacts paid for the time and equipment rental, the expedition broke even.
Media attention
The story has been the subject of multiple television documentaries, such as the UK's Mysteries series, a segment in the seventh special of Unsolved Mysteries, and the 2011 Declaration of Independence episode of the History Channel TV show Brad Meltzer's Decoded. There are also several books, and considerable Internet activity. In 2014, the National Geographic TV show The Numbers Game referred to the Beale ciphers as one of the strongest passwords ever created. In 2015 the UKTV series Myth Hunters (also known as Raiders of the Lost Past) devoted one of its season 3 episodes to the topic. Also in 2015, the Josh Gates series Expedition Unknown visited Bedford to investigate the Beale Ciphers and search for the treasure.
Simon Singh's 1999 book The Code Book explains the Beale cipher mystery in one of its chapters.
In 2010, an award-winning animated short film was made concerning the ciphers called The Thomas Beale Cipher.
See also
List of ciphertexts
Rennes-le-Château – a similar case where encrypted documents, discovered in a church in France, allegedly refer to a hidden treasure
Oak Island mystery – an undiscovered buried treasure on Oak Island in Nova Scotia
Captain Kidd – a 17th-century pirate who is supposed to have left behind clues to buried treasure
Treasure of Lima – another legendary lost treasure
Lost Dutchman's Gold Mine – another legendary lost treasure
References
Further reading
Viemeister, Peter. The Beale Treasure: New History of a Mystery, 1997. Published by Hamilton's, Bedford, Virginia
Gillogly, James J. "The Beale Cipher: A Dissenting Opinion", Cryptologia, Volume 4, Number 2, April 1980
Easterling, E.J. In Search Of A Golden Vault: The Beale Treasure Mystery (CD/audio book, 70 min.), copyright 1995, revised 2011. Avenel Publishing, 1122 Easter Lane, Blue Ridge, VA 24064
External links
"Historical and Analytical Studies in Relation to the Beale Ciphers"
Treasure
History of cryptography
Urban legends
Undeciphered historical codes and ciphers
Bedford County, Virginia |
449781 | https://en.wikipedia.org/wiki/Key%20derivation%20function | Key derivation function | In cryptography, a key derivation function (KDF) is a cryptographic algorithm that derives one or more secret keys from a secret value such as a main key, a password, or a passphrase using a pseudorandom function (which typically uses a cryptographic hash function or block cipher). KDFs can be used to stretch keys into longer keys or to obtain keys of a required format, such as converting a group element that is the result of a Diffie–Hellman key exchange into a symmetric key for use with AES. Keyed cryptographic hash functions are popular examples of pseudorandom functions used for key derivation.
Key derivation
The original use for a KDF is key derivation, the generation of keys from secret passwords or passphrases. Variations on this theme include:
In conjunction with non-secret parameters to derive one or more keys from a common secret value (which is sometimes also referred to as "key diversification"). Such use may prevent an attacker who obtains a derived key from learning useful information about either the input secret value or any of the other derived keys. A KDF may also be used to ensure that derived keys have other desirable properties, such as avoiding "weak keys" in some specific encryption systems.
As components of multiparty key-agreement protocols. Examples of such key derivation functions include KDF1, defined in IEEE Std 1363-2000, and similar functions in ANSI X9.42.
To derive keys from secret passwords or passphrases (a password-based KDF).
To derive keys of different length from the ones provided: one example of KDFs designed for this purpose is HKDF.
Key stretching and key strengthening.
Key stretching and key strengthening
Key derivation functions are also used in applications to derive keys from secret passwords or passphrases, which typically do not have the desired properties to be used directly as cryptographic keys. In such applications, it is generally recommended that the key derivation function be made deliberately slow so as to frustrate brute-force attack or dictionary attack on the password or passphrase input value.
Such use may be expressed as DK = KDF(key, salt, iterations), where DK is the derived key, KDF is the key derivation function, key is the original key or password, salt is a random number which acts as cryptographic salt, and iterations refers to the number of iterations of a sub-function. The derived key is used instead of the original key or password as the key to the system. The values of the salt and the number of iterations (if it is not fixed) are stored with the hashed password or sent as cleartext (unencrypted) with an encrypted message.
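A minimal sketch of this pattern using PBKDF2 from the Python standard library (the parameter values below are illustrative, not recommendations):

    import hashlib, os

    password = b"correct horse battery staple"
    salt = os.urandom(16)      # random cryptographic salt, stored alongside the result
    iterations = 100_000       # iteration count; slows down brute-force attacks

    # DK = KDF(key, salt, iterations)
    dk = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)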
The difficulty of a brute force attack is increased with the number of iterations. A practical limit on the iteration count is the unwillingness of users to tolerate a perceptible delay in logging into a computer or seeing a decrypted message. The use of salt prevents the attackers from precomputing a dictionary of derived keys.
An alternative approach, called key strengthening, extends the key with a random salt, but then (unlike in key stretching) securely deletes the salt. This forces both the attacker and legitimate users to perform a brute-force search for the salt value. Although the paper that introduced key stretching referred to this earlier technique and intentionally chose a different name, the term "key strengthening" is now often (arguably incorrectly) used to refer to key stretching.
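A sketch of the key-strengthening idea, in which a short salt is hashed in and then deleted, so even the legitimate user must brute-force it (the sizes and hash choice are illustrative):

    import hashlib, os

    def strengthen(key: bytes) -> bytes:
        salt = os.urandom(2)                       # short random salt (16 bits)
        dk = hashlib.sha256(key + salt).digest()
        return dk                                  # the salt is discarded, not stored

    def rederive(key: bytes, dk: bytes) -> bytes:
        # Attacker and legitimate user alike must search the salt space.
        for i in range(2**16):
            salt = i.to_bytes(2, "big")
            if hashlib.sha256(key + salt).digest() == dk:
                return salt
        raise ValueError("salt not found")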
Password hashing
Despite their original use for key derivation, KDFs are possibly better known for their use in password hashing (password verification by hash comparison), as used by the passwd file or shadow password file. Password hash functions should be relatively expensive to calculate in case of brute-force attacks, and the key stretching of KDFs happens to provide this characteristic. The non-secret parameters are called "salt" in this context.
In 2013 a Password Hashing Competition was announced to choose a new, standard algorithm for password hashing. On 20 July 2015 the competition ended and Argon2 was announced as the final winner. Four other algorithms received special recognition: Catena, Lyra2, Makwa and yescrypt.
History
The first deliberately slow (key stretching) password-based key derivation function was called "crypt" (or "crypt(3)" after its man page), and was invented by Robert Morris in 1978. It would encrypt a constant (zero), using the first 8 characters of the user's password as the key, by performing 25 iterations of a modified DES encryption algorithm (in which a 12-bit number read from the real-time computer clock is used to perturb the calculations). The resulting 64-bit number is encoded as 11 printable characters and then stored in the Unix password file. While it was a great advance at the time, increases in processor speeds since the PDP-11 era have made brute-force attacks against crypt feasible, and advances in storage have rendered the 12-bit salt inadequate. The crypt function's design also limits the user password to 8 characters, which limits the keyspace and makes strong passphrases impossible.
Although high throughput is a desirable property in general-purpose hash functions, the opposite is true in password security applications in which defending against brute-force cracking is a primary concern. The growing use of massively-parallel hardware such as GPUs, FPGAs, and even ASICs for brute-force cracking has made the selection of a suitable algorithm even more critical, because a good algorithm should not only enforce a certain amount of computational cost on CPUs, but also resist the cost/performance advantages of modern massively-parallel platforms for such tasks. Various algorithms have been designed specifically for this purpose, including bcrypt, scrypt and, more recently, Lyra2 and Argon2 (the latter being the winner of the Password Hashing Competition). The large-scale Ashley Madison data breach, in which roughly 36 million password hashes were stolen by attackers, illustrated the importance of algorithm selection in securing passwords. Although bcrypt was employed to protect the hashes (making large-scale brute-force cracking expensive and time-consuming), a significant portion of the accounts in the compromised data also contained a password hash based on the fast general-purpose MD5 algorithm, which made it possible for over 11 million of the passwords to be cracked in a matter of weeks.
In June 2017, the U.S. National Institute of Standards and Technology (NIST) issued a new revision of their digital authentication guidelines, NIST SP 800-63B-3, stating that: "Verifiers SHALL store memorized secrets [i.e. passwords] in a form that is resistant to offline attacks. Memorized secrets SHALL be salted and hashed using a suitable one-way key derivation function. Key derivation functions take a password, a salt, and a cost factor as inputs then generate a password hash. Their purpose is to make each password guessing trial by an attacker who has obtained a password hash file expensive and therefore the cost of a guessing attack high or prohibitive."
Modern password-based key derivation functions, such as PBKDF2 (specified in RFC 2898), are based on a recognized cryptographic hash, such as SHA-2, use more salt (at least 64 bits and chosen randomly) and a high iteration count. NIST recommends a minimum iteration count of 10,000.
"For especially critical keys, or for very powerful systems or systems where user-perceived performance is not critical, an iteration count of 10,000,000 may be appropriate.”
References
Further reading
Key Derivation Functions
Cryptography
Key management |
450541 | https://en.wikipedia.org/wiki/Zero-knowledge%20proof | Zero-knowledge proof | In cryptography, a zero-knowledge proof or zero-knowledge protocol is a method by which one party (the prover) can prove to another party (the verifier) that a given statement is true while the prover avoids conveying any additional information apart from the fact that the statement is indeed true. The essence of zero-knowledge proofs is that it is trivial to prove that one possesses knowledge of certain information by simply revealing it; the challenge is to prove such possession without revealing the information itself or any additional information.
If proving a statement requires that the prover possess some secret information, then the verifier will not be able to prove the statement to anyone else without possessing the secret information. The statement being proved must include the assertion that the prover has such knowledge, but without including or transmitting the knowledge itself in the assertion. Otherwise, the statement would not be proved in zero-knowledge because it provides the verifier with additional information about the statement by the end of the protocol. A zero-knowledge proof of knowledge is a special case when the statement consists only of the fact that the prover possesses the secret information.
Interactive zero-knowledge proofs require interaction between the individual (or computer system) proving their knowledge and the individual validating the proof.
A protocol implementing zero-knowledge proofs of knowledge must necessarily require interactive input from the verifier. This interactive input is usually in the form of one or more challenges such that the responses from the prover will convince the verifier if and only if the statement is true, i.e., if the prover does possess the claimed knowledge. If this were not the case, the verifier could record the execution of the protocol and replay it to convince someone else that they possess the secret information. The new party's acceptance is either justified since the replayer does possess the information (which implies that the protocol leaked information, and thus, is not proved in zero-knowledge), or the acceptance is spurious, i.e., was accepted from someone who does not actually possess the information.
Some forms of non-interactive zero-knowledge proofs exist, but the validity of the proof relies on computational assumptions (typically the assumptions of an ideal cryptographic hash function).
Abstract examples
The Ali Baba cave
There is a well-known story presenting the fundamental ideas of zero-knowledge proofs, first published by Jean-Jacques Quisquater and others in their paper "How to Explain Zero-Knowledge Protocols to Your Children". It is common practice to label the two parties in a zero-knowledge proof as Peggy (the prover of the statement) and Victor (the verifier of the statement).
In this story, Peggy has uncovered the secret word used to open a magic door in a cave. The cave is shaped like a ring, with the entrance on one side and the magic door blocking the opposite side. Victor wants to know whether Peggy knows the secret word; but Peggy, being a very private person, does not want to reveal her knowledge (the secret word) to Victor or to reveal the fact of her knowledge to the world in general.
They label the left and right paths from the entrance A and B. First, Victor waits outside the cave as Peggy goes in. Peggy takes either path A or B; Victor is not allowed to see which path she takes. Then, Victor enters the cave and shouts the name of the path he wants her to use to return, either A or B, chosen at random. Providing she really does know the magic word, this is easy: she opens the door, if necessary, and returns along the desired path.
However, suppose she did not know the word. Then, she would only be able to return by the named path if Victor were to give the name of the same path by which she had entered. Since Victor would choose A or B at random, she would have a 50% chance of guessing correctly. If they were to repeat this trick many times, say 20 times in a row, her chance of successfully anticipating all of Victor's requests would become vanishingly small (1 in 2^20, or very roughly 1 in a million).
Thus, if Peggy repeatedly appears at the exit Victor names, he can conclude that it is extremely probable that Peggy does, in fact, know the secret word.
One side note with respect to third-party observers: even if Victor is wearing a hidden camera that records the whole transaction, the only thing the camera will record is in one case Victor shouting "A!" and Peggy appearing at A or in the other case Victor shouting "B!" and Peggy appearing at B. A recording of this type would be trivial for any two people to fake (requiring only that Peggy and Victor agree beforehand on the sequence of A's and B's that Victor will shout). Such a recording will certainly never be convincing to anyone but the original participants. In fact, even a person who was present as an observer at the original experiment would be unconvinced, since Victor and Peggy might have orchestrated the whole "experiment" from start to finish.
Further notice that if Victor chooses his A's and B's by flipping a coin on-camera, this protocol loses its zero-knowledge property; the on-camera coin flip would probably be convincing to any person watching the recording later. Thus, although this does not reveal the secret word to Victor, it does make it possible for Victor to convince the world in general that Peggy has that knowledge—counter to Peggy's stated wishes. However, digital cryptography generally "flips coins" by relying on a pseudo-random number generator, which is akin to a coin with a fixed pattern of heads and tails known only to the coin's owner. If Victor's coin behaved this way, then again it would be possible for Victor and Peggy to have faked the "experiment", so using a pseudo-random number generator would not reveal Peggy's knowledge to the world in the same way that using a flipped coin would.
Notice that Peggy could prove to Victor that she knows the magic word, without revealing it to him, in a single trial. If both Victor and Peggy go together to the mouth of the cave, Victor can watch Peggy go in through A and come out through B. This would prove with certainty that Peggy knows the magic word, without revealing the magic word to Victor. However, such a proof could be observed by a third party, or recorded by Victor and such a proof would be convincing to anybody. In other words, Peggy could not refute such proof by claiming she colluded with Victor, and she is therefore no longer in control of who is aware of her knowledge.
Two balls and the colour-blind friend
Imagine your friend is red-green colour-blind (while you are not) and you have two balls: one red and one green, but otherwise identical. To your friend they seem completely identical and he is skeptical that they are actually distinguishable. You want to prove to him they are in fact differently-coloured, but nothing else; in particular, you do not want to reveal which one is the red and which is the green ball.
Here is the proof system. You give the two balls to your friend and he puts them behind his back. Next, he takes one of the balls and brings it out from behind his back and displays it. He then places it behind his back again and then chooses to reveal just one of the two balls, picking one of the two at random with equal probability. He will ask you, "Did I switch the ball?" This whole procedure is then repeated as often as necessary.
By looking at their colours, you can, of course, say with certainty whether or not he switched them. On the other hand, if they were the same colour and hence indistinguishable, there is no way you could guess correctly with probability higher than 50%.
Since the probability that you would have randomly succeeded at identifying each switch/non-switch is 50%, the probability of having randomly succeeded at all switch/non-switches approaches zero ("soundness"). If you and your friend repeat this "proof" multiple times (e.g. 20 times), your friend should become convinced ("completeness") that the balls are indeed differently coloured.
The above proof is zero-knowledge because your friend never learns which ball is green and which is red; indeed, he gains no knowledge about how to distinguish the balls.
Where's Wally?
Where's Wally? (titled Where's Waldo? in North America) is a picture book where the reader is challenged to find a small character called Wally hidden somewhere on a double-spread page that is filled with many other characters. The pictures are designed so that it is hard to find Wally.
Imagine that you are a professional Where's Wally? solver. A company comes to you with a Where's Wally? book that they need solved. The company wants you to prove that you are actually a professional Where's Wally? solver and thus asks you to find Wally in a picture from their book. The problem is that you don't want to do work for them without being paid.
Both you and the company want to cooperate, but you don't trust each other. It doesn't seem like it's possible to satisfy the company's demand without doing free work for them, but in fact there is a zero-knowledge proof which allows you to prove to the company that you know where Wally is in the picture without revealing to them how you found him, or where he is.
The proof goes as follows: You ask the company representative to turn around, and then you place a very large piece of cardboard (several times larger than the book) over the picture in the book such that the center of the cardboard is positioned over Wally. You cut out a small window in the center of the cardboard such that Wally is visible. You can now ask the company representative to turn around and view the large piece of cardboard with the hole in the middle, and observe that Wally is visible through the hole. The cardboard is large enough that the company rep cannot determine the position of the book under the cardboard. You then ask the representative to turn back around so that you can remove the cardboard and give back the book.
As described, this proof is an illustration only, and not completely rigorous. The company representative would need to be sure that you didn't smuggle a picture of Wally into the room. Something like a tamper-proof glovebox might be used in a more rigorous proof. The above proof also results in the body position of Wally being leaked to the company representative, which may help them find Wally if his body position changes in each Where's Wally? puzzle.
Definition
A zero-knowledge proof of some statement must satisfy three properties:
Completeness: if the statement is true, the honest verifier (that is, one following the protocol properly) will be convinced of this fact by an honest prover.
Soundness: if the statement is false, no cheating prover can convince the honest verifier that it is true, except with some small probability.
Zero-knowledge: if the statement is true, no verifier learns anything other than the fact that the statement is true. In other words, just knowing the statement (not the secret) is sufficient to imagine a scenario showing that the prover knows the secret. This is formalized by showing that every verifier has some simulator that, given only the statement to be proved (and no access to the prover), can produce a transcript that "looks like" an interaction between the honest prover and the verifier in question.
The first two of these are properties of more general interactive proof systems. The third is what makes the proof zero-knowledge.
Zero-knowledge proofs are not proofs in the mathematical sense of the term because there is some small probability, the soundness error, that a cheating prover will be able to convince the verifier of a false statement. In other words, zero-knowledge proofs are probabilistic "proofs" rather than deterministic proofs. However, there are techniques to decrease the soundness error to negligibly small values.
A formal definition of zero-knowledge has to use some computational model, the most common one being that of a Turing machine. Let P, V, and S be Turing machines. An interactive proof system (P, V) for a language L is zero-knowledge if for any probabilistic polynomial time (PPT) verifier V* there exists a PPT simulator S such that

    for every x in L and every string z:  View_V*[P(x) <-> V*(x, z)] = S(x, z),

where View_V*[P(x) <-> V*(x, z)] is a record of the interactions between P(x) and V*(x, z). The prover P is modeled as having unlimited computation power (in practice, P usually is a probabilistic Turing machine). Intuitively, the definition states that an interactive proof system (P, V) is zero-knowledge if for any verifier V* there exists an efficient simulator S (depending on V*) that can reproduce the conversation between P and V* on any given input. The auxiliary string z in the definition plays the role of "prior knowledge" (including the random coins of V*). The definition implies that V* cannot use any prior knowledge string z to mine information out of its conversation with P, because if S is also given this prior knowledge then it can reproduce the conversation between P and V* just as before.
The definition given is that of perfect zero-knowledge. Computational zero-knowledge is obtained by requiring that the views of the verifier and the simulator are only computationally indistinguishable, given the auxiliary string.
Practical examples
Discrete log of a given value
We can apply these ideas to a more realistic cryptography application. Peggy wants to prove to Victor that she knows the discrete log of a given value in a given group.
For example, given a value y, a large prime p and a generator g, she wants to prove that she knows a value x such that g^x mod p = y, without revealing x. Indeed, knowledge of x could be used as a proof of identity, in that Peggy could have such knowledge because she chose a random value x that she didn't reveal to anyone, computed y = g^x mod p and distributed the value of y to all potential verifiers, such that at a later time, proving knowledge of x is equivalent to proving identity as Peggy.
The protocol proceeds as follows: in each round, Peggy generates a random number r, computes C = g^r mod p and discloses this to Victor. After receiving C, Victor randomly issues one of the following two requests: he either requests that Peggy discloses the value of r, or the value of (x + r) mod (p - 1). With either answer, Peggy is only disclosing a random value, so no information is disclosed by a correct execution of one round of the protocol.
Victor can verify either answer; if he requested r, he can then compute g^r mod p and verify that it matches C. If he requested (x + r) mod (p - 1), he can verify that C is consistent with this, by computing g^((x + r) mod (p - 1)) mod p and verifying that it matches C·y mod p. If Peggy indeed knows the value of x, she can respond to either one of Victor's possible challenges.
If Peggy knew or could guess which challenge Victor is going to issue, then she could easily cheat and convince Victor that she knows x when she does not: if she knows that Victor is going to request r, then she proceeds normally: she picks r, computes C = g^r mod p and discloses C to Victor; she will be able to respond to Victor's challenge. On the other hand, if she knows that Victor will request (x + r) mod (p - 1), then she picks a random value r', computes C' = g^(r') · (g^x)^(-1) mod p, and discloses C' to Victor as the value of C that he is expecting. When Victor challenges her to reveal (x + r) mod (p - 1), she reveals r', for which Victor will verify consistency, since he will in turn compute g^(r') mod p, which matches C'·y mod p, since Peggy multiplied by the modular multiplicative inverse of y.
However, if in either one of the above scenarios Victor issues a challenge other than the one she was expecting and for which she manufactured the result, then she will be unable to respond to the challenge under the assumption of infeasibility of solving the discrete log for this group. If she picked r and disclosed C = g^r mod p, then she will be unable to produce a valid (x + r) mod (p - 1) that would pass Victor's verification, given that she does not know x. And if she picked a value C' that poses as C = g^r mod p, then she would have to respond with the discrete log of the value that she disclosed, but Peggy does not know this discrete log, since the value C' she disclosed was obtained through arithmetic with known values, and not by computing a power with a known exponent.
Thus, a cheating prover has a 0.5 probability of successfully cheating in one round. By executing a large enough number of rounds, the probability of a cheating prover succeeding can be made arbitrarily low.
Short summary
Peggy proves to know the value of x (for example her password).
Peggy and Victor agree on a prime p and a generator g of the multiplicative group of the field Z_p (the integers modulo p).
Peggy calculates the value y = g^x mod p and transfers the value y to Victor.
The following two steps are repeated a (large) number of times.
Peggy repeatedly picks a random value r and calculates C = g^r mod p. She transfers the value C to Victor.
Victor asks Peggy to calculate and transfer either the value (x + r) mod (p - 1) or the value r. In the first case Victor verifies C·y = g^((x + r) mod (p - 1)) mod p. In the second case he verifies C = g^r mod p.
The value (x + r) mod (p - 1) can be seen as the encrypted value of x. If r is truly random, equally distributed between zero and p - 2, this does not leak any information about x (see one-time pad).
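A toy simulation of one round of this protocol (the tiny prime is for readability only; a real system would use a large prime):

    import random

    p, g = 1019, 2            # illustrative prime and generator
    x = 357                   # Peggy's secret
    y = pow(g, x, p)          # public value

    def round_ok() -> bool:
        r = random.randrange(p - 1)
        C = pow(g, r, p)                     # Peggy's commitment
        if random.random() < 0.5:            # Victor asks for r
            return pow(g, r, p) == C
        else:                                # Victor asks for (x + r) mod (p - 1)
            s = (x + r) % (p - 1)
            return pow(g, s, p) == (C * y) % p

    assert all(round_ok() for _ in range(20))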
Hamiltonian cycle for a large graph
The following scheme is due to Manuel Blum.
In this scenario, Peggy knows a Hamiltonian cycle for a large graph G. Victor knows G but not the cycle (e.g., Peggy has generated G and revealed it to him). Finding a Hamiltonian cycle given a large graph is believed to be computationally infeasible, since its corresponding decision version is known to be NP-complete. Peggy will prove that she knows the cycle without simply revealing it (perhaps Victor is interested in buying it but wants verification first, or maybe Peggy is the only one who knows this information and is proving her identity to Victor).
To show that Peggy knows this Hamiltonian cycle, she and Victor play several rounds of a game.
At the beginning of each round, Peggy creates H, a graph which is isomorphic to G (i.e. H is just like G except that all the vertices have different names). Since it is trivial to translate a Hamiltonian cycle between isomorphic graphs with known isomorphism, if Peggy knows a Hamiltonian cycle for G she also must know one for H.
Peggy commits to H. She could do so by using a cryptographic commitment scheme. Alternatively, she could number the vertices of H, then for each edge of H write the two vertices of the edge on a small piece of paper and then put these pieces of paper face down on a table. The purpose of this commitment is that Peggy is not able to change H while at the same time Victor has no information about H.
Victor then randomly chooses one of two questions to ask Peggy. He can either ask her to show the isomorphism between H and G (see graph isomorphism problem), or he can ask her to show a Hamiltonian cycle in H.
If Peggy is asked to show that the two graphs are isomorphic, she first uncovers all of H (e.g. by turning over all pieces of paper that she put on the table) and then provides the vertex translations that map G to H. Victor can verify that they are indeed isomorphic.
If Peggy is asked to prove that she knows a Hamiltonian cycle in H, she translates her Hamiltonian cycle in G onto H and only uncovers the edges on the Hamiltonian cycle. This is enough for Victor to check that H does indeed contain a Hamiltonian cycle.
It is important that the commitment to the graph be such that Victor can verify, in the second case, that the cycle is really made of edges from H. This can be done by, for example, committing to every edge (or lack thereof) separately.
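A sketch of such a per-edge commitment built from a salted hash, a simple stand-in for a full commitment scheme (names are illustrative):

    import hashlib, os

    def commit(bit: int):
        # Commit to one adjacency-matrix entry (1 = edge, 0 = no edge).
        nonce = os.urandom(16)
        digest = hashlib.sha256(nonce + bytes([bit])).digest()
        return digest, nonce        # publish digest; keep nonce secret until opening

    def verify(digest: bytes, nonce: bytes, bit: int) -> bool:
        return hashlib.sha256(nonce + bytes([bit])).digest() == digest

Peggy would commit to every entry of H's adjacency matrix; to show a Hamiltonian cycle she opens only the commitments for the cycle's edges.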
Completeness
If Peggy does know a Hamiltonian cycle in G, she can easily satisfy Victor's demand for either the graph isomorphism producing H from G (which she had committed to in the first step) or a Hamiltonian cycle in H (which she can construct by applying the isomorphism to the cycle in G).
Zero-knowledge
Peggy's answers do not reveal the original Hamiltonian cycle in G. Each round, Victor will learn only H's isomorphism to G or a Hamiltonian cycle in H. He would need both answers for a single H to discover the cycle in G, so the information remains unknown as long as Peggy can generate a distinct H every round. If Peggy does not know of a Hamiltonian cycle in G, but somehow knew in advance what Victor would ask to see each round, then she could cheat. For example, if Peggy knew ahead of time that Victor would ask to see the Hamiltonian cycle in H, then she could generate a Hamiltonian cycle for an unrelated graph. Similarly, if Peggy knew in advance that Victor would ask to see the isomorphism, then she could simply generate an isomorphic graph H (in which she also does not know a Hamiltonian cycle). Victor could simulate the protocol by himself (without Peggy) because he knows what he will ask to see. Therefore, Victor gains no information about the Hamiltonian cycle in G from the information revealed in each round.
Soundness
If Peggy does not know the information, she can guess which question Victor will ask and generate either a graph isomorphic to G or a Hamiltonian cycle for an unrelated graph, but since she does not know a Hamiltonian cycle for G she cannot do both. With this guesswork, her chance of fooling Victor is 2^(-n), where n is the number of rounds. For all realistic purposes, it is infeasibly difficult to defeat a zero-knowledge proof with a reasonable number of rounds in this way.
Variants of zero-knowledge
Different variants of zero-knowledge can be defined by formalizing the intuitive concept of what is meant by the output of the simulator "looking like" the execution of the real proof protocol in the following ways:
We speak of perfect zero-knowledge if the distributions produced by the simulator and the proof protocol are distributed exactly the same. This is for instance the case in the first example above.
Statistical zero-knowledge means that the distributions are not necessarily exactly the same, but they are statistically close, meaning that their statistical difference is a negligible function.
We speak of computational zero-knowledge if no efficient algorithm can distinguish the two distributions.
Zero knowledge types
Proof of knowledge: the knowledge is hidden in the exponent, as in the example shown above.
Pairing based cryptography: given f(x) and f(y), without knowing x and y, it is possible to compute f(x·y).
Witness indistinguishable proof: verifiers cannot know which witness is used for producing the proof.
Multi-party computation: while each party can keep their respective secret, they together produce a result.
Ring signature: outsiders have no idea which key is used for signing.
Applications
Authentication systems
Research in zero-knowledge proofs has been motivated by authentication systems where one party wants to prove its identity to a second party via some secret information (such as a password) but doesn't want the second party to learn anything about this secret. This is called a "zero-knowledge proof of knowledge". However, a password is typically too small or insufficiently random to be used in many schemes for zero-knowledge proofs of knowledge. A zero-knowledge password proof is a special kind of zero-knowledge proof of knowledge that addresses the limited size of passwords.
In April 2015, the Sigma protocol (one-out-of-many proofs) was introduced. In August 2021, Cloudflare, an American web infrastructure and security company, decided to use the one-out-of-many proofs mechanism for private web verification using vendor hardware.
Ethical behavior
One of the uses of zero-knowledge proofs within cryptographic protocols is to enforce honest behavior while maintaining privacy. Roughly, the idea is to force a user to prove, using a zero-knowledge proof, that its behavior is correct according to the protocol. Because of soundness, we know that the user must really act honestly in order to be able to provide a valid proof. Because of zero knowledge, we know that the user does not compromise the privacy of its secrets in the process of providing the proof.
Nuclear disarmament
In 2016, the Princeton Plasma Physics Laboratory and Princeton University demonstrated a technique that may have applicability to future nuclear disarmament talks. It would allow inspectors to confirm whether or not an object is indeed a nuclear weapon without recording, sharing or revealing the internal workings which might be secret.
Blockchains
Zero-knowledge proofs were applied in the Zerocoin and Zerocash protocols, which culminated in the birth of the Zcoin (later rebranded as Firo in 2020) and Zcash cryptocurrencies in 2016. Zerocoin has a built-in mixing model that does not trust any peers or centralised mixing providers to ensure anonymity. Users can transact in a base currency, and can cycle the currency into and out of Zerocoins. The Zerocash protocol uses a similar model (a variant known as a non-interactive zero-knowledge proof), except that it can obscure the transaction amount, while Zerocoin cannot. Given significant restrictions of transaction data on the Zerocash network, Zerocash is less prone to privacy timing attacks when compared to Zerocoin. However, this additional layer of privacy can cause potentially undetected hyperinflation of the Zerocash supply, because fraudulent coins cannot be tracked.
In 2018, Bulletproofs were introduced. Bulletproofs are an improvement on non-interactive zero-knowledge proofs in which a trusted setup is not needed. They were later implemented into the Mimblewimble protocol (on which the Grin and Beam cryptocurrencies are based) and the Monero cryptocurrency. In 2019, Firo implemented the Sigma protocol, which is an improvement on the Zerocoin protocol without trusted setup. In the same year, Firo introduced the Lelantus protocol, an improvement on the Sigma protocol where the former hides the origin and amount of a transaction.
History
Zero-knowledge proofs were first conceived in 1985 by Shafi Goldwasser, Silvio Micali, and Charles Rackoff in their paper "The Knowledge Complexity of Interactive Proof-Systems". This paper introduced the IP hierarchy of interactive proof systems (see interactive proof system) and conceived the concept of knowledge complexity, a measurement of the amount of knowledge about the proof transferred from the prover to the verifier. They also gave the first zero-knowledge proof for a concrete problem, that of deciding quadratic nonresidues mod m. Together with a paper by László Babai and Shlomo Moran, this landmark paper invented interactive proof systems, for which all five authors won the first Gödel Prize in 1993.
In their own words, Goldwasser, Micali, and Rackoff say:
Of particular interest is the case where this additional knowledge is essentially 0 and we show that [it] is possible to interactively prove that a number is quadratic non residue mod m releasing 0 additional knowledge. This is surprising as no efficient algorithm for deciding quadratic residuosity mod m is known when m’s factorization is not given. Moreover, all known NP proofs for this problem exhibit the prime factorization of m. This indicates that adding interaction to the proving process, may decrease the amount of knowledge that must be communicated in order to prove a theorem.
The quadratic nonresidue problem has both an NP and a co-NP algorithm, and so lies in the intersection of NP and co-NP. This was also true of several other problems for which zero-knowledge proofs were subsequently discovered, such as an unpublished proof system by Oded Goldreich verifying that a two-prime modulus is not a Blum integer.
Oded Goldreich, Silvio Micali, and Avi Wigderson took this one step further, showing that, assuming the existence of unbreakable encryption, one can create a zero-knowledge proof system for the NP-complete graph coloring problem with three colors. Since every problem in NP can be efficiently reduced to this problem, this means that, under this assumption, all problems in NP have zero-knowledge proofs. The reason for the assumption is that, as in the above example, their protocols require encryption. A commonly cited sufficient condition for the existence of unbreakable encryption is the existence of one-way functions, but it is conceivable that some physical means might also achieve it.
On top of this, they also showed that the graph nonisomorphism problem, the complement of the graph isomorphism problem, has a zero-knowledge proof. This problem is in co-NP, but is not currently known to be in either NP or any practical class. More generally, Russell Impagliazzo and Moti Yung as well as Ben-Or et al. would go on to show that, also assuming one-way functions or unbreakable encryption, there are zero-knowledge proofs for all problems in IP = PSPACE, or in other words, anything that can be proved by an interactive proof system can be proved with zero knowledge.
Not liking to make unnecessary assumptions, many theorists sought a way to eliminate the necessity of one way functions. One way this was done was with multi-prover interactive proof systems (see interactive proof system), which have multiple independent provers instead of only one, allowing the verifier to "cross-examine" the provers in isolation to avoid being misled. It can be shown that, without any intractability assumptions, all languages in NP have zero-knowledge proofs in such a system.
It turns out that in an Internet-like setting, where multiple protocols may be executed concurrently, building zero-knowledge proofs is more challenging. The line of research investigating concurrent zero-knowledge proofs was initiated by the work of Dwork, Naor, and Sahai. One particular development along these lines has been the development of witness-indistinguishable proof protocols. The property of witness-indistinguishability is related to that of zero-knowledge, yet witness-indistinguishable protocols do not suffer from the same problems of concurrent execution.
Another variant of zero-knowledge proofs are non-interactive zero-knowledge proofs. Blum, Feldman, and Micali showed that a common random string shared between the prover and the verifier is enough to achieve computational zero-knowledge without requiring interaction.
Zero-knowledge proof protocols
The most popular interactive or non-interactive zero-knowledge proof (zk-SNARK) protocols can be broadly categorized into the following four categories: Succinct Non-Interactive Arguments of Knowledge (SNARK), Scalable Transparent ARgument of Knowledge (STARK), Verifiable Polynomial Delegation (VPD), and Succinct Non-interactive ARGuments (SNARG). A list of zero-knowledge proof protocols and libraries is provided below along with comparisons based on transparency, universality, plausible post-quantum security, and programming paradigm. A transparent protocol is one that does not require any trusted setup and uses public randomness. A universal protocol is one that does not require a separate trusted setup for each circuit. Finally, a plausibly post-quantum protocol is one that is not susceptible to known attacks involving quantum algorithms.
See also
Arrow information paradox
Cryptographic protocol
Feige–Fiat–Shamir identification scheme
Proof of knowledge
Topics in cryptography
Witness-indistinguishable proof
Zero-knowledge password proof
Non-interactive zero-knowledge proof
References
Theory of cryptography
Zero-knowledge protocols |
450587 | https://en.wikipedia.org/wiki/Hellschreiber | Hellschreiber | The Hellschreiber, Feldhellschreiber or Typenbildfeldfernschreiber (also Hell-Schreiber named after its inventor Rudolf Hell) is a facsimile-based teleprinter invented by Rudolf Hell. Compared to contemporary teleprinters that were based on typewriter systems and were mechanically complex and expensive, the Hellschreiber was much simpler and more robust, with far fewer moving parts. It has the added advantage of being capable of providing intelligible communication even over very poor quality radio or cable links, where voice or other teledata would be unintelligible.
The device was first developed in the late 1920s, and saw use starting in the 1930s, chiefly being used for land-line press services. During WW2 it was sometimes used by the German military in conjunction with the Enigma encryption system. In the post-war era, it became increasingly common among newswire services, and was used in this role well into the 1980s. In modern times Hellschreiber is used as a communication mode by amateur radio operators using computers and sound cards; the resulting mode is referred to as Hellschreiber, Feld-Hell, or simply Hell.
Operation
Hellschreiber sends a line of text as a series of vertical columns. Each column is broken down vertically into a series of pixels, normally using a 7 by 7 pixel grid to represent characters. The data for a line is then sent as a series of on-off signals to the receiver, using a variety of formats depending on the medium, but normally at a rate of 112.5 baud.
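A sketch of this column scan for a single character (the 7×7 bitmap below is made up for illustration, not the historical Hell font, and the scan order is an assumption):

    GLYPHS = {
        "L": ["1111111", "0000001", "0000001", "0000001",
              "0000001", "0000000", "0000000"],   # 7 columns of 7 pixels each
    }

    def keying_stream(ch):
        # Yield the on/off pixel stream for one character, column by column.
        for column in GLYPHS[ch]:
            for pixel in column:
                yield pixel == "1"

    # At 112.5 baud each pixel lasts 1/112.5 s, roughly 8.9 ms.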
At the receiver end, a paper tape is fed at a constant speed over a roller. Located above the roller is a spinning cylinder with small bumps in a helical pattern on the surface. The received signal is amplified and sent to a magnetic actuator that pulls the cylinder down onto the roller, hammering out a dot into the surface of the paper. A Hellschreiber will print each received column twice, one below the other. This is to compensate for slight timing errors that are often present in the equipment, and causes the text to slant. The received text can look like two identical texts coming out one below the other, or a line of text coming out in the middle, with chopped-off lines above and below. In either case, at least one whole letter can be read at all times.
The original Hellschreiber machine was a mechanical device, and it was therefore possible to send "half-pixels". The right ends of the loops in B, for instance, could be shifted a little so as to improve readability. Any on-signal could in any case last no shorter than 8 ms, both to restrict the occupied bandwidth on the radio and because of the mechanical makeup of the receiving machinery.
Improvements that came as a result of software implementation:
Depicting the received signal as shades of gray instead of monochrome, thereby making it much easier to read weak signals.
Changing to a different font. This is one mode that is truly international and independent of character sets: anything that can be depicted as markings within a grid 7 pixels high can be transmitted over the air.
Variants
Hellschreiber has also spawned a number of variants over the years, many of them due to radio amateur efforts in the 1990s. Examples of them are:
PSK Hell encodes a pixel's brightness in the carrier phase instead of the amplitude. Strictly speaking, it's encoded in the change of the phase (differential phase shift keying): an unchanged phase in the beginning of a pixel means white, and a reversed phase means black. It operates at 105 or 245 baud.
FM Hell (or FSK Hell) uses frequency modulation with a careful control of phase, essentially minimum-shift keying. The most common variant is FSK Hell-105.
Duplo Hell is a dual tone mode which sends two columns at a time at different frequencies (980 Hz and 1225/1470 Hz).
C/MT Hell or concurrent multitone Hell sends all rows at the same time using tones at different frequencies. The transmission can be read using an FFT display. It allows for high resolutions.
S/MT Hell or sequential multitone Hell is like C/MT but it sends only one tone (for one row) at a time. As a result, received characters have a bit of slant; they look like an oblique font.
Slowfeld
Slowfeld is an experimental narrow-band communication program that makes use of the Hellschreiber principle, requiring that the transmitter and receiver both use the same column-scan speed. Data is sent at a very slow rate and received via a Fast Fourier Transform routine, giving a bandwidth of several Hz. As long as the tuning is within several signal bandwidths, the result will appear. The transmission rate is around 3, 1.5, or 0.75 characters per second. Slowfeld, along with similar modes such as very slow QRSS Morse code, may be used when all other communication methods fail.
Media
See also
Dot matrix teletypewriter
References
External links
Feld Hell Club website
Hellschreiber on Signal Identification Wiki
FELD HELL, WW2 Hellschreiber and Hagenuk Ha5K39b in use (Using WWII equipment)
Military radio systems
Telecommunications equipment
Quantized radio modulation modes
Amateur radio
German inventions
Impact matrix printers
History of telecommunications
Telegraphy
Typewriters
1929 in science
1929 in Germany |
450657 | https://en.wikipedia.org/wiki/Running%20key%20cipher | Running key cipher | In classical cryptography, the running key cipher is a type of polyalphabetic substitution cipher in which a text, typically from a book, is used to provide a very long keystream. Usually, the book to be used would be agreed ahead of time, while the passage to be used would be chosen randomly for each message and secretly indicated somewhere in the message.
Example
The text used is The C Programming Language (1978 edition), and the tabula recta is the tableau. The plaintext is "Flee at once".
Page 63, line 1 is selected as the running key:
errors can occur in several places. A label has...
The running key is then written under the plaintext:

    Plaintext:   F L E E A T O N C E
    Running key: E R R O R S C A N O
    Ciphertext:  J C V S R L Q N P S
The message is then sent as "JCVSR LQNPS". However, unlike a Vigenère cipher, if the message is extended, the key is not repeated; the key text itself is used as the key. If the message is extended, such as "Flee at once. We are discovered", then the running key continues as before:

    Plaintext:   F L E E A T O N C E W E A R E D I S C O V E R E D
    Running key: E R R O R S C A N O C C U R I N S E V E R A L P L
    Ciphertext:  J C V S R L Q N P S Y G U I M Q A W X S M E C T O
To determine where to find the running key, a fake block of five ciphertext characters is subsequently added, with three denoting the page number and two the line number, using A=0, B=1, etc. to encode digits. Such a block is called an indicator block. The indicator block will be inserted as the second-to-last group of each message. (Many other schemes are possible for hiding indicator blocks.) Thus page 63, line 1 encodes as "AGDAB" (06301).
This yields a final message of "JCVSR LQNPS YGUIM QAWXS AGDAB MECTO".
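The example above can be reproduced in a few lines of code. The following sketch is an illustration added here, not part of the original article; it implements the tabula recta as addition modulo 26 and reproduces the ciphertext groups shown above (without the indicator block).

def running_key_encrypt(plaintext: str, running_key: str) -> str:
    # Tabula recta: add plaintext and key letters modulo 26.
    return ''.join(
        chr((ord(p) + ord(k) - 2 * ord('A')) % 26 + ord('A'))
        for p, k in zip(plaintext, running_key)
    )

plaintext = "FLEEATONCEWEAREDISCOVERED"
running_key = "ERRORSCANOCCURINSEVERALPL"  # page 63, line 1 of the key text
print(running_key_encrypt(plaintext, running_key))  # JCVSRLQNPSYGUIMQAWXSMECTO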
Variants
Modern variants of the running key cipher often replace the traditional tabula recta with bitwise exclusive or, operate on whole bytes rather than alphabetic letters, and derive their running keys from large files. Apart from possibly greater entropy density of the files, and the ease of automation, there is little practical difference between such variants and traditional methods.
Permutation generated running keys
A more compact running key can be used if one combinatorially generates text using several start pointers (or combination rules). For example, rather than starting at one place (a single pointer), one could use several start pointers and XOR the streams together to form a new running key; similarly, skip rules can be used. What is exchanged, then, is a series of pointers to the running key book and/or a series of rules for generating the new permuted running key from the initial key text. (These may be exchanged via public key encryption or in person. They may also be changed frequently without changing the running key book.)
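As a concrete illustration of the idea, the following sketch XOR-combines the byte streams that start at several pointers into the same key text. It is a minimal sketch under stated assumptions: the file name, pointer values and stream length are placeholders, and a real scheme would also need the skip rules and pointer exchange described above.

from functools import reduce

def permuted_running_key(key_text: bytes, pointers: list, length: int) -> bytes:
    # XOR together the streams beginning at each start pointer, yielding a
    # new running key that never appears verbatim in the key book.
    streams = [key_text[p:p + length] for p in pointers]
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*streams))

# Hypothetical usage: the parties exchange only the pointers, not the key.
book = open("key_text.txt", "rb").read()
key = permuted_running_key(book, [1024, 50000, 987654], length=4096)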
Ciphertext appearing to be plaintext
Traditional ciphertext appears to be quite different from plaintext. To address this problem, one variant outputs "plaintext" words instead of "plaintext" letters as the ciphertext output. This is done by creating an "alphabet" of words (in practice, multiple words can correspond to each ciphertext output character). The result is a ciphertext output which looks like a long sequence of plaintext words (the process can be nested). Theoretically, this is no different from using standard ciphertext characters as output. However, plaintext-looking ciphertext may lead a "human in the loop" to mistakenly interpret it as decoded plaintext.
An example is BDA (Berkhoff deflater algorithm), in which each ciphertext output character has at least one noun, verb, adjective and adverb associated with it (e.g., at least one of each for every ASCII character). Grammatically plausible sentences are generated as ciphertext output. Decryption requires mapping the words back to ASCII, and then decrypting the characters to the real plaintext using the running key. Nested-BDA will run the output through the re-encryption process several times, producing several layers of "plaintext-looking" ciphertext, each one potentially requiring a "human in the loop" to try to interpret its non-existent semantic meaning.
Gromark cipher
The "Gromark cipher" ("Gronsfeld cipher with mixed alphabet and running key") uses a running numerical key formed by adding successive pairs of digits.
The VIC cipher uses a similar lagged Fibonacci generator.
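As an illustrative sketch of such a key generator (the primer value and exact conventions here are assumptions; published versions of the Gromark cipher differ in details), each new digit is the sum, modulo 10, of a successive pair of earlier digits:

def gromark_key(primer: str, length: int) -> str:
    # Lagged Fibonacci expansion: with a 5-digit primer, digit n is
    # (digit[n-5] + digit[n-4]) mod 10.
    digits = [int(d) for d in primer]
    lag = len(primer)
    while len(digits) < length:
        digits.append((digits[-lag] + digits[-lag + 1]) % 10)
    return ''.join(map(str, digits))

print(gromark_key("23452", 15))  # 234525797726649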
Security
If the running key is truly random, never reused, and kept secret, the result is a one-time pad, a method that provides perfect secrecy (reveals no information about the plaintext). However, if (as usual) the running key is a block of text in a natural language, security actually becomes fairly poor, since that text will have non-random characteristics which can be used to aid cryptanalysis. As a result, the entropy per character of both plaintext and running key is low, and the combining operation is easily inverted.
To attack the cipher, a cryptanalyst runs guessed probable plaintexts along the ciphertext, subtracting them out from each possible position. When the result is a chunk of something intelligible, there is a high probability that the guessed plain text is correct for that position (as either actual plaintext, or part of the running key). The 'chunk of something intelligible' can then often be extended at either end, thus providing even more probable plaintext - which can in turn be extended, and so on. Eventually it is likely that the source of the running key will be identified, and the jig is up.
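This crib-dragging procedure is easy to automate. The sketch below is an illustration (not from the original article): it subtracts a guessed word from the example ciphertext above, minus its indicator block, at every offset. At offset 6 the residue is "CANO", a fragment of the running key "errors CAN Occur".

def drag_crib(ciphertext: str, crib: str) -> None:
    # Subtract a probable plaintext at every position; offsets whose residue
    # looks like natural language expose plaintext or running key.
    for offset in range(len(ciphertext) - len(crib) + 1):
        residue = ''.join(
            chr((ord(c) - ord(p)) % 26 + ord('A'))
            for c, p in zip(ciphertext[offset:], crib)
        )
        print(offset, residue)  # score or eyeball each residue

drag_crib("JCVSRLQNPSYGUIMQAWXSMECTO", "ONCE")  # offset 6 -> CANO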
There are several ways to improve the security. The first and most obvious is to use a secret mixed alphabet tableau instead of a tabula recta. This does indeed greatly complicate matters but it is not a complete solution. Pairs of plaintext and running key characters are far more likely to be high frequency pairs such as 'EE' rather than, say, 'QQ'. The skew this causes to the output frequency distribution is smeared by the fact that it is quite possible that 'EE' and 'QQ' map to the same ciphertext character, but nevertheless the distribution is not flat. This may enable the cryptanalyst to deduce part of the tableau, then proceed as before (but with gaps where there are sections missing from the reconstructed tableau).
Another possibility is to use a key text that has more entropy per character than typical English. For this purpose, the KGB advised agents to use documents like almanacs and trade reports, which often contain long lists of random-looking numbers.
Another problem is that the keyspace is surprisingly small. Suppose that there are 100 million key texts that might plausibly be used, and that on average each has 11 thousand possible starting positions. To an opponent with a massive collection of possible key texts, this leaves possible a brute force search of the order of 2^40 (roughly a trillion keys), which by computer cryptography standards is a relatively easy target. (See permutation generated running keys above for an approach to this problem.)
Confusion
Because both ciphers classically employed novels as part of their key material, many sources confuse the book cipher and the running key cipher. They are really only very distantly related. The running key cipher is a polyalphabetic substitution, the book cipher is a homophonic substitution. Perhaps the distinction is most clearly made by the fact that a running key cipher would work best of all with a book of random numbers, whereas such a book (containing no text) would be useless for a book cipher.
See also
Polyalphabetic substitution
Substitution cipher
Book cipher
Topics in cryptography
References
Stream ciphers
Classical ciphers |
450714 | https://en.wikipedia.org/wiki/Affine%20cipher | Affine cipher | The affine cipher is a type of monoalphabetic substitution cipher, where each letter in an alphabet is mapped to its numeric equivalent, encrypted using a simple mathematical function, and converted back to a letter. The formula used means that each letter encrypts to one other letter, and back again, meaning the cipher is essentially a standard substitution cipher with a rule governing which letter goes to which. As such, it has the weaknesses of all substitution ciphers. Each letter is enciphered with the function (ax + b) mod 26, where b is the magnitude of the shift.
Description
In the affine cipher, the letters of an alphabet of size m are first mapped to the integers in the range 0 ... m - 1. It then uses modular arithmetic to transform the integer that each plaintext letter corresponds to into another integer that corresponds to a ciphertext letter.
The encryption function for a single letter is
E(x) = (ax + b) mod m
where the modulus m is the size of the alphabet and a and b are the keys of the cipher. The value a must be chosen such that a and m are coprime. The decryption function is
D(y) = a^(-1) (y - b) mod m
where a^(-1) is the modular multiplicative inverse of a modulo m. I.e., it satisfies the equation
1 = a a^(-1) mod m.
The multiplicative inverse of a only exists if a and m are coprime. Hence without the restriction on a, decryption might not be possible.
It can be shown as follows that the decryption function is the inverse of the encryption function:
D(E(x)) = a^(-1) (E(x) - b) mod m = a^(-1) ((ax + b) - b) mod m = a^(-1) a x mod m = x mod m.
Weaknesses
Since the affine cipher is still a monoalphabetic substitution cipher, it inherits the weaknesses of that class of ciphers. The Caesar cipher is an affine cipher with a = 1, since the encrypting function then simply reduces to a linear shift. The Atbash cipher uses a = b = 25 (equivalently, a = b = -1) for m = 26.
Considering the specific case of encrypting messages in English (i.e. m = 26), there are a total of 286 non-trivial affine ciphers, not counting the 26 trivial Caesar ciphers. This number comes from the fact that there are 12 numbers less than 26 that are coprime with 26 (these are the possible values of a). Each value of a can have 26 different addition shifts (the b value); therefore, there are 12 × 26 or 312 possible keys. This lack of variety renders the system highly insecure when considered in light of Kerckhoffs' principle.
The cipher's primary weakness comes from the fact that if the cryptanalyst can discover (by means of frequency analysis, brute force, guessing or otherwise) the plaintext of two ciphertext characters, then the key can be obtained by solving a simultaneous equation. Since we know a and m are relatively prime, this can be used to rapidly discard many "false" keys in an automated system.
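For illustration, the sketch below (an addition, not part of the original article) solves those congruences directly for m = 26; the letter pairs A -> I and F -> H are taken from the worked example later in this article.

def recover_affine_key(x1: int, y1: int, x2: int, y2: int, m: int = 26):
    # Solve y = a*x + b (mod m) from two known plaintext/ciphertext pairs.
    # pow(..., -1, m) raises ValueError if (x1 - x2) is not invertible mod m.
    a = (y1 - y2) * pow(x1 - x2, -1, m) % m
    b = (y1 - a * x1) % m
    return a, b

print(recover_affine_key(0, 8, 5, 7))  # A -> I and F -> H recover (5, 8)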
The same type of transformation used in affine ciphers is used in linear congruential generators, a type of pseudorandom number generator. This generator is not a cryptographically secure pseudorandom number generator for the same reason that the affine cipher is not secure.
Examples
In these two examples, one encrypting and one decrypting, the alphabet is going to be the letters A through Z, with each letter taking the corresponding numeric value A = 0, B = 1, ..., Z = 25.
Encrypting
In this encrypting example, the plaintext to be encrypted is "AFFINE CIPHER", using the numeric values of each letter given above, taking a to be 5, b to be 8, and m to be 26, since there are 26 characters in the alphabet being used. Only the value of a has a restriction, since it has to be coprime with 26. The possible values that a could be are 1, 3, 5, 7, 9, 11, 15, 17, 19, 21, 23, and 25. The value for b can be arbitrary, as long as a does not equal 1 (in which case the cipher reduces to a simple shift). Thus, the encryption function for this example will be E(x) = (5x + 8) mod 26. The first step in encrypting the message is to write the numeric values of each letter.
Now, take each value of x and compute the first part of the equation, 5x + 8. After finding the value of 5x + 8 for each character, take the remainder when dividing that result by 26. The following table shows the steps of the encrypting process.

Plaintext:        A  F  F  I  N  E  C  I  P  H  E  R
x:                0  5  5  8 13  4  2  8 15  7  4 17
5x + 8:           8 33 33 48 73 28 18 48 83 43 28 93
(5x + 8) mod 26:  8  7  7 22 21  2 18 22  5 17  2 15
The final step in encrypting the message is to look up each numeric value in the table for the corresponding letters. In this example, the encrypted text would be IHHWVCSWFRCP. The table below shows the completed encryption.

Plaintext:  A F F I N E C I P H E R
Ciphertext: I H H W V C S W F R C P
Decrypting
In this decryption example, the ciphertext that will be decrypted is the ciphertext from the encryption example, IHHWVCSWFRCP. The corresponding decryption function is D(y) = 21(y - 8) mod 26, where a^(-1) is calculated to be 21 and b is 8. To begin, write the numeric equivalents of each letter in the ciphertext, as shown in the table below.

Ciphertext: I H H  W  V C  S  W F  R C  P
y:          8 7 7 22 21 2 18 22 5 17 2 15
Now, the next step is to compute 21(y - 8), and then take the remainder when that result is divided by 26. The following table shows the results of both computations.

y:                 8   7   7  22  21    2  18  22   5  17    2  15
21(y - 8):         0 -21 -21 294 273 -126 210 294 -63 189 -126 147
21(y - 8) mod 26:  0   5   5   8  13    4   2   8  15   7    4  17
The final step in decrypting the ciphertext is to use the table to convert numeric values back into letters. The plaintext in this decryption is AFFINECIPHER. Below is the table with the final step completed.

Ciphertext: I H H W V C S W F R C P
Plaintext:  A F F I N E C I P H E R
Entire alphabet encoded
To make encrypting and decrypting quicker, the entire alphabet can be encrypted to create a one-to-one map between the letters of the cleartext and the ciphertext. In this example (a = 5, b = 8), the one-to-one map would be the following:

Plaintext:  A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Ciphertext: I N S X C H M R W B G L Q V A F K P U Z E J O T Y D
Programming examples
The following Python code can be used to print the substitution table of an affine cipher:

# Prints a substitution table for an affine cipher.
# a must be coprime to m = 26.
def affine(a: int, b: int) -> None:
    for i in range(26):
        print(chr(i + ord('A')) + ": " + chr(((a * i + b) % 26) + ord('A')))

# An example call
affine(5, 8)
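The snippet above only prints the substitution table; a matching decryption routine is sketched below. This is an addition for illustration, not part of the original article; it assumes uppercase A-Z input and uses Python 3.8's three-argument pow for the modular inverse.

# Decrypts affine-enciphered text (a sketch; assumes the ciphertext is A-Z only).
def affine_decrypt(ciphertext: str, a: int, b: int) -> str:
    a_inv = pow(a, -1, 26)  # modular inverse of a; requires gcd(a, 26) == 1
    return ''.join(
        chr((a_inv * (ord(ch) - ord('A') - b)) % 26 + ord('A'))
        for ch in ciphertext
    )

print(affine_decrypt("IHHWVCSWFRCP", 5, 8))  # prints AFFINECIPHER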
See also
Affine functions
Atbash code
Caesar cipher
ROT13
Topics in cryptography
References
Classical ciphers |
451268 | https://en.wikipedia.org/wiki/Passive%20attack | Passive attack | A passive attack on a cryptosystem is one in which the cryptanalyst cannot interact with any of the parties involved, attempting to break the system solely based upon observed data (i.e. the ciphertext). This can also include known plaintext attacks where both the plaintext and its corresponding ciphertext are known.
While active attackers can interact with the parties by sending data, a passive attacker is limited to intercepting communications (eavesdropping), and seeks to decrypt data by interpreting the transcripts of authentication sessions. Since passive attackers do not introduce data of their own, they can be difficult to detect.
While most classical ciphers are vulnerable to this form of attack, most modern ciphers are designed to prevent this type of attack above all others.
Attributes
Traffic analysis
Non-invasive eavesdropping and monitoring of transmissions
Because the data is unaffected, passive attacks are difficult to detect
Emphasis is on prevention (encryption) rather than detection
Sometimes referred to as "tapping"
The main types of passive attacks are traffic analysis and release of message contents.
During a traffic analysis attack, the eavesdropper analyzes the traffic, determines the location, identifies communicating hosts, and observes the frequency and length of exchanged messages. The attacker uses all this information to infer the nature of the communication. All incoming and outgoing traffic of the network is analyzed, but not altered.
In a release of message contents, a telephone conversation, an e-mail message or a transferred file may contain confidential data. A passive attack monitors the contents of the transmitted data.
Passive attacks are very difficult to detect because they do not involve any alteration of the data. When the messages are exchanged neither the sender nor the receiver is aware that a third party may capture the messages. This can be prevented by encryption of data.
See also
Known plaintext attack
Chosen plaintext attack
Chosen ciphertext attack
Adaptive chosen ciphertext attack
Topics in cryptography
References
Further reading
Cryptography and Network Security By William Stallings
Cryptographic attacks |
451283 | https://en.wikipedia.org/wiki/Rabin%20cryptosystem | Rabin cryptosystem | The Rabin cryptosystem is an asymmetric cryptographic technique, whose security, like that of RSA, is related to the difficulty of integer factorization. However the Rabin cryptosystem has the advantage that it has been mathematically proven to be computationally secure against a chosen-plaintext attack as long as the attacker cannot efficiently factor integers, while there is no such proof known for RSA. It has the disadvantage that each output of the Rabin function can be generated by any of four possible inputs; if each output is a ciphertext, extra complexity is required on decryption to identify which of the four possible inputs was the true plaintext.
History
The algorithm was published in January 1979 by Michael O. Rabin. The Rabin cryptosystem was the first asymmetric cryptosystem where recovering the plaintext from the ciphertext could be proven to be as hard as factoring.
Encryption Algorithm
Like all asymmetric cryptosystems, the Rabin system uses a key pair: a public key for encryption and a private key for decryption. The public key is published for anyone to use, while the private key remains known only to the recipient of the message.
Key generation
The keys for the Rabin cryptosystem are generated as follows:
Choose two large distinct prime numbers p and q such that p ≡ 3 (mod 4) and q ≡ 3 (mod 4).
Compute n = pq.
Then n is the public key and the pair (p, q) is the private key.
Encryption
A message M can be encrypted by first converting it to a number m < n using a reversible mapping, then computing c = m^2 mod n. The ciphertext is c.
Decryption
The message m can be recovered from the ciphertext c by taking its square root modulo n as follows.
Compute the square root of c modulo p and modulo q using these formulas:
m_p = c^((p + 1)/4) mod p
m_q = c^((q + 1)/4) mod q
Use the extended Euclidean algorithm to find y_p and y_q such that y_p · p + y_q · q = 1.
Use the Chinese remainder theorem to find the four square roots of c modulo n:
r_1 = (y_p · p · m_q + y_q · q · m_p) mod n
r_2 = n - r_1
r_3 = (y_p · p · m_q - y_q · q · m_p) mod n
r_4 = n - r_3
One of these four values is the original plaintext m, although which of the four is the correct one cannot be determined without additional information.
Computing square roots
We can show that the formulas in step 1 above actually produce the square roots of c as follows. For the first formula, we want to prove that m_p^2 ≡ c (mod p). Since p ≡ 3 (mod 4), the exponent (p + 1)/4 is an integer. The proof is trivial if c ≡ 0 (mod p), so we may assume that p does not divide c. Note that c ≡ m^2 (mod n) implies that c ≡ m^2 (mod p), so c is a quadratic residue modulo p. Then
m_p^2 ≡ c^((p + 1)/2) ≡ c · c^((p - 1)/2) ≡ c · 1 ≡ c (mod p).
The last step is justified by Euler's criterion.
Example
As an example, take p = 7 and q = 11, then n = 77. Take m = 20 as our plaintext. The ciphertext is thus
c = m^2 mod n = 400 mod 77 = 15.
Decryption proceeds as follows:
Compute m_p = 15^((7 + 1)/4) mod 7 = 1 and m_q = 15^((11 + 1)/4) mod 11 = 9.
Use the extended Euclidean algorithm to compute y_p = -3 and y_q = 2. We can confirm that y_p · p + y_q · q = (-3 · 7) + (2 · 11) = 1.
Compute the four plaintext candidates:
r_1 = ((-3 · 7 · 9) + (2 · 11 · 1)) mod 77 = 64
r_2 = 77 - 64 = 13
r_3 = ((-3 · 7 · 9) - (2 · 11 · 1)) mod 77 = 20
r_4 = 77 - 20 = 57
and we see that r_3 = 20 is the desired plaintext. Note that all four candidates are square roots of 15 mod 77. That is, for each candidate, r_i^2 mod 77 = 15, so each encrypts to the same ciphertext value, 15.
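The worked example can be checked with a short script. This is a toy sketch (toy-sized primes, no padding or disambiguation), using Python's three-argument pow in place of the extended Euclidean algorithm; pow(p, -1, q) returns 8 here, which is congruent to the y_p = -3 above modulo 11, so the same four roots emerge.

def rabin_decrypt(c: int, p: int, q: int):
    # Return the four square roots of c modulo n = p*q, for p, q = 3 mod 4.
    n = p * q
    m_p = pow(c, (p + 1) // 4, p)             # square root of c modulo p
    m_q = pow(c, (q + 1) // 4, q)             # square root of c modulo q
    y_p = pow(p, -1, q)                       # y_p * p = 1 (mod q)
    y_q = pow(q, -1, p)                       # y_q * q = 1 (mod p)
    r1 = (y_p * p * m_q + y_q * q * m_p) % n  # Chinese remainder theorem
    r3 = (y_p * p * m_q - y_q * q * m_p) % n
    return sorted({r1, n - r1, r3, n - r3})

p, q, m = 7, 11, 20
c = m * m % (p * q)               # 400 mod 77 = 15
print(c, rabin_decrypt(c, p, q))  # 15 [13, 20, 57, 64]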
Digital Signature Algorithm
The Rabin cryptosystem can be used to create and verify digital signatures. Creating a signature requires the private key (p, q). Verifying a signature requires the public key n.
Signing
A message m can be signed with a private key (p, q) as follows.
Generate a random value u.
Use a cryptographic hash function H to compute c = H(m ∥ u), where the bar ∥ denotes concatenation. c should be an integer less than n.
Treat c as a Rabin-encrypted value and attempt to decrypt it, using the private key (p, q). This will produce the usual four results, r_1, r_2, r_3, r_4.
One might expect that encrypting each r_i would produce c. However, this will be true only if c happens to be a quadratic residue modulo p and q. To determine if this is the case, encrypt the first decryption result r_1. If it does not encrypt to c, repeat this algorithm with a new random u. The expected number of times this algorithm needs to be repeated before finding a suitable u is 4.
Having found an r which encrypts to c, the signature is (r, u).
Verifying a signature
A signature (r, u) for a message m can be verified using the public key n as follows.
Compute c = H(m ∥ u).
Encrypt r using the public key n, i.e., compute r^2 mod n.
The signature is valid if and only if the encryption of r equals c.
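A toy sketch of this sign/verify loop, reusing rabin_decrypt from the sketch above; SHA-256 and the 8-byte random u are placeholder choices for illustration, not part of the published scheme.

import hashlib
import secrets

def H(message: bytes, u: bytes, n: int) -> int:
    # Placeholder hash-to-integer for c = H(m || u), reduced below n.
    return int.from_bytes(hashlib.sha256(message + u).digest(), "big") % n

def sign(message: bytes, p: int, q: int):
    n = p * q
    while True:                        # expected to succeed within ~4 tries
        u = secrets.token_bytes(8)     # fresh random value u
        c = H(message, u, n)
        r = rabin_decrypt(c, p, q)[0]  # first square-root candidate
        if r * r % n == c:             # c was a quadratic residue: success
            return r, u                # the signature (r, u)

def verify(message: bytes, r: int, u: bytes, n: int) -> bool:
    return r * r % n == H(message, u, n)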
Evaluation of the algorithm
Effectiveness
Decrypting produces three false results in addition to the correct one, so that the correct result must be guessed. This is the major disadvantage of the Rabin cryptosystem and one of the factors which have prevented it from finding widespread practical use.
If the plaintext is intended to represent a text message, guessing is not difficult; however, if the plaintext is intended to represent a numerical value, this issue becomes a problem that must be resolved by some kind of disambiguation scheme. It is possible to choose plaintexts with special structures, or to add padding, to eliminate this problem. A way of removing the ambiguity of inversion was suggested by Blum and Williams: the two primes used are restricted to primes congruent to 3 modulo 4 and the domain of the squaring is restricted to the set of quadratic residues. These restrictions make the squaring function into a trapdoor permutation, eliminating the ambiguity.
Efficiency
For encryption, a square modulo n must be calculated. This is more efficient than RSA, which requires the calculation of at least a cube.
For decryption, the Chinese remainder theorem is applied, along with two modular exponentiations. Here the efficiency is comparable to RSA.
Disambiguation introduces additional computational costs, and is what has prevented the Rabin cryptosystem from finding widespread practical use.
Security
It has been proven that any algorithm which decrypts a Rabin-encrypted value can be used to factor the modulus . Thus, Rabin decryption is at least as hard as the integer factorization problem, something that has not been proven for RSA. It is generally believed that there is no polynomial-time algorithm for factoring, which implies that there is no efficient algorithm for decrypting a Rabin-encrypted value without the private key .
The Rabin cryptosystem does not provide indistinguishability against chosen plaintext attacks since the process of encryption is deterministic. An adversary, given a ciphertext and a candidate message, can easily determine whether or not the ciphertext encodes the candidate message (by simply checking whether encrypting the candidate message yields the given ciphertext).
The Rabin cryptosystem is insecure against a chosen ciphertext attack (even when challenge messages are chosen uniformly at random from the message space). By adding redundancies, for example, the repetition of the last 64 bits, the system can be made to produce a single root. This thwarts this specific chosen-ciphertext attack, since the decryption algorithm then only produces the root that the attacker already knows. If this technique is applied, the proof of the equivalence with the factorization problem fails, so it is uncertain as of 2004 if this variant is secure. The Handbook of Applied Cryptography by Menezes, Oorschot and Vanstone considers this equivalence probable, however, as long as the finding of the roots remains a two-part process (1. roots and and 2. application of the Chinese remainder theorem).
See also
Topics in cryptography
Blum Blum Shub
Shanks–Tonelli algorithm
Schmidt–Samoa cryptosystem
Blum–Goldwasser cryptosystem
Notes
References
Buchmann, Johannes. Einführung in die Kryptographie. Second Edition. Berlin: Springer, 2001.
Menezes, Alfred; van Oorschot, Paul C.; and Vanstone, Scott A. Handbook of Applied Cryptography. CRC Press, October 1996.
Rabin, Michael. Digitalized Signatures and Public-Key Functions as Intractable as Factorization (in PDF). MIT Laboratory for Computer Science, January 1979.
Scott Lindhurst, An analysis of Shank's algorithm for computing square roots in finite fields. in R Gupta and K S Williams, Proc 5th Conf Can Nr Theo Assoc, 1999, vol 19 CRM Proc & Lec Notes, AMS, Aug 1999.
R Kumanduri and C Romero, Number Theory w/ Computer Applications, Alg 9.2.9, Prentice Hall, 1997. A probabilistic algorithm for computing the square root of a quadratic residue modulo a prime.
External links
Menezes, Oorschot, Vanstone, Scott: Handbook of Applied Cryptography (free PDF downloads), see Chapter 8
Public-key encryption schemes |
451286 | https://en.wikipedia.org/wiki/Random%20oracle | Random oracle | In cryptography, a random oracle is an oracle (a theoretical black box) that responds to every unique query with a (truly) random response chosen uniformly from its output domain. If a query is repeated, it responds the same way every time that query is submitted.
Stated differently, a random oracle is a mathematical function chosen uniformly at random, that is, a function mapping each possible query to a (fixed) random response from its output domain.
Random oracles as a mathematical abstraction were first used in rigorous cryptographic proofs in a 1993 publication by Mihir Bellare and Phillip Rogaway. They are typically used when the proof cannot be carried out using weaker assumptions on the cryptographic hash function. A system that is proven secure when every hash function is replaced by a random oracle is described as being secure in the random oracle model, as opposed to secure in the standard model of cryptography.
Applications
Random oracles are typically used as an idealised replacement for cryptographic hash functions in schemes where strong randomness assumptions are needed of the hash function's output. Such a proof often shows that a system or a protocol is secure by showing that an attacker must require impossible behavior from the oracle, or solve some mathematical problem believed hard in order to break it. However, it only proves such properties in the random oracle model, making sure no major design flaws are present. It is in general not true that such a proof implies the same properties in the standard model. Still, a proof in the random oracle model is considered better than no formal security proof at all.
Not all uses of cryptographic hash functions require random oracles: schemes that require only one or more properties having a definition in the standard model (such as collision resistance, preimage resistance, second preimage resistance, etc.) can often be proven secure in the standard model (e.g., the Cramer–Shoup cryptosystem).
Random oracles have long been considered in computational complexity theory, and many schemes have been proven secure in the random oracle model, for example Optimal Asymmetric Encryption Padding, RSA-FDH and Probabilistic Signature Scheme. In 1986, Amos Fiat and Adi Shamir showed a major application of random oracles – the removal of interaction from protocols for the creation of signatures.
In 1989, Russell Impagliazzo and Steven Rudich showed the limitation of random oracles – namely that their existence alone is not sufficient for secret-key exchange.
In 1993, Mihir Bellare and Phillip Rogaway were the first to advocate their use in cryptographic constructions. In their definition, the random oracle produces a bit-string of infinite length which can be truncated to the length desired.
When a random oracle is used within a security proof, it is made available to all players, including the adversary or adversaries. A single oracle may be treated as multiple oracles by pre-pending a fixed bit-string to the beginning of each query (e.g., queries formatted as "1|x" or "0|x" can be considered as calls to two separate random oracles, similarly "00|x", "01|x", "10|x" and "11|x" can be used to represent calls to four separate random oracles).
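A minimal sketch of this domain-separation trick, with SHA-256 standing in for the oracle (an idealization only; a real hash function is not a random oracle, as the next section explains):

import hashlib

def oracle(tag: bytes, query: bytes) -> bytes:
    # Model several independent oracles with one function by prepending a
    # fixed domain-separation tag to every query.
    return hashlib.sha256(tag + b"|" + query).digest()

h0 = oracle(b"0", b"x")  # a query to "oracle 0"
h1 = oracle(b"1", b"x")  # the same query to "oracle 1" gives unrelated output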
Limitations
According to the Church–Turing thesis, no function computable by a finite algorithm can implement a true random oracle (which by definition requires an infinite description because it has infinitely many possible inputs, and its outputs are all independent from each other and need to be individually specified by any description).
In fact, certain artificial signature and encryption schemes are known which are proven secure in the random oracle model, but which are trivially insecure when any real function is substituted for the random oracle. Nonetheless, for any more natural protocol a proof of security in the random oracle model gives very strong evidence of the practical security of the protocol.
In general, if a protocol is proven secure, attacks to that protocol must either be outside what was proven, or break one of the assumptions in the proof; for instance if the proof relies on the hardness of integer factorization, to break this assumption one must discover a fast integer factorization algorithm. Instead, to break the random oracle assumption, one must discover some unknown and undesirable property of the actual hash function; for good hash functions where such properties are believed unlikely, the considered protocol can be considered secure.
Random Oracle Hypothesis
Although the Baker–Gill–Solovay theorem showed that there exists an oracle A such that P^A = NP^A, subsequent work by Bennett and Gill showed that for a random oracle B (a function from {0,1}^n to {0,1} such that each input element maps to each of 0 or 1 with probability 1/2, independently of the mapping of all other inputs), P^B ⊊ NP^B with probability 1. Similar separations, as well as the fact that random oracles separate classes with probability 0 or 1 (as a consequence of Kolmogorov's zero–one law), led to the creation of the Random Oracle Hypothesis: that two "acceptable" complexity classes C1 and C2 are equal if and only if they are equal (with probability 1) under a random oracle (the acceptability of a complexity class is defined in BG81). This hypothesis was later shown to be false, as the two acceptable complexity classes IP and PSPACE were shown to be equal despite IP^A ⊊ PSPACE^A for a random oracle A with probability 1.
Ideal Cipher
An ideal cipher is a random permutation oracle that is used to model an idealized block cipher. A random permutation decrypts each ciphertext block into one and only one plaintext block and vice versa, so there is a one-to-one correspondence. Some cryptographic proofs make not only the "forward" permutation available to all players, but also the "reverse" permutation.
Recent work has shown that an ideal cipher can be constructed from a random oracle using 10-round or even 8-round Feistel networks.
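To give a flavor of the construction, the toy sketch below builds a Feistel permutation whose round functions are domain-separated calls to a hash standing in for the random oracle. The 8-round count echoes the result cited above, but this sketch makes no security claims, and its block size is an arbitrary choice.

import hashlib

def feistel_permute(block: bytes, rounds: int = 8) -> bytes:
    # Toy Feistel network on a 32-byte block; invertible whenever the round
    # functions (here, hash calls tagged by round number) are known.
    left, right = block[:16], block[16:]
    for i in range(rounds):
        f = hashlib.sha256(bytes([i]) + right).digest()[:16]
        left, right = right, bytes(l ^ x for l, x in zip(left, f))
    return left + right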
Ideal Permutation
An ideal permutation is an idealized object sometimes used in cryptography to model the behaviour of a permutation whose outputs are indistinguishable from those of a random permutation. In the ideal permutation model, an additional oracle access is given to the ideal permutation and its inverse. The ideal permutation model can be seen as a special case of the ideal cipher model where access is given to only a single permutation, instead of a family of permutations as in the case of the ideal cipher model.
Quantum-accessible Random Oracles
Post-quantum cryptography studies quantum attacks on classical cryptographic schemes. As a random oracle is an abstraction of a hash function, it makes sense to assume that a quantum attacker can access the random oracle in quantum superposition. Many of the classical security proofs break down in that quantum random oracle model and need to be revised.
See also
Sponge function
Oracle machine
Topics in cryptography
References
Cryptography
Cryptographic hash functions
Theory of cryptography
Computation oracles |
454322 | https://en.wikipedia.org/wiki/Anonymous%20P2P | Anonymous P2P | An anonymous P2P communication system is a peer-to-peer distributed application in which the nodes, which are used to share resources, or participants are anonymous or pseudonymous. Anonymity of participants is usually achieved by special routing overlay networks that hide the physical location of each node from other participants.
Interest in anonymous P2P systems has increased in recent years for many reasons, ranging from the desire to share files without revealing one's network identity and risking litigation to distrust in governments, concerns over mass surveillance and data retention, and lawsuits against bloggers.
Motivation for anonymity
There are many reasons to use anonymous P2P technology; most of them are generic to all forms of online anonymity.
P2P users who desire anonymity usually do so as they do not wish to be identified as a publisher (sender), or reader (receiver), of information. Common reasons include:
Censorship at the local, organizational, or national level
Personal privacy preferences such as preventing tracking or data mining activities
The material or its distribution is considered illegal or incriminating by possible eavesdroppers
Material is legal but socially deplored, embarrassing or problematic in the individual's social world
Fear of retribution (against whistleblowers, unofficial leaks, and activists who do not believe in restrictions on information nor knowledge)
A particularly open view on legal and illegal content is given in The Philosophy Behind Freenet.
Governments are also interested in anonymous P2P technology. The United States Navy funded the original onion routing research that led to the development of the Tor network, which was later funded by the Electronic Frontier Foundation and is now developed by the non-profit organization The Tor Project, Inc.
Arguments for and against anonymous P2P communication
General
While anonymous P2P systems may support the protection of unpopular speech, they may also protect illegal activities, such as fraud, libel, the exchange of illegal pornography, the unauthorized copying of copyrighted works, or the planning of criminal activities. Critics of anonymous P2P systems hold that these disadvantages outweigh the advantages offered by such systems, and that other communication channels are already sufficient for unpopular speech.
Proponents of anonymous P2P systems believe that all restrictions on free speech serve authoritarian interests, information itself is ethically neutral, and that it is the people acting upon the information that can be good or evil. Perceptions of good and evil can also change (see moral panic); for example, if anonymous peer-to-peer networks had existed in the 1950s or 1960s, they might have been targeted for carrying information about civil rights or anarchism.
Easily accessible anonymous P2P networks are seen by some as a democratization of encryption technology, giving the general populace access to secure communications channels already used by governments. Supporters of this view, such as Phil Zimmermann, argue that anti-surveillance technologies help to equalize power between governments and their people, which is the actual reason for banning them. John Pilger opines that monitoring of the populace helps to contain threats to the "consensual view of established authority" or threats to the continuity of power structures and privilege.
Freedom of speech
Some claim that true freedom of speech, especially on controversial subjects, is difficult or impossible unless individuals can speak anonymously. If anonymity is not possible, one could be subjected to threats or reprisals for voicing an unpopular view. This is one reason why voting is done by secret ballot in many democracies. Controversial information which a party wants to keep hidden, such as details about corruption issues, is often published or leaked anonymously.
Anonymous blogging
Anonymous blogging is one widespread use of anonymous networks. While anonymous blogging is possible on the non-anonymous internet to some degree too, a provider hosting the blog in question might be forced to disclose the blogger's IP address (as when Google revealed an anonymous blogger's identity). Anonymous networks provide a better degree of anonymity. Flogs (anonymous blogs) in Freenet, Syndie and other blogging tools in I2P and Osiris sps are some examples of anonymous blogging technologies.
One argument for anonymous blogging is the delicate nature of a work situation. Sometimes a blogger writing under their real name faces a choice between either staying silent or causing harm to themselves, their colleagues or the company they work for.
Another reason is risk of lawsuits. Some bloggers have faced multimillion-dollar lawsuits (although they were later dropped completely); anonymous blogging provides protection against such risks.
Censorship via Internet domain names
On the non-anonymous Internet, a domain name like "example.com" is a key to accessing information. The censorship of the Wikileaks website shows that domain names are extremely vulnerable to censorship. Some domain registrars have suspended customers' domain names even in the absence of a court order.
For the affected customer, blocking of a domain name is a far bigger problem than a registrar refusing to provide a service; typically, the registrar keeps full control of the domain names in question. In the case of a European travel agency, more than 80 .com websites were shut down without any court process and have been held by the registrar since then. The travel agency had to rebuild the sites under the .net top-level domain instead.
On the other hand, anonymous networks do not rely on domain name registrars. For example, Freenet, I2P and Tor hidden services implement censorship-resistant URLs based on public-key cryptography: only a person having the correct private key can update the URL or take it down.
Control over online tracking
Anonymous P2P also has value in normal daily communication. When communication is anonymous, the decision to reveal the identities of the communicating parties is left up to the parties involved and is not available to a third party. Often there is no need or desire by the communicating parties to reveal their identities. As a matter of personal freedom, many people do not want processes in place by default which supply unnecessary data. In some cases, such data could be compiled into histories of their activities.
For example, most current phone systems transmit caller ID information by default to the called party (although this can be disabled either for a single call or for all calls). If a person calls to make an inquiry about a product or the time of a movie, the party called has a record of the calling phone number, and may be able to obtain the name, address and other information about the caller. This information is not available about someone who walks into a store and makes a similar inquiry.
Effects of surveillance on lawful activity
Online surveillance, such as recording and retaining details of web and e-mail traffic, may have effects on lawful activities. People may be deterred from accessing or communicating legal information because they know of possible surveillance and believe that such communication may be seen as suspicious. According to law professor Daniel J. Solove, such effects "harm society because, among other things, they reduce the range of viewpoints being expressed and the degree of freedom with which to engage in political activity."
Access to censored and copyrighted material
Most countries ban or censor the publication of certain books and movies, and certain types of content. Other material is legal to possess but not to distribute; for example, copyright and software patent laws may forbid its distribution. These laws are difficult or impossible to enforce in anonymous P2P networks.
Anonymous online money
With anonymous money, it becomes possible to arrange anonymous markets where one can buy and sell just about anything anonymously. Anonymous money could be used to avoid tax collection. However, any transfer of physical goods between two parties could compromise anonymity.
Proponents argue that conventional cash provides a similar kind of anonymity, and that existing laws are adequate to combat crimes like tax evasion that might result from the use of anonymous cash, whether online or offline.
Functioning of anonymous P2P
Anonymity and pseudonymity
Some of the networks commonly referred to as "anonymous P2P" are truly anonymous, in the sense that network nodes carry no identifiers. Others are actually pseudonymous: instead of being identified by their IP addresses, nodes are identified by pseudonyms such as cryptographic keys. For example, each node in the MUTE network has an overlay address that is derived from its public key. This overlay address functions as a pseudonym for the node, allowing messages to be addressed to it. In Freenet, on the other hand, messages are routed using keys that identify specific pieces of data rather than specific nodes; the nodes themselves are anonymous.
The term anonymous is used to describe both kinds of network because it is difficult—if not impossible—to determine whether a node that sends a message originated the message or is simply forwarding it on behalf of another node. Every node in an anonymous P2P network acts as a universal sender and universal receiver to maintain anonymity. If a node was only a receiver and did not send, then neighbouring nodes would know that the information it was requesting was for itself only, removing any plausible deniability that it was the recipient (and consumer) of the information. Thus, in order to remain anonymous, nodes must ferry information for others on the network.
Spam and DoS attacks in anonymous networks
Originally, anonymous networks were operated by small and friendly communities of developers. As interest in anonymous P2P increased and the user base grew, malicious users inevitably appeared and tried different attacks. This is similar to the Internet, where widespread use has been followed by waves of spam and distributed DoS (Denial of Service) attacks. Such attacks may require different solutions in anonymous networks. For example, blacklisting of originator network addresses does not work because anonymous networks conceal this information. These networks are also more vulnerable to DoS attacks due to their smaller bandwidth, as has been shown in examples on the Tor network.
A conspiracy to attack an anonymous network could be considered criminal computer hacking, though the nature of the network makes this impossible to prosecute without compromising the anonymity of data in the network.
Opennet and darknet network types
Like conventional P2P networks, anonymous P2P networks can implement either an opennet or a darknet (often called friend-to-friend) network type. This describes how a node on the network selects peer nodes:
In an opennet network, peer nodes are discovered automatically. No configuration is required, but little control is available over which nodes become peers.
In a darknet network, users manually establish connections with nodes run by people they know. Darknet typically needs more effort to set up, but a node only has trusted nodes as peers.
Some networks like Freenet support both network types simultaneously (a node can have some manually added darknet peer nodes and some automatically selected opennet peers).
In a friend-to-friend (or F2F) network, users only make direct connections with people they know. Many F2F networks support indirect anonymous or pseudonymous communication between users who do not know or trust one another. For example, a node in a friend-to-friend overlay can automatically forward a file (or a request for a file) anonymously between two "friends", without telling either of them the other's name or IP address. These "friends" can in turn forward the same file (or request) to their own "friends", and so on. Users in a friend-to-friend network cannot find out who else is participating beyond their own circle of friends, so F2F networks can grow in size without compromising their users' anonymity.
Some friend-to-friend networks allow the user to control what kinds of files can be exchanged with "friends" within the node, in order to stop them from exchanging files that the user disapproves of.
The advantages and disadvantages of opennet compared to darknet are disputed; see the friend-to-friend article for a summary.
List of anonymous P2P networks and clients
Public P2P clients
Classified-ads - an open source DHT-based decentralized messaging and voice app. Allows users to not expose any personal details but does not hide network addresses of nodes.
DigitalNote XDN - an open-source anonymous decentralized encrypted messaging system based on blockchain technology
Freenet - a censorship-resistant distributed file system for anonymous publishing (open source, written in Java)
GNUnet - a P2P framework, includes anonymous file sharing as its primary application (GNU Project, written in C, alpha status)
MuWire - a filesharing software with chat rooms. Although it runs inside the I2P network, it is not called an "I2P client" because it has an I2P router embedded, which makes it standalone software.
Perfect Dark - a Japanese filesharing client modeled on Share
Tribler - an open source BitTorrent client. Nodes forward files within the network, but only the IP address of the exit node can be associated with a file.
ZeroNet - a decentralized Internet-like network of peer-to-peer users. Allows tunneling of HTTP-traffic through Tor.
I2P clients
I2P - a fully decentralized overlay network for strong anonymity and end-to-end encryption, with many applications (P2P, browsing, distributed anonymous e-mail, instant messaging, IRC, ...) running on top of it (free/open source, platform-independent)
I2P-Bote an anonymous, secure (end-to-end encrypted), serverless mail application with remailer functionality for the I2P network
I2P-Messenger an anonymous, secure (end-to-end encrypted), serverless instant messenger for the I2P network
I2PSnark - an anonymous BitTorrent client for the I2P network
I2Phex - a Gnutella client which communicates anonymously through I2P
iMule - an aMule port running under I2P network
Robert (P2P Software) - another anonymous BitTorrent client for the I2P network
I2P-Tahoe-LAFS - a censorship-resistant distributed file system for anonymous publishing and file sharing (open source, written in Python, pre-alpha status)
Vuze (formerly Azureus) - a BitTorrent client with the option of using I2P or Tor (initially open source, written in Java)
Bigly BT - a successor of Vuze after development slowed down and stalled. Therefore also a BitTorrent client with the option of using I2P or Tor (open source, written in Java)
Defunct (Public P2P clients) or no longer developed
Bitblinder (2009-2010) - file sharing
Bitmessage - an anonymous decentralized messaging system serving as a secure replacement for email
Cashmere (2005) - resilient anonymous routing
Entropy (2003-2005) - Freenet compatible
EarthStation 5 (2003-2005) - anonymity controverted
Herbivore (2003-2005) - file sharing and messaging. Used the Dining cryptographers problem.
Marabunta (2005-2006) - distributed chat
MUTE (2003-2009) - file sharing
NeoLoader - a filesharing software compatible with bittorrent and edonkey2000. Anonymous when used with the "NeoShare" feature (which uses the proprietary "NeoKad" network)
Netsukuku - a peer-to-peer routing system aiming to build a free and independent Internet
Nodezilla (2004-2010) - an anonymizing, closed source network layer upon which applications can be built
Osiris (Serverless Portal System) - an anonymous and distributed web portal creator.
OFF System (2006-2010) - a P2P distributed file system through which all shared files are represented by randomized data blocks
RShare (2006-2007) - file sharing
Share - a Japanese filesharing client modeled on Winny
Syndie - a content (mainly forums) syndication program that operates over numerous anonymous and non-anonymous networks (open source, written in Java)
StealthNet (2007-2011) - the successor to RShare
Winny - a Japanese filesharing program modeled on Freenet which relies on a mixnet and distributed datastore to provide anonymity
Private P2P clients
Private P2P networks are P2P networks that only allow some mutually trusted computers to share files. This can be achieved by using a central server or hub to authenticate clients, in which case the functionality is similar to a private FTP server, but with files transferred directly between the clients. Alternatively, users can exchange passwords or keys with their friends to form a decentralized network.
Examples include:
Syncthing - is a free, open-source peer-to-peer file synchronization application. It can sync files between devices. Data security and data safety are built into the design of the software.
Resilio Sync - a proprietary alternative to Syncthing
Private F2F (friend-to-friend) clients
Friend-to-friend networks are P2P networks that allow users only to make direct connections with people they know. Passwords or digital signatures can be used for authentication.
Examples include:
Filetopia - not anonymous but encrypted friend-to-friend. File sharing, chat, internal mail service
OneSwarm - a backwards compatible BitTorrent client with privacy-preserving sharing options, aims to create a large F2F network.
Retroshare - filesharing, serverless email, instant messaging, VoIP, chatrooms, and decentralized forums.
Hypothetical or defunct networks
Hypothetical
The following networks only exist as design or are in development
anoNet - extensible IP anonymizer with steganography support (in development)
Crowds - Reiter and Rubin's system for "blending into a crowd" has a known attack
P2PRIV - Peer-to-Peer diRect and anonymous dIstribution oVerlay - anonymity via virtual links parallelization - currently in development and has significant, unsolved problems in a real world environment
Phantom Anonymity Protocol - a fully decentralized high-throughput anonymization network (no longer in development)
Race (Resilient Anonymous Communication for Everyone) - a DARPA project to build an anonymous, attack-resilient mobile communication system that can reside completely within a network environment. It aims to avoid large-scale compromise in two ways: by preventing compromised information from being useful for identifying any of the system nodes, because all such information is encrypted on the nodes at all times, even during computation; and by obfuscating communication protocols to prevent communications compromise.
Defunct or dormant
Bitblinder - a decentralised P2P anonymity software program which included Tor but with increased speed. Website is down and clients are no longer functional.
Invisible IRC Project - anonymous IRC, inspired by Freenet, which later became I2P (Invisible Internet Project).
Mnet (formerly MojoNation) - a distributed file system
Anonymous P2P in a wireless mesh network
It is possible to implement anonymous P2P on a wireless mesh network; unlike fixed Internet connections, users don't need to sign up with an ISP to participate in such a network, and are only identifiable through their hardware.
Protocols for wireless mesh networks are Optimized Link State Routing Protocol (OLSR) and the follow-up protocol B.A.T.M.A.N., which is designed for decentralized auto-IP assignment. See also Netsukuku.
Even if a government were to outlaw the use of wireless P2P software, it would be difficult to enforce such a ban without a considerable infringement of personal freedoms. Alternatively, the government could outlaw the purchase of the wireless hardware itself.
See also
Anonymity application
Anonymous remailer
Anonymous web browsing
Comparison of file sharing applications
Dark web
Data privacy
Internet privacy
List of anonymously published works
Personally identifiable information
Privacy software and Privacy-enhancing technologies
FLAIM
I2P
I2P-Bote
Java Anon Proxy
Free Haven Project
Secure communication
Related items
Crypto-anarchism
Cypherpunk
Digital divide
Mesh Network
Wireless community network
References
External links
Planet Peer Wiki - a wiki about various anonymous P2P applications
A survey of anonymous peer-to-peer file-sharing (2005)
Anonymous, Decentralized and Uncensored File-Sharing is Booming by TorrentFreak (2011)
Anonymous file sharing networks
Anonymity networks
Internet privacy software
Crypto-anarchism
Peer-to-peer |
454486 | https://en.wikipedia.org/wiki/Windows%209x | Windows 9x | Windows 9x is a generic term referring to a series of Microsoft Windows computer operating systems produced from 1995 to 2000, which were based on the Windows 95 kernel and its underlying foundation of MS-DOS, both of which were updated in subsequent versions. The first version in the 9x series was Windows 95, which was succeeded by Windows 98 and then Windows Me, which was the third and last version of Windows on the 9x line, until the series was superseded by Windows XP.
Windows 9x is predominantly known for its use in home desktops. In 1998, Windows made up 82% of operating system market share.
Internal release versions for versions of Windows 9x are 4.x. The internal versions for Windows 95, 98, and Me are 4.0, 4.1, and 4.9, respectively. Previous MS-DOS-based versions of Windows used version numbers of 3.2 or lower. Windows NT, which was aimed at professional environments such as networks and businesses, used a similar but separate version number between 3.1 and 4.0. All versions of Windows from Windows XP onwards are based on the Windows NT codebase.
History
Windows prior to 95
The first independent version of Microsoft Windows, version 1.0, released on November 20, 1985, achieved little popularity. Its name was initially "Interface Manager", but Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name Windows would be more appealing to consumers. Windows 1.0 was not a complete operating system, but rather an "operating environment" that extended MS-DOS. Consequently, it shared the inherent flaws and problems of MS-DOS.
The second installment of Microsoft Windows, version 2.0, was released on December 9, 1987, and used the real-mode memory model, which confined it to a maximum of 1 megabyte of memory. In such a configuration, it could run under another multitasking system like DESQview, which used the 286 Protected Mode.
Microsoft Windows scored a significant success with Windows 3.0, released in 1990. In addition to improved capabilities given to native applications, Windows also allowed users to better multitask older MS-DOS-based software compared to Windows/386, thanks to the introduction of virtual memory.
Microsoft developed Windows 3.1, which included several minor improvements to Windows 3.0, but primarily consisted of bugfixes and multimedia support. It also excluded support for Real mode, and only ran on an Intel 80286 or better processor. In November 1993 Microsoft also released Windows 3.11, a touch-up to Windows 3.1 which included all of the patches and updates that followed the release of Windows 3.1 in early 1992.
Meanwhile, Microsoft continued to develop Windows NT. The main architect of the system was Dave Cutler, one of the chief architects of VMS at Digital Equipment Corporation. Microsoft hired him in August 1988 to create a successor to OS/2, but Cutler created a completely new system instead based on his MICA project at Digital.
Microsoft announced at its 1991 Professional Developers Conference its intentions to develop a successor to both Windows NT and Windows 3.1's replacement (Windows 95, code-named Chicago), which would unify the two into one operating system. This successor was codenamed Cairo. In hindsight, Cairo was a much more difficult project than Microsoft had anticipated and, as a result, NT and Chicago would not be unified until Windows XP.
Windows 95
After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system, code-named Chicago. Chicago was designed to have support for 32-bit preemptive multitasking, like that available in OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32 API first introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility being preserved through a technique known as "thunking". A new GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped.
Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance and development time. Additionally it was necessary to carry over design decisions from earlier versions of Windows for reasons of backwards compatibility, even if these design decisions no longer matched a more modern computing environment. These factors immediately began to impact the operating system's efficiency and stability.
Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on August 24, 1995.
Microsoft went on to release five different versions of Windows 95:
Windows 95 – original release
Windows 95 A – included Windows 95 OSR1 slipstreamed into the installation.
Windows 95 B – (OSR2) included several major enhancements, Internet Explorer (IE) 3.0 and full FAT32 file system support.
Windows 95 B USB – (OSR2.1) included basic USB support.
Windows 95 C – (OSR2.5) included all the above features, plus IE 4.0. This was the last 95 version produced.
OSR2, OSR2.1, and OSR2.5 were not released to the general public, rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity).
The first Microsoft Plus! add-on pack was sold for Windows 95.
Windows 98
On June 25, 1998, Microsoft released Windows 98. It included new hardware drivers and better support for the FAT32 file system which allows support for disk partitions larger than the 2 GB maximum accepted by Windows 95. The USB support in Windows 98 was more robust than the basic support provided by the OEM editions of Windows 95. It also controversially integrated the Internet Explorer 4 browser into the Windows GUI and Windows Explorer file manager.
On May 5, 1999, Microsoft released Windows 98 Second Edition, an interim release whose notable features were the addition of Internet Connection Sharing and improved WDM audio and modem support. Internet Connection Sharing is a form of network address translation, allowing several machines on a LAN (Local Area Network) to share a single Internet connection. Windows 98 Second Edition has certain improvements over the original release: hardware support through device drivers was increased, and many minor problems present in the original Windows 98 were found and fixed, making it, according to many, the most stable release of the Windows 9x family, to the extent that commentators used to say that Windows 98's beta version was more stable than Windows 95's final (gamma) version.
Windows Me
On September 14, 2000, Microsoft introduced Windows Me (Millennium Edition), which upgraded Windows 98 with enhanced multimedia and Internet features. It also introduced the first version of System Restore, which allowed users to revert their system state to a previous "known-good" point in the case of system failure. The first version of Windows Movie Maker was introduced as well.
Windows Me was conceived as a quick one-year project that served as a stopgap release between Windows 98 and Whistler (soon to be renamed to Windows XP). Many of the new features were available from the Windows Update site as updates for older Windows versions. As a result, Windows Me was not acknowledged as a distinct operating system along the lines of 95 or 98, and is often included in the Windows 9x series.
Windows Me was criticized by users for its instability and unreliability, due to frequent freezes and crashes. A PC World article dubbed Windows Me the "Mistake Edition" and placed it 4th in their "Worst Tech Products of All Time" feature.
The inability of users to easily boot into real-mode MS-DOS, as they could in Windows 95 and 98, led users to quickly learn how to hack their Windows Me installations to restore this capability.
Decline
The release of Windows 2000 marked a shift in the user experience between the Windows 9x series and the Windows NT series. Windows NT 4.0 suffered from a lack of support for USB, Plug and Play, and DirectX, preventing its users from playing contemporary games, whereas Windows 2000 featured an updated user interface, and better support for both Plug and Play and USB.
The release of Windows XP confirmed the change of direction for Microsoft, bringing the consumer and business operating systems together under Windows NT.
One by one, support for the Windows 9x series ended, and Microsoft stopped selling the software to end users, then later to OEMs. By March 2004, it was impossible to purchase any versions of the Windows 9x series.
End of service life
Microsoft continued to support the use of the Windows 9x series until July 11, 2006, when extended support ended for Windows 98, Windows 98 Second Edition (SE), and Windows Millennium Edition (Me) (extended support for Windows 95 ended on December 31, 2001).
Microsoft DirectX, a set of standard gaming APIs, stopped being updated on Windows 95 at Version 8.0a. The last version of DirectX supported for Windows 98 and Me is 9.0c.
Support for Microsoft Internet Explorer running on any Windows 9x system has also since ended. Internet Explorer 5.5 with Service Pack 2 is the last version of Internet Explorer compatible with Windows 95 and Internet Explorer 6 with Service Pack 1 is the last version compatible with Windows 98 and Me. Internet Explorer 7, the first major update to Internet Explorer 6 in half a decade, was only available for Windows XP SP2 and Windows Vista.
The Windows Update website continued to be available for Windows 98, Windows 98 SE, and Windows Me after their end-of-support date (Windows Update was never available for Windows 95); however, in 2011, Microsoft retired the Windows Update v4 website and removed the updates for Windows 98, Windows 98 SE, and Windows Me from its servers. Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows Me (and XP) would end on July 31, 2019.
The growing backlog of important updates that these systems no longer receive has slowly made Windows 9x even less practical for everyday use. Today, even open source projects such as Mozilla Firefox will not run on Windows 9x without rework.
RetroZilla is a fork of Gecko 1.8.1 aimed at bringing "improved compatibility on the modern web" for versions of Windows as old as Windows 95 and NT 4.0. The latest version, 2.2, was released in February 2019 and added support for TLS 1.2.
Design
Kernel
Windows 9x is a series of hybrid 16/32-bit operating systems.
Like most operating systems, Windows 9x consists of kernel space and user space memory. Although Windows 9x features some memory protection, it does not protect the first megabyte of memory from userland applications, for compatibility reasons. This area of memory contains code critical to the functioning of the operating system, and a faulty application that accidentally writes into it can corrupt this critical memory, usually crashing or freezing the operating system; this made faulty applications a persistent source of instability.
User mode
The user-mode parts of Windows 9x consist of three subsystems: the Win16 subsystem, the Win32 subsystem and MS-DOS.
Windows 9x/Me sets aside two 64 KB memory regions for GDI and heap resources. Running multiple applications, running applications with numerous GDI elements, or running applications over a long span of time could exhaust these memory areas. If free system resources dropped below 10%, Windows would become unstable and likely crash.
Kernel mode
The kernel-mode parts consist of the Virtual Machine Manager (VMM), the Installable File System Manager (IFSHLP), the Configuration Manager, and, in Windows 98 and later, the WDM Driver Manager (NTKERN). As a 32-bit operating system, Windows 9x gives each process a 4 GiB virtual address space, divided into a lower 2 GiB for applications and an upper 2 GiB for the kernel.
Registry
Like Windows NT, Windows 9x stores user-specific and configuration-specific settings in a large information database called the Windows registry. Hardware-specific settings are also stored in the registry, and many device drivers use the registry to load configuration data. Previous versions of Windows used files such as AUTOEXEC.BAT, CONFIG.SYS, WIN.INI, SYSTEM.INI and other files with an .INI extension to maintain configuration settings. As Windows became more complex and incorporated more features, .INI files became too unwieldy for the limitations of the then-current FAT filesystem. Backwards-compatibility with .INI files was maintained until Windows XP succeeded the 9x and NT lines.
Although Microsoft discourages using .INI files in favor of Registry entries, a large number of applications (particularly 16-bit Windows-based applications) still use .INI files. Windows 9x supports .INI files solely for compatibility with those applications and related tools (such as setup programs). The AUTOEXEC.BAT and CONFIG.SYS files also still exist for compatibility with real-mode system components and to allow users to change certain default system settings such as the PATH environment variable.
The registry consists of two files: User.dat and System.dat. In Windows Me, Classes.dat was added.
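As a rough illustration of the registry's hierarchical key/value structure, the following sketch reads a single value using Python's standard winreg module. Windows 9x itself predates this module, the sketch requires a modern Windows system, and the key path shown is only an example, not a key guaranteed to exist:

```python
import winreg

# Open a key under HKEY_LOCAL_MACHINE. The path below is a
# hypothetical example chosen for illustration.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    r"SOFTWARE\Microsoft\Windows\CurrentVersion") as key:
    # QueryValueEx returns (value, type); REG_SZ values come back as str.
    value, value_type = winreg.QueryValueEx(key, "ProgramFilesDir")
    print(value)
```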
Virtual Machine Manager
The Virtual Machine Manager (VMM) is the 32-bit protected mode kernel at the core of Windows 9x. Its primary responsibility is to create, run, monitor and terminate virtual machines. The VMM provides services that manage memory, processes, interrupts and protection faults. The VMM works with virtual devices (loadable kernel modules, which consist mostly of 32-bit ring 0 or kernel mode code, but may include other types of code, such as a 16-bit real mode initialisation segment) to allow those virtual devices to intercept interrupts and faults to control the access that an application has to hardware devices and installed software. Both the VMM and virtual device drivers run in a single, 32-bit, flat model address space at privilege level 0 (also called ring 0). The VMM provides multi-threaded, preemptive multitasking. It runs multiple applications simultaneously by sharing CPU (central processing unit) time between the threads in which the applications and virtual machines run.
The VMM is also responsible for creating MS-DOS environments for system processes and Windows applications that still need to run in MS-DOS mode. It is the replacement for WIN386.EXE in Windows 3.x, and the file vmm32.vxd is a compressed archive containing most of the core VxDs, including VMM.vxd itself and ifsmgr.vxd (which facilitates file system access without the need to call the real-mode file system code of the DOS kernel).
Software support
Unicode
Partial support for Unicode can be installed on Windows 9x through the Microsoft Layer for Unicode.
File systems
Windows 9x does not natively support NTFS or HPFS, but there are third-party solutions which allow Windows 9x to have read-only access to NTFS volumes.
Early versions of Windows 95 did not support FAT32.
Like Windows for Workgroups 3.11, Windows 9x provides support for 32-bit file access based on IFSHLP.SYS, and unlike Windows 3.x, Windows 9x has support for the VFAT file system, allowing file names of up to 255 characters instead of the 8.3 filenames of earlier versions.
Event logging and tracing
Windows 9x has no support for the event logging and tracing or error reporting features of the Windows NT family of operating systems, although software like Norton CrashGuard can be used to achieve similar capabilities.
Security
Windows 9x is designed as a single-user system; thus, its security model is much less effective than the one in Windows NT. One reason for this is the FAT file systems (including FAT12/FAT16/FAT32), which are the only ones that Windows 9x supports officially, though Windows NT also supports FAT12 and FAT16 (but not FAT32), and Windows 9x can be extended to read and write NTFS volumes using third-party Installable File System drivers. FAT systems have very limited security: every user that has access to a FAT drive also has access to all files on that drive, and the FAT file systems provide neither the access control lists nor the file-system-level encryption of NTFS.
Some operating systems that were available at the same time as Windows 9x are either multi-user or have multiple user accounts with different access privileges, which allows important system files (such as the kernel image) to be immutable under most user accounts. In contrast, while Windows 95 and later operating systems offer the option of having profiles for multiple users, they have no concept of access privileges, making them roughly equivalent to a single-user, single-account operating system; this means that all processes can modify all files on the system that aren't open, in addition to being able to modify the boot sector and perform other low-level hard drive modifications. This enables viruses and other clandestinely installed software to integrate themselves with the operating system in a way that is difficult for ordinary users to detect or undo. The profile support in the Windows 9x family is meant for convenience only; unless some registry keys are modified, the system can be accessed by pressing "Cancel" at login, even if all profiles have a password. Windows 95's default login dialog box also allows new user profiles to be created without having to log in first.
Users and software can render the operating system unable to function by deleting or overwriting important system files from the hard disk. Users and software are also free to change configuration files in such a way that the operating system is unable to boot or properly function.
Installation software often replaced and deleted system files without properly checking if the file was still in use or of a newer version. This created a phenomenon often referred to as DLL hell.
Windows Me introduced System File Protection and System Restore to handle common problems caused by this issue.
Network sharing
Windows 9x offers share-level access control security for file and printer sharing, as well as user-level access control if a Windows NT-based operating system is available on the network. In contrast, Windows NT-based operating systems offer only user-level access control, but it is integrated with the operating system's own user account security mechanism.
Hardware support
Drivers
Device drivers in Windows 9x can be virtual device drivers or (starting with Windows 98) WDM drivers. VxDs usually have the filename extension .vxd or .386, whereas WDM compatible drivers usually use the extension .sys. The 32-bit VxD message server (msgsrv32) is a program that is able to load virtual device drivers (VxDs) at startup and then handle communication with the drivers. Additionally, the message server performs several background functions, including loading the Windows shell (such as Explorer.exe or Progman.exe).
Another type of device driver is the .DRV driver. These drivers are loaded in user mode and are commonly used to control devices such as multimedia devices. To provide access to these devices, a dynamic link library is required (such as MMSYSTEM.DLL).
Windows 9x retains backwards compatibility with many drivers made for Windows 3.x and MS-DOS. Using MS-DOS drivers can limit performance and stability due to their use of conventional memory and need to run in real mode which requires the CPU to switch in and out of protected mode.
Drivers written for Windows 9x/Windows Me are loaded into the same address space as the kernel. This means that drivers can, by accident or design, overwrite critical sections of the operating system. Doing this can lead to system crashes, freezes and disk corruption. Faulty operating system drivers were a source of instability for the operating system.
Other monolithic and hybrid kernels, like Linux and Windows NT, are also susceptible to malfunctioning drivers impeding the kernel's operation.
Often the developers of drivers and applications had insufficient experience with creating programs for the 'new' system, causing many errors that users generally described as "system errors", even when the error was not caused by parts of Windows or DOS. Microsoft has repeatedly redesigned the Windows driver architecture since the release of Windows 95 as a result.
CPU and bus technologies
Windows 9x has no native support for hyper-threading, Data Execution Prevention, symmetric multiprocessing, or multi-core processors.
Windows 9x has no native support for SATA host bus adapters (nor did Windows 2000 or Windows XP), or for USB drives (except Windows Me). There are, however, many SATA-I controllers for which Windows 98/Me drivers exist, and USB mass storage support has been added to Windows 95 OSR2 and Windows 98 through third-party drivers. Hardware driver support for Windows 98/Me began to decline in 2005, most notably for motherboard chipsets and video cards.
Early versions of Windows 95 had no support for USB or AGP acceleration.
MS-DOS
Windows 95 was able to reduce the role of MS-DOS in Windows much further than had been done in Windows 3.1x and earlier. According to Microsoft developer Raymond Chen, MS-DOS served two purposes in Windows 95: as the boot loader, and as the 16-bit legacy device driver layer.
When Windows 95 started up, MS-DOS loaded, processed CONFIG.SYS, launched COMMAND.COM, ran AUTOEXEC.BAT and finally ran WIN.COM. The WIN.COM program used MS-DOS to load the virtual machine manager, read SYSTEM.INI, load the virtual device drivers, and then turn off any running copies of EMM386 and switch into protected mode. Once in protected mode, the virtual device drivers (VxDs) transferred all state information from MS-DOS to the 32-bit file system manager, and then shut off MS-DOS. These VxDs allow Windows 9x to interact with hardware resources directly, providing low-level functionality such as 32-bit disk access and memory management. All future file system operations would get routed to the 32-bit file system manager. In Windows Me, win.com was no longer executed during the startup process; instead the system went directly to execute VMM32.VXD from IO.SYS.
The second role of MS-DOS (as the 16-bit legacy device driver layer) was as a backward compatibility tool for running DOS programs in Windows. Many MS-DOS programs and device drivers interacted with DOS in a low-level way, for example, by patching low-level BIOS interrupts such as int 13h, the low-level disk I/O interrupt. When a program issued an int 21h call to access MS-DOS, the call would go first to the 32-bit file system manager, which would attempt to detect this sort of patching. If it detected that the program had tried to hook into DOS, it would jump back into the 16-bit code to let the hook run. A 16-bit driver called IFSMGR.SYS would previously have been loaded by CONFIG.SYS, the job of which was to hook MS-DOS first before the other drivers and programs got a chance, then jump from 16-bit code back into 32-bit code, when the DOS program had finished, to let the 32-bit file system manager continue its work. According to Windows developer Raymond Chen, "MS-DOS was just an extremely elaborate decoy. Any 16-bit drivers and programs would patch or hook what they thought was the real MS-DOS, but which was in reality just a decoy. If the 32-bit file system manager detected that somebody bought the decoy, it told the decoy to quack."
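A conceptual model of this dispatch logic is sketched below. All names are invented and this is not actual Windows code; it only illustrates the routing described above:

```python
# Illustrative model of how the 32-bit file system manager handled
# int 21h calls, per the description above. All names are invented.

HOOKED_VECTORS = set()  # 16-bit programs that patched the DOS "decoy"

def int21h(call: int, args: tuple):
    # The 32-bit file system manager sees the call first.
    if HOOKED_VECTORS:
        # Somebody patched the decoy: drop back to 16-bit code so the
        # hook can run, then resume 32-bit processing afterwards.
        run_16bit_hooks(call, args)
    # Normal path: route the operation to the 32-bit file system manager.
    return fs_manager_32bit(call, args)

def run_16bit_hooks(call: int, args: tuple):
    print(f"letting 16-bit hooks observe call {call:#x}")

def fs_manager_32bit(call: int, args: tuple):
    print(f"handling call {call:#x} in 32-bit code")
```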
MS-DOS Virtualization
Windows 9x can run MS-DOS applications within itself using a method called "Virtualization", where an application is run on a Virtual DOS machine.
MS-DOS Mode
Windows 95 and Windows 98 also retain support for DOS applications in the form of being able to boot into a native "DOS Mode" (MS-DOS can be booted without booting Windows, without putting the CPU in protected mode). Through Windows 9x's memory managers and other post-DOS improvements, overall system performance and functionality are improved compared to plain DOS. This differs from the emulation used in Windows NT-based operating systems. Some old applications or games may not run properly in a DOS box within Windows and require real DOS Mode.
Having a command line mode outside of the GUI also offers the ability to fix certain system errors without entering the GUI. For example, if a virus is active in GUI mode it can often be safely removed in DOS mode, by deleting its files, which are usually locked while infected in Windows.
Similarly, corrupted registry files, system files or boot files can be restored from the command line. Windows 95 and Windows 98 can be started from DOS Mode by typing 'WIN' <enter> at the command prompt. The Recovery Console played a similar role in Windows 2000, which is a version of Windows NT.
Because DOS was not designed for multitasking purposes, Windows versions such as 9x that are DOS-based lack file system security features such as file permissions. Further, if the user uses 16-bit DOS drivers, Windows can become unstable. Hard disk errors often plagued the Windows 9x series.
User interface
Users can control a Windows 9x-based system through a command-line interface (or CLI), or a graphical user interface (or GUI). For desktop systems, the default mode is usually graphical user interface, where the CLI is available through MS-DOS windows.
The GDI, which is a part of the Win32 and Win16 subsystems, is also a module that is loaded in user mode, unlike Windows NT where the GDI is loaded in kernel mode.
Alpha compositing and therefore transparency effects, such as fade effects in menus, are not supported by the GDI in Windows 9x.
On desktop machines, Windows Explorer is the default user interface, though a variety of additional Windows shell replacements exist.
Other GUIs include LiteStep, bbLean and Program Manager. The GUI provides a means to control the placement and appearance of individual application windows, and interacts with the windowing system.
See also
Comparison of operating systems
Architecture of Windows 9x
MS-DOS 7
References
External links
Computing platforms
9x
Discontinued versions of Microsoft Windows |
454995 | https://en.wikipedia.org/wiki/Deep%20packet%20inspection | Deep packet inspection | Deep packet inspection (DPI) is a type of data processing that inspects in detail the data being sent over a computer network, and may take actions such as alerting, blocking, re-routing, or logging it accordingly. Deep packet inspection is often used to baseline application behavior, analyze network usage, troubleshoot network performance, ensure that data is in the correct format, check for malicious code, eavesdropping, and internet censorship, among other purposes. There are multiple headers for IP packets; network equipment only needs to use the first of these (the IP header) for normal operation, but use of the second header (such as TCP or UDP) is normally considered to be shallow packet inspection (usually called stateful packet inspection) despite this definition.
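As a minimal sketch of the distinction, the following Python fragment performs only shallow inspection: it reads the IPv4 header and the transport header, and stops there. The offsets follow the standard IPv4 and TCP/UDP header layouts, and the packet is assumed to be a raw IPv4 datagram in a bytes object:

```python
import struct

def classify(packet: bytes) -> str:
    """Shallow inspection: IP header plus the transport header only."""
    version_ihl = packet[0]
    ihl = (version_ihl & 0x0F) * 4   # IP header length in bytes
    proto = packet[9]                # 6 = TCP, 17 = UDP
    if proto in (6, 17):
        src_port, dst_port = struct.unpack_from("!HH", packet, ihl)
        name = "TCP" if proto == 6 else "UDP"
        return f"{name} {src_port} -> {dst_port}"
    return f"IP protocol {proto}"

# Deep inspection would continue past the transport header into the
# application payload, i.e. packet[ihl + transport_header_length:].
```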
There are multiple ways to acquire packets for deep packet inspection. Using port mirroring (sometimes called a SPAN port) is a very common way, as well as physically inserting a network tap which duplicates and sends the data stream to an analyzer tool for inspection.
Deep Packet Inspection (and filtering) enables advanced network management, user service, and security functions as well as internet data mining, eavesdropping, and internet censorship. Although DPI has been used for Internet management for many years, some advocates of net neutrality fear that the technique may be used anticompetitively or to reduce the openness of the Internet.
DPI is used in a wide range of applications, at the so-called "enterprise" level (corporations and larger institutions), in telecommunications service providers, and in governments.
Background
DPI technology has a long and technologically advanced history, starting in the 1990s, before the technology entered what is seen today as common, mainstream deployments. The technology traces its roots back over 30 years, when many of the pioneers contributed their inventions for use among industry participants, through common standards and early innovation such as the following:
RMON
Sniffer
Wireshark
Essential DPI functionality includes analysis of packet headers and protocol fields. For example, Wireshark offers essential DPI functionality through its numerous dissectors that display field names and content and, in some cases, offer interpretation of field values.
Some security solutions that offer DPI combine the functionality of an intrusion detection system (IDS) and an intrusion prevention system (IPS) with a traditional stateful firewall. This combination makes it possible to detect certain attacks that neither the IDS/IPS nor the stateful firewall can catch on their own. Stateful firewalls, while able to see the beginning and end of a packet flow, cannot catch events on their own that would be out of bounds for a particular application. While IDSs are able to detect intrusions, they have very little capability to block such an attack. DPI is used to prevent attacks from viruses and worms at wire speeds. More specifically, DPI can be effective against buffer overflow attacks, denial-of-service attacks (DoS), sophisticated intrusions, and the small percentage of worms that fit within a single packet.
DPI-enabled devices have the ability to look at Layer 2 and beyond Layer 3 of the OSI model. In some cases, DPI can be invoked to look through Layer 2-7 of the OSI model. This includes headers and data protocol structures as well as the payload of the message. DPI functionality is invoked when a device looks or takes other action based on information beyond Layer 3 of the OSI model. DPI can identify and classify traffic based on a signature database that includes information extracted from the data part of a packet, allowing finer control than classification based only on header information. End points can utilize encryption and obfuscation techniques to evade DPI actions in many cases.
A classified packet may be redirected, marked/tagged (see quality of service), blocked, rate limited, and reported to a reporting agent in the network. In this way, HTTP errors of different classifications may be identified and forwarded for analysis. Many DPI devices can identify packet flows (rather than doing packet-by-packet analysis), allowing control actions based on accumulated flow information.
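A toy illustration of the signature-based classification described above follows. The signature set here is a small stand-in for illustration, not a real DPI database, though the byte patterns shown do correspond to well-known protocol prefixes:

```python
# Toy payload-signature classifier. The table is illustrative only.
SIGNATURES = {
    b"GET ":                 "HTTP",           # HTTP request line
    b"\x16\x03":             "TLS handshake",  # TLS record header
    b"BitTorrent protocol":  "BitTorrent",     # BT handshake string
}

def classify_payload(payload: bytes) -> str:
    for magic, proto in SIGNATURES.items():
        # Match either at the start or anywhere in the first 64 bytes.
        if payload.startswith(magic) or magic in payload[:64]:
            return proto
    return "unknown"

print(classify_payload(b"GET /index.html HTTP/1.1\r\n"))  # -> HTTP
```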
At the enterprise level
Initially security at the enterprise level was just a perimeter discipline, with a dominant philosophy of keeping unauthorized users out, and shielding authorized users from the outside world. The most frequently used tool for accomplishing this has been a stateful firewall. It can permit fine-grained control of access from the outside world to pre-defined destinations on the internal network, as well as permitting access back to other hosts only if a request to the outside world has been made previously.
Vulnerabilities exist at network layers, however, that are not visible to a stateful firewall. Also, an increase in the use of laptops in enterprise makes it more difficult to prevent threats such as viruses, worms, and spyware from penetrating the corporate network, as many users will connect the laptop to less-secure networks such as home broadband connections or wireless networks in public locations. Firewalls also do not distinguish between permitted and forbidden uses of legitimately-accessed applications. DPI enables IT administrators and security officials to set policies and enforce them at all layers, including the application and user layer to help combat those threats.
Deep Packet Inspection is able to detect a few kinds of buffer overflow attacks.
DPI may be used by enterprise for Data Leak Prevention (DLP). When an e-mail user tries to send a protected file, the user may be given information on how to get the proper clearance to send the file.
At network/Internet service providers
In addition to using DPI to secure their internal networks, Internet service providers also apply it on the public networks provided to customers. Common uses of DPI by ISPs are lawful intercept, policy definition and enforcement, targeted advertising, quality of service, offering tiered services, and copyright enforcement.
Lawful interception
Service providers are required by almost all governments worldwide to enable lawful intercept capabilities. Decades ago in a legacy telephone environment, this was met by creating a traffic access point (TAP) using an intercepting proxy server that connects to the government's surveillance equipment. The acquisition component of this functionality may be provided in many ways, including DPI; DPI-enabled products that are "LI or CALEA-compliant" can be used – when directed by a court order – to access a user's datastream.
Policy definition and enforcement
Service providers obligated by the service-level agreement with their customers to provide a certain level of service and at the same time, enforce an acceptable use policy, may make use of DPI to implement certain policies that cover copyright infringements, illegal materials, and unfair use of bandwidth. In some countries the ISPs are required to perform filtering, depending on the country's laws. DPI allows service providers to "readily know the packets of information you are receiving online—from e-mail, to websites, to sharing of music, video and software downloads". Policies can be defined that allow or disallow connection to or from an IP address, certain protocols, or even heuristics that identify a certain application or behavior.
Targeted advertising
Because ISPs route the traffic of all of their customers, they are able to monitor web-browsing habits in a very detailed way, allowing them to gain information about their customers' interests, which can be used by companies specializing in targeted advertising. At least 100,000 United States customers are tracked this way, and as many as 10% of U.S. customers have been tracked at some point. Technology providers include NebuAd, Front Porch, and Phorm. U.S. ISPs monitoring their customers include Knology and Wide Open West. In addition, the United Kingdom ISP British Telecom has admitted testing solutions from Phorm without their customers' knowledge or consent.
Quality of service
DPI can be used against net neutrality.
Applications such as peer-to-peer (P2P) traffic present increasing problems for broadband service providers. Typically, P2P traffic is used by applications that do file sharing. These may be any kind of files (i.e. documents, music, videos, or applications). Due to the frequently large size of media files being transferred, P2P drives increasing traffic loads, requiring additional network capacity. Service providers say a minority of users generate large quantities of P2P traffic and degrade performance for the majority of broadband subscribers using applications such as e-mail or Web browsing which use less bandwidth. Poor network performance increases customer dissatisfaction and leads to a decline in service revenues.
DPI allows the operators to oversell their available bandwidth while ensuring equitable bandwidth distribution to all users by preventing network congestion. Additionally, a higher priority can be allocated to a VoIP or video conferencing call which requires low latency versus web browsing which does not. This is the approach that service providers use to dynamically allocate bandwidth according to traffic that is passing through their networks.
Tiered services
Mobile and broadband service providers use DPI as a means to implement tiered service plans, to differentiate "walled garden" services from "value added", "all-you-can-eat" and "one-size-fits-all" data services. By being able to charge for a "walled garden", per application, per service, or "all-you-can-eat" rather than a "one-size-fits-all" package, the operator can tailor its offering to the individual subscriber and increase its average revenue per user (ARPU). A policy is created per user or user group, and the DPI system in turn enforces that policy, allowing the user access to different services and applications.
Copyright enforcement
ISPs are sometimes requested by copyright owners or required by courts or official policy to help enforce copyrights. In 2006, one of Denmark's largest ISPs, Tele2, was given a court injunction and told it must block its customers from accessing The Pirate Bay, a launching point for BitTorrent.
Instead of prosecuting file sharers one at a time, the International Federation of the Phonographic Industry (IFPI) and the big four record labels EMI, Sony BMG, Universal Music, and Warner Music have sued ISPs such as Eircom for not doing enough about protecting their copyrights. The IFPI wants ISPs to filter traffic to remove illicitly uploaded and downloaded copyrighted material from their network, despite European directive 2000/31/EC clearly stating that ISPs may not be put under a general obligation to monitor the information they transmit, and directive 2002/58/EC granting European citizens a right to privacy of communications.
The Motion Picture Association of America (MPAA) which enforces movie copyrights, has taken the position with the Federal Communications Commission (FCC) that network neutrality could hurt anti-piracy techniques such as deep packet inspection and other forms of filtering.
Statistics
DPI allows ISPs to gather statistical information about use patterns by user group. For instance, it might be of interest whether users with a 2 Mbit/s connection use the network in a dissimilar manner to users with a 5 Mbit/s connection. Access to trend data also helps network planning.
By governments
In addition to using DPI for the security of their own networks, governments in North America, Europe, and Asia use DPI for various purposes such as surveillance and censorship. Many of these programs are classified.
United States
FCC adopts Internet CALEA requirements: The FCC, pursuant to its mandate from the U.S. Congress, and in line with the policies of most countries worldwide, has required that all telecommunication providers, including Internet services, be capable of supporting the execution of a court order to provide real-time communication forensics of specified users. In 2006, the FCC adopted new Title 47, Subpart Z, rules requiring Internet Access Providers to meet these requirements. DPI was one of the platforms essential to meeting this requirement and has been deployed for this purpose throughout the U.S.
The National Security Agency (NSA), with cooperation from AT&T Inc., has used Deep Packet Inspection to make internet traffic surveillance, sorting, and forwarding more intelligent. The DPI is used to find which packets are carrying e-mail or a Voice over Internet Protocol (VoIP) telephone call.
Traffic associated with AT&T's Common Backbone was "split" between two fibers, dividing the signal so that 50 percent of the signal strength went to each output fiber. One of the output fibers was diverted to a secure room; the other carried communications on to AT&T's switching equipment. The secure room contained Narus traffic analyzers and logic servers; Narus states that such devices are capable of real-time data collection (recording data for consideration) and capture at 10 gigabits per second. Certain traffic was selected and sent over a dedicated line to a "central location" for analysis. According to an affidavit by expert witness J. Scott Marcus, a former senior advisor for Internet Technology at the US Federal Communications Commission, the diverted traffic "represented all, or substantially all, of AT&T’s peering traffic in the San Francisco Bay area", and thus, "the designers of the ... configuration made no attempt, in terms of location or position of the fiber split, to exclude data sources primarily of domestic data".
Narus's Semantic Traffic Analyzer software, which runs on IBM or Dell Linux servers using DPI, sorts through IP traffic at 10Gbit/s to pick out specific messages based on a targeted e-mail address, IP address or, in the case of VoIP, telephone number. President George W. Bush and Attorney General Alberto R. Gonzales have asserted that they believe the president has the authority to order secret intercepts of telephone and e-mail exchanges between people inside the United States and their contacts abroad without obtaining a FISA warrant.
The Defense Information Systems Agency has developed a sensor platform that uses Deep Packet Inspection.
China
The Chinese government uses deep packet inspection to monitor and censor network traffic and content that it claims is harmful to Chinese citizens or state interests. This material includes pornography, information on religion, and political dissent. Chinese network ISPs use DPI to see if any sensitive keyword is going through their network; if so, the connection is cut. People within China often find themselves blocked while accessing websites containing content related to Taiwanese and Tibetan independence, Falun Gong, the Dalai Lama, the Tiananmen Square protests and massacre of 1989, political parties that oppose that of the ruling Communist party, or a variety of anti-Communist movements, as those materials have already been flagged as DPI-sensitive keywords. China previously blocked all VoIP traffic in and out of the country, but many available VoIP applications now function in China. Voice traffic in Skype is unaffected, although text messages are subject to filtering, and messages containing sensitive material, such as curse-words, are simply not delivered, with no notification provided to either participant in the conversation. China also blocks visual media sites such as YouTube.com and various photography and blogging sites.
Iran
The Iranian government purchased a system, reportedly for deep packet inspection, in 2008 from Nokia Siemens Networks (NSN), a joint venture of Siemens AG, the German conglomerate, and Nokia Corp., the Finnish cell telephone company; NSN is now Nokia Solutions and Networks, according to a report in the Wall Street Journal in June 2009, quoting NSN spokesperson Ben Roome. According to unnamed experts cited in the article, the system "enables authorities to not only block communication but to monitor it to gather information about individuals, as well as alter it for disinformation purposes".
The system was purchased by the Telecommunication Infrastructure Co., part of the Iranian government's telecom monopoly. According to the Journal, NSN "provided equipment to Iran last year under the internationally recognized concept of 'lawful intercept,' said Mr. Roome. That relates to intercepting data for the purposes of combating terrorism, child pornography, drug trafficking, and other criminal activities carried out online, a capability that most if not all telecom companies have, he said.... The monitoring center that Nokia Siemens Networks sold to Iran was described in a company brochure as allowing 'the monitoring and interception of all types of voice and data communication on all networks.' The joint venture exited the business that included the monitoring equipment, what it called 'intelligence solution,' at the end of March, by selling it to Perusa Partners Fund 1 LP, a Munich-based investment firm, Mr. Roome said. He said the company determined it was no longer part of its core business.
The NSN system followed on purchases by Iran from Secure Computing Corp. earlier in the decade.
Questions have been raised about the reporting reliability of the Journal report by David Isenberg, an independent Washington, D.C.-based analyst and Cato Institute adjunct scholar, who says that Mr. Roome is denying the quotes attributed to him and that he, Isenberg, had similar complaints about one of the same Journal reporters in an earlier story. NSN has issued the following denial: NSN "has not provided any deep packet inspection, web censorship or Internet filtering capability to Iran". A concurrent article in The New York Times stated the NSN sale had been covered in a "spate of news reports in April [2009], including The Washington Times," and reviewed censorship of the Internet and other media in the country, but did not mention DPI.
According to Walid Al-Saqaf, the developer of the internet censorship circumventor Alkasir, Iran was using deep packet inspection in February 2012, bringing internet speeds in the entire country to a near standstill. This briefly eliminated access to tools such as Tor and Alkasir.
Russian Federation
DPI is not yet mandated in Russia. Federal Law No.139 enforces blocking websites on the Russian Internet blacklist using IP filtering, but does not force ISPs into analyzing the data part of packets. Yet some ISPs still use different DPI solutions to implement blacklisting. For 2019, the governmental agency Roskomnadzor is planning a nationwide rollout of DPI after the pilot project in one of the country's regions, at an estimated cost of 20 billion roubles (US$300M).
Some human rights activists consider Deep Packet inspection contrary to Article 23 of the Constitution of the Russian Federation, though a legal process to prove or refute that has never taken place.
Singapore
The city state reportedly employs deep packet inspection of Internet traffic.
Syria
The state reportedly employs deep packet inspection of Internet traffic, to analyze and block forbidden transit.
Malaysia
The incumbent Malaysian government, headed by Barisan Nasional, was said to be using DPI against a political opponent during the run-up to the 13th general elections held on 5 May 2013.
The purpose of DPI, in this instance, was to block and/or hinder access to selected websites, e.g. Facebook accounts, blogs and news portals.
Egypt
Egypt reportedly began to employ deep packet inspection in 2015, something that officials of the Egyptian National Telecom Regulatory Authority (NTRA) have constantly denied. The practice came to public attention when the country decided to block the encrypted messaging app Signal, as announced by the application's developer.
In April 2017, all VoIP applications, including FaceTime, Facebook Messenger, Viber, WhatsApp calls and Skype, were blocked in the country.
Vietnam
Vietnam launched its network security center and required ISPs to upgrade their hardware systems to use deep packet inspection to block Internet traffic.
Net neutrality
People and organizations concerned about privacy or network neutrality find inspection of the content layers of the Internet protocol to be offensive, saying for example, "the 'Net was built on open access and non-discrimination of packets!" Critics of network neutrality rules, meanwhile, call them "a solution in search of a problem" and say that net neutrality rules would reduce incentives to upgrade networks and launch next-generation network services.
Deep packet inspection is considered by many to undermine the infrastructure of the internet.
Encryption and tunneling subverting DPI
With increased use of HTTPS and privacy tunneling using VPNs, the effectiveness of DPI is coming into question. In response, many web application firewalls now offer HTTPS inspection, where they decrypt HTTPS traffic to analyse it. The WAF can either terminate the encryption, so the connection between WAF and client browser uses plain HTTP, or re-encrypt the data using its own HTTPS certificate, which must be distributed to clients beforehand. The techniques used in HTTPS/SSL inspection (also known as HTTPS/SSL interception) are the same as those used by man-in-the-middle (MITM) attacks.
It works like this:
The client wants to connect to https://www.targetwebsite.com.
Traffic goes through a firewall or security product.
The firewall works as a transparent proxy.
The firewall creates an SSL certificate signed by its own "CompanyFirewall CA" (see the sketch after this list).
The firewall presents this "CompanyFirewall CA"-signed certificate to the client (not the targetwebsite.com certificate).
At the same time, the firewall connects to https://www.targetwebsite.com on its own.
targetwebsite.com presents its officially signed certificate (signed by a trusted CA).
The firewall checks the certificate trust chain on its own.
The firewall now works as a man-in-the-middle.
Traffic from the client is decrypted (with key exchange information from the client), analysed (for harmful traffic, policy violations or viruses), re-encrypted (with key exchange information from targetwebsite.com) and sent to targetwebsite.com.
Traffic from targetwebsite.com is likewise decrypted (with key exchange information from targetwebsite.com), analysed (as above), re-encrypted (with key exchange information from the client) and sent to the client.
The firewall product can read all information exchanged between the SSL client and the SSL server (targetwebsite.com).
This can be done with any TLS-terminated connection (not only HTTPS), as long as the firewall product can modify the trust store of the SSL client.
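A rough sketch of the certificate-forging step above, written with the Python cryptography library, follows. The names, key sizes and lifetimes are illustrative assumptions, not any real product's behavior:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

def make_cert(subject_cn, issuer_name, public_key, signing_key):
    """Build and sign a short-lived certificate for subject_cn."""
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name(
            [x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)]))
        .issuer_name(issuer_name)
        .public_key(public_key)
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=30))
        .sign(signing_key, hashes.SHA256())
    )

# The interception CA ("CompanyFirewall CA"), whose certificate must be
# pre-installed in every client's trust store beforehand.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name(
    [x509.NameAttribute(NameOID.COMMON_NAME, "CompanyFirewall CA")])

# Per-connection leaf certificate forged for the site the client asked for.
leaf_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
leaf_cert = make_cert("www.targetwebsite.com", ca_name,
                      leaf_key.public_key(), ca_key)
```

The only reason the client accepts the forged leaf certificate is that the CA certificate was placed in its trust store beforehand; without that step, the browser would reject the connection.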
Infrastructure security
Traditionally, the mantra that has served ISPs well has been to operate only at layer 4 and below of the OSI model. This is because simply deciding where packets go and routing them is comparatively easy to handle securely. This traditional model still allows ISPs to accomplish required tasks safely, such as restricting bandwidth depending on the amount of bandwidth that is used (layer 4 and below) rather than per protocol or application type (layer 7). There is a very strong and often ignored argument that ISP action above layer 4 of the OSI model provides what are known in the security community as 'stepping stones' or platforms from which to conduct man-in-the-middle attacks. This problem is exacerbated by ISPs often choosing cheaper hardware with poor security track records for the very difficult, and arguably impossible to secure, task of deep packet inspection.
OpenBSD's packet filter specifically avoids DPI for the very reason that it cannot be done securely with confidence.
This means that DPI-dependent security services such as TalkTalk's former HomeSafe implementation are actually trading the security of a few (who are protectable, and often already protectable in many more effective ways) for decreased security for all, where users also have far less possibility of mitigating the risk. The HomeSafe service in particular is opt-in for blocking, but its DPI cannot be opted out of, even for business users.
Software
nDPI (a fork of OpenDPI, which has reached end-of-life, maintained by the developers of ntop) is the open source version for non-obfuscated protocols. PACE, another such engine, includes obfuscated and encrypted protocols, which are the types associated with Skype or encrypted BitTorrent. As OpenDPI is no longer maintained, its fork nDPI is actively maintained and has been extended with new protocols including Skype, Webex, Citrix and many others.
L7-Filter is a classifier for Linux's Netfilter that identifies packets based on application layer data. It can classify packets such as Kazaa, HTTP, Jabber, Citrix, Bittorrent, FTP, Gnucleus, eDonkey2000, and others. It classifies streaming, mailing, P2P, VOIP, protocols, and gaming applications. The software has been retired and replaced by the open source Netify DPI Engine.
Hippie (Hi-Performance Protocol Identification Engine) is an open source project which was developed as Linux kernel module. It was developed by Josh Ballard. It supports both DPI as well as firewall functionality.
SPID (Statistical Protocol IDentification) project is based on statistical analysis of network flows to identify application traffic. The SPID algorithm can detect the application layer protocol (layer 7) by signatures (a sequence of bytes at a particular offset in the handshake), by analyzing flow information (packet sizes, etc.) and payload statistics (how frequently the byte value occurs in order to measure entropy) from pcap files. It is just a proof of concept application and currently supports approximately 15 application/protocols such as eDonkey Obfuscation traffic, Skype UDP and TCP, BitTorrent, IMAP, IRC, MSN, and others.
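A short sketch of the payload-statistics idea follows: Shannon entropy over byte frequencies, a hypothetical helper for illustration rather than SPID's actual code. Payloads with entropy near 8 bits per byte look encrypted or compressed, while plaintext protocols score much lower:

```python
import math
from collections import Counter

def payload_entropy(payload: bytes) -> float:
    """Shannon entropy in bits per byte of the given payload."""
    counts = Counter(payload)
    total = len(payload)
    return -sum((n / total) * math.log2(n / total)
                for n in counts.values())

# A plaintext HTTP request scores well below the ~8.0 of random data.
print(payload_entropy(b"GET / HTTP/1.1\r\nHost: example.com\r\n"))
```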
Tstat (TCP STatistic and Analysis Tool) provides insight into traffic patterns and gives details and statistics for numerous applications and protocols.
Libprotoident introduces Lightweight Packet Inspection (LPI), which examines only the first four bytes of payload in each direction. That minimizes privacy concerns while decreasing the disk space needed to store the packet traces necessary for the classification. Libprotoident supports over 200 different protocols, and the classification is based on a combined approach using payload pattern matching, payload size, port numbers, and IP matching.
A French company called Amesys designed and sold an intrusive and massive internet monitoring system, Eagle, to Muammar Gaddafi.
Comparison
A comprehensive comparison of various network traffic classifiers, which depend on Deep Packet Inspection (PACE, OpenDPI, 4 different configurations of L7-filter, NDPI, Libprotoident, and Cisco NBAR), is shown in the Independent Comparison of Popular DPI Tools for Traffic Classification.
Hardware
Greater emphasis is being placed on deep packet inspection in light of the rejection of both the SOPA and PIPA bills. Many current DPI methods are slow and costly, especially for high bandwidth applications. More efficient methods of DPI are being developed. Specialized routers are now able to perform DPI; routers armed with a dictionary of programs will help identify the purposes behind the LAN and internet traffic they are routing. Cisco Systems is now on its second iteration of DPI-enabled routers, with its announcement of the Cisco ISR G2 router.
See also
Common carrier
Data Retention Directive
Deep content inspection
ECHELON
Firewall
Foreign Intelligence Surveillance Act
Golden Shield
Intrusion prevention system
Network neutrality
NSA warrantless surveillance controversy
Packet analyzer
Stateful firewall
Theta Networks
Wireshark
References
External links
What is "Deep Inspection"? by Marcus J. Ranum. Retrieved 10 December 2018.
A collection of essays from industry experts
What Is Deep Packet Inspection and Why the Controversy
White Paper "Deep Packet Inspection – Technology, Applications & Net Neutrality"
Egypt's cyber-crackdown aided by US Company - DPI used by Egyptian government in recent internet crackdown
Deep Packet Inspection puts its stamp on an evolving Internet
Deep Packet Inspection Using Quotient Filter
Computer network security
Internet censorship in China
Internet censorship
Internet privacy
Net neutrality
Packets (information technology) |
455386 | https://en.wikipedia.org/wiki/Mental%20calculator | Mental calculator | Human calculator is a term for a person with a prodigious ability in some area of mental calculation (such as adding, subtracting, multiplying or dividing large numbers).
The world's best mental calculators are invited every two years to compete for the Mental Calculation World Cup. On September 30, 2018, 15-year-old Tomohiro Iseda of Japan succeeded 27-year-old Yuki Kimura of Japan as world champion (2018–2020). Tomohiro Iseda is the third Japanese person to win the Cup, after Naofumi Ogasawara (2012) and Yuki Kimura (2016). Shakuntala Devi from India has often been mentioned in Guinness World Records. Neelakantha Bhanu Prakash from India has often been mentioned in the Limca Book of Records for surpassing the speed of a calculator in addition. Sri Lankan-Malaysian performer Yaashwin Sarawanan was the runner-up in the 2019 Asia's Got Talent.
In 2005, a group of researchers led by Michael W. O'Boyle, an American psychologist previously working in Australia and now at Texas Tech University, used MRI scanning of blood flow during mental operation in computational prodigies. These math prodigies showed increases in blood flow to parts of the brain responsible for mathematical operations during a mental rotation task that were greater than the typical increases.
Mental calculators were in great demand in research centers such as CERN before the advent of modern electronic calculators and computers. See, for instance, Steven B. Smith's 1983 book The Great Mental Calculators, or the 2016 book Hidden Figures and the film adapted from it.
Champion mental calculators
Every two years the world's best mental calculators are invited to participate in The Mental Calculation World Cup, an international competition that attempts to find the world's best mental calculator, and also the best at specific types of mental calculation, such as multiplication or calendar reckoning (a sketch of one calendar-reckoning method follows the list below). The top three final placings from each of the world cups that have been staged to date are shown below.
First Mental Calculation World Cup (Annaberg-Buchholz, 2004)
Second Mental Calculation World Cup (Gießen, 2006)
Third Mental Calculation World Cup (Leipzig, 2008)
Fourth Mental Calculation World Cup (Magdeburg, 2010)
Fifth Mental Calculation World Cup (Gießen, 2012)
Sixth Mental Calculation World Cup (Dresden, 2014)
Seventh Mental Calculation World Cup (Bielefeld, 2016)
Eighth Mental Calculation World Cup (Wolfsburg, 2018)
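Calendar reckoning, mentioned above, asks competitors for the day of the week of a given date. One well-known method is Zeller's congruence, sketched here; this illustrates the arithmetic involved, not necessarily the technique any particular competitor uses:

```python
def zeller(day: int, month: int, year: int) -> str:
    """Day of the week via Zeller's congruence (Gregorian calendar)."""
    if month < 3:          # January and February are counted as
        month += 12        # months 13 and 14 of the previous year
        year -= 1
    k, j = year % 100, year // 100
    h = (day + (13 * (month + 1)) // 5 + k + k // 4 + j // 4 + 5 * j) % 7
    return ["Saturday", "Sunday", "Monday", "Tuesday",
            "Wednesday", "Thursday", "Friday"][h]

# September 30, 2018 (the date Tomohiro Iseda became champion) -> Sunday
print(zeller(30, 9, 2018))
```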
The Mind Sports Olympiad has staged an annual world championships since 1998.
MSO mental calculation gold medal winners
The Mind Sports Organisation recognizes five grandmasters of mental calculation: Robert Fountain (1999), George Lane (2001), Gert Mittring (2005), Chris Bryant (2017) and Wenzel Grüß (2019), and one international master, Andy Robertshaw (2008). In 2021, Aaryan Nitin Shukla became the youngest champion ever at an age of just 11 years.
Mental calculators (deceased)
Aitken, Alexander Craig (1895-1967), New Zealand mathematician
Ampère, André-Marie
Bidder, George Parker
Buxton, Jedediah
Colburn, Zerah
Dase, Johann Zacharias
Devi, Shakuntala
Dysart, Willis (a.k.a. Willie the Wizard)
Eberstark, Hans
Euler, Leonhard
Finkelstein, Salo
Fuller, Thomas
Gauss, Carl Friedrich (1777-1855), German mathematician and physicist
Griffith, Arthur F.
Hamilton, William Rowan
Inaudi, Jacques
Klein, Wim (a.k.a. Willem Klein)
McCartney, Daniel
Neumann, John von
Ramanujan, Srinivasa
Riemann, Bernhard
Safford, Truman Henry
Shelushkov, Igor
Wallis, John
Mental calculators in fiction
Dune
In Frank Herbert's novel Dune, specially trained mental calculators known as Mentats have replaced mechanical computers completely. Several important supporting characters in the novel, namely Piter De Vries and Thufir Hawat, are Mentats. Paul Atreides was originally trained as one without his knowledge. However, these Mentats do not specialize in mathematical calculations, but in total recall of many different kinds of data. For example, Thufir Hawat is able to recite various details of a mining operation, including the number of various pieces of equipment, the people to work them, the profits and costs involved, etc. In the novel he is never depicted as doing actual academic mathematical calculations. Mentats were valued for their capacity as humans to store data, because "thinking machines" are outlawed.
Matilda
In Roald Dahl's novel Matilda, the lead character is portrayed having exceptional computational skills as she computes her father's profit without the need for paper computations. During class (she is a first-year elementary school student), she does large-number multiplication problems in her head almost instantly.
Other
In the 1988 movie Rain Man, Raymond Babbitt, who has savant syndrome, can mentally calculate large numbers, amongst other abilities.
Andrew Jackson "Slipstick" Libby is a calculating prodigy in Robert A. Heinlein's Sci-Fi story Methuselah's Children.
In the USA Network legal drama Suits, the main character, Mike Ross, is asked to multiply considerably large numbers in his head to impress two girls, and subsequently does so.
In Haruki Murakami's novel Hard-Boiled Wonderland and the End of the World, a class of mental calculators known as Calcutecs perform cryptography in a sealed-off portion of their brains, the results of which they are unable to access from their normal waking consciousness.
In the Fox television show Malcolm in the Middle, Malcolm Wilkerson displays astounding feats of automatic mental calculation, which causes him to fear his family will see him as a "freak", and causes his brother to ask, "Is Malcolm a robot?".
In the 1991 movie Little Man Tate, Fred Tate in the audience blurts out the answer during a mental calculation contest.
In the 1990s NBC TV sitcom NewsRadio, reporter/producer Lisa Miller can mentally calculate products, quotients, and square roots effortlessly and almost instantly, on demand.
In the 1997 Sci-Fi thriller Cube, one of the prisoners, Kazan, appears to be mentally disabled, but is revealed later in the film to be an autistic savant who is able to calculate prime factors in his head.
In Darren Aronofsky's 1998 film Pi, Maximillian Cohen is asked a few times by a young child with a calculator to do large multiplications and divisions in his head, which he promptly does, correctly.
In the 1998 film Mercury Rising, a 9-year-old autistic savant with prodigious math abilities cracks a top secret government code.
In the 2006 film Stranger than Fiction, the main character, Harold Crick, is able to perform rapid arithmetic at the request of his co-workers.
In the 2009 Japanese animated film Summer Wars, the main character, mathematical genius Kenji Koiso, is able to mentally break purely mathematical encryption codes generated by the OZ virtual world's security system. He can also mentally calculate the day of the week a person was born, based on their birthday.
In another Fox television show, Fringe, in the third episode of the third season, Olivia and her fellow Fringe Division members encounter an individual with severe cognitive impairment who has been given experimental nootropics and as a result has become a mathematical genius. The individual is able to calculate hundreds of equations simultaneously, which he leverages to avoid being returned to his original state of cognitive impairment.
In the 2012 film Safe, a female child math genius is kidnapped to be used by the Chinese Triad.
In the 2014 Sci-Fi novel Double Bill by S. Ayoade, Devi Singh, a mental calculator, is one of the 70 lucky children who win a trip to the moon.
In the 2014 TV series Scorpion, Sylvester Dodd, a gifted mathematician and statistician with an IQ of 175; he is described as a "human calculator".
In Shameless, season 7, episode 1.
In the 2016 film The Accountant, a high-functioning autistic tracks insider financial deceptions for numerous criminal organizations.
In the 2017 film Gifted, an intellectually gifted seven-year-old, Mary Adler, becomes the subject of a custody battle between her uncle and grandmother.
In 2020, the eponymous film Shakuntala Devi was released, based on the life of the Indian mathematician, writer, astrologer and mental calculator Shakuntala Devi.
See also
Child prodigy
Human computer
Hypercalculia
Mental Calculation World Cup
Mnemonist
Genius
References
External links
Mental Calculation World Cup site
Memoriad site
Prodigy Calculators by Viktor Pekelis
Willem Klein
Prodigy Calculators by Viktor Pekelis
Thought and machine processes
Methods and Relevance to Brain efficiency of Neelakantha Bhanu
Tricks and techniques
MSO Results
Lightning Calculators is a three-part essay that discusses these individuals, their methods, and the media coverage of them.
Giftedness |
456619 | https://en.wikipedia.org/wiki/Kryptos | Kryptos | Kryptos is a sculpture by the American artist Jim Sanborn located on the grounds of the Central Intelligence Agency (CIA) in Langley, Virginia. Since its dedication on November 3, 1990, there has been much speculation about the meaning of the four encrypted messages it bears. Of these four messages, the first three have been solved, while the fourth message remains one of the most famous unsolved codes in the world. The sculpture continues to be of interest to cryptanalysts, both amateur and professional, who are attempting to decipher the fourth passage. The artist has so far given four clues to this passage.
Description
The main part of the sculpture is located in the northwest corner of the New Headquarters Building courtyard, outside of the Agency's cafeteria. The sculpture comprises four large copper plates with other elements consisting of water, wood, plants, red and green granite, white quartz, and petrified wood. The most prominent feature is a large vertical S-shaped copper screen resembling a scroll or a piece of paper emerging from a computer printer, half of which consists of encrypted text. The characters are all found within the 26 letters of the Latin alphabet, along with question marks, and are cut out of the copper plates. The main sculpture contains four separate enigmatic messages, three of which have been deciphered.
In addition to the main part of the sculpture, Jim Sanborn also placed other pieces of art at the CIA grounds, such as several large granite slabs with sandwiched copper sheets outside the entrance to the New Headquarters Building. Several morse code messages are found on these copper sheets, and one of the stone slabs has an engraving of a compass rose pointing to a lodestone. Other elements of Sanborn's installation include a landscaped garden area, a fish pond with opposing wooden benches, a reflecting pool, and other pieces of stone including a triangle-shaped black stone slab.
The name Kryptos comes from the ancient Greek word for "hidden", and the theme of the sculpture is "Intelligence Gathering".
The cost of the sculpture in 1988 was US $250,000 (worth US $501,000 in 2016).
Encrypted messages
The ciphertext on the left-hand side of the sculpture (as seen from the courtyard) of the main sculpture contains 869 characters in total: 865 letters and 4 question marks.
In April 2006, however, Sanborn released information stating that a letter was omitted from this side of Kryptos "for aesthetic reasons, to keep the sculpture visually balanced".
There are also three misspelled words in the plaintext of the deciphered first three passages, which Sanborn has said was intentional, and three letters (YAR) near the beginning of the bottom half of the left side are the only characters on the sculpture in superscript.
The right-hand side of the sculpture comprises a keyed Vigenère encryption tableau, consisting of 867 letters.
One of the lines of the Vigenère tableau has an extra character (L). Bauer, Link, and Molle suggest that this may be a reference to the Hill cipher as an encryption method for the fourth passage of the sculpture. However, Sanborn omitted the extra letter from the small Kryptos models that he sold, suggesting that it is not essential to the solution.
Sanborn worked with a retiring CIA employee named Ed Scheidt, Chairman of the CIA Office of Communications, to come up with the cryptographic systems used on the sculpture.
Sanborn has revealed that the sculpture contains a riddle within a riddle, which will be solvable only after the four encrypted passages have been deciphered.
He has given conflicting information about the sculpture's answer, saying at one time that he gave the complete solution to the then-CIA director William Webster during the dedication ceremony; but later, he also said that he had not given Webster the entire solution. He did, however, confirm that a passage of the plaintext of the second message reads "Who knows the exact location? Only WW."
Sanborn also confirmed that should he die before the entire sculpture becomes deciphered, there will be someone able to confirm the solution. In 2020, Sanborn stated that he planned to put the secret to the solution up for auction when he dies.
Solvers
The first person to announce publicly that he had solved the first three passages was Jim Gillogly, a computer scientist from southern California, who deciphered these passages using a computer, and revealed his solutions in 1999.
After Gillogly's announcement, the CIA revealed that their analyst David Stein had solved the same passages in 1998 using pencil and paper techniques, although at the time of his solution the information was only disseminated within the intelligence community. No public announcement was made until July 1999, although in November 1998 it was revealed that "a CIA analyst working on his own time [had] solved the lion's share of it".
The NSA also claimed that some of their employees had solved the same three passages, but would not reveal names or dates until March 2000, when it was learned that an NSA team led by Ken Miller, along with Dennis McDaniels and two other unnamed individuals, had solved passages 1–3 in late 1992.
In 2013, in response to a Freedom of Information Act request by Elonka Dunin, the NSA released documents which show the NSA became involved in attempts to solve the Kryptos puzzle in 1992, following a challenge by Bill Studeman, then Deputy Director of the CIA. The documents show that by June 1993, a small group of NSA cryptanalysts had succeeded in solving the first three passages of the sculpture.
The above attempts to solve Kryptos all had found that passage 2 ended with WESTIDBYROWS.
In 2005, Dr Nicole Friedrich, a logician from Vancouver, Canada, determined that another possible plaintext was: WESTPLAYERTWO.
Dr Friedrich solved the ending of section K2 from a clue that became apparent after she had applied a running cipher to K4, which produced an incomplete but partially legible K4 plaintext containing fragments such as XPIST, REALIZE, and AYD EQ HR; the find that prompted her discovery of the K2 plaintext was the clue WESTX.
On April 19, 2006, Sanborn contacted an online community dedicated to the Kryptos puzzle to inform them that what was once the accepted solution to passage 2 was incorrect.
Sanborn said that he made an error in the sculpture by omitting an S in the ciphertext, an X in the plaintext, and he confirmed that the last passage of the plaintext was WESTXLAYERTWO, and not WESTIDBYROWS.
Solutions
The following are the solutions of passages 1–3 of the sculpture.
Misspellings present in the text are included verbatim.
Solution of passage 1
Method: Vigenère
Keywords: Kryptos, Palimpsest
BETWEEN SUBTLE SHADING AND THE ABSENCE OF LIGHT LIES THE NUANCE OF IQLUSION
Iqlusion was an intentional misspelling of illusion.
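Passages 1 and 2 follow from a Vigenère cipher whose tableau is built on the keyed alphabet KRYPTOSABCDEFGHIJLMNQUVWXZ. As a minimal illustration (assuming the standard keyed-alphabet reading of the tableau; the function name is illustrative), the following Python sketch reproduces the passage 1 plaintext:

KEYED = "KRYPTOSABCDEFGHIJLMNQUVWXZ"  # alphabet keyed with the word KRYPTOS

def vigenere_decrypt(ciphertext, keyword):
    # Subtract the key letter's position from the ciphertext letter's
    # position, both measured in the keyed alphabet.
    out = []
    for i, c in enumerate(ciphertext):
        k = keyword[i % len(keyword)]
        out.append(KEYED[(KEYED.index(c) - KEYED.index(k)) % 26])
    return "".join(out)

K1 = ("EMUFPHZLRFAXYUSDJKZLDKRNSHGNFIVJ"
      "YQTQUXQBQVYUVLLTREVJYQTMKYRDMFD")
print(vigenere_decrypt(K1, "PALIMPSEST"))
# BETWEENSUBTLESHADINGANDTHEABSENCEOFLIGHTLIESTHENUANCEOFIQLUSION

The same routine with the keyword ABSCISSA decrypts passage 2, subject to the omitted-letter correction described earlier.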
Solution of passage 2
Method: Vigenère
Keywords: Kryptos, Abscissa
IT WAS TOTALLY INVISIBLE HOWS THAT POSSIBLE ? THEY USED THE EARTHS MAGNETIC FIELD X THE INFORMATION WAS GATHERED AND TRANSMITTED UNDERGRUUND TO AN UNKNOWN LOCATION X DOES LANGLEY KNOW ABOUT THIS ? THEY SHOULD ITS BURIED OUT THERE SOMEWHERE X WHO KNOWS THE EXACT LOCATION ? ONLY WW THIS WAS HIS LAST MESSAGE X THIRTY EIGHT DEGREES FIFTY SEVEN MINUTES SIX POINT FIVE SECONDS NORTH SEVENTY SEVEN DEGREES EIGHT MINUTES FORTY FOUR SECONDS WEST X LAYER TWO
The coordinates mentioned in the plaintext (38°57′6.5″N, 77°8′44″W) have been interpreted using a modern geodetic datum as indicating a point that is approximately 174 feet (53 meters) southeast of the sculpture; however, the more likely datum NAD 27 (used for USGS topographic maps and United States Army Corps of Engineers projects) indicates a point at a cafeteria doorway.
Solution of passage 3
Method: Transposition
SLOWLY DESPARATLY SLOWLY THE REMAINS OF PASSAGE DEBRIS THAT ENCUMBERED THE LOWER PART OF THE DOORWAY WAS REMOVED WITH TREMBLING HANDS I MADE A TINY BREACH IN THE UPPER LEFT HAND CORNER AND THEN WIDENING THE HOLE A LITTLE I INSERTED THE CANDLE AND PEERED IN THE HOT AIR ESCAPING FROM THE CHAMBER CAUSED THE FLAME TO FLICKER BUT PRESENTLY DETAILS OF THE ROOM WITHIN EMERGED FROM THE MIST X CAN YOU SEE ANYTHING Q ?
This is a paraphrased quotation from Howard Carter's account of the opening of the tomb of Tutankhamun on November 26, 1922, as described in his 1923 book The Tomb of Tutankhamun. The question with which it ends is asked by Lord Carnarvon, to which Carter (in the book) famously replied "wonderful things". In the November 26, 1922, field notes, however, his reply was, "Yes, it is wonderful".
Clues given for passage 4
When commenting in 2006 about his error in passage 2, Sanborn said that the answers to the first three passages contain clues to the fourth passage. In November 2010, Sanborn released a clue, publicly stating that "NYPVTT", the 64th–69th letters in passage four, become "BERLIN" after decryption.
Sanborn gave The New York Times another clue in November 2014: the letters "MZFPK", the 70th–74th letters in passage four, become "CLOCK" after decryption. The 74th letter is K in both the plaintext and ciphertext, meaning that a character can encrypt to itself. The cipher therefore lacks the known weakness of the German Enigma machine, in which a character could never be encrypted as itself.
Sanborn further stated that in order to solve passage 4, "You'd better delve into that particular clock," but added, "There are several really interesting clocks in Berlin." The particular clock in question is presumably the Berlin Clock, although the Alexanderplatz World Clock and Clock of Flowing Time are other candidates.
In an article published on January 29, 2020, by the New York Times, Sanborn gave another clue: at positions 26–34, ciphertext "QQPRNGKSS" is the word "NORTHEAST".
In August 2020, Sanborn revealed that the four letters in positions 22–25, ciphertext "FLRV", in the plaintext are "EAST". Sanborn commented that he "released this layout to several people as early as April". The first person known to have shared this hint more widely was Sukhwant Singh.
Related sculptures
Kryptos was the first cryptographic sculpture made by Sanborn.
After producing Kryptos he went on to make several other sculptures with codes and other types of writing, including one entitled Antipodes, which is at the Hirshhorn Museum in Washington, D.C., an "Untitled Kryptos Piece" that was sold to a private collector, and Cyrillic Projector, which contains encrypted Russian Cyrillic text that included an extract from a classified KGB document.
The cipher on one side of Antipodes repeats the text from Kryptos. Much of the cipher on Antipodes' other side is duplicated on Cyrillic Projector. The Russian portion of the cipher found on Cyrillic Projector and Antipodes was solved in 2003 by Frank Corr and Mike Bales independently of each other, with translation from the Russian plaintext provided by Elonka Dunin.
Ex Nexum was installed in 1997 at Little Rock Old U.S. Post Office & Courthouse.
Some additional sculptures by Sanborn include Native American texts: Rippowam was installed at the University of Connecticut, in Stamford in 1999, while Lux was installed in 2001 at an old US Post Office building in Fort Myers, Florida. Iacto is located at the University of Iowa, between the Adler Journalism Building and Main Library.
Indian Run is located next to the U.S. Federal Courthouse in Beltsville, Maryland, and contains a bronze cylinder perforated with the text of the Iroquois Book of the Great Law.
This document includes the contribution of the indigenous peoples to the United States legal system.
The text is written in Onondaga and was transcribed from the ancient oral tradition of five Iroquois nations.
A,A was installed at the Plaza in front of the new library at the University of Houston, in Houston, Texas, in 2004, and Radiance was installed at the Department of Energy, Coast, and Environment, Louisiana State University, Baton Rouge in 2008.
In popular culture
The dust jacket of the US version of Dan Brown's 2003 novel The Da Vinci Code contains two references to Kryptos—one on the back cover (coordinates printed light red on dark red, vertically next to the blurbs) is a reference to the coordinates mentioned in the plaintext of passage 2 (see above), except the degrees digit is off by one. When Brown and his publisher were asked about this, they both gave the same reply: "The discrepancy is intentional". The coordinates were part of the first clue of the second Da Vinci Code WebQuest, the first answer being Kryptos. The other reference is hidden in the brown "tear" artwork—upside-down words which say "Only WW knows", which is another reference to the second message on Kryptos.
Kryptos features in Dan Brown's 2009 novel The Lost Symbol.
A small version of Kryptos appears in the season 5 episode of Alias "S.O.S.". In it, Marshall Flinkman, in a small moment of comic relief, says he has cracked the code just by looking at it during a tour visit to the CIA office. The solution he describes sounds like the solution to the first two parts.
In the season 2 episode of The King of Queens "Meet By-Product", a framed picture of Kryptos hangs on the wall by the door.
The progressive metal band Between the Buried and Me has a reference to Kryptos in their song "Obfuscation" from their 2009 album, The Great Misdirect.
In the book "Muko and the Secret", four young pupils from the class of Naturals learn about a mysterious sculpture hidden in their school. The hints in the book suggest that the sculpture is Kryptos.
See also
A,A
Copiale cipher
History of cryptography
Voynich manuscript
Notes
References
Books
(contains 1–2 pages about Kryptos)
Journal articles
Articles
Kryptos 1,735 Alphabetical letters
"Gillogly Cracks CIA Art", & "The Kryptos Code Unmasked", 1999, New York Times and Cypherpunks archive
"Unlocking the secret of Kryptos", March 17, 2000, Sun Journal
"Solving the Enigma of Kryptos", January 26, 2005, Wired, by Kim Zetter
"Interest grows in solving cryptic CIA puzzle after link to Da Vinci Code", June 11, 2005, The Guardian
"Cracking the Code", June 19, 2005, CNN
Mission Impossible: The Code Even the CIA Can't Crack
External links
Jim Sanborn's official Kryptos webpage by Jim Sanborn
Kryptos website maintained by Elonka Dunin (includes Kryptos FAQ, transcript, pictures and links)
Kryptos photos by Jim Gillogly
Washington Post: Cracking the Code of a CIA Sculpture
Wired : Documents Reveal How the NSA Cracked the Kryptos Sculpture Years Before the CIA
PBS : Segment (Video) on Kryptos from Nova ScienceNow
The General Services Administration Kryptos webpage
The General Services Administration Ex Nexum webpage
The General Services Administration Indian Run webpage
The General Services Administration Binary Systems webpage
The Central Intelligence Agency Kryptos webpage
The National Security Agency Kryptos webpage
The Kryptos Project by Julie (Jew-Lee) Irena Lann
Nicole (Monet) Friedrich's Kryptos Observations
Patrick Foster's Kryptos page
History of cryptography
Outdoor sculptures in Virginia
Central Intelligence Agency
Riddles
Undeciphered historical codes and ciphers
1990 sculptures
McLean, Virginia
Sculptures by Jim Sanborn
Buildings and structures in Fairfax County, Virginia
Copper sculptures in the United States
Granite sculptures in Virginia
Stone sculptures in Virginia
Wooden sculptures in the United States
1990 establishments in Virginia |
458253 | https://en.wikipedia.org/wiki/Secret%20sharing | Secret sharing | Secret sharing (also called secret splitting) refers to methods for distributing a secret among a group of participants, each of whom is allocated a share of the secret. The secret can be reconstructed only when a sufficient number of shares, possibly of different types, are combined; individual shares are of no use on their own.
In one type of secret sharing scheme there is one dealer and n players. The dealer gives a share of the secret to the players, but only when specific conditions are fulfilled will the players be able to reconstruct the secret from their shares. The dealer accomplishes this by giving each player a share in such a way that any group of t (for threshold) or more players can together reconstruct the secret but no group of fewer than t players can. Such a system is called a -threshold scheme (sometimes it is written as an -threshold scheme).
Secret sharing was invented independently by Adi Shamir and George Blakley in 1979.
Importance
Secret sharing schemes are ideal for storing information that is highly sensitive and highly important. Examples include: encryption keys, missile launch codes, and numbered bank accounts. Each of these pieces of information must be kept highly confidential, as their exposure could be disastrous; however, it is also critical that they not be lost. Traditional methods for encryption are ill-suited for simultaneously achieving high levels of confidentiality and reliability. This is because when storing the encryption key, one must choose between keeping a single copy of the key in one location for maximum secrecy, or keeping multiple copies of the key in different locations for greater reliability. Increasing reliability of the key by storing multiple copies lowers confidentiality by creating additional attack vectors; there are more opportunities for a copy to fall into the wrong hands. Secret sharing schemes address this problem, and allow arbitrarily high levels of confidentiality and reliability to be achieved.
Secret sharing schemes are important in cloud computing environments. Thus a key can be distributed over many servers by a threshold secret sharing mechanism. The key is then reconstructed when needed. Secret sharing has also been suggested for sensor networks where the links are liable to be tapped by sending the data in shares which makes the task of the eavesdropper harder. The security in such environments can be made greater by continuous changing of the way the shares are constructed.
"Secure" versus "insecure" secret sharing
A secure secret sharing scheme distributes shares so that anyone with fewer than t shares has no more information about the secret than someone with 0 shares.
Consider for example the secret sharing scheme in which the secret phrase "password" is divided into the shares "pa––––––", "––ss––––", "––––wo––", and "––––––rd". A person with 0 shares knows only that the password consists of eight letters, and thus would have to guess the password from 26^8 ≈ 208 billion possible combinations. A person with one share, however, would have to guess only the six remaining letters, from 26^6 ≈ 308 million combinations, and so on as more persons collude. Consequently, this system is not a "secure" secret sharing scheme, because every share a player holds substantially reduces the work needed to find the secret, even before the threshold t is reached.
In contrast, consider the secret sharing scheme where X is the secret to be shared, Pi are public asymmetric encryption keys and Qi their corresponding private keys. Each player J is provided with the nested ciphertext P1(P2(...(PN(X))...)) together with his own private key QJ. In this scheme, any player with private key 1 can remove the outer layer of encryption, a player with keys 1 and 2 can remove the first and second layer, and so on. A player with fewer than N keys can never fully reach the secret X without first needing to decrypt a public-key-encrypted blob for which he does not have the corresponding private key – a problem that is currently believed to be computationally infeasible. Additionally we can see that any user with all N private keys is able to decrypt all of the outer layers to obtain X, the secret, and consequently this system is a secure secret distribution system.
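As a concrete illustration of this layered construction, here is a minimal sketch using the third-party PyNaCl library's sealed boxes to play the role of the asymmetric encryptions Pi (the library choice and variable names are assumptions made for the example, not part of the scheme):

from nacl.public import PrivateKey, SealedBox  # pip install pynacl

N = 3
keys = [PrivateKey.generate() for _ in range(N)]  # Qi; keys[i].public_key is Pi

# Build P1(P2(...PN(X)...)): encrypt under the last key first so that
# key 1 becomes the outermost layer.
blob = b"the secret X"
for key in reversed(keys):
    blob = SealedBox(key.public_key).encrypt(blob)

# Recovering X requires all N private keys, applied outermost layer first.
for key in keys:
    blob = SealedBox(key).decrypt(blob)
assert blob == b"the secret X"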
Limitations
Several secret-sharing schemes are said to be information-theoretically secure and can be proven to be so, while others give up this unconditional security for improved efficiency while maintaining enough security to be considered as secure as other common cryptographic primitives. For example, they might allow secrets to be protected by shares with 128 bits of entropy each, since each share would be considered enough to stymie any conceivable present-day adversary, requiring a brute force attack of average size 2^127.
Common to all unconditionally secure secret sharing schemes, there are limitations:
Each share of the secret must be at least as large as the secret itself. This result is based on information theory, but can be understood intuitively. Given t − 1 shares, no information whatsoever can be determined about the secret. Thus, the final share must contain as much information as the secret itself. There is sometimes a workaround for this limitation by first compressing the secret before sharing it, but this is often not possible because many secrets (keys for example) look like high-quality random data and thus are hard to compress.
All secret sharing schemes use random bits. To distribute a one-bit secret with threshold t, t − 1 random bits are necessary. To distribute a secret of arbitrary length b bits, entropy of (t − 1) × b bits is necessary.
Trivial secret sharing
t = 1
t = 1 secret sharing is trivial. The secret can simply be distributed to all n participants.
t = n
There are several secret-sharing schemes for t = n, when all shares are necessary to recover the secret:
Encode the secret as an arbitrary-length binary number s. Give to each player i (except one) a random number pi with the same length as s. Give to the last player the result of (s XOR p1 XOR p2 XOR ... XOR pn−1), where XOR is bitwise exclusive or. The secret is the bitwise XOR of all n players' numbers; a sketch of this construction follows below.
Additionally, the XOR scheme above can be performed using any linear operator in any field. For example, here is a functionally equivalent alternative. Select 32-bit integers with well-defined overflow semantics (i.e. the correct answer is preserved, modulo 2^32). First, s can be divided into a vector of M 32-bit integers called vsecret. Then players are each given a vector of M random integers, player i receiving vi. The remaining player is given vn = (vsecret − v1 − v2 − ... − vn−1). The secret vector can then be recovered by summing across all the players' vectors.
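A minimal Python sketch of the XOR construction described above (function names are illustrative):

import secrets
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret, n):
    # n - 1 random pads; the last share is the secret XORed with every pad.
    pads = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    return pads + [reduce(xor_bytes, pads, secret)]

def combine(shares):
    # XORing all n shares cancels the pads and leaves the secret.
    return reduce(xor_bytes, shares)

shares = split(b"launch code", 4)
assert combine(shares) == b"launch code"

Because every pad is uniformly random, any n − 1 shares are themselves uniformly random and reveal nothing about the secret.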
1 < t < n, and, more generally, any desired subset of {1,2,...,n}
The difficulty lies in creating schemes that are still secure, but do not require all n shares. For example, imagine that the Board of Directors of a company would like to protect their secret formula. The president of the company should be able to access the formula when needed, but in an emergency any 3 of the 12 board members would be able to unlock the secret formula together. This can be accomplished by a secret sharing scheme with t = 3 and n = 15, where 3 shares are given to the president, and 1 is given to each board member.
When space efficiency is not a concern, trivial t = n schemes can be used to reveal a secret to any desired subsets of the players simply by applying the scheme for each subset. For example, to reveal a secret s to any two of the three players Alice, Bob and Carol, create three ((3 choose 2) = 3) different 2-of-2 sharings of s, giving the three sets of two shares to Alice and Bob, Alice and Carol, and Bob and Carol.
Efficient secret sharing
The trivial approach quickly becomes impractical as the number of subsets increases, for example when revealing a secret to any 50 of 100 players, which would require (100 choose 50) ≈ 1.01 × 10^29 schemes to be created and each player to maintain (99 choose 49) ≈ 5.04 × 10^28 distinct sets of shares. In the worst case, the increase is exponential. This has led to the search for schemes that allow secrets to be shared efficiently with a threshold of players.
Shamir's scheme
In this scheme, any t out of n shares may be used to recover the secret. The system relies on the idea that you can fit a unique polynomial of degree t − 1 to any set of t points that lie on the polynomial. It takes two points to define a straight line, three points to fully define a quadratic, four points to define a cubic curve, and so on. That is, it takes t points to define a polynomial of degree t − 1. The method is to create a polynomial of degree t − 1 with the secret as the first coefficient and the remaining coefficients picked at random. Next find n points on the curve and give one to each of the players. When at least t out of the n players reveal their points, there is sufficient information to fit a (t − 1)th degree polynomial to them, the first coefficient being the secret.
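A minimal sketch of the scheme over a prime field (the 127-bit Mersenne prime and the helper names are illustrative choices; the modular inverse via pow requires Python 3.8+):

import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def make_shares(secret, t, n):
    # Degree t - 1 polynomial with the secret as its constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation evaluated at x = 0 yields the constant term.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
assert recover(shares[:3]) == 123456789  # any 3 of the 5 shares suffice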
Blakley's scheme
Two nonparallel lines in the same plane intersect at exactly one point. Three nonparallel planes in space intersect at exactly one point. More generally, any n nonparallel (n − 1)-dimensional hyperplanes intersect at a specific point. The secret may be encoded as any single coordinate of the point of intersection. If the secret is encoded using all the coordinates, even if they are random, then an insider (someone in possession of one or more of the (n − 1)-dimensional hyperplanes) gains information about the secret since he knows it must lie on his plane. If an insider can gain any more knowledge about the secret than an outsider can, then the system no longer has information theoretic security. If only one of the n coordinates is used, then the insider knows no more than an outsider (i.e., that the secret must lie on the x-axis for a 2-dimensional system). Each player is given enough information to define a hyperplane; the secret is recovered by calculating the planes' point of intersection and then taking a specified coordinate of that intersection.
Blakley's scheme is less space-efficient than Shamir's; while Shamir's shares are each only as large as the original secret, Blakley's shares are t times larger, where t is the threshold number of players. Blakley's scheme can be tightened by adding restrictions on which planes are usable as shares. The resulting scheme is equivalent to Shamir's polynomial system.
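A minimal sketch of the two-dimensional case (t = 2) over a prime field, in which each share is a line through the secret point and the secret is that point's x-coordinate (fixing the leading coefficient to 1 is an illustrative simplification that keeps the lines nonparallel):

import random

P = 2**31 - 1  # a prime; all arithmetic is over GF(P)

def make_shares(secret, n):
    # The secret point is (secret, y0); each share is a line x + b*y = c
    # through it. Distinct b values guarantee the lines are nonparallel.
    y0 = random.randrange(P)
    bs = random.sample(range(1, P), n)
    return [(b, (secret + b * y0) % P) for b in bs]

def recover(share1, share2):
    (b1, c1), (b2, c2) = share1, share2
    y = (c1 - c2) * pow(b1 - b2, -1, P) % P  # intersect the two lines
    return (c1 - b1 * y) % P                 # the x-coordinate is the secret

shares = make_shares(secret=42424242, n=4)
assert recover(shares[0], shares[3]) == 42424242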
Using the Chinese remainder theorem
The Chinese remainder theorem can also be used in secret sharing, for it provides us with a method to uniquely determine a number S modulo k pairwise coprime integers m1, m2, ..., mk, given that S < m1·m2·…·mk. There are two secret sharing schemes that make use of the Chinese remainder theorem: Mignotte's and Asmuth–Bloom's schemes. They are threshold secret sharing schemes, in which the shares are generated by reduction modulo the integers mi, and the secret is recovered by essentially solving the system of congruences using the Chinese remainder theorem.
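A minimal sketch of a Mignotte-style (t, n) scheme (the moduli and secret are illustrative; Mignotte's condition requires the secret to lie strictly between the product of the t − 1 largest moduli and the product of the t smallest):

from math import prod
from itertools import combinations

MODULI = [101, 103, 107, 109, 113]  # pairwise coprime and sorted; n = 5, t = 3
SECRET = 123456                     # 109*113 = 12317 < SECRET < 101*103*107

shares = [(m, SECRET % m) for m in MODULI]

def recover(subset):
    # Solve the system of congruences with the Chinese remainder theorem.
    M = prod(m for m, _ in subset)
    x = 0
    for m, r in subset:
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) is the modular inverse
    return x % M

for trio in combinations(shares, 3):  # any 3 shares recover the secret
    assert recover(trio) == SECRET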
Proactive secret sharing
If the players store their shares on insecure computer servers, an attacker could break in and steal the shares. If it is not practical to change the secret, the uncompromised (Shamir-style) shares can be renewed. The dealer generates a new random polynomial with constant term zero and calculates for each remaining player a new ordered pair, where the x-coordinates of the old and new pairs are the same. Each player then adds the old and new y-coordinates to each other and keeps the result as the new y-coordinate of their share.
All of the non-updated shares the attacker accumulated become useless. An attacker can only recover the secret if he can find enough other non-updated shares to reach the threshold. This situation should not happen because the players deleted their old shares. Additionally, an attacker cannot recover any information about the original secret from the update files because they contain only random information.
The dealer can change the threshold number while distributing updates, but must always remain vigilant of players keeping expired shares.
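A minimal sketch of one renewal round for Shamir-style shares (the prime field matches the Shamir sketch above; the function name is illustrative):

import random

P = 2**127 - 1  # the same prime field in which the shares were dealt

def refresh(shares, t):
    # A random degree t - 1 polynomial with constant term zero: adding it
    # changes every y-coordinate but leaves the shared value f(0) intact.
    delta = [0] + [random.randrange(P) for _ in range(t - 1)]
    def d(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(delta)) % P
    return [(x, (y + d(x)) % P) for x, y in shares]

After every player replaces their old share with the refreshed one, any t new shares still reconstruct the secret, while mixtures of old and new shares do not.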
Verifiable secret sharing
A player might lie about his own share to gain access to other shares. A verifiable secret sharing (VSS) scheme allows players to be certain that no other players are lying about the contents of their shares, up to a reasonable probability of error. Such schemes cannot be computed conventionally; the players must collectively add and multiply numbers without any individual's knowing what exactly is being added and multiplied. Tal Rabin and Michael Ben-Or devised a multiparty computing (MPC) system that allows players to detect dishonesty on the part of the dealer or on part of up to one third of the threshold number of players, even if those players are coordinated by an "adaptive" attacker who can change strategies in realtime depending on what information has been revealed.
Computationally secure secret sharing
The disadvantage of unconditionally secure secret sharing schemes is that the storage and transmission of the shares requires an amount of storage and bandwidth resources equivalent to the size of the secret times the number of shares. If the size of the secret were significant, say 1 GB, and the number of shares were 10, then 10 GB of data must be stored by the shareholders. Alternate techniques have been proposed for greatly increasing the efficiency of secret sharing schemes, by giving up the requirement of unconditional security.
One of these techniques, known as secret sharing made short, combines Rabin's information dispersal algorithm (IDA) with Shamir's secret sharing. Data is first encrypted with a randomly generated key, using a symmetric encryption algorithm. Next this data is split into N pieces using Rabin's IDA. This IDA is configured with a threshold, in a manner similar to secret sharing schemes, but unlike secret sharing schemes the size of the resulting data grows by a factor of (number of fragments / threshold). For example, if the threshold were 10, and the number of IDA-produced fragments were 15, the total size of all the fragments would be (15/10) or 1.5 times the size of the original input. In this case, this scheme is 10 times more efficient than if Shamir's scheme had been applied directly on the data. The final step in secret sharing made short is to use Shamir secret sharing to produce shares of the randomly generated symmetric key (which is typically on the order of 16–32 bytes) and then give one share and one fragment to each shareholder.
A related approach, known as AONT-RS, applies an All-or-nothing transform to the data as a pre-processing step to an IDA. The All-or-nothing transform guarantees that any number of shares less than the threshold is insufficient to decrypt the data.
Multi-secret and space efficient (batched) secret sharing
An information-theoretically secure k-of-n secret-sharing scheme generates n shares, each of size at least that of the secret itself, leading to the total required storage being at least n-fold larger than the secret. In multi-secret sharing designed by Matthew K. Franklin and Moti Yung, multiple points of the polynomial host secrets; the method was found useful in numerous applications from coding to multi-party computations. In space efficient secret sharing, devised by Abhishek Parakh and Subhash Kak, each share is roughly the size of the secret divided by k − 1.
This scheme makes use of repeated polynomial interpolation and has potential applications in secure information dispersal on the Web and in sensor networks. This method is based on data partitioning involving the roots of a polynomial in a finite field. Some vulnerabilities of related space efficient secret sharing schemes were pointed out later. They show that a scheme based on the interpolation method cannot be used to implement a threshold scheme when the k secrets to be distributed are inherently generated from a polynomial of degree less than k, and that the scheme does not work if all of the secrets to be shared are the same, etc.
Other uses and applications
A secret sharing scheme can secure a secret over multiple servers and remain recoverable despite multiple server failures. The dealer may act as several distinct participants, distributing the shares among the participants. Each share may be stored on a different server, but the dealer can recover the secret even if several servers break down as long as they can recover at least t shares; however, crackers that break into one server would still not know the secret as long as fewer than t shares are stored on each server.
This is one of the major concepts behind the Vanish computer project at the University of Washington, where a random key is used to encrypt data, and the key is distributed as a secret across several nodes in a P2P network. In order to decrypt the message, at least t nodes on the network must be accessible; the principle for this particular project being that the number of secret-sharing nodes on the network will decrease naturally over time, therefore causing the secret to eventually vanish. However, the network is vulnerable to a Sybil attack, thus making Vanish insecure.
Any shareholder who ever has enough information to decrypt the content at any point is able to take and store a copy of X. Consequently, although tools and techniques such as Vanish can make data irrecoverable within their own system after a time, it is not possible to force the deletion of data once a malicious user has seen it. This is one of the leading conundrums of Digital Rights Management.
A dealer could send t shares, all of which are necessary to recover the original secret, to a single recipient. An attacker would have to intercept all t shares to recover the secret, a task which is more difficult than intercepting a single file, especially if the shares are sent using different media (e.g. some over the Internet, some mailed on CDs).
For large secrets, it may be more efficient to encrypt the secret and then distribute the key using secret sharing.
Secret sharing is an important primitive in several protocols for secure multiparty computation.
Secret sharing can also be used for user authentication in a system.
See also
References
External links
Ubuntu Manpage: gfshare – explanation of Shamir Secret Sharing in GF(28)
Description of Shamir's and Blakley's schemes
Patent for use of secret sharing for recovering PGP (and other?) pass phrases
A bibliography on secret-sharing schemes
Cryptography |
458524 | https://en.wikipedia.org/wiki/EMV | EMV | EMV is a payment method based upon a technical standard for smart payment cards and for payment terminals and automated teller machines which can accept them. EMV originally stood for "Europay, Mastercard, and Visa", the three companies that created the standard.
EMV cards are smart cards, also called chip cards, integrated circuit cards, or IC cards which store their data on integrated circuit chips, in addition to magnetic stripes for backward compatibility. These include cards that must be physically inserted or "dipped" into a reader, as well as contactless cards that can be read over a short distance using near-field communication technology. Payment cards which comply with the EMV standard are often called chip and PIN or chip and signature cards, depending on the authentication methods employed by the card issuer, such as a personal identification number (PIN) or digital signature.
There are standards based on ISO/IEC 7816 for contact cards, and standards based on ISO/IEC 14443 for contactless cards (Mastercard Contactless, Visa PayWave, American Express ExpressPay).
In February 2010, computer scientists from Cambridge University demonstrated that an implementation of EMV PIN entry is vulnerable to a man-in-the-middle attack but only implementations where the PIN was validated offline were vulnerable.
History
Until the introduction of Chip & PIN, all face-to-face credit or debit card transactions involved the use of a magnetic stripe or mechanical imprint to read and record account data, and a signature for purposes of identity verification. The customer hands their card to the cashier at the point of sale who then passes the card through a magnetic reader or makes an imprint from the raised text of the card. In the former case, the system verifies account details and prints a slip for the customer to sign. In the case of a mechanical imprint, the transaction details are filled in, a list of stolen numbers is consulted, and the customer signs the imprinted slip. In both cases the cashier must verify that the customer's signature matches that on the back of the card to authenticate the transaction.
Using the signature on the card as a verification method has a number of security flaws, the most obvious being the relative ease with which cards may go missing before their legitimate owners can sign them. Another involves the erasure and replacement of the legitimate signature, and yet another involves the forgery of the correct signature.
The invention of the silicon integrated circuit chip in 1959 led to the idea of incorporating it onto a plastic smart card in the late 1960s by two German engineers, Helmut Gröttrup and Jürgen Dethloff. The earliest smart cards were introduced as calling cards in the 1970s, before later being adapted for use as payment cards. Smart cards have since used MOS integrated circuit chips, along with MOS memory technologies such as flash memory and EEPROM (electrically erasable programmable read-only memory).
The first standard for smart payment cards was the Carte Bancaire B0M4 from Bull-CP8 deployed in France in 1986, followed by the B4B0' (compatible with the M4) deployed in 1989. Geldkarte in Germany also predates EMV. EMV was designed to allow cards and terminals to be backwardly compatible with these standards. France has since migrated all its card and terminal infrastructure to EMV.
EMV originally stood for Europay, Mastercard, and Visa, the three companies that created the standard. The standard is now managed by EMVCo, a consortium with control split equally among Visa, Mastercard, JCB, American Express, China UnionPay, and Discover. EMVCo also refers to "Associates," companies able to provide input and receive feedback on detailed technical and operational issues connected to the EMV specifications and related processes.
JCB joined the consortium in February 2009, China UnionPay in May 2013, and Discover in September 2013.
Differences and benefits
There are two major benefits to moving to smart-card-based credit card payment systems: improved security (with associated fraud reduction), and the possibility for finer control of "offline" credit-card transaction approvals. One of the original goals of EMV was to provide for multiple applications on a card: for example, a credit and debit card application or an e-purse. Newly issued debit cards in the US contain two applications — a card association (Visa, Mastercard, etc.) application, and a common debit application. The common debit application ID is somewhat of a misnomer, as each "common" debit application actually uses the resident card association application.
EMV chip card transactions improve security against fraud compared to magnetic stripe card transactions that rely on the holder's signature and visual inspection of the card to check for features such as a hologram. The use of a PIN and cryptographic algorithms such as Triple DES, RSA and SHA provides authentication of the card to the processing terminal and the card issuer's host system. The processing time is comparable to online transactions, in which communications delay accounts for the majority of the time, while cryptographic operations at the terminal take comparatively little time. The supposed increased protection from fraud has allowed banks and credit card issuers to push through a "liability shift", such that merchants are now liable (as of 1 January 2005 in the EU region and 1 October 2015 in the US) for any fraud that results from transactions on systems that are not EMV-capable.
The majority of implementations of EMV cards and terminals confirm the identity of the cardholder by requiring the entry of a personal identification number (PIN) rather than signing a paper receipt. Whether or not PIN authentication takes place depends upon the capabilities of the terminal and programming of the card.
When credit cards were first introduced, merchants used mechanical rather than magnetic portable card imprinters that required carbon paper to make an imprint. They did not communicate electronically with the card issuer, and the card never left the customer's sight. The merchant had to verify transactions over a certain currency limit by telephoning the card issuer. During the 1970s in the United States, many merchants subscribed to a regularly-updated list of stolen or otherwise invalid credit card numbers. This list was commonly printed in booklet form on newsprint, in numerical order, much like a slender phone book, yet without any data aside from the list of invalid numbers. Checkout cashiers were expected to thumb through this booklet each and every time a credit card was presented for payment of any amount, prior to approving the transaction, which incurred a short delay.
Later, equipment electronically contacted the card issuer, using information from the magnetic stripe to verify the card and authorize the transaction. This was much faster than before, but required the transaction to occur in a fixed location. Consequently, if the transaction did not take place near a terminal (in a restaurant, for example) the clerk or waiter had to take the card away from the customer and to the card machine. It was easily possible at any time for a dishonest employee to swipe the card surreptitiously through a cheap machine that instantly recorded the information on the card and stripe; in fact, even at the terminal, a thief could bend down in front of the customer and swipe the card on a hidden reader. This made illegal cloning of cards relatively easy, and a more common occurrence than before.
Since the introduction of payment card Chip and PIN, cloning of the chip is not feasible; only the magnetic stripe can be copied, and a copied card cannot be used by itself on a terminal requiring a PIN. The introduction of Chip and PIN coincided with wireless data transmission technology becoming inexpensive and widespread. In addition to mobile-phone-based magnetic readers, merchant personnel can now bring wireless PIN pads to the customer, so the card is never out of the cardholder's sight. Thus, both chip-and-PIN and wireless technologies can be used to reduce the risks of unauthorized swiping and card cloning.
Chip and PIN versus chip and signature
Chip and PIN is one of the two verification methods that EMV enabled cards can employ. Rather than physically signing a receipt for identification purposes, the user just enters a personal identification number (PIN), typically of 4 to 6 digits in length. This number must correspond to the information stored on the chip. Chip and PIN technology makes it much harder for fraudsters to use a found card, so if someone steals a card, they can't make fraudulent purchases unless they know the PIN.
Chip and signature, on the other hand, differentiates itself from chip and PIN by verifying a consumer's identity with a signature.
As of 2015, chip and signature cards are more common in the US, Mexico, parts of South America (such as Argentina, Colombia, Peru) and some Asian countries (such as Taiwan, Hong Kong, Thailand, South Korea, Singapore, and Indonesia), whereas chip and PIN cards are more common in most European countries (e.g., the UK, Ireland, France, Portugal, Finland and the Netherlands) as well as in Iran, Brazil, Venezuela, India, Sri Lanka, Canada, Australia and New Zealand.
Online, phone, and mail order transactions
While EMV technology has helped reduce crime at the point of sale, fraudulent transactions have shifted to more vulnerable telephone, Internet, and mail order transactions—known in the industry as card-not-present or CNP transactions. CNP transactions made up at least 50% of all credit card fraud. Because of physical distance, it is not possible for the merchant to present a keypad to the customer in these cases, so alternatives have been devised, including
Software approaches for online transactions that involve interaction with the card-issuing bank or network's website, such as Verified by Visa and Mastercard SecureCode (implementations of Visa's 3-D Secure protocol). 3-D Secure is now being replaced by Strong Customer Authentication as defined in the European Second Payment Services Directive.
Creating a one-time virtual card linked to a physical card with a given maximum amount.
Additional hardware with keypad and screen that can produce a one-time password, such as the Chip Authentication Program.
Keypad and screen integrated into complex cards to produce a one-time password. Since 2008, Visa has been running pilot projects using the Emue card where the generated number replaces the code printed on the back of standard cards.
Commands
ISO/IEC 7816-3 defines the transmission protocol between chip cards and readers. Using this protocol, data is exchanged in application protocol data units (APDUs). This comprises sending a command to a card, the card processing it, and sending a response. EMV uses the following commands:
application block
application unblock
card block
external authenticate (7816-4)
generate application cryptogram
get data (7816-4)
get processing options
internal authenticate (7816-4)
PIN change / unblock
read record (7816-4)
select (7816-4)
verify (7816-4).
Commands followed by "7816-4" are defined in ISO/IEC 7816-4 and are interindustry commands used for many chip card applications such as GSM SIM cards.
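As an illustration of the command format, the following sketch builds the ISO/IEC 7816-4 select command APDU for an application identifier (the AID shown is widely documented as Visa's, used here only as an example):

def select_apdu(aid):
    # CLA=00, INS=A4 (SELECT), P1=04 (select by DF name, i.e. AID),
    # P2=00 (first occurrence), Lc=len(aid), data=aid, Le=00.
    return bytes([0x00, 0xA4, 0x04, 0x00, len(aid)]) + aid + b"\x00"

print(select_apdu(bytes.fromhex("A0000000031010")).hex().upper())
# 00A4040007A000000003101000  (5-byte RID A0 00 00 00 03 plus PIX 10 10)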
Transaction flow
An EMV transaction has the following steps:
Application selection
Initiate application processing
Read application data
Processing restrictions
Offline data authentication
Certificates
Cardholder verification
Terminal risk management
Terminal action analysis
First card action analysis
Online transaction authorization (only carried out if required by the result of the previous steps; mandatory in ATMs)
Second card action analysis
Issuer script processing.
Application selection
ISO/IEC 7816 defines a process for application selection. The intent of application selection was to let cards contain completely different applications—for example GSM and EMV. However, EMV developers implemented application selection as a way of identifying the type of product, so that all product issuers (Visa, Mastercard, etc.) must have their own application. The way application selection is prescribed in EMV is a frequent source of interoperability problems between cards and terminals. Book 1 of the EMV standard devotes 15 pages to describing the application selection process.
An application identifier (AID) is used to address an application in the card or Host Card Emulation (HCE) if delivered without a card. An AID consists of a registered application provider identifier (RID) of five bytes, which is issued by the ISO/IEC 7816-5 registration authority. This is followed by a proprietary application identifier extension (PIX), which enables the application provider to differentiate among the different applications offered. The AID is printed on all EMV cardholder receipts. Card issuers can alter the application name from the name of the card network. Chase, for example, renames the Visa application on its Visa cards to "CHASE VISA", and the Mastercard application on its Mastercard cards to "CHASE MASTERCARD". Capital One renames the Mastercard application on its Mastercard cards to "CAPITAL ONE", and the Visa application on its Visa cards to "CAPITAL ONE VISA". The applications are otherwise the same.
Initiate application processing
The terminal sends the get processing options command to the card. When issuing this command, the terminal supplies the card with any data elements requested by the card in the processing options data objects list (PDOL). The PDOL (a list of tags and lengths of data elements) is optionally provided by the card to the terminal during application selection. The card responds with the application interchange profile (AIP), a list of functions to perform in processing the transaction. The card also provides the application file locator (AFL), a list of files and records that the terminal needs to read from the card.
Read application data
Smart cards store data in files. The AFL contains the files that contain EMV data. These all must be read using the read record command. EMV does not specify which files data is stored in, so all the files must be read. Data in these files is stored in BER TLV format. EMV defines tag values for all data used in card processing.
Processing restrictions
The purpose of the processing restrictions is to see if the card should be used. Three data elements read in the previous step are checked: the application version number, the application usage control (which shows whether the card is restricted to domestic use, for example), and the application effective/expiration dates.
If any of these checks fails, the card is not necessarily declined. The terminal sets the appropriate bit in the terminal verification results (TVR), the components of which form the basis of an accept/decline decision later in the transaction flow. This feature lets, for example, card issuers permit cardholders to keep using expired cards after their expiry date, but for all transactions with an expired card to be performed on-line.
Offline data authentication (ODA)
Offline data authentication is a cryptographic check to validate the card using public-key cryptography. There are three different processes that can be undertaken depending on the card:
Static data authentication (SDA) ensures data read from the card has been signed by the card issuer. This prevents modification of data, but does not prevent cloning.
Dynamic data authentication (DDA) provides protection against modification of data and cloning.
Combined DDA/generate application cryptogram (CDA) combines DDA with the generation of a card's application cryptogram to assure card validity. Support of CDA in devices may be needed, as this process has been implemented in specific markets. This process is not mandatory in terminals and can only be carried out where both card and terminal support it.
EMV certificates
To verify the authenticity of payment cards, EMV certificates are used. The EMV Certificate Authority issues digital certificates to payment card issuers. When requested, the payment card chip provides the card issuer's public key certificate and the signed static application data (SSAD) to the terminal. The terminal retrieves the CA's public key from local storage and uses it to confirm trust for the CA and, if trusted, to verify that the card issuer's public key was signed by the CA. If the card issuer's public key is valid, the terminal uses it to verify that the card's SSAD was signed by the card issuer.
Cardholder verification
Cardholder verification is used to evaluate whether the person presenting the card is the legitimate cardholder. There are many cardholder verification methods (CVMs) supported in EMV. They are
Signature
Offline plaintext PIN
Offline enciphered PIN
Offline plaintext PIN and signature
Offline enciphered PIN and signature
Online PIN
No CVM required
Consumer Device CVM
Fail CVM processing
The terminal uses a CVM list read from the card to determine the type of verification to perform. The CVM list establishes a priority of CVMs to use relative to the capabilities of the terminal. Different terminals support different CVMs. ATMs generally support online PIN. POS terminals vary in their CVM support depending on type and country.
For offline enciphered PIN methods, the terminal encrypts the cleartext PIN block with the card's public key before sending it to the card with the Verify command. For the online PIN method, the cleartext PIN block is encrypted by the terminal using its point-to-point encryption key before sending it to the acquirer processor in the authorization request message.
In 2017, EMVCo added support for biometric verification methods in version 4.3 of the EMV specifications.
Terminal risk management
Terminal risk management is only performed in devices where there is a decision to be made whether a transaction should be authorised on-line or off-line. If transactions are always carried out on-line (e.g., ATMs) or always off-line, this step can be skipped. Terminal risk management checks the transaction amount against an offline ceiling limit (above which transactions should be processed on-line). It is also possible to force a proportion of transactions on-line at random ("one in n" selection), and to check the card against a hot card list (which is only necessary for off-line transactions). If the result of any of these tests is positive, the terminal sets the appropriate bit in the terminal verification results (TVR).
Terminal action analysis
The results of previous processing steps are used to determine whether a transaction should be approved offline, sent online for authorization, or declined offline. This is done using a combination of data objects known as terminal action codes (TACs) held in the terminal and issuer action codes (IACs) read from the card. The TAC is logically OR'd with the IAC, to give the transaction acquirer a level of control over the transaction outcome.
Both types of action code take the values Denial, Online, and Default. Each action code contains a series of bits which correspond to the bits in the Terminal verification results (TVR), and are used in the terminal's decision whether to accept, decline or go on-line for a payment transaction. The TAC is set by the card acquirer; in practice card schemes advise the TAC settings that should be used for a particular terminal type depending on its capabilities. The IAC is set by the card issuer; some card issuers may decide that expired cards should be rejected, by setting the appropriate bit in the Denial IAC. Other issuers may want the transaction to proceed on-line so that they can in some cases allow these transactions to be carried out.
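The resulting decision logic can be sketched as follows (a deliberately simplified model with illustrative names; the real TVR, TACs and IACs are five-byte bit fields defined in EMV Book 3):

def terminal_action_analysis(tvr, tac, iac, online_available=True):
    # Each action code carries denial/online/default masks that line up
    # bit-for-bit with the terminal verification results (TVR).
    if tvr & (tac["denial"] | iac["denial"]):
        return "decline offline (request AAC)"
    if tvr & (tac["online"] | iac["online"]):
        if online_available:
            return "authorize online (request ARQC)"
        if tvr & (tac["default"] | iac["default"]):
            return "decline offline (request AAC)"  # default codes apply
    return "approve offline (request TC)"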
An online-only device such as an ATM always attempts to go on-line with the authorization request, unless declined off-line due to issuer action codes—Denial settings. During IAC—Denial and TAC—Denial processing, for an online only device, the only relevant Terminal verification results bit is "Service not allowed".
When an online-only device performs IAC—Online and TAC—Online processing the only relevant TVR bit is "Transaction value exceeds the floor limit". Because the floor limit is set to zero, the transaction should always go online and all other values in TAC—Online or IAC—Online are irrelevant. Online-only devices do not need to perform IAC-default processing.
First card action analysis
One of the data objects read from the card in the Read application data stage is CDOL1 (card data object list). This object is a list of tags that the card wants sent to it in order to decide whether to approve or decline the transaction (including the transaction amount, but many other data objects too). The terminal sends this data and requests a cryptogram using the generate application cryptogram command. Depending on the terminal's decision (offline, online, decline), the terminal requests one of the following cryptograms from the card:
Transaction certificate (TC)—Offline approval
Authorization Request Cryptogram (ARQC)—Online authorization
Application Authentication Cryptogram (AAC)—Offline decline.
This step gives the card the opportunity to accept the terminal's action analysis or to decline a transaction or force a transaction on-line. The card cannot return a TC when an ARQC has been asked for, but can return an ARQC when a TC has been asked for.
Online transaction authorization
Transactions go online when an ARQC has been requested. The ARQC is sent in the authorisation message. The card generates the ARQC. Its format depends on the card application. EMV does not specify the contents of the ARQC. The ARQC created by the card application is a digital signature of the transaction details, which the card issuer can check in real time. This provides a strong cryptographic check that the card is genuine. The issuer responds to an authorization request with a response code (accepting or declining the transaction), an authorisation response cryptogram (ARPC) and optionally an issuer script (a string of commands to be sent to the card).
ARPC processing is not performed in contact transactions processed with Visa Quick Chip for EMV and Mastercard M/Chip Fast, and in contactless transactions across schemes because the card is removed from the reader after the ARQC has been generated.
Second card action analysis
CDOL2 (card data object list) contains a list of tags that the card wants sent to it after online transaction authorisation (response code, ARPC, etc.). Even if for any reason the terminal could not go online (e.g., communication failure), the terminal should send this data to the card again using the generate application cryptogram command. This lets the card know the issuer's response. The card application may then reset offline usage limits.
Issuer script processing
If a card issuer wants to update a card post issuance it can send commands to the card using issuer script processing. Issuer scripts are meaningless to the terminal and can be encrypted between the card and the issuer to provide additional security. Issuer script can be used to block cards, or change card parameters.
Issuer script processing is not available in contact transactions processed with Visa Quick Chip for EMV and Mastercard M/Chip Fast, and for contactless transactions across schemes.
Control of the EMV standard
The first version of EMV standard was published in 1995. Now the standard is defined and managed by the privately owned corporation EMVCo LLC. The current members of EMVCo are American Express, Discover Financial, JCB International, Mastercard, China UnionPay, and Visa Inc. Each of these organizations owns an equal share of EMVCo and has representatives in the EMVCo organization and EMVCo working groups.
Recognition of compliance with the EMV standard (i.e., device certification) is issued by EMVCo following submission of results of testing performed by an accredited testing house.
EMV Compliance testing has two levels: EMV Level 1, which covers physical, electrical and transport level interfaces, and EMV Level 2, which covers payment application selection and credit financial transaction processing.
After passing common EMVCo tests, the software must be certified by payment brands to comply with proprietary EMV implementations such as Visa VSDC, American Express AEIPS, Mastercard MChip, JCB JSmart, or EMV-compliant implementations of non-EMVCo members such as LINK in the UK, or Interac in Canada.
List of EMV documents and standards
Since version 4.0 (as of 2011), the official EMV standard documents which define all the components in an EMV payment system are published as four "books" and some additional documents:
Book 1: Application Independent ICC to Terminal Interface Requirements
Book 2: Security and Key Management
Book 3: Application Specification
Book 4: Cardholder, Attendant, and Acquirer Interface Requirements
Common Payment Application Specification
EMV Card Personalisation Specification
Versions
The first EMV standard was published in 1995 as EMV 2.0. This was upgraded to EMV 3.0 in 1996 (sometimes referred to as EMV '96), with later amendments to EMV 3.1.1 in 1998. This was further amended to version 4.0 in December 2000 (sometimes referred to as EMV 2000). Version 4.0 became effective in June 2004. Version 4.1 became effective in June 2007. Version 4.2 has been in effect since June 2008. Version 4.3 has been in effect since November 2011.
Vulnerabilities
Opportunities to harvest PINs and clone magnetic stripes
In addition to the track-two data on the magnetic stripe, EMV cards generally have identical data encoded on the chip, which is read as part of the normal EMV transaction process. If an EMV reader is compromised to the extent that the conversation between the card and the terminal is intercepted, then the attacker may be able to recover both the track-two data and the PIN, allowing construction of a magnetic stripe card. While such a card cannot be used in a Chip and PIN terminal, it can be used, for example, in terminal devices that permit fallback to magstripe processing for foreign customers without chip cards or for defective cards. This attack is possible only where (a) the offline PIN is presented in plaintext by the PIN entry device to the card, (b) magstripe fallback is permitted by the card issuer, and (c) geographic and behavioural checking is not carried out by the card issuer.
APACS, representing the UK payment industry, claimed that changes specified to the protocol (where card verification values differ between the magnetic stripe and the chip – the iCVV) rendered this attack ineffective and that such measures would be in place from January 2008. Tests on cards in February 2008 indicated this may have been delayed.
Successful attacks
Conversation capturing is a form of attack which was reported to have taken place against Shell terminals in May 2006, when they were forced to disable all EMV authentication in their filling stations after more than £1 million was stolen from customers.
In October 2008, it was reported that hundreds of EMV card readers for use in Britain, Ireland, the Netherlands, Denmark, and Belgium had been expertly tampered with in China during or shortly after manufacture. For 9 months details and PINs of credit and debit cards were sent over mobile phone networks to criminals in Lahore, Pakistan. United States National Counterintelligence Executive Joel Brenner said, "Previously only a nation state's intelligence agency would have been capable of pulling off this type of operation. It's scary." Data were typically used a couple of months after the card transactions to make it harder for investigators to pin down the vulnerability. After the fraud was discovered it was found that tampered-with terminals could be identified as the additional circuitry increased their weight by about 100 g. Tens of millions of pounds sterling are believed to have been stolen. This vulnerability spurred efforts to implement better control of electronic POS devices over their entire life cycle, a practice endorsed by electronic payment security standards like those being developed by the Secure POS Vendor Alliance (SPVA).
PIN harvesting and stripe cloning
In a February 2008 BBC Newsnight programme Cambridge University researchers Steven Murdoch and Saar Drimer demonstrated one example attack, to illustrate that Chip and PIN is not secure enough to justify passing the liability to prove fraud from the banks onto customers. The Cambridge University exploit allowed the experimenters to obtain both card data to create a magnetic stripe and the PIN.
APACS, the UK payments association, disagreed with the majority of the report, saying "The types of attack on PIN entry devices detailed in this report are difficult to undertake and not currently economically viable for a fraudster to carry out." They also said that changes to the protocol (specifying different card verification values between the chip and magnetic stripe – the iCVV) would make this attack ineffective from January 2008. The fraud reported in October 2008 to have operated for 9 months (see above) was probably in operation at the time, but was not discovered for many months.
In August 2016, NCR (payment technology company) computer security researchers showed how credit card thieves can rewrite the code of a magnetic strip to make it appear like a chipless card, which allows for counterfeiting.
2010: Hidden hardware disables PIN checking on stolen card
On 11 February 2010 Murdoch and Drimer's team at Cambridge University announced that they had found "a flaw in chip and PIN so serious they think it shows that the whole system needs a re-write" that was "so simple that it shocked them". A stolen card is connected to an electronic circuit and to a fake card which is inserted into the terminal ("man-in-the-middle attack"). Any four digits are typed in and accepted as a valid PIN.
A team from the BBC's Newsnight programme visited a Cambridge University cafeteria (with permission) with the system, and were able to pay using their own cards (a thief would use stolen cards) connected to the circuit, inserting a fake card and typing in "0000" as the PIN. The transactions were registered as normal, and were not picked up by banks' security systems. A member of the research team said, "Even small-scale criminal systems have better equipment than we have. The amount of technical sophistication needed to carry out this attack is really quite low." The announcement of the vulnerability said, "The expertise that is required is not high (undergraduate level electronics) ... We dispute the assertion by the banking industry that criminals are not sophisticated enough, because they have already demonstrated a far higher level of skill than is necessary for this attack in their miniaturized PIN entry device skimmers." It is not known if this vulnerability has been exploited.
EMVCo disagreed and published a response saying that, while such an attack might be theoretically possible, it would be extremely difficult and expensive to carry out successfully, that current compensating controls are likely to detect or limit the fraud, and that the possible financial gain from the attack is minimal while the risk of a declined transaction or exposure of the fraudster is significant.
When approached for comment, several banks (Co-operative Bank, Barclays and HSBC) each said that this was an industry-wide issue, and referred the Newsnight team to the banking trade association for further comment. According to Phil Jones of the Consumers' Association, Chip and PIN has helped to bring down instances of card crime, but many cases remain unexplained. "What we do know is that we do have cases that are brought forward from individuals which seem quite persuasive."
Because submission of the PIN is suppressed, this is the exact equivalent of a merchant performing a PIN bypass transaction. Such transactions cannot succeed offline, as a card never generates an offline authorisation without a successful PIN entry. As a result, the transaction ARQC must be submitted online to the issuer, who knows that the ARQC was generated without a successful PIN submission (since this information is included in the encrypted ARQC) and hence would be likely to decline the transaction if it were for a high value, out of character, or otherwise outside the typical risk management parameters set by the issuer.
Originally, bank customers had to prove that they had not been negligent with their PIN before getting redress, but UK regulations in force from 1 November 2009 placed the onus firmly on the banks to prove that a customer has been negligent in any dispute, with the customer given 13 months to make a claim. Murdoch said that "[the banks] should look back at previous transactions where the customer said their PIN had not been used and the bank record showed it has, and consider refunding these customers because it could be they are victim of this type of fraud."
2011: CVM downgrade allows arbitrary PIN harvest
At the CanSecWest conference in March 2011, Andrea Barisani and Daniele Bianco presented research uncovering a vulnerability in EMV that would allow arbitrary PIN harvesting despite the cardholder verification configuration of the card, even when the supported CVMs data is signed.
The PIN harvesting can be performed with a chip skimmer. In essence, a CVM list that has been modified to downgrade the CVM to Offline PIN is still honoured by POS terminals, despite its signature being invalid.
PIN bypass
In 2020, researchers David Basin, Ralf Sasse, and Jorge Toro from ETH Zurich reported a critical security issue affecting Visa contactless cards. The issue consists of lack of cryptographic protection of critical data sent by the card to the terminal during an EMV transaction. The data in question determines the cardholder verification method (CVM, such as PIN verification) to be used for the transaction. The team demonstrated that it is possible to modify this data to trick the terminal into believing that no PIN is required because the cardholder was verified using their device (e.g. smartphone). The researchers developed a proof-of-concept Android app that effectively turns a physical Visa card into a mobile payment app (e.g. Apple Pay, Google Pay) to perform PIN-free, high-value purchases. The attack is carried out using two NFC-enabled smartphones, one held near the physical card and the second held near the payment terminal. The attack might affect cards by Discover and China's UnionPay but this was not demonstrated in practice, in contrast to the case of cards by Visa.
In early 2021, the same team disclosed that Mastercard cards are also vulnerable to a PIN bypass attack. They showed that criminals can trick a terminal into transacting with a Mastercard contactless card while believing it to be a Visa card. This card brand mixup has critical consequences since it can be used in combination with the PIN bypass for Visa to also bypass the PIN for Mastercard cards. "Complex systems such as EMV must be analyzed by automated tools, like model checkers," the researchers point out as the main takeaway of their findings. As opposed to humans, model-checking tools like Tamarin are up to the task since they can deal with the complexity of real-world systems like EMV.
Implementation
EMV originally stood for "Europay, Mastercard, and Visa", the three companies that created the standard. The standard is now managed by EMVCo, a consortium of financial companies. The most widely known chip implementations of the EMV standard are:
VIS: Visa
Mastercard chip: Mastercard
AEIPS: American Express
UICS: China Union Pay
J Smart: JCB
D-PAS: Discover/Diners Club International
Rupay: NPCI
Verve
Visa and Mastercard have also developed standards for using EMV cards in devices to support card not present transactions (CNP) over the telephone and Internet. Mastercard has the Chip Authentication Program (CAP) for secure e-commerce. Its implementation is known as EMV-CAP and supports a number of modes. Visa has the Dynamic Passcode Authentication (DPA) scheme, which is their implementation of CAP using different default values.
In many countries of the world, debit card and/or credit card payment networks have implemented liability shifts. Normally, the card issuer is liable for fraudulent transactions. However, after a liability shift is implemented, if the ATM or merchant's point of sale terminal does not support EMV, the ATM owner or merchant is liable for the fraudulent transaction.
Chip and PIN systems can cause problems for travellers from countries that do not issue Chip and PIN cards as some retailers may refuse to accept their chipless cards. While most terminals still accept a magnetic strip card, and the major credit card brands require vendors to accept them, some staff may refuse to take the card, under the belief that they are held liable for any fraud if the card cannot verify a PIN. Non-chip-and-PIN cards may also not work in some unattended vending machines at, for example, train stations, or self-service check-out tills at supermarkets.
Africa
Mastercard's liability shift among countries within this region took place on 1 January 2006. By 1 October 2010, a liability shift had occurred for all point of sale transactions.
Visa's liability shift for points of sale took place on 1 January 2006. For ATMs, the liability shift took place on 1 January 2008.
South Africa
Mastercard's liability shift took place on 1 January 2005.
Asian and Pacific countries
Mastercard's liability shift among countries within this region took place on 1 January 2006. By 1 October 2010, a liability shift had occurred for all point of sale transactions, except for domestic transactions in China and Japan.
Visa's liability shift for points of sale took place on 1 October 2010. For ATMs, the liability shift took place on 1 October 2015, except in China, India, Japan, and Thailand, where the liability shift was on 1 October 2017. Domestic ATM transactions in China are currently not subject to a liability shift deadline.
Australia
Mastercard required that all point of sale terminals be EMV capable by April 2013. For ATMs, the liability shift took place in April 2012. ATMs must be EMV compliant by the end of 2015.
Visa's liability shift for ATMs took place 1 April 2013.
Malaysia
Malaysia was the first country in the world to completely migrate to EMV-compliant smart cards, doing so in 2005, two years after implementation began.
New Zealand
Mastercard required all point of sale terminals to be EMV compliant by 1 July 2011. For ATMs, the liability shift took place in April 2012. ATMs are required to be EMV compliant by the end of 2015.
Visa's liability shift for ATMs was 1 April 2013.
Europe
Mastercard's liability shift took place on 1 January 2005.
Visa's liability shift for points of sale took place on 1 January 2006. For ATMs, the liability shift took place on 1 January 2008.
France has cut card fraud by more than 80% since its introduction in 1992 (see Carte Bleue).
United Kingdom
Chip and PIN was trialled in Northampton, England from May 2003, and as a result was rolled out nationwide in the United Kingdom on 14 February 2006 with advertisements in the press and national television touting the "Safety in Numbers" slogan. During the first stages of deployment, if a fraudulent magnetic swipe card transaction was deemed to have occurred, the retailer was refunded by the issuing bank, as was the case prior to the introduction of Chip and PIN. On 1 January 2005, the liability for such transactions was shifted to the retailer; this acted as an incentive for retailers to upgrade their point of sale (PoS) systems, and most major high-street chains upgraded in time for the EMV deadline. Many smaller businesses were initially reluctant to upgrade their equipment, as it required a completely new PoS system—a significant investment.
New cards featuring both magnetic strips and chips are now issued by all major banks. The replacement of pre-Chip and PIN cards was a major issue, as banks simply stated that consumers would receive their new cards "when their old card expires" — despite many people having had cards with expiry dates as late as 2007. The card issuer Switch lost a major contract with HBOS to Visa, as they were not ready to issue the new cards as early as the bank wanted.
The Chip and PIN implementation was criticised as designed to reduce the liability of banks in cases of claimed card fraud by requiring the customer to prove that they had acted "with reasonable care" to protect their PIN and card, rather than on the bank having to prove that the signature matched. Before Chip and PIN, if a customer's signature was forged, the banks were legally liable and had to reimburse the customer. Until 1 November 2009 there was no such law protecting consumers from fraudulent use of their Chip and PIN transactions, only the voluntary Banking Code. There were many reports that banks refused to reimburse victims of fraudulent card use, claiming that their systems could not fail under the circumstances reported, despite several documented successful large-scale attacks.
The Payment Services Regulations 2009 came into force on 1 November 2009 and shifted the onus onto the banks to prove, rather than assume, that the cardholder is at fault. The Financial Services Authority (FSA) said "It is for the bank, building society or credit card company to show that the transaction was made by you, and there was no breakdown in procedures or technical difficulty" before refusing liability.
Latin America and the Caribbean
Mastercard's liability shift among countries within this region took place on 1 January 2005.
Visa's liability shift for points of sale took place on 1 October 2012, for any countries in this region that had not already implemented a liability shift. For ATMs, the liability shift took place on 1 October 2014, for any countries in this region that had not already implemented a liability shift.
Brazil
Mastercard's liability shift took place on 1 March 2008.
Visa's liability shift for points of sale took place on 1 April 2011. For ATMs, the liability shift took place on 1 October 2012.
Colombia
Mastercard's liability shift took place on 1 October 2008.
Mexico
Discover implemented a liability shift on 1 October 2015. For pay at the pump at gas stations, the liability shift was on 1 October 2017.
Visa's liability shift for points of sale took place on 1 April 2011. For ATMs, the liability shift took place on 1 October 2012.
Venezuela
Mastercard's liability shift took place on 1 July 2009.
Middle East
Mastercard's liability shift among countries within this region took place on 1 January 2006. By 1 October 2010, a liability shift had occurred for all point of sale transactions.
Visa's liability shift for points of sale took place on 1 January 2006. For ATMs, the liability shift took place on 1 January 2008.
North America
Canada
American Express implemented a liability shift on 31 October 2012.
Discover implemented a liability shift on 1 October 2015 for all transactions except pay-at-the-pump at gas stations; those transactions shifted on 1 October 2017.
Interac (Canada's debit card network) stopped processing non-EMV transactions at ATMs on 31 December 2012, and mandated EMV transactions at point-of-sale terminals on 30 September 2016, with a liability shift taking place on 31 December 2015.
Mastercard implemented domestic transaction liability shift on 31 March 2011, and international liability shift on 15 April 2011. For pay at the pump at gas stations, the liability shift was implemented 31 December 2012.
Visa implemented domestic transaction liability shift on 31 March 2011, and international liability shift on 31 October 2010. For pay at the pump at gas stations, the liability shift was implemented 31 December 2012.
Over the five years following EMV migration, domestic card-present fraudulent transactions significantly decreased in Canada. According to Helcim's reports, card-present domestic debit card fraud fell by 89.49% and credit card fraud by 68.37%.
United States
After widespread identity theft due to weak security in the point-of-sale terminals at Target, Home Depot, and other major retailers, Visa, Mastercard and Discover in March 2012 – and American Express in June 2012 – announced their EMV migration plans for the United States. Since the announcement, multiple banks and card issuers have announced cards with EMV chip-and-signature technology, including American Express, Bank of America, Citibank, Wells Fargo, JPMorgan Chase, U.S. Bank, and several credit unions.
In 2010, a number of companies began issuing pre-paid debit cards that incorporate Chip and PIN and allow Americans to load cash as euros or pound sterling. United Nations Federal Credit Union was the first United States issuer to offer Chip and PIN credit cards. In May 2010, a press release from Gemalto (a global EMV card producer) indicated that United Nations Federal Credit Union in New York would become the first EMV card issuer in the United States, offering an EMV Visa credit card to its customers. JPMorgan was the first major bank to introduce a card with EMV technology, namely its Palladium card, in mid-2012.
As of April 2016, 70% of U.S. consumers had EMV cards, and as of December 2016 roughly 50% of merchants were EMV compliant. However, deployment has been slow and inconsistent across vendors. Even merchants with EMV hardware may not be able to process chip transactions due to software or compliance deficiencies. Bloomberg has also cited issues with software deployment, including changes to audio prompts for Verifone machines, which can take several months to release and deploy. Industry experts, however, expect more standardization in the United States for software deployment and standards. Visa and Mastercard have both implemented standards to speed up chip transactions, with a goal of reducing the time for these to be under three seconds. These systems are labelled Visa Quick Chip and Mastercard M/Chip Fast.
American Express implemented the liability shift for point of sale terminals on 1 October 2015. For pay-at-the-pump at gas stations, the liability shift was 16 April 2021, extended from 1 October 2020 due to complications from the coronavirus.
Discover implemented the liability shift on 1 October 2015. For pay-at-the-pump at gas stations, the liability shift was 1 October 2020.
Maestro implemented its liability shift on 19 April 2013 for international cards used in the United States.
Mastercard implemented the liability shift for point of sale terminals on 1 October 2015. For pay-at-the-pump at gas stations, the liability shift was formally on 1 October 2020. For ATMs, the liability shift date was 1 October 2016.
Visa implemented the liability shift for point of sale terminals on 1 October 2015. For pay-at-the-pump at gas stations, the liability shift was formally on 1 October 2020. For ATMs, the liability shift date was 1 October 2017.
Notes
See also
Contactless payment
Supply chain attack
Two-factor authentication
MM code
References
External links
Books on cryptography

Books on cryptography have been published sporadically and with highly variable quality for a long time. This is despite the tempting, though superficial, paradox that secrecy is of the essence in sending confidential messages — see Kerckhoffs' principle.
In contrast, the revolutions in cryptography and secure communications since the 1970s are well covered in the available literature.
Early history
An early example of a book about cryptography was a Roman work, now lost and known only by references. Many early cryptographic works were esoteric, mystical, and/or reputation-promoting; cryptography being mysterious, there was much opportunity for such things. At least one work by Trithemius was banned by the Catholic Church and put on the Index Librorum Prohibitorum as being about black magic or witchcraft. Many writers claimed to have invented unbreakable ciphers. None were, though it sometimes took a long while to establish this.
In the 19th century, the general standard improved somewhat (e.g., works by Auguste Kerckhoffs, Friedrich Kasiski, and Étienne Bazeries). Colonel Parker Hitt and William Friedman in the early 20th century also wrote books on cryptography. These authors, and others, mostly abandoned any mystical or magical tone.
Open literature versus classified literature
With the invention of radio, much of military communications went wireless, allowing the possibility of enemy interception much more readily than tapping into a landline. This increased the need to protect communications. By the end of World War I, cryptography and its literature began to be officially limited. One exception was the 1931 book The American Black Chamber by Herbert Yardley, which gave some insight into American cryptologic success stories, including the Zimmermann telegram and the breaking of Japanese codes during the Washington Naval Conference.
List
Overview of cryptography
Bertram, Linda A. / Dooble, Gunther van / et al. (Eds.): Nomenclatura: Encyclopedia of modern Cryptography and Internet Security - From AutoCrypt and Exponential Encryption to Zero-Knowledge-Proof Keys, 2019.
Piper, Fred and Sean Murphy, Cryptography: A Very Short Introduction. This book outlines the major goals, uses, methods, and developments in cryptography.
Significant books
Significant books on cryptography include:
Aumasson, Jean-Philippe (2017), Serious Cryptography: A Practical Introduction to Modern Encryption. No Starch Press, 2017. Presents modern cryptography in a readable way, suitable for practitioners, software engineers, and others who want to learn practice-oriented cryptography. Each chapter includes a discussion of common implementation mistakes using real-world examples and details what could go wrong and how to avoid these pitfalls.
Aumasson, Jean-Philippe (2021), Crypto Dictionary: 500 Tasty Tidbits for the Curious Cryptographer. No Starch Press, 2021. An ultimate desktop dictionary with hundreds of definitions organized alphabetically for all things cryptographic. The book also includes discussions of the threat that quantum computing poses to current cryptosystems and a nod to post-quantum algorithms, such as lattice-based cryptographic schemes.
Bertram, Linda A. / Dooble, Gunther van: Transformation of Cryptography - Fundamental concepts of Encryption, Milestones, Mega-Trends and sustainable Change in regard to Secret Communications and its Nomenclatura, 2019.
Candela, Rosario (1938). The Military Cipher of Commandant Bazeries. New York: Cardanus Press. This book details the cracking of a famous code from 1898 created by Commandant Bazeries, a brilliant French Army cryptanalyst.
Falconer, John (1685). Cryptomenysis Patefacta, or Art of Secret Information Disclosed Without a Key. One of the earliest English texts on cryptography.
Ferguson, Niels, and Schneier, Bruce (2003). Practical Cryptography, Wiley. A cryptosystem design consideration primer. Covers both algorithms and protocols. This is an in-depth consideration of one cryptographic problem, including paths not taken and some reasons why. At the time of its publication, most of the material was not otherwise available in a single source. Some was not otherwise available at all. According to the authors, it is (in some sense) a follow-up to Applied Cryptography.
Gaines, Helen Fouché (1939). Cryptanalysis, Dover. Considered one of the classic books on the subject; includes many sample ciphertexts for practice. It reflects public amateur practice as of the inter-war period. The book was compiled as one of the first projects of the American Cryptogram Association.
Goldreich, Oded (2001 and 2004). Foundations of Cryptography. Cambridge University Press. Presents the theoretical foundations of cryptography in a detailed and comprehensive manner. A must-read for anyone interested in the theory of cryptography.
Katz, Jonathan and Lindell, Yehuda (2007 and 2014). Introduction to Modern Cryptography, CRC Press. Presents modern cryptography at a level appropriate for undergraduates, graduate students, or practitioners. Assumes mathematical maturity but presents all the necessary mathematical and computer science background.
Konheim, Alan G. (1981). Cryptography: A Primer, John Wiley & Sons. Written by one of the IBM team who developed DES.
Mao, Wenbo (2004). Modern Cryptography: Theory and Practice. An up-to-date book on cryptography. Touches on provable security, and is written with students and practitioners in mind.
Mel, H.X., and Baker, Doris (2001). Cryptography Decrypted, Addison Wesley. This technical overview of basic cryptographic components (including extensive diagrams and graphics) explains the evolution of cryptography from the simplest concepts to some modern concepts. It details the basics of symmetric-key and asymmetric-key ciphers, MACs, SSL, secure mail and IPsec. No math background is required, though there is some coverage of the mathematics underlying public key/private key crypto in the appendix.
A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone (1996). Handbook of Applied Cryptography. Equivalent to Applied Cryptography in many ways, but somewhat more mathematical. For the technically inclined. Covers few meta-cryptographic topics, such as cryptosystem design. This is currently (2004) regarded as the standard reference work in technical cryptography.
Paar, Christof and Jan Pelzl (2009). Understanding Cryptography: A Textbook for Students and Practitioners, Springer. A very accessible introduction to applied cryptography which covers most schemes of practical relevance. The focus is on being a textbook, i.e., it has a pedagogical approach, many problems and further-reading sections. The main target audience is readers without a background in pure mathematics.
Patterson, Wayne (1987). Mathematical Cryptology for Computer Scientists and Mathematicians, Rowman & Littlefield.
Rosulek, Mike (2018). The Joy of Cryptography. Presents modern cryptography at a level appropriate for undergraduates.
Schneier, Bruce (1996). Applied Cryptography, 2nd ed., Wiley. The most accessible single volume covering modern cryptographic practice; it is approachable by the non-mathematically oriented. Its extensive bibliography can serve as an entry into the modern literature. It is a great book for beginners, but note that it is getting a bit dated—many important schemes such as AES or the eSTREAM candidates are missing entirely, and others like elliptic curves are only very briefly treated. Less immediately mathematical than some others, e.g. Menezes et al.'s Handbook of Applied Cryptography.
Smart, Nigel (2004). Cryptography: An Introduction. Similar in intent to Applied Cryptography but less comprehensive. Covers more modern material and is aimed at undergraduates, covering topics such as number theory and group theory not generally covered in cryptography books.
Stinson, Douglas (2005). Cryptography: Theory and Practice. Covers topics in a textbook style but with more mathematical detail than is usual.
Tenzer, Theo (2021): SUPER SECRETO – The Third Epoch of Cryptography: Multiple, exponential, quantum-secure and above all, simple and practical Encryption for Everyone, Norderstedt.
Young, Adam L. and Moti Yung (2004). Malicious Cryptography: Exposing Cryptovirology, John Wiley & Sons. Covers topics regarding the use of cryptography as an attack tool in systems, as was introduced in the 1990s: kleptography, which deals with hidden subversion of cryptosystems, and, more generally, cryptovirology, which predicted ransomware, in which cryptography is used as a tool to disable computing systems in a way that is reversible only by the attacker, generally requiring ransom payment(s).
Washington, Lawrence C. (2003). Elliptic Curves: Number Theory and Cryptography. A book focusing on elliptic curves, beginning at an undergraduate level (at least for those who have had a course on abstract algebra), and progressing into much more advanced topics, even at the end touching on Andrew Wiles' proof of the Taniyama–Shimura conjecture which led to the proof of Fermat's Last Theorem.
Welsh, Dominic (1988). Codes and Cryptography, Oxford University Press. A brief textbook intended for undergraduates. Some coverage of fundamental information theory. Requires some mathematical maturity; well written and otherwise accessible.
The Codebreakers
From the end of World War II until the early 1980s most aspects of modern cryptography were regarded as the special concern of governments and the military and were protected by custom and, in some cases, by statute. The most significant work to be published on cryptography in this period is undoubtedly David Kahn's The Codebreakers, which was published at a time (mid-1960s) when virtually no information on the modern practice of cryptography was available. Kahn has said that over ninety percent of its content was previously unpublished.
The book caused serious concern at the NSA despite its lack of coverage of specific modern cryptographic practice, so much so that after failing to prevent the book being published, NSA staff were informed to not even acknowledge the existence of the book if asked. In the US military, mere possession of a copy by cryptographic personnel was grounds for some considerable suspicion. Perhaps the single greatest importance of the book was the impact it had on the next generation of cryptographers. Whitfield Diffie has made comments in interviews about the effect it had on him.
Cryptographic environment/context or security
Schneier, Bruce – Secrets and Lies, Wiley. A discussion of the context within which cryptography and cryptosystems work. Practical Cryptography also includes some contextual material in the discussion of cryptosystem design.
Schneier, Bruce – Beyond Fear: Thinking Sensibly About Security in an Uncertain World, Wiley.
Anderson, Ross – Security Engineering, Wiley (online version). Advanced coverage of computer security issues, including cryptography. Covers much more than merely cryptography; brief on most topics due to the breadth of coverage. Well written, especially compared to the usual standard.
Edney, Jon and Arbaugh, William A – Real 802.11 Security: Wi-Fi Protected Access and 802.11i, Addison-Wesley. Covers the use of cryptography in Wi-Fi networks. Includes details on Wi-Fi Protected Access (which is based on the IEEE 802.11i specification). The book is slightly out of date, as it was written before IEEE 802.11i was finalized, but much of the content is still useful for those who want to find out how encryption and authentication are done in a Wi-Fi network.
Declassified works
Boak, David G. A History of U.S. Communications Security (Volumes I and II); the David G. Boak Lectures, National Security Agency (NSA), 1973. A frank, detailed, and often humorous series of lectures delivered to new NSA hires by a long-time insider, largely declassified as of 2015.
Callimahos, Lambros D. and Friedman, William F. Military Cryptanalytics. A (partly) declassified text intended as a training manual for NSA cryptanalysts.
Friedman, William F., Six Lectures on Cryptology, National Cryptology School, U.S. National Security Agency, 1965, declassified 1977, 1984
(How the Japanese Purple cipher was broken, declassified 2001)
History of cryptography
Bamford, James, The Puzzle Palace: A Report on America's Most Secret Agency (1982), and the more recent Body of Secrets: Anatomy of the Ultra-Secret National Security Agency (2001). The first is one of a very few books about the US Government's NSA. The second is also about NSA but concentrates more on its history. There is some very interesting material in Body of Secrets about US attempts (the TICOM mission) to investigate German cryptographic efforts immediately as WW II wound down.
Gustave Bertrand, Enigma ou la plus grande énigme de la guerre 1939–1945 (Enigma: the Greatest Enigma of the War of 1939–1945), Paris, 1973. The first public disclosure in the West of the breaking of Enigma, by the chief of French military cryptography prior to WW II. The first public disclosure anywhere was made in the first edition of Bitwa o tajemnice by the late Władysław Kozaczuk.
James Gannon, Stealing Secrets, Telling Lies: How Spies and Codebreakers Helped Shape the Twentieth Century, Washington, D.C., Brassey's, 2001: an overview of major 20th-century episodes in cryptology and espionage, particularly strong regarding the misappropriation of credit for conspicuous achievements.
Kahn, David – The Codebreakers (1967). A single-volume source for cryptographic history, at least for events up to the mid-'60s (i.e., to just before DES and the public release of asymmetric-key cryptography). The added chapter on more recent developments (in the most recent edition) is quite thin. Kahn has written other books and articles on cryptography, and on cryptographic history. They are very highly regarded.
Kozaczuk, Władysław, Enigma: How the German Machine Cipher Was Broken, and How It Was Read by the Allies in World War II, edited and translated by Christopher Kasparek, Frederick, MD, 1984: a history of cryptological efforts against Enigma, concentrating on the contributions of Polish mathematicians Marian Rejewski, Jerzy Różycki and Henryk Zygalski; of particular interest to specialists will be several technical appendices by Rejewski.
Levy, Steven – Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age (2001): a journalistic overview of the development of public cryptographic techniques and the US regulatory context for cryptography. This is an account of a major policy conflict.
Singh, Simon, The Code Book: an anecdotal introduction to the history of cryptography. Covers more recent material than does even the revised edition of Kahn's The Codebreakers. Clearly written and quite readable. The included cryptanalytic contest has been won and the prize awarded, but the cyphertexts are still worth attempting.
Bauer, F. L., Decrypted Secrets. This book is unusual: it is both a history of cryptography and a discussion of mathematical topics related to cryptography. In his review, David Kahn said he thought it the best book he'd read on the subject. It is essentially two books, in more or less alternating chapters. Originally in German, and the translation shows it in places. Some surprising content, e.g., in the discussion of President Herbert Hoover's Secretary of State, Henry Stimson.
Budiansky, Stephen, Battle of Wits: a one-volume history of cryptography in WW II. It is well written, well researched, and responsible. Technical material (e.g., a description of the cryptanalysis of Enigma) is limited, but clearly presented.
Budiansky, Stephen, Code Warriors: NSA's Codebreakers and the Secret Intelligence War Against the Soviet Union (Knopf, 2016). A sweeping, in-depth history of NSA, whose famous "cult of silence" has left the agency shrouded in mystery for decades.
Prados, John – Combined Fleet Decoded. An account of cryptography in the Pacific Theatre of World War II with special emphasis on the Japanese side. Reflects extensive research in Japanese sources and recently available US material. Contains material not previously accessible or available elsewhere.
Marks, Leo, Between Silk and Cyanide: a Codemaker's Story, 1941–1945 (HarperCollins, 1998). A humorous but informative account of code-making and -breaking in Britain's WWII Special Operations Executive.
Mundy, Liza, Code Girls (Hachette Books, 2017). An account of some of the thousands of women recruited for U.S. cryptologic work before and during World War II, including top analysts such as Elizebeth Smith Friedman and Agnes Meyer Driscoll, lesser known but outstanding contributors like Genevieve Grotjan Feinstein and Ann Zeilinger Caracristi, and many others, and how the women made a strategic difference in the war.
Yardley, Herbert, The American Black Chamber, a classic 1931 account of American code-breaking during and after World War I; and Chinese Black Chamber: An Adventure in Espionage, about Yardley's work with the Chinese government in the years just before World War II. Yardley has an enduring reputation for embellishment, and some of the material in these books is less than reliable. The American Black Chamber was written after the New York operation Yardley ran was shut down by Secretary of State Henry L. Stimson and the US Army, on the grounds that "gentlemen don't read each other's mail".
Historic works
Abu Yusuf Yaqub ibn Ishaq al-Sabbah Al-Kindi, (A Manuscript on Deciphering Cryptographic Messages), 9th century included first known explanation of frequency analysis cryptanalysis
Michel de Nostredame, (16th century prophet famed since 1555 for prognostications), known widely for his "Les Propheties" sets of quatrains composed from four languages into a ciphertext, deciphered in a series called "Rise to Consciousness" (Deschausses, M., Outskirts Press, Denver, CO, Nov 2008).
Roger Bacon (English friar and polymath), Epistle on the secret Works of Art and Nullity of Magic, 13th century, possibly the first European work on cryptography since Classical times, written in Latin and not widely available then or now
Johannes Trithemius, Steganographia ("Hidden Writing"), written ca. 1499; pub 1606, banned by the Catholic Church 1609 as alleged discussion of magic, see Polygraphiae (below).
Johannes Trithemius, Polygraphiae Libri Sex ("Six Books on Polygraphy"), 1518, first printed book on cryptography (thought to really be about magic by some observers at the time)
Giovan Battista Bellaso, La cifra del. Sig. Giovan Battista Bellaso, 1553, first publication of the cypher widely misattributed to Vigenère.
Giambattista della Porta, De Furtivis Literarum Notis ("On concealed characters in writing"), 1563.
Blaise de Vigenère, Traicte de Chiffres, 1585.
Gustavus Selenus, Cryptomenytics, 1624, (modern era English trans by J W H Walden)
John Wilkins, Mercury, 1647, earliest printed book in English about cryptography
Johann Ludwig Klüber, Kryptographik Lehrbuch der Geheimschreibekunst ("Cryptology: Instruction Book on the Art of Secret Writing"), 1809.
Friedrich Kasiski, Die Geheimschriften und die Dechiffrierkunst ("Secret writing and the Art of Deciphering"), pub 1863, contained the first public description of a technique for cryptanalyzing polyalphabetic cyphers.
Etienne Bazeries, Les Chiffres secrets dévoilés ("Secret ciphers unveiled") about 1900.
Émile Victor Théodore Myszkowski, Cryptographie indéchiffrable: basée sur de nouvelles combinaisons rationelles ("Unbreakable cryptography"), published 1902.
William F. Friedman and others, the Riverbank Publications, a series of pamphlets written during and after World War I that are considered seminal to modern cryptanalysis, including no. 22 on the Index of Coincidence.
Fiction
Neal Stephenson – Cryptonomicon (1999). The adventures of some World War II codebreakers and their modern-day progeny.
Edgar Allan Poe – "The Gold-Bug" (1843) An eccentric man discovers an ancient parchment which contains a cryptogram which, when solved, leads to the discovery of buried treasure. Includes a lengthy discourse on a method of solving a simple cypher.
Sir Arthur Conan Doyle – The Dancing Men. Holmes becomes involved in a case which features messages left lying around. They are written in a substitution cypher, which Holmes promptly discerns. Solving the cypher leads to solving the case.
Ken Follett – The Key to Rebecca (1980), World War II spy novel whose plot revolves around the heroes' efforts to cryptanalyze a book cipher with time running out.
Clifford B. Hicks – Alvin's Secret Code (1963), a children's novel which introduces some basics of cryptography and cryptanalysis.
Robert Harris – Enigma (1995). Novel partly set in Britain's World War II codebreaking centre at Bletchley Park.
Ari Juels – Tetraktys (2009). Pits a classicist turned cryptographer against an ancient Pythagorean cult. Written by RSA Labs chief scientist.
Dan Brown – Digital Fortress (1998), a thriller that plunges into the NSA's cryptology wing, giving readers a modern, technology-oriented view of codebreaking.
Max Hernandez – Thieves Emporium (2013), a novel that examines how the world would change if cryptography made fully bidirectional anonymous communication possible. Technically accurate, it shows the effects of crypto from the citizen's standpoint rather than the NSA's.
Barry Eisler, Fault Line (2009). A thriller about a race to nab software (of the cryptovirology type) which is capable of shutting down cyberspace.
References
External links
Listing and reviews for a large number of books in cryptography
A long list of works of fiction where the use of cryptology is a significant plot element. The list is in English.
List of where cryptography features in literature — list is presented in German. It draws on the English list above.
Lists of books
Computer security books
Cryptography lists and comparisons
Communications bibliographies
FileVault

FileVault is a disk encryption program in Mac OS X 10.3 (2003) and later. It performs on-the-fly encryption with volumes on Mac computers.
Versions and key features
FileVault was introduced with Mac OS X Panther (10.3), and could only be applied to a user's home directory, not the startup volume. The operating system uses an encrypted sparse disk image (a large single file) to present a volume for the home directory. Mac OS X Leopard and Mac OS X Snow Leopard use more modern sparse bundle disk images which spread the data over 8 MB files (called bands) within a bundle. Apple refers to this original iteration of FileVault as legacy FileVault.
Mac OS X Lion (10.7) and newer offer FileVault 2, which is a significant redesign. This encrypts the entire OS X startup volume and typically includes the home directory, abandoning the disk image approach. For this approach to disk encryption, authorised users' information is loaded from a separate non-encrypted boot volume (partition/slice type Apple_Boot).
FileVault
The original version of FileVault was added in Mac OS X Panther to encrypt a user's home directory.
Master passwords and recovery keys
When FileVault is enabled, the system invites the user to create a master password for the computer. If a user password is forgotten, the master password or recovery key may be used to decrypt the files instead.
Migration
Migration of FileVault home directories is subject to two limitations:
there must be no prior migration to the target computer
the target must have no existing user accounts.
If Migration Assistant has already been used or if there are user accounts on the target:
before migration, FileVault must be disabled at the source.
If transferring FileVault data from a previous Mac that uses 10.4 using the built-in utility to move data to a new machine, the data continues to be stored in the old sparse image format, and the user must turn FileVault off and then on again to re-encrypt in the new sparse bundle format.
Manual encryption
Instead of using FileVault to encrypt a user's home directory, a user can create an encrypted disk image themselves using Disk Utility and store any subset of their home directory in it. This encrypted image behaves similarly to a FileVault encrypted home directory, but is under the user's own maintenance.
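For illustration, this is roughly what Disk Utility does when creating such an image; the hdiutil flags follow its man page but should be verified on your macOS version, and the size, volume name and filesystem here are arbitrary choices, not values from this article.

import subprocess

# Create a 1 GB AES-256 encrypted sparse bundle; hdiutil prompts
# interactively for the passphrase. Use -fs "HFS+J" on older systems.
subprocess.run(
    [
        "hdiutil", "create",
        "-size", "1g",
        "-type", "SPARSEBUNDLE",
        "-fs", "APFS",
        "-encryption", "AES-256",
        "-volname", "Private",
        "Private",                 # creates Private.sparsebundle
    ],
    check=True,
)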
Encrypting only a part of a user's home directory might be problematic when applications need access to the encrypted files, which will not be available until the user mounts the encrypted image. This can be mitigated to a certain extent by making symbolic links for these specific files.
Limitations and issues
Backups
These limitations apply to versions of Mac OS X prior to v10.7 only.
Without Mac OS X Server, Time Machine will back up a FileVault home directory only while the user is logged out. In such cases, Time Machine is limited to backing up the home directory in its entirety. Using Mac OS X Server as a Time Machine destination, backups of FileVault home directories occur while users are logged in.
Because FileVault restricts the ways in which other users' processes can access the user's content, some third party backup solutions can back up the contents of a user's FileVault home directory only if other parts of the computer (including other users' home directories) are excluded.
Issues
Several shortcomings were identified in Legacy FileVault. Its security can be broken by cracking either 1024-bit RSA or 3DES-EDE.
Legacy FileVault used the CBC mode of operation (see disk encryption theory); FileVault 2 uses the stronger XTS-AES mode. Another issue is the storage of keys in the macOS "safe sleep" mode. A study published in 2008 found data remanence in dynamic random-access memory (DRAM), with data retention of seconds to minutes at room temperature and much longer times when memory chips were cooled to low temperature. The study authors were able to use a cold boot attack to recover cryptographic keys for several popular disk encryption systems, including FileVault, by taking advantage of redundancy in the way keys are stored after they have been expanded for efficient use, such as in key scheduling. The authors recommend that computers be powered down, rather than be left in a "sleep" state, when not in physical control by the owner.
Early versions of FileVault automatically stored the user's passphrase in the system keychain, requiring the user to notice and manually disable this security hole.
In 2006, following a talk at the 23rd Chaos Communication Congress titled Unlocking FileVault: An Analysis of Apple's Encrypted Disk Storage System, Jacob Appelbaum & Ralf-Philipp Weinmann released VileFault which decrypts encrypted Mac OS X disk image files.
A free space wipe using Disk Utility left a large portion of previously deleted file remnants intact. Similarly, FileVault compact operations only wiped small parts of previously deleted data.
FileVault 2
Security
FileVault uses the user's login password as the encryption pass phrase. It uses the XTS-AES mode of AES with 128-bit blocks and a 256-bit key to encrypt the disk, as recommended by NIST. Only unlock-enabled users can start or unlock the drive. Once unlocked, other users may also use the computer until it is shut down.
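A minimal sketch of how XTS is applied per disk sector, using the Python cryptography package (which implements AES-XTS); treating the sector number as the tweak is the usual disk-encryption convention, not a statement about FileVault's exact internals.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_sector(key, sector_number, plaintext):
    # The per-sector tweak makes identical plaintext encrypt differently
    # in different sectors, without needing a stored per-sector IV.
    tweak = sector_number.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

key = os.urandom(32)  # 256-bit XTS key, i.e. two concatenated AES-128 keys
ciphertext = encrypt_sector(key, 42, b"\x00" * 512)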
Performance
The I/O performance penalty for using FileVault 2 was found to be in the order of around 3% when using CPUs with the AES instruction set, such as the Intel Core i, and OS X 10.10.3. Performance deterioration will be larger for CPUs without this instruction set, such as older Core CPUs.
Master passwords and recovery keys
When FileVault 2 is enabled while the system is running, the system creates and displays a recovery key for the computer, and optionally offers the user to store the key with Apple. The 120-bit recovery key is encoded with all letters and the numbers 1 through 9, and is read from the system's random number generator, so it relies on the security of the PRNG used in macOS. An analysis in 2012 found this mechanism to be safe.
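As an illustration of why the PRNG matters, here is a sketch of generating a key in that style: 24 symbols drawn from the 35-character alphabet described above give log2(35) × 24 ≈ 123 bits, and the six-groups-of-four formatting is an assumption for readability, not Apple's documented algorithm.

import secrets

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ123456789"  # letters plus digits 1-9

def recovery_key():
    # secrets uses the OS CSPRNG, so the key is only as good as that PRNG.
    chars = [secrets.choice(ALPHABET) for _ in range(24)]
    return "-".join("".join(chars[i:i + 4]) for i in range(0, 24, 4))

print(recovery_key())  # e.g. a hypothetical 4QKX-M2JH-...-style string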
Changing the recovery key is not possible without re-encrypting the File Vault volume.
Validation
Users who use FileVault 2 in OS X 10.9 and above can validate that their recovery key works after encryption has finished by running the recovery key validation command in Terminal; the command returns true if the supplied key is correct.
Starting the OS with FileVault 2 without a user account
If a volume to be used for startup is erased and encrypted before clean installation of OS X 10.7.4 or 10.8:
there is a password for the volume
the clean system will immediately behave as if FileVault was enabled after installation
there is no recovery key, no option to store the key with Apple (but the system will behave as if a key was created)
when the computer is started, Disk Password will appear at the EfiLoginUI – this may be used to unlock the volume and start the system
the running system will present the traditional login window.
Apple describes this type of approach as Disk Password—based DEK.
See also
Apple Keychain
BitLocker
TrueCrypt
VeraCrypt
LUKS
References
MacOS
Cryptographic software
Disk encryption
FASTA format

In bioinformatics and biochemistry, the FASTA format is a text-based format for representing either nucleotide sequences or amino acid (protein) sequences, in which nucleotides or amino acids are represented using single-letter codes. The format also allows for sequence names and comments to precede the sequences. The format originates from the FASTA software package, but has now become a near-universal standard in the field of bioinformatics.
The simplicity of FASTA format makes it easy to manipulate and parse sequences using text-processing tools and scripting languages like the R programming language, Python, Ruby, Haskell, and Perl.
Original format & overview
The original FASTA/Pearson format is described in the documentation for the FASTA suite of programs. It can be downloaded with any free distribution of FASTA (see fasta20.doc, fastaVN.doc or fastaVN.me—where VN is the Version Number).
In the original format, a sequence was represented as a series of lines, each of which was no longer than 120 characters and usually
did not exceed 80 characters. This probably was to allow for preallocation of fixed line sizes in software: at the time most users relied on Digital Equipment Corporation (DEC) VT220 (or compatible) terminals which could display 80 or 132 characters per line. Most people preferred the bigger font in 80-character modes and so it became the recommended fashion to use 80 characters or less (often 70) in FASTA lines. Also, the width of a standard printed page is 70 to 80 characters (depending on the font). Hence, 80 characters became the norm.
The first line in a FASTA file started either with a ">" (greater-than) symbol or, less frequently, a ";" (semicolon); a line starting with a semicolon was taken as a comment. Subsequent lines starting with a semicolon would be ignored by software. Since the only comment used in practice was the first one, it quickly became used to hold a summary description of the sequence, often starting with a unique library accession number, and with time it has become commonplace to always use ">" for the first line and to not use ";" comments (which would otherwise be ignored).
Following the initial line (used for a unique description of the sequence) was the actual sequence itself, in a standard one-letter character string. Anything other than a valid character would be ignored (including spaces, tabulators, asterisks, etc.). It was also common to end the sequence with an "*" (asterisk) character (in analogy with use in PIR formatted sequences) and, for the same reason, to leave a blank line between the description and the sequence. Below are a few sample sequences:
;LCBO - Prolactin precursor - Bovine
; a sample sequence in FASTA format
MDSKGSSQKGSRLLLLLVVSNLLLCQGVVSTPVCPNGPGNCQVSLRDLFDRAVMVSHYIHDLSS
EMFNEFDKRYAQGKGFITMALNSCHTSSLPTPEDKEQAQQTHHEVLMSLILGLLRSWNDPLYHL
VTEVRGMKGAPDAILSRAIEIEEENKRLLEGMEMIFGQVIPGAKETEPYPVWSGLPSLQTKDED
ARYSAFYNLLHCLRRDSSKIDTYLKLLNCRIIYNNNC*
>MCHU - Calmodulin - Human, rabbit, bovine, rat, and chicken
MADQLTEEQIAEFKEAFSLFDKDGDGTITTKELGTVMRSLGQNPTEAELQDMINEVDADGNGTID
FPEFLTMMARKMKDTDSEEEIREAFRVFDKDGNGYISAAELRHVMTNLGEKLTDEEVDEMIREA
DIDGDGQVNYEEFVQMMTAK*
>gi|5524211|gb|AAD44166.1| cytochrome b [Elephas maximus maximus]
LCLYTHIGRNIYYGSYLYSETWNTGIMLLLITMATAFMGYVLPWGQMSFWGATVITNLFSAIPYIGTNLV
EWIWGGFSVDKATLNRFFAFHFILPFTMVALAGVHLTFLHETGSNNPLGLTSDSDKIPFHPYYTIKDFLG
LLILILLLLLLALLSPDMLGDPDNHMPADPLNTPLHIKPEWYFLFAYAILRSVPNKLGGVLALFLSIVIL
GLMPFLHTSKHRSMMLRPLSQALFWTLTMDLLTLTWIGSQPVEYPYTIIGQMASILYFSIILAFLPIAGX
IENY
A multiple sequence FASTA format would be obtained by concatenating several single sequence FASTA files in a common file (also known as multi-FASTA format). This does not conflict with the format, as only the first line in a FASTA file may start with a ";" or ">", forcing all subsequent sequences to start with a ">" in order to be taken as separate sequences (and further reserving ">" exclusively for the sequence definition line). Thus, the examples above may as well be taken as a multisequence (i.e., multi-FASTA) file if taken together.
Modern bioinformatics programs that rely on the FASTA format expect the sequence headers to be preceded by ">", and the actual sequence, while generally represented as "interleaved", i.e. on multiple lines as in the above example, may also be "sequential", where the full stretch is found on a single line. Users may often need to convert between "sequential" and "interleaved" FASTA format to run different bioinformatic programs.
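For illustration, a minimal Python sketch (the filename is a placeholder) that parses a multi-FASTA file into a dictionary and, by joining the sequence lines, effectively converts interleaved records to sequential form; old-style ";" comment lines and PIR-style "*" terminators are tolerated:

def read_fasta(path):
    """Parse a (multi-)FASTA file into {header: sequence}."""
    sequences = {}
    header = None
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith(";"):
                continue  # skip blank lines and old-style comment lines
            if line.startswith(">"):
                header = line[1:]            # description line without the ">"
                sequences[header] = []
            elif header is not None:
                sequences[header].append(line.rstrip("*"))  # drop PIR-style terminator
    return {h: "".join(parts) for h, parts in sequences.items()}

seqs = read_fasta("example.fasta")  # hypothetical input file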
Description line
The description line (defline) or header/identifier line, which begins with '>', gives a name and/or a unique identifier for the sequence, and may also contain additional information. In a deprecated practice, the header line sometimes contained more than one header, separated by a ^A (Control-A) character. In the original Pearson FASTA format, one or more comments, distinguished by a semi-colon at the beginning of the line, may occur after the header. Some databases and bioinformatics applications do not recognize these comments and follow the NCBI FASTA specification. An example of a multiple sequence FASTA file follows:
>SEQUENCE_1
MTEITAAMVKELRESTGAGMMDCKNALSETNGDFDKAVQLLREKGLGKAAKKADRLAAEG
LVSVKVSDDFTIAAMRPSYLSYEDLDMTFVENEYKALVAELEKENEERRRLKDPNKPEHK
IPQFASRKQLSDAILKEAEEKIKEELKAQGKPEKIWDNIIPGKMNSFIADNSQLDSKLTL
MGQFYVMDDKKTVEQVIAEKEKEFGGKIKIVEFICFEVGEGLEKKTEDFAAEVAAQL
>SEQUENCE_2
SATVSEINSETDFVAKNDQFIALTKDTTAHIQSNSLQSVEELHSSTINGVKFEEYLKSQI
ATIGENLVVRRFATLKAGANGVVNGYIHTNGRVGVVIAAACDSAEVASKSRDLLRQICMH
NCBI identifiers
The NCBI defined a standard for the unique identifier used for the sequence (SeqID) in the header line. This allows a sequence that was obtained from a database to be labelled with a reference to its database record. The database identifier format is understood by the NCBI tools like makeblastdb and table2asn. The following list describes the NCBI FASTA defined format for sequence identifiers.
The vertical bars ("|") in the above list are not separators in the sense of the Backus–Naur form, but are part of the format. Multiple identifiers can be concatenated, also separated by vertical bars.
Sequence representation
Following the header line, the actual sequence is represented. Sequences may be protein sequences or nucleic acid sequences, and they can contain gaps or alignment characters (see sequence alignment). Sequences are expected to be represented in the standard IUB/IUPAC amino acid and nucleic acid codes, with these exceptions: lower-case letters are accepted and are mapped into upper-case; a single hyphen or dash can be used to represent a gap character; and in amino acid sequences, U and * are acceptable letters (see below). Numerical digits are not allowed but are used in some databases to indicate the position in the sequence. The nucleic acid codes supported are:
The amino acid codes supported (22 amino acids and 3 special codes) are:
FASTA file
Filename extension
There is no standard filename extension for a text file containing FASTA formatted sequences. The table below shows each extension and its respective meaning.
Compression
The compression of FASTA files requires a specific compressor to handle both channels of information: identifiers and sequence. For improved compression results, these are mainly divided into two streams which are compressed independently. For example, the algorithm MFCompress performs lossless compression of these files using context modelling and arithmetic encoding. For benchmarks of FASTA file compression algorithms, see Hosseini et al., 2016, and Kryukov et al., 2020.
Encryption
The encryption of FASTA files has been mostly addressed with a specific encryption tool: Cryfa. Cryfa uses AES encryption and can also compact the data in addition to encrypting it. It can also handle FASTQ files.
Extensions
FASTQ format is a form of FASTA format extended to indicate information related to sequencing. It was created by the Sanger Centre in Cambridge.
A2M/A3M are a family of FASTA-derived formats used for sequence alignments. In A2M/A3M sequences, lowercase characters are taken to mean insertions, which are then indicated in the other sequences as the dot (".") character. The dots can be discarded for compactness without loss of information. As with typical FASTA used in alignments, the gap ("-") is taken to mean exactly one position. A3M is similar to A2M, with the added rule that gaps aligned to insertions can also be discarded.
Working with FASTA files
A plethora of user-friendly scripts are available from the community to perform FASTA file manipulations. Online toolboxes are also available such as FaBox or the FASTX-Toolkit within Galaxy servers. For instance, these can be used to segregate sequence headers/identifiers, rename them, shorten them, or extract sequences of interest from large FASTA files based on a list of wanted identifiers (among other available functions). A tree-based approach to sorting multi-FASTA files (TREE2FASTA) also exists based on the coloring and/or annotation of sequence of interest in the FigTree viewer. Additionally, Bioconductor.org's Biostrings package can be used to read and manipulate FASTA files in R.
Several online format converters exist to rapidly reformat multi-FASTA files to different formats (e.g. NEXUS, PHYLIP) for use with different phylogenetic programs, such as the converter available on phylogeny.fr.
See also
The FASTQ format, used to represent DNA sequencer reads along with quality scores.
The SAM format, used to represent genome sequencer reads, generally but not necessarily after they have been aligned to genome sequences.
The GVF format (Genome Variation Format), an extension based on the GFF3 format.
References
External links
Bioconductor
FASTX-Toolkit
FigTree viewer
Phylogeny.fr
GTO
Bioinformatics
Biological sequence format |
468925 | https://en.wikipedia.org/wiki/Blaise%20de%20Vigen%C3%A8re | Blaise de Vigenère | Blaise de Vigenère (5 April 1523 – 19 February 1596) () was a French diplomat, cryptographer, translator and alchemist.
Biography
Vigenère was born into a respectable family in the village of Saint-Pourçain. His mother, Jean, arranged for him to have a classical education in France. He studied Greek, Hebrew and Italian under Adrianus Turnebus and Jean Dorat.
At age 17 he entered the diplomatic service and remained there for 30 years, retiring in 1570. Five years into his career he accompanied the French envoy Louis Adhémar de Grignan to the Diet of Worms as a junior secretary. At age 24, he entered the service of the Duke of Nevers as his secretary, a position he held until the deaths of the Duke and his son in 1562. He also served as a secretary to Henry III.
In 1549 he visited Rome on a two-year diplomatic mission, and again in 1566. On both trips, he read books about cryptography and came in contact with cryptologists. When Vigenère retired aged 47, he donated his 1,000 livres a year income to the poor in Paris. He died of throat cancer in 1596 and is buried in the Saint-Étienne-du-Mont church.
Vigenère cipher
The method of encryption known as the "Vigenère cipher" was misattributed to Blaise de Vigenère in the 19th century; it was in fact first described by Giovan Battista Bellaso in his 1553 book La cifra del. Sig. Giovan Battista Bellaso. Vigenère created a different, stronger autokey cipher in his Traicté des Chiffres (1586). It differs from Bellaso's in several ways (a sketch of the repeating-key scheme follows the list below):
Bellaso used a "reciprocal table" of five alphabets; Vigenère used ten;
Bellaso's cipher was based on the first letter of the word; Vigenère used a letter agreed upon before communication.
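As a sketch of the repeating-key scheme that now bears Vigenère's name (Bellaso's method rather than Vigenère's autokey variant), the following minimal Python fragment shifts each letter by the corresponding letter of the repeated key; it assumes an A-Z alphabet with no punctuation:

def vigenere(text, key, decrypt=False):
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text.upper()):
        k = ord(key.upper()[i % len(key)]) - ord("A")  # key repeats over the text
        out.append(chr((ord(ch) - ord("A") + sign * k) % 26 + ord("A")))
    return "".join(out)

print(vigenere("ATTACKATDAWN", "LEMON"))                 # LXFOPVEFRNHR
print(vigenere("LXFOPVEFRNHR", "LEMON", decrypt=True))   # ATTACKATDAWN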
Works
After his retirement, Vigenère composed and translated over 20 books, including:
Les Chroniques et annales de Poloigne. Paris: Jean Richer, 1573. Available on Gallica.
La somptueuse et magnifique entrée du roi Henri III en la cité de Mantoue. Paris: Nicolas Chesneau, 1576. (Includes a description of contemporary Mantua.)
Les Commentaires de César, des guerres de la Gaule. Mis en françois par Blaise de Vigenère, Secretaire de la Chambre du Roy. Avec quelques annotations dessus. 1582.
Les Décades qui se trouvent de Tite-Live, mises en langue francoise avec des annotations & figures pour l'intelligence de l'antiquité romaine, plus une description particulière des lieux : & une chronologie generale des principaux potentats de la terre. Paris: Abel L'Angelier, 1583 and 1606.
Les commentaires de Cesar, des Guerres de la Gaule. Mise en francois par Blaise de Vigenere. Bourbonnois : revues et corrigez par luy-mesme en cette derniere edition. Avec quelques annotations dessus. 1584.
Traicté des Chiffres ou Secrètes Manières d'Escrire. 1586. Available on Gallica.
Le psaultier de David torne en prose mesuree, ou vers libres. Par Blaise de Vigenère, Bourbonnois. Paris: Abel L'Angelier, 1588.
Le psaultier de David torné en prose mesurée ou vers libres, édition de 1588; Pascale Blum-Cuny, ed., Le Miroir volant, 1991.
Les images, ou Tableaux de platte peinture de Philostrate Lemnien ,... mis en françois par Blaise de Vigénère,... avec des arguments et annotations sur chacun d'iceux... Edition nouvelle reveue corrigee et augmentee de beaucoup par le traslateur. Paris: Abel Langelier, 1597; Tournon: Claude Michel, 1611. Translation of a work by Philostratus of Lemnos; available on Gallica.
Traicté du Feu et du Sel. Excellent et rare opuscule du sieur Blaise de Vigenère Bourbonnois, trouvé parmy ses papiers après son decés. First ed., 1608. Paris: Abel Langelier, 1618. Rouen: Jacques Calloué, 1642. A book on alchemy; available on Gallica.
Traicté de Cometes
Traicté des Chiffres
See also
Vigenère cipher
Wild Fields
References
Ernst Bouchard. Notice biographique sur Blaise de Vigenère […], 1868.
Marc Fumaroli (editor). Blaise de Vigenère poète & mythographe au temps de Henri III, Cahiers V.L. Saulnier, no. 11, Paris: Éditions Rue d'Ulm, 1994
Métral, Denyse. Blaise de Vigenère archéologue et critique d'art, Paris: E. Droz, 1939
Maurice Sarazin. Blaise de Vigenère, Bourbonnais 1523-1596. Introduction à la vie et à l'œuvre d'un écrivain de la Renaissance, preface by Marc Fumaroli, Éditions des Cahiers bourbonnais, 1997
External links
1523 births
1596 deaths
16th-century French diplomats
Pre-19th-century cryptographers
French alchemists
Deaths from esophageal cancer
French cryptographers
Deaths from cancer in France
16th-century alchemists |
469246 | https://en.wikipedia.org/wiki/Lattice%20%28group%29 | Lattice (group) | In geometry and group theory, a lattice in the real coordinate space R^n is an infinite set of points in this space with the properties that coordinatewise addition or subtraction of two points in the lattice produces another lattice point, that the lattice points are all separated by some minimum distance, and that every point in the space is within some maximum distance of a lattice point. Closure under addition and subtraction means that a lattice must be a subgroup of the additive group of the points in the space, and the requirements of minimum and maximum distance can be summarized by saying that a lattice is a Delone set. More abstractly, a lattice can be described as a free abelian group of dimension n which spans the vector space R^n. For any basis of R^n, the subgroup of all linear combinations with integer coefficients of the basis vectors forms a lattice, and every lattice can be formed from a basis in this way. A lattice may be viewed as a regular tiling of a space by a primitive cell.
Lattices have many significant applications in pure mathematics, particularly in connection to Lie algebras, number theory and group theory. They also arise in applied mathematics in connection with coding theory, in cryptography because of conjectured computational hardness of several lattice problems, and are used in various ways in the physical sciences. For instance, in materials science and solid-state physics, a lattice is a synonym for the "framework" of a crystalline structure, a 3-dimensional array of regularly spaced points coinciding in special cases with the atom or molecule positions in a crystal. More generally, lattice models are studied in physics, often by the techniques of computational physics.
Symmetry considerations and examples
A lattice is the symmetry group of discrete translational symmetry in n directions. A pattern with this lattice of translational symmetry cannot have more, but may have less symmetry than the lattice itself. As a group (dropping its geometric structure) a lattice is a finitely-generated free abelian group, and thus isomorphic to Z^n.
A lattice in the sense of a 3-dimensional array of regularly spaced points coinciding with e.g. the atom or molecule positions in a crystal, or more generally, the orbit of a group action under translational symmetry, is a translate of the translation lattice: a coset, which need not contain the origin, and therefore need not be a lattice in the previous sense.
A simple example of a lattice in R^n is the subgroup Z^n. More complicated examples include the E8 lattice, which is a lattice in R^8, and the Leech lattice in R^24. The period lattice in R^2 is central to the study of elliptic functions, developed in nineteenth century mathematics; it generalises to higher dimensions in the theory of abelian functions. Lattices called root lattices are important in the theory of simple Lie algebras; for example, the E8 lattice is related to a Lie algebra that goes by the same name.
Dividing space according to a lattice
A typical lattice Λ in R^n thus has the form

Λ = { a1v1 + ⋯ + anvn : ai ∈ Z }

where {v1, ..., vn} is a basis for R^n. Different bases can generate the same lattice, but the absolute value of the determinant of the vectors vi is uniquely determined by Λ, and is denoted by d(Λ).
If one thinks of a lattice as dividing the whole of into equal polyhedra (copies of an n-dimensional parallelepiped, known as the fundamental region of the lattice), then d(Λ) is equal to the n-dimensional volume of this polyhedron. This is why d(Λ) is sometimes called the covolume of the lattice. If this equals 1, the lattice is called unimodular.
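As a small numerical sketch (using the numpy library, with an arbitrary example basis), d(Λ) is simply the absolute value of the determinant of the matrix whose rows are the basis vectors:

import numpy as np

basis = np.array([[2.0, 0.0],
                  [1.0, 3.0]])          # rows are the basis vectors v1, v2 (example values)
covolume = abs(np.linalg.det(basis))    # d(Λ) = |det| = 6.0 here
print(covolume)                         # the lattice is unimodular exactly when this equals 1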
Lattice points in convex sets
Minkowski's theorem relates the number d(Λ) and the volume of a symmetric convex set S to the number of lattice points contained in S. The number of lattice points contained in a polytope all of whose vertices are elements of the lattice is described by the polytope's Ehrhart polynomial. Formulas for some of the coefficients of this polynomial involve d(Λ) as well.
Computational lattice problems
Computational lattice problems have many applications in computer science. For example, the Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL) has been used in the cryptanalysis of many public-key encryption schemes, and many lattice-based cryptographic schemes are known to be secure under the assumption that certain lattice problems are computationally difficult.
Lattices in two dimensions: detailed discussion
There are five 2D lattice types as given by the crystallographic restriction theorem. Below, the wallpaper group of the lattice is given in IUC notation, Orbifold notation, and Coxeter notation, along with a wallpaper diagram showing the symmetry domains. Note that a pattern with this lattice of translational symmetry cannot have more, but may have less symmetry than the lattice itself. A full list of subgroups is available. For example below the hexagonal/triangular lattice is given twice, with full 6-fold and a half 3-fold reflectional symmetry. If the symmetry group of a pattern contains an n-fold rotation then the lattice has n-fold symmetry for even n and 2n-fold for odd n.
For the classification of a given lattice, start with one point and take a nearest second point. For the third point, not on the same line, consider its distances to both points. Among the points for which the smaller of these two distances is least, choose a point for which the larger of the two is least. (This is not logically equivalent to, but in the case of lattices gives the same result as, simply "choose a point for which the larger of the two distances is least".)
The five cases correspond to the triangle being equilateral, right isosceles, right, isosceles, and scalene. In a rhombic lattice, the shortest distance may either be a diagonal or a side of the rhombus, i.e., the line segment connecting the first two points may or may not be one of the equal sides of the isosceles triangle. This depends on the smaller angle of the rhombus being less than 60° or between 60° and 90°.
The general case is known as a period lattice. If the vectors p and q generate the lattice, instead of p and q we can also take p and p−q, etc. In general in 2D, we can take a p + b q and c p + d q for integers a, b, c and d such that ad − bc is 1 or −1. This ensures that p and q themselves are integer linear combinations of the other two vectors. Each pair p, q defines a parallelogram, all with the same area, the magnitude of the cross product. One parallelogram fully defines the whole object. Without further symmetry, this parallelogram is a fundamental parallelogram.
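A small numerical check of this area invariance (again a sketch using numpy, with arbitrary example generators and coefficients satisfying ad − bc = 1):

import numpy as np

p, q = np.array([2.0, 0.0]), np.array([1.0, 3.0])   # example generators
a, b, c, d = 1, 1, 1, 2                             # ad - bc = 1
p2, q2 = a * p + b * q, c * p + d * q               # a new pair of generators
area  = abs(np.linalg.det(np.array([p, q])))
area2 = abs(np.linalg.det(np.array([p2, q2])))
print(area, area2)    # 6.0 6.0: the fundamental parallelogram area is unchanged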
The vectors p and q can be represented by complex numbers. Up to size and orientation, a pair can be represented by their quotient. Expressed geometrically: if two lattice points are 0 and 1, we consider the position of a third lattice point. Equivalence in the sense of generating the same lattice is represented by the modular group: z ↦ z + 1 represents choosing a different third point in the same grid, while z ↦ −1/z represents choosing a different side of the triangle as reference side 0-1, which in general implies changing the scaling of the lattice, and rotating it. Each "curved triangle" in the image contains for each 2D lattice shape one complex number, the grey area is a canonical representation, corresponding to the classification above, with 0 and 1 two lattice points that are closest to each other; duplication is avoided by including only half of the boundary. The rhombic lattices are represented by the points on its boundary, with the hexagonal lattice as vertex, and i for the square lattice. The rectangular lattices are at the imaginary axis, and the remaining area represents the parallelogrammetic lattices, with the mirror image of a parallelogram represented by the mirror image in the imaginary axis.
Lattices in three dimensions
The 14 lattice types in 3D are called Bravais lattices. They are characterized by their space group. 3D patterns with translational symmetry of a particular type cannot have more, but may have less symmetry than the lattice itself.
Lattices in complex space
A lattice in C^n is a discrete subgroup of C^n which spans C^n as a real vector space. As the dimension of C^n as a real vector space is equal to 2n, a lattice in C^n will be a free abelian group of rank 2n.
For example, the Gaussian integers Z + iZ form a lattice in C, as {1, i} is a basis of C over R.
In Lie groups
More generally, a lattice Γ in a Lie group G is a discrete subgroup, such that the quotient G/Γ is of finite measure, for the measure on it inherited from Haar measure on G (left-invariant, or right-invariant—the definition is independent of that choice). That will certainly be the case when G/Γ is compact, but that sufficient condition is not necessary, as is shown by the case of the modular group in SL2(R), which is a lattice but where the quotient isn't compact (it has cusps). There are general results stating the existence of lattices in Lie groups.
A lattice is said to be uniform or cocompact if G/Γ is compact; otherwise the lattice is called non-uniform.
Lattices in general vector-spaces
While we normally consider lattices in R^n, this concept can be generalized to any finite-dimensional vector space over any field. This can be done as follows:
Let K be a field, let V be an n-dimensional K-vector space, let B = {v1, ..., vn} be a K-basis for V and let R be a ring contained within K. Then the R-lattice L in V generated by B is given by:

L = { a1v1 + ⋯ + anvn : ai ∈ R }.
In general, different bases B will generate different lattices. However, if the transition matrix T between the bases is in GL_n(R), the general linear group of R (in simple terms this means that all the entries of T are in R and all the entries of T^{-1} are in R, which is equivalent to saying that the determinant of T is in R*, the unit group of elements in R with multiplicative inverses), then the lattices generated by these bases will be isomorphic since T induces an isomorphism between the two lattices.
Important cases of such lattices occur in number theory with K a p-adic field and R the p-adic integers.
For a vector space which is also an inner product space, the dual lattice L* can be concretely described by the set

L* = { v ∈ V : ⟨v, x⟩ ∈ Z for all x ∈ L }

or equivalently, in terms of a basis {v1, ..., vn} of L, as

L* = { v ∈ V : ⟨v, vi⟩ ∈ Z for i = 1, ..., n }.
Related notions
A primitive element of a lattice is an element that is not a positive integer multiple of another element in the lattice.
See also
Lattice (order)
Lattice (module)
Reciprocal lattice
Unimodular lattice
Crystal system
Mahler's compactness theorem
Lattice graph
Lattice-based cryptography
Notes
References
External links
Catalogue of Lattices (by Nebe and Sloane)
Discrete groups
Lie groups
Analytic geometry |
470209 | https://en.wikipedia.org/wiki/General%20Inter-ORB%20Protocol | General Inter-ORB Protocol | In distributed computing, General Inter-ORB Protocol (GIOP) is the message protocol by which object request brokers (ORBs) communicate in CORBA. Standards associated with the protocol are maintained by the Object Management Group (OMG). The current version of GIOP is 2.0.2. The GIOP architecture provides several concrete protocols, including:
Internet InterORB Protocol (IIOP) — The Internet Inter-Orb Protocol is an implementation of the GIOP for use over the Internet, and provides a mapping between GIOP messages and the TCP/IP layer.
SSL InterORB Protocol (SSLIOP) — SSLIOP is IIOP over SSL, providing encryption and authentication.
HyperText InterORB Protocol (HTIOP) — HTIOP is IIOP over HTTP, providing transparent proxy bypassing.
Zipped InterORB Protocol (ZIOP) — A zipped version of GIOP that reduces the bandwidth usage.
Environment Specific Inter-ORB Protocols
As an alternative to GIOP, CORBA includes the concept of an Environment Specific Inter-ORB Protocol (ESIOP). While GIOP is defined to meet general-purpose needs of most CORBA implementations, an ESIOP attempts to address special requirements. For example, an ESIOP might use an alternative protocol encoding to improve efficiency over networks with limited bandwidth or high latency. ESIOPs can also be used to layer CORBA on top of some non-CORBA technology stack, such as Distributed Computing Environment (DCE).
DCE Common Inter-ORB Protocol (DCE-CIOP) is an ESIOP for use in DCE. It maps CORBA to DCE RPC and CDR (Common Data Representation). DCE-CIOP is defined in chapter 16 of the CORBA 2.6.1 standard.
Messages
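Every GIOP message begins with a fixed 12-byte header containing the "GIOP" magic, the protocol version, a flags octet whose low bit gives the byte order (in GIOP 1.1 and later), a message type, and the size of the following body. A minimal Python sketch follows (field layout per the OMG specification as commonly described; the values used are purely illustrative):

import struct

# GIOP message types as defined in the CORBA specification
MESSAGE_TYPES = {0: "Request", 1: "Reply", 2: "CancelRequest", 3: "LocateRequest",
                 4: "LocateReply", 5: "CloseConnection", 6: "MessageError", 7: "Fragment"}

def pack_giop_header(major, minor, msg_type, body_size, little_endian=False):
    flags = 1 if little_endian else 0                   # bit 0 of flags: 0 = big-endian body
    fmt = "<4sBBBBI" if little_endian else ">4sBBBBI"   # message size uses the declared byte order
    return struct.pack(fmt, b"GIOP", major, minor, flags, msg_type, body_size)

header = pack_giop_header(1, 2, 0, 48)   # a GIOP 1.2 Request with a 48-byte body
print(header.hex())                      # 47494f500102000000000030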
Further reading
See also
DIIOP
References
Distributed computing |
471217 | https://en.wikipedia.org/wiki/Pick%20operating%20system | Pick operating system | The Pick operating system (often called just "the Pick system" or simply "Pick") is a demand-paged, multiuser, virtual memory, time-sharing computer operating system based around a MultiValue database. Pick is used primarily for business data processing. It is named after one of its developers, Richard A. (Dick) Pick.
The term "Pick system" has also come to be used as the general name of all operating environments which employ this multivalued database and have some implementation of Pick/BASIC and ENGLISH/Access queries. Although Pick started on a variety of minicomputers, the system and its various implementations eventually spread to a large assortment of microcomputers, personal computers and mainframe computers.
Overview
The Pick operating system consists of a database, dictionary, query language, procedural language (PROC), peripheral management, multi-user management, and a compiled BASIC Programming language.
The database is a 'hash-file' data management system. A hash-file system is a collection of dynamic associative arrays which are organized together, linked and controlled using associative files as a database management system. Being hash-file oriented, Pick provides efficient data access times. Originally, all data structures in Pick were hash-files (at the lowest level), meaning records are stored as associated couplets of a primary key to a set of values. Today a Pick system can also natively access host files in Windows or Unix in any format.
A Pick database is divided into one or more accounts, master dictionaries, dictionaries, files, and sub-files, each of which is a hash-table oriented file. These files contain records made up of fields, sub-fields, and sub-sub-fields. In Pick, records are called items, fields are called attributes, and sub-fields are called values or sub-values (hence the present-day label "multivalued database"). All elements are variable-length, with field and values marked off by special delimiters, so that any file, record, or field may contain any number of entries of the lower level of entity. As a result, a Pick item (record) can be one complete entity (one entire invoice, purchase order, sales order, etc.), or is like a file on most conventional systems. Entities that are stored as 'files' in other common-place systems (e.g. source programs and text documents) must be stored as records within files on Pick.
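The nesting of attributes, values and sub-values can be illustrated with a short Python sketch using the mark characters traditionally documented for Pick implementations (attribute mark 0xFE, value mark 0xFD, sub-value mark 0xFC); the invoice fields are invented for the example:

AM, VM, SVM = chr(0xFE), chr(0xFD), chr(0xFC)   # attribute, value and sub-value marks

def parse_item(raw):
    """Split a raw item into attributes -> values -> sub-values."""
    return [[value.split(SVM) for value in attribute.split(VM)]
            for attribute in raw.split(AM)]

invoice = AM.join(["INV1001", "2023-10-12", VM.join(["WIDGET", "GADGET"])])
print(parse_item(invoice))
# [[['INV1001']], [['2023-10-12']], [['WIDGET'], ['GADGET']]]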
The file hierarchy is roughly equivalent to the common Unix-like hierarchy of directories, sub-directories, and files. The master dictionary is similar to a directory in that it stores pointers to other dictionaries, files and executable programs. The master dictionary also contains the command-line language.
All files (accounts, dictionaries, files, sub-files) are organized identically, as are all records. This uniformity is exploited throughout the system, both by system functions, and by the system administration commands. For example, the 'find' command will find and report the occurrence of a word or phrase in a file, and can operate on any account, dictionary, file or sub-file.
Each record must have a unique primary key which determines where in a file that record is stored. To retrieve a record, its key is hashed and the resultant value specifies which of a set of discrete "buckets" (called "groups") to look in for the record. Within a bucket, records are scanned sequentially. Therefore, most records (e.g. a complete document) can be read using one single disk-read operation. This same method is used to write the record back to its correct "bucket".
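The bucket mechanism can be sketched in a few lines of Python (a toy model: the hash function below is purely illustrative and is not Pick's actual algorithm, and real implementations store groups as linked disk frames rather than in-memory lists):

NUM_GROUPS = 7            # the "modulo" chosen when the file is created

def group_for(key):
    return sum(key.encode()) % NUM_GROUPS    # toy hash; Pick's real hash differs

groups = [[] for _ in range(NUM_GROUPS)]     # each group holds (key, record) pairs

def write_record(key, record):
    bucket = groups[group_for(key)]
    bucket[:] = [(k, r) for k, r in bucket if k != key]   # replace any older version
    bucket.append((key, record))

def read_record(key):
    for k, r in groups[group_for(key)]:      # sequential scan within one group
        if k == key:
            return r
    return None

write_record("INV1001", "example invoice data")
print(read_record("INV1001"))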
In its initial implementation, Pick records were limited to 32 KB in total (when a 10 MB hard disk cost US$5000), although this limit was removed in the 1980s. Files can contain an unlimited number of records, but retrieval efficiency is determined by the number of records relative to the number of buckets allocated to the file. Each file may be initially allocated as many buckets as required, although changing this extent later may (for some file types) require the file to be quiescent. All modern multi-value databases have a special file-type which changes extent dynamically as the file is used. These use a technique called linear hashing, whose cost is proportional to the change in file size, not (as in typical hashed files) the file size itself. All files start as a contiguous group of disk pages, and grow by linking additional "overflow" pages from unused disk space.
Initial Pick implementations had no index structures as they were not deemed necessary. Around 1990, a B-tree indexing feature was added. This feature makes secondary key look-ups operate much like keyed inquiries of any other database system: requiring at least two disk reads (a key read then a data-record read).
Pick data files are usually two levels. The first level is known as the "dictionary" level and is mandatory. It contains:
Dictionary items – the optional items that serve as definitions for the names and structure of the items in the data fork, used in reporting
The data-level identifier – a pointer to the second or "data" level of the file
Files created with only one level are, by default, dictionary files. Some versions of the Pick system allow multiple data levels to be linked to one dictionary level file, in which case there would be multiple data-level identifiers in the dictionary file.
A Pick database has no data typing, since all data is stored as characters, including numbers (which are stored as character decimal digits). Data integrity, rather than being controlled by the system, is controlled by the applications and the discipline of the programmers. Because a logical document in Pick is not fragmented (as it would be in SQL), intra-record integrity is automatic.
In contrast to many SQL database systems, Pick allows for multiple, pre-computed field aliases. For example, a date field may have an alias definition for the format "12 Oct 1999", and another alias formatting that same date field as "10/12/99". File cross-connects or joins are handled as a synonym definition of the foreign key. A customer's data, such as name and address, are "joined" from the customer file into the invoice file via a synonym definition of "customer number" in the "invoice" dictionary.
Pick record structure favors a non-first-normal-form composition, where all of the data for an entity is stored in a single record, obviating the need to perform joins. Managing large, sparse data sets in this way can result in efficient use of storage space. This is why these databases are sometimes called NF2 or NF-squared databases.
History
Pick was originally implemented as the Generalized Information Retrieval Language System (GIRLS) on an IBM System/360 in 1965 by Don Nelson and Richard (Dick) Pick at TRW, whose government contract for the Cheyenne Helicopter project required developing a database. It was supposed to be used by the U.S. Army to control the inventory of Cheyenne helicopter parts.
Pick was subsequently commercially released in 1973 by Microdata Corporation (and its British distributor CMC) as the Reality Operating System now supplied by Northgate Information Solutions. McDonnell Douglas bought Microdata in 1981.
Originally on the Microdata implementation, and subsequently implemented on all Pick systems, a BASIC language called Data/BASIC with numerous syntax extensions for smart terminal interface and database operations was the primary programming language for applications. A PROC procedure language was provided for executing scripts. A SQL-style language called ENGLISH allowed database retrieval and reporting, but not updates (although later, the ENGLISH command "REFORMAT" allowed updates on a batch basis). ENGLISH did not fully allow manipulating the 3-dimensional multivalued structure of data records. Nor did it directly provide common relational capabilities such as joins. This was because powerful data dictionary redefinitions for a field allowed joins via the execution of a calculated lookup in another file. The system included a spooler. A simple text editor for file-system records was provided, but the editor was only suitable for system maintenance, and could not lock records, so most applications were written with the other tools such as Batch, RPL, or the BASIC language so as to ensure data validation and allow record locking.
By the early 1980s observers saw the Pick operating system as a strong competitor to Unix. BYTE in 1984 stated that "Pick is simple and powerful, and it seems to be efficient and reliable, too ... because it works well as a multiuser system, it's probably the most cost-effective way to use an XT". Dick Pick founded Pick & Associates, later renamed Pick Systems, then Raining Data, then TigerLogic, and finally Rocket Software. He licensed "Pick" to a large variety of manufacturers and vendors who have produced different "flavors" of Pick. The database flavors sold by TigerLogic were D3, mvBase, and mvEnterprise. Those previously sold by IBM under the "U2" umbrella are known as UniData and UniVerse. Rocket Software purchased IBM's U2 family of products in 2010 and TigerLogic's D3 and mvBase family of products in 2014. In 2021, Rocket acquired OpenQM and jBASE as well.
Dick Pick died of stroke complications in October 1994.
Pick Systems often became tangled in licensing litigation, and devoted relatively little effort to marketing and improving its software. Subsequent ports of Pick to other platforms generally offered the same tools and capabilities for many years, usually with relatively minor improvements and simply renamed (for example, Data/BASIC became Pick/BASIC and ENGLISH became ACCESS). Licensees often developed proprietary variations and enhancements (for example, Microdata created their own input processor called ScreenPro).
Derivative and related products
The Pick database was licensed to roughly three dozen licensees between 1978 and 1984. Application-compatible implementations evolved into derivatives and also inspired similar systems.
Reality – The first implementation of the Pick database was on a Microdata platform using firmware and called Reality. The first commercial release was in 1973. Microdata acquired CMC Ltd. in the early 80s and were based in Hemel Hempstead, England. The Microdata implementations ran in firmware, so each upgrade had to be accompanied by a new configuration chip. Microdata itself was eventually bought by McDonnell Douglas Information Systems. Pick and Microdata sued each other for the right to market the database, the final judgment being that they both had the right. In addition to the Reality Sequoia and Pegasus series of computers, Microdata and CMC Ltd. sold the Sequel (Sequoia) series which was a much larger class able to handle over 1000 simultaneous users. The earlier Reality minicomputers were known to handle well over 200 simultaneous users, although performance was slow and it was above the official limit. Pegasus systems superseded Sequoia and could handle even more simultaneous users than its predecessors. The modern version of this original Pick implementation is owned and distributed by Northgate Information Solutions Reality.
Ultimate – The second implementation of the Pick database was developed in about 1978 by a New Jersey company called The Ultimate Corp, run by Ted Sabarese. Like the earlier Microdata port, this was a firmware implementation, with the Pick instruction set in firmware and the monitor in assembly code on a Honeywell Level 6 machine. The system had dual personalities in that the monitor/kernel functions (mostly hardware I/O and scheduling) were executed by the native Honeywell Level 6 instruction set. When the monitor "select next user" for activation control was passed to the Honeywell WCS (writable control store) to execute Pick assembler code (implemented in microcode) for the selected process. When the user's time slice expired control was passed back to the kernel running the native Level 6 instruction set.
Ultimate took this concept further with the DEC LSI/11 family of products by implementing a co-processor in hardware (bit-slice, firmware driven). Instead of a single processor with a WCS microcode enhanced instruction set, this configuration used two independent but cooperating CPUs. The LSI11 CPU executed the monitor functions and the co-processor executed the Pick assembler instruction set. The efficiencies of this approach resulted in a 2× performance improvement. The co-processor concept was used again to create a 5×, 7×, and dual-7× versions for Honeywell Level 6 systems. Dual ported memory with private busses to the co-processors were used to increase performance of the LSI11 and Level 6 systems.
Another version used a DEC LSI-11 for the IOP and a 7X board. Ultimate enjoyed moderate success during the 1980s, and even included an implementation running as a layer on top of DEC VAX systems, the 750, 780, 785, and later the MicroVAX. Ultimate also had versions of the Ultimate Operating System running on IBM 370 series systems (under VM and native) and also the 9370 series computers. Ultimate was renamed Allerion, Inc., before liquidation of its assets. Most assets were acquired by Groupe Bull, and consisted of mostly maintaining extant hardware. Bull had its own problems and in approximately 1994 the US maintenance operation was sold to Wang.
Prime INFORMATION – Devcom, a Microdata reseller, wrote a Pick-style database system called INFORMATION in FORTRAN and assembler in 1979 to run on Prime Computer 50-series systems. It was then sold to Prime Computer and renamed Prime INFORMATION. It was subsequently sold to VMark Software Inc. This was the first of the guest operating environment implementations. INFO/BASIC, a variant of Dartmouth BASIC, was used for database applications.
UniVerse – Another implementation of the system, called UniVerse, was created by VMark Software and operated under Unix and Microsoft Windows. This was the first one to incorporate the ability to emulate other implementations of the system, such as Microdata's Reality Operating System, and Prime INFORMATION. Originally running on Unix, it was later also made available for Windows. It now is owned by Rocket Software. (The systems developed by Prime Computer and VMark are now owned by Rocket Software and referred to as "U2".)
UniData – Very similar to UniVerse, but UniData had facilities to interact with other Windows applications. It is also owned and distributed by Rocket Software.
PI/open – Prime Computer rewrote Prime INFORMATION in C for the Unix-based systems it was selling, calling it PI+. It was then ported to other Unix systems offered by other hardware vendors and renamed PI/open.
Applied Digital Data Systems (ADDS) – This was the first implementation to be done in software only, so upgrades were accomplished by a tape load, rather than a new chip. The "Mentor" line was initially based on the Zilog Z-8000 chipset and this port set off a flurry of other software implementations across a wide array of processors with a large emphasis on the Motorola 68000.
Fujitsu Microsystems of America – Another software implementation, existing in the late 1980s. Fujitsu Microsystems of America was acquired by Alpha Microsystems on October 28, 1989.
Pyramid – Another software implementation existing in the 1980s
General Automation "Zebra" – Another software implementation existing in the 1980s
Altos – A software implementation on an 8086 chipset platform launched around 1983.
WICAT/Pick – Another software implementation existing in the 1980s
Sequoia – Another software implementation, existing from 1984. Sequoia was best known for its fault-tolerant multi-processor model. With the user's permission, a support engineer could dial into the system: the user turned the key on the system console to switch terminal zero to remote, and could then watch what the support person did from terminal 0, a printer with a keyboard. Pegasus came out in 1987. The Enterprise Systems business unit (which was the unit that sold Pick) was sold to General Automation in 1996/1997.
Revelation – In 1984, Cosmos released a Pick-style database called Revelation, later Advanced Revelation, for DOS on the IBM PC. Advanced Revelation is now owned by Revelation Technologies, which publishes a GUI-enabled version called OpenInsight.
jBASE – jBASE was released in 1991 by a small company of the same name in Hemel Hempstead, England. Written by former Microdata engineers, jBASE emulates all implementations of the system to some degree. jBASE compiles applications to native machine code form, rather than to an intermediate byte code. In 2015, cloud solutions provider Zumasys in Irvine, California, acquired the jBASE distribution rights from Mpower1 as well as the intellectual property from Temenos Group. On 14 Oct 2021, Zumasys announced they had sold their databases and tools, including jBASE to Rocket Software.
UniVision – UniVision was a Pick-style database designed as a replacement for the Mentor version, but with extended features, released in 1992 by EDP in Sheffield, England.
OpenQM – The only MultiValue database product available both as a fully supported non-open source commercial product and in open source form under the General Public License. OpenQM is available from its exclusive worldwide distributor, Zumasys.
Caché – In 2005 InterSystems, the maker of Caché database, announced support for a broad set of MultiValue extensions, Caché for MultiValue.
ONware – ONware equips MultiValue applications with the ability to use common databases such as Oracle and SQL Server. Using ONware, MultiValue applications can be integrated with relational, object, and object-relational applications.
D3 – Pick Systems ported the Pick operating system to run as a database product utilizing host operating systems such as Unix, Linux, or Windows servers, with the data stored within the file system of the host operating system. Previous Unix or Windows versions had to run in a separate partition, which made interfacing with other applications difficult. The D3 releases opened the possibility of integrating internet access to the database or interfacing to popular word processing and spreadsheet applications, which has been successfully demonstrated by a number of users. The D3 family of databases and related tools is owned and distributed by Rocket Software.
Through the implementations above, and others, Pick-like systems became available as database/programming/emulation environments running under many variants of Unix and Microsoft Windows.
Over the years, many important and widely used applications have been written using Pick or one of the derivative implementations. In general, the end users of these applications are unaware of the underlying Pick implementation.
Criticisms and comparisons
Run-time environment
Native Pick did not require an underlying operating system (OS) to run. This changed with later implementations when Pick was re-written to run on various host OS (Windows, Linux, Unix, etc.). While the host OS provided access to hardware resources (processor, memory, storage, etc.), Pick had internal processes for memory management. Object-oriented Caché addressed some of these problems.
Networking in mvBase was not possible without an accompanying application running in the host OS that could manage network connections via TCP ports and relay them to Pick internal networking (via serial connection).
Credentials and security
Individual user accounts must be created within the Pick OS, and cannot be tied to an external source (such as local accounts on the host OS, or LDAP).
User passwords are stored within the Pick OS as an encrypted value. The encrypted password can be "cracked" via brute force methods, but requires system access and Pick programming skills as part of the attack vector.
The Rocket D3 implementation supports SSL file encryption.
Expertise and support
Companies looking to hire developers and support personnel for MultiValue-based (Pick-based) systems recognize that although developers typically do not learn the environment in college and university courses, developers can be productive quickly with some mentoring and training. Due to the efficient design and nature of the programming language (a variant of BASIC), the learning curve is generally considered low. Pick products such as D3, UniVerse, UniData, jBASE, Revelation, MVON, Caché, OpenQM, and Reality are still supported globally via well established distribution channels and resellers. The mvdbms Google Group is a useful place to start when looking for resources. (See mvdbms on Google Groups)
See also
MUMPS, the predecessor of Caché
References
Bibliography
The REALITY Pocket Guide ; Jonathan E. Sisk ; Irvine, CA ; JES & Associates, Inc. ; 1981
The PICK Pocket Guide; Jonathan E. Sisk ; Irvine, CA ; Pick Systems ; 1982
Exploring The Pick Operating System ; Jonathan E. Sisk ; Steve VanArsdale ; Hasbrouck Heights, N.J. ; Hayden Book Co. 1985.
The Pick Pocket Guide ; Jonathan E. Sisk ; Desk reference ed ; Hasbrouck Heights, N.J. ; Hayden Book Co. 1985.
The Pick Perspective ; Ian Jeffrey Sandler ; Blue Ridge Summit, PA ; TAB Professional and Reference Books; 1989. Part of The Pick Library Series, Edited by Jonathan E. Sisk
Pick for professionals : advanced methods and techniques ; Harvey Rodstein ; Blue Ridge Summit, PA ; TAB Professional and Reference Books; 1990. Part of The Pick Library Series, Edited by Jonathan E. Sisk
Encyclopedia PICK (EPICK) ; Jonathan E. Sisk ; Irvine, CA ; Pick Systems ; 1992
Le Système d'exploitation PICK ; Malcolm Bull ; Paris: Masson, 1989.
The Pick operating system ; Joseph St John Bate; Mike Wyatt; New York : Van Nostrand Reinhold, 1986.
The Pick operating system ; Malcolm Bull ; London ; New York : Chapman and Hall, 1987.
Systeme pick ; Patrick Roussel, Pierre Redoin, Michel Martin ; Paris: CEdi Test, 1988.
Advanced PICK et UNIX : la nouvelle norme informatique ; Bruno Beninca; Aulnay-sous-Bois, Seine-Saint-Denis ; Relais Informatique International, 1990.
Le systeme PICK : mode d'emploi d'un nouveau standard informatique ; Michel Lallement, Jeanne-Françoise Beltzer; Aulnay-sous-Bois, Seine-Saint-Denis ; Relais Informatique International, 1987.
The Pick operating system : a practical guide ; Roger J Bourdon; Wokingham, England ; Reading, Mass. : Addison-Wesley, 1987.
Le Système d'éxploitation : réalités et perspectives ; Bernard de Coux; Paris : Afnor, 1988.
Pick BASIC : a programmer's guide ; Jonathan E Sisk;Blue Ridge Summit, PA : TAB Professional and Reference Books, 1987. Part of The Pick Library Series, Edited by Jonathan E. Sisk
Pick BASIC : a reference guide ; Linda Mui; Sebastopol, CA : O'Reilly & Associates, 1990.
Programming with IBM PC Basic and the Pick database system ; Blue Ridge Summit, PA : TAB Books, 1990. Part of The Pick Library Series, Edited by Jonathan E. Sisk
An overview of PICK system ;Shailesh Kamat; 1993.
Pick: A Multilingual Operating System ; Charles M. Somerville; Computer Language Magazine, May 1987, p. 34.
Encyclopedia Pick; Jonathan E. Sisk; Pick Systems, June 1991
External links
Photo of Dick Pick in his anti-gravity boots on the cover of Computer Systems News, 1983.
Pick/BASIC: A Programmer's Guide – the full text of the first and most widely read textbook by Pick educator and author Jonathan E. Sisk.
Life the Universe and Everything: introduction to and online training course in Universe developed by Pick software engineer Manny Neira.
Video: "History of the PICK System" made in 1990
Pick Publications Database
1987 Interview with Dick Pick in the Pick Pavilion at COMDEX
1990 Interview with Dick Pick in the Pick Pavilion at COMDEX
1990 Interview with Jonathan Sisk in the Pick Pavilion at COMDEX
1991 Pick Rap Show at COMDEX, co-written by Jonathan Sisk and John Treankler
1992 Video of Dick and Zion Pick, who appeared in the Ross Perot campaign rally - includes entire unedited Perot speech
An insightful early history of the Pick System, by Chandru Murthi, who was there at the time
1984 PC Magazine article "Choosing the Pick of the Litter", by Jonathan E. Sisk and Steve VanArsdale
Database Management Approach to Operating Systems Development, by Richard A. Pick Chapter 5 of New Directions for Database Systems, Gad Ariav, James Clifford editors
Doing More With Less Hardware, Computer History Museum piece on Pick
1965 software
Data processing
Legacy systems
Proprietary database management systems
Proprietary operating systems
Assembly language software
Time-sharing operating systems
X86 operating systems
68k architecture |
472463 | https://en.wikipedia.org/wiki/Maurice%20Wilkes | Maurice Wilkes | Sir Maurice Vincent Wilkes (26 June 1913 – 29 November 2010) was a British computer scientist who designed and helped build the Electronic Delay Storage Automatic Calculator (EDSAC), one of the earliest stored program computers, and who invented microprogramming, a method for using stored-program logic to operate the control unit of a central processing unit's circuits. At the time of his death, Wilkes was an Emeritus Professor at the University of Cambridge.
Early life, education, and military service
Wilkes was born in Dudley, Worcestershire, England the only child of Ellen (Helen), née Malone (1885–1968) and Vincent Joseph Wilkes (1887–1971), an accounts clerk at the estate of the Earl of Dudley. He grew up in Stourbridge, West Midlands, and was educated at King Edward VI College, Stourbridge. During his school years he was introduced to amateur radio by his chemistry teacher.
He studied the Mathematical Tripos at St John's College, Cambridge from 1931 to 1934, and in 1936 completed his PhD in physics on the subject of radio propagation of very long radio waves in the ionosphere. He was appointed to a junior faculty position at the University of Cambridge, through which he was involved in the establishment of a computing laboratory. He was called up for military service during World War II and worked on radar at the Telecommunications Research Establishment (TRE) and in operational research.
Research and career
In 1945, Wilkes was appointed as the second director of the University of Cambridge Mathematical Laboratory (later known as the Computer Laboratory).
The Cambridge laboratory initially had many different computing devices, including a differential analyser. One day Leslie Comrie visited Wilkes and lent him a copy of John von Neumann's prepress description of the EDVAC, a successor to the ENIAC under construction by Presper Eckert and John Mauchly at the Moore School of Electrical Engineering. He had to read it overnight because he had to return it and no photocopying facilities existed. He decided immediately that the document described the logical design of future computing machines, and that he wanted to be involved in the design and construction of such machines. In August 1946 Wilkes travelled by ship to the United States to enroll in the Moore School Lectures, of which he was only able to attend the final two weeks because of various travel delays. During the five-day return voyage to England, Wilkes sketched out in some detail the logical structure of the machine which would become EDSAC.
EDSAC
Since his laboratory had its own funding, he was immediately able to start work on a small practical machine, EDSAC (for "Electronic Delay Storage Automatic Calculator"), once back at Cambridge. He decided that his mandate was not to invent a better computer, but simply to make one available to the university. Therefore, his approach was relentlessly practical. He used only proven methods for constructing each part of the computer. The resulting computer was slower and smaller than other planned contemporary computers. However, his laboratory's computer was the second practical stored-program computer to be completed and operated successfully from May 1949, well over a year before the much larger and more complex EDVAC. In 1950, along with David Wheeler, Wilkes used EDSAC to solve a differential equation relating to gene frequencies in a paper by Ronald Fisher. This represents the first use of a computer for a problem in the field of biology.
Other computing developments
In 1951, he developed the concept of microprogramming from the realisation that the central processing unit of a computer could be controlled by a miniature, highly specialised computer program in high-speed ROM. This concept greatly simplified CPU development. Microprogramming was first described at the University of Manchester Computer Inaugural Conference in 1951, then expanded and published in IEEE Spectrum in 1955. This concept was implemented for the first time in EDSAC 2, which also used multiple identical "bit slices" to simplify design. Interchangeable, replaceable tube assemblies were used for each bit of the processor. The next computer for his laboratory was the Titan, a joint venture with Ferranti Ltd begun in 1963. It eventually supported the UK's first time-sharing system which was inspired by CTSS and provided wider access to computing resources in the university, including time-shared graphics systems for mechanical CAD.
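The idea can be caricatured in a few lines of Python (a deliberately toy, hypothetical sketch, not EDSAC 2's actual microcode): the control store is a table mapping each machine-level opcode to a sequence of micro-operations, so adding or changing an instruction means editing the table rather than redesigning control logic.

# Hypothetical machine: one accumulator, micro-ops held in a "ROM" table.
state = {"acc": 0}

def load(operand):  state["acc"] = operand
def add(operand):   state["acc"] += operand

CONTROL_STORE = {                              # each opcode maps to a microprogram
    "LOAD":   [load],
    "INC":    [lambda _: add(1)],
    "DOUBLE": [lambda _: add(state["acc"])],   # micro-op reuse simplifies the design
}

def execute(opcode, operand=None):
    for micro_op in CONTROL_STORE[opcode]:     # the "micro-sequencer"
        micro_op(operand)

execute("LOAD", 5); execute("INC"); execute("DOUBLE")
print(state["acc"])   # 12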
A notable design feature of the Titan's operating system was that it provided controlled access based on the identity of the program, as well as, or instead of, the identity of the user. It introduced the password encryption system used later by Unix. Its programming system also had an early version control system.
Wilkes is also credited with the idea of symbolic labels, macros and subroutine libraries. These are fundamental developments that made programming much easier and paved the way for high-level programming languages. Later, Wilkes worked on an early timesharing system (now termed a multi-user operating system) and distributed computing. Toward the end of the 1960s, Wilkes also became interested in capability-based computing, and the laboratory assembled a unique computer, the Cambridge CAP.
In 1974, Wilkes encountered a Swiss data network (at Hasler AG) that used a ring topology to allocate time on the network. The laboratory initially used a prototype to share peripherals. Eventually, commercial partnerships were formed, and similar technology became widely available in the UK.
Awards, honours and leadership
Wilkes received a number of distinctions: he was a Knight Bachelor, Distinguished Fellow of the British Computer Society, a Fellow of the Royal Academy of Engineering and a Fellow of the Royal Society.
Wilkes was a founder member of the British Computer Society (BCS) and its first president (1957–1960). He received the Turing Award in 1967, with the following citation: "Professor Wilkes is best known as the builder and designer of the EDSAC, the first computer with an internally stored program. Built in 1949, the EDSAC used a mercury delay-line memory. He is also known as the author, with David Wheeler and Stanley Gill, of a volume on Preparation of Programs for Electronic Digital Computers in 1951, in which program libraries were effectively introduced." In 1968 he received the Harry H. Goode Memorial Award, with the following citation: "For his many original achievements in the computer field, both in engineering and software, and for his contributions to the growth of professional society activities and to international cooperation among computer professionals."
In 1972, Wilkes was awarded an honorary Doctor of Science by Newcastle University.
In 1980, he retired from his professorships and post as the head of the Computer Laboratory and joined the central engineering staff of Digital Equipment Corporation in Maynard, Massachusetts, USA.
Wilkes was awarded the Faraday Medal by the Institution of Electrical Engineers in 1981. The Maurice Wilkes Award, awarded annually for an outstanding contribution to computer architecture made by a young computer scientist or engineer, is named after him.
In 1986, he returned to England and became a member of Olivetti's Research Strategy Board. In 1987, he was awarded an Honorary Degree (Doctor of Science) by the University of Bath. In 1993 Wilkes was presented, by Cambridge University, with an honorary Doctor of Science degree. In 1994 he was inducted as a Fellow of the Association for Computing Machinery. He was awarded the Mountbatten Medal in 1997 and in 2000 presented the inaugural Pinkerton Lecture. He was knighted in the 2000 New Years Honours List. In 2001, he was inducted as a Fellow of the Computer History Museum "for his contributions to computer technology, including early machine design, microprogramming, and the Cambridge Ring network." In 2002, Wilkes moved back to the Computer Laboratory, University of Cambridge, as an emeritus professor.
In his memoirs Wilkes wrote:
Publications
Oscillations of the Earth's Atmosphere (1949), Cambridge University Press
Preparation of Programs for an Electronic Digital Computer (1951), with D. J. Wheeler and S. Gill, Addison Wesley Press
Automatic Digital Computers (1956), Methuen Publishing
A Short Introduction to Numerical Analysis (1966), Cambridge University Press
Time-sharing Computer Systems (1968), Macdonald
The Cambridge CAP Computer and its Operating System (1979), with R. M. Needham, Elsevier
Memoirs of a Computer Pioneer (1985), MIT Press
Computing Perspectives (1995) Morgan-Kauffman
Personal life
Wilkes married Nina Twyman in 1947; she died in 2008. He died in November 2010 and was survived by his son, Anthony, and two daughters, Margaret and Helen.
References
External links
Oral history interview with David J. Wheeler, Charles Babbage Institute, University of Minnesota. Wheeler was a research student under Wilkes at the University Mathematical Laboratory at Cambridge from 1948–51. Wheeler discusses the EDSAC project, the influence of EDSAC on the ILLIAC, the ORDVAC, and the IBM 701 computers, as well as visits to Cambridge by Douglas Hartree, Nelson Blackman (of ONR), Peter Naur, Aad van Wijngarden, Arthur van der Poel, Friedrich Bauer, and Louis Couffignal.
Listen to an oral history interview with Maurice Wilkes – recorded in June 2010 for An Oral History of British Science at the British Library
An after-dinner talk by Maurice Wilkes at King's College, Cambridge, about Alan Turing. Filmed on 1 October 1997 by Ian Pratt (video)
1913 births
2010 deaths
Alumni of St John's College, Cambridge
British computer scientists
Computer designers
Digital Equipment Corporation people
English physicists
Fellows of the Royal Academy of Engineering
Fellows of the Royal Society
Fellows of the British Computer Society
Fellows of the Association for Computing Machinery
Foreign associates of the National Academy of Sciences
History of computing in the United Kingdom
Knights Bachelor
Members of the University of Cambridge Computer Laboratory
People educated at King Edward VI College, Stourbridge
People from Dudley
Kyoto laureates in Advanced Technology
Presidents of the British Computer Society
Turing Award laureates |
472823 | https://en.wikipedia.org/wiki/Syskey | Syskey | The SAM Lock Tool, better known as Syskey (the name of its executable file), is a discontinued component of Windows NT that encrypts the Security Account Manager (SAM) database using a 128-bit RC4 encryption key.
First introduced in the Q143475 hotfix which was included in Windows NT 4.0 SP3, it was removed in Windows 10 1709 because its cryptography is considered insecure by modern standards, and because of its use as part of scams as a form of ransomware. Microsoft officially recommended use of BitLocker disk encryption as an alternative.
History
First introduced in the Q143475 hotfix included in Windows NT 4.0 SP3, Syskey was intended to protect against offline password cracking attacks by preventing the possessor of an unauthorized copy of the SAM file from extracting useful information from it.
Syskey can optionally be configured to require the user to enter the key during boot (as a startup password) or load the key onto removable storage media (e.g., a floppy disk or USB flash drive).
In mid-2017, Microsoft removed syskey.exe from future versions of Windows. Microsoft recommends the use of "Bitlocker or similar technologies instead of the syskey.exe utility."
Security issues
The "Syskey Bug"
In December 1999, a security team from BindView found a security hole in Syskey that made a certain form of offline cryptanalytic attack possible, rendering a brute-force attack feasible: Syskey reuses its RC4 keystream, and a reused stream-cipher keystream can be cancelled out by XORing ciphertexts together.
Microsoft later issued a fix for the problem (dubbed the "Syskey Bug"). The bug affected both Windows NT 4.0 and pre-RC3 versions of Windows 2000.
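Why keystream reuse is fatal can be shown in a few lines of Python (a toy demonstration only; it does not reproduce Syskey's actual file format or key handling). Encrypting two plaintexts under the same RC4 keystream and XORing the ciphertexts cancels the keystream, leaving the XOR of the plaintexts, from which classical techniques such as crib dragging can recover both.

    def rc4_keystream(key: bytes, length: int) -> bytes:
        S = list(range(256))
        j = 0
        for i in range(256):                              # key-scheduling (KSA)
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        out, i, j = bytearray(), 0, 0
        for _ in range(length):                           # output generation (PRGA)
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    p1, p2 = b"password-hash-one", b"password-hash-two"   # hypothetical secrets
    ks = rc4_keystream(b"same key twice", len(p1))        # keystream reused for both
    c1, c2 = xor(p1, ks), xor(p2, ks)
    assert xor(c1, c2) == xor(p1, p2)                     # the keystream cancels out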
Use as ransomware
Syskey is commonly abused by "tech support" scammers to lock victims out of their own computers, in order to coerce them into paying a ransom.
See also
LM hash
pwdump
References
External links
Cryptographic software
Microsoft Windows security technology
Windows administration |
474702 | https://en.wikipedia.org/wiki/Skipjack%20%28cipher%29 | Skipjack (cipher) | In cryptography, Skipjack is a block cipher—an algorithm for encryption—developed by the U.S. National Security Agency (NSA). Initially classified, it was originally intended for use in the controversial Clipper chip. Subsequently, the algorithm was declassified.
History of Skipjack
Skipjack was proposed as the encryption algorithm in a US government-sponsored scheme of key escrow, and the cipher was provided for use in the Clipper chip, implemented in tamperproof hardware. Skipjack is used only for encryption; the key escrow is achieved through the use of a separate mechanism known as the Law Enforcement Access Field (LEAF).
The algorithm was initially secret, and was regarded with considerable suspicion by many for that reason. It was declassified on 24 June 1998, shortly after its basic design principle had been discovered independently by the public cryptography community.
To ensure public confidence in the algorithm, several academic researchers from outside the government were called in to evaluate the algorithm (Brickell et al., 1993). The researchers found no problems with either the algorithm itself or the evaluation process. Moreover, their report gave some insight into the (classified) history and development of Skipjack:
[Skipjack] is representative of a family of encryption algorithms developed in 1980 as part of the NSA suite of "Type I" algorithms... Skipjack was designed using building blocks and techniques that date back more than forty years. Many of the techniques are related to work that was evaluated by some of the world's most accomplished and famous experts in combinatorics and abstract algebra. Skipjack's more immediate heritage dates to around 1980, and its initial design to 1987...The specific structures included in Skipjack have a long evaluation history, and the cryptographic properties of those structures had many prior years of intense study before the formal process began in 1987.
In March 2016, NIST published a draft of its cryptographic standard which no longer certifies Skipjack for US government applications.
Description
Skipjack uses an 80-bit key to encrypt or decrypt 64-bit data blocks. It is an unbalanced Feistel network with 32 rounds.
It was designed to be used in secured phones.
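The unbalanced structure can be sketched in a few lines of Python. The sketch below is a structural illustration only, not the real cipher: it steps four 16-bit words in the style of Skipjack's "Rule A" (one of the two stepping rules the cipher alternates between), and toy_g is an invented placeholder, whereas Skipjack's actual G is a four-round, byte-wide Feistel permutation built from a fixed S-box known as the F-table and the 80-bit key schedule.

    def toy_g(w: int, k: int) -> int:
        # Placeholder round permutation; the real Skipjack G uses the
        # fixed F-table S-box and key bytes, which are omitted here.
        return ((w * 0x9E37 + k) ^ (w >> 7)) & 0xFFFF

    def rule_a(block, round_keys):
        # Unbalanced Feistel stepping: one word is permuted by G, folded
        # into the word at the far end together with the round counter,
        # and the four-word register shifts by one position.
        w1, w2, w3, w4 = block
        for counter, k in enumerate(round_keys, start=1):
            g = toy_g(w1, k)
            w1, w2, w3, w4 = g ^ w4 ^ counter, g, w2, w3
        return w1, w2, w3, w4

    print(rule_a((0x0123, 0x4567, 0x89AB, 0xCDEF), [0x1F, 0x2E, 0x3D, 0x4C]))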
Cryptanalysis
Eli Biham and Adi Shamir discovered an attack against 16 of the 32 rounds within one day of declassification, and (with Alex Biryukov) extended this to 31 of the 32 rounds (but with an attack only slightly faster than exhaustive search) within months using impossible differential cryptanalysis.
A truncated differential attack was also published against 28 rounds of the Skipjack cipher.
A claimed attack against the full cipher was published in 2002, but a later paper with the attack's designer as a co-author clarified in 2009 that no attack on the full 32-round cipher was then known.
In pop culture
An algorithm named Skipjack forms part of the back-story to Dan Brown's 1998 novel Digital Fortress. In Brown's novel, Skipjack is proposed as the new public-key encryption standard, along with a back door secretly inserted by the NSA ("a few lines of cunning programming") which would have allowed them to decrypt Skipjack using a secret password and thereby "read the world's email". When details of the cipher are publicly released, programmer Greg Hale discovers and announces details of the backdoor. In real life there is evidence to suggest that the NSA has added back doors to at least one algorithm; the Dual_EC_DRBG random number algorithm may contain a backdoor accessible only to the NSA.
Additionally, in the Half-Life 2 modification Dystopia, the "encryption" program used in cyberspace apparently uses both Skipjack and Blowfish algorithms.
References
Further reading
External links
SCAN's entry for the cipher
FIPS 185, Escrowed Encryption Standard (EES)
Type 2 encryption algorithms
National Security Agency cryptography |
475485 | https://en.wikipedia.org/wiki/Disk%20array | Disk array | A disk array is a disk storage system which contains multiple disk drives. It is differentiated from a disk enclosure in that an array has cache memory and advanced functionality, like RAID, deduplication, encryption and virtualization.
Components of a disk array include:
Disk array controllers
Cache in form of both volatile random-access memory and non-volatile flash memory.
Disk enclosures for both magnetic rotational hard disk drives and electronic solid-state drives.
Power supplies
Typically a disk array provides increased availability, resiliency, and maintainability by using additional redundant components (controllers, power supplies, fans, etc.), often up to the point where all single points of failure (SPOFs) are eliminated from the design. Additionally, disk array components are often hot-swappable.
Traditionally disk arrays were divided into categories:
Network attached storage (NAS) arrays
Storage area network (SAN) arrays:
Modular SAN arrays
Monolithic SAN arrays
Utility Storage Arrays
Storage virtualization
Primary vendors of storage systems include Coraid, Inc., DataDirect Networks, Dell EMC, Fujitsu, Hewlett Packard Enterprise, Hitachi Data Systems, Huawei, IBM, Infortrend, NetApp, Oracle Corporation, Panasas, Pure Storage and other companies that often act as OEM for the above vendors and do not themselves market the storage components they manufacture.
References
Computer data storage
Fault-tolerant computer systems
RAID |
477556 | https://en.wikipedia.org/wiki/Agnes%20Meyer%20Driscoll | Agnes Meyer Driscoll | Agnes Meyer Driscoll (July 24, 1889 – September 16, 1971), known as "Miss Aggie" or "Madame X", was an American cryptanalyst during both World War I and World War II and was known as "the first lady of naval cryptology."
Early years
Born in Geneseo, Illinois, in 1889, Driscoll moved with her family to Westerville, Ohio, in 1895 where her father, Gustav Meyer, had taken a job teaching music at Otterbein College. In 1909, he donated the family home to the Anti-Saloon League, which had recently moved its headquarters to Westerville. The home was later donated to the Westerville Public Library and is now home to the Anti-Saloon League Museum and the Westerville Local History Center.
Education
Driscoll attended Otterbein College from 1907 to 1909. In 1911, she received a Bachelor of Arts degree from the Ohio State University, having majored in mathematics and physics and studied foreign languages, statistics and music. She was fluent in English, French, German, Latin and Japanese. From her earliest days as a college student, she pursued technical and scientific studies. After graduation, she moved to Amarillo, Texas, where she lived from 1911 to 1918 and worked as director of music at a military academy and, later, chair of the mathematics department at the local high school.
1918–1939
On June 22, 1918, about one year after America entered World War I when America had just started allowing women to enlist, Driscoll enlisted in the United States Navy. She was recruited at the highest possible rank of chief yeoman and after some time in the Postal Cable and Censorship Office she was assigned to the Code and Signal section of the Director of Naval Communications. After the war ended, she made use of an option to continue working at her post as a civilian. Except for a two-year break, when she worked for a private firm, she remained a leading cryptanalyst for the U.S. Navy until 1949.
In 1920, while continuing to work with the Navy, Driscoll studied at the Riverbank Laboratories in Geneva, Illinois, where fellow code breakers, including William F. Friedman and Elizebeth Smith Friedman worked. She is known to have also worked at the American Black Chamber run by Herbert Yardley. This was the first U.S. peace time code-breaking agency which set out to break codes used in diplomatic correspondence.
Her efforts were not limited to manual systems; she was involved also in the emerging machine technology of the time, which was being applied both to making and breaking ciphers. In her first days in the Code and Signal section, she co-developed one of the U.S. Navy's cipher machines, the "Communications Machine". This cipher machine became a standard enciphering device for the Navy for most of the 1920s. In recognition of her work, the United States Congress awarded Driscoll $15,000, which she shared with the widow of the machine's co-inventor, William Gresham.
In 1923, the inventor Edward Hebern, creator of the fledgling Hebern Electric Code Company, was attempting to create a more secure rotor-driven cipher machine. Driscoll left the Navy to test the machine, but it failed to deliver a more secure encryption system. She returned to the Navy in spring 1924.
In August 1924, she married Michael Driscoll, a Washington, D.C. lawyer.
Driscoll, with Lieutenant Joseph Rochefort, broke the Japanese Navy manual code, the Red Book Code, in 1926 after three years of work and helped to break the Blue Book Code in 1930.
In early 1935, Driscoll led the attack on the Japanese M-1 cipher machine (also known to the U.S. as the ORANGE machine), used to encrypt the messages of Japanese naval attaches around the world.
In 1939, she made important inroads into JN-25, the Japanese fleet's operational code used for the most important of messages. She successfully solved the cipher component of the "5-num" system which used number groups as substitutes for words and numbers which was further encrypted with a digital cipher. After that, the Navy could read some standard format messages, such as weather reports, but the bulk of the messages remained to be discovered. This work was later developed and exploited after the attack on Pearl Harbor for the rest of the Pacific War and provided advance warning of the Japanese attack on Midway Atoll. She was unable to take part in this work because, in October 1940, she was transferred to a team working to break the German naval Enigma cipher.
During this period, Driscoll mentored the following naval cryptographers:
Joseph Rochefort
Thomas Dyer
Edwin T. Layton
Joseph Wenger
1940–1959
After starting the work against JN-25, Driscoll was transferred to a new group, which attacked the German Enigma ciphers using a catalog approach (similar to rainbow tables). After almost two years of work on her new assignment, Driscoll and her team were unable to make progress in solving the German device. That was partly due to her unwillingness to use machine support or a mathematical approach, and she also refused the help of British codebreakers from Bletchley Park who had traveled to the United States to advise her. Moreover, the US and UK did not communicate effectively, and her approach, which the British had already tried and judged unlikely to work, proved fruitless. Ultimately this work was superseded by the US–UK cryptologic exchanges of 1942–43.
In 1943, she worked with a team to break the Japanese cipher Coral. It was broken two months later, although Driscoll is said to have had little influence on the project.
In 1945, she appears to have worked on attacking Russian ciphers.
Driscoll was part of the navy contingent that joined the new national cryptologic agencies, firstly the Armed Forces Security Agency in 1949 and then the National Security Agency in 1952. While with the Armed Forces Security Agency she may have contributed to attacking a cipher called Venona.
From 1946 until her retirement from the National Security Agency, she filled a number of positions, but she did not advance to the ranks of senior leadership.
She retired from Armed Forces Security Agency in 1959.
Death
She died in 1971 and is buried in Arlington National Cemetery.
Honors
In 2000, she was inducted into the National Security Agency's Hall of Honor.
In 2017, an Ohio Historical Marker was placed in front of the Meyer home in Westerville honoring Agnes Meyer Driscoll and her achievements, referring to her as "the first lady of naval cryptology."
References
The original version of this article appears to have been copied from the NSA Hall of Honor entry for Agnes Meyer Driscoll, which is in the public domain.
External links
"Agnes Meyer Driscoll", Biographies of Women Mathematicians, Agnes Scott College
– National Security Agency
1889 births
1971 deaths
Ohio State University College of Arts and Sciences alumni
Otterbein University alumni
American cryptographers
20th-century American educators
Burials at Arlington National Cemetery
Intelligence analysts
National Security Agency cryptographers
People from Westerville, Ohio
American women civilians in World War II
American women mathematicians
20th-century American mathematicians
Mathematicians from Washington, D.C.
Mathematicians from Ohio
20th-century women mathematicians
Women cryptographers
Yeoman (F) personnel
20th-century women educators |
479861 | https://en.wikipedia.org/wiki/CFD | CFD |
CFD may refer to:
Science and computing
Computational fluid dynamics, a branch of fluid mechanics using computational methods to predict the behavior of fluid flows
Counterfactual definiteness, the ability, in quantum mechanics, to consider results of unperformed measurements
CFEngine daemon, the process that runs CFEngine
Common fill device, an electronic module used to load cryptographic keys into electronic encryption machines
Constant fraction discriminator, a signal processing component
Complement factor D, an enzyme encoded by the CFD gene
Business
Contract for difference, a type of financial derivative, where two parties exchange the difference between opening and closing value of an underlying asset
Control Flow Diagram, a diagram describing the control flow of a business process, process or program
Cumulative Flow Diagram, an area graph that depicts the quantity of work in a given state
Firefighting organizations
Calgary Fire Department, the fire department of Calgary, Alberta
Charlotte Fire Department, the fire department of Charlotte, North Carolina
Chicago Fire Department, the fire department of Chicago, Illinois
Other
Carbon fee and dividend, an economic mechanism to lower, and ultimately do away with, emissions of climate gases
Cheyenne Frontier Days, an annual rodeo and celebration held in Cheyenne, Wyoming
Christie Front Drive, a second wave indie/emo band from Denver, Colorado
Coulter Field (IATA: CFD), an airport in Bryan, Texas
Compagnie des chemins de fer départementaux, French rail vehicle manufacturer producing narrow gauge rail vehicles, see Captrain France
Congenital Femoral Deficiency, a rare birth defect that affects the pelvis and the proximal femur
Centre for Finance and Development, an interdisciplinary research centre at the Graduate Institute of International and Development Studies |
480015 | https://en.wikipedia.org/wiki/Traffic%20analysis | Traffic analysis | Traffic analysis is the process of intercepting and examining messages in order to deduce information from patterns in communication, which can be performed even when the messages are encrypted. In general, the greater the number of messages observed, or even intercepted and stored, the more can be inferred from the traffic. Traffic analysis can be performed in the context of military intelligence, counter-intelligence, or pattern-of-life analysis, and is a concern in computer security.
Traffic analysis tasks may be supported by dedicated computer software programs. Advanced traffic analysis techniques may include various forms of social network analysis.
Breaking the anonymity of networks
Traffic analysis can be used to break the anonymity of anonymous networks, e.g., Tor. There are two methods of traffic-analysis attack: passive and active.
In the passive traffic-analysis method, the attacker extracts features from the traffic of a specific flow on one side of the network and looks for those features on the other side of the network.
In the active traffic-analysis method, the attacker alters the timings of the packets of a flow according to a specific pattern and looks for that pattern on the other side of the network; the attacker can thereby link the flows on one side to the other side of the network and break its anonymity. It has been shown that, even when timing noise is added to the packets, active traffic-analysis methods exist that are robust against such noise.
In military intelligence
In a military context, traffic analysis is a basic part of signals intelligence, and can be a source of information about the intentions and actions of the target. Representative patterns include:
Frequent communications – can denote planning
Rapid, short communications – can denote negotiations
A lack of communication – can indicate a lack of activity, or completion of a finalized plan
Frequent communication to specific stations from a central station – can highlight the chain of command
Who talks to whom – can indicate which stations are 'in charge' or the 'control station' of a particular network. This further implies something about the personnel associated with each station
Who talks when – can indicate which stations are active in connection with events, which implies something about the information being passed and perhaps something about the personnel/access of those associated with some stations
Who changes from station to station, or medium to medium – can indicate movement, fear of interception
There is a close relationship between traffic analysis and cryptanalysis (commonly called codebreaking). Callsigns and addresses are frequently encrypted, requiring assistance in identifying them. Traffic volume can often be a sign of an addressee's importance, giving hints to pending objectives or movements to cryptanalysts.
Traffic flow security
Traffic-flow security is the use of measures that conceal the presence and properties of valid messages on a network to prevent traffic analysis. This can be done by operational procedures or by the protection resulting from features inherent in some cryptographic equipment. Techniques used include:
changing radio callsigns frequently
encryption of a message's sending and receiving addresses (codress messages)
causing the circuit to appear busy at all times or much of the time by sending dummy traffic
sending a continuous encrypted signal, whether or not traffic is being transmitted. This is also called masking or link encryption.
Traffic-flow security is one aspect of communications security.
COMINT metadata analysis
Communications metadata intelligence, or COMINT metadata, is a term in communications intelligence (COMINT) referring to the concept of producing intelligence by analyzing only the technical metadata; it is thus a prime practical example of traffic analysis in intelligence.
While traditionally information gathering in COMINT is derived from intercepting transmissions, tapping the target's communications and monitoring the content of conversations, metadata intelligence is based not on content but on technical communications data.
Non-content COMINT is usually used to deduce information about the user of a certain transmitter, such as locations, contacts, activity volume, routine and its exceptions.
Examples
For example, if an emitter is known as the radio transmitter of a certain unit, and by using direction finding (DF) tools, the position of the emitter is locatable, the change of locations from one point to another can be deduced, without listening to any orders or reports. If one unit reports back to a command on a certain pattern, and another unit reports on the same pattern to the same command, then the two units are probably related, and that conclusion is based on the metadata of the two units' transmissions, not on the content of their transmissions.
Using all, or as much as possible, of the available metadata is the common way to build up an Electronic Order of Battle (EOB) – a mapping of the different entities in the battlefield and their connections. The EOB could of course be built by tapping all the conversations and trying to understand which unit is where, but using the metadata with an automatic analysis tool enables a much faster and more accurate EOB build-up which, alongside tapping, builds a much better and more complete picture.
World War I
British analysts in World War I noticed that the call sign of German Vice Admiral Reinhard Scheer, commanding the hostile fleet, had been transferred to a land station. Admiral of the Fleet Beatty, ignorant of Scheer's practice of changing callsigns upon leaving harbor, dismissed its importance and disregarded Room 40 analysts' attempts to make the point. The German fleet sortied, and the British were late in meeting them at the Battle of Jutland. If traffic analysis had been taken more seriously, the British might have done better than a "draw".
French military intelligence, shaped by Kerckhoffs's legacy, had erected a network of intercept stations at the Western front in pre-war times. When the Germans crossed the frontier, the French worked out crude means for direction-finding based on intercepted signal intensity. Recording of call-signs and volume of traffic further enabled them to identify German combat groups and to distinguish between fast-moving cavalry and slower infantry.
World War II
In early World War II, the aircraft carrier HMS Glorious was evacuating pilots and planes from Norway. Traffic analysis produced indications that Scharnhorst and Gneisenau were moving into the North Sea, but the Admiralty dismissed the report as unproven. The captain of Glorious did not keep a sufficient lookout, and she was subsequently surprised and sunk. Harry Hinsley, the young Bletchley Park liaison to the Admiralty, later said his reports from the traffic analysts were taken much more seriously thereafter.
During the planning and rehearsal for the attack on Pearl Harbor, very little traffic was passed by radio, subject to interception. The ships, units, and commands involved were all in Japan and in touch by phone, courier, signal lamp, or even flag. None of that traffic was intercepted, and could not be analyzed.
The espionage effort against Pearl Harbor before December did not involve an unusual number of messages; Japanese vessels regularly called in Hawaii and messages were carried aboard by consular personnel. At least one such vessel carried some Japanese Navy Intelligence officers. Such messages could not be analyzed. It has been suggested, however, that the volume of diplomatic traffic to and from certain consular stations might have indicated places of interest to Japan, which might thus have suggested locations on which to concentrate traffic analysis and decryption efforts.
Admiral Nagumo's Pearl Harbor Attack Force sailed under radio silence, with its radios physically locked down. It is unclear if this deceived the U.S.; Pacific Fleet intelligence was unable to locate the Japanese carriers in the days immediately preceding the attack on Pearl Harbor.
The Japanese Navy played radio games to inhibit traffic analysis (see Examples, below) with the attack force after it sailed in late November. Radio operators normally assigned to carriers, with a characteristic Morse Code "fist", transmitted from inland Japanese waters, suggesting the carriers were still near Japan.
Operation Quicksilver, part of the British deception plan for the Invasion of Normandy in World War II, fed German intelligence a combination of true and false information about troop deployments in Britain, causing the Germans to deduce an order of battle which suggested an invasion at the Pas-de-Calais instead of Normandy. The fictitious divisions created for this deception were supplied with real radio units, which maintained a flow of messages consistent with the deception.
In computer security
Traffic analysis is also a concern in computer security. An attacker can gain important information by monitoring the frequency and timing of network packets. A timing attack on the SSH protocol can use timing information to deduce information about passwords since, during an interactive session, SSH transmits each keystroke as a separate message. The time between keystroke messages can be studied using hidden Markov models. Song et al. claim that this can recover a password fifty times faster than a brute-force attack.
Onion routing systems are used to gain anonymity. Traffic analysis can be used to attack anonymous communication systems like the Tor anonymity network. Adam Back, Ulf Möller and Anton Stiglic presented traffic-analysis attacks against anonymity-providing systems. Steven J. Murdoch and George Danezis from the University of Cambridge presented research showing that traffic analysis allows adversaries to infer which nodes relay the anonymous streams. This reduces the anonymity provided by Tor. They have shown that otherwise unrelated streams can be linked back to the same initiator.
Remailer systems can also be attacked via traffic analysis. If a message is observed going to a remailing server, and an identical-length (if now anonymized) message is seen exiting the server soon after, a traffic analyst may be able to (automatically) connect the sender with the ultimate receiver. Variations of remailer operations exist that can make traffic analysis less effective.
Countermeasures
It is difficult to defeat traffic analysis without both encrypting messages and masking the channel. When no actual messages are being sent, the channel can be masked by sending dummy traffic, similar to the encrypted traffic, thereby keeping bandwidth usage constant. "It is very hard to hide information about the size or timing of messages. The known solutions require Alice to send a continuous stream of messages at the maximum bandwidth she will ever use...This might be acceptable for military applications, but it is not for most civilian applications." The military-versus-civilian problem applies in situations where the user is charged for the volume of information sent.
Even for Internet access, where there is no per-packet charge, ISPs make the statistical assumption that connections from user sites will not be busy 100% of the time. The user cannot simply increase the bandwidth of the link, since masking would fill that as well. If masking, which can often be built into end-to-end encryptors, becomes common practice, ISPs will have to change their traffic assumptions.
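The shape of such link masking is simple to sketch (an illustration under invented parameters: PACKET_SIZE, TICK_SECONDS and the send callback are assumptions, not any particular product's interface). One fixed-size packet goes out per clock tick whether or not real data is queued, so an observer sees a constant rate and size either way; in practice the frames would also be encrypted so dummies are indistinguishable from genuine traffic.

    import itertools, queue, time

    PACKET_SIZE = 512        # every emission has the same size
    TICK_SECONDS = 0.05      # and goes out at the same constant rate

    def masked_sender(outbound: queue.Queue, send) -> None:
        # Emit one fixed-size packet per tick: real data if any is queued
        # (assumed to arrive in chunks of at most PACKET_SIZE), dummy
        # filler otherwise, so volume and timing reveal nothing.
        for _ in itertools.count():
            try:
                payload = outbound.get_nowait()
            except queue.Empty:
                payload = b""                      # dummy traffic
            send(payload.ljust(PACKET_SIZE, b"\x00"))
            time.sleep(TICK_SECONDS)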
See also
Chatter (signals intelligence)
Data warehouse
ECHELON
Electronic order of battle
ELINT
Pattern-of-life analysis
SIGINT
Social network analysis
Telecommunications data retention
Zendian Problem
References
FMV Sweden
Multi-source data fusion in NATO coalition operations
Further reading
http://www.cyber-rights.org/interception/stoa/interception_capabilities_2000.htm — a study by Duncan Campbell
https://web.archive.org/web/20070713232218/http://www.onr.navy.mil/02/baa/docs/07-026_07_026_industry_briefing.pdf
Selected Papers in Anonymity — on Free Haven
Cryptographic attacks
Intelligence analysis
Military communications
Telecommunications |
480685 | https://en.wikipedia.org/wiki/Paillier%20cryptosystem | Paillier cryptosystem | The Paillier cryptosystem, invented by and named after Pascal Paillier in 1999, is a probabilistic asymmetric algorithm for public key cryptography. The problem of computing n-th residue classes is believed to be computationally difficult. The decisional composite residuosity assumption is the intractability hypothesis upon which this cryptosystem is based.
The scheme is an additive homomorphic cryptosystem; this means that, given only the public key and the encryption of m1 and m2, one can compute the encryption of m1 + m2.
Algorithm
The scheme works as follows:
Key generation
Choose two large prime numbers p and q randomly and independently of each other such that gcd(pq, (p − 1)(q − 1)) = 1. This property is assured if both primes are of equal length.
Compute n = pq and λ = lcm(p − 1, q − 1). lcm means Least Common Multiple.
Select random integer g where g ∈ Z*_(n^2).
Ensure n divides the order of g by checking the existence of the following modular multiplicative inverse: μ = (L(g^λ mod n^2))^(−1) mod n,
where the function L is defined as L(x) = (x − 1) / n.
Note that the notation a / b here does not denote the modular multiplication of a times the modular multiplicative inverse of b but rather the quotient of a divided by b, i.e., the largest integer value v ≥ 0 to satisfy the relation a ≥ vb.
The public (encryption) key is (n, g).
The private (decryption) key is (λ, μ).
If using p, q of equivalent length, a simpler variant of the above key generation steps would be to set g = n + 1, λ = φ(n), and μ = φ(n)^(−1) mod n, where φ(n) = (p − 1)(q − 1).
Encryption
Let m be a message to be encrypted, where 0 ≤ m < n.
Select random r where 0 < r < n and gcd(r, n) = 1.
Compute ciphertext as: c = g^m · r^n mod n^2.
Decryption
Let c be the ciphertext to decrypt, where c ∈ Z*_(n^2).
Compute the plaintext message as: m = L(c^λ mod n^2) · μ mod n.
As the original paper points out, decryption is "essentially one exponentiation modulo n^2."
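The whole scheme fits in a short Python sketch (for exposition only: the primes below are toy-sized and insecure, and the function names are chosen here for illustration). It uses the simpler key-generation variant g = n + 1 described above.

    from math import gcd
    import secrets

    def keygen(p, q):
        # Simpler variant for primes of equal length:
        # g = n + 1, lambda = phi(n), mu = phi(n)^-1 mod n.
        n = p * q
        lam = (p - 1) * (q - 1)                    # phi(n)
        mu = pow(lam, -1, n)                       # modular inverse (Python 3.8+)
        return (n, n + 1), (lam, mu)               # public (n, g), private (lam, mu)

    def L(x, n):
        return (x - 1) // n                        # integer quotient, not modular division

    def encrypt(pub, m):
        n, g = pub
        r = secrets.randbelow(n - 1) + 1           # random 0 < r < n
        while gcd(r, n) != 1:
            r = secrets.randbelow(n - 1) + 1
        return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

    def decrypt(pub, priv, c):
        n, _ = pub
        lam, mu = priv
        return (L(pow(c, lam, n * n), n) * mu) % n

    pub, priv = keygen(293, 433)                   # toy primes; real keys use ~1024-bit primes
    assert decrypt(pub, priv, encrypt(pub, 42)) == 42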
Homomorphic properties
A notable feature of the Paillier cryptosystem is its homomorphic properties along with its non-deterministic encryption (see Electronic voting in Applications for usage). As the encryption function is additively homomorphic, the following identities can be described:
Homomorphic addition of plaintexts
The product of two ciphertexts will decrypt to the sum of their corresponding plaintexts: D(E(m1, r1) · E(m2, r2) mod n^2) = m1 + m2 mod n.
The product of a ciphertext with g raised to a plaintext will decrypt to the sum of the corresponding plaintexts: D(E(m1, r1) · g^(m2) mod n^2) = m1 + m2 mod n.
Homomorphic multiplication of plaintexts
A ciphertext raised to the power of a plaintext will decrypt to the product of the two plaintexts: D(E(m1, r1)^(m2) mod n^2) = m1 · m2 mod n.
More generally, a ciphertext raised to a constant k will decrypt to the product of the plaintext and the constant: D(E(m1, r1)^k mod n^2) = k · m1 mod n.
However, given the Paillier encryptions of two messages there is no known way to compute an encryption of the product of these messages without knowing the private key.
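Continuing the illustrative sketch from the Decryption section (keygen, encrypt and decrypt are the invented helper names used there), the additive identities can be checked directly:

    pub, priv = keygen(293, 433)
    n, _ = pub
    c1, c2 = encrypt(pub, 15), encrypt(pub, 27)
    assert decrypt(pub, priv, (c1 * c2) % (n * n)) == 42    # E(m1)*E(m2)  ->  m1 + m2
    assert decrypt(pub, priv, pow(c1, 3, n * n)) == 45      # E(m1)^k      ->  k*m1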
Background
The Paillier cryptosystem exploits the fact that certain discrete logarithms can be computed easily.
For example, by the binomial theorem, writing C(x, k) for the binomial coefficient,
(1 + n)^x = Σ_(k=0..x) C(x, k) · n^k = 1 + nx + C(x, 2) · n^2 + higher powers of n.
This indicates that:
(1 + n)^x ≡ 1 + nx (mod n^2).
Therefore, if:
y = (1 + n)^x mod n^2,
then
x ≡ (y − 1) / n (mod n).
Thus:
x = L(y mod n^2),
where the function L is defined as L(u) = (u − 1) / n (quotient of integer division).
Semantic security
The original cryptosystem as shown above does provide semantic security against chosen-plaintext attacks (IND-CPA). The ability to successfully distinguish the challenge ciphertext essentially amounts to the ability to decide composite residuosity. The so-called decisional composite residuosity assumption (DCRA) is believed to be intractable.
Because of the aforementioned homomorphic properties however, the system is malleable, and therefore does not enjoy the highest level of semantic security, protection against adaptive chosen-ciphertext attacks (IND-CCA2).
Usually in cryptography the notion of malleability is not seen as an "advantage," but under certain applications such as secure electronic voting and threshold cryptosystems, this property may indeed be necessary.
Paillier and Pointcheval however went on to propose an improved cryptosystem that incorporates the combined hashing of message m with random r. Similar in intent to the Cramer–Shoup cryptosystem, the hashing prevents an attacker, given only c, from being able to change m in a meaningful way. Through this adaptation the improved scheme can be shown to be IND-CCA2 secure in the random oracle model.
Applications
Electronic voting
Semantic security is not the only consideration. There are situations under which malleability may be desirable. The above homomorphic properties can be utilized by secure electronic voting systems. Consider a simple binary ("for" or "against") vote. Let m voters cast a vote of either 1 (for) or 0 (against). Each voter encrypts their choice before casting their vote. The election official takes the product of the m encrypted votes and then decrypts the result, obtaining the tally t, which is the sum of all the votes. The election official then knows that t people voted for and m − t people voted against. The role of the random r ensures that two equivalent votes will encrypt to the same value only with negligible likelihood, hence ensuring voter privacy.
Electronic cash
Another feature named in the paper is the notion of self-blinding. This is the ability to change one ciphertext into another without changing the content of its decryption. This has application to the development of ecash, an effort originally spearheaded by David Chaum. Imagine paying for an item online without the vendor needing to know your credit card number, and hence your identity. The goal in both electronic cash and electronic voting is to ensure the e-coin (likewise e-vote) is valid, while at the same time not disclosing the identity of the person with whom it is currently associated.
See also
The Naccache–Stern cryptosystem and the Okamoto–Uchiyama cryptosystem are historical antecedents of Paillier.
The Damgård–Jurik cryptosystem is a generalization of Paillier.
References
Notes
External links
The Homomorphic Encryption Project implements the Paillier cryptosystem along with its homomorphic operations.
Encounter: an open-source library providing an implementation of Paillier cryptosystem and a cryptographic counters construction based on the same.
python-paillier a library for Partially Homomorphic Encryption in Python, including full support for floating point numbers.
The Paillier cryptosystem interactive simulator demonstrates a voting application.
An interactive demo of the Paillier cryptosystem.
A proof-of-concept Javascript implementation of the Paillier cryptosystem with an interactive demo.
A googletechtalk video on voting using cryptographic methods.
A Ruby implementation of Paillier homomorphic addition and a zero-knowledge proof protocol (documentation)
Public-key encryption schemes |
481515 | https://en.wikipedia.org/wiki/List%20of%20Nokia%20products | List of Nokia products | The following is a list of products branded by Nokia.
Current products and services
Products by Nokia Technologies
Wi-Fi routers
Nokia WiFi Beacon 1
Nokia WiFi Beacon 3
Digital audio
Nokia OZO Audio
Smart TVs
Nokia markets smart TVs that run on Android TV.
Nokia Smart TV 55 inch
Nokia Smart TV 43 inch (to be launched on June 4, 2020)
Products by Nokia Networks
Nokia Networks is a multinational data networking and telecommunications equipment company headquartered in Espoo, Finland and wholly owned subsidiary of Nokia Corporation.
The list of products is available here: Nokia Networks - Solutions, Services & Products
HMD Global products
HMD Global develops devices under the Nokia brand. The company has signed a deal with Nokia allowing it to use the Nokia brand for its devices.
Smartphones (Android One)
Nokia 9 PureView
Nokia 8.3 5G
Nokia 8.1 (released in China as Nokia X7)
Nokia 8 Sirocco
Nokia 8
Nokia 7.2
Nokia X71 (available in Taiwan & China only)
Nokia 7.1
Nokia 7 Plus
Nokia 7 (available in China only)
Nokia 6.2
Nokia 6.1 Plus (released in China as Nokia X6)
Nokia 6.1
Nokia 6
Nokia 5.4
Nokia 5.3
Nokia 5.1 Plus (released in China as Nokia X5)
Nokia 5.1
Nokia 5
Nokia 4.2
Nokia 3.4
Nokia 3.2
Nokia 3.1 Plus
Nokia 3.1
Nokia 3
Nokia 2.4
Nokia 2.3
Nokia 2.2
Nokia 2.1
Nokia 2
Nokia 1.4
Nokia 1.3
Nokia 1 Plus
Nokia 1
Nokia X20
Nokia X10
Nokia G20
Nokia G10
Nokia C30
Nokia C20
Nokia C10
Nokia C5 Endi
Nokia C3
Nokia C2 Tava / Tennen
Nokia C2
Nokia C1 Plus
Nokia C1
Tablets
Nokia T20
Feature phones
Nokia 8110 4G
Nokia 8000 4G
Nokia 6310 (2021)
Nokia 6300 4G
Nokia 5310 (2020)
Nokia 3310 (2017)
Nokia 2720 Flip
Nokia 800 Tough
Nokia 230
Nokia 225 4G
Nokia 220 4G
Nokia 216
Nokia 215 4G
Nokia 210 (2019)
Nokia 150 (2020)
Nokia 150
Nokia 130 (2017)
Nokia 125
Nokia 110 4G
Nokia 110 (2019)
Nokia 106 (2018)
Nokia 105 4G
Nokia 105 (2019)
Nokia 105 (2017)
Nokia 105 (2015)
Operating systems
Series 20
Series 30
Series 30+
Series 40
Series 60
Series 80
Series 90
Smart Feature OS
KaiOS
Past products and services
Mobile phones
Note:
Phones in boldface are smartphones
Status: D = discontinued; P = in production; C = cancelled
The Mobira/Nokia series (1982–1990)
The earliest phones produced by Nokia. These all use 1G networks.
Original series (1992–1999)
The last 1G phones by Nokia.
4-digit series (1992–2010, 2017–Present)
Nokia 1xxx – Ultrabasic series (1992–2010)
The Nokia 1000 series include Nokia's most affordable phones. They are mostly targeted towards developing countries and users who do not require advanced features beyond making calls and SMS text messages, alarm clock, reminders, etc.
Out of all of these phones, the Nokia 1680 classic has the most features.
Nokia 2xxx – Basic series (1994–2010, 2019–Present)
Like the 1000 series, the 2000 series are entry-level phones. However, the 2000 series generally contain more advanced features than the 1000 series; many 2000 series phones feature color screens and some feature cameras, Bluetooth and even A-GPS, GPS such as in the case of the Nokia 2710.
Nokia 3xxx – Expression series (1997–2009, 2017–Present)
The Nokia 3000 series are mostly mid-range phones targeted towards the youth market. Many of these models included youthful designs to appeal to the teen market, unlike the 6000-series which were more conservatively styled to appeal to business users, and the 7000-series which were usually more feminine and mature in design to appeal to fashionable women.
Nokia 4xxx
The Nokia 4000 series was officially skipped as a sign of deference from Nokia towards East Asian customers.
Nokia 5xxx – Active series (1998–2010, 2020–Present)
The Nokia 5000 series is similar in features to the 3000 series, but often contains more features geared toward active individuals. Many of the 5000 series phones feature a rugged construction or contain extra features for music playback.
Nokia 6xxx – Classic Business series (1995–2010, 2020–present)
The Nokia 6000 series is Nokia's largest family of phones. It consists mostly of mid-range to high-end phones containing a high number of features. The 6000 series is notable for their conservative, unisex designs, which make them popular among business users.
Nokia 6136 UMA is the first mobile phone to include Unlicensed Mobile Access. Nokia 6131 NFC is the first mobile phone to include Near Field Communication.
Nokia 7xxx – Fashion and Experimental series (1999–2010)
The Nokia 7000 series is a family of Nokia phones with two uses. Most phones in the 7000 series are targeted towards fashion-conscious users, often with feminine styling to appeal to women. Some phones in this family also test features. The 7000 series are considered to be a more consumer-oriented family of phones when contrasted to the business-oriented 6000 series. The family is also distinguished from the 3000-series phones as being more mature and female-oriented, while the 3000-series was largely targeted towards the youth market.
The 7110 was the first Nokia phone with a WAP browser. WAP was significantly hyped during the 1998–2000 Internet boom; however, WAP did not meet these expectations and uptake was limited. Another industry first was the flap, which slid from beneath the phone with a push of the release button, although the cover was not very durable. The 7110 was also the only phone to feature a navi-roller key.
The 7250i was a slightly improved version of the Nokia 7250. It includes XHTML and OMA Forward lock digital rights management. The phone has exactly the same design as the 7250. This phone is far more popular than the 7250 and has been made available on pre-paid packages and therefore it is very popular amongst youths in the UK and other European countries.
The 7510 Supernova was a phone exclusive to T-Mobile USA. Only some units of this model have Wi-Fi chips with UMA. The Wi-Fi adapter on this phone supports up to WPA2 encryption if present. This phone uses Xpress-On Covers.
The 7650 was the first Series 60 smartphone from Nokia. It was quite basic compared to later smartphones: it did not have an MMC slot, but it had a camera.
The 7610 was Nokia's first smartphone featuring a megapixel camera (1,152x864 pixels), and it was targeted towards the fashion-conscious individual. End-users can also use the 7610 with Nokia Lifeblog. Other pre-installed applications include the Opera browser and Kodak Photo Sharing. It is notable for its looks, having opposite corners rounded off. It comes with a 64 MB Reduced Size MMC. The main CPU is an ARM-compatible chip (ARMv4T architecture) running at 123 MHz.
The 7710 was a touch screen phone with a 640x320 screen.
Nokia 8xxx – Premium series (1996–2007, 2018–present)
This series is characterized by ergonomics and attractiveness. The internals of the phones are similar to those in other series and on that level offer nothing particularly different; however, the physical handset itself offers a level of functionality which appeals to users who focus on ergonomics. The front slide keypad covers offered a pseudo-flip at a time when Nokia was unwilling to make a true flip phone. The materials used increased the cost, and hence the exclusivity, of these handsets.
The exceptions to the rule (there are many in different series) are the 82xx and 83xx, which were very small and light handsets.
Nokia 9xxx – Communicator series (1996–2007)
The Nokia 9000 series was reserved for the Communicator series, but the last Communicator, the E90 Communicator, was an Eseries phone.
Lettered series: C/E/N/X (2005–2011)
Cseries (2010–2011)
The Nokia Cseries is an affordable series optimized for social networking and sharing. The range includes a mix of feature phones running Series 40 and some smartphones running Symbian.
C1-00 and C2-00 are dual SIM phones, but the Nokia C1-00 cannot use both SIM cards at the same time.
Eseries (2006–2011)
The Nokia Eseries is an enterprise-class series with business-optimized products. They are all smartphones and run on Symbian.
Nseries (2005–2011)
The Nseries are highly advanced smartphones, with strong multimedia and connectivity features and as many other features as possible into one device.
Note:
Although part of the Nseries, the Nokia N800 and N810 Internet Tablets did not include phone functionality. See the Internet Tablets section.
The N950 was meant to be the N9-00 with the old N9 'Lankku' being N9-01, however the N9-00 model number was used for the all touch 'Lankku' with the original design being the MeeGo developer-only N950.
Xseries (2009–2011)
The Nokia Xseries targets a young audience with a focus on music and entertainment. Like the Cseries, it is a mix of both Series 30/40/ feature phones and Series 60/Symbian smartphones.
3-digit series Symbian phones (2011–2012)
Since the Nokia 500, Nokia has changed the naming rule for Symbian^3 phones.
Worded series: Asha/Lumia/X (2011–2014)
Asha (2011–2014)
The Nokia Asha series is an affordable series optimized for social networking and sharing, meant for first time users. All phones run Series 40 except Asha 230 and 50x phones, which run on the Nokia Asha platform.
Lumia (2011–2014)
Lumia is a series of smartphones running Windows Phone. It also includes the Nokia Lumia 2520, a Windows RT-powered tablet computer. The series was sold to Microsoft in 2014, which branded these products under the Microsoft name.
Devices with Microsoft branding are not listed here.
X Family (2014)
The Nokia X family is a range of Android smartphones from Nokia. These were the first ever Nokia phones to run on Google's Android OS.
3-digit series feature phones (2011–)
These are entry-level, classic mobile phones with relatively long battery life. The series was sold in 2014 to Microsoft, which continued branding these products under Nokia. Microsoft sold this series to HMD Global in 2016, which also continues branding these products under Nokia.
Other phones
N-Gage – Mobile gaming devices (2003–2004)
PCMCIA Cardphones (2001–2003)
Concept phones
Nokia developed a phone concept, never realised as a working device, in the 2008 Nokia Morph.
Tablets
Nokia N1
VR cameras
Nokia OZO
Health
The Digital Health division of Nokia Technologies bought the following personal health devices from Withings in 2016. The division was sold back to Withings in 2018.
Nokia Steel
Nokia Steel HR
Nokia Body/Body+/Body Cardio
Nokia Go
Nokia Sleep
Nokia BPM/BPM+
Nokia Thermo
Nokia Home
Services
After the sale of its mobile devices and services division to Microsoft, all of the below services were either discontinued or spun off.
Consumer services
Accounts & SSO
Club Nokia
Maliit
Mobile Web Server
MOSH
Nokia Accessibility
Nokia Browser for Symbian
Nokia Car App
Nokia Care
Nokia Conference
Nokia Business Center
Nokia Download!
Nokia Life
Nokia Lifeblog
Nokia Mail and Nokia Chat
Nokia MixRadio
Nokia Motion Data
Nokia Motion Monitor
Nokia network monitor
Nokia Pure
Nokia Sensor
Nokia Sports Tracker
Nokia Sync
Nokia Xpress
OFono
OTA bitmap
Ovi
Plazes
Smart Messaging
Twango
WidSets
Nokia imaging apps
Nokia Camera
Nokia Cinemagraph
Nokia Creative Studio
Nokia Glam Me
Nokia Panorama
Nokia Refocus
Nokia Share
Nokia Smart Shoot
Nokia Storyteller
Nokia PhotoBeamer
Nokia Play To
Nokia Storyteller
Nokia Video Director
Nokia Video Trimmer
Nokia Video Tuner
Nokia Video Upload
Navigation apps
Boston University JobLens
HERE.com
HERE Maps
HERE Map Creator
HERE Drive
HERE Transit
HERE City Lens
Nokia Internships Lens
Nokia JobLens
Nokia Point & Find
Desktop apps
Nokia Software Recovery Tool
Nokia Software Updater
Nokia Suite
Nokia PC Suite
Humanitarian services
Nokia Data Gathering
Nokia Education Delivery
Nokia Mobile-Mathematics
Developer tools
Nokia DVLUP
Python
Websites
Dopplr
Nokia Beta Labs
Nokia Conversations
Nokia Discussions
Noknok.tv
Video gaming
Bounce
N-Gage
Nokia Climate Mission
Nokia Climate Mission 3D
Nokia Game
Nokia Modern Mayor
Snake
Space Impact
Operating systems
Series 30
Series 30+
Series 40
Symbian
S60, formerly Series 60
Series 80
Series 90
Linux-based
Maemo
MeeGo
Nokia Asha platform
Nokia X platform
Security
IP appliances run the Nokia IPSO FreeBSD-based operating system and work with Check Point's firewall and VPN products.
Nokia IP 40
Nokia IP 130
Nokia IP 260
Nokia IP 265
Nokia IP 330
Nokia IP 350
Nokia IP 380
Nokia IP 390 (EU Only)
Nokia IP 530
Nokia IP 710
Nokia IP 1220
Nokia IP 1260
Nokia IP 2250
Nokia Horizon Manager
Nokia Network Voyager
In 2004, Nokia began offering their own SSL VPN appliances based on IP Security Platforms and the pre-hardened Nokia IPSO operating system. Client integrity scanning and endpoint security technology was licensed from Positive Networks.
Nokia 50s
Nokia 105s
Nokia 500s
Internet Tablets
Nokia's Internet Tablets were designed for wireless Internet browsing and e-mail functions and did not include phone capabilities. The Nokia N800 and N810 Internet Tablets were also marketed as part of Nseries. See the Nseries section.
Nokia 770 Internet Tablet
Nokia N800 Internet Tablet
Nokia N810 Internet Tablet
Nokia N810 WiMAX Edition
The Nokia N900, the successor to the N810, has phone capabilities and is not officially marketed as an Internet Tablet, but rather as an actual Nseries smartphone.
ADSL modems
Nokia M10
Nokia M11
Nokia M1122
Nokia MW1122
Nokia M5112
Nokia M5122
Nokia Ni200
Nokia Ni500
GPS products
Nokia GPS module LAM-1 for 9210(i)/9290 Communicator
Nokia 5140 GPS Cover
Nokia Bluetooth GPS module LD-1W
Nokia Bluetooth GPS module LD-3W*
Nokia Bluetooth GPS Module LD-4W
Navigation Kit for Nokia 770 Internet Tablet, including LD-3W GPS receiver and software
Nokia 330 Navigator, that supports an external TMC module.
Nokia 500 Navigator
WLAN products
Nokia A020 WLAN access point
Nokia A021 WLAN access point/router
Nokia A032 WLAN access point
Nokia C020 PC card IEEE 802.11 2 Mbit/s, DSSS (produced by Samsung)
Nokia C021 PC card, with external antenna
Nokia C110 PC card IEEE 802.11b 11 Mbit/s
Nokia C111 PC card, with external antennas
Nokia MW1122 ADSL modem with wireless interface
Nokia D211 WLAN/GPRS PC card
Digital television
Nokia DBox
Nokia DBox2
Nokia Mediamaster 9200 S
Nokia Mediamaster 9500 S
Nokia Mediamaster 9500 C
Nokia Mediamaster 9600 S
Nokia Mediamaster 9600 C
Nokia Mediamaster 9610 S
Nokia Mediamaster 9800 S
Nokia Mediamaster 9850 T
Nokia Mediamaster 9900 S
Nokia Mediamaster 110 T
Nokia Mediamaster 210 T
Nokia Mediamaster 221 T
Nokia Mediamaster 230 T
Nokia Mediamaster 260 T
Nokia Mediamaster 260 C
Nokia Mediamaster 310 T
Military communications and equipment
Nokia has developed the Sanomalaitejärjestelmä ("Message device system") for the Finnish Defence Forces. It includes:
Sanomalaite M/90
Partiosanomalaite
Keskussanomalaite
For the Finnish Defence forces Nokia manufactured also:
AN/PRC-77 portable combat-net radio transceiver (under licence, designated LV 217)
M61 gas mask
Telephone switches
Nokia DX 200
Nokia DX 220
Nokia DX 220 Compact
Computers
Minicomputers
Nokia designed and manufactured a series of minicomputers starting in the 1970s. These included the Mikko series of minicomputers intended for use in the finance and banking industry, and the MPS-10 minicomputer that was extensively based on the Ada programming language and was widely used in major Finnish banks in the late 1980s.
Personal computers
In the 1980s, Nokia's personal computer division Nokia Data manufactured a series of personal computers by the name of MikroMikko. The MikroMikko series included the following products and product series.
Nokia's PC division was sold to the British computer company ICL in 1991. In 1990, Fujitsu had acquired 80% of ICL plc, which throughout the decade became wholly part of Fujitsu. Personal computers and servers were marketed under the ICL brand; the Nokia MikroMikko line of compact desktop computers continued to be produced at the Kilo factories in Espoo, Finland. Components, including motherboards and Ethernet network adapters, were manufactured locally until production was moved to Taiwan. Internationally the MikroMikko line was marketed by Fujitsu as the ErgoPro.
In 1999, Fujitsu Siemens Computers was formed as a joint venture between Fujitsu Computers Europe and Siemens Computer Systems, wherein all of ICL's hardware business (except VME mainframes) was absorbed into the joint venture. On 1 April 2009, Fujitsu bought out Siemens' share of the joint venture, and Fujitsu Siemens Computers became Fujitsu Technology Solutions. Fujitsu continues to manufacture computers in Europe, including PC mainboards developed and manufactured in-house.
Mini laptops
On 24 August 2009, Nokia announced that it would be re-entering the PC business with a high-end mini laptop called the Nokia Booklet 3G. It was discontinued a few years later.
Computer displays
Nokia produced CRT and early TFT LCD Multigraph displays for PCs and larger systems applications. The Nokia Display Products branded business was sold to ViewSonic in 2000.
Others
During the 1990s, Nokia divested itself of the industries listed below to focus solely on telecommunications.
Aluminium
Communications cables
Capacitors
Chemicals
Electricity generation machinery
Footwear (including Wellington boots)
Military technology and equipment
Paper products
Personal computers
Plastics
Robotics
Televisions
Tires (car and bicycle)
See also
Nokian Footwear
Nokian Tyres
Nokia phone series
History of mobile phones
List of Motorola products
List of Sony Ericsson products
References
External links
Nokia – Phone Software Update
Nokia
Nokia services
Nokia
Nokia
Lists of mobile computers |
481582 | https://en.wikipedia.org/wiki/NTRUEncrypt | NTRUEncrypt | The NTRUEncrypt public key cryptosystem, also known as the NTRU encryption algorithm, is an NTRU lattice-based alternative to RSA and elliptic curve cryptography (ECC) and is based on the shortest vector problem in a lattice (which is not known to be breakable using quantum computers).
It relies on the presumed difficulty of factoring certain polynomials in a truncated polynomial ring into a quotient of two polynomials having very small coefficients. Breaking the cryptosystem is strongly related, though not equivalent, to the algorithmic problem of lattice reduction in certain lattices. Careful choice of parameters is necessary to thwart some published attacks.
Since both encryption and decryption use only simple polynomial multiplication, these operations are very fast compared to other asymmetric encryption schemes, such as RSA, ElGamal and elliptic curve cryptography. However, NTRUEncrypt has not yet undergone a comparable amount of cryptographic analysis in deployed form.
A related algorithm is the NTRUSign digital signature algorithm.
Specifically, NTRU operations are based on objects in the truncated polynomial ring R = Z[X]/(X^N − 1) with convolution multiplication, where all polynomials in the ring have integer coefficients and degree at most N − 1:
a = a_0 + a_1·X + a_2·X^2 + ... + a_(N−1)·X^(N−1)
NTRU is actually a parameterised family of cryptosystems; each system is specified by three integer parameters (N, p, q), which represent the maximal degree N − 1 for all polynomials in the truncated ring R, a small modulus and a large modulus, respectively, where it is assumed that N is prime, q is always larger than p, and p and q are coprime; and four sets of polynomials L_f, L_g, L_m and L_r (a polynomial part of the private key, a polynomial for generation of the public key, the message and a blinding value, respectively), all of degree at most N − 1.
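The ring arithmetic itself is small enough to sketch in Python (for exposition only; the helper names conv_mult and center_lift are chosen here, and no attempt is made at secure parameters or constant-time code). Multiplication in R is the cyclic convolution of coefficient lists, and reduction modulo p or q is taken with coefficients centered around zero, as the decryption procedure described below requires.

    def conv_mult(a, b, N, modulus):
        # Product in Z[X]/(X^N - 1): cyclic convolution of the two
        # coefficient lists, reduced coefficient-wise mod `modulus`.
        c = [0] * N
        for i in range(N):
            for j in range(N):
                c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % modulus
        return c

    def center_lift(a, modulus):
        # Pick each coefficient representative in (-modulus/2, modulus/2].
        return [x - modulus if x > modulus // 2 else x for x in a]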
History
The NTRUEncrypt Public Key Cryptosystem is a relatively new cryptosystem.
The first version of the system, which was simply called NTRU, was developed around 1996 by three mathematicians (Jeffrey Hoffstein, Jill Pipher, and Joseph H. Silverman). In 1996 these mathematicians together with Daniel Lieman founded the NTRU Cryptosystems, Inc. and were given a patent (now expired) on the cryptosystem.
In the years since, people have been working on improving the cryptosystem. Since its first presentation, changes have been made to improve both the performance of the system and its security. Most performance improvements were focused on speeding up the process. Literature up to 2005 describes decryption failures of NTRUEncrypt. As for security, since the first version of NTRUEncrypt, new parameters have been introduced that seem secure for all currently known attacks and a reasonable increase in computing power.
The system is now fully accepted into the IEEE P1363 standards under the specifications for lattice-based public-key cryptography (IEEE P1363.1).
Because of the speed of the NTRUEncrypt public key cryptosystem (see http://bench.cr.yp.to for benchmarking results) and its low memory use (see below), it can be used in applications such as mobile devices and smart cards.
In April 2011, NTRUEncrypt was accepted as an X9.98 standard for use in the financial services industry.
Public key generation
Sending a secret message from Alice to Bob requires the generation of a public and a private key. The public key is known by both Alice and Bob and the private key is only known by Bob. To generate the key pair, two polynomials f and g, with degree at most N-1 and with coefficients in {-1,0,1}, are required. They can be considered as representations of the residue classes of polynomials modulo X^N - 1 in R. The polynomial f must satisfy the additional requirement that its inverses modulo q and modulo p (computed using the Euclidean algorithm) exist, which means that
f·f_p = 1 (mod p) and f·f_q = 1 (mod q) must hold.
So when the chosen f is not invertible, Bob has to go back and try another f.
Both f and f_p are Bob's private key. The public key h is generated by computing the quantity
h = p·f_q·g (mod q).
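Given the conv() routine from the sketch above and a precomputed inverse f_q, computing the public key is one ring multiplication plus a scaling by p. This is an illustrative fragment, not a complete key generator: the polynomial inversion needed to obtain f_q (and f_p) is omitted:
#define P 3
/* h = p * fq * g (mod q); fq is f's inverse mod q, computed elsewhere */
static void ntru_pubkey(const int fq[N], const int g[N], int h[N])
{
    conv(fq, g, h, Q);            /* h = fq * g (mod q) */
    for (int i = 0; i < N; i++)
        h[i] = (h[i] * P) % Q;    /* scale by the small modulus p */
}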
Example:
In this example the parameters (N, p, q) will have the values N = 11, p = 3 and q = 32 and therefore the polynomials f and g are of degree at most 10. The system parameters (N, p, q) are known to everybody. The polynomials are randomly chosen, so suppose they are represented by
Using the Euclidean algorithm, the inverses of f modulo p and modulo q, respectively, are computed
This creates the public key h (known to both Alice and Bob) by computing the product
Encryption
Alice, who wants to send a secret message to Bob, puts her message in the form of a polynomial m with coefficients in [-p/2, p/2]. In modern applications of the encryption, the message polynomial can be translated into a binary or ternary representation.
After creating the message polynomial, Alice randomly chooses a polynomial r with small coefficients (not restricted to the set {-1,0,1}), which is meant to obscure the message.
With Bob's public key h the encrypted message e is computed:
e = r·h + m (mod q)
This ciphertext hides Alice's messages and can be sent safely to Bob.
Example:
Assume that Alice wants to send a message that can be written as polynomial
and that the randomly chosen ‘blinding value’ can be expressed as
The ciphertext e that represents her encrypted message to Bob will look like
Decryption
Anybody knowing r could compute the message m by evaluating e - rh; so r must not be revealed by Alice. In addition to the publicly available information, Bob knows his own private key. Here is how he can obtain m:
First he multiplies the encrypted message e by part of his private key f:
a = f·e (mod q)
By rewriting the polynomials, this equation actually represents the following computation:
a = f·(r·h + m) = f·r·h + f·m = p·r·g + f·m (mod q),
since h = p·f_q·g and f·f_q = 1 (mod q).
Instead of choosing the coefficients of a between 0 and q - 1, they are chosen in the interval [-q/2, q/2]; otherwise the original message might not be properly recovered, since Alice chooses the coordinates of her message m in the interval [-p/2, p/2]. All coefficients of p·r·g + f·m already lie within the interval [-q/2, q/2], because the polynomials r, g, f and m and the prime p all have coefficients that are small compared to q. This means that all coefficients are left unchanged during reduction modulo q and that the original message can be recovered properly.
The next step will be to calculate a modulo p:
b = a (mod p) = f·m (mod p),
because p·r·g ≡ 0 (mod p).
Knowing b, Bob can use the other part of his private key to recover Alice's message by multiplication of b and f_p:
b·f_p = f·m·f_p = m (mod p),
because the property f·f_p = 1 (mod p) was required for f_p.
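Encryption and decryption then reduce to a handful of calls to the same conv() routine. The following sketch reuses conv() and the N, P, Q constants from the sketches above and assumes valid keys f, f_p and h; real implementations add message encoding, padding and parameter validation:
/* encryption: e = r*h + m (mod q) */
static void ntru_encrypt(const int r[N], const int h[N],
                         const int m[N], int e[N])
{
    conv(r, h, e, Q);
    for (int i = 0; i < N; i++)
        e[i] = ((e[i] + m[i]) % Q + Q) % Q;   /* m may have -1 entries */
}

/* decryption: a = f*e (mod q) centre-lifted to [-q/2, q/2],
   b = a (mod p), then m = fp*b (mod p) centre-lifted to [-p/2, p/2] */
static void ntru_decrypt(const int f[N], const int fp[N],
                         const int e[N], int m[N])
{
    int a[N], b[N];
    conv(f, e, a, Q);
    for (int i = 0; i < N; i++) {
        if (a[i] > Q / 2) a[i] -= Q;          /* centre-lift mod q */
        b[i] = ((a[i] % P) + P) % P;          /* reduce mod p      */
    }
    conv(fp, b, m, P);
    for (int i = 0; i < N; i++)
        if (m[i] > P / 2) m[i] -= P;          /* back to {-1,0,1}  */
}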
Example:
The encrypted message e from Alice to Bob is multiplied by the polynomial f,
where Bob uses the interval [-q/2, q/2] instead of the interval [0, q – 1] for the coefficients of the polynomial a, so that the original message can be recovered correctly.
Reducing the coefficients of a mod p results in
which equals .
In the last step the result is multiplied by f_p from Bob's private key to end up with the original message m
which is indeed the original message Alice has sent to Bob.
Attacks
Since the proposal of NTRU several attacks on the NTRUEncrypt public key cryptosystem have been introduced. Most attacks are focused on making a total break by finding the secret key f instead of just recovering the message m.
If f is known to have very few non-zero coefficients, Eve can successfully mount a brute force attack by trying all values for f. When Eve wants to know whether f′ is the secret key, she simply calculates f′·h (mod q): if it has small coefficients (as f·h = p·g does), it might be the secret key f, and Eve can test whether f′ is the secret key by using it to decrypt a message she encrypted herself.
Eve could also try values of g and test the resulting candidates for small coefficients in the same way.
It is possible to mount a meet-in-the-middle attack, which is more powerful: it cuts the search time down to its square root. The attack is based on the property that f·h = p·g (mod q) has only small coefficients.
Eve wants to find f_1 and f_2 such that f = f_1 + f_2 holds and such that they have the property that f_1·h and -f_2·h almost match (mod q), since f·h = p·g has only small coefficients.
If f has d ones and N-d zeros, then Eve creates all possible f_1 and f_2 in which they both have length N/2 (e.g. f_1 covers the N/2 lowest coefficients of f and f_2 the N/2 highest), each with d/2 ones. Then she computes f_1·h (mod q) for all f_1 and orders the results in bins based on the first k coordinates. After that she computes -f_2·h (mod q) for all f_2 and orders them in bins not only based on the first k coordinates, but also based on what happens if you add 1 to the first k coordinates. Then she checks the bins that contain both f_1 and f_2 and sees whether the property holds.
The lattice reduction attack is one of the best known and one of the most practical methods to break the NTRUEncrypt. In a way it can be compared to the factorization of the modulus in RSA. The most used algorithm for the lattice reduction attack is the Lenstra-Lenstra-Lovász algorithm.
Because the public key h contains both f and g, one can try to obtain them from h. It is, however, too hard to find the secret key when the NTRUEncrypt parameters are chosen conservatively enough. The lattice reduction attack becomes harder as the dimension of the lattice grows and the shortest vector gets longer.
The chosen-ciphertext attack is also a method which recovers the secret key f and thereby results in a total break. In this attack Eve tries to obtain her own message from its ciphertext and thereby tries to obtain the secret key. In this attack Eve does not have any interaction with Bob.
How it works:
First Eve creates a ciphertext e with a special structure, chosen so that its decryption exposes information about f.
When Eve writes down the steps to decipher e (without actually calculating the values, since she does not know f) she finds:
in which K is a polynomial determined by Eve's choice of ciphertext.
Example:
Then K becomes .
Reducing the coefficients of the polynomial a mod p also reduces the coefficients of K. After multiplication with f_p, Eve finds:
Because c was chosen to be a multiple of p, m can be written as
Which means that .
Now if f and g have few coefficients that coincide at the same positions, K has few non-zero coefficients and is thereby small. By trying different values of K the attacker can recover f.
By encrypting and decrypting a message according to NTRUEncrypt, the attacker can check whether the polynomial f is the correct secret key or not.
Security and performance improvements
Using the latest suggested parameters (see below), the NTRUEncrypt public key cryptosystem is secure against most attacks. There continues, however, to be a struggle between performance and security: it is hard to improve the security without slowing down the speed, and vice versa.
One way to speed up the process without damaging the effectiveness of the algorithm, is to make some changes in the secret key f.
First, construct f such that f = 1 + p·F, in which F is a small polynomial (i.e. with coefficients in {-1,0,1}). By constructing f this way, f is invertible mod p. In fact f_p = 1 (since f ≡ 1 (mod p)), which means that Bob does not have to actually calculate the inverse and does not have to conduct the second step of decryption. Therefore, constructing f this way saves a lot of time, but it does not affect the security of NTRUEncrypt, because it is only easier to find f_p while f is still hard to recover.
In this case f has coefficients different from -1, 0 or 1, because of the multiplication by p. But because Bob multiplies by p to generate the public key h, and later on reduces the ciphertext modulo p, this will not have an effect on the encryption method.
Second, f can be written as the product of multiple polynomials, such that the polynomials have many zero coefficients. This way fewer calculations have to be conducted.
According to the 2020 NTRU NIST submission the following parameters are considered secure:
Table 1: Parameters
References
Jaulmes, E. and Joux, A. A Chosen-Ciphertext Attack against NTRU. Lecture Notes in Computer Science; Vol 1880. Proceedings of the 20th Annual International Cryptology Conference on Advances in Cryptography. pp. 20–35, 2000.
Jeffrey Hoffstein, Jill Pipher, Joseph H. Silverman. NTRU: A Ring Based Public Key Cryptosystem. In Algorithmic Number Theory (ANTS III), Portland, OR, June 1998, J.P. Buhler (ed.), Lecture Notes in Computer Science 1423, Springer-Verlag, Berlin, 1998, 267–288.
Howgrave-Graham, N., Silverman, J.H. & Whyte, W., Meet-In-The-Middle Attack on a NTRU Private Key.
J. Hoffstein, J. Silverman. Optimizations for NTRU. Public-Key Cryptography and Computational Number Theory (Warsaw, September 11–15, 2000), DeGruyter, to appear.
A. C. Atici, L. Batina, J. Fan & I. Verbauwhede. Low-cost implementations of NTRU for pervasive security.
External links
NTRU technical website
The IEEE P1363 Home Page
Security Innovation (acquired NTRU Cryptosystems, Inc.)
Open Source BSD license implementation of NTRUEncrypt
Open Source GPL v2 license of NTRUEncrypt
strongSwan Open Source IPsec solution using NTRUEncrypt-based key exchange
- Embedded SSL/TLS Library offering cipher suites utilizing NTRU (wolfSSL)
Public-key encryption schemes
Lattice-based cryptography
Post-quantum cryptography |
481661 | https://en.wikipedia.org/wiki/NTRU | NTRU | NTRU is an open source public-key cryptosystem that uses lattice-based cryptography to encrypt and decrypt data. It consists of two algorithms: NTRUEncrypt, which is used for encryption, and NTRUSign, which is used for digital signatures. Unlike other popular public-key cryptosystems, it is resistant to attacks using Shor's algorithm. NTRUEncrypt was patented, but it was placed in the public domain in 2017. NTRUSign is patented, but it can be used by software under the GPL.
History
The first version of the system, which was called NTRU, was developed in 1996 by mathematicians Jeffrey Hoffstein, Jill Pipher, and Joseph H. Silverman. That same year, the developers of NTRU joined with Daniel Lieman and founded the NTRU Cryptosystems, Inc., and were given a patent on the cryptosystem. In 2009, the company was acquired by Security Innovation, a software security company. In 2013, Damien Stehle and Ron Steinfeld created a provably secure version of NTRU which is being studied by a post quantum crypto group chartered by the European Commission.
In May 2016, Daniel Bernstein, Chitchanok Chuengsatiansup, Tanja Lange and Christine van Vredendaal released NTRU Prime, which adds defenses against potential attacks on NTRU by eliminating algebraic structure they considered worrisome. However, after more than 20 years of scrutiny, no concrete approach to attacking the original NTRU by exploiting its algebraic structure has been found so far.
NTRU became a finalist in the 3rd round of the Post-Quantum Cryptography Standardization project, whereas NTRU Prime became an alternate candidate.
Performance
At equivalent cryptographic strength, NTRU performs costly private key operations much faster than RSA does. The time of performing an RSA private operation increases as the cube of the key size, whereas that of an NTRU operation increases quadratically.
In 2010, the Department of Electrical Engineering, University of Leuven, noted that "[using] a modern GTX280 GPU, a throughput of up to 200 000 encryptions per second can be reached at a security level of 256 bits. Comparing this to a symmetric cipher (not a very common comparison), this is only around 20 times slower than a recent AES implementation."
Resistance to quantum-computer-based attacks
Unlike RSA and Elliptic Curve Cryptography, NTRU is not known to be vulnerable to quantum computer based attacks. The National Institute of Standards and Technology wrote in a 2009 survey that "[there] are viable alternatives for both public key encryption and signatures that are not vulnerable to Shor’s Algorithm” and “[of] the various lattice based cryptographic schemes that have been developed, the NTRU family of cryptographic algorithms appears to be the most practical". The European Union's PQCRYPTO project (Horizon 2020 ICT-645622) is evaluating the provably secure Stehle–Steinfeld version of NTRU (not original NTRU algorithm itself) as a potential European standard. However the Stehle-Steinfeld version of NTRU is "significantly less efficient than the original scheme."
Standardization
The standard IEEE Std 1363.1, issued in 2008, standardizes lattice-based public key cryptography, especially NTRUEncrypt.
The standard X9.98 standardizes lattice-based public key cryptography, especially NTRUEncrypt, as part of the X9 standards for the financial services industry.
The PQCRYPTO project of the European Commission is considering standardization of the provably secure Stehle-Steinfeld version of NTRU
Implementations
Originally, NTRU was only available as a proprietary, for-pay library and open source authors were threatened with legal action. It was not until 2011 that the first open-source implementation appeared, and in 2013, Security Innovation exempted open source projects from having to get a patent license, and released an NTRU reference implementation under the GPL v2.
Five open-source NTRU implementations now exist; each is available in Java and C:
The GPL-licensed reference implementation
A BSD-licensed library
bouncycastle
GoldBug Messenger was the first chat and e-mail client with the NTRU algorithm under an open source license, based on the Spot-On Encryption Suite kernels.
Additionally, wolfSSL provides support for NTRU cipher suites in a lightweight C implementation.
References
External links
NTRU NIST submission
NTRU Prime NIST submission
Lattice-based cryptography
Post-quantum cryptography
1996 introductions |
481813 | https://en.wikipedia.org/wiki/Timing%20attack | Timing attack | In cryptography, a timing attack is a side-channel attack in which the attacker attempts to compromise a cryptosystem by analyzing the time taken to execute cryptographic algorithms. Every logical operation in a computer takes time to execute, and the time can differ based on the input; with precise measurements of the time for each operation, an attacker can work backwards to the input. Finding secrets through timing information may be significantly easier than using cryptanalysis of known plaintext, ciphertext pairs. Sometimes timing information is combined with cryptanalysis to increase the rate of information leakage.
Information can leak from a system through measurement of the time it takes to respond to certain queries. How much this information can help an attacker depends on many variables: cryptographic system design, the CPU running the system, the algorithms used, assorted implementation details, timing attack countermeasures, the accuracy of the timing measurements, etc. Timing attacks can be applied to any algorithm that has data-dependent timing variation. Removing timing-dependencies is difficult in some algorithms that use low-level operations that frequently exhibit varied execution time.
Timing attacks are often overlooked in the design phase because they are so dependent on the implementation and can be introduced unintentionally with compiler optimizations. Avoidance of timing attacks involves design of constant-time functions and careful testing of the final executable code.
Avoidance
Many cryptographic algorithms can be implemented (or masked by a proxy) in a way that reduces or eliminates data-dependent timing information, a constant-time algorithm. Consider an implementation in which every call to a subroutine always returns in exactly x seconds, where x is the maximum time it ever takes to execute that routine on every possible authorized input. In such an implementation, the timing of the algorithm is less likely to leak information about the data supplied to that invocation. The downside of this approach is that the time used for all executions becomes that of the worst-case performance of the function.
The data-dependency of timing may stem from one of the following:
Non-local memory access, as the CPU may cache the data. Software run on a CPU with a data cache will exhibit data-dependent timing variations as a result of memory lookups into the cache.
Conditional jumps. Modern CPUs try to speculatively execute past jumps by guessing. Guessing wrong (not uncommon with essentially random secret data) entails a measurably large delay as the CPU tries to backtrack. Avoiding this requires writing branch-free code (see the sketch after this list).
Some "complicated" mathematical operations, depending on the actual CPU hardware:
Integer division is almost always non-constant time. The CPU uses a microcode loop that uses a different code path when either the divisor or the dividend is small.
CPUs without a barrel shifter run shifts and rotations in a loop, one position at a time. As a result, the amount to shift must not be secret.
Older CPUs run multiplications in a way similar to division.
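As a small illustration of branch-free coding, the following C fragment selects one of two values without a conditional jump, so the secret choice bit never reaches the branch predictor:
#include <stdint.h>

/* Returns a if choice == 1 and b if choice == 0, with no branch.
   (0 - choice) is all-ones when choice is 1 and all-zeros when it is 0. */
static uint32_t ct_select(uint32_t choice, uint32_t a, uint32_t b)
{
    uint32_t mask = (uint32_t)0 - choice;
    return (a & mask) | (b & ~mask);
}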
Examples
The execution time for the square-and-multiply algorithm used in modular exponentiation depends linearly on the number of '1' bits in the key. While the number of '1' bits alone is not nearly enough information to make finding the key easy, repeated executions with the same key and different inputs can be used to perform statistical correlation analysis of timing information to recover the key completely, even by a passive attacker. Observed timing measurements often include noise (from such sources as network latency, or disk drive access differences from access to access, and the error correction techniques used to recover from transmission errors). Nevertheless, timing attacks are practical against a number of encryption algorithms, including RSA, ElGamal, and the Digital Signature Algorithm.
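A bare-bones square-and-multiply loop makes the leak concrete: the extra multiplication performed for each '1' key bit is exactly what ties the running time to the key's Hamming weight. This sketch is illustrative only and relies on a compiler-specific 128-bit type for the modular products:
#include <stdint.h>

/* Modular exponentiation, most-significant bit first. The data-dependent
   multiply on '1' key bits is the timing leak described above. */
static uint64_t modexp(uint64_t base, uint64_t key, uint64_t n)
{
    uint64_t result = 1 % n;
    base %= n;
    for (int i = 63; i >= 0; i--) {
        result = (uint64_t)((unsigned __int128)result * result % n);
        if ((key >> i) & 1)   /* leak: extra work only on '1' bits */
            result = (uint64_t)((unsigned __int128)result * base % n);
    }
    return result;
}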
In 2003, Boneh and Brumley demonstrated a practical network-based timing attack on SSL-enabled web servers, based on a different vulnerability having to do with the use of RSA with Chinese remainder theorem optimizations. The actual network distance was small in their experiments, but the attack successfully recovered a server private key in a matter of hours. This demonstration led to the widespread deployment and use of blinding techniques in SSL implementations. In this context, blinding is intended to remove correlations between key and encryption time.
Some versions of Unix use a relatively expensive implementation of the crypt library function for hashing an 8-character password into an 11-character string. On older hardware, this computation took a deliberately and measurably long time: as much as two or three seconds in some cases. The login program in early versions of Unix executed the crypt function only when the login name was recognized by the system. This leaked information through timing about the validity of the login name, even when the password was incorrect. An attacker could exploit such leaks by first applying brute-force to produce a list of login names known to be valid, then attempt to gain access by combining only these names with a large set of passwords known to be frequently used. Without any information on the validity of login names the time needed to execute such an approach would increase by orders of magnitude, effectively rendering it useless. Later versions of Unix have fixed this leak by always executing the crypt function, regardless of login name validity.
Two otherwise securely isolated processes running on a single system with either cache memory or virtual memory can communicate by deliberately causing page faults and/or cache misses in one process, then monitoring the resulting changes in access times from the other. Likewise, if an application is trusted, but its paging/caching is affected by branching logic, it may be possible for a second application to determine the values of the data compared to the branch condition by monitoring access time changes; in extreme examples, this can allow recovery of cryptographic key bits.
The 2017 Meltdown and Spectre attacks which forced CPU manufacturers (including Intel, AMD, ARM, and IBM) to redesign their CPUs both rely on timing attacks. As of early 2018, almost every computer system in the world is affected by Spectre, making it the most powerful example of a timing attack in history.
Algorithm
The following C code demonstrates a typical insecure string comparison which stops testing as soon as a character doesn't match. For example, when comparing "ABCDE" with "ABxDE" it will return after 3 loop iterations:
bool insecureStringCompare(const void *a, const void *b, size_t length) {
const char *ca = a, *cb = b;
for (size_t i = 0; i < length; i++)
if (ca[i] != cb[i])
return false;
return true;
}
By comparison, the following version runs in constant-time by testing all characters and using a bitwise operation to accumulate the result:
bool constantTimeStringCompare(const void *a, const void *b, size_t length) {
const char *ca = a, *cb = b;
bool result = true;
for (size_t i = 0; i < length; i++)
result &= ca[i] == cb[i];
return result;
}
In the world of C library functions, the first function is analogous to memcmp(), while the latter is analogous to NetBSD's consttime_memequal() or OpenBSD's timingsafe_bcmp() and timingsafe_memcmp(). On other systems, the comparison function from cryptographic libraries like OpenSSL and libsodium can be used.
Notes
Timing attacks are easier to mount if the adversary knows the internals of the hardware implementation, and even more so, the cryptographic system in use. Since cryptographic security should never depend on the obscurity of either (see security through obscurity, specifically both Shannon's Maxim and Kerckhoffs's principle), resistance to timing attacks should not either. If nothing else, an exemplar can be purchased and reverse engineered. Timing attacks and other side-channel attacks may also be useful in identifying, or possibly reverse-engineering, a cryptographic algorithm used by some device.
References
Further reading
Paul C. Kocher. Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems. CRYPTO 1996: 104–113
Describes dudect, a simple program that times a piece of code on different data.
Side-channel attacks |
482136 | https://en.wikipedia.org/wiki/Zeroisation | Zeroisation | In cryptography, zeroisation (also spelled zeroization) is the practice of erasing sensitive parameters (electronically stored data, cryptographic keys, and critical security parameters) from a cryptographic module to prevent their disclosure if the equipment is captured. This is generally accomplished by altering or deleting the contents to prevent recovery of the data.
Mechanical
When encryption was performed by mechanical devices, this would often mean changing all the machine's settings to some fixed, meaningless value, such as zero. On machines with letter settings rather than numerals, the letter 'O' was often used instead. Some machines had a button or lever for performing this process in a single step. Zeroisation would typically be performed at the end of an encryption session to prevent accidental disclosure of the keys, or immediately when there was a risk of capture by an adversary.
Software
In modern software-based cryptographic modules, zeroisation is made considerably more complex by issues such as virtual memory, compiler optimisations and use of flash memory. Also, zeroisation may need to be applied not only to the key, but also to a plaintext and some intermediate values. A cryptographic software developer must have an intimate understanding of memory management in a machine, and be prepared to zeroise data whenever sensitive data might move outside the security boundary. Typically this will involve overwriting the data with zeroes, but in the case of some types of non-volatile storage the process is much more complex; see data remanence.
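The compiler-optimisation hazard can be sketched in C: a final memset on a buffer that is never read again is dead code that an optimiser may legally delete. Writing through a volatile pointer is one common way to force the stores; explicit_bzero (BSD and glibc) or memset_s (C11 Annex K) are library alternatives where available:
#include <stddef.h>

/* Force the zeroing stores: the volatile qualifier tells the compiler
   each write is observable and must not be optimised away. */
static void zeroise(void *buf, size_t len)
{
    volatile unsigned char *p = buf;
    while (len--)
        *p++ = 0;
}

void handle_secret(void)
{
    unsigned char key[32];
    /* ... derive and use key ... */
    zeroise(key, sizeof key);  /* a plain memset(key, 0, ...) here may
                                  be removed as dead code */
}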
As well as zeroising data due to memory management, software designers consider performing zeroisation:
When an application changes mode (e.g. to a test mode) or user;
When a computer process changes privileges;
On termination (including abnormal termination);
On any error condition which may indicate instability or tampering;
Upon user request;
Immediately, the last time the parameter is required; and
Possibly if a parameter has not been required for some time.
Informally, software developers may also use zeroise to mean any overwriting of sensitive data, not necessarily of a cryptographic nature.
Tamper resistant hardware
In tamper resistant hardware, automatic zeroisation may be initiated when tampering is detected. Such hardware may be rated for cold zeroisation, the ability to zeroise itself without its normal power supply enabled.
Standards
Standards for zeroisation are specified in ANSI X9.17 and FIPS 140-2.
References
Key management |
482181 | https://en.wikipedia.org/wiki/Q%20%28disambiguation%29 | Q (disambiguation) | Q is the seventeenth letter of the English alphabet.
Q may also refer to:
People
Q, pseudonym of Sir Arthur Quiller-Couch, the Cornish writer
Q, pseudonym used by the originator of QAnon, an American far-right conspiracy theory
Q, pseudonym of Quentin Elias in his appearances in gay porn site Randy Blue
q, stage name for Qurram Hussain of JoSH
Q, nickname for NBA assistant coach Bruce Fraser
Q, or Q Martel, nicknames of Giffard Le Quesne Martel
Q, nickname for Joel Quenneville
Q, nickname of Qaushiq Mukherjee, an Indian film director
Q, nickname of Quincy Jones
Q, nickname of American basketball player Quintin Dailey (1961–2010)
Q, nickname of former American football player Anquan Boldin
Maggie Q (born 1979), American actress
Schoolboy Q, rapper
Stacey Q, disco singer
Brian "Q" Quinn, member of the American comedy troupe The Tenderloins
Q, main dancer and vocalist of South Korean boy band The Boyz
Arts, entertainment, and media
Fictional entities
Q (James Bond), a character in the James Bond novels and films
Q (Star Trek), a character and the name of the character's species
Q (Street Fighter), a character in the arcade game Street Fighter
Q, an amnesiac child from the video-game Zero Time Dilemma
Quinton 'Q' Brooks, a character from the TV series Moesha
Ah Q, the main character in the novella The True Story of Ah Q
The Q, a character from the TV series The Lost Islands
Kyu, a character in the dating simulation videogame HuniePop
Literature
Q (magazine), British music magazine
Q Awards, yearly music awards given by Q magazine
Q (novel), historical novel by Luther Blissett first published in Italian in 1999
Q source, also known as Q document, a hypothetical early Gospel writing (Bible)
"Q" Is for Quarry, the seventeenth novel in Sue Grafton's "Alphabet mystery" series, published in 2002
Music
Q (1970s band), an American disco group
SSQ (band), formerly Q, an American synth-pop band
Q (album), Japanese language album by Mr. Children, 2000
Q, jazz album by Tom Hasslan's Krokofant
"Q" (song), by Mental Cube (The Future Sound of London), 1990
"Q", a song by AAA, 2006
Networks, stations, and channels
BeritaSatu (News One), previously known as Q Channel and QTV, an Indonesian talk channel
Q (TV network), Philippine network previously known as Quality TeleVision (QTV)
ARY Qtv, a Pakistani television channel
Q Radio, a UK radio station run by Q magazine
Q Television Network, an American cable television network
Q TV, a UK music channel based on Q magazine
Other titled works
Q (1982 film), horror film written and directed by Larry Cohen (Also known as Q – The Winged Serpent)
Q (2011 film), French film
q (radio show), CBC Radio One show, formerly called Q
Q (TV series), Spike Milligan's BBC2 comedy series that ran between 1969 and 1983
Q, the production code for the 1965 Doctor Who serial The Space Museum
Business and government
Q (dairy), Norwegian dairy brand
Q clearance, United States Department of Energy security clearance
Q Score, in marketing, way to measure the familiarity of an item
Q Theatre, a theatre in London, England
Q-telecom, Greek operator
Motorola Q, smartphone released in 2006
Pentax Q, mirrorless interchangeable lens camera released in 2011
Tobin's q, a financial ratio developed by James Tobin
Computing and computer games
Q (cipher), encryption algorithm
Q (emulator), open-source x86 emulator for Mac OS X
Q (equational programming language), functional programming language based on term rewriting
Q (game engine), 3D middleware from Qube Software
Q (number format), fixed-point number format built into certain computer processors
Q (programming language from Kx Systems), array processing language
Q (software), a computer software package for molecular dynamics simulation
Q Sharp (Q#), domain-specific programming language
Q, a channel service in QuakeNet's IRC services
Panasonic Q, a hybrid video game console between a GameCube and a DVD player, manufactured by Nintendo and Panasonic
Q Entertainment, developer of Rez HD and the Lumines and Meteos games
Q-Games, developer of the Pixel Junk series of PlayStation 3 games
Engineering
Q, the standard abbreviation for an electronic transistor, used e.g. in circuit diagrams
Q the first moment of area, used in calculating shear stress distributions
Q, the reactive power component of apparent power
ΔQ, Heat transfer coefficient - ΔQ = heat input or heat lost, Joules
Q Factor (bicycles), the width between where the pedals attach to the cranks
Q factor or Q in resonant systems, a measurement of the effect of resistance to oscillation
Linguistics
Voiceless uvular stop in the International Phonetic Alphabet
Mathematics
ℚ or Q, set of all rational numbers
Q, the Quaternion group
Q, Robinson arithmetic, a finitely axiomatized fragment of Peano Arithmetic
Q value in statistics, the minimum false discovery rate at which the test may be called significant
Science
Biology and chemistry
Q, the symbol for discharge (hydrology)
q, an abbreviation for "every" in medicine
Q, abbreviation for the amino acid, glutamine
Q, abbreviation for quinone
q, designation for the long arm of a chromosome
Q, reaction quotient
Cardiac output (Q), the volume of blood pumped by each ventricle per minute
Coenzyme Q, a carrier in electron transport
Haplogroup Q (mtDNA), a human mitochondrial DNA (mtDNA) haplogroup
Haplogroup Q-M242 (Y-DNA), a Y-chromosomal DNA (Y-DNA) haplogroup
Q value (nuclear science), the differences of energies of the parent nuclides to the daughter nuclides
Physics and astronomy
Q or q, dynamic pressure
Max q
Q, electric charge
q, elementary charge
Q, Fusion energy gain factor
Q, heat
q, momentum transfer
Q, quasar
Q, Toomre's Stability Criterion
Q, volumetric flow rate
q, quark
Sports
JQH Arena, a nickname for the arena on the campus of Missouri State University
Q (San Jose Earthquakes mascot), furry blue mascot of the Major League Soccer team San Jose Earthquakes
Qualcomm Stadium, a nickname for a stadium in San Diego, California
Quebec Major Junior Hockey League, often referred to as "The Q"
Rocket Mortgage FieldHouse, Cleveland, Ohio; a nickname from the arena's previous name, Quicken Loans Arena
Transportation
Q (New York City Subway service)
Q-ship, converted merchant vessels with concealed armament intended to lure and destroy submarines
Other uses
Q Society of Australia, right-wing, anti-Islamic political society
Q, a nickname for Chuluaqui-Quodoushka, a series of New Age sexual meditation exercises developed by Harley Reagan
Quintals, an archaic method of measuring crop yield, was often abbreviated as q
Quarter, as in "Q1", "Q2", "Q3" and "Q4", a three-month period in a calendar year
Q. texture, a Taiwanese term describing the ideal texture of many foods
Initiative Q, a new payment network and digital currency devised by Saar Wilf
Quebec, the military time zone code for UTC−04:00
Sky Q, a subscription-based television and entertainment service
The Q (nightclub), an LGBT nightclub in New York City
See also
Cue (disambiguation)
Kyū, rank, in Japanese martial arts and other Japanese grading systems
QQ (disambiguation)
QQQ (disambiguation)
QQQQ (disambiguation)
Queue (disambiguation)
Suzie Q (disambiguation) |
482331 | https://en.wikipedia.org/wiki/Iraqi%20block%20cipher | Iraqi block cipher | In cryptography, the Iraqi block cipher was a block cipher published in C source code form by anonymous FTP upload around July 1999, and widely distributed on Usenet. It is a five round unbalanced Feistel cipher operating on a 256 bit block with a 160 bit key.
A comment suggests that it is of Iraqi origin. However, like the S-1 block cipher, it is generally regarded as a hoax, although of lesser quality than S-1. Although the comment suggests that it is Iraqi in origin, all comments, variable and function names and printed strings are in English rather than Arabic; the code is fairly inefficient (including some pointless operations), and the cipher's security may be flawed (no proof).
Because it has a constant key schedule the cipher is vulnerable to a slide attack. However, it may take 2^64 chosen texts to create a single slid pair, which would make the attack unfeasible. It also has many fixed points, although that is not necessarily a problem, except possibly for hashing modes. No public attack is currently available. As with S-1, it was David Wagner who first spotted the security flaws.
References
External links
Source code for the cipher
File encryption with IBC in ECB and CBC Mode
Block ciphers
Internet hoaxes
1999 hoaxes |
483191 | https://en.wikipedia.org/wiki/Key%20distribution%20center | Key distribution center | In cryptography, a key distribution center (KDC) is part of a cryptosystem intended to reduce the risks inherent in exchanging keys. KDCs often operate in systems within which some users may have permission to use certain services at some times and not at others.
Security overview
For instance, an administrator may have established a policy that only certain users may back up to tape. Many operating systems can control access to the tape facility via a "system service". If that system service further restricts the tape drive to operate only on behalf of users who can submit a service-granting ticket when they wish to use it, there remains only the task of distributing such tickets to the appropriately permitted users. If the ticket consists of (or includes) a key, one can then term the mechanism which distributes it a KDC. Usually, in such situations, the KDC itself also operates as a system service.
Operation
A typical operation with a KDC involves a request from a user to use some service. The KDC will use cryptographic techniques to authenticate requesting users as themselves. It will also check whether an individual user has the right to access the service requested. If the authenticated user meets all prescribed conditions, the KDC can issue a ticket permitting access.
KDCs mostly operate with symmetric encryption.
In most (but not all) cases the KDC shares a key with each of all the other parties.
The KDC produces a ticket based on a server key.
The client receives the ticket and submits it to the appropriate server.
The server can verify the submitted ticket and grant access to the user submitting it.
Security systems using KDCs include Kerberos. (Actually, Kerberos partitions KDC functionality between two different agents: the AS (Authentication Server) and the TGS (Ticket Granting Service).)
External links
Kerberos Authentication Protocol
Microsoft: Kerberos Key Distribution Center - TechNet
Microsoft: Key Distribution Center - MSDN
Key management
Computer network security |
487303 | https://en.wikipedia.org/wiki/User%20agent | User agent | In computing, a user agent is any software, acting on behalf of a user, which "retrieves, renders and facilitates end-user interaction with Web content". A user agent is therefore a special kind of software agent.
Some prominent examples of user agents are web browsers and email readers. Often, a user agent acts as the client in a client–server system. In some contexts, such as within the Session Initiation Protocol (SIP), the term user agent refers to both end points of a communications session.
User agent identification
When a software agent operates in a network protocol, it often identifies itself, its application type, operating system, device model, software vendor, or software revision, by submitting a characteristic identification string to its operating peer. In HTTP, SIP, and NNTP protocols, this identification is transmitted in a header field User-Agent. Bots, such as Web crawlers, often also include a URL and/or e-mail address so that the Webmaster can contact the operator of the bot.
Use in HTTP
In HTTP, the User-Agent string is often used for content negotiation, where the origin server selects suitable content or operating parameters for the response. For example, the User-Agent string might be used by a web server to choose variants based on the known capabilities of a particular version of client software. The concept of content tailoring is built into the HTTP standard in RFC 1945 "for the sake of tailoring responses to avoid particular user agent limitations".
The User-Agent string is one of the criteria by which Web crawlers may be excluded from accessing certain parts of a website using the Robots Exclusion Standard (robots.txt file).
As with many other HTTP request headers, the information in the "User-Agent" string contributes to the information that the client sends to the server, since the string can vary considerably from user to user.
Format for human-operated web browsers
The User-Agent string format is currently specified by section 5.5.3 of HTTP/1.1 Semantics and Content. The format of the User-Agent string in HTTP is a list of product tokens (keywords) with optional comments. For example, if a user's product were called WikiBrowser, their user agent string might be WikiBrowser/1.0 Gecko/1.0. The "most important" product component is listed first.
The parts of this string are as follows:
product name and version (WikiBrowser/1.0)
layout engine and version (Gecko/1.0)
During the first browser war, many web servers were configured to send web pages that required advanced features, including frames, to clients that were identified as some version of Mozilla only. Other browsers were considered to be older products such as Mosaic, Cello, or Samba, and would be sent a bare bones HTML document.
For this reason, most Web browsers use a User-Agent string value as follows:
Mozilla/[version] ([system and browser information]) [platform] ([platform details]) [extensions]
For example, Safari on the iPad has used the following:
Mozilla/5.0 (iPad; U; CPU OS 3_2_1 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko) Mobile/7B405
The components of this string are as follows:
Mozilla/5.0: Previously used to indicate compatibility with the Mozilla rendering engine.
(iPad; U; CPU OS 3_2_1 like Mac OS X; en-us): Details of the system in which the browser is running.
AppleWebKit/531.21.10: The platform the browser uses.
(KHTML, like Gecko): Browser platform details.
Mobile/7B405: This is used by the browser to indicate specific enhancements that are available directly in the browser or through third parties. An example of this is Microsoft Live Meeting which registers an extension so that the Live Meeting service knows if the software is already installed, which means it can provide a streamlined experience to joining meetings.
Before migrating to the Chromium code base, Opera was the most widely used web browser that did not have the User-Agent string with "Mozilla" (instead beginning it with "Opera"). Since July 15, 2013, Opera's User-Agent string begins with "Mozilla/5.0" and, to avoid encountering legacy server rules, no longer includes the word "Opera" (instead using the string "OPR" to denote the Opera version).
Format for automated agents (bots)
Automated web crawling tools can use a simplified form, where an important field is contact information in case of problems. By convention the word "bot" is included in the name of the agent. For example:
Googlebot/2.1 (+http://www.google.com/bot.html)
Automated agents are expected to follow rules in a special file called "robots.txt".
User agent spoofing
The popularity of various Web browser products has varied throughout the Web's history, and this has influenced the design of websites in such a way that websites are sometimes designed to work well only with particular browsers, rather than according to uniform standards by the World Wide Web Consortium (W3C) or the Internet Engineering Task Force (IETF). Websites often include code to detect browser version to adjust the page design sent according to the user agent string received. This may mean that less-popular browsers are not sent complex content (even though they might be able to deal with it correctly) or, in extreme cases, refused all content. Thus, various browsers have a feature to cloak or spoof their identification to force certain server-side content. For example, the Android browser identifies itself as Safari (among other things) in order to aid compatibility.
Other HTTP client programs, like download managers and offline browsers, often have the ability to change the user agent string.
Spam bots and Web scrapers often use fake user agents.
A result of user agent spoofing may be that collected statistics of Web browser usage are inaccurate.
User agent sniffing
User agent sniffing is the practice of websites showing different or adjusted content when viewed with certain user agents. An example of this is Microsoft Exchange Server 2003's Outlook Web Access feature. When viewed with Internet Explorer 6 or newer, more functionality is displayed compared to the same page in any other browsers. User agent sniffing is considered poor practice, since it encourages browser-specific design and penalizes new browsers with unrecognized user agent identifications. Instead, the W3C recommends creating standard HTML markup, allowing correct rendering in as many browsers as possible, and to test for specific browser features rather than particular browser versions or brands.
Websites intended for display by mobile phones often rely on user agent sniffing, since mobile browsers often differ greatly from each other.
Encryption strength notations
Web browsers created in the United States, such as Netscape Navigator and Internet Explorer, previously used the letters U, I, and N to specify the encryption strength in the user agent string. Until 1996, when the United States government allowed encryption with keys longer than 40 bits to be exported, vendors shipped various browser versions with different encryption strengths. "U" stands for "USA" (for the version with 128-bit encryption), "I" stands for "International" (the browser has 40-bit encryption and can be used anywhere in the world), and "N" stands (de facto) for "None" (no encryption). Following the lifting of export restrictions, most vendors supported 256-bit encryption.
Deprecation of User-Agent header
In 2020, Google announced that they would be phasing out support for the User-Agent header in their Google Chrome browser. They stated that other major web browser vendors were supportive of the move, but that they did not know when other vendors would follow suit. Google stated that a new feature called Client Hints would replace the functionality of the User-Agent string.
See also
Robots exclusion standard
Web crawler
Wireless Universal Resource File (WURFL)
User Agent Profile (UAProf)
Browser sniffing
Web browser engine
References
Clients (computing)
Hypertext Transfer Protocol headers |
487403 | https://en.wikipedia.org/wiki/40-bit%20encryption | 40-bit encryption | 40-bit encryption refers to a (now broken) key size of forty bits, or five bytes, for symmetric encryption; this represents a relatively low level of security. A forty bit length corresponds to a total of 240 possible keys. Although this is a large number in human terms (about a trillion), it is possible to break this degree of encryption using a moderate amount of computing power in a brute-force attack, i.e., trying out each possible key in turn.
Description
A typical home computer in 2004 could brute-force a 40-bit key in a little under two weeks, testing a million keys per second; modern computers are able to achieve this much faster. Using free time on a large corporate network or a botnet would reduce the time in proportion to the number of computers available. With dedicated hardware, a 40-bit key can be broken in seconds. The Electronic Frontier Foundation's Deep Crack, built by a group of enthusiasts for US$250,000 in 1998, could break a 56-bit Data Encryption Standard (DES) key in days, and would be able to break 40-bit DES encryption in about two seconds.
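The arithmetic behind these figures is straightforward: 2^40 is about 1.1 × 10^12 keys, so at one million keys per second an exhaustive search takes about 1.1 × 10^6 seconds, roughly 12.7 days. The search itself is a single loop, sketched below in C; decrypt_and_check() is a hypothetical stand-in for decrypting a captured block under a candidate key and testing it against known plaintext:
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical oracle: in a real attack this would decrypt a captured
   block with the candidate key and compare against expected plaintext. */
static bool decrypt_and_check(uint64_t key)
{
    static const uint64_t SECRET_KEY = 0x12345ABCDEULL;  /* illustrative */
    return key == SECRET_KEY;
}

/* Exhaustive search of the 40-bit keyspace. */
static uint64_t brute_force_40bit(void)
{
    for (uint64_t key = 0; key < (1ULL << 40); key++)
        if (decrypt_and_check(key))
            return key;
    return UINT64_MAX;  /* not found */
}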
40-bit encryption was common in software released before 1999, especially those based on the RC2 and RC4 algorithms which had special "7-day" export review policies, when algorithms with larger key lengths could not legally be exported from the United States without a case-by-case license. "In the early 1990s ... As a general policy, the State Department allowed exports of commercial encryption with 40-bit keys, although some software with DES could be exported to U.S.-controlled subsidiaries and financial institutions." As a result, the "international" versions of web browsers were designed to have an effective key size of 40 bits when using Secure Sockets Layer to protect e-commerce. Similar limitations were imposed on other software packages, including early versions of Wired Equivalent Privacy. In 1992, IBM designed the CDMF algorithm to reduce the strength of 56-bit DES against brute force attack to 40 bits, in order to create exportable DES implementations.
Obsolescence
All 40-bit and 56-bit encryption algorithms are obsolete, because they are vulnerable to brute force attacks, and therefore cannot be regarded as secure. As a result, virtually all Web browsers now use 128-bit keys, which are considered strong. Most Web servers will not communicate with a client unless it has 128-bit encryption capability installed on it.
Public/private key pairs used in asymmetric encryption (public key cryptography), at least those based on prime factorization, must be much longer in order to be secure; see key size for more details.
As a general rule, modern symmetric encryption algorithms such as AES use key lengths of 128, 192 and 256 bits.
See also
56-bit encryption
Content Scramble System
Footnotes
References
Symmetric-key cryptography
History of cryptography
Encryption debate |
487471 | https://en.wikipedia.org/wiki/GOST%20%28block%20cipher%29 | GOST (block cipher) | The GOST block cipher (Magma), defined in the standard GOST 28147-89 (RFC 5830), is a Soviet and Russian government standard symmetric key block cipher with a block size of 64 bits. The original standard, published in 1989, did not give the cipher any name, but the most recent revision of the standard, GOST R 34.12-2015 (RFC 7801, RFC 8891), specifies that it may be referred to as Magma. The GOST hash function is based on this cipher. The new standard also specifies a new 128-bit block cipher called Kuznyechik.
Developed in the 1970s, the standard had been marked "Top Secret" and then downgraded to "Secret" in 1990. Shortly after the dissolution of the USSR, it was declassified and it was released to the public in 1994. GOST 28147 was a Soviet alternative to the United States standard algorithm, DES. Thus, the two are very similar in structure.
The algorithm
GOST has a 64-bit block size and a key length of 256 bits. Its S-boxes can be secret, and they contain about 354 (log2((16!)^8)) bits of secret information, so the effective key size can be increased to 610 bits; however, a chosen-key attack can recover the contents of the S-boxes in approximately 2^32 encryptions.
GOST is a Feistel network of 32 rounds. Its round function is very simple: add a 32-bit subkey modulo 2^32, put the result through a layer of S-boxes, and rotate that result left by 11 bits. The result of that is the output of the round function. In the adjacent diagram, one line represents 32 bits.
The subkeys are chosen in a pre-specified order. The key schedule is very simple: break the 256-bit key into eight 32-bit subkeys, and each subkey is used four times in the algorithm; the first 24 rounds use the key words in order, the last 8 rounds use them in reverse order.
The S-boxes accept a four-bit input and produce a four-bit output. The S-box substitution in the round function consists of eight 4 × 4 S-boxes. The S-boxes are implementation-dependent, thus parties that want to secure their communications using GOST must be using the same S-boxes. For extra security, the S-boxes can be kept secret. In the original standard where GOST was specified, no S-boxes were given, but they were to be supplied somehow. This led to speculation that organizations the government wished to spy on were given weak S-boxes. One GOST chip manufacturer reported that he generated S-boxes himself using a pseudorandom number generator.
For example, the Central Bank of Russian Federation used the following S-boxes:
However, the most recent revision of the standard, GOST R 34.12-2015, adds the missing S-box specification and defines it as follows.
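The round function translates almost line for line into C. The sketch below assumes placeholder S-boxes (identity permutations on 4 bits); a real implementation substitutes the secret or standardised tables agreed between the parties:
#include <stdint.h>

/* Placeholder S-boxes: eight identity permutations on 4 bits. Real GOST
   deployments use secret or standardized tables in their place. */
static const uint8_t SBOX[8][16] = {
    {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15},
    {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15},
    {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15},
    {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15},
    {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15},
    {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15},
    {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15},
    {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15},
};

/* One application of the GOST round function:
   add the subkey mod 2^32, substitute eight 4-bit nibbles, rotate left 11. */
static uint32_t gost_round(uint32_t x, uint32_t subkey)
{
    x += subkey;
    uint32_t y = 0;
    for (int i = 0; i < 8; i++)
        y |= (uint32_t)SBOX[i][(x >> (4 * i)) & 0xF] << (4 * i);
    return (y << 11) | (y >> 21);
}

/* In the Feistel network: (L, R) becomes (R, L ^ gost_round(R, K_i)). */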
Cryptanalysis of GOST
The latest cryptanalysis of GOST shows that it is secure in a theoretical sense. In practice, the data and memory complexity of the best published attacks has reached the level of practical, while the time complexity of even the best attack is still 2^192 when 2^64 data is available.
Since 2007, several attacks have been developed against reduced-round GOST implementations and/or weak keys.
In 2011 several authors discovered more significant flaws in GOST, being able to attack the full 32-round GOST with arbitrary keys for the first time. It has even been called "a deeply flawed cipher" by Nicolas Courtois. Initial attacks were able to reduce time complexity from 2^256 to 2^228 at the cost of huge memory requirements, and soon they were improved up to 2^178 time complexity (at the cost of 2^70 memory and 2^64 data).
In December 2012, Courtois, Gawinecki, and Song improved attacks on GOST by computing only 2^101 GOST rounds. Isobe had already published a single-key attack on the full GOST cipher, which Dinur, Dunkelman, and Shamir improved upon, reaching 2^224 time complexity for 2^32 data and 2^36 memory, and 2^192 time complexity for 2^64 data.
Since the attacks reduce the expected strength from 2^256 (key length) to around 2^178, the cipher can be considered broken. However, for any block cipher with a block size of n bits, the maximum amount of plaintext that can be encrypted before rekeying must take place is 2^(n/2) blocks, due to the birthday paradox, and none of the aforementioned attacks require less than 2^32 data.
See also
GOST standards
References
Further reading
External links
Description, texts of the standard, online GOST encrypt and decrypt tools
SCAN's entry for GOST
An open source implementation of PKCS#11 software device with Russian GOST cryptography standards capabilities
https://github.com/gost-engine/engine — open-source implementation of Russian GOST cryptography for OpenSSL.
Broken block ciphers
Feistel ciphers
GOST standards |
490067 | https://en.wikipedia.org/wiki/Blinding%20%28cryptography%29 | Blinding (cryptography) | In cryptography, blinding is a technique by which an agent can provide a service to (i.e., compute a function for) a client in an encoded form without knowing either the real input or the real output. Blinding techniques also have applications to preventing side-channel attacks on encryption devices.
More precisely, Alice has an input x and Oscar has a function f. Alice would like Oscar to compute y = f(x) for her without revealing either x or y to him. The reason for her wanting this might be that she doesn't know the function f or that she does not have the resources to compute it.
Alice "blinds" the message by encoding it into some other input E(x); the encoding E must be a bijection on the input space of f, ideally a random permutation. Oscar gives her f(E(x)), to which she applies a decoding D to obtain y = D(f(E(x))) = f(x).
Not all functions allow for blind computation. At other times, blinding must be applied with care. An example of the latter is Rabin–Williams signatures: if blinding is applied to the formatted message but the random value does not honor Jacobi requirements on p and q, then it could lead to private key recovery. A demonstration of such a recovery, discovered by Evgeny Sidorov, broke the Rabin–Williams implementation in the Crypto++ library.
The most common application of blinding is the blind signature. In a blind signature protocol, the signer digitally signs a message without being able to learn its content.
The one-time pad (OTP) is an application of blinding to the secure communication problem, by its very nature. Alice would like to send a message to Bob secretly, however all of their communication can be read by Oscar. Therefore, Alice sends the message after blinding it with a secret key or OTP that she shares with Bob. Bob reverses the blinding after receiving the message. In this example, the function f is the identity and E and D are both typically the XOR operation.
Blinding can also be used to prevent certain side-channel attacks on asymmetric encryption schemes. Side-channel attacks allow an adversary to recover information about the input to a cryptographic operation, by measuring something other than the algorithm's result, e.g., power consumption, computation time, or radio-frequency emanations by a device. Typically these attacks depend on the attacker knowing the characteristics of the algorithm, as well as (some) inputs. In this setting, blinding serves to alter the algorithm's input into some unpredictable state. Depending on the characteristics of the blinding function, this can prevent some or all leakage of useful information. Note that security depends also on the resistance of the blinding functions themselves to side-channel attacks.
For example, in RSA blinding involves computing the blinding operation E(x) = x·r^e mod N, where r is a random integer between 1 and N and relatively prime to N (i.e. gcd(r, N) = 1), x is the plaintext, e is the public RSA exponent and N is the RSA modulus. As usual, the decryption function f(z) = z^d mod N is applied, thus giving f(E(x)) = (x·r^e)^d = x^d·r mod N. Finally it is unblinded using the function D(z) = z·r^(-1) mod N. Multiplying x^d·r by r^(-1) yields x^d, as desired. When decrypting in this manner, an adversary who is able to measure the time taken by this operation would not be able to make use of this information (by applying timing attacks RSA is known to be vulnerable to), as she does not know the constant r and hence has no knowledge of the real input fed to the RSA primitives.
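A toy-sized C sketch of the whole round trip follows, using the textbook RSA key p = 61, q = 53 (so N = 3233, e = 17, d = 2753); the fixed blinding factor r = 7 stands in for what must in practice be a fresh, cryptographically random value coprime to N:
#include <stdint.h>
#include <stdio.h>

static uint64_t powmod(uint64_t b, uint64_t e, uint64_t n)
{
    uint64_t r = 1 % n;
    b %= n;
    while (e) {
        if (e & 1)
            r = (r * b) % n;   /* products stay below 2^64 for this toy n */
        b = (b * b) % n;
        e >>= 1;
    }
    return r;
}

/* modular inverse by the extended Euclidean algorithm; assumes gcd(a, n) == 1 */
static uint64_t modinv(uint64_t a, uint64_t n)
{
    int64_t t = 0, newt = 1, r = (int64_t)n, newr = (int64_t)(a % n), q, tmp;
    while (newr != 0) {
        q = r / newr;
        tmp = t - q * newt; t = newt; newt = tmp;
        tmp = r - q * newr; r = newr; newr = tmp;
    }
    return (uint64_t)(t < 0 ? t + (int64_t)n : t);
}

int main(void)
{
    const uint64_t N = 3233, e = 17, d = 2753;
    const uint64_t x = 65;   /* value to be "decrypted"          */
    const uint64_t r = 7;    /* blinding factor, gcd(r, N) == 1  */
    uint64_t blinded   = (x * powmod(r, e, N)) % N;  /* E(x) = x * r^e mod N */
    uint64_t decrypted = powmod(blinded, d, N);      /* = x^d * r mod N      */
    uint64_t unblinded = (decrypted * modinv(r, N)) % N;
    printf("blinded result %llu, direct x^d mod N %llu\n",
           (unsigned long long)unblinded,
           (unsigned long long)powmod(x, d, N));
    return 0;
}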
Examples
Blinding in GPG 1.x
References
External links
Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS and Other Systems
Breaking the Rabin-Williams digital signature system implementation in the Crypto++ library
Cryptography |
490488 | https://en.wikipedia.org/wiki/Break | Break | Break or Breaks or The Break may refer to:
Time off from duties
Recess (break), time in which a group of people is temporarily dismissed from its duties
Break (work), time off during a shift/recess
Coffee break, a short mid-morning rest period in business
Annual leave (holiday/vacation), paid time off work
Time off from school
Holiday break, a U.S. term for various school holidays
Christmas break or Winter break, a break in the winter, typically around Christmas and New Years
Spring break, a recess in early spring at universities and schools in various countries in the northern hemisphere
Summer break, a typical long break in the summertime
People
Ted Breaks (1919–2000), English professional footballer
Danny Breaks (active 1990s–), British drum and bass DJ, record producer and record label owner
Sport
Break (cue sports), the first shot meant to break the balls in cue sports, also a series of shots in snooker
Breaking ball, a pitch that does not travel straight like a fastball as it approaches the batter
Surf break, place where a wave collapses into surf
Horse breaking, the process of training a horse to be ridden
Break (tennis), to win a tennis game as the receiving player or team, thereby breaking serve
Technology
Computing
Break condition, in asynchronous serial communication
Break key, a special key on computer keyboards
break statement, a keyword in computer programming used for flow control
Line break (computing), a special character signifying the end of a line of text
To break (or "crack") an encryption cipher, in cryptanalysis
Other technologies
Brake (carriage), a type of horse-drawn carriage used in the 19th and early 20th centuries
Break (locksmithing), a separation between pins of a lock
Breakover angle, a maximum possible supplementary angle of terrain
Paradelta Break, an Italian paraglider design
Arts and media
Books
The Break (novel), 1957, by José Giovanni
The Breaks (novel), by Richard Price
Film and television
Break (2008 film), an American action film
Break (2014 film), also known as Nature Law
Break (2020 film), a British independent film
The Break (1995 film), an American tennis film
The Break (1963 film), a drama starring Tony Britton
The Break (2003 film), a TV film featuring Kris Kristofferson
The Break, alternative name for A Further Gesture, a 1997 film
The Break (TV series), a 2016 Belgian crime drama, also known as La Trêve
The Break with Michelle Wolf, a 2018 Netflix series
The Breaks (1999 film), an American comedy
The Breaks (2016 film), an American television hip-hop drama
The Breaks (TV series), a 2017 American drama and a continuation of the 2016 film
"Break" (Bottom), a British television episode
Break (Transformers), a fictional character
The Break, a short film shown at the 2016 Dublin International Film Festival
Music
Break (music), a percussion interlude or instrumental solo within a longer work of music
Breakbeat, a broad style of electronic or dance-oriented music
Albums
Break (Enchant album), 1998
Break (Mamoru Miyano album), 2009
Break (One-Eyed Doll album), 2010
Break (EP), 2006, by The Cinematics
Songs
"Break", 1972, by Aphrodite's Child
"The Breaks" (song), 1980, by Kurtis Blow
"Break" (1984), by Play Dead
"Break" (1995), by The Gyres
"Break" (1996), by Staind from the album Tormented
"Break" (1998), by Fugazi from the album End Hits
"Break" (2002), by Jurassic 5 from the album Power in Numbers
"Break" (2006), by Republic of Loose from the album Aaagh!
"Break" (2006), by The Cinematics
"Break" (2008), by Alanis Morissette from the album Flavors of Entanglement
"Break" (Three Days Grace song) (2009)
"Break" (2013), by Hostyle Gospel from the album Desperation
"Break" (2015), by Katherine McPhee from the album Hysteria
"Break" (2015), by No Devotion from the album Permanence
"Break" (Kero Kero Bonito song), 2016
Other media
Break.com, a humor website
Other uses
Big break (jargon), a circumstance which allows an actor or musician to "break into" the industry and achieve fame
Bone fracture, a medical condition in which there is a break in the continuity of the bone
Commercial break, in television and radio
Prison escape, the act of an inmate leaving prison through unofficial or illegal ways
Section break, in typesetting
Break, an air combat maneuver; see Basic fighter maneuvers
Breaks, Virginia
Psychotic break
See also
Brake (disambiguation)
Break a leg, an expression in theatre
Breakdancing
Breaking (disambiguation)
Breakwater (structure)
Broke (disambiguation)
Burglary, sometimes called a "break-in"
Winter break (disambiguation) |
490528 | https://en.wikipedia.org/wiki/VLC%20media%20player | VLC media player | VLC media player (previously the VideoLAN Client and commonly known as simply VLC) is a free and open-source, portable, cross-platform media player software and streaming media server developed by the VideoLAN project. VLC is available for desktop operating systems and mobile platforms, such as Android, iOS and iPadOS. VLC is also available on digital distribution platforms such as Apple's App Store, Google Play, and Microsoft Store.
VLC supports many audio and video compression methods and file formats, including DVD-Video, Video CD and streaming protocols. It is able to stream media over computer networks and can transcode multimedia files.
The default distribution of VLC includes many free decoding and encoding libraries, avoiding the need for finding/calibrating proprietary plugins. The libavcodec library from the FFmpeg project provides many of VLC's codecs, but the player mainly uses its own muxers and demuxers. It also has its own protocol implementations. It also gained distinction as the first player to support playback of encrypted DVDs on Linux and macOS by using the libdvdcss DVD decryption library; however, this library is legally controversial and is not included in many software repositories of Linux distributions as a result. It is available on iOS under the MPLv2.
History
The VideoLAN software originated as an academic project in 1996. VLC used to stand for "VideoLAN Client" when VLC was a client of the VideoLAN project; since VLC is no longer merely a client, that initialism no longer applies. It was intended to consist of a client and server to stream videos from satellite dishes across a campus network. Originally developed by students at the École Centrale Paris, it is now developed by contributors worldwide and is coordinated by VideoLAN, a non-profit organization. Rewritten from scratch in 1998, it was released under the GNU General Public License on February 1, 2001, with authorization from the headmaster of the École Centrale Paris. The functionality of the server program, VideoLAN Server (VLS), has mostly been subsumed into VLC and has been deprecated. The project name was changed to VLC media player because there is no longer a client/server infrastructure.
The cone icon used in VLC is a reference to the traffic cones collected by École Centrale's Networking Students' Association. The cone icon design was changed from a hand drawn low resolution icon to a higher resolution CGI-rendered version in 2006, illustrated by Richard Øiestad.
In 2007 the VLC project decided, for license-compatibility reasons, not to upgrade to the just-released GPLv3. After 13 years of development, version 1.0.0 of VLC media player was released on July 7, 2009. Work began on VLC for Android in 2010, and it has been available for Android devices on the Google Play store since 2011. In September 2010, a company named "Applidium" developed a VLC port for iOS under the GPLv2 with the endorsement of the VLC project, which was accepted by Apple for its App Store. In January 2011, after VLC developer Rémi Denis-Courmont complained to Apple about the licensing conflict between VLC's GPLv2 and the App Store's policies, VLC was withdrawn from the Apple App Store. Subsequently, in October 2011, the VLC authors began to relicense the engine parts of VLC from the GPL-2.0-or-later to the LGPL-2.1-or-later to achieve better license compatibility, for instance with the Apple App Store. In July 2013 the VLC application was resubmitted to the iOS App Store under the MPL-2.0. Version 2.0.0 of VLC media player was released on February 18, 2012. The version for the Windows Store was released on March 13, 2014. Support for Windows RT, Windows Phone and Xbox One was added later. VLC ranks third in the sourceforge.net overall download count, with more than 3 billion downloads.
Version 3.0 had been in development for Windows, Linux and macOS since June 2016 and was released in February 2018. It contains many new features including Chromecast output support (except subtitles), hardware-accelerated decoding, 4K and 8K playback, 10-bit and HDR playback, 360° video and 3D audio, audio passthrough for HD audio codecs, Blu-ray Java menu support, and local network drive browsing.
In December 2017 the European Parliament approved a budget that funds a bug bounty program for VLC to improve the EU's IT infrastructure.
Release history
Design principles
Modular design
VLC, like most multimedia frameworks, has a very modular design which makes it easier to include modules/plugins for new file formats, codecs, interfaces, or streaming methods. VLC 1.0.0 has more than 380 modules. The VLC core creates its own graph of modules dynamically, depending on the situation: input protocol, input file format, input codec, video card capabilities and other parameters. In VLC, almost everything is a module, like interfaces, video and audio outputs, controls, scalers, codecs, and audio/video filters.
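As a loose illustration of this pattern, the following Python sketch shows capability-scored module selection; the names and scoring scheme are hypothetical and do not reflect VLC's actual internal API:

# Hypothetical sketch of capability-scored module selection; not VLC's real API.
modules = []

def register(name, capability, score, open_fn):
    # Each module advertises a capability and a priority score.
    modules.append({"name": name, "capability": capability,
                    "score": score, "open": open_fn})

def open_best(capability, arg):
    # Try candidate modules for this capability, best score first,
    # falling through when a module declines the input.
    candidates = sorted((m for m in modules if m["capability"] == capability),
                        key=lambda m: m["score"], reverse=True)
    for m in candidates:
        result = m["open"](arg)
        if result is not None:
            return m["name"], result
    raise RuntimeError("no module found for capability: " + capability)

register("demux_mp4", "demux", 100,
         lambda path: "mp4 demuxer" if path.endswith(".mp4") else None)
register("demux_raw", "demux", 1, lambda path: "raw demuxer")

print(open_best("demux", "movie.mp4"))  # -> ('demux_mp4', 'mp4 demuxer')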
Interfaces
The default GUI is based on Be API on BeOS, Cocoa for macOS, and Qt 4 for Linux and Windows, but all give a similar standard interface. The old default GUI was based on wxWidgets on Linux and Windows. VLC supports highly customizable skins through the skins2 interface, and also supports Winamp 2 and XMMS skins. Skins are not supported in the macOS version. VLC has ncurses, remote control, and telnet console interfaces. There is also an HTTP interface, as well as interfaces for mouse gestures and keyboard hotkeys.
Features
Effects (desktop version)
The desktop version of VLC media player has some filters that can distort, rotate, split, deinterlace, and mirror videos as well as create display walls or add a logo overlay during playback. It can also output video as ASCII art.
An interactive zoom feature allows magnifying into video during playback. Still images can be extracted from video at original resolution, and individual frames can be stepped through, although only in forward direction.
Playback can be gamified by splitting the picture inside the viewport into draggable puzzle pieces, where the row and column count can be set as desired.
Formats
Because VLC is a packet-based media player it plays almost all video content. Even some damaged, incomplete, or unfinished files can be played, such as those still downloading via a peer-to-peer (P2P) network. It also plays m2t MPEG transport streams (.TS) files while they are still being digitized from an HDV camera via a FireWire cable, making it possible to monitor the video as it is being played. The player can also use libcdio to access .iso files so that users can play files on a disk image, even if the user's operating system cannot work directly with .iso images.
VLC supports all audio and video formats supported by libavcodec and libavformat. This means that VLC can play back H.264 or MPEG-4 Part 2 video as well as support FLV or MXF file formats "out of the box" using FFmpeg's libraries. Alternatively, VLC has modules for codecs that are not based on FFmpeg's libraries. VLC is one of the free software DVD players that ignore DVD region coding on RPC-1 firmware drives, making it a region-free player. It does not do the same on RPC-2 firmware drives, as in these cases the region coding is enforced by the drive itself; VLC can, however, still brute-force the CSS encryption to play a foreign-region DVD on an RPC-2 drive.
VLC media player can play high-definition recordings of D-VHS tapes duplicated to a computer using . This offers another way to archive all D-VHS tapes with the DRM copy freely tag. Using a FireWire connection from cable boxes to computers, VLC can stream live, unencrypted content to a monitor or HDTV. VLC media player can display the playing video as the desktop wallpaper, like Windows DreamScene, by using DirectX, only available on Windows operating systems. VLC media player can record the desktop and save the stream as a file, allowing the user to create screencasts. On Microsoft Windows, VLC also supports the Direct Media Object (DMO) framework and can thus make use of some third-party DLLs (Dynamic-link library). On most platforms, VLC can tune into and view DVB-C, DVB-T, and DVB-S channels. On macOS the separate EyeTV plugin is required, on Windows it requires the card's BDA Drivers.
VLC can be installed or run directly from a USB flash drive or other external drive. VLC can be extended through scripting; it uses the Lua scripting language. VLC can play videos in the AVCHD format, a highly compressed format used in recent HD camcorders. VLC can generate a number of music visualization displays. The program is able to convert media files into various supported formats.
Both desktop and mobile releases are equipped with an audio equalizer.
Operating system compatibility
VLC media player is cross-platform, with versions for Windows, Android, Chrome OS, BeOS, Windows Phone, iOS, iPadOS, macOS, tvOS, OS/2, Linux, and Syllable. However, forward and backward compatibility between versions of VLC media player and different versions of OSes is not maintained over more than a few generations. 64-bit builds are available for 64-bit Windows.
Windows 8 and 10 support
The VLC port for Windows 8 and Windows 10 was backed by a crowdfunding campaign on Kickstarter to add support for a new GUI based on Microsoft's Metro design language, running on the Windows Runtime. All the existing features, including video filters, subtitle support, and an equalizer, are present in the Windows 8 version. A beta version of VLC for Windows 8 was released to the Microsoft Store on March 13, 2014. A universal app was created for Windows 8, 8.1, 10, Windows Phone 8, 8.1 and Windows 10 Mobile.
Android support
In May 2012, the VLC team stated that a version of VLC for Android was being developed. The stable release version 1.0 was made available on Google Play on December 8, 2014.
Use of VLC with other programs
Bindings
Several APIs can connect to VLC and use its functionality:
libVLC API – the VLC Core, for C and C++
VLCKit – an Objective-C framework for macOS
LibVLCSharp – cross-platform .NET bindings to libVLC (C#/F#/VB)
JavaScript API – the evolution of ActiveX API and Firefox integration
D-Bus controls
Go bindings
Python controls
Java API
DirectShow filters
Delphi/Pascal API: PasLibVlc by Robert Jędrzejczyk
Free Pascal bindings and an OOP wrapper component, via the libvlc.pp and vlc.pp units. This comes standard with the Free Pascal Compiler as of November 6, 2012.
The Phonon multimedia API for Qt and KDE applications can optionally use VLC as a backend.
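As an illustration of the bindings listed above, the Python controls can drive libVLC in a few lines. This sketch assumes the third-party python-vlc package is installed and that a playable file named example.mp4 exists in the working directory:

import time
import vlc  # python-vlc bindings around libVLC (assumed installed)

player = vlc.MediaPlayer("example.mp4")  # wraps a libVLC media player
player.play()                            # playback starts asynchronously
time.sleep(10)                           # keep the script alive while playing
player.stop()
player.release()                         # free the underlying libVLC objects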
Browser plugins
On Windows, Linux, macOS, and some other Unix-like platforms, VLC provides an NPAPI plugin, which enables users to view QuickTime, Windows Media, MP3, and Ogg files embedded in websites without using additional software. It supports many web browsers including Firefox, Mozilla Application Suite, and other Netscape plug-in based browsers; Safari, Chrome, and other WebKit based browsers; and Opera. Google used this plugin to build the Google Video Player web browser plugin before switching to use Adobe Flash.
Starting with version 0.8.2, VLC also provides an ActiveX plugin, which lets people view QuickTime (MOV), Windows Media, MP3, and Ogg files embedded in websites when using Internet Explorer.
Applications that use libVLC
VLC can handle some incomplete files and in some cases can be used to preview files being downloaded. Several programs make use of this, including eMule and KCeasy. The free/open-source Internet television application Miro also uses VLC code. HandBrake, an open-source video encoder, used to load libdvdcss from VLC Media Player. Easy Subtitles Synchronizer, a freeware subtitle editing program for Windows, uses VLC to preview the video with the edited subtitles.
Format support
Input formats
VLC can read many formats, depending on the operating system it is running on, including:
Container formats: 3GP, ASF, AVI, DVR-MS, FLV, Matroska (MKV), MIDI, QuickTime File Format, MP4, Ogg, OGM, WAV, MPEG-2 (ES, PS, TS, PVA, MP3), AIFF, Raw audio, Raw DV, MXF, VOB, RM, Blu-ray, DVD-Video, VCD, SVCD, CD Audio, DVB, HEIF, AVIF
Audio coding formats: AAC, AC3, ALAC, AMR, DTS, DV Audio, XM, FLAC, IT, MACE, MOD, Monkey's Audio, MP3, Opus, PLS, QCP, QDM2/QDMC, RealAudio, Speex, Screamtracker 3/S3M, TTA, Vorbis, WavPack, WMA (WMA 1/2, WMA 3 partially).
Capture devices: Video4Linux (on Linux), DirectShow (on Windows), Desktop (screencast), Digital TV (DVB-C, DVB-S, DVB-T, DVB-S2, DVB-T2, ATSC, Clear QAM)
Network protocols: FTP, HTTP, MMS, RSS/Atom, RTMP, RTP (unicast or multicast), RTSP, UDP, Sat-IP, Smooth Streaming
Network streaming formats: Apple HLS, Flash RTMP, MPEG-DASH, MPEG Transport Stream, RTP/RTSP ISMA/3GPP PSS, Windows Media MMS
Subtitles: Advanced SubStation Alpha, Closed Captions, DVB, DVD-Video, MPEG-4 Timed Text, MPL2, OGM, SubStation Alpha, SubRip, SVCD, Teletext, Text file, VobSub, WebVTT, TTML
Video coding formats: Cinepak, Dirac, DV, H.263, H.264/MPEG-4 AVC, H.265/MPEG-H HEVC, AV1, HuffYUV, Indeo 3, MJPEG, MPEG-1, MPEG-2, MPEG-4 Part 2, RealVideo 3&4, Sorenson, Theora, VC-1, VP5, VP6, VP8, VP9, DNxHD, ProRes and some WMV.
Digital Camcorder formats: MOD and TOD via USB.
Output formats
VLC can transcode or stream audio and video into several formats depending on the operating system, including:
Container formats: ASF, AVI, FLAC, FLV, Fraps, Matroska, MP4, MPJPEG, MPEG-2 (ES, MP3), Ogg, PS, PVA, QuickTime File Format, TS, WAV, WebM
Audio coding formats: AAC, AC-3, DV Audio, FLAC, MP3, Speex, Vorbis
Streaming protocols: HTTP, MMS, RTSP, RTP, UDP
Video coding formats: Dirac, DV, H.263, H.264/MPEG-4 AVC, H.265/MPEG-H HEVC, MJPEG, MPEG-1, MPEG-2, MPEG-4 Part 2, Theora, VP5, VP6, VP8, VP9
Legality
The VLC media player software installers for the macOS platform and the Windows platform include the libdvdcss DVD decryption library, even though this library may be legally restricted in certain jurisdictions.
United States
The VLC media player software is able to read audio and video data from DVDs that incorporate Content Scramble System (CSS) encryption, even though the VLC media player software lacks a CSS decryption license. The unauthorized decryption of CSS-encrypted DVD content or unauthorized distribution of CSS decryption tools may violate the US Digital Millennium Copyright Act. Decryption of CSS-encrypted DVD content has been temporarily authorized for certain purposes (such as documentary filmmaking that uses short portions of DVD content for criticism or commentary) under the Digital Millennium Copyright Act anticircumvention exemptions that were issued by the US Copyright Office in 2010. However, these exemptions do not change the DMCA's ban on the distribution of CSS decryption tools, including those distributed with VLC.
See also
Comparison of video player software
List of codecs
List of music software
Explanatory notes
References
External links
2001 software
Amiga media players
Applications using D-Bus
Audio software with JACK support
BeOS software
BSD software
Cross-platform free software
Free and open-source Android software
Free media players
Free video software
Linux DVD players
Linux media players
Lua (programming language)-scriptable software
Multimedia frameworks
MacOS media players
Portable software
Software DVD players
Software that uses FFmpeg
Software that uses ncurses
Software that was ported from wxWidgets to Qt
Software using the LGPL license
Solaris media players
Spoken articles
Streaming media systems
Streaming software
Video software that uses Qt
Webcams
Windows media players
Universal Windows Platform apps
Xbox One software |
491313 | https://en.wikipedia.org/wiki/SSM | SSM | SSM may refer to:
Arts and entertainment
Sakıp Sabancı Museum, an art museum in Istanbul, Turkey
SSM (band), a post punk/garage/psych rock band from Detroit, Michigan, formed in 2005
Organizations
Companies Commission of Malaysia (Suruhanjaya Syarikat Malaysia)
Federation of Trade Unions of Macedonia (Сојуз на синдикатите на Македонија)
Georgian Public Broadcasting (Sakartvelos Sazogadoebrivi Mauts'q'ebeli)
Sarawak Sovereignty Movement, Kuching, Sarawak, Malaysia
Scuola Svizzera di Milano, a Swiss international school in Milan, Italy
Shattuck-Saint Mary's, a coeducational Episcopal-affiliated boarding school in Faribault, Minnesota, US
Socialist Union of Youth (Socialistický svaz mládeže), an organization in the former Czechoslovakia
SSM Health, St. Louis, Missouri, US
Society for Social Medicine, UK
Society of the Sacred Mission, an Anglican religious order
Society of Saint Margaret, an Anglican religious order
Swedish Radiation Safety Authority (Strålsäkerhetsmyndigheten)
Swiss School of Management, a business school located at Bellinzona, Switzerland
Swiss Society for Microbiology, the professional association of Swiss microbiologists
Computing
Silicon secured memory, SPARC encryption technology
Source-specific multicast, in computer networking
Standard shadow map, in computer graphics
Medicine
Ssm6a or μ-SLPTX-Ssm6a (Scolopendra subspinipes mutilans 6), a toxin from the venom of the Chinese red-headed centipede
SSMEM1 (serine-rich single-pass membrane protein 1), a protein that in humans is encoded by the SSMEM1 gene
Sleep state misperception, a term used to classify sleep disorders
Slipped strand mispairing, a mutation process during DNA replication
Social Science & Medicine, a peer-reviewed journal
Special study module, now student selected component, an option in medical schools in the UK
Superficial spreading melanoma, a type of cancer
System status management, of emergency medical services
Other science and technology
SSM (Solid State Music) chip, a chip used in synthesizers
Scanning SQUID microscope, a magnetic current imaging system
Semi-solid metal casting, in the production of aluminium or magnesium parts
Special sensor microwave/imager, a seven-channel, four-frequency, linearly polarized passive microwave radiometer system
Standard solar model, a mathematical treatment of the Sun as a spherical ball of gas in cosmology
Startup, shutdowns, and malfunctions, in potentially polluting industrial plants
Military appointments and decorations
Grand Commander of the Order of Loyalty to the Crown of Malaysia
Special Service Medal (Canada), awarded to members of the Canadian Forces
Squadron sergeant major, in some Commonwealth armies
Staff sergeant major, in some Commonwealth armies
Weaponry
Ship-to-ship missile
Surface-to-ship missile
Surface-to-surface missile
SSM-1, the Japanese Type 88 Surface-to-Ship Missile
SSM-1B, the Japanese Type 90 Ship-to-Ship Missile
SSM-700K Haeseong, a South Korean ship-launched sea-skimming surface-to-surface anti-ship cruise missile
SSM-A-5 Boojum, a United States Air Force cruise missile
SSM-A-23 Dart, an anti-tank guided missile developed for the United States Army
SSM-N-8 Regulus, a United States Navy cruise missile, 1955–1964
SSM-N-9 Regulus II, a United States Navy cruise missile
Other uses
Honda SSM, a concept car introduced at the 1995 Tokyo Motor Show
Sam Schmidt Motorsports, an auto racing team
Self-supporting minister, an unpaid priest in the Church of England and Church of Ireland
Single Supervisory Mechanism, whereby the European Central Bank supervises banks' stability
Special Safeguard Mechanism, a World Trade Organization tool that allows developing countries to raise tariffs temporarily to deal with import surges or price falls
Strategic service management, optimization of a company's post-sale service
Soft systems methodology, a problem-solving method
Same-sex marriage, a marriage between two people of the same gender |
492313 | https://en.wikipedia.org/wiki/Seth%20Schoen | Seth Schoen | Seth David Schoen (born September 27, 1979) is senior staff technologist for the Electronic Frontier Foundation, a technology civil rights organisation, and has been actively involved in discussing digital copyright law and encryption since the 1990s. He is an expert in trusted computing.
In February 2008, Schoen collaborated with a Princeton research group led by Edward Felten that discovered a vulnerability of DRAM that undermined the basic assumptions of computer encryption security. In October 2005, Schoen led a small research team at EFF to decode the tiny tracking dots hidden in the printouts of some laser printers.
Schoen attended Northfield Mount Hermon School in Northfield, Massachusetts, from 1993 to 1997. While attending UC Berkeley, Schoen founded Californians for Academic Freedom to protest the loyalty oath the state made university employees swear. Schoen later worked for Linuxcare, where he developed the Linuxcare Bootable Business Card. After he left Linuxcare, he forked the project to create the LNX-BBC rescue system, of which he is a lead developer. Schoen was a board member and the Secretary of the Peer-Directed Projects Center, a Texas-based non-profit corporation, until he stepped down in November 2006.
Schoen is the author of the DeCSS haiku.
References
External links
Personal homepage
Vitanuova, Seth's weblog
DeCSS haiku
The History of the DeCSS Haiku
Californians for Academic Freedom (archived)
1979 births
American bloggers
Copyright activists
Living people
Northfield Mount Hermon School alumni
University of California, Berkeley alumni
21st-century American non-fiction writers |
494671 | https://en.wikipedia.org/wiki/Optus%20Television | Optus Television | Optus Television is the cable television division of Australian telecommunications company Optus.
History
Its immediate predecessor was Optus Vision, a joint venture between Optus and Continental Cablevision, with small shareholdings by media companies Publishing and Broadcasting Limited and Seven Network. It was founded to handle residential cable television and local telephony, while its parent concentrated on corporate, long-distance, satellite and interstate communications. It used a hybrid fibre coaxial cable network to connect homes to its network, and subsequently added broadband cable internet access.
Optus Vision used the Optus telecommunications licence as its authority to build a cable network, which it deployed in Sydney, Melbourne and Brisbane.
Optus's main competition, especially in the metropolitan areas, was Foxtel, a joint venture between Telstra and News Corporation.
Optus negotiated exclusive access to AFL, rugby league, and other sports, and had exclusive access to Disney Channel, ESPN and MTV Australia, but lacked the general entertainment channels Foxtel had. It commenced its cable TV service in September 1995, one month before Foxtel, delivering a limited number of channels to a small number of subscribers, all of whom were connected via its Blacktown Exchange.
From 1995 to 1997, the Super League War waged between the two consortia over lucrative rugby league rights.
In March 1997, Optus bought out the other shareholders in exchange for equity in itself, ahead of its stock-market float.
In 2002, it let go of some of its exclusive content contracts, replacing them with non-exclusive ones. MTV Australia, Disney Channel, ESPN and the premium Movie Network channels all became available on Foxtel.
In 2009, Optus Television stopped offering service to new subscribers, but maintained it for existing subscribers.
From February 2011, Optus again offered Optus TV featuring Foxtel including IQ2 services.
On 14 February 2016, Singtel-Optus renewed their contract with Astro-Fetch TV in preparation for the 2016/17 EPL season.
Satellite broadcasting
Optus, along with Austar, had a joint venture in the use of satellite broadcasting for the delivery of subscription television. Foxtel had not previously offered a satellite service until purchasing the satellite subscribers from Australis Media within its service area. Until 2004, Foxtel was a customer of the Austar/Optus joint venture.
Optus utilised this joint venture to initially trial and subsequently offer a basic satellite service, named VIP. Access to the service was very limited. It was also offered to Norfolk Island, and some smart cards were enabled so that some residents (who had the required satellite receiving equipment) could take part in a trial of the service. The ability to offer the service came about because Optus supplied a large number of channels to East Coast Television (now a part of Austar). After Optus axed the VIP service, it also sold its share in the joint venture to Austar.
In 2004, the roles reversed and Austar became the customer to Foxtel for satellite delivery.
Sports programming
Until 2002, Optus did not offer the Fox Sports sporting channels on its service as Foxtel and Austar did, instead offering channels from Sportsvision (later C7 Sport) and ESPN.
During the Super League/ARL war, the Optus sports channels had the rights to the ARL competition and the Super League rights were held by Fox Sports.
Seven bought Sportsvision, which became the Optus- and Austar-exclusive C7 Sport and progressively lost sporting rights to Fox Sports. During that time Foxtel granted Optus an "NRL Channel", screening all of the NRL matches that had previously been shown exclusively on the Foxtel platform.
C7 Sport for some time had attempted to access the Foxtel platform for their service – however Foxtel were hesitant to accede to the request, with one exception being the 2000 Sydney Olympics, where C7 offered two extra channels dedicated to Olympics coverage.
In 2000, Seven and C7 Sport lost the AFL rights to a Nine/Ten and Foxtel-based consortium leaving C7 with only the Olympics and 6 Nations Rugby rights of major substance.
In March 2002, the commencement of the new AFL broadcasting deal with Foxtel led Optus and Austar to drop C7 Sport from their services, leading to the demise of the channel. Optus replaced the C7 channels with an Optus rebadged version of Fox Sports. Optus dropped the "Optus Sports" name in October 2002.
The dropping of the C7 service led to Optus being a party in the unsuccessful legal action taken by the Seven Network over the demise of the C7 Sport service.
On 13 July 2016, Optus launched Optus Sport, a group of sports channels that were established to televise the Premier League.
Subscriber numbers
For some time, Optus did not explicitly release subscriber numbers for its Optus Television service, instead combining them with those of the other services offered by the division of which Optus Television is a part.
Since December 2002, subscriber numbers have dropped considerably, to almost half of the 241,000 reported at that time. Since then, Optus has repositioned its television service as a major component of bundled services, rather than a service by itself. By August 2010 only 96,000 subscribers remained.
Optus iTV trials
During 2002–2003, Optus trialled interactive digital television over part of its Sydney network. The service, known as Optus iTV, had been under construction since 2000. The service was unique to Optus and received a good deal of positive consumer feedback. The iTV service utilised the Liberate platform instead of OpenTV, as used by Foxtel Digital. The trials were cancelled by Optus after the Content Supply Agreement with Foxtel was reached. One byproduct of the new agreement was the re-engineering of the Optus iTV broadcast centre in Macquarie Park to become the new broadcast centre for Foxtel.
A notable difference between Optus iTV and Foxtel Digital was that the Optus system used the same HFC cable network both for delivery and for the return path, meaning no additional hardware or service was required for this return path. By contrast, the Foxtel Digital system at the time relied on a telephone connection for the return path. The advantage of this system is that it is platform neutral, meaning that the same telephone-based return path can be used for both cable and satellite installations.
The Optus iTV system also allowed near video on demand (NVOD), not true video on demand: featured content was broadcast on multiple channels with staggered start times, making it available at frequent intervals. Foxtel Digital now provides near video on demand as well. Some newer Foxtel services offer a "video on demand" capability for a limited range of content, with various transmission paths offering a broader VOD version.
Other potential features of Optus iTV included e-mail and walled garden Internet access.
Optus TV featuring Foxtel
After signing the Content Supply Agreement with Foxtel, Optus Television changed its channel line-up to reflect the offerings from Foxtel. Optus was able to maintain a number of differences between its offering and Foxtel's, so that it could meet its existing contractual obligations as well as satisfy a number of requirements placed on the organisation by the ACCC.
A number of channels that had previously been unique to Optus (in comparison with Foxtel), such as the Odyssey Channel, were removed from the Optus line-up, as they competed with channels that Foxtel offered. Other channels crossed from Optus to the Foxtel line-up, such as the Ovation Channel. Optus was required to have a number of channels that were unique to its service, though the flagged channels are now available on both platforms without any change in regulation or penalty.
Optus TV featuring Foxtel Digital
In April 2005, Foxtel granted Optus the right to carry the "Foxtel Digital" platform (one year after it became available to Foxtel subscribers). The service is resold by Optus, utilising the same equipment as Foxtel (such as the Foxtel set-top box and remote, NDS technologies for encryption, and OpenTV for interactivity delivery).
This agreement also allows for non-exclusive resale rights to the Foxtel Digital service using Satellite Delivery within Foxtel's service area.
Commencing trials in November 2005, the service became fully operational in December of the same year.
Optus is believed to be permitting wholesale access to the network so third party broadcasters can sell subscription services over Optus cable.
See also
Subscription television in Australia
References
External links
Optus
Australian subscription television services
1994 establishments in Australia
Australian brands
Companies based in Sydney
Telecommunications companies established in 1994
Mass media companies established in 1994
Telecommunications companies of Australia |
494687 | https://en.wikipedia.org/wiki/Weapons%20of%20the%20Vietnam%20War | Weapons of the Vietnam War | This article is about the weapons used in the Vietnam War, which involved the People's Army of Vietnam (PAVN) or North Vietnamese Army (NVA), National Liberation Front for South Vietnam (NLF) or Viet Cong (VC), and the armed forces of the China (PLA), Army of the Republic of Vietnam (ARVN), United States, Republic of Korea, Philippines, Thailand, and the Australian, New Zealand defence forces, and a variety of irregular troops.
Nearly all United States-allied forces were armed with U.S. weapons including the M1 Garand, M1 carbine, M14 and M16. The Australian and New Zealand forces employed the 7.62 mm L1A1 Self-Loading Rifle as their service rifle, with the occasional US M16.
The PAVN, although having inherited a variety of American, French, and Japanese weapons from World War II and the First Indochina War (aka French Indochina War), were largely armed and supplied by the People's Republic of China, the Soviet Union, and its Warsaw Pact allies. In addition, some weapons—notably anti-personnel explosives, the K-50M (a PPSh-41 copy), and "home-made" versions of the RPG-2—were manufactured in North Vietnam. By 1969 the US Army had identified 40 rifle/carbine types, 22 machine gun types, 17 types of mortar, 20 recoilless rifle or rocket launcher types, nine types of antitank weapons, and 14 anti-aircraft artillery weapons used by ground troops on all sides. Also in use, primarily by anti-communist forces, were 24 types of armored vehicles and self-propelled artillery, and 26 types of field artillery and rocket launchers.
Communist forces and weapons
During the early stages of their insurgency, the Viet Cong mainly sustained itself with captured arms (often of American manufacture) or crude, self-made weapons (e.g. copies of the US Thompson submachine gun and shotguns made of galvanized pipes). Most arms were captured from poorly defended ARVN militia outposts.
Communist forces were principally armed with Chinese and Soviet weaponry though some VC guerrilla units were equipped with Western infantry weapons either captured from French stocks during the first Indochina war, such as the MAT-49, or from ARVN units or requisitioned through illicit purchase.
In the summer and fall of 1967, all Viet Cong battalions were reequipped with arms of Soviet design, such as the AK-47 assault rifle and the RPG-2 anti-tank weapon. Their weapons were principally of Chinese or Soviet manufacture. In the period up to the conventional phase of the 1970s, the Viet Cong and NVA were primarily limited to mortars, recoilless rifles and small arms, and had significantly lighter equipment and firepower than the US arsenal, relying on ambushes and superior stealth, planning, marksmanship and small-unit tactics to counter the disproportionate US technological advantage.
Many divisions within the NVA incorporated armoured and mechanised battalions, equipped with the Type 59 tank, the BTR-60 and Type 60 artillery, and following the Tet Offensive the NVA rapidly altered its war doctrines and integrated these units into a mobile combined-arms force. The North Vietnamese used both amphibious tanks (such as the PT-76) and light tanks (such as the Type 62) during the conventional phase. Experimental Soviet equipment also began to be used against ARVN forces at the same time, including man-portable air-defense systems such as the SA-7 Grail and anti-tank missiles such as the AT-3 Sagger. By 1975 the NVA had fully transformed away from the mobile light-infantry strategy and the people's war concept used against the United States.
US weapons
The American M16 rifle and XM177 carbine, which both replaced the M14, were lighter and considered more accurate than the AK-47, but in Vietnam the M16 was prone to "failure to extract", in which the spent cartridge case remained stuck in the chamber after a round was fired, preventing the next round from feeding and jamming the gun. This was ultimately traced to an inadequately tested switch in propellants from DuPont's proprietary IMR 4475 to Olin's WC 846, which Army Ordnance had ordered out of concern for standardization and mass-production capacity.
The heavily armored, 90 mm gun M48A3 'Patton' tank saw extensive action during the Vietnam War and over 600 were deployed with U.S. forces. They played an important role in infantry support though there were a few tank versus tank battles. The M67A1 flamethrower tank (nicknamed the Zippo) was an M48 variant used in Vietnam. Artillery was used extensively by both sides but the Americans were able to ferry the lightweight 105 mm M102 howitzer by helicopter to remote locations on quick notice. With its range, the Soviet 130 mm M-46 towed field gun was a highly regarded weapon and used to good effect by the PAVN. It was countered by the long-range, American 175 mm M107 Self-Propelled Gun.
The United States had air superiority though many aircraft were lost to surface-to-air missiles and anti-aircraft artillery. U.S. airpower was credited with breaking the siege of Khe Sanh and blunting the 1972 Easter Offensive against South Vietnam. At sea, the U.S. Navy had the run of the coastline, using aircraft carriers as platforms for offshore strikes and other naval vessels for offshore artillery support. Offshore naval fire played a pivotal role in the Battle of Huế in February 1968, providing accurate fire in support of the U.S. counter-offensive to retake the city.
The Vietnam War was the first conflict that saw wide-scale tactical deployment of helicopters. The Bell UH-1 Iroquois, nicknamed "Huey", was used extensively in counter-guerrilla operations both as a troop carrier and a gunship. In the latter role it was outfitted with a variety of armaments, including M60 machine guns, multi-barreled 7.62 mm Miniguns and unguided air-to-surface rockets. The Hueys were also successfully used in MEDEVAC and search and rescue roles. Two aircraft which were prominent in the war were the AC-130 "Spectre" gunship and the UH-1 "Huey" gunship. The AC-130 was a heavily armed ground-attack variant of the C-130 Hercules transport plane; it was used to provide close air support, air interdiction and force protection. The AC-130H "Spectre" was armed with two 20 mm M61 Vulcan cannons, one Bofors 40mm autocannon, and one 105 mm M102 howitzer. The Huey is a military helicopter powered by a single turboshaft engine, and approximately 7,000 UH-1 aircraft saw service in Vietnam. Ground forces also had access to B-52s, F-4 Phantom IIs and other aircraft to deliver napalm, white phosphorus, tear gas and chemical weapons. Aircraft ordnance used during the war included precision-guided munitions, cluster bombs, and napalm, a thickening/gelling agent generally mixed with petroleum or a similar fuel for use in incendiary devices, employed initially against buildings and later primarily as an anti-personnel weapon, as it sticks to skin and can burn down to the bone.
The M18A1 Claymore, a command-detonated, directional anti-personnel mine, was widely used; on detonation it projects 700 steel pellets into its kill zone.
Weapons of the South Vietnamese, U.S., South Korean, Australian, Philippine, and New Zealand Forces
Hand combat weapons
L1A1 and L1A2 bayonets – used on L1A1 Self-Loading Rifle
M1905 bayonet – used on the M1 Garand.
M1917 bayonet – used on various shotguns.
M1 Bayonet – used on the M1 Garand.
M3 fighting knife
M4 bayonet – used on the M1 and M2 Carbine.
M5 bayonet – used on the M1 Garand.
M6 bayonet – used on the M14.
M7 Bayonet – used on the M16.
Ka-Bar Utility/fighting Knife – used by the US Army, Navy and Marine Corps.
Gerber Mark II U.S. Armed Forces
Randall Made Knives – personally purchased by some US soldiers.
M1905, M1917, M1 and Lee Enfield bayonets cut down and converted into fighting knives.
Bow – used by US Mobile Riverine Force.
Crossbow – used by South Vietnamese Montagnards
Pistols and revolvers
Colt M1911A1 – standard US and ARVN sidearm.
Colt Commander – used by US military officers and US Special forces.
Browning Hi-Power – used by Australian and New Zealand forces (L9 pistol). Also used on an unofficial basis by US reconnaissance and Special Forces units.
Colt Model 1903 Pocket Hammerless – carried by US military officers. Replaced by the Colt Commander in the mid-1960s
Colt Detective Special – .38 Special revolver, used by some ARVN officers
Colt Police Positive Special – .38 Special revolver, used by USAF and tunnel rats
High Standard HDM – Integrally suppressed .22LR handgun, supplemented by the Mark 22 Mod 0 in the later stages of the war.
Ingram MAC-10 – automatic pistol used by US special operations forces.
Luger P08 – CIA provided pistol
M1917 revolver – .45 ACP revolver used by the South Vietnamese and US forces during the beginning of the war alongside the Smith & Wesson Model 10. Used rather prominently by tunnel rats.
Quiet Special Purpose Revolver – .40 caliber revolver used by tunnel rats.
Smith & Wesson Model 10 – .38 Special revolver used by ARVN, by US Army and USAF pilots and by tunnel rats
Smith & Wesson Model 12 – .38 Special revolver carried by US Army and USAF pilots.
Smith & Wesson Model 15 – .38 Special revolver carried by USAF Security Police Units.
Colt Python – .357 Magnum revolver carried by MACVSOG.
Smith & Wesson Model 27 – .357 Magnum revolver carried by MACVSOG.
Smith & Wesson Mark 22 Mod.0 "Hush Puppy" – Suppressed pistol used by US Navy SEALs and other U.S. special operations forces.
Walther P38 – CIA provided pistol
Infantry rifles
L1A1 Self-Loading Rifle – used by Australian and New Zealand soldiers in Vietnam
M1 Garand – used by the South Vietnamese and South Koreans
M1, M1A1, & M2 Carbine – used by the South Vietnamese Military, Police and Security Forces, South Koreans, U.S. military, and Laotians supplied by the U.S.
M14, M14E2, M14A1 – issued to most U.S. troops from the early stages of the war until 1967–68, when it was replaced by the M16.
M16, XM16E1, and M16A1 – M16 was issued in 1964, but due to reliability issues, it was replaced by the M16A1 in 1967 which added the forward assist and chrome-lined barrel to the rifle for increased reliability.
CAR-15 – carbine variant of the M16 produced in very limited numbers, fielded by special operations early on. Later supplemented by the improved XM177.
XM177 (Colt Commando)/GAU-5 – further development of the CAR-15, used heavily by MACV-SOG, the US Air Force, and US Army.
Stoner 63 – used by US Navy SEALs and USMC.
T223 – a copy of the Heckler & Koch HK33 built under license by Harrington & Richardson, used in small numbers by SEAL teams. Even though the empty H&R T223 was 0.9 pounds (0.41 kg) heavier than an empty M16A1, the weapon had a forty-round magazine available for it, and this made it attractive to the SEALs.
MAS-36 rifle – used by South Vietnamese militias
AK-47, AKM and Type 56 – Captured rifles were used by South Vietnamese and U.S forces.
Sniper/marksman rifles
M1C/D Garand and MC52 – used by CIA advisors, the USMC and the US Navy early in the war. Approximately 520 were supplied to the ARVN and 460 to the Thai forces.
M1903A4 Springfield – used by the USMC early in the war, replaced by the M40.
M21 Sniper Weapon System – sniper variant of the M14 rifle used by the US Army.
M40 (Remington Model 700) – bolt-action sniper rifle meant to replace the M1903A4 Springfield rifle and Winchester Model 70; used by the USMC
Parker-Hale M82 – used by ANZAC forces
Winchester Model 70 – used by the USMC
Mosin Nagant – used by South Vietnamese militias
Submachine guns
Beretta M12 – limited numbers were used by U.S. Embassy security units.
Carl Gustaf m/45 – used by Navy SEALs in the beginning of the war, but later replaced by the Smith & Wesson M76 in the late 1960s. Significant numbers were also utilized by MAC-V-SOG, the South Vietnamese, and limited numbers were used in Laos by advisors, and Laotian fighters.
Smith & Wesson M76 – copy of the Carl Gustaf m/45. Few were actually shipped to Navy SEALs fighting in Vietnam.
F1 submachine gun – replaced the Owen Gun in Australian service.
M3 Grease gun – standard U.S. military submachine gun, also used by the South Vietnamese
M50/55 Reising – limited numbers were used by MACVSOG and other irregular forces.
Madsen M-50 – used by South Vietnamese forces, supplied by the CIA.
MAS-38 submachine gun – used by South Vietnamese militias.
MAT-49 submachine gun – used by South Vietnamese militias. Captured models were used in limited numbers
MP 40 submachine gun – used by South Vietnamese forces, supplied by the CIA.
Owen Gun – standard Australian submachine-gun in the early stages of the war, later replaced by the F1.
Sten submachine gun – used by US special operations forces, often with a suppressor mounted.
Sterling submachine gun – used by Australian Special Air Service Regiment and other special operations units.
Thompson submachine gun – used often by South Vietnamese troops, and in small quantities by US artillery and helicopter units.
Uzi – used by special operations forces and some South Vietnamese, supplied from Israel.
Shotguns
Shotguns were used as individual weapons during jungle patrols; infantry units were authorized a shotgun by TO&E (Table of Organization & Equipment). Shotguns were not general issue to all infantrymen, but were select-issue weapons, allocated on a basis such as one per squad.
Ithaca 37 – pump-action shotgun used by the United States and ARVN.
Remington Model 10 – pump-action shotgun used by the United States.
Remington Model 11-48 – semi-automatic shotgun used by US Army.
Remington Model 31 – pump-action shotgun used by the US Army, the SEALs and the ARVN.
Remington Model 870 – pump-action shotgun primary shotgun used by Marines, Army and Navy after 1966.
Remington 7188 – experimental select fire shotgun, withdrawn due to lack of reliability. Used by US Navy SEALs
Savage Model 69E – pump-action shotgun used by the US Army.
Savage Model 720 – semi-automatic shotgun.
Stevens Model 77E – pump-action shotgun used by Army and Marine forces. Almost 70,000 Model 77Es were procured by the military for use in SE Asia during the 1960s. Also very popular with the ARVN because of its small size.
Stevens Model 520/620
Winchester Model 1912 – used by USMC.
Winchester Model 1200 – pump-action shotgun used by the US Army.
Winchester Model 1897 – used by the Marines during the early stages of the war.
Machine guns
M60 machine gun – standard General-purpose machine gun for US, ANZAC, and ARVN forces throughout the war.
Colt Machine Gun – experimental light machine gun deployed by SEAL Team 2 in 1970.
M1918 Browning Automatic Rifle – used by the ARVN during the early stages of the war, as well as many that were airdropped into Laos and used by Laotian fighters.
FM 24/29 light machine gun – used by South Vietnamese militias
RPD machine gun (and Type 56) – captured and used by reconnaissance teams of Mobile Strike Forces, MAC-V-SOG and other special operation forces. Also commonly modified to cut down the barrel.
Stoner M63A Commando & Mark 23 Mod.0 – used by Navy SEALs and tested by Force Recon.
M134 Minigun – 7.62 mm vehicle mounted machine gun (rare)
M1917 Browning machine gun – .30cal heavy machine gun issued to the ARVN and also in limited use by the U.S. Army.
M1919 Browning machine gun (and variants such as the M37) – vehicle-mounted machine gun, also still used by many South Vietnamese infantry units.
M73 machine gun – tank mounted machine gun.
Browning M2HB .50cal Heavy Machine Gun
Grenades and mines
AN-M8 – white smoke grenade
C4 explosive
Mark 2 fragmentation grenade
M1 smoke pot
M26 fragmentation grenade and many subvariants
M59 and M67 fragmentation grenade
M6/M7-series riot control grenades – Used to clear NVA/VC out of caves, tunnels and buildings or stop a pursuer.
AN/M14 TH3 thermite grenade – Incendiary grenade used to destroy equipment and as a fire-starting device.
M15 and M34 smoke grenades – filled with white phosphorus, which ignites on contact with air and creates thick white smoke. Used for signalling and screening purposes, as well as an anti-personnel weapon in enclosed spaces, as the burning white phosphorus would rapidly consume any oxygen, suffocating the victims.
M18 grenade Smoke Hand Grenade – Signaling/screening grenade available in red, yellow, green, and purple.
V40 Mini-Grenade
OF 37 grenade and DF 37 grenade, French grenades used by the ARVN in the 1950s
XM58 riot control grenade – A miniature riot control grenade used by MACVSOG and Navy SEALs.
M14 mine – anti-personnel blast mine
M15 mine – anti-tank mine
M16 mine – bounding anti-personnel fragmentation mine
M18/M18A1 Claymore – command-detonated directional anti-personnel mine
M19 mine – anti-tank mine
Grenade and Rocket Launchers
M1/M2 rifle grenade adapters – used to convert a standard fragmentation grenade (M1) or smoke grenade (M2) into a rifle grenade in conjunction with the M7 grenade launcher.
M7 and M8 rifle grenade launcher – rifle grenade launcher used with respectively the M1 Garand and the M1 carbine, used by the South Vietnamese. Could fire the M9 and M17 rifle grenades.
M31 HEAT rifle grenade – Used primarily by the U.S. Army before the introduction of the M72 LAW. Fired from the M1 Garand and M14 Rifle.
M79 Grenade Launcher – primary U.S. grenade launcher used by all branches of the US military, as well as ANZAC forces and the ARVN.
China Lake Grenade Launcher – pump action weapon used in very small numbers.
XM148 – experimental underbarrel 40mm grenade launcher that could be attached to the M16 rifle or XM177 carbine. Withdrawn due to safety reasons.
M203 grenade launcher – single-shot 40mm underslung grenade launcher designed to attach to a M16 rifle (or XM177 carbine, with modifications to the launcher). First tested in combat April 1969.
Mark 18 Mod 0 grenade launcher – Hand-cranked, belt-fed, 40x46mm grenade launcher used by the US Navy.
Mark 19 grenade launcher – Automatic, belt-fed, 40x53mm grenade launcher.
Mk 20 Mod 0 grenade launcher – Automatic, belt-fed, 40x46mm grenade launcher. Primarily used by riverine crews but also used by Air Force Special Operations.
XM174 grenade launcher – Automatic, belt-fed, 40x46mm grenade launcher used mainly by the US Army.
Bazooka – The M9 variant was supplied to the ARVN during the early years of the war, while the M20 "Super Bazooka" was used by the USMC and the ARVN until the full introduction of the M67 90mm recoilless rifle and of the M72 LAW.
M72 LAW – 66mm anti-tank rocket launcher.
XM202 – experimental four-shot 66mm incendiary rocket launcher.
FIM-43 Redeye MANPADS (Man-Portable Air-Defence System) – shoulder-fired heat-seeking anti-air missile, used by the US Army and USMC.
BGM-71 TOW – wire-guided anti-tank missile
Flamethrowers
M2A1-7 and M9A1-7 flamethrowers
Infantry support weapons
M18 recoilless rifle – 57mm shoulder-fired/tripod mounted recoilless rifle, used by the ARVN early in the war.
M20 recoilless rifle – 75mm tripod/vehicle-mounted recoilless rifle, used by US and ARVN forces early in the war.
M67 recoilless rifle – 90mm shoulder-fired anti-tank recoilless rifle, used by the US Army, US Marine Corps, ANZAC and ARVN selected forces.
M40 recoilless rifle – 106mm tripod/vehicle-mounted recoilless rifle.
M2 mortar – 60mm mortar, used in conjunction with the lighter but less accurate and lower-range M19 mortar.
M19 mortar – 60mm mortar, used in conjunction with the older, heavier M2 mortar.
Brandt Mle 27/31 – 81mm mortar, used by ARVN forces
M1 mortar – 81mm mortar, used by ARVN forces.
M29 mortar – 81mm mortar, used by US and ARVN forces.
L16A1 mortar – 81mm, used by ANZAC forces.
82-BM-37 – captured 82mm mortar, few used by USMC with US rounds.
M30 mortar – 107mm mortar, used by US and ARVN forces.
M98 Howtar – variant of the latter mounted on an M116 howitzer carriage.
Artillery
M55 quad machine gun – used to defend US Army bases and on vehicles
Oerlikon 20 mm cannon – used on riverine crafts
Bofors 40 mm gun – used on riverine crafts
105 mm Howitzer M101A1/M2A1
105 mm Howitzer M102
155 mm Howitzer M114
M53 Self-propelled 155mm gun
M55 Self-propelled 8-inch howitzer
M107 Self-propelled 175mm gun
M108 Self-propelled 105 mm howitzer
M109 Self-propelled 155 mm howitzer
M110 Self-propelled 8-inch howitzer
75mm Pack Howitzer M1
L5 pack howitzer – 105 mm pack howitzer used by Australia and New Zealand
MIM-23 Hawk – medium-range surface to air missile used in very small quantities by the US Marines.
Artillery ammunition types
HE (High Explosive) – standard artillery round.
High-explosive anti-tank round – fired by 105mm guns.
White phosphorus – used for screening or incendiary purposes.
Smoke shells – used for screening.
Leaflet shell
Beehive flechette rounds – antipersonnel rounds.
Improved Conventional Munition – antipersonnel shell with sub-munitions.
Aircraft
(listed alphabetically by modified/basic mission code, then numerically in ascending order by design number/series letter)
A-1 Skyraider – ground attack aircraft
A-3 Skywarrior – carrier-based bomber
A-4 Skyhawk – carrier-based strike aircraft
A-6 Intruder – carrier-based all weather strike aircraft
A-7 Corsair II – carrier-based strike aircraft
A-26 Invader – light bomber
A-37 Dragonfly – ground attack aircraft
AC-47 Spooky – gunship
AC-119G "Shadow" – gunship
AC-119K "Stinger" – gunship
AC-130 "Spectre" – gunship
AU-23 Peacemaker – ground attack aircraft
AU-24 Stallion – ground attack aircraft
B-52 Stratofortress – heavy bomber
B-57 Canberra – medium bomber
Canberra B.20 – Royal Australian Air Force medium bomber
C-1 Trader – cargo/transport aircraft
C-2 Greyhound – cargo/transport aircraft
C-5 Galaxy – strategic lift cargo aircraft
C-7 Caribou – tactical cargo aircraft, used by the U.S. Air Force, the Royal Australian Air Force and the South Vietnamese Air Force
C-46 Commando – cargo/transport aircraft
C-47 – cargo/transport aircraft
C-54 – transport aircraft
C-97 Stratofreighter – cargo/transport aircraft
C-119 Boxcar – cargo/transport aircraft
C-121 Constellation – transport aircraft
C-123 Provider – cargo/transport aircraft
C-124 Globemaster II – cargo/transport aircraft
C-130 Hercules – cargo/transport plane
C-133 Cargomaster – cargo/transport aircraft
C-141 Starlifter – strategic cargo aircraft
E-1 Tracer – carrier-based airborne early warning (AEW) aircraft
E-2 Hawkeye – carrier-based airborne early warning (AEW) aircraft
EA-3 Skywarrior – carrier-based tactical electronic reconnaissance aircraft
EA-6B Prowler – carrier-based electronic warfare & attack aircraft
EB-57 Canberra – tactical electronic reconnaissance aircraft
EB-66 – tactical electronic reconnaissance aircraft
EC-121 – radar warning or sensor relay aircraft
EF-10 Skyknight – tactical electronic warfare aircraft
EKA-3B Skywarrior – carrier-based tactical electronic warfare aircraft
F-4 Phantom II – carrier and land based fighter-bomber
F-5 Freedom Fighter – light-weight fighter used in strike aircraft role
F8F Bearcat – piston fighter-bomber, used by the South Vietnamese Air Force until 1964.
F-8 Crusader – carrier and land based fighter-bomber
F-14 Tomcat – carrier-based fighter, made its combat debut during Operation Frequent Wind, the evacuation of Saigon, in April 1975.
F-100 Super Sabre – fighter-bomber
F-102 Delta Dagger – fighter
F-104 Starfighter – fighter
F-105 Thunderchief – fighter-bomber
F-111 Aardvark – medium bomber
HU-16 Albatross – rescue amphibian
KA-3 Skywarrior – carrier-based tactical aerial refueler aircraft
KA-6 Intruder – carrier-based tactical aerial refueler aircraft
KB-50 Superfortress – aerial refueling aircraft
KC-130 Hercules – tactical aerial refueler/assault transport aircraft
KC-135 Stratotanker – aerial refueling aircraft
O-1 Bird Dog – light observation airplane
O-2 Skymaster – observation aircraft
OV-1 Mohawk – battlefield surveillance and light strike aircraft
OV-10 Bronco – light attack/observation aircraft
P-2 Neptune – maritime patrol aircraft
P-3 Orion – maritime patrol aircraft
P-5 Marlin – antisubmarine seaplane
QU-22 Pave Eagle (Beech Bonanza) – electronic monitoring signal relay aircraft
RA-3B Skywarrior – carrier-based tactical photographic reconnaissance aircraft
RA-5C Vigilante – carrier-based tactical photographic reconnaissance aircraft
RB-47 Stratojet – photographic reconnaissance aircraft
RB-57 Canberra – tactical photographic reconnaissance aircraft
RB-66 – tactical photographic reconnaissance aircraft
RF-4 Phantom II – carrier and land-based tactical photographic reconnaissance aircraft
RF-8 Crusader – carrier-based tactical photographic reconnaissance aircraft
RF-101 Voodoo – tactical photographic reconnaissance aircraft
RT-33A – reconnaissance jet
S-2 Tracker – carrier-based anti-submarine warfare (ASW) aircraft
SR-71 Blackbird – strategic reconnaissance aircraft
TF-9J Cougar – fast forward air controller
T-28 Trojan – trainer/ground attack aircraft
T-41 Mescalero – trainer aircraft
U-1 Otter – transport aircraft
U-2 – reconnaissance aircraft
U-6 Beaver – utility aircraft
U-8 Seminole – transport/electronic survey aircraft
U-10 Helio Courier – utility aircraft
U-17 Skywagon – utility aircraft
U-21 Ute – liaison and electronic survey
YO-3 Quiet Star – light observation airplane
Helicopters
(listed numerically in ascending order by design number/series letter, then alphabetically by mission code)
UH-1 Iroquois "Huey" – utility transport and gunship helicopter
AH-1G HueyCobra – attack helicopter
AH-1J SeaCobra – twin-engine attack helicopter
UH-1N Iroquois – twin-engine utility helicopter
UH-2 Seasprite – carrier-based utility helicopter
CH-3 Sea King – long-range transport helicopter
HH-3 "Jolly Green Giant" – long-range combat search and rescue (CSAR) helicopter
SH-3 Sea King – carrier-based anti-submarine warfare (ASW) helicopter
OH-6A Cayuse "Loach" (from LOH – Light Observation Helicopter) – light transport/observation (i.e. scout) helicopter
OH-13 Sioux – light observation helicopter
UH-19 Chickasaw – utility transport helicopter
CH-21 Shawnee – cargo/transport helicopter
OH-23 Raven – light utility helicopter
CH-34 Choctaw – cargo/transport helicopter
CH-37 Mojave – cargo/transport helicopter
HH-43 Huskie – rescue helicopter
CH-46 Sea Knight – cargo/transport helicopter
CH-47 Chinook – cargo/transport helicopter
CH-53 Sea Stallion – heavy-lift transport helicopter
HH-53 "Super Jolly Green Giant" – long-range combat search and rescue (CSAR) helicopter
CH-54 Tarhe "Sky Crane" – heavy lift helicopter
OH-58A Kiowa – light transport/observation helicopter
Aircraft ordnance
GBUs
CBUs
BLU-82 Daisy cutter
Napalm
Bomb, 250 lb, 500 lb, 750 lb, 1000 lb, HE (high explosive), general-purpose
Rocket, aerial, HE (High Explosive), 2.75 inch
Aircraft weapons
M60D machine gun – 7.62mm (helicopter mount)
Minigun – 7.62 mm (aircraft and helicopter mount)
Colt Mk 12 cannon – 20 mm (aircraft mount)
M3 cannon – 20 mm (aircraft mount)
M39 cannon – 20 mm (aircraft mount)
M61 Vulcan – 20 mm (aircraft mount), M195 was used on AH-1
M197 Gatling gun – 20 mm (used on AH-1J helicopters)
M75 grenade launcher – 40 mm (helicopter mount)
M129 grenade launcher – 40 mm (helicopter mount)
AIM-4 Falcon
AIM-7 Sparrow
AIM-9 Sidewinder
AGM-12 Bullpup
AGM-22
AGM-45 Shrike
AGM-62 Walleye
AGM-78 Standard ARM
AGM-65 Maverick
Chemical weapons
Rainbow Herbicides
Agent Orange – While developed to be used as a herbicide to destroy natural obstacles and tree camouflage, it was later revealed that it posed health risks to those exposed to it.
Agent Blue – Used to destroy agricultural land that was believed to be used to grow food for the VC/NVA.
Agent White
Napalm
CS-1 riot control agent – "Teargas", used in grenades, cluster bomblets or (rarely) shells.
CN gas – "teargas"
Vehicles
In addition to cargo-carrying and troop transport roles, many of these vehicles were also equipped with weapons and sometimes armor, serving as "gun trucks" for convoy escort duties.
M274 Truck, Platform, Utility, 1/2 Ton, 4X4 – Commonly called a "Mechanical Mule".
Land Rover (short and long wheelbase) – Australian and New Zealand forces.
CJ-3B and M606 – ¼-ton jeeps
Willys M38A1 – ¼-ton jeep.
M151 – ¼-ton jeep.
Dodge M37 – ¾-ton truck.
Kaiser Jeep M715 – 1¼-ton truck.
M76 Otter – 1¼-ton amphibious cargo carrier used by USMC.
M116 Husky – 1¼-ton amphibious cargo carrier tested by USMC.
M733 Amphibious Personnel Carrier – tested by USMC.
M35 series 2½-ton 6x6 cargo truck
M135 2½-ton truck
M54 5-ton 6x6 truck
M548 – 6-ton tracked cargo carrier
M520 Goer – 4x4 8-ton cargo truck.
M123 and M125 10-ton 6x6 trucks
Other vehicles
Caterpillar D7E bulldozer – used by US Army
Various graders and bulldozers used by the USMC
ERDLator
Combat vehicles
Tanks
M24 Chaffee – light tank; main ARVN tank early in the war, used at least as late as the Tet Offensive.
M41A3 Walker Bulldog – light tank, replaced the M24 Chaffee as the main ARVN tank from 1965.
M48 Patton – main tank of the US Army and Marines throughout the war, and also used by ARVN forces from 1971.
M67 "Zippo" – flamethrower variant of the M48 Patton, used by USMC.
M551 Sheridan – Armored Reconnaissance Airborne Assault Vehicle/Light Tank, used by the US Army from 1969.
Centurion Mk 5 Main Battle Tank – used by the Australian Army, with AVLB and ARV variants.
Other armored vehicles
C15TA Armoured Truck – used by the ARVN early in the war
LVTP5 (aka AMTRACs) – amphibious tractors/landing craft used by USMC and later by RVNMD
Lynx Scout Car Mk II – used by the ARVN
M113 – APC (Armored Personnel Carrier)
M113 ACAV – Armored Cavalry Assault Vehicle
M163 Vulcan – self-propelled anti-aircraft tank
M114 – reconnaissance vehicle
M132 Armored Flamethrower
M106 mortar carrier
M2 Half Track Car
M3 Scout Car – used by South Vietnamese forces early in the war.
M3 Half-track – used by South Vietnamese forces early in the war.
M5 Half-track
M9 Half-track
Cadillac Gage V-100 Commando – replaced ARVN M8 armored cars in 1967. Also used by US forces as M706 Commando.
M8 Greyhound – used by ARVN forces early in the war.
M56 Scorpion – limited use in 1965–1966
M50 Ontos – self-propelled 106 mm recoilless rifle carrier used by the USMC until 1969.
M42 Duster – M41 based hull, with a twin 40 mm antiaircraft gun mounted on an open turret
M728 Combat Engineer Vehicle – modified M60 Patton tank equipped with dozer blade, short-barreled 165mm M135 Demolition Gun, and A-Frame crane.
M60 AVLB – armored vehicle launched bridge using M60 Patton chassis.
M51 Armored Recovery Vehicle – fielded by US Marines.
M578 light recovery vehicle
M88 Recovery Vehicle – armored recovery vehicle based on M48 chassis.
Wickham armored draisine used by the ARVN.
Naval craft
LCM-6 and LCM-8 – with several modifications:
LCMs modified as river monitors
Armored Troop Carrier
Command and Communication Boat (CCB)
other variants included helipad boats and tankers
LCVP – Landing craft vehicle personnel, some made by the French Services Techniques des Construction et Armes Navales/France Outremer and known as FOM
Swift Boat – Patrol Craft Fast (PCF)
ASPB – assault support patrol boat
PBR – Patrol Boat River, all-fiberglass boats propelled by twin water jets, used by the US Navy
Hurricane Aircat – airboat used by ARVN and US Army
Communications
Radios
The geographically dispersed nature of the war challenged existing military communications. From 1965 to the final redeployment of tactical units, numerous communications-electronics systems were introduced in Vietnam to upgrade the quality and quantity of tactical communications and replace obsolete gear:
AN/PRT-4 and PRR-9 squad radios – replaced the AN/PRC-6.
AN/PRC-6 and AN/PRC-10 – older short range radios, used for outposts
AN/PRC-25 and 77 – short-range FM radios that replaced the AN/PRC-8-10.
AN/VRC-12 series (VRC-43, VRC-45, VRC-46, VRC-47, VRC-48) – FM radios that replaced the RT-66-67-68/GRC (including AN/GRC 3–8, VRC 7–10, VRC 20–22, and VRQ 1–3 sets).
AN/GRC-106 – AM radios and teletypewriter that replaced the AN/GRC-19.
TA-312 and TA-1 field telephones.
Encryption systems
Encryption systems developed by the National Security Agency and used in Vietnam included:
NESTOR – tactical secure voice system, including the TSEC/KY-8, KY-28 and KY-38; used with the PRC-77 and VRC-12
KW-26 – protected higher level teletype traffic
KW-37 – protected the U.S. Navy fleet broadcast
KL-7 – provided offline security
A number of paper encryption and authentication products, including one time pads and the KAL-55B Tactical Authentication System
Weapons of the PAVN/VC
The PAVN and the southern communist guerrillas, commonly referred to during the war as the Viet Cong (VC), largely used standard Warsaw Pact weapons. Weapons used by the PAVN also included Chinese Communist variants, which the US military referred to as CHICOMs. Captured weapons were also widely used; almost every small arm used by SEATO forces may have seen limited enemy use. During the early 1950s, US equipment captured in Korea was also sent to the Viet Minh.
Small arms
Hand combat weapons
A wide variety of bayonets, fitting the many types of rifles used by the NVA and VC.
Type 30 bayonet
Spears, used during "suicide attacks"
Other types of knives, bayonets, and blades
Handguns and revolvers
Makarov PM (and Chinese Type 59)
Mauser C96 – Locally produced copies were used alongside Chinese copies and German variants supplied by the Soviets.
Nagant M1895
Webley Mk2
Mac M1892
Smith and Wesson Model 10
M1911 pistol
M1935A pistol
SA vz. 61 – automatic pistol
Type 69 silenced pistol
Tokarev TT-33 – standard pistol, including the Chinese Type 51 and Type 54 copies and the Yugoslav Zastava M57
Walther P38 – Captured by the Soviets during World War II and provided to the VPA and the NLF as military aid
Home-made pistols, such as copies of the M1911 or of the Mauser C96 (Cao Dai 763) or crude single-shot guns, were also used by the Viet Cong early in the war.
Automatic and semi-automatic rifles
SKS (Chinese Type 56) semi-automatic carbine
AK-47 – from the Soviet Union, Warsaw Pact countries, China and North Korea
Type 56 – Chinese-made standard rifle
Type 58 – Limited use from North Korea
PMK – Polish-made AK-47
AKM – from the Soviet Union, common modernized variant of the AK-47
PM md. 63/65 – Romanian variant of AKM
AMD-65 – Very limited use from Hungary
M1/M2 carbines – common and popular captured semi-automatic rifles
vz. 52 rifle – semi-automatic rifle, very rarely used
Vz. 58 assault rifle
Sturmgewehr 44 – limited use
Type 63 assault rifle – Limited use, received during the 1970s
M14, M16A1 – captured from US and South Vietnamese forces.
M1 Garand – captured semi-automatic rifle
MAS-49 rifle – captured French rifle from First Indochina War
Bolt-action rifles/marksman rifles
Mosin–Nagant – Bolt-action rifles and carbines from the Soviet Union and China (especially M44).
Mauser Kar98k – Bolt-action rifle (captured from the French during the First Indochina War and also provided by the Soviets as military aid).
Chiang Kai-shek rifle – Used by recruits and militias
MAS-36 rifle
Lee–Enfield – Used by the Viet Cong
Arisaka rifles – used by Viet Cong early in the war.
Lebel rifle – Used in earlier stages of the Vietnam War.
vz. 24 – Used by Viet Cong forces.
SVD Dragunov – Soviet semi-automatic sniper rifle in limited use
M1903 Springfield – Used by Viet Cong forces
M1917 Enfield – Used by Viet Cong forces
Remington Model 10 – pump-action shotgun used by the Viet Cong
Older or rarer rifles were often modified by the Viet Cong early in the war: Gras mle 1874 carbines were rechambered to .410 bore, while Destroyer carbines were modified to accept the magazine of the Walther P38.
Home-made rifles, often spring-action rifles made to look like a M1 Garand or a M1 Carbine, were also used by the Viet Cong.
Submachine guns
K-50M submachine gun (Vietnamese edition, based on Chinese version of Russian PPSh-41, under licence)
MAT-49 submachine gun – captured during the First Indochina War. Many were converted from 9×19mm to 7.62×25mm Tokarev
PPSh-41 submachine gun (Soviet, North Korean and Chinese versions)
PPS-43 submachine gun (both Soviet and Chinese versions)
M3 submachine gun – limited use
Thompson submachine gun – including Vietnamese copies
MP 40
MP 38 submachine gun – Limited use.
MAS-38 submachine gun – Captured from the French in the First Indochina War.
PM-63 submachine gun – Used by tank crews
M49 submachine gun – limited use, received from Yugoslavia
M56 submachine gun – limited use, received from Yugoslavia
Vietnamese home-made submachine guns, inspired by the Sten or the Thompson, were used by the Viet Cong early in the war.
Machine guns
Bren light machine gun, used by Viet Cong
Degtyarev DP (DPM and RP-46 variants and Chinese Type 53 and Type 58 copies)
DShK heavy machine gun (including Chinese Type 54)
FM-24/29 – used by Viet Cong forces
KPV heavy machine gun
M1918 Browning Automatic Rifle
M1917 Browning machine gun – at least one was used by the Viet Cong
M1919 Browning machine gun – captured from ARVN/US forces
M60 machine gun – captured from ARVN/US forces
M2 Browning – captured from ARVN/US forces
MG 34 – captured by the Soviets during World War II and provided to the VPA and the NLF as military aid
MG 42 – captured by the Soviets during World War II and provided to the VPA and the NLF as military aid
FG 42 – Limited use, captured by the Soviets during World War II and supplied in the 1950s
Maxim machine-gun M1910
PK – general-purpose machine gun from the Soviet Union, very limited use
RPD light machine gun (and Chinese Type 56 and North Korean Type 62 copies) – first used in 1964
RPK light machine gun of Soviet design
SG-43/SGM medium machine guns, including the Chinese Type 53 and Type 57 copies
Type 11 light machine gun
Type 24 machine gun (Chinese-made MG-08) – used by Viet Cong forces
Type 67 machine gun
Type 92 heavy machine gun
Type 99 light machine gun
Uk vz. 59 general-purpose machine gun
ZB vz. 26 light machine gun (included Chinese copies)
ZB vz.30 light machine gun from Czechoslovakia
Grenades, mines and booby traps
Molotov cocktail
Home-made grenades and IEDs
Punji sticks
Cartridge traps
F1 grenade (Chinese Type 1)
M29 grenade – captured
M79 grenade launcher – captured from US or ARVN forces
M203 grenade launcher – captured from US or ARVN forces
Model 1914 grenade
RG-42 grenade (Chinese Type 42)
RGD-1 and RGD-2 smoke grenades
RGD-5 grenade (Chinese Type 59)
RGD-33 stick grenade
RKG-3 anti-tank grenade (Chinese Type 3)
RPG-40 Anti-Tank Hand Grenade
RPG-43 HEAT (High Explosive Anti-Tank) Hand Grenade
RPG-6
Type 4 grenade
Type 10 grenade
Type 67 and RGD-33 stick grenades
Type 64 rifle grenade – fired from AT-44 grenade launchers, fitted to Mosin-Nagant carbines
Type 91 grenade
Type 97 grenade
Type 99 grenade
Type 10 grenade discharger
Type 89 grenade discharger
Lunge mine
M16 mine — Captured.
M18/M18A1 Claymore mine — Captured.
Flamethrowers
LPO-50 flamethrower
Type 74 – Chinese-built copy of the LPO-50
Rocket launchers, recoilless rifles, anti-tank rifles and lightweight guided missiles
Recoilless rifles were known as DKZ (Đại-bác Không Giật).
RPG-2 recoilless rocket launcher (Soviet, Chinese and locally produced B-40 and B-50 variants were used)
RPG-7 recoilless rocket launcher
Type 51 (Chinese copy of the M20 Super Bazooka) – used by Viet Cong as late as 1964
B-10 recoilless rifle
B-11 recoilless rifle
SPG-9 73 mm recoilless rifle
M18 recoilless rifle (and Chinese Type 36 copy), also captured from US or ARVN forces
M20 recoilless rifle (and Chinese Type 52 and Type 56 copies), also captured from US or ARVN forces
PTRD – limited use by Viet Cong forces.
9K32 Strela-2 (SA-7) anti-aircraft weapon
9M14 Malyutka (AT-3 Sagger)
Mortars
Brandt Mle 1935 – 60mm mortar
M2 mortar (including Chinese Type 31 and Type 63 copies) – 60mm mortars
M19 mortar – 60mm mortar
M1 mortar – 81mm
M29 mortar – 81mm
Brandt Mle 27/31 – 81mm mortar
Type 97 81mm mortar
82-PM-37 (including Chinese Type 53 copy) – 82mm mortar
82-PM-41 – 82mm mortar.
Type 67 mortar – 82mm mortar
Type 94 90mm mortar
Type 97 90 mm mortar
M1938 107mm mortar
120-PM-43 mortar
Type 97 150 mm mortar
M1943 160mm mortar (including Chinese Type 55 copy)
Field artillery rocket launchers
Field artillery rockets were often fired from improvised launchers, sometimes a tube mounted on a bamboo frame.
102mm 102A3 rockets
107mm Type 63 MRL – used with single-tube or 12-tube launchers
single-tube 122mm 9M22M rocket taken from BM-21 Grad MRL
single-tube 140mm M14-OF rocket taken from BM-14 MRL
Field guns and howitzers
57 mm anti-tank gun M1943 (ZiS-2)
70 mm Type 92 battalion gun
Type 41 75 mm mountain gun, supplied by China
7.5 cm Pak 40
75mm M116 pack howitzer, supplied by China
76 mm divisional gun M1942 (ZiS-3) (and Chinese Type 54)
85 mm divisional gun D-44
100 mm field gun M1944 (BS-3)
Type 91 10 cm howitzer, supplied by China
M101 howitzer
122 mm gun M1931/37 (A-19)
122 mm howitzer M1938 (M-30)
D-74 122 mm Field Gun
130 mm towed field gun M1954 (M-46)
152 mm howitzer-gun M1937 (ML-20)
152 mm howitzer M1943 (D-1)
152 mm towed gun-howitzer M1955 (D-20)
M114 155 mm howitzer
Anti-aircraft weapons
ZPU-1/2/4 single, double and quad 14.5 mm anti-aircraft machine guns
ZU-23 twin 23 mm anti-aircraft cannon
M1939 37 mm anti-aircraft gun (and Chinese Type 55)
2 cm Flak 30 – German World War II anti-aircraft gun
S-60 57 mm anti-aircraft gun
85mm air defense gun M1944
100 mm air defense gun KS-19
8.8 cm Flak 18/36/37/41
S-75 Dvina – Soviet high-altitude air defence system
S-125 Neva – Soviet low-to-medium-altitude air defence system
Aircraft
Aero Ae-45 trainer aircraft
Aero L-29 Delfín trainer aircraft
An-2 utility aircraft
Cessna A-37 Dragonfly attack aircraft – limited use of captured or defector-flown aircraft
Ilyushin Il-12 transport aircraft
Ilyushin Il-14 transport aircraft
Ilyushin Il-28 jet bomber
Lisunov Li-2 transport aircraft
Mikoyan-Gurevich MiG-15 (and Chinese F-4) jet trainer
MiG-17 (and Chinese F-5) jet fighter
MiG-19 (and Chinese F-6) jet fighter
MiG-21 jet fighter
North American T-28 Trojan – 1 ex-Laotian used in 1964
Yakovlev Yak-18 trainer aircraft
Zlín Z 26 trainer aircraft
Aircraft weapons
Gryazev-Shipunov GSh-23
Nudelman-Rikhter NR-30
Nudelman N-37
Nudelman-Rikhter NR-23
K-5 (missile) (RS-2US)
K-13 (missile) (R-3S)
Helicopters
Mi-2
Mi-4
Mi-6
Mi-8
Tanks
M24 Chaffee – light tank, captured from the French and used for training early in the war
M41 Walker Bulldog – light tank, captured from the ARVN.
M48 Patton – captured from the ARVN.
PT-76 amphibious tank
SU-76 self-propelled gun
SU-100 self-propelled guns in limited numbers.
SU-122 self-propelled gun in limited numbers
T-34-85 medium tank, from 1959
T-54 main battle tanks, used from 1965
Type 59 main battle tanks
Type 62 light tank
Type 63 anti-aircraft self-propelled systems
Type 63 amphibious tank
ZSU-57-2 anti-aircraft self-propelled systems
ZSU-23-4 anti-aircraft self-propelled systems
Other armored vehicles
BTR-40 APC
BTR-50 APC
BTR-60PB APC
BTR-152 APC
M3 Half-track and M8 Light Armored Car – first NVA armored vehicles. Used to protect Air Bases in the North.
M113 armored personnel carrier – captured from the ARVN
MTU-20 armored bridge-layer
Type 63 APC
Support vehicles
AT-L light artillery tractor
AT-S and ATS-59 medium artillery tractors
Beijing BJ212
Dnepr M-72
GAZ-AA
GAZ-MM
GAZ-46 light amphibious car
GAZ-51 truck (and Chinese copy)
GAZ-63 truck
GAZ-64
GAZ-67
GAZ-69
IFA W 50
KrAZ-255 heavy truck
artillery tractor
MAZ-502 truck
M35 truck series (captured)
M54 truck series (captured)
M151 jeep (captured)
PMZ-A-750
ZIS-150 truck (and Chinese CA-10)
UralZIS-355M truck
Ural-375
ZIL-130 truck
ZIL-151 truck
ZIL-157 and ZIL-157K trucks (and Chinese CA-30)
ZiS-485 amphibious vehicle
Naval craft
Swatow-class gunboats
P4 and P6 torpedo boats
Countless civilian-type sampans – mainly used for smuggling supplies and weapons
See also
NLF and PAVN strategy, organization and structure
NLF and PAVN logistics and equipment
NLF and PAVN battle tactics
Weapons of the Laotian Civil War
Weapons of the Cambodian Civil War
Digital terrestrial television
Digital terrestrial television (DTTV or DTT, or DTTB with "broadcasting") is a technology for terrestrial television in which land-based (terrestrial) television stations broadcast television content by radio waves to televisions in consumers' residences in a digital format. DTTV is a major technological advance over the previous analog television, and has largely replaced analog, which had been in common use since the middle of the 20th century. Test broadcasts began in 1998, with the changeover to DTTV (also known as the Analog Switchoff (ASO) or Digital Switchover (DSO)) beginning in 2006; the changeover is now complete in many countries. The advantages of digital terrestrial television are similar to those obtained by digitising platforms such as cable TV, satellite, and telecommunications: more efficient use of limited radio spectrum bandwidth, provision of more television channels than analog, better quality images, and potentially lower operating costs for broadcasters (after the initial upgrade costs).
Different countries have adopted different digital broadcasting standards; the major ones are:
ATSC DTV – Advanced Television Systems Committee (System A)
ATSC-M/H – Advanced Television Systems Committee Mobile & Handheld
DTMB
DVB-H – Digital Video Broadcasting Handheld
DVB-T/DVB-T2 – Digital Video Broadcasting Terrestrial (System B)
ISDB-T – Integrated Services Digital Broadcasting Terrestrial (System C)
DMB-T/H
ISDB-TSB – Integrated Services Digital Broadcasting-Terrestrial Sound Broadcasting – (System F)
FLO – Forward Link Only (System M)
Transmission
DTT is transmitted using radio frequencies through terrestrial space in the same way as the former analog television systems, with the primary difference being the use of multiplex transmitters to allow reception of multiple services (TV, radio stations or data) on a single frequency (such as a UHF or VHF channel).
The amount of data that can be transmitted (and therefore the number of channels) is directly affected by channel capacity and the modulation method of the transmission.
North America and South Korea use the ATSC standard with 8VSB modulation, which has similar characteristics to the vestigial sideband modulation used for analog television. It provides considerably more immunity to interference, but is not immune to multipath distortion and also does not provide for single-frequency network operation (which is in any case not a requirement in the United States).
The modulation method in DVB-T is COFDM with either 64 or 16-state Quadrature Amplitude Modulation (QAM). In general, 64QAM is capable of transmitting a greater bit rate, but is more susceptible to interference. 16 and 64QAM constellations can be combined in a single multiplex, providing a controllable degradation for more important program streams. This is called hierarchical modulation. DVB-T (and even more so DVB-T2) are tolerant of multipath distortion and are designed to work in single frequency networks.
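To make these trade-offs concrete, the useful bitrate of a DVB-T multiplex can be computed from the mode parameters. The following is an illustrative sketch only (the function and names are not from any standard or library), using the well-known 8K-mode, 8 MHz-channel constants:

```python
# Sketch: net payload bitrate of a DVB-T multiplex (8K mode, 8 MHz channel).
# The constants are the standard DVB-T values; the code itself is illustrative.

DATA_CARRIERS_8K = 6048     # payload-carrying OFDM carriers in 8K mode
USEFUL_SYMBOL_S = 896e-6    # useful OFDM symbol duration for an 8 MHz channel
RS_FACTOR = 188 / 204       # Reed-Solomon (204,188) outer-code overhead

BITS_PER_CARRIER = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

def dvbt_useful_bitrate(constellation: str, code_rate: float, guard: float) -> float:
    """Return the net payload bitrate in bit/s for one DVB-T multiplex."""
    raw_bits_per_symbol = DATA_CARRIERS_8K * BITS_PER_CARRIER[constellation]
    payload_bits = raw_bits_per_symbol * code_rate * RS_FACTOR
    symbol_duration = USEFUL_SYMBOL_S * (1 + guard)  # guard interval adds overhead
    return payload_bits / symbol_duration

# 64QAM, code rate 3/4, guard interval 1/32: a common European configuration
print(f"{dvbt_useful_bitrate('64QAM', 3/4, 1/32) / 1e6:.2f} Mbit/s")  # ~27.14
# 16QAM, code rate 2/3, guard interval 1/8: roughly half the capacity
print(f"{dvbt_useful_bitrate('16QAM', 2/3, 1/8) / 1e6:.2f} Mbit/s")   # ~14.75
```

The first configuration reproduces the commonly quoted figure of roughly 27 Mbit/s per 8 MHz channel, while the more rugged 16QAM setting illustrates the capacity given up in exchange for robustness.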
Developments in video compression have resulted in improvements on the original discrete cosine transform (DCT) based H.262 MPEG-2 video coding format, which has been surpassed by H.264/MPEG-4 AVC and more recently H.265 HEVC. H.264 enables three high-definition television services to be coded into a 24 Mbit/s DVB-T European terrestrial transmission channel. DVB-T2 increases this channel capacity to typically 40 Mbit/s, allowing even more services.
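As a toy restatement of the arithmetic above (the 8 Mbit/s per-service figure is inferred from the quoted numbers, not a normative value):

```python
# Toy arithmetic from the figures in the text: HD services per multiplex.
dvbt_capacity_mbps = 24      # European DVB-T channel with H.264 (from the text)
dvbt2_capacity_mbps = 40     # typical DVB-T2 capacity (from the text)
hd_service_mbps = dvbt_capacity_mbps / 3   # implies ~8 Mbit/s per HD service

print(int(dvbt_capacity_mbps // hd_service_mbps))   # 3 HD services on DVB-T
print(int(dvbt2_capacity_mbps // hd_service_mbps))  # 5 HD services on DVB-T2
```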
Reception
DTTV is received either via a digital set-top box (STB), a TV gateway or, more usually now, an integrated tuner included with television sets, which decodes the signal received via a standard television antenna. These devices now often include digital video recorder (DVR) functionality. However, due to frequency planning issues, an aerial capable of receiving a different channel group (usually a wideband) may be required if the DTTV multiplexes lie outside the reception capabilities of the originally installed aerial. This is quite common in the UK; see external links.
Indoor aerials are even more likely to be affected by these issues and possibly need replacing.
DTT around the world and digital television transition
Main articles: List of digital television deployments by country, Digital television transition (aka Analog Switchoff (ASO) or Digital Switchover (DSO))
Asia
Afghanistan
Afghanistan officially launched digital transmissions in Kabul using DVB-T2/MPEG-4 on Sunday, 31 August 2014. Test transmissions had commenced on 4 UHF channels at the start of June 2014. Transmitters were provided by GatesAir.
Bangladesh
Bangladesh launched its first DTT service, using DVB-T2/MPEG-4, on 28 April 2016 through the GS Group. The service is called RealVU.
It is run in partnership with Beximco. GS Group acts as a supplier and integrator of its in-house hardware and software solutions, enabling the operator to function in accordance with modern digital television standards. RealVU provides more than 100 TV channels in SD and HD quality.
The digital TV set-top boxes developed by GS Group offer such functions as PVR and time-shift, along with an EPG.
India
India adopted the DVB-T system for digital television in July 1999. The first DVB-T transmission was started on 26 January 2003 in the four major metropolitan cities by Doordarshan. Currently, terrestrial transmission is available in both digital and analog formats. Four high-power DVB-T transmitters were set up in the top four cities and were later upgraded to the DVB-T2 + MPEG-4 and DVB-H standards. An additional 190 high-power and 400 low-power DVB-T2 transmitters were approved for Tier I, II and III cities of the country by 2017. The Indian telecom regulator, TRAI, recommended in 2005 that the Information and Broadcasting (I&B) ministry allow private broadcast companies to use DTT technology. So far, the Indian I&B ministry only permits private broadcast companies to use satellite, cable and IPTV based systems. The government's broadcasting organisation Doordarshan started a free TV service over DVB-T2 for mobile phone users from 25 February, extending it to cover 16 cities, including the four metros, from 5 April 2016.
Israel
Israel started digital transmissions in MPEG-4 on Sunday, 2 August 2009, and analogue transmissions ended on 31 March 2011. Israel was the first nation in the Middle East and the first non-European nation to shut down its analogue TV distribution system. The new service, operated by The Second Authority for Television and Radio in Israel, currently offers 6 SD TV channels and 30 national and regional (private) radio services. According to government decisions, the system will expand to include two additional multiplexes that will carry new channels and HD versions of the existing channels. There is a proposal by the Ministry of Finance to run a tender to hand over the maintenance of the system to a private company which, in return, would receive an extended license and be able to offer pay TV channels. As of the end of 2012, nothing had been decided in this matter.
On 20 March 2013, it was announced that Thomson Broadcast had won a major contract with The Second Authority for Television and Radio for the extension of its nationwide DVB-T/DVB-T2 network. The Second Authority's order for new infrastructure includes two new multiplexes on thirty sites including three single frequency areas. This major deal incorporates a three-year service agreement for the global transmission system.
Sixty-three high- and medium- power transmitters from Thomson's GreenPower range have been ordered together with installation and commissioning services, in a deal which follows on from the company's earlier deployment of DVB-T multiplexes over thirty transmission and sixty-two repeater sites. Equipped with dualcast-ready digital exciters, the GreenPower range offers the ability to smoothly migrate from DVB-T to DVB-T2 and to easily offer additional HDTV content. Ranging from low- to high- power, the range covers all the power requirements of The Second Authority. Thomson will also deploy its user-friendly centralized Network Management System to control and monitor the overall transmission system from a single facility.
The deal includes a new service level agreement providing The Second Authority with a high level of local services to keep its currently operating DTV transmission equipment running 24/7, 365 days a year.
Japan
The Japanese Ministry of Internal Affairs and Communications and DPA (The Association for Promotion of Digital Broadcasting-Japan) jointly set the specification and announced a guideline for "simplified DTT tuners" priced under 5,000 Japanese yen on 25 December 2007. MIAC officially solicited manufacturers to put them on the market by the end of March 2010 (the end of fiscal year 2009). MIAC estimated that up to 14 million traditional non-digital TV sets would remain and need a "simplified DTT tuner" even after the complete transition to DTT after July 2011; it aimed to avoid large numbers of TV sets without such a tuner being discarded at one time.
On 20 December 2007, the Japan Electronics and Information Technology Industries Association published rules for Digital Rights Management for DTT broadcasting under the name "Dubbing 10". Despite the name, consumers were allowed to use Blu-ray Disc and other recorders to "dub" or copy the video and audio of entire TV programs up to 9 times, with 1 final "move" permitted.
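As a toy model of the rule just described (the class and method names are invented for illustration, not part of any specification):

```python
# Toy model of "Dubbing 10": a recording may be dubbed up to 9 times,
# with 1 final "move" permitted, after which the original is unusable.

class Dubbing10Recording:
    def __init__(self) -> None:
        self.copies_left = 9
        self.moved = False

    def copy(self) -> bool:
        """Dub the recording to another medium; allowed up to 9 times."""
        if self.moved or self.copies_left == 0:
            return False
        self.copies_left -= 1
        return True

    def move(self) -> bool:
        """The single final move; invalidates the original recording."""
        if self.moved:
            return False
        self.moved = True
        return True

rec = Dubbing10Recording()
print(sum(rec.copy() for _ in range(12)))  # 9: only nine dubs succeed
print(rec.move())                          # True: the one final move
print(rec.copy())                          # False: original now unusable
```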
Broadcasting with "Dubbing 10" was supposed to start at about 4:00 a.m. on 2 June 2008, but was postponed after lengthy talks with the Japanese Society for Rights of Authors, Composers and Publishers. It finally started at about 4:00 a.m. on 4 July 2008. The manufacturers of DVD and DTT recorders were to make units conforming to the "Dubbing 10" rules, and some manufacturers announced plans to release firmware downloads allowing users to update their existing recorders.
On 3 April 2008, DPA announced that a total of 32.71 million DTT (ISDB-T) TV sets capable of DTT reception (excluding 1seg receivers) had been installed in Japan as of the end of March 2008. On 8 April 2008, DPA also announced guidelines for manufacturers of DTT receiving, recording and replay units which operate with personal computers. These add-on units operate on USB or PCI bus, went on sale on a reservation basis from late April, and appeared in retail stores in mid-May 2008.
On 8 May 2008, the Ministry of Internal Affairs and Communications announced that 43.7% of homes had DTT (ISDB-T) capable TVs and/or tuners with DVD recorders by the end of March 2008. The figure had been 27.8% one year before, and the ministry expected 100% penetration by April 2011. On 27 April 2009, the National Association of Commercial Broadcasters in Japan (NAB) unveiled a new official mascot, Chidejika, to replace Tsuyoshi Kusanagi after he was arrested on suspicion of public indecency.
On 3 September 2009, the Ministry of Internal Affairs and Communications announced the procurement by tender of 5,000–8,000 sets of "simplified DTT tuners" with remote control for the citywide test transition from analogue to digital in Suzu, Ishikawa. The sets were to be delivered by the end of November 2009. The program aimed to examine the transition problems arising in individual homes nationwide, such as those of elderly and non-technical families. As part of this rehearsal plan, analog TV transmission was interrupted in Suzu and parts of Noto for 48 hours, between noon on 22 January 2010 and noon on 24 January 2010.
On 4 September 2009, ÆON announced a low-cost "simplified DTT tuner" with remote control for ISDB-T, to be sold at JUSCO from 19 September 2009. The tuner is produced by Pixela Corporation and met the retail price of under 5,000 Japanese yen, the target price set for the industry by DPA. The tuner connects to an old-fashioned TV through an RCA connector with SDTV quality and some other minimal functions.
On 7 September 2009, the Ministry of Internal Affairs and Communications selected two manufacturers, I-O Data and Melco, from among 12 bids for minimally functional "simplified DTT tuners" with remote control for ISDB-T, to be supplied free of charge to needy families receiving temporary assistance. The tuner connects to an old-fashioned TV through an RCA connector with SDTV quality and some other minimal functions. On 24 July 2010, at noon, analog TV transmission officially stopped in Suzu and parts of Noto (approximately 8,800 homes) as a rehearsal one year ahead of the nationwide shutdown scheduled for 24 July 2011. The Ministry of Internal Affairs and Communications would watch what types of problems arose in the transition to DTT that might apply to the nationwide shutdown.
On 20 April 2011, the Ministry of Internal Affairs and Communications confirmed, by a resolution of the House of Councillors on 8 June 2011, that the analog terrestrial TV shutdown scheduled for 24 July 2011 would be unchanged, with the exception of areas where the shutdown had to be postponed by a maximum of one year. Analog television shut down on 31 March 2012 in Iwate, Miyagi and Fukushima prefectures, which were heavily damaged in the 2011 Tōhoku earthquake and tsunami and the nuclear accidents that followed it. Analog television stations were required to cease normal programming at noon and shut down their signals at midnight.
Malaysia
Digital television in Malaysia was first rolled out in January 2014. By June 2015, free digital television was provided by MYTV Broadcasting. In the first phase, there would be around 15 digital television stations nationwide, with further stations to be added in the next phase. By November 2016, MYTV set-top boxes were also available for sale in electronics stores nationwide. Malaysia's Prime Minister at the time, Najib Razak, officiated at the launch of digital television in the country on 6 June 2017, with an estimated 4.2 million digital television decoders distributed free to citizens, including recipients of the government aid programme 1Malaysia People's Aid (BR1M).
The government planned a full shutdown of analogue television broadcasting by September 2019, with digital television fully available to the public by October. Langkawi became the first area where analogue television was switched off, on 21 July at 02:30 (UTC+8). On 6 August, the Malaysian Communications and Multimedia Ministry released a complete list of transition dates for the remaining areas, with central and southern West Malaysia commencing on 19 August, northern West Malaysia on 2 September, the eastern coast of West Malaysia on 17 September and the whole of East Malaysia on 30 September. The switchover in West Malaysia was fully completed on 16 October at 12:30 am (UTC+8), while the final switchover in East Malaysia was completed on 31 October, also at 12:30 am (UTC+8), as scheduled.
Maldives
The Maldives has chosen the Japanese-Brazilian standard ISDB-Tb.
Philippines
On 11 June 2010, the National Telecommunications Commission of the Philippines announced that the country would use the Japanese ISDB-T International standard. The first fully operational digital TV channel was Channel 49 of the religious group Iglesia ni Cristo.
However, in October 2012, DZCE-TV (which occupies the channel) reopened its analog signal to relaunch as INC TV; channel 49 could therefore only transmit digitally during the off-air hours of the analog channel 49.
This was followed by the state-owned television network PTV, which conducted its test transmission on UHF Channel 48.
On 11 February 2015, the major TV network ABS-CBN commercially launched a digital TV box called ABS-CBN TV Plus, also known as the Mahiwagang Blackbox. Seven years before the commercial launch, the network had applied to the NTC for a license for digital free TV. After the initial application, the digital TV box was given away as a prize to loyal viewers and listeners of ABS-CBN channel 2, DZMM (the AM radio station of ABS-CBN) and DZMM TeleRadyo (the TV-radio cable channel of ABS-CBN).
The digital television transition began on 28 February 2017, with ZOE Broadcasting Network's DZOZ-TV as the country's first station to permanently shut down analog terrestrial transmissions without any test simulcast with its digital signal; the transition is expected to be finished by 2023.
On 25 May 2018, Solar Entertainment Corp began a new digital TV service called Easy TV, with most of its channels offered to consumers through Easy TV's proprietary set-top box.
In October 2020, on the 70th anniversary of GMA Network, the network released its newest product, the GMA Affordabox, a digital TV box to encourage audiences to switch to digital TV viewing while watching GMA and GMA News TV. Before the launch of the TV box, the network released two new sub-channels: the Heart of Asia Channel in June 2020, featuring GMA's acquired Asianovelas and locally produced dramas, and Hallypop, airing programs acquired from Jungo TV featuring Asian pop culture and international music events.
Singapore
Singapore adopted the DVB-T2 standard in 2012, with the monopoly free-to-air broadcaster Mediacorp offering all seven of its services via DTT in 2013. Mediacorp ended analogue television service shortly after midnight on 2 January 2019.
Sri Lanka
Sri Lanka has chosen the Japanese-Brazilian standard ISDB-Tb.
Thailand
In 2005, the Ministry of Information announced its plan to digitalise nationwide free-to-air TV broadcasts, led by MCOT Public Company Limited (MCOT). Trial broadcasts involving one thousand households in the Bangkok Metropolitan Area were undertaken from December 2000 until May 2001. According to the then-Deputy Minister of Information, the trial received "very positive" feedback: "more than 60 percent said the quality of the signal ranged from good to very good. Over 88 percent said the picture quality improved, while 70 percent said the sound quality was better."
According to Information Minister Sontaya Kunplome, MCOT was expected to fully complete its digitalization in 2012 as part of its three-year restructuring process. Each household, once equipped with the necessary equipment (a set-top box or iDTV set), was expected to receive up to 19 channels, seven of which fall under MCOT, with the rest for private broadcasters such as BEC-TERO, which owns channels such as TV3. Thus far, besides simulcasting Modernine TV and Television of Thailand, MCOT has been test-airing MCOT 1, MCOT 2 and MCOT 3 exclusively on the digital TV platform, transmitted on UHF channel 44 and modulated at 64QAM. MCOT was also expected to launch regional channels for each province and/or territory in Thailand, making a total of 80 MCOT television channels. BEC-TERO was expected to commence trials in March 2009.
Thailand and the rest of the ASEAN countries (with the exception of the Philippines; see above) selected DVB-T as the final DTV standard, and were expected to switch off analogue broadcasts completely by 2015. In June 2008, participants of the 6th ASEAN Digital Broadcast Meeting from seven south-east Asian countries (including Thailand) agreed to finalise the specifications of the DTV set-top box for use within ASEAN, and also set up an ASEAN HD Centre to provide training on HDTV content for broadcasters in the region.
Even though MCOT's trial was a success, the future of the digital terrestrial television transition became uncertain, especially after the end of Somchai Wongsawat's tenure as prime minister and the beginning of his successor Abhisit Vejjajiva's term.
In March 2011, MCOT announced that it might switch from DVB-T to DVB-T2 at some time in the future.
The switch-off date was planned for 2020 (only for Channel 3 (Thailand)). Five other analogue channels had already gone off-air: Thai Public Broadcasting Service and Channel 7 (Thailand) on 16 June 2018, Channel 5 (Thailand) on 21 June, and Channel 9 MCOT HD and NBT on 16 July.
Oceania
Australia
Australia uses DVB-T. The transition to digital television and the phaseout of analogue television was completed on 10 December 2013.
New Zealand
New Zealand uses DVB-T. The transition to digital television and the shutdown of analogue transmissions was completed on 1 December 2013.
Europe
European Union
As of 2001, three countries had introduced DTT: the UK, Sweden and Spain. Their total TV viewership market shares were 5.7%, 2.3% and 3.5% respectively.
The EU recommended in May 2005 that its Member States cease all analogue television transmissions by 1 January 2012. Some member states completed the transition earlier: Luxembourg and the Netherlands in 2006, and Finland in 2007. Latvia stopped broadcasting analogue television on 1 June 2010. Poland completed the transition on 23 July 2013, and Bulgaria on 30 September 2013. Malta switched on 1 November 2011. ASO was mostly completed in Europe in 2013, though small, hilly, underpopulated or isolated areas awaited DTT rollout beyond that date.
Much TV equipment in Europe may experience interference and picture blocking because of broadband usage in the 800 MHz band.
As of 2018, DTT is the main TV reception method for 27.7 percent of households in the EU27 countries. Croatia, Greece, Italy and Spain all have DTT penetration of over 50 percent of total TV reception.
Bulgaria
Bulgaria launched a free-to-air platform in the Sofia region in November 2004. The standards chosen were DVB-T and the MPEG-4 AVC/H.264 compression format; DVB-T2 will not be used at this time. The Communications Regulatory Commission (CRC) said that it received 6 bids for the licence to build and operate Bulgaria's two nationwide DTT networks. A second licence tender, for the operation of 3 DTT multiplexes, was open until 27 May 2009. Following the closing of this process, Hannu Pro, part of Silicon Group with Baltic operations, secured the license to operate three DTT multiplexes in Bulgaria from the country's Communications Regulatory Commission (CRC). Bulgaria officially completed the transition to digital broadcasting on Monday, 30 September 2013.
Belgium
Flanders has no free-to-air television, as the Dutch-language public broadcaster VRT shut off its DVB-T service on 1 December 2018, citing minimal usage: only 1 percent of Flemish households made use of the terrestrial signal, and it was not worth the €1 million needed to upgrade to DVB-T2. After some outcry over the loss of terrestrial coverage, VRT's channels were added to TV Vlaanderen's subscription DVB-T2 package called Antenne TV, alongside all major Dutch-language commercial channels.
French-language public broadcaster RTBF remains available in Brussels and Wallonia via DVB-T transmissions.
95 percent of Belgium is covered by cable, well above the EU28 average of 45.1 percent, which can explain the low terrestrial usage in the country.
Denmark
DTT had its technical launch in Denmark in March 2006 after some years of public trials. The official launch was at midnight on 1 November 2009 where the analogue broadcasts shut down nationwide.
As of June 2020, five national multiplexes are available, broadcasting several channels in both SD and HD via DVB-T2, all using the MPEG-4 codec.
MUX 1 is Free-to-air and operated by I/S DIGI-TV, a joint-venture between DR and TV 2.
MUX 2, 3, 4 and 5 are operated by Boxer, and are for pay television only.
Finland
Finland launched DTT in 2001, and terminated analogue transmissions nationwide on 1 September 2007. Finland has successfully launched a mixture of pay and free-to-air DTT services. Digita operates the DTT and mobile terrestrial networks and rents capacity to broadcasters on its network on a neutral market basis. Digita is owned by TDF (France). The pay-DTT service provider Boxer has acquired a majority stake in the leading Finnish pay-DTT operator PlusTV, which offers a number of commercial channels for a subscription; PlusTV started in October 2006. Boxer already provides pay-DTT services in Sweden and Denmark.
Three nationwide multiplexes have been granted to DNA and Anvia for DVB-T2 High Definition and Standard Definition channels (MPEG-4).
France
France's télévision numérique terrestre (TNT) offers 26 free national channels and 9 pay channels, plus up to 4 local free channels. An 89% DTT penetration rate was expected by December 2008. Free-to-view satellite services offering the same DTT line-up were made available in June 2007.
Since 12 December 2012, TNT has delivered ten free HD channels (TF1 HD, France2 HD, M6 HD, Arte HD, HD1, L'Équipe 21, 6ter, Numéro 23, RMC Découverte HD, Chérie 25) and one pay TNT HD channel (Canal+ HD) using the MPEG4 AVC/H.264 compression format. French consumer technology magazine Les Numériques gave the picture quality of TNT's previous 1440x1080i service a score of 8/10, compared to 10/10 for Blu-ray 1080p.
Typically:
free TNT channels are broadcast at 720×576 MPEG-2 with a VBR of 3.9 Mbit/s (2.1 to 6.8 as measured) or a CBR of 4.6 Mbit/s
pay TNT channels are broadcast at 720×576 MPEG-4 AVC/H.264 with a VBR of 3.0 Mbit/s (1.1 to 6.0 as measured)
free TNT-HD and pay TNT-HD are broadcast at 1920×1080 (1080i50) MPEG-4 AVC/H.264 with a VBR of 7.6 Mbit/s (3 to a maximum of 15 Mbit/s), but were previously broadcast at the lower definition of 1440×1080.
For audio, AC-3 and AAC are used, at 192 kbit/s for 2.0 and 384 kbit/s for 5.1.
Typically up to four audio tracks can be used (a capacity sketch follows the list below):
French 5.1
VO (original language) 5.1
French 2.0
Audiovision 5.1 (audio description)
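As a rough sanity check of these figures, one can sum per-service video and audio rates against a multiplex's capacity. The sketch below uses the typical rates quoted above; the five-channel line-up and the 24 Mbit/s capacity are illustrative assumptions, not an actual TNT multiplex plan:

```python
# Sketch: does a hypothetical five-channel SD line-up fit one multiplex?
# Rates are the "typical" figures quoted above; the line-up is illustrative.

MUX_CAPACITY_MBPS = 24.0   # assumed DVB-T payload for one 8 MHz channel

def service_rate_mbps(video_mbps: float, audio_tracks: int,
                      audio_kbps: int = 192) -> float:
    """Average video rate plus a number of 2.0 audio tracks at 192 kbit/s."""
    return video_mbps + audio_tracks * audio_kbps / 1000

# Five SD services at the quoted 3.9 Mbit/s average video, two audio tracks each
total = sum(service_rate_mbps(3.9, 2) for _ in range(5))
print(f"{total:.1f} of {MUX_CAPACITY_MBPS} Mbit/s -> "
      f"{'fits' if total <= MUX_CAPACITY_MBPS else 'over budget'}")
```

Because the video streams are variable bitrate, operators can in practice pack services more tightly than such a naive constant-rate sum suggests, by statistically multiplexing the streams.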
Prime Minister François Fillon confirmed that the final analogue switch-off date would be 30 November 2011. DTT coverage had to reach 91% of a given area before analogue transmissions could be switched off. The CSA announced a call to tender for more local DTT licences on 15 June 2009, and 66 new DTT sites have gone up since May 2009 to expand coverage to low-population areas.
Free-to-air satellite broadcasts from the Eutelsat Atlantic Bird 3 satellite began in June 2009 as Fransat, providing for those unable to receive DTT signals for terrain reasons, in preparation for ASO in 2011. Eighteen channels were broadcast initially and, although free to watch, viewers needed to buy a set-top box with a smart card for €99, according to a DVB.org article.
The end dates of analogue shutdown were: 2 February 2010: Alsace, 9 March 2010: Lower Normandy, 18 May 2010: Pays de la Loire, 8 June 2010: Bretagne, 28 September 2010: Lorraine and Champagne-Ardenne, 19 October 2010: Poitou-Charentes and the middle of the country, November 2010: Franche-Comté and Bourgogne, 7 December 2010: North of the country, First quarter 2011: Picardie and Haute-Normandie, Île-de-France, Aquitaine and Limousin, Auvergne, Côte d'Azur and Corsica, Rhône, Second quarter 2011 (before 30 November): Provence, Alpes, Midi-Pyrénées, Languedoc-Roussillon.
Germany
Germany launched a free-to-air platform region by region, starting in Berlin in November 2002. Analogue broadcasts were planned to cease soon after digital transmissions started in each region. Berlin became completely digital on 4 August 2003, with other regions completing between then and 2008. Digital switchover was completed throughout Germany as of 2 December 2008, and services are now available to 100% of the population following the infill of the remaining 10% of transmitters by Media Broadcast, which set up broadcast antennas at 79 transmission sites and installed 283 new transmitter stations. More services are to be launched on DTT, and some pay DTT channels have been launched in various areas such as Stuttgart and, soon, Leipzig.
Greece
16 January 2006: Greece started its first pilot DTT broadcasts of the 1st DTT package, using five transmitters in Attica (Hymettus, Parnitha, Aegina): 48 UHF, Central Macedonia (Chortiatis): 56 UHF and Thessaly (Pelion): 53 UHF, to distribute the stations Prisma+, Cine+, Sport+ and RIK Sat via its ERT Digital subsidiary, transmitting digital terrestrial television in Greece for the first time.
26 September 2007: Broadcasting of 1st DTT package from 26 UHF added in Central Macedonia region from Chortiatis, Central Macedonia (Chortiatis): 26, 56 UHF.
13 October 2007: Broadcasting of 1st DTT package from 42 UHF added in Thessaly region from Pelion, Thessaly (Pelion): 42, 53 UHF.
31 October 2008: Broadcasting of 1st DTT package commenced in South West Thrace (Plaka): 64 UHF.
6 May 2009: Broadcasting of 1st DTT package from Styra added to Attica region, Attica (Hymettus, Parnitha, Aegina, Styra): 48 UHF.
7 October 2009: Broadcasting of 1st DTT package commenced in Arcadia and Argolis (Doliana): 21 UHF.
27 September 2010: Started broadcast of the 2nd DTT package in Attica (Hymettus): 52 UHF and Central Macedonia (Chortiatis): 26 UHF (switching off the 1st DTT package from 26 UHF in Central Macedonia, leaving the 1st DTT package there on 56 UHF only), consisting of the television stations ET1, NET, ET3 and Vouli Tileorasi, and the radio stations NET, Deftero, Trito, ERA Sport and KOSMOS.
19 November 2010: Broadcasting of 2nd DTT package commenced in South West Thrace (Plaka): 58 UHF.
14 December 2010: Broadcasting of 2nd DTT package from Aegina added to Attica region, Attica (Hymettus, Aegina): 52 UHF.
14 January 2011: Broadcasting of 2nd DTT package moved frequency in Central Macedonia region from 26 UHF (switching off 26 UHF) to 23 UHF and added broadcasting also from Philippion from 23 UHF, Central Macedonia (Chortiatis, Philippion): 23 UHF.
26 April 2011: The 1st DTT package consists from now on of the television stations Vouli Tileorasi, Prisma+, CineSport+ (created from the merger of Cine+ and Sport+) and RIK Sat, all temporarily using MPEG-2 compression. The 2nd DTT package consists from now on of the television stations ET1, NET, ET3 and a new full high-definition television station, ERT HD, all using H.264/MPEG-4 AVC compression, along with the radio stations NET, Deftero, Trito, ERA Sport and KOSMOS.
27 April 2011: ERT HD started pilot High Definition transmissions.
2 May 2011: Broadcasting of 1st DTT package moved frequency in Arcadia and Argolis from 21 UHF to 39 UHF, Arcadia and Argolis (Doliana): 39 UHF.
27 May 2011: Broadcasting of 1st DTT package commenced in Central Thessaly (Dovroutsi): 43 UHF
29 July 2011: Broadcasting commenced in the Gulf of Corinth (Xylokastro): 55 & 61 UHF
27 October 2011: Broadcasting commenced in Aetolia-Acarnania.
November 2011: Broadcasting commenced in Corfu.
3 February 2012: Broadcasting commenced in Patra.
17 August 2012: Analogue TV was switched off in Athens.
The following switch-offs are in cooperation with Digea, so the dates are the same.
Digea:
24 September 2009: The first digital broadcasting of Digea, consisting of the television stations Alpha TV, Alter Channel, ANT1, Makedonia TV, Mega Channel, Skai TV and Star Channel, was carried out in the Gulf of Corinth from the transmitting site of Xylokastro.
14 January 2010: Digital broadcasting began in Thessaloniki - Central Macedonia from the transmitting sites of Chortiatis and Philippion.
18 June 2010: Digital broadcasting began in Athens - Attica from the transmitting sites of Hymettus and Aegina.
1 September 2010: Digital broadcasting of regional scale channels 0-6 TV, ATTICA TV, Extra 3, High TV, MAD TV, MTV Greece, Nickelodeon (Greece) and SPORT TV added in Athens - Attica from the transmitting site of Aegina.
8 February 2011: Digital broadcasting of regional scale channels BLUE SKY, CHANNEL 9, KONTRA Channel and TELEASTY added in Athens - Attica from the transmitting site of Aegina.
25 February 2011: Digital broadcasting began in Rhodes (city).
19 March 2011: Digital broadcasting began in Alexandroupoli - South West Thrace.
27 May 2011: Digital broadcasting began in Central Thessaly
9 December 2011: Digital broadcasting began in Aitoloakarnania.
3 February 2012: Digital broadcasting began in Patra.
20 July 2012: Analogue TV signals were switched off in Athens.
14 December 2012: Analogue TV signals were switched off in Thessaloniki.
26 June 2013: Digital broadcasting began in Crete.
27 September 2013: Digital broadcasting began in Kalamata
27 June 2014: Analogue TV signals were switched off in Peloponnisos.
1 August 2014: Analogue TV signals were switched off in Attica.
5 September 2014: Analogue TV signals were switched off in East Macedonia and Thrace, Lemnos and Lesbos.
21 November 2014: Analogue TV signals were switched off in Central Macedonia, Thessaly and Central Greece.
19 December 2014: Analogue TV signals were switched off in West Macedonia, Epirus, Western Greece, Corfu.
6 February 2015: Analogue TV signals were switched off in Crete, Samos, Icaria and Dodecanese.
Digital Union:
27 March 2010: The first digital broadcasting of Digital Union, consisting of the regional television stations Time Channel, TV Thessaloniki and TV Chalkidiki, in Thessaloniki / Central Macedonia.
27 August 2010: Digital transmission of the regional television stations New Television of Chania and 4 U TV began in Iraklion, Crete.
6 November 2010: Digital broadcasting began in Patra, for the Regional Channels Lepanto TV, Best TV, ORT and Ionian TV.
19 March 2011: Digital broadcasting began in Alexandroupoli - South West Thrace from the transmitting site of Plaka, for the Regional Channels Thraki NET, Delta TV, Home Shop and Rodopi TV.
27 May 2011: Digital broadcasting began in Central Thessaly from the transmitting site of Dovroutsi, for the Regional Channels Thessalia TV, Trikala TV 10, TRT and Astra TV.
ERT - NOVA (pay TV platform):
22 July 2011: Broadcasting consisting of the television stations NovaCinema1, NovaSports1 and two more satellite TV channels, to be decided by ERT in the future, commenced in Attica (Hymettus): 22 UHF
Autumn 2011: Broadcasting will commence in Thessaloniki - Central Macedonia.
January 2012: Broadcasting will commence in 21 more areas.
11 July 2014: The platform's DTT service ceased to exist.
TV1 Syros started its first pilot broadcasts on 1 November 2008 in Cyclades (Syros): 60 UHF.
As of 6 February 2015, Greece has completed its transition to digital TV.
Hungary
Experimental DTT broadcasts started in December 2008. The programming of Duna Televízió was broadcast during the trials. Originally, analog television was planned to be shut down on 1 January 2012, but this deadline was first pushed out to 2014, then brought forward to 2013. Analogue broadcasting was terminated at 12:30 pm on 31 July 2013 in the central part of Hungary, and in October 2013 in the rest of the country. M1, M2, Duna TV, Duna World, RTL Klub, TV2 and Euronews are available free-to-view. M1, M2 and Duna TV are also available in HD.
On both of the 2013 shutoff dates, all analog channels ceased normal programming at 12:30 pm and showed a silent ASO information screen with a phone number to call for help. This was kept on for a few days, after which the analog transmitters were permanently shut down.
Ireland
In Ireland DTT has been somewhat problematic. Responsibility for DTT, based on plans of Raidió Teilifís Éireann, was divided between two government departments with differing views on its running. This delayed the project and took away its momentum, and the economic situation deteriorated so that the opportunity to launch in good conditions was lost. When legislation finally arrived after two years to enable DTT to proceed, a private sector model was envisaged, similar to the UK. A company trading as "It's TV" was the sole applicant for a digital terrestrial television license under the provisions of the Irish Broadcasting Act 2001. It's TV proposed a triple-play deployment with broadband, TV and digital radio services, but the on-air return channel (the DVB-RCT system) for "interactive" use, while offering tens of Mbit/s per mast, would only have provided 300 to 2,400 bit/s per user at peak times, and the company never got approval to run an Internet service. RTÉ was to have a minority stake in its network and sell its majority share. However, legislative delays and economic changes made it financially difficult for RTÉ to get a good price for the network stake. It's TV's plans to raise the necessary funding failed due to the lack of approval for the Internet aspect and an infeasible Internet access model. Other DTT deployments in operation around that time also went bust, most notably in the UK, Spain and Portugal. It's TV failed to get its license conditions varied or to get a time extension for securing funding. Its license was either never awarded (as the company could not demonstrate a viable business plan and funding) or was eventually withdrawn for non-performance.
Under subsequent legislation in May 2007, RTÉ, the spectrum regulator (ComReg) and the broadcasting regulator BCI (now BAI) were mandated to invite applications during 2008 under the Broadcasting (Amendment) Act 2007. RTÉ and the BCI received licences from ComReg for spectrum to establish DTT. The BAI advertised and invited multiplex submissions by 2 May 2008. RTÉ Networks was required to broadcast in digital terrestrial TV (aerial TV) and received an automatic licence through the RTÉ Authority. It expanded and upgraded its transmission network to digital terrestrial during 2009, culminating in 98% coverage by 31 December 2011, with analogue switchover beginning in summer 2012 in concert with Northern Ireland, under the MOU signed by the UK and Irish governments.
It also planned to make this network available to the commercial multiplex winner for rental of capacity once negotiations had concluded, rental had been agreed and a security bond received. It was testing the BAI multiplexes from November 2009 across the network, which is publicly receivable with the correct DTT receivers. One mux (a group of broadcast channels) would have provided the services of the public service broadcaster and would have had 98% population coverage by 31 December 2011. The other three multiplexes would have had between 90% and 92% population coverage. Following analogue switchover, one additional PSB mux and one or more commercial muxes were made available for DTT, mobile television, broadband and other services.
The BCI (now BAI) received three conditional applications to operate the three muxes, which were presented to the public on 12 May 2008. At its board meeting on 21 July 2008, it decided in principle to allocate the licence to Boxer DTT Ltd, a consortium made up of the Swedish pay-DTT operator Boxer and the media group Communicorp.
On 20 April 2009, the BCI revealed that Boxer had withdrawn from its licence, and it was instead offered to the runner-up applicant OneVision. At the end of April 2010 the negotiations with OneVision ended and it also decided to return the licence. On 29 April 2010 the contract was offered to the only remaining applicant, Easy TV. The Easy TV consortium informed the BAI on 12 May 2010 that it was declining the offer to pursue negotiations regarding the commercial DTT multiplex licence.
A Houses of the Oireachtas Channel (reportedly shelved in December 2008) and the Irish Film Channel (whose status is unclear though a company was formed to provide the channel) were enabled for establishment as public service broadcasters on Irish DTT.
The Broadcasting Authority of Ireland replaced the Broadcasting Complaints Commission, the Broadcasting Commission of Ireland and the RTÉ Authority. The BAI includes Awards and Advisory Committees under statutory instrument 389, which gave effect to the provisions of the Broadcasting Act 2009. This legislation dissolved the BCI, vesting it, together with new responsibilities, assets and liabilities, in the new Broadcasting Authority of Ireland on 1 October 2009. This Act also dealt with analogue switchover.
A DTT Information Campaign was announced by the Department of Communications, Energy and Natural Resources, to launch in March 2009 ahead of the September 2009 launch of Irish DTT. The Information Campaign was undertaken by the BAI, with support of the Department.
On 30 October 2010 FTA DTT, which is known as Saorview, was launched following a direction from the Minister for Communications, Energy & Natural Resources, to RTÉ and the signing of the RTÉ (National Television Multiplex) Order 2010 (S.I. No. 85 of 2010) on 26 February 2010. The rollout of FTA Saorview DTT then proceeded, and a commercial DTT competition was deferred.
On 1 July 2010 RTÉ announced that Mary Curtis, RTÉ's deputy head of TV programming, would take on the role of Director of Digital Switchover (DSO).
In May 2011, RTÉ launched Saorview, which was officially opened by Minister Rabbitte.
On 14 October 2011, Minister Rabbitte announced that analogue terrestrial television broadcasts would be switched off on 24 October 2012. This date was chosen in consultation with the UK over its Northern Ireland analogue switchover date so that both jurisdictions on the island would switch over at roughly the same time, making the changeover straightforward for citizens on both sides of the border; the announcement referred citizens to both Saorview's website and the Department's digital switchover website.
On 24 October 2012 all analogue television transmission in Ireland ended, leaving Saorview as the primary source of broadcast television in Ireland.
Italy
The switch-off of the analogue terrestrial network progressed region–by–region. It started on Wednesday 15 October 2008, and was completed on Wednesday 4 July 2012.
The selected broadcasting standard is DVB-T, with MPEG-2 video for SD and H.264 video for HD; audio is usually MPEG-1. The whole frequency spectrum has been allocated with SFN in mind.
Alongside the original analogue free-to-air channels that switched to digital, a few new pay-per-view platforms emerged with the advent of DTT.
Worth mentioning is the addition of an experimental free-to-air HD 1080i channel from RAI, which is set to broadcast important sporting events such as the Olympic Games and the FIFA World Cup.
Luxembourg
Luxembourg launched DTT services in April 2006. The national service launched in June 2006. On 1 September 2006, Luxembourg became the first European country to transition completely to DTT. Luxe TV, a niche theme based station, soon began broadcasting on the Luxembourg DTT platform, transmitted from the Dudelange transmitter. The aim was to reach audiences in some parts of Germany as well as in Luxembourg.
Netherlands
The Netherlands launched its DTT service on 23 April 2003, and terminated analogue transmissions nationwide on 11 December 2006. KPN owns Digitenne, which provides a mix of FTA public channels and paid DTT services. KPN started to switch its digital terrestrial television platform Digitenne to the DVB-T2 HEVC standard in October 2018; this transition was completed on 9 July 2019.
Poland
DTT launch in Poland was scheduled for autumn 2009. Regulatory disagreements delayed the tender and approach until they were resolved; the number of multiplexes available for DTT was reduced to three, and the second was licensed in the autumn of 2009. The reduction from five to three enabled mobile TV and broadband to get more spectrum allocation. Muxes 2 and 3 therefore had limited coverage until analogue switch-off. According to Wirtualne Media, Polsat, TVN, TV4 and TV Puls officially applied to reserve space on the country's first multiplex, set to start in September. The public broadcaster's three main channels TVP1, TVP2 and TVP Info had already been allocated capacity on the multiplex.
Poland ended its television broadcast in analogue on 23 July 2013. A mobile TV license has also been awarded in Poland to Info TV FM to use DVB-H standard.
Portugal
Portugal launched its DTT service on 29 April 2009, available to around 20% of the Portuguese population, with Portugal Telecom expecting to reach 80% of the population by the end of 2009. Airplus TV Portugal, which was set up to compete for a licence to manage Portugal's pay-TV DTT multiplexes, was dissolved after it did not get the licence and a Portuguese court ruled not to suspend the process for awarding the licence to Portugal Telecom, based on a complaint Airplus TV Portugal had submitted. After Airplus TV Portugal was dissolved, Portugal Telecom informed ANACOM, the Portuguese communications authority, that it would not honour the pay-TV DTT multiplex licence obligations, and ANACOM accepted. Portugal thus has only one active multiplex.
Romania
In Romania, broadcasting regulations have been amended so that DTT service providers need only a single licence rather than the two previously required by the National Audiovisual Council (CNA). DTT services were launched in December 2009 using the MPEG-4 (H.264 AVC) compression format, following the Ministry of Communications' publication of a strategic plan for the transition to digital broadcasting. According to Media Express, it envisaged a maximum of five national UHF multiplexes, a national VHF multiplex and a multiplex allocated to regional and local services, all in accordance with the ITU Geneva Conference RRC-06, Broadband TV News reports.
The Ministry of Communications (MCSI) estimated that 49% of Romania's 7.5 million TV households got TV from cable and 27% from DTH services, while terrestrial TV was used by 18% of TV households; 6% were reported as unable to receive TV transmissions. Subsidies were offered to those below a certain income to assist their switchover. Switchover was scheduled for January 2012.
Romkatel, the local representative of Kathrein, has since been awarded the commercial Romanian DTT services licence. ZF reported that Romkatel signed a 12-month contract worth €710,420, having beaten off a challenge from France's TDF. The tender was organised by Romania's National Society for Radiocommunications (SNR).
Meanwhile, the National Audiovisual Council, in charge of the public service broadcasting sector has awarded digital licences to the public channels TVR1 and TVR2.
According to Media Express, this followed a short debate at the National Audiovisual Council (CNA) about whether to also award licences to the nine remaining public channels, one of which transmits in HD and five are regional.
Romania's first DTT multiplex is likely to have the five leading commercial channels — Pro TV, Antena 1 (Romania), Prima TV, Kanal D Romania and Realitatea TV — as well as TVR1 and TVR2.
The National Authority in Communications (ANCOM) will most probably award the transmission network contract for this to the national transmission company Radiocommunicatii.
In June 2013, the Romanian Government issued a strategy for the shift from analogue to digital terrestrial broadcasting that restarted the previously stalled process. According to the strategy, one of the five planned digital terrestrial multiplexes would be de facto granted to Radiocom, the state company carrying the public television signals terrestrially, well before the selection of multiplex operators to be organised by ANCOM, which had a deadline of 17 June 2015. The government described the Radiocom multiplex as a "pilot project" and an "experiment". The minimum technical requirements for this project were: the DVB-T2 broadcast standard, coverage of up to 40% of the population by 1 July 2014 and 70% of the population by 17 June 2015, and the possibility of using the broadcasting premises belonging to Radiocom.
On 17 June 2015, Romania turned off analogue broadcasting and started broadcasting with DVB-T2 technology, but with very low coverage and very few broadcasts available. Because of the low coverage, Romania continued to broadcast TVR1 in analogue on VHF until 31 December 2015. DVB-T remained available for an undetermined period only in Sibiu (channels 47 and 54) and Bucharest (channels 54 and 59). Since the analogue switch-off, many people who had received TV terrestrially shifted to a cable or DTH operator, and DVB-T and DVB-T2 remain in experimental broadcast to the present day. The delay in launching DVB-T has been criticised by some in Romania as serving the interests of cable and DTH providers: by delaying a stable DVB-T launch, the number of channels fell from the 18 once available in DVB-T (including 3 in HD) to only 6, and continued to fall, especially after TVR announced it would reduce its channel count (TVR News was to be shut down, probably because of low audience, likewise TVR 3, while the fate of TVR HD, one of the TVR Group channels with the largest audience after TVR1, was unknown). Kanal D left the terrestrial platform on 2 July 2015, leaving Antena 3 as the only non-TVR channel available terrestrially in DVB-T; it is unknown whether Antena 3 will remain on DVB-T, shift to DVB-T2, or leave the terrestrial platform entirely. Antena Group channels were once available terrestrially in both analogue and digital. Many in Romania were surprised that terrestrial broadcasting had regressed: in analogue there were once about 8 channels in Bucharest, against only 6 in digital terrestrial. Terrestrial TV has in any case been heavily surpassed by both DTH and cable, and some viewers watch foreign free-to-air satellite channels instead, finding their content more interesting than that of Romanian channels. Although many TV sellers mark DVB-T2-capable sets as compatible with digital terrestrial television in Romania by highlighting the feature with a sticker, buyers are mainly interested in whether a TV has a digital cable or satellite tuner, and sets without DVB-T2 continue to be sold with only DVB-T/C and sometimes S2, since cable and satellite compatibility is of most interest.
Spain
In Spain most multiplexes closed after the failure of Quiero TV, the country's original pay-DTT platform. DTT was relaunched on 30 November 2005, with 20 free-to-air national TV services as well as numerous regional and local services. Nearly 11 million DTT receivers had been sold as of July 2008. Approval for pay-DTT services was reportedly given by Spain's Ministry of Industry in a surprise move at the 17 June meeting of the Advisory Council on Telecommunications and the Information Society (CATSI), and will now be included in a Royal Decree. A number of leading Spanish media players, including Sogecable, Telefónica, Ono, Orange and Vodafone, have criticised the move; according to Prisa, Sogecable's owner, "it caps a series of policy changes that benefits only a few audiovisual operators, those of terrestrial TV, to the detriment of satellite operators, cable and DSL." Appeals may be lodged against the government's decision.
Sweden
In Sweden, DTT was launched in 1999 solely as a pay service. As of 2007, there are 38 channels in 5 multiplexes, 11 of which are free-to-air channels from a number of different broadcasters. Switch-off of the analogue TV service started on 19 September 2005 and finished on 29 October 2007. Boxer began deploying MPEG-4 receivers to new subscribers, and over the six years from 2008 Sweden will gradually migrate from MPEG-2 video coding to MPEG-4 (H.264).
The Swedish Radio and TV Authority (RTVV) recently announced eight new national channels that will broadcast in the MPEG-4 format.
From 1 April 2008 Boxer, which is also responsible for approving devices for use on the network, no longer accepts MPEG-2 receivers for testing and approval. Set-top boxes must be backward compatible so that they can decode both MPEG-2 and MPEG-4 coded transmissions.
Switzerland
Switzerland introduced DTT in 2007. It later became the first country to eliminate broadcast terrestrial television entirely when the public broadcaster SRG SSR, which ran the country's only terrestrial channels, shut down its DVB-T transmitter network in June 2019. SRG SSR estimated that less than two per cent of households relied on its DVB-T network, the large majority of which used it only for reception on secondary devices, making continued operation economically unviable. Its programming remains available on IPTV services, cable, and free-to-air satellite, and SRG SSR recommended that consumers switch to satellite. As the satellite signals are free but encrypted to restrict reception to Swiss residents, there is now one privately owned DVB-T transmitter on Hoher Kasten in Appenzell to feed the channels of SRF to cable systems in Vorarlberg, Austria.
United Kingdom
The United Kingdom (1998), Sweden (1999) and Spain (2000) were the first to launch DTT, with platforms heavily reliant on pay television. All platforms experienced many start-up problems, in particular the British and Spanish platforms, which failed financially (mainly because their encryption was compromised). Nevertheless, Boxer, the Swedish pay platform which started in October 1999, proved to be very successful.
DTT in the United Kingdom was launched in November 1998 as a primarily subscription service branded as ONdigital, a joint venture between Granada Television and Carlton Communications, with only a few channels available free to air. ONdigital soon ran into financial difficulties with subscriber numbers below expectations; in an attempt to reverse its fortunes, it was decided that the ITV and ONdigital brands should align, and the service was rebranded ITV Digital in 2001. Despite an expensive advertising campaign, ITV Digital struggled to attract sufficient new subscribers and closed the service in 2002. After the commercial failure of the pay-TV proposition, the platform was relaunched as the free-to-air Freeview platform in 2002. Top Up TV, a lite pay DTT service, became available in 2004, when Inview launched the first DTT (Freeview) EPG service.
On 30 March 2005, the older analogue signals began to be phased out on a region-by-region basis (a process known as the Digital switchover, or DSO), beginning with a technical trial at the Ferryside television relay station. The first full transmitter to switch to digital-only transmission was the Whitehaven transmitter in Cumbria, which completed its transition on Wednesday 17 October 2007. The switchover to digital-only broadcasting was completed on 24 October 2012 when the transmitters in Northern Ireland turned off their analogue broadcasts (which coincided with the transition in the Republic of Ireland).
The additional transmission frequencies freed up by the shutdown of analogue signals have (among other things, such as the introduction of 4G mobile internet) allowed for the creation of a single DVB-T2 multiplex used to carry high-definition programming. There are also plans to use one frequency to launch local television services.
North Macedonia
DTT was successfully launched in November 2009. It uses MPEG-2 for SD and MPEG-4 for HD. The service was launched by ONE, and the platform is called BoomTV. It offers 42 channels, including all national networks, and is available to 95% of the population.
Russia
In Russia, digital television appeared in the summer of 2009. The first multiplex consisted of the following channels: Channel One Russia, Russia-1, Russia-2 (now Match-TV), NTV, Petersburg 5th channel, Russia-Culture, Russia-24, Bibigon (later - Carousel). In the spring of 2013, two more channels were added to the first multiplex: OTR and TV-Center.
On 19 March 2012, DVB-T was replaced by DVB-T2.
On 14 December 2012, a second DTV multiplex began airing, carrying REN TV, TV Center (from 2013, Spas), STS, Domashniy, Sport (from 2013, TV-3), NTV Plus Sport Plus (from 2015, Pyatnica!), Zvezda, Mir, TNT and Muz-TV.
In 2014, the third and fourth multiplexes appeared in Crimea. They were created after the annexation of Crimea by Russia.
On 15 January 2015, a third multiplex went live in Moscow and the Moscow region, broadcasting some satellite-only channels on a time-shared basis (for example, on channel 22: 2x2 from midnight to noon, My Planet (Moya Planeta) from noon to 6pm, and Science 2.0 (Nauka 2.0) from 6pm to midnight).
On 30 November 2018, the Ministry of Digital Development, Communications and Mass Communications of the Russian Federation proposed turning off analogue broadcasting in four stages (on 11 February, 15 April, 3 June and 14 October 2019).
On 3 December 2018, the first region completed its switch from analogue to digital broadcasting.
On 14 October 2019, Russia completed its switch from analogue to digital television, although some channels (namely Che) were still being aired in analogue.
On 22 October 2020, the channel line-up on the second multiplex was changed. The multiplex now carries Che, STS Love, STS Kids, TNT4, TNT Music, 2x2, Super, U, Channel Disney, and Match Strana.
Turkey
DTT was trialled in Turkey in 2006 using DVB-T, but a public rollout did not occur; analogue transmission was simply switched off in favour of HD satellite broadcasting. In 2011 preparations were made for the introduction of DTT, with channel licences later allocated. However, in 2014 the allocations were voided by the Supreme Court, citing irregularities in awarding the licences. The uncertainty led to reluctance among broadcasters to invest in a DTT network, particularly with satellite TV having a dominant penetration. The DTT project was revived in 2016 with the construction of a multi-purpose 100 m transmitter in Çanakkale, designed to look like a ribbon. Test broadcasts commenced with the opening of the Küçük Çamlıca TV Radio Tower and the Çanakkale TV Tower, before an official DTT launch in 2021.
North America
Bahamas
On 14 December 2011, national public broadcaster ZNS-TV announced it would be upgrading to ATSC digital television with mobile DTV capabilities, in line with its neighbours, the United States and Puerto Rico.
Bermuda
Bermuda has plans to convert its three broadcast stations to ATSC digital terrestrial television in the future.
Canada
In Canada, analogue switch-off was mandated by regulatory authorities for all provincial capital cities and all multi-station markets; analogue would continue in single-station markets and remote areas. With one exception, analogue switch-off in the mandated areas took place on 31 August 2011: the CBC was granted an exception in many smaller multi-station markets due to the cost of conversion, as otherwise CBC services would have gone dark in many such markets. Most network stations already broadcast high-definition digital signals in Toronto, Ottawa, Montreal, Calgary, Edmonton, Winnipeg, Regina, and Vancouver. Most networks had been concerned about the August 2011 deadline, as not all parts of the country were equipped to receive DTTV by the scheduled date.
Mexico
In Mexico, the digital transition is complete. Digital signals are available in all cities, providing national coverage. Analogue transmissions were turned off in order of population size: Tijuana was the first city to turn off analogue signals, and the nationwide turn-off was completed on 31 December 2015. On 27 October 2016, Mexico relocated all of its channels, putting Azteca 13 (now Azteca Uno) on virtual channel 1.1 nationwide, Canal de Las Estrellas (now Las Estrellas) on virtual channel 2.1 and Imagen Television on virtual channel 3.1. Border cities were exceptions, due to signal conflicts with stations across the United States border. For example, in the Tijuana-San Diego area, channel 2.1's signal comes from KCBS-TV, a CBS owned-and-operated station in Los Angeles, and can reach television viewers in portions of San Diego County; thus, Las Estrellas is on virtual channel 57.1 there.
United States
In the United States on 12 June 2009, all full-power U.S. television broadcasts became exclusively digital, under the Digital Television and Public Safety Act of 2005. Furthermore, since 1 March 2007, new television sets that receive signals over the air, including pocket-sized portable televisions, include ATSC digital tuners for digital broadcasts. Prior to 12 June 2009, most U.S. broadcasters were transmitting in both analog and digital formats; a few were digital only. Most U.S. stations were not permitted to shut down their analog transmissions prior to 16 February 2009 unless doing so was required in order to complete work on a station's permanent digital facilities. In 2009, the FCC finished auctioning channels 52–59 (the lower half of the 700 MHz band) for other communications services, completing the reallocation of broadcast channels 52–69 that began in the late 1990s.
The analogue switch-off rendered all non-digital television sets unable to receive most over-the-air television channels without an external set-top converter box; however, low-power television stations and cable TV systems were not required to convert to digital until 1 September 2015. Beginning 1 January 2008, consumers could request coupons to help cover most of the cost of these converters by calling a toll-free number or via a website. Some television stations had also been licensed to operate "nightlights", analogue signals consisting only of a brief repeated announcement advising remaining analogue viewers how to switch to digital reception.
Central America and the Caribbean
Costa Rica
Costa Rica chose the Japanese-Brazilian standard ISDB-T on 25 May 2010, becoming the seventh country to do so, and started trial transmissions by Channel 13 from the Irazú Volcano on 19 March 2012.
Cuba
Cuba announced on 19 March 2013 that it is "prepared" to perform a digital television test using the Chinese DTMB system.
Dominican Republic
The Dominican Republic chose ATSC standards for DTT on 10 August 2010.
El Salvador
El Salvador chose the Japanese-Brazilian standard ISDB-Tb in 2017. The digital switchover began on 21 December 2018 and is due to be completed by 2022.
Guatemala
Guatemala has chosen the Japanese-Brazilian standard ISDB-Tb.
Honduras
Honduras has chosen the Japanese-Brazilian standard ISDB-Tb.
Nicaragua
Nicaragua has chosen the Japanese-Brazilian standard ISDB-Tb.
Panama
Panama chose the European DVB-T standard on 12 May 2009.
South America
Argentina
On 28 August 2009, Argentine President Cristina Fernández signed an agreement to adopt the ISDB-Tb system, joining Brazil, which had already implemented the standard in its big cities. On-air service started on 28 April 2010.
Bolivia
On 5 July 2010 the Bolivian chancellor signed an agreement with the Japanese ambassador to Bolivia, choosing the Japanese system with the Brazilian modifications ISDB-T (Integrated Services Digital Broadcasting Terrestrial).
Brazil
In Brazil, the government chose a modified version of the Japanese ISDB-T standard, called ISDB-Tb (or SBTVD), in June 2006. Digital broadcasts started on 2 December 2007 in São Paulo and the service is now expanding across the country. As of 15 September 2009, the metro areas of São Paulo, Rio de Janeiro, Belo Horizonte, Brasília, Goiânia, Curitiba, Porto Alegre, Salvador, Campinas, Vitória, Florianópolis, Uberlândia, São José do Rio Preto, Teresina, Santos, Campo Grande, Fortaleza, Recife, João Pessoa, Sorocaba, Manaus, Belém, Aracaju, Ribeirão Preto, Boa Vista, Macapá, Porto Velho, Rio Branco, São Carlos, São Luís, Pirassununga, São José dos Campos, Taubaté, Ituiutaba, Araraquara, Feira de Santana, Itapetininga, Presidente Prudente, Bauru, Campos dos Goytacazes, Londrina, Juiz de Fora, Campina Grande, Caxias do Sul, Franca, Rio Claro and Cuiabá had digital terrestrial broadcasting. By 2013 the digital signal was available in the whole country. Analogue shutdown is scheduled for 2023.
Chile
On 14 September 2009, president Michelle Bachelet announced that the government had finally decided on a digital television standard. Chile adopted the ISDB-T Japanese standard (with the custom modifications made by Brazil). Simulcasting began in 2010, with a projected analog switch-off in 2017.
Colombia
Colombia chose the European DVB-T standard on 28 August 2008. However, in 2012, Colombia adopted DVB-T2 as the national standard for terrestrial television, replacing DVB-T, the previously selected standard for digital TV.
On 28 December 2010, private networks Caracol TV and RCN TV officially started digital broadcasts for Bogotá, Medellín and surrounding areas on channels 14 and 15 UHF, respectively. State-run Señal Colombia and Canal Institucional had started test digital broadcasts earlier in 2010.
The current coverage of DVB-T2 can be consulted on the website of the organization "TDT para Todos", which is the entity responsible for facilitating its adoption.
Ecuador
Ecuador chose Japanese-Brazilian standard ISDB-T as 6th country on 26 March 2010.
Paraguay
Paraguay chose Japanese-Brazilian standard ISDB-T on 1 June 2010.
Peru
On 23 April 2009, Peru chose the Brazilian variant of the Japanese digital television standard ISDB-T. The Peruvian government signed an agreement with its Japanese counterpart for the latter to fund the implementation of the DTT platform in the country. The first network to launch on digital terrestrial television was TV Perú, on 30 March 2010, using the ISDB-Tb standard. Currently, all the major stations in Lima broadcast on DTT in high definition. ATV was the first television channel in the country to carry out digital test broadcasts, on 19 June 2007, using ATSC, DVB-T and ISDB-T in turn to see which of them performed best. Eventually, ATV chose ISDB-Tb and officially started broadcasting in HD; its first live TV show aired in high definition was Magaly TV, on 30 August 2010. Frecuencia Latina also began broadcasting on DTT on 14 September 2010, with a match of the Peru women's national volleyball team in the 2010 FIVB Women's Volleyball World Championship. Shortly after these events, América Televisión started broadcasting on DTT.
Suriname
Suriname is currently transitioning from analogue NTSC broadcasts to digital ATSC and DVB-T broadcasts. Channel ATV started ATSC broadcasts in the Paramaribo area in June 2014, followed by ATSC broadcasts from stations in Brokopondo, Wageningen and Albina. The stations in Brokopondo, Wageningen and Albina broadcast both the channels of ATV (i.e. ATV and TV2) and STVS, while the station in Paramaribo currently only broadcasts the ATV channels. The Telecommunication Authority of Suriname was originally aiming at a full digital transition by June 2015, but this was criticised by broadcasters as unfeasible. However, the ITU has documented that both DVB-T and ATSC are in use.
Uruguay
Uruguay chose the European DVB-T standard in August 2007, but later reversed that decision and adopted ISDB-T on 27 December 2010, to follow neighbouring countries.
Venezuela
In Venezuela, tests were performed with full deployment to start in 2008–2009. DTT will coexist with analogue standard television for some time, until full deployment of the system on a nationwide level is accomplished. On 30 September 2009, Venezuela decided to employ the Japanese ISDB-T system in cooperation with Japan, and the agreement with Japan was made official in early October 2009.
On 6 October 2009, Venezuela officially adopted ISDB-T with the Brazilian modifications. The transition from analogue to digital is expected to take place over the next 10 years.
In March 2012, Venezuela signed a $50M agreement to purchase 300,000 decoders from Argentina to implement TDT in Caracas and, later that year, in some of the most important cities, but only for the government-controlled TV stations. NTSC and TDT will coexist. The government hopes TDT will reach the whole country's population within 2 years.
As of 2019, due to the Venezuelan crisis, the digital television transition is paralysed and DTT development has been frozen.
Africa
The majority of countries in Africa have adopted the DVB-T2 standard, including Algeria, Angola, Democratic Republic of the Congo, Ethiopia, Ghana, Kenya, Lesotho, Madagascar, Malawi, Mali, Mauritius, Mozambique, Namibia, Nigeria, Seychelles, South Africa, Swaziland, Tanzania, Togo, Uganda, Zambia and Zimbabwe.
Angola
Angola has chosen the Japanese-Brazilian standard ISDB-Tb.
Botswana
Botswana has chosen the Japanese-Brazilian standard ISDB-Tb.
Nigeria
In March 2015 Inview Technology (a UK digital switchover company based in Cheshire with local operations in Nigeria) was appointed by Nigeria's government-run National Broadcasting Commission (NBC) to enable digital switchover from analogue throughout the country and to provide a conditional access system, set-top box software and services, including a full EPG, push video-on-demand, and a range of broadcasting applications such as news, public service information and audience measurement, over the digital terrestrial and satellite networks in Nigeria.
Inview Nigeria and the NBC will:
create a free digital TV service called FreeTV (based on Freeview in the UK), rather than pay TV subscriptions
subsidise the FreeTV STB down to an affordable retail price of N1500 ($7.50)
fund subsidies and digital infrastructure costs through the sale of spectrum and the introduction of a BBC-style licence fee (called a 'digital access fee') of N1000 ($5) per annum, payable on all digital STBs including those of pay-TV operators. The Nigerian Government will receive a digital dividend of c. $1 billion from the sale of spectrum, ensuring that the whole DSO programme is self-funding.
FreeTV will carry up to 30 free channels: the best of Nigerian TV and international channels across the news, movies, kids, music and general entertainment genres.
A national standard set top box specification has been set which incorporates a common operating system created by Inview. All boxes will require this specification to view the channels and access value added services such as interactive news, programme recording, internet applications and video on demand.
As of March 2016, FreeTV is operated by Inview and is an open platform which enables any content or pay-TV provider to broadcast their content, so that Nigerian consumers only have to buy one box to view all of it. Only manufacturers licensed in Nigeria will receive the Inview operating system required to access the channels and services, which will protect domestic manufacturers and consumers from illegal grey imports.
Tunisia
To follow the transition from analogue to digital in the field of terrestrial television broadcasting, and to keep pace with these international technological innovations, Tunisia, through the Office of National Broadcasting (ONT), has planned the following phases to digitise its terrestrial broadcasting networks:
First phase: deployment, since 2001, of an experimental digital TV broadcasting unit using the DVB-T system and MPEG-2 compression, implemented at the Boukornine site to ensure coverage of Greater Tunis (25% of the population). This experimental project highlighted the benefits of digitisation, which are: better quality of video and audio signals; increased capacity of distribution networks through the transmission of a digital TV package (bouquet), since one layer of a digital distribution network can carry 4 to 6 TV programmes instead of the single TV programme of an analogue network; savings in radio spectrum and energy consumption; and the introduction of new multimedia services. In the preparatory phase, ONT prepared the frequency plan for digital terrestrial TV networks and signed the final acts of the Regional Radiocommunication Conference 2006 in Geneva, organised by the International Telecommunication Union, which recommends switching off analogue broadcasting services around 2015 and replacing them with digital broadcasting systems.
Second phase: This phase includes the completion of the two following projects:
1- First part: digitisation of the transmission network between production studios and the different broadcasting stations. The network consists of 41 transmission stations spread throughout the country. This step represents the first part of the digital terrestrial TV network, and its deployment was completed during the period 2008–2009. The cost of this project is 27 million dinars (including tax).
2- Second part: a national digital terrestrial TV broadcasting network to viewers, consisting of 17 DTTV stations spread throughout the country, to be conducted under a contract including a vendor-financing agreement with the Thomson Grass Valley (France) company. The project came into force in August 2009 and will be conducted during 2009–2010. Its cost is 13 million dinars (including tax).
Analogue to digital transition by countries
The broadcasting of digital terrestrial transmissions has led to many countries planning to phase out existing analogue broadcasts. This table shows the launches of DTT and the closing down of analogue television in several countries.
Official launch: The official launch date of digital terrestrial television in the country, not the start for trial broadcasts.
Start of closedown: The date for the first major closedown of analogue transmitters.
End of closedown: The date when analogue television is definitely closed down.
System: Transmission system, e.g. DVB-T, ATSC or ISDB-T.
Interactive: System used for interactive services, such as MHP and MHEG-5.
Compression: Video compression standard used. Most systems use MPEG-2, but the more efficient H.264/MPEG-4 AVC has become increasingly popular among networks launching later on. Some countries use both MPEG-2 and H.264, for example France, which uses MPEG-2 for standard-definition free content but MPEG-4 for HD broadcasts and pay services.
See also
Common Interface (CI)
Conditional-access module (CAM)
Digital television transition
Interactive television standards
List of digital television deployments by country
Multiplex
1seg
USB stick
WiB (Digital Terrestrial Television)
Notes
References
External links
The Future of Broadcast Television (FoBTV) is next month 20 March 2012
Future of Broadcast Television Summit Declares Global Goals for Future of Broadcasting 11 November 2011
IEEE Spectrum - Does China Have the Best Digital Television Standard on the Planet?
DigiTAG
The DVB Project - including data on DTT deployments worldwide
European Audiovisual Observatory
MAVISE database on TV channels and TV companies in the European Union
Worldwide overview of the digital terrestrial systems ATSC, DMB-T/H, DVB-T and ISDB-T: status of DTT in the world.
Digital Broadcasting, the Launching by Country Digital Broadcasting Experts Group (DiBEG)
Research in DTT
Schedule for the implementation of Digital TV in the world
Terrestrial |
497430 | https://en.wikipedia.org/wiki/NESSIE | NESSIE | NESSIE (New European Schemes for Signatures, Integrity and Encryption) was a European research project funded from 2000 to 2003 to identify secure cryptographic primitives. The project was comparable to the NIST AES process and the Japanese Government-sponsored CRYPTREC project, but with notable differences from both. In particular, there is both overlap and disagreement between the selections and recommendations from NESSIE and CRYPTREC (as of the August 2003 draft report). The NESSIE participants include some of the foremost active cryptographers in the world, as does the CRYPTREC project.
NESSIE was intended to identify and evaluate quality cryptographic designs in several categories, and to that end issued a public call for submissions in March 2000. Forty-two were received, and in February 2003 twelve of the submissions were selected. In addition, five algorithms already publicly known, but not explicitly submitted to the project, were chosen as "selectees". The project has publicly announced that "no weaknesses were found in the selected designs".
Selected algorithms
The selected algorithms and their submitters or developers are listed below. The five already publicly known, but not formally submitted to the project, are marked with a "*". Most may be used by anyone for any purpose without needing to seek a patent license from anyone; a license agreement is needed for those marked with a "#", but the licensors of those have committed to "reasonable non-discriminatory license terms for all interested", according to a NESSIE project press release.
None of the six stream ciphers submitted to NESSIE were selected because every one fell to cryptanalysis. This surprising result led to the eSTREAM project.
Block ciphers
MISTY1: Mitsubishi Electric
Camellia: Nippon Telegraph and Telephone and Mitsubishi Electric
SHACAL-2: Gemplus
AES*: (Advanced Encryption Standard) (NIST, FIPS Pub 197) (aka Rijndael)
Public-key encryption
ACE Encrypt#: IBM Zurich Research Laboratory
PSEC-KEM: Nippon Telegraph and Telephone Corp
RSA-KEM*: RSA key exchange mechanism (draft of ISO/IEC 18033-2)
MAC algorithms and cryptographic hash functions
Two-Track-MAC: Katholieke Universiteit Leuven and debis AG
UMAC: Intel Corp, Univ. of Nevada at Reno, IBM Research Laboratory, Technion Institute, and Univ. of California at Davis
CBC-MAC*: (ISO/IEC 9797-1)
EMAC: Berendschot et al.
HMAC*: (ISO/IEC 9797-2)
WHIRLPOOL: Scopus Tecnologia S.A. and K.U.Leuven
SHA-256*, SHA-384* and SHA-512*: NSA, (US FIPS 180-2)
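Several of the hash and MAC primitives above are implemented in common standard libraries. As a minimal illustrative sketch (not NESSIE reference code; the key and message here are made up for the example), the following Python uses the standard hashlib and hmac modules to compute a SHA-256 digest and an HMAC tag built on it:

    import hashlib
    import hmac

    message = b"example input"          # hypothetical message
    key = b"an example secret key"     # hypothetical key, for illustration only

    # SHA-256, one of the selected hash functions (US FIPS 180-2)
    digest = hashlib.sha256(message).hexdigest()

    # HMAC instantiated with SHA-256; HMAC was among the NESSIE selectees
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # Tags should be verified with a constant-time comparison
    assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
    print(digest, tag)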
Digital signature algorithms
ECDSA: Certicom Corp
RSA-PSS: RSA Laboratories
SFLASH: Schlumberger Corp (SFLASH was broken in 2007 and should not be used anymore).
Identification schemes
GPS-auth: Ecole Normale Supérieure, France Télécom, and La Poste
Other entrants
Entrants that did not get past the first stage of the contest include Noekeon, Q, Nimbus, NUSH, Grand Cru, Anubis, Hierocrypt, SC2000, and LILI-128.
Project contractors
The contractors and their representatives in the project were:
Katholieke Universiteit Leuven (Prime contractor): Bart Preneel, Alex Biryukov, Antoon Bosselaers, Christophe de Cannière, Bart Van Rompay
École Normale Supérieure: Jacques Stern, Louis Granboulan, Gwenaëlle Martinet
Royal Holloway, University of London: Sean Murphy, Alex Dent, Rachel Shipsey, Christine Swart, Juliette White
Siemens AG: Markus Dichtl, Marcus Schafheutle
Technion Institute of Technology: Eli Biham, Orr Dunkelman, Vladimir Furman
Université catholique de Louvain: Jean-Jacques Quisquater, Mathieu Ciet, Francesco Sica
Universitetet i Bergen: Lars Knudsen, Håvard Raddum
See also
ECRYPT
References
External links
The homepage of the NESSIE project
Cryptography contests
Cryptography standards
Research projects |
498851 | https://en.wikipedia.org/wiki/RSA%20Factoring%20Challenge | RSA Factoring Challenge | The RSA Factoring Challenge was a challenge put forward by RSA Laboratories on March 18, 1991 to encourage research into computational number theory and the practical difficulty of factoring large integers and cracking RSA keys used in cryptography. They published a list of semiprimes (numbers with exactly two prime factors) known as the RSA numbers, with a cash prize for the successful factorization of some of them. The smallest of them, a 100-decimal-digit number called RSA-100, was factored by April 1, 1991. Many of the bigger numbers have still not been factored and are expected to remain unfactored for quite some time; however, advances in quantum computers make this prediction uncertain due to Shor's algorithm.
In 2001, RSA Laboratories expanded the factoring challenge and offered prizes ranging from $10,000 to $200,000 for factoring numbers from 576 bits up to 2048 bits.
The RSA Factoring Challenges ended in 2007. RSA Laboratories stated: "Now that the industry has a considerably more advanced understanding of the cryptanalytic strength of common symmetric-key and public-key algorithms, these challenges are no longer active." When the challenge ended in 2007, only RSA-576 and RSA-640 had been factored from the 2001 challenge numbers.
The factoring challenge was intended to track the cutting edge in integer factorization. A primary application is for choosing the key length of the RSA public-key encryption scheme. Progress in this challenge should give an insight into which key sizes are still safe and for how long. As RSA Laboratories is a provider of RSA-based products, the challenge was used by them as an incentive for the academic community to attack the core of their solutions — in order to prove its strength.
The RSA numbers were generated on a computer with no network connection of any kind. The computer's hard drive was subsequently destroyed so that no record would exist, anywhere, of the solution to the factoring challenge.
The first RSA numbers generated, RSA-100 to RSA-500 and RSA-617, were labeled according to their number of decimal digits; the other RSA numbers (beginning with RSA-576) were generated later and labelled according to their number of binary digits. The numbers in the table below are listed in increasing order despite this shift from decimal to binary.
The mathematics
RSA Laboratories states that for each RSA number n, there exist prime numbers p and q such that
n = p × q.
The problem is to find these two primes, given only n.
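As a toy illustration of the task (using a small semiprime; the actual RSA challenge numbers are hundreds of digits long and far beyond trial division), a Python sketch that recovers p and q and verifies n = p × q might look like this:

    def factor_semiprime(n):
        """Recover the two prime factors of a semiprime n by trial division.
        Feasible only for tiny n; real RSA challenge numbers require
        algorithms such as the general number field sieve."""
        f = 2
        while f * f <= n:
            if n % f == 0:
                return f, n // f
            f += 1
        raise ValueError("no nontrivial factor found; n may be prime")

    n = 3233  # toy modulus: 53 * 61, not an actual RSA challenge number
    p, q = factor_semiprime(n)
    assert p * q == n
    print(p, q)  # 53 61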
The prizes and records
The following table gives an overview of all RSA numbers. Note that the RSA Factoring Challenge ended in 2007 and no further prizes will be awarded for factoring the higher numbers.
See also
RSA numbers, decimal expansions of the numbers and known factorizations
LCS35
The Magic Words are Squeamish Ossifrage, the solution found in 1993 to another RSA challenge posed in 1977
RSA Secret-Key Challenge
Integer factorization records
Notes
Integer factorization algorithms
Cryptography contests
1991 establishments in the United States |
499324 | https://en.wikipedia.org/wiki/Enigma | Enigma | Enigma or The Enigma may refer to:
Riddle, someone or something that is mysterious or puzzling
Biology
ENIGMA, a class of gene in the LIM domain
Computing and technology
Enigma (company), a New York-based data-technology startup
Enigma machine, a family of German electro-mechanical encryption machines
Enigma, the codename for Red Hat Linux 7.2
Film
Enigma (1982 film), a film starring Martin Sheen and Sam Neill
Enigma (2001 film), a film adapted from the Robert Harris novel
Enigma (2009 film), a short film by the Shumway Brothers
Literature
Enigma (novel), a 1995 novel by Robert Harris
Enigma (DC Comics), a DC Comics character
Enigma (Marvel Comics), a Marvel Comics character
Enigma (Vertigo), a title published by DC's imprint Vertigo
Enigma (manga), a 2010 manga published in Weekly Shōnen Jump
Enigma Cipher, a series from Boom! Studios
Enigma, a novel in The Trigon Disunity series by Michael P. Kube-McDowell
"Enigma" and "An Enigma", two poems by Edgar Allan Poe
Music
Enigma (German band), an electronic music project founded by Michael Cretu
Enigma (British band), a 1980s band
Enigma Records, an American rock and alternative record label in the 1980s
Enigma Variations, 14 variations composed by Edward Elgar
Albums
Enigma (Ill Niño album) (2008)
Enigma (Tak Matsumoto album) (2016)
Enigma (Keith Murray album) (1996)
Enigma (Aeon Zen album) (2013)
Songs
"Enigma (Give a Bit of Mmh to Me)", 1978, by Amanda Lear
"Enigma", 2007, by Amorphis from Silent Waters
"Enigma", 2002, by Trapt from Trapt
"Enigma", 2014, by Within the Ruins from Phenomena
"Enigma", 2020, by Lady Gaga from Chromatica
Places
Enigma, Georgia
Enigma, Tennessee
Enigma Peak, a mountain in Palmer Land, Antarctica
Television
Enigma (Derren Brown), a televised tour show
Enigma (TV series), a Biography Channel TV series
"Enigma" (NCIS), an episode of NCIS
"Enigma" (Stargate SG-1), an episode of Stargate SG-1
Enigma, a character from Nip/Tuck
Transport
Enigma (yacht), a private superyacht
Dynamic Sport Enigma, a Polish paraglider design
Enigma Motorsport, a British motor-racing team
Video games
Enigma (1998 video game)
Enigma (2007 video game)
Enigma: Rising Tide, a 2003 video game
Enigma, a character from Dota 2
"The Enigma", an episode of the video game Batman: The Enemy Within
Other uses
The Enigma (performer), American
Enigma (roller coaster), Pleasurewood Hills, Suffolk, England
Enigma - Museum for Post, Tele og Kommunikation, the national Danish postal museum
The Enigma, a monthly publication of the National Puzzlers' League
Enigma, mathematical puzzles published in New Scientist 1979-2013
The Enigma (diamond), a black diamond
See also
Ænigma (disambiguation)
Enigmata (disambiguation)
Enigmatic (disambiguation)
Lady Gaga Enigma, a concert residency
Publius Enigma, an unsolved Internet puzzle
23 enigma, a belief in the significance of number 23 |
506063 | https://en.wikipedia.org/wiki/Centrum%20Wiskunde%20%26%20Informatica | Centrum Wiskunde & Informatica | The Centrum Wiskunde & Informatica (abbr. CWI; English: "National Research Institute for Mathematics and Computer Science") is a research centre in the field of mathematics and theoretical computer science. It is part of the Netherlands Organisation for Scientific Research (NWO) and is located at the Amsterdam Science Park. The institute is famous as the creation site of the programming language Python. It was a founding member of the European Research Consortium for Informatics and Mathematics (ERCIM).
Early history
The institute was founded in 1946 by Johannes van der Corput, David van Dantzig, Jurjen Koksma, Hendrik Anthony Kramers, Marcel Minnaert and Jan Arnoldus Schouten. It was originally called Mathematical Centre (in Dutch: Mathematisch Centrum). One early mission was to develop mathematical prediction models to assist large Dutch engineering projects, such as the Delta Works. During this early period, the Mathematics Institute also helped with designing the wings of the Fokker F27 Friendship airplane, voted in 2006 as the most beautiful Dutch design of the 20th century.
The computer science component developed soon after. Adriaan van Wijngaarden, considered the founder of computer science (or informatica) in the Netherlands, was the director of the institute for almost 20 years. Edsger Dijkstra did most of his early influential work on algorithms and formal methods at CWI. The first Dutch computers, the Electrologica X1 and Electrologica X8, were both designed at the centre, and Electrologica was created as a spinoff to manufacture the machines.
In 1983, the name of the institute was changed to Centrum Wiskunde & Informatica (CWI) to reflect a governmental push for emphasizing computer science research in the Netherlands.
Recent research
The institute is known for its work in fields such as operations research, software engineering, information processing, and mathematical applications in life sciences and logistics.
More recent examples of research results from CWI include the development of scheduling algorithms for the Dutch railway system (the Nederlandse Spoorwegen, one of the busiest rail networks in the world) and the development of the Python programming language by Guido van Rossum. Python has played an important role in the development of the Google search platform from the beginning, and it continues to do so as the system grows and evolves.
Many information retrieval techniques used by packages such as SPSS were initially developed by Data Distilleries, a CWI spinoff.
Work at the institute was recognized by national or international research awards, such as the Lanchester Prize (awarded yearly by INFORMS), the Gödel Prize (awarded by ACM SIGACT) or the Spinoza Prize. Most of its senior researchers hold part-time professorships at other Dutch universities, with the institute producing over 170 full professors during the course of its history. Several CWI researchers have been recognized as members of the Royal Netherlands Academy of Arts and Sciences, the Academia Europaea, or as knights in the Order of the Netherlands Lion.
In February 2017, CWI, in association with Google, announced a successful collision attack on the SHA-1 hash algorithm.
European Internet
CWI was an early user of the Internet in Europe, in the form of a TCP/IP connection to NSFNET. Piet Beertema at CWI established one of the first two connections outside the United States to the NSFNET (shortly after France's INRIA) for EUnet on 17 November 1988. The first domain name issued under the Dutch country code top-level domain, .nl, was cwi.nl.
The Amsterdam Internet Exchange (one of the largest Internet Exchanges in the world, in terms of both members and throughput traffic) is located at the neighbouring SARA (an early CWI spin-off) and NIKHEF institutes. The World Wide Web Consortium (W3C) office for the Benelux countries is located at CWI.
Spin-off companies
CWI has demonstrated a continuing effort to put the work of its researchers at the disposal of society, mainly by collaborating with commercial companies and creating spin-off businesses. In 2000 CWI established "CWI Incubator BV", a dedicated company with the aim to generate high tech spin-off companies. Some of the CWI spinoffs include:
1956: Electrologica, a pioneering Dutch computer manufacturer.
1971: SARA, founded as a center for data processing activities for Vrije Universiteit Amsterdam, Universiteit van Amsterdam, and the CWI.
1990: DigiCash, an electronic money corporation founded by David Chaum.
1994: NLnet, an Internet Service Provider.
1994: General Design / Satama Amsterdam, a design company, acquired by LBi (then Lost Boys international).
1995: Data Distilleries, developer of analytical database software aimed at information retrieval, eventually becoming part of SPSS and acquired by IBM.
1996: Stichting Internet Domeinregistratie Nederland (SIDN), the .nl top-level domain registrar.
2000: Software Improvement Group (SIG), a software improvement and legacy code analysis company.
2008: MonetDB, a high-tech database technology company, developer of the MonetDB column-store.
2008: Vectorwise, an analytical database technology company, founded in cooperation with the Ingres Corporation (now Actian) and eventually acquired by it.
2010: Spinque, a company providing search technology for information retrieval specialists.
2013: MonetDB Solutions, a database services company.
2016: Seita, a technology company providing demand response services for the energy sector.
Software and languages
ABC programming language
Algol 60
Algol 68
Alma-0, a multi-paradigm computer programming language
ASF+SDF Meta Environment, programming language specification and prototyping system, IDE generator
Cascading Style Sheets
MonetDB
NetHack
Python programming language
RascalMPL, general purpose meta programming language
RDFa
SMIL
van Wijngaarden grammar
XForms
XHTML
XML Events
Notable people
Adrian Baddeley
Theo Bemelmans
Piet Beertema
Jan Bergstra
Gerrit Blaauw
Peter Boncz
Hugo Brandt Corstius
Stefan Brands
Andries Brouwer
Harry Buhrman
Dick Bulterman
David Chaum
Ronald Cramer
Theodorus Dekker
Edsger Dijkstra
Constance van Eeden
Peter van Emde Boas
Richard D. Gill
Jan Friso Groote
Dick Grune
Michiel Hazewinkel
Jan Hemelrijk
Martin L. Kersten
Willem Klein
Jurjen Ferdinand Koksma
Kees Koster
Monique Laurent
Gerrit Lekkerkerker
Arjen Lenstra
Jan Karel Lenstra
Gijsbert de Leve
Barry Mailloux
Massimo Marchiori
Lambert Meertens
Rob Mokken
Albert Nijenhuis
Steven Pemberton
Herman te Riele
Guido van Rossum
Alexander Schrijver
Jan H. van Schuppen
Marc Stevens
John Tromp
John V. Tucker
Paul Vitányi
Hans van Vliet
Marc Voorhoeve
Adriaan van Wijngaarden
Ronald de Wolf
Peter Wynn
References
External links
Amsterdam-Oost
Computer science institutes in the Netherlands
Edsger W. Dijkstra
Mathematical institutes
Members of the European Research Consortium for Informatics and Mathematics
Organisations based in Amsterdam
1946 establishments in the Netherlands
Research institutes in the Netherlands
Science and technology in the Netherlands |
506383 | https://en.wikipedia.org/wiki/Cryptosystem | Cryptosystem | In cryptography, a cryptosystem is a suite of cryptographic algorithms needed to implement a particular security service, such as confidentiality (encryption).
Typically, a cryptosystem consists of three algorithms: one for key generation, one for encryption, and one for decryption. The term cipher (sometimes cypher) is often used to refer to a pair of algorithms, one for encryption and one for decryption. Therefore, the term cryptosystem is most often used when the key generation algorithm is important. For this reason, the term cryptosystem is commonly used to refer to public key techniques; however both "cipher" and "cryptosystem" are used for symmetric key techniques.
Formal definition
Mathematically, a cryptosystem or encryption scheme can be defined as a tuple (P, C, K, E, D) with the following properties.
P is a set called the "plaintext space". Its elements are called plaintexts.
C is a set called the "ciphertext space". Its elements are called ciphertexts.
K is a set called the "key space". Its elements are called keys.
E = {E_k : k ∈ K} is a set of functions E_k : P → C. Its elements are called "encryption functions".
D = {D_k : k ∈ K} is a set of functions D_k : C → P. Its elements are called "decryption functions".
For each e ∈ K, there is d ∈ K such that D_d(E_e(p)) = p for all p ∈ P.
Note: typically this definition is modified in order to distinguish an encryption scheme as being either a symmetric-key or public-key type of cryptosystem.
Examples
A classical example of a cryptosystem is the Caesar cipher. A more contemporary example is the RSA cryptosystem.
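As a minimal sketch of the definition above, the Caesar cipher can be expressed as a tuple (P, C, K, E, D): plaintexts and ciphertexts are strings over the alphabet A-Z, the key space is the integers 0 to 25, and each decryption function inverts the corresponding encryption function. The Python below is illustrative only:

    import string

    ALPHABET = string.ascii_uppercase   # P and C: strings over A-Z
    KEYS = range(26)                    # K: the key space

    def E(k, plaintext):
        """Encryption function E_k: shift each letter forward by k."""
        return "".join(ALPHABET[(ALPHABET.index(c) + k) % 26] for c in plaintext)

    def D(k, ciphertext):
        """Decryption function D_k: shift each letter back by k."""
        return "".join(ALPHABET[(ALPHABET.index(c) - k) % 26] for c in ciphertext)

    # For each key e there is a key d (here d = e) with D_d(E_e(p)) = p for all p
    for k in KEYS:
        assert D(k, E(k, "ATTACKATDAWN")) == "ATTACKATDAWN"

    print(E(3, "HELLO"))  # KHOOR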
References
Cryptography |
509021 | https://en.wikipedia.org/wiki/Oyster%20card | Oyster card | The Oyster card is a payment method for public transport in London (and certain areas around it) in the United Kingdom. A standard Oyster card is a blue credit-card-sized stored-value contactless smart card. It is promoted by Transport for London (TfL) and can be used on travel modes across London including London Buses, London Underground, the Docklands Light Railway (DLR), London Overground, Tramlink, some river boat services, and most National Rail services within the London fare zones. Since its introduction in June 2003, more than 86 million cards have been used.
Oyster cards can hold period tickets, travel permits and, most commonly, credit for travel ("pay as you go"), which must be added to the card before travel. Passengers touch it on an electronic reader when entering and leaving the transport system in order to validate it or deduct funds from the stored credit. Cards may be "topped up" by recurring payment authority, by online purchase, at credit card terminals or by cash, the last two methods at stations or ticket offices. The card is designed to reduce the number of transactions at ticket offices and the number of paper tickets. Usage is encouraged by offering substantially cheaper fares than with cash, though the acceptance of cash is being phased out; on London buses, cash is no longer accepted.
The card was first issued to the public on 30 June 2003, with a limited range of features and there continues to be a phased introduction of further functions. By June 2012, over 43 million Oyster cards had been issued and more than 80% of all journeys on public transport in London were made using the card.
From September 2007 to 2010, Oyster functionality was trialled on Barclaycard contactless bank cards. Since 2014, the use of Oyster cards has been supplemented by contactless credit and debit cards as part of TfL's "Future Ticketing Programme". TfL was one of the first public transport providers in the world to accept payment by contactless bank cards, after the trams and buses of Nice, which began accepting NFC bank cards and smartphones on 21 May 2010, and the widespread adoption of contactless in London has been credited to this. TfL is now one of Europe's largest contactless merchants, with around 1 in 10 contactless transactions in the UK taking place on the TfL network.
Background
Precursor
Early electronic smartcard ticket technology was developed in the 1980s, and the first smartcard was tested by London Transport on bus route 212 from Chingford to Walthamstow in 1992. The trial showed that the technology was possible and that it would reduce boarding times. In February 1994, the "Smartcard" or "Smart Photocard" was launched and trialled in Harrow on 21 routes. Advertised as "the new passport to Harrow’s buses", the trial was the largest of its kind in the world, costing £2 million and resulting in almost 18,000 photocards issued to the Harrow public. It lasted until December 1995 and was a success, proving that the technology reduced boarding times, was easy to use, and was able to record entry and exit stops and calculate the corresponding fare, i.e. pay as you go. However, the Upass smartcard of the South Korean capital Seoul became the first such card to be rolled out officially, at the end of 1995, eight years before London launched the Oyster card.
Operator
The Oyster card was set up under a Private Finance Initiative (PFI) contract between Transport for London (TfL) and TranSys, a consortium of suppliers that included EDS and Cubic Transportation Systems (responsible for day-to-day management) and Fujitsu and WS Atkins (shareholders with no active involvement). The £100 million contract was signed in 1998 for a term of 17 years until 2015 at a total cost of £1.1 billion.
In August 2008, TfL decided to exercise a break option in the contract to terminate it in 2010, five years early. This followed a number of technical failures. TfL stated that the contractual break was to reduce costs, not connected to the system failures. In November 2008 a new contract was announced between TfL and Cubic and EDS for two of the original consortium shareholders to run the system from 2010 until 2013.
Brand
The Oyster name was agreed on after a lengthy period of research managed by TranSys and agreed by TfL. Two other names were considered and "Oyster" was chosen as a fresh approach that was not directly linked to transport, ticketing or London. Other proposed names were "Pulse" and "Gem". According to Andrew McCrum, now of Appella brand name consultants, who was brought in to find a name by Saatchi and Saatchi Design (contracted by TranSys), "Oyster was conceived ... because of the metaphorical implications of security and value in the hard bivalve shell and the concealed pearl. Its associations with London through Thames estuary oyster beds and the major relevance of the popular idiom "the world is your oyster" were also significant factors in its selection".
The intellectual property rights to the Oyster brand originally belonged to TranSys. Following the renegotiation of the operating contract in 2008, TfL sought to retain the right to use the Oyster brand after the termination of its partnership with Transys, eventually acquiring the rights to the brand in 2010 at a cost of £1 million.
Technology
The Oyster card has a claimed proximity range of about 80 mm (3 inches). The card operates as an RFID system and is compatible with ISO/IEC 14443 types A and B. Oyster readers can also read other types of cards, including Cubic Transportation Systems' Go cards. From its inception until January 2010, Oyster cards were based on NXP/Philips' MIFARE Classic 1k chips provided by Giesecke & Devrient, Gemalto and SchlumbergerSema. All new Oyster cards have used MIFARE DESFire EV1 chips since December 2009, and from February 2010 MIFARE Classic-based Oyster cards were no longer issued. MIFARE DESFire cards are now widely used as transport smartcards.
MIFARE Classic chips, on which the original Oyster card was based, are hard-wired logic smartcards, meaning that they have limited computing power designed for a specific task. The MIFARE DESFire chips used on the new Oyster card are CPUs with much more sophisticated security features and greater computational power. They are activated only when they are in an electromagnetic field compatible with ISO/IEC 14443 type A, provided by Oyster readers. The readers read information from the cards, calculate whether to allow travel, assess any fare payable and write information back to the card. Some basic information about the MIFARE Classic or MIFARE DESFire chip can be read by any ISO/IEC 14443 type A compatible reader, but Oyster-specific information cannot be read without access to the encryption used for the Oyster system. While it has been suggested that a good reader could read personal details from a distance, there has been no evidence of anyone being able to decrypt Oyster information. By design the cards do not carry any personal information. Aluminium shielding has been suggested to prevent any personal data from being read.
Oyster uses a distributed settlement framework. All transactions are settled between the card and reader alone. Readers transmit the transactions to the back office in batches but there is no need for this to be done in real time. The back office acts mainly as a record of transactions that have been completed between cards and readers. This provides a high degree of resilience.
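As an illustration of this settlement pattern, the Python sketch below shows a reader settling fares against a card locally and uploading completed transactions to the back office in batches; the class and method names are assumptions for the example, not TfL's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Card:
    balance: int  # pay-as-you-go balance, in pence

@dataclass
class Reader:
    pending: list = field(default_factory=list)  # transactions awaiting batch upload

    def touch(self, card: Card, fare: int) -> bool:
        # Settlement happens entirely between card and reader: the fare is
        # deducted locally, with no real-time call to the back office.
        if card.balance < fare:
            return False
        card.balance -= fare
        self.pending.append({"fare": fare, "new_balance": card.balance})
        return True

    def upload_batch(self, back_office: list) -> None:
        # Completed transactions are later sent to the back office in a batch;
        # the back office is a record of settlements, not a participant in them.
        back_office.extend(self.pending)
        self.pending.clear()

back_office: list = []
reader, card = Reader(), Card(balance=500)
reader.touch(card, 155)           # settled offline at the reader
reader.upload_batch(back_office)  # reconciled later, not in real time
```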
In 2008, a fashion caught on for removing the RFID chip from Oyster cards and attaching it to wrist watches and bracelets. This allowed commuters to pass through the gates by "swiping" their hand without the need to take out a proper card. Although the RFID chips were charged in the normal way and no fare evasion was involved, TfL disapproved of the practice and threatened to fine anyone not carrying a full undamaged card, although it is not clear what the actual offence would be, were a case to be brought.
Architecture
The Oyster system is based on a closed, proprietary architecture from Cubic Transportation Systems. The card readers were developed entirely by Cubic, whereas development of the back office systems was started by Fujitsu and completed by Cubic. The system has the capability to interface with equipment or services provided by other suppliers. The Oyster website is not part of the closed system but interfaces with it. Similarly, Oyster readers are now embedded into ticket machines produced by Shere and Scheidt & Bachmann on the national rail network.
In early 2007, TfL and Deloitte worked to migrate the on-line payment systems to a more open architecture, using a number of open source components such as Linux, to resolve issues of lock-in costs, updates, incorporation of new security standards of PCI DSS, non-scalability, low and inconsistent quality of service, and slower response time to business changes.
Features
Registration and protection
Oyster cards can be registered or protected for loss or theft. Full registration can be done at a London Underground station, an Oyster Ticket Stop (shop) or a Travel Information Centre: an Oyster registration form must be filled in (either at time of purchase or subsequently). Registration enables the customer to buy any product for the card and to have an after-sales service, and it protects against theft or loss. The customer has to supply a Security Answer: either their mother's maiden name, memorable date or memorable place.
All adult Oyster cards purchased online or by phone are fully registered. (This does not include Visitor Oyster cards.)
Oyster cards obtained at stations or shops cannot be fully registered online. However, cards can be protected online by setting up an Oyster online account and linking the card to that account. This gives full protection against theft or loss, but the Oyster card will only be able to hold 7-day season tickets and pay-as-you-go.
Sales
Oyster cards can be purchased from a number of different outlets in the London area:
Ticket machines at London Underground stations, which accept banknotes, coins, and credit and debit cards.
London Overground & TfL Rail ticket offices
Online, using the TfL website
Through the TfL app
Selected National Rail stations, some of which are also served by London Underground
Travel Information Centres
About 4,000 Oyster Ticket Stop agents (usually newsagent's shops)
By telephone sales from TfL.
Visitor Oyster cards can be obtained from Visit Britain outlets around the world and from other transport operators, such as EasyJet and Gatwick Express, as well as online and from any ticket office. However, these limited-functionality cards cannot be registered. A £5 deposit is required, which will be refunded in cash upon return of the card; any remaining credit on the card is refundable as well.
The cards were originally free, but a refundable deposit of £3 was subsequently introduced, increased to £5 in January 2011. The deposit and any unused credit are refundable by posting the card to TfL; however, refunds are paid only by pounds sterling cheque, bank transfer to a UK bank account, credit to another Oyster card, or a TfL web account voucher, and refunds of over £15 require the customer to provide proof of identity and address. Refunds of up to £10 in credit plus the deposit may be claimed at London Underground ticket machines, which will pay the refund in cash. Even though the £5 deposit is officially for the card itself, the ticket machine has no facility for taking the card back, so the customer departs the transaction still in possession of a (now useless) Oyster card. On cards issued since February 2020, the £5 deposit has become a card fee and will be repaid as credit to the card on the first transaction made more than a year after issue; it is no longer otherwise repayable. This is to encourage retention of cards.
A registration form can be obtained at or after the time of purchase, which if not completed restricts the Oyster card to Pay-as-you-go and weekly tickets.
Ticket vending machines on most National Rail stations will top-up Oyster cards and sell tickets that can be loaded on to Oyster. New Oyster cards are not available at most National Rail stations and termini. At several main line termini, TfL runs Travel Information Centres, which do sell Oyster.
Reporting
Touch-screen ticket machines report the last eight journeys and last top-up amount. The same information is available as a print-out from ticket offices, and also on board London Buses by request. The balance is displayed on some Underground barriers at the end of journeys that have caused a debit from the balance, and can also be requested at newsagents and National Rail stations that provide a top-up facility.
Oyster Online service can also deliver regular Travel Statements via email.
A complete 'touch' history covering the previous 8 weeks can be requested from TfL for registered and protected Oyster cards; history further back is not available.
Oyster online also displays up to 8 weeks of journey history.
Use
Touching in and out
Travellers touch the card on a distinctive yellow circular reader (a Tri-Reader, developed by Cubic Transportation Systems) on the automated barriers at London Underground stations to 'touch in' and 'touch out' at the start and end of a journey. Physical contact is not necessary, but the range of the reader is only a few millimetres. Tram stops have readers on the platforms, and buses have readers on the driver/conductor's ticket machine; on these modes passengers must touch their card to the reader at the start of their journey only. Most Docklands Light Railway stations, and some London Underground stations such as Waterloo (for the Waterloo & City line), do not have automatic barriers; hence, passengers must touch their card on a reader at both the beginning and end of their journey if they wish to avoid being charged the maximum fare for an unresolved journey. Such a step is not needed if transferring between trains within a station.
Season tickets
Oyster cards can be used to store season tickets of both travelcards and bus passes (of one week or more), and a Pay-as-you-go balance.
An Oyster card can hold up to three season tickets at the same time. Season tickets are Bus & Tram Passes or Travelcards lasting 7 days, 1 month, or any duration up to one year (annual).
There is no essential difference in validity or cost between a 7-day, monthly or longer period Travelcard on Oyster and one on a traditional paper ticket; they are valid on all Underground, Overground, DLR, bus, tram and national rail services within the zones purchased. See the main article for a fuller explanation of Travelcards. Tube, DLR and London Overground Travelcards may be used on buses in all zones. Trams may also be used if the travelcard includes Zones 3, 4, 5 or 6.
Although TfL asks all Oyster users to touch their card at the entry and exit points of their journey, in practice Travelcard holders only need to "touch in" and "touch out" to operate ticket barriers or when they intend to travel outside the zones for which their Travelcard is valid. As long as the Travelcard holder stays within their permitted zones, no fare will be deducted from the pay-as-you-go funds on the card. The Oyster system checks that the Travelcard is valid in the zones it is being used in.
Travel outside zones
If users travel outside the valid zones of their Travelcard (but within Oyster payment zones), any remaining fare due may be deducted from their pay-as-you-go funds (see below for how this is calculated). From 22 May 2011, Oyster Extension Permits (OEPs) were no longer required. Before that date, users who travelled outside the zones of their Travelcard, and whose journey involved the use of a National Rail service, were required to set an OEP on their Oyster card before travelling, to ensure that they paid for the extra-zonal journey.
Renewals
Oyster card Travelcards can be renewed at the normal sales points and ticket machines at London Underground or London Overground stations, Oyster Ticket Stop agents, or some National Rail stations. Travelcards can also be renewed online via the Oystercard website, or by telephone sales from TfL; users must then nominate a Tube or overground station where they will tap their card in order to charge the card with the funds or season ticket purchased. Alternatively a user can choose to automatically add either £20 or £40 every time the balance on the card falls below £20. Online purchases can be collected at any Oyster touch point (including buses) 30 minutes after purchase; the previous requirement to nominate a station at which to collect the top-up and wait until the next day has been removed.
Pay-as-you-go
In addition to holding Travelcards and bus passes, Oyster cards can also be used as stored-value cards, holding electronic funds of money. Amounts are deducted from the card each time it is used, and the funds can be "recharged" when required. The maximum value that an Oyster card may hold is £90. This system is known as "pay as you go" (abbreviated PAYG), because instead of holding a season ticket, the user only pays at the point of use.
When Oyster cards were introduced, the PAYG system was initially named "pre pay", and this name is still sometimes used by National Rail. TfL officially refers to the system as "pay as you go" in all publicity.
The validity of PAYG has a more complex history as it has only been gradually accepted by transport operators independent of TfL. Additionally, the use of PAYG differs across the various modes of transport in London, and passengers are sometimes required to follow different procedures to pay for their journey correctly.
It is possible to have a negative pay-as-you-go balance after completing a journey, but this will prevent the card from being used (even if it is loaded with a valid Travelcard) until the card is topped up.
Oyster route validators
In 2009, TfL introduced a new type of Oyster card validator, distinguished from the standard yellow validators by having a pink-coloured reader. They do not deduct funds, but are used at peripheral interchange points to confirm journey details. Oyster pay-as-you-go users travelling between two points without passing through Zone 1 are eligible for a lower fare, and from 6 September 2009 can confirm their route by touching their Oyster cards on the pink validators when they change trains, allowing them to be charged the appropriate fare without paying for Zone 1 travel. The pink validators are located at 15 interchange stations.
Gospel Oak
Gunnersbury
Highbury & Islington
Kensington Olympia
Rayners Lane
Stratford
West Brompton
Willesden Junction
Blackhorse Road
Wimbledon
Richmond
Whitechapel
Canada Water
Surrey Quays (introduced September 2013)
Clapham Junction (introduced September 2013)
An example journey would be Watford Junction to Richmond, which costs £5.00 peak and £3.10 off-peak when travelling via Zone 1. If travelling on a route outside Zone 1 via Willesden Junction, the fares are £4.10 and £1.80 respectively, which can be charged correctly if the Oyster card is validated at the pink validator when changing trains at Willesden Junction.
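A minimal sketch of this route-dependent charging, using the Watford Junction to Richmond fares quoted above; the table layout and function are illustrative, not TfL's fare engine.

```python
# Fares in pence for Watford Junction to Richmond, from the example above.
FARES = {
    ("via_zone1", "peak"): 500, ("via_zone1", "offpeak"): 310,
    ("avoiding_zone1", "peak"): 410, ("avoiding_zone1", "offpeak"): 180,
}

def fare(touched_pink_validator: bool, peak: bool) -> int:
    # Without a touch on an intermediate pink validator, the system has to
    # assume the (dearer) route through Zone 1.
    route = "avoiding_zone1" if touched_pink_validator else "via_zone1"
    return FARES[(route, "peak" if peak else "offpeak")]

assert fare(touched_pink_validator=True, peak=False) == 180   # via Willesden Junction
assert fare(touched_pink_validator=False, peak=False) == 310  # assumed via Zone 1
```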
Underground and DLR
Oyster card pay-as-you-go users must "touch in" at the start of a journey by London Underground or DLR, and "touch out" again at the end. The Oyster card readers automatically calculate the correct fare based on the start and end points of the journey and deduct that fare from the Oyster card. Pay-as-you-go funds are also used to cover any additional fares due from season ticket holders who have travelled outside the valid zones of their season ticket (see Travelcards above).
Passengers enter or exit most London Underground stations through ticket barriers which are operated by scanning an Oyster card or inserting a valid ticket. Some tube stations (such as those at National Rail interchanges) and DLR stations have standalone validators with no barriers. In both instances, pay-as-you-go users are required to touch in and out.
London Overground
London Overground services are operated by Arriva on behalf of TfL, and Oyster pay-as-you-go works in the same way as on Underground journeys: users touch their card on a card reader at the entry and exit points of their journey to calculate the fare due.
Buses
Users must touch the Oyster card only once at the point of boarding: as London buses have a flat fare of £1.55 (which allows for unlimited bus journeys within 62 minutes from the point of touching in), there is no need to calculate an end point of the journey.
In July 2014, cash ceased to be accepted for travel on London Buses, with TfL heavily promoting the use of a contactless card or Oyster card. All major contactless cards carrying the 'contactless symbol' are accepted; however, tourists are advised to check with their bank before travel for validity details.
As London buses do not accept cash payments, TfL introduced a "one more journey" incentive on Oyster cards. This means that customers can board a bus if their card has a balance of £0 or more. Doing so may result in a negative balance, but the card can be topped up at a later date. When using the 'one more journey' feature, customers receive an emergency fare advice slip to acknowledge that the feature has been used and to remind them that their card needs to be topped up before another journey can be made. It is estimated that by eliminating cash from buses, TfL will save £103m by 2023, which will be reinvested into the capital.
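The boarding rule just described can be sketched as follows; this is illustrative only, uses the £1.55 flat fare quoted earlier, and ignores the Hopper fare window.

```python
FLAT_FARE = 155  # bus/tram flat fare, in pence

def board_bus(balance: int) -> tuple[bool, int]:
    """Apply the 'one more journey' rule: a card with a non-negative balance
    may board even if the flat fare pushes it negative; a card already in
    the negative is refused until it is topped up."""
    if balance < 0:
        return False, balance
    return True, balance - FLAT_FARE

allowed, balance = board_bus(100)  # boards; balance is now -55p
allowed_again, _ = board_bus(balance)
assert allowed and not allowed_again  # refused until topped up
```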
Some London bus routes cross outside the Greater London boundary before reaching their terminus. Pay-as-you-go users are permitted to travel the full length of these routes on buses operated as part of the London Bus network, even to destinations some distance outside Greater London.
Trams
London's trams operate on the same fare structure as buses; the rules are similar, and users with pay-as-you-go must touch the Oyster card only once, at the point of boarding. Users with Travelcards valid for the Tramlink zones need not touch in unless travelling to Wimbledon with a Travelcard not valid in zone 3.
A more complex arrangement exists at Wimbledon station: tram passengers starting their journey there must pass through ticket gates in order to reach the tram platform, and therefore need to touch their Oyster card to open the barriers. They must then touch their Oyster card once again on the card reader on the Tramlink platform to confirm their journey as a tram passenger. Tram passengers arriving in Wimbledon must not touch out on the card reader on the Tramlink platform, but must touch out to exit via the station gates. If the card is touched on the platform, the touch out at the gate would be seen as a touch in and cause the maximum fare to be charged to the card.
River
Passengers boarding a Thames Clippers riverbus service must tap their Oyster card on the reader situated on the pier before boarding. Thames Clippers operates a pay-before-boarding policy.
Emirates Air Line
Oyster cards are accepted on the Emirates Air Line cable route between Greenwich Peninsula and Royal Docks. The Emirates Air Line is outside London Travelcard validity. However, a 25% discount applies to Travelcard and Freedom Pass holders for both single and return journeys. The discount is automatically applied for Oyster card users, but only if their Travelcard is loaded onto their Oyster card. Freedom Pass holders and visitors in possession of ordinary magnetic stripe Travelcards have to buy a cash ticket if they wish to take advantage of the discount.
National Rail
As with Underground and DLR journeys, Oyster PAYG users on National Rail must tap their card at the start and end of the journey to pay the correct fare. PAYG funds may also be used to cover any additional fares due from season ticket holders who have travelled outside the valid zones of their season ticket (see Travelcards above).
Many large National Rail stations in London have Oystercard-compatible barriers. At other smaller stations, users must touch the card on a standalone validator.
Out-of-Station Interchange (OSI)
At a number of Tube, DLR, London Overground and National Rail stations which lie in close proximity, or where interchange requires passengers to pass through ticket barriers, an Out-of-Station Interchange (OSI) is permitted. In such cases, the card holder touches out at one station and then touches in again before starting the next leg of the journey. The PAYG fares are then combined and charged as a single journey. Examples include transferring between the Jubilee line at Canary Wharf and the DLR where Oyster card holders must tap their card at the ticket barriers in the Tube station, and then touch in on the validator at the DLR station. Balham (National Rail) to/from Balham (Tube) is another OSI, as is Camden Town (Tube) to/from Camden Road (London Overground). Failure to touch in or out on the validators in these circumstances will incur a maximum fare which is deducted from PAYG funds. In some cases (e.g. at West Hampstead NR stations) the OSI replicates interchanges which have existed for several decades before the invention of the Oyster system but were generally used with season tickets rather than day tickets.
Out-of-Station Interchanges can be temporary or permanent. A temporary arrangement may exist between two stations at short notice (routinely during weekend work, but also when an emergency closure occurs). The two journeys that result are then charged as a single journey.
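A simplified sketch of how touch pairs might be merged at an OSI; the station names, the 20-minute transfer limit and the data layout are assumptions for illustration, and real transfer limits vary by station pair.

```python
OSI_PAIRS = {("Canary Wharf LU", "Canary Wharf DLR")}  # assumed linked stations
OSI_LIMIT = 20  # assumed transfer window, in minutes

def merge_journeys(legs):
    """legs: list of ((entry_station, t_in), (exit_station, t_out)) in time
    order. An exit followed by a re-entry at a linked station within the
    transfer window is collapsed into one through journey for charging."""
    merged = [legs[0]]
    for (entry, t_in), (exit_, t_out) in legs[1:]:
        prev_exit, prev_t_out = merged[-1][1]
        if (prev_exit, entry) in OSI_PAIRS and t_in - prev_t_out <= OSI_LIMIT:
            merged[-1] = (merged[-1][0], (exit_, t_out))  # continue the journey
        else:
            merged.append(((entry, t_in), (exit_, t_out)))
    return merged

legs = [(("Westminster", 0), ("Canary Wharf LU", 15)),
        (("Canary Wharf DLR", 20), ("Greenwich", 35))]
assert merge_journeys(legs) == [(("Westminster", 0), ("Greenwich", 35))]
```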
Recharging
When the PAYG balance runs low, it can be topped up at the normal sales points or ticket machines at London Underground or London Overground stations, Oyster Ticket Stops or some National Rail stations. All ticket offices at stations run by London Underground will sell or recharge Oyster cards, or handle Oyster card refunds. However, some Tube stations are actually operated by National Rail train operating companies, and their ticket offices will not deal with Oyster refunds. The DLR has no ticket offices selling Oyster top-ups or handling refunds (its stations are usually unmanned), except for the information office at London City Airport.
PAYG funds and Travelcard season tickets (but not Bus & Tram Passes) can also be purchased online via the Oyster online website or by calling the Oyster helpline. The top-up can be collected 30 minutes later by touching in or out as part of a normal journey at any station or on any bus. There is no requirement to select a specific station nor to wait until the next day, which was the case in the past.
For further information on recharging and renewals, see the section on Renewals in this article.
Auto top-up
Customers can set up and manage Auto top-up online for their existing Oyster card. They register a debit or credit card, make a PAYG top-up purchase (minimum £10) and select either £20 or £40 as the Auto top-up amount.
Alternatively, a new Oyster card with Auto top-up and a minimum of £10 pay as you go can be ordered via Oyster online.
There is a design constraint that requires a journey to be made via a nominated station before Auto top-up can be enabled. For a number of services, such as Thames Clippers, this initiation transaction is not offered.
Whenever the pay as you go balance falls below £10, £20 or £40 is added to the balance automatically when the Oyster card is touched on an entry validator. A light on the Oyster reader flashes to indicate the Auto top-up has taken place and an email is sent to confirm the transaction. Payment is then taken from the registered debit or credit card.
To ensure successful transactions, customers must record any changes to their billing address and update their debit or credit card details as necessary.
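Using the threshold and amounts given above, the Auto top-up rule amounts to something like the following sketch; this is illustrative only, and the billing of the registered payment card is out of scope here.

```python
def touch_in(balance: int, auto_topup: int | None) -> int:
    """Apply Auto top-up at an entry validator: when the balance is below
    £10, the chosen amount (£20 or £40, in pence) is added; the registered
    debit or credit card is billed afterwards and an email is sent."""
    THRESHOLD = 1000  # £10, in pence
    if auto_topup in (2000, 4000) and balance < THRESHOLD:
        balance += auto_topup  # the reader's light flashes to signal the top-up
    return balance

assert touch_in(950, 2000) == 2950   # below threshold: £20 added
assert touch_in(1200, 2000) == 1200  # at or above threshold: unchanged
```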
Oyster photocards
Oyster photocards, with an image of the authorised user on the card front, are issued to members of groups eligible for free or discounted travel. The cards are encoded to offer discounted fares. They are available for students in full-time education (30% off season tickets); as 16+ cards (half the adult rate for single journeys on the Underground, London Overground, DLR and a limited number of National Rail services, discounted period Travelcards, and free travel on buses and trams for students who live and attend full-time education in London); and for children under 16 years old (free travel on buses and trams and discounted single fares on the Underground, London Overground, DLR and most National Rail services). A 'Bus & Tram' Discount Card is specifically given to disadvantaged and 'unwaged' groups, primarily those on Jobseeker's Allowance, Employment and Support Allowance and a variety of disability allowances, at half-fare rates for bus and tram services only; these cards simply charge the full rate on journeys not included in the discount scheme.
Student cards
Student Oyster photocards, offering a 30% discount on period tickets, are available to full-time students over 18 at registered institutions within the area of the M25 motorway, an area slightly larger than Greater London, at a cost of £20. Until the 2009–10 academic year, they cost £5 but required replacing each year of multiple-year courses. There is no discount for pay-as-you-go, although many students hold the National Rail 16–25 Railcard, which can be added to an Oyster card at an Underground station ticket office to obtain a 1/3 reduction on off-peak caps and a 1/3 discount on off-peak Oyster single fares on all rail services. (NB: peak National Rail fares may be cheaper with discounted paper tickets.) A small selection of universities outside London have also registered on the scheme.
A replacement for lost or stolen cards costs £10 and involves applying for a replacement card online or by calling the Oyster helpline. A new photograph is not required. The funds and remaining Travelcard are transferable to the new student Oyster photocard.
Since 8 September 2006, students at some London universities have been able to apply for their 18+ Oyster photocard online by uploading a digital photograph and paying with a credit or debit card.
Zip cards
On 7 January 2008, Transport for London unveiled the Zip card, an Oyster photocard to be used by young people aged 18 years or under who qualify for free bus and tram travel within the capital, with effect from 1 June 2008. To qualify, one must live in a London borough (and still be in full-time education if aged 18). Children outside London (and indeed the UK) may also apply online for a Visitor version of the Zip card (which offers free bus and tram travel for under-16s, and half-rate fares for 16–18-year-olds), which they must collect from one of TfL's Travel Information Centres. From 1 September 2010 a fee of £10–15 (dependent on age) has been charged for the card.
Freedom Passes and 60+ Oyster Cards
Freedom Passes are generally issued on what is in technical terms an Oyster card, though it does not bear that name. Freedom Passes are free travel passes available to Greater London residents who are over a specified age (60 until March 2010, increasing in phases to 66 from March 2020) or who have a disability specified in the Transport Act 2000; individual London boroughs have discretion to issue Freedom Passes exceptionally to disabled people who do not meet the national statutory requirements (though they have to fund them). Travel is free at all times on the Tube, DLR, buses and Tramlink, and after 09:30 on most National Rail journeys entirely within the Greater London boundary. Holders cannot put any money or ticket products on a Freedom Pass; to travel outside these times, a separate Oyster card or other valid ticket is required.
Residents who are over 60 but who do not qualify for a Freedom Pass can obtain a similar 60+ Oyster Card for a single fee. The outer boundary of the area in which Freedom Passes and 60+ Oyster Cards can be used is mostly the same as the area within which ordinary Oyster cards can be used. Oyster PAYG cards can be used to Broxbourne station, but Freedom Passes and 60+ Oyster cannot be used north of or stations; this is solely because National Express East Anglia Railways took the decision to accept Oyster PAYG only as far as Broxbourne. Cards also have to be visually inspected on any non-TfL buses whose routes accept the concessionary cards on journeys partly entering Greater London, including routes equipped with readers that accept the national standard ITSO bus pass cards, with which Oyster is not compatible.
Freedom Passes issued to qualifying persons are also an English National Concessionary Bus Pass. They look identical to concessionary bus passes but are additionally marked "Freedom Pass", with the word "Pass" in red. Unlike the Freedom Pass, the 60+ Oyster card is not valid for concessionary travel outside the area approved by the Greater London Authority. This is because the concessionary bus travel scheme is centrally funded by government, whereas the validity of the 60+ Oyster card and the Freedom Pass on Tube, tram and rail networks is funded locally by the Greater London Authority.
Oyster and credit card
A credit card variant of the Oyster card was launched by Barclaycard in September 2007 and is called OnePulse. The card combines standard Oyster card functionality with Visa credit card facilities. The Barclaycard OnePulse incorporates contactless payment technology, allowing most transactions up to £20 to be carried out without the need to enter a PIN (unlike the Chip and PIN system).
In 2005, Transport for London shortlisted two financial services suppliers, Barclaycard and American Express, to add e-money payment capability to the Oyster card. Barclaycard was selected in December 2006 to supply the card, but the project was then temporarily shelved. The OnePulse card was later launched using a combination of Oyster and Visa, but with no e-money functionality.
In February 2014, Barclaycard announced that the OnePulse card would be withdrawn from use and all functionality would cease after 30 June 2014. This came about because Oyster readers would now also recognise contactless cards, and the presence of both on one card would cause 'card clash'. Customers had their OnePulse card replaced with the Freedom Rewards credit card.
Validity
A number of different ticket types can be held on an Oyster card, and validity varies across the different transport modes within London.
TfL services
Oyster is operated by Transport for London and has been valid on all London Underground, London buses, DLR and London Tramlink services since its launch in 2003.
National Rail
Oyster pay as you go was introduced on the National Rail commuter network in London in phases over a period of about six years (see Roll-out history). Since January 2010, PAYG has been valid on all London suburban rail services which accept Travelcards. Additionally, PAYG may be used at a selected number of stations which lie just outside the zones. New maps were issued in January 2010 which illustrate where PAYG is now valid.
Certain limitations remain on National Rail, however: airport services such as the Stansted Express and Thameslink's Luton Airport services run outside the Travelcard zones, so PAYG is not valid on them.
Heathrow Express has accepted Oyster pay as you go since 19 February 2019.
In November 2007, the metro routes operated by Silverlink were brought under the control of TfL and operated under the brand name London Overground. From the first day of operation, Oyster PAYG became valid on all Overground routes.
Oyster cards and contactless cards have been accepted on many Southern, Gatwick Express and Thameslink services since early 2016, including to Gatwick Airport station and five other Surrey railway stations, as well as to Luton Airport.
London River Services
Since 23 November 2009, Oyster PAYG has been valid on London River Services boats operated by Thames Clippers only. Oyster cards are accepted for all Thames Clippers scheduled services, the DoubleTree Docklands ferry, the "Tate to Tate" service and the O2 Express. Discounts on standard fares are offered to Oyster cardholders, except on the O2 Express. The daily price capping guarantee does not apply to journeys made on Thames Clippers.
Emirates Air Line
Oyster card holders (PAYG, Travelcard or Freedom Pass) receive discounts on the Emirates Air Line cable car service across the River Thames between Greenwich and the Royal Docks, which opened in June 2012. Like London River Services, the cable car is a privately funded concern and is not fully integrated into TfL's ticketing system. To encourage use of the Air Line as a commuter service, substantial discounts are offered with a "frequent flyer" ticket which allows 10 journeys within 12 months.
Pricing
The pricing system is fairly complex and changes from time to time. The most up-to-date fares can be found on Transport for London's FareFinder website.
Adult single fares
Cash is no longer accepted on London's buses and trams and, in order to encourage passengers to use Oyster or contactless, cash fares for tubes and trains are generally much more expensive than PAYG fares. A contactless debit or credit card can be used in place of an Oyster card at the same fare.
The single Oyster fare for a bus or tram journey is £1.55, although the Hopper fare rules allow unlimited bus and tram journeys within one hour of first touching in for no additional cost. Passengers need to touch in using the same card on all the bus and tram journeys made and any free fares are applied automatically.
Using PAYG, a single trip on the tube within zone 1 costs £2.40 (compared to £5.50 if paid by cash). Journeys within any other single zone cost £1.80 at peak times and £1.60 off peak (£5.50 for cash at any time). Journeys in multiple zones are progressively more expensive.
The zoned fare system under which Oyster operates inevitably gives rise to some quirks in the fares charged. A 21-stop journey between Stratford and Clapham Junction on the Overground is charged at £1.80 at peak times (£1.60 off peak), whereas a one-stop journey between Whitechapel and Shoreditch High Street on the Overground costs £2.40 at all times. This occurs because Shoreditch High Street is the only station in zone 1 on its line, whereas the entire Stratford to Clapham Junction line runs in zone 2 only. The cash fare is £5.50 in both cases and at all times. Similar anomalies are a feature of zoned fare systems worldwide.
Fare capping
A 'capping' system was introduced on 27 February 2005, under which an Oyster card is charged no more than the nearest equivalent Day Travelcard for a day's travel. The daily cap is £7.40 within zones 1–2 and £13.25 within zones 1–6, provided no penalty fares are incurred for failure to touch in or out, or for touching in and out at the same station. A lower cap of £4.65 applies if the day's journeys are restricted to buses and trams only.
Price capping does not apply to PAYG fares on London River Services boats and on Southeastern high speed train services.
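In outline, the capping rule works like the sketch below, using the cap values above; this is illustrative and omits the real system's zone and peak/off-peak logic.

```python
CAPS = {"zones_1_2": 740, "zones_1_6": 1325, "bus_tram_only": 465}  # pence

def charge(fare: int, spent_today: int, cap_key: str) -> int:
    """Charge at most the amount still remaining under the day's cap.
    (Penalty and maximum fares sit outside the cap and bypass this.)"""
    remaining = max(0, CAPS[cap_key] - spent_today)
    return min(fare, remaining)

# A zones 1-2 tube trip costing £2.40 with £4.80 already spent is charged in full:
assert charge(240, 480, "zones_1_2") == 240
# Once £7.40 has been spent, further capped journeys that day are free:
assert charge(240, 740, "zones_1_2") == 0
```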
Railcard discount
Holders of Disabled Persons, HM Forces, Senior, 16–25 and 26–30 National Rail Railcards and Annual Gold Cards (as of 23 May 2010) receive a 34% reduction in off-peak PAYG fares and the price cap; Railcard discounts can be loaded at London Underground ticket machines (with help from a member of staff).
Bus and tram discount
On 20 August 2007, a 'Bus and Tram Discount photocard' was launched for London Oyster card users who received Income Support. It allows cardholders to pay £0.75 for a single bus journey (capped at £2.25 per day), and to buy half price period bus passes.
This was originally the result of a deal between Transport for London and Petróleos de Venezuela to provide fuel for London Buses at a 20% discount. In return Transport for London agreed to open an office in the Venezuelan capital Caracas to offer expertise on town planning, tourism, public protection and environmental issues. The deal with Venezuela was ended by Mayor Boris Johnson shortly after he took office, and the Bus and Tram Discount photocard scheme closed to new applications on 20 August 2008; Johnson said that "TfL will honour the discount [on existing cards] until the six-month time periods on cards have run out".
The Bus and Tram Discount Scheme reopened on 2 January 2009, this time funded by London fare payers. The scheme has been extended to people receiving Employment and Support Allowance (ESA) and to those receiving Jobseeker's Allowance for 13 weeks or more.
River Bus discounts
Boats operated by Thames Clippers offer a 10% discount on standard fares to Oyster PAYG users, except on their O2 Express service, and a 1/3 discount to passengers carrying Oyster cards which have been loaded with a valid period Travelcard.
Penalty fares and maximum Oyster fare
In order to prevent "misuse" by a stated 2% of passengers, from 19 November 2006 pay as you go users are automatically charged the "maximum Oyster fare" for a journey on that network when they touch in. Depending on the journey made, the difference between this maximum fare and the actual fare due is automatically refunded to the user's Oyster card upon touching out. The maximum fare is automatically charged to a passenger who touches out without having first touched in. Two maximum fares are charged (one for touching in, one for touching out) if a passenger touches in at a station, waits for over twenty minutes, and then touches out at the same station, because the system assumes that the passenger has been able to travel to another station in that time, taking no account of situations where there are severe delays.
Users must touch in and out even if the ticket barriers are open. At stations where Oyster is accepted but that do not have ticket barriers, an Oyster validator will be provided for the purposes of touching in and out. The maximum Oyster fare applies even if the daily price cap has been reached as this does not count towards the cap.
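The mechanism described above amounts to charging on entry and refunding on exit, roughly as in this sketch; the maximum-fare figure is an assumption for illustration, as the real amount depends on the network and fare tables.

```python
MAX_FARE = 890  # assumed maximum Oyster fare, in pence

def touch_in(balance: int) -> int:
    # Entry: the maximum fare is deducted up front.
    return balance - MAX_FARE

def touch_out(balance: int, actual_fare: int) -> int:
    # Exit: the difference between the maximum and the actual fare is refunded.
    return balance + (MAX_FARE - actual_fare)

balance = touch_in(2000)           # 2000 - 890 = 1110
balance = touch_out(balance, 240)  # refund of 650 leaves a net charge of 240
assert balance == 2000 - 240
```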
Maximum Oyster fares may be contested by telephone to the Oyster helpline on 0343 222 1234 or via email. This involves providing the Oyster card number and the relevant journey details; further journeys appearing on the card are helpful to validate the user's claim.
If the claim is accepted then the maximum Oyster fare minus the cost of the journey will be refunded. The user will be asked to nominate and make a journey from a specific Tube, DLR, London Overground or National Rail station, or Tram stop. On touching in or out, the refund is loaded to the card. The only way to collect a refund is as part of an actual journey, otherwise a further penalty fare is charged. This is because when the passenger touches the reader with their Oyster card, not only will the refund go on to the card, but a new journey will start.
The start date to pick up the refund can be the next day (at the earliest) and the refund will remain at the nominated station for 8 days in total. The customer does have the option to delay the start date for up to 8 days, and the refund will still remain at the gate for up to 8 further days. After this time the refund will be deleted from the gate line, and the customer will have to re-request the refund.
Customers claiming a refund must do so within 28 days of the overcharge.
Oyster users who do not touch in before making a journey may be liable to pay a penalty fare (£80) and/or reported for prosecution if caught by a revenue protection inspector.
Refunds for delayed journeys
Commuters who are delayed by 15 minutes or more on the Tube and DLR, or 30 minutes or more on London Overground and TfL Rail, are eligible to claim a refund for the cost of their journey. Commuters with Travelcards, who do not pay for individual journeys, are refunded the pay as you go price of the single delayed journey. Customers wishing to claim these refunds must create an online TfL account, and then either manually claim online each time they are delayed, or use the free Train Reeclaim tool, which automatically detects delayed TfL journeys and claims a refund on behalf of the commuter for each one.
Roll-out history
The roll-out of Oyster features and migration from the paper-based system has been phased. Milestones so far have been:
London Underground ticket barriers, bus ticket machines, Docklands Light Railway stations and Tramlink stops fitted with validators. Cards issued to Transport for London, London Underground, and bus operator staff (2002)
Cards issued to the public for annual and monthly tickets (2003)
Freedom Passes issued on Oyster (2004)
Pay as you go (PAYG, first called 'prepay') launched on London Underground, DLR, and the parts of National Rail where Underground fares had previously been valid. (January 2004)
Off-Peak Oyster single fares launched (January 2004)
Annual tickets available only on Oyster (2004)
Monthly tickets available only on Oyster, unless purchased from a station operated by a train company rather than TfL (2004)
PAYG on buses (May 2004)
Daily price capping (February 2005)
Student Oyster Photocards for students over 18 (early 2005)
Oyster Child Photocards for under 16s—free travel on buses and reduced fares on trains (August 2005)
Automatic top-up (September 2005)
Weekly tickets available only on Oyster (September 2005)
Oyster single fares cost up to 33% less than paper tickets (January 2006)
Auto top-up on buses and trams (June 2006)
Journey history for Pay as you go transactions available online (July 2006)
Ability for active and retired railway staff who have a staff travel card to obtain privilege travel fares on the Underground with Oyster (July 2006)
£4 or £5 'maximum cash fare' charged for Pay as you go journeys without a 'touch in' and 'touch out' (November 2006)
'Oyster card for visitors' branded cards launched and sold by Gatwick Express.
Oyster PAYG extended to London Overground (11 November 2007)
Holders of Railcards (but not the Network Railcard) can link their Railcard to Oyster to have PAYG capped at 34% below the normal rate (2 January 2008)
Oyster PAYG can be used to buy tickets on river services operated by Thames Clippers (23 November 2009)
Oyster PAYG extended to National Rail (2 January 2010)
Contactless cards can be used on London Buses (End of 2012)
Cash no longer accepted on buses. Cash ticket machines removed from bus stops in central London (Summer 2014)
Contactless cards can be used on London Underground, Docklands Light Railway, London Overground and National Rail service. Weekly capping introduced on contactless cards. (September 2014)
Apple Pay accepted, with Android Pay and Samsung Pay following later (July 2015)
'One Day Bus and Tram Pass' introduced: valid for a maximum of one day, cannot be reloaded with credit, and allows the user unlimited journeys on buses and trams (March 2015)
Online top-ups ready to collect at any station or on any bus within 30 minutes; previously users had to nominate a station and collect the next day, and collection on buses was unavailable (July 2017)
Official TfL Oyster card app introduced for iOS and Android devices (August 2017)
'Hopper fare' introduced, whereby users can make two journeys for £1.50 within one hour; this was later improved with the ability to make unlimited journeys within one hour for the same fare (January 2018)
Roll-out on National Rail
The National Rail network is mostly outside the control of Transport for London, and passenger services are run by a number of independent rail companies. Because of this, acceptance of Oyster PAYG on National Rail services was subject to the policy of each individual company, and the roll-out of PAYG was much slower than on TfL services. For the first six years of Oyster, roll-out on National Rail was gradual and uneven, with validity limited to specific lines and stations.
Several rail companies had already accepted London Underground single fares on sections of route that duplicate London Underground lines, and they adopted Oyster PAYG on those sections. When TfL took over the former Silverlink Metro railway lines, PAYG was rolled out on the first day of operation of London Overground. As a consequence, some rail operators whose services run parallel to London Overground lines were forced to accept PAYG, although only after some initial hesitation.
Examples of these services include London Midland trains from to and Southern trains to .
The process of persuading the various rail firms involved a long process of negotiation between the London Mayors and train operating companies. In 2005 Ken Livingstone (then Mayor of London) began a process of trying to persuade National Rail train operating companies to allow Oyster PAYG on all of their services within London, but a dispute about ticketing prevented this plan from going ahead.
After further negotiations, Transport for London offered to fund the train operating companies with £20m to provide Oyster facilities in London stations; this resulted in an outline agreement to introduce PAYG acceptance across the entire London rail network.
TfL announced a National Rail rollout date of May 2009, but negotiation with the private rail firms continued to fail and the rollout was delayed to 2010. Oyster readers were installed at many National Rail stations across London, but they remained covered up and not in use. In November 2009 it was finally confirmed that PAYG would be valid on National Rail from January 2010. The rollout was accompanied by the introduction of a new system of Oyster Extension Permits to allow travelcard holders to travel outside their designated zones on National Rail. This system was introduced to address the revenue protection concerns of the rail companies, but it was criticised for its complexity, and was abolished on 22 May 2011.
Impact
Since the introduction of the Oyster card, the number of customers paying cash fares on buses has dropped dramatically. In addition, usage of station ticket offices has dropped, to the extent that in June 2007, TfL announced that a number of their ticket offices would close, with some others reducing their opening hours. TfL suggested that the staff would be 're-deployed' elsewhere on the network, including as train drivers.
In August 2010 the issue of the impact of the Oyster card on staffing returned. In response to The National Union of Rail, Maritime and Transport Workers (RMT) ballot for a strike over planned job cuts, TfL stated that the increase in people using Oyster electronic ticketing cards meant only one in 20 journeys now involved interaction with a ticket office. As a result, it aims to reduce staff in ticket offices and elsewhere while deploying more workers to help passengers in stations.
Usage statistics
By June 2010, over 34 million cards had been issued, of which around 7 million were in regular use. More than 80% of all Tube journeys and more than 90% of all bus journeys used Oyster. Around 38% of all Tube journeys and 21% of all bus journeys were made using Oyster pay as you go. Use of single tickets had declined, standing at roughly 1.5% of all bus journeys and 3% of all Tube journeys.
Since the launch of contactless payment in 2012, over 500 million journeys have been made using contactless, using over 12 million contactless bank cards.
Future
Beyond London
Since January 2010, Oyster PAYG is valid at c2c stations , , and in Thurrock (Essex).
On 2 January 2013, Oyster PAYG was extended to (the terminus of the future Crossrail service) and by Abellio Greater Anglia.
With regard to London's airports, TfL and BAA studied acceptance of Oyster pay as you go on BAA's Heathrow Express service and the Southern-operated Gatwick Express service in 2006, but BAA decided not to go ahead. However, Oyster has been valid to Gatwick Airport on Gatwick Express, Southern and Thameslink services since January 2016.
Oyster was extended to Hertford East when London Overground took over suburban services previously operated by Greater Anglia in May 2015.
Oyster was extended to Epsom, Hertford North, Potters Bar and Radlett in Summer 2019.
Contactless payment
In 2014, Transport for London became the first public transport provider in the world to accept payment from contactless bank cards. TfL first started accepting contactless debit and credit cards on London Buses on 13 December 2012, expanding to the Underground, Tram and the Docklands Light Railway in September 2014. Since 2016, contactless payment can also take place using contactless-enabled mobile devices such as phones and smartwatches, using Apple Pay, Google Pay and Samsung Pay.
TfL designed and coded the contactless payment system in-house, at a cost of £11m, after realising existing commercial solutions were inflexible or too focused on retail use. Since the launch of contactless payment in 2012, over 500 million journeys have been made, using over 12 million contactless bank cards. Consequently, TfL is now one of Europe's largest contactless merchants, with around 1 in 10 contactless transactions in the UK taking place on the TfL network.
In 2016, TfL licensed their contactless payment system to Cubic, the original developers of the Oyster card, allowing the technology to be sold to other transport providers worldwide. In 2017, licensing deals were signed with New York City, New South Wales and Boston.
The same requirement to touch in and out on underground services applies to contactless cards. The same price capping that applies to the use of Oyster cards applies to the use of contactless cards (provided the same card is used for the day's journeys). The fare paid every day is settled with the bank and appears on the debit or credit card statement. Detailed usage data is written to Transport for London's systems and is available for customers who register their contactless cards with Transport for London. Unlike an Oyster card, a contactless card does not store credit (beyond the holder's credit limit) and there is no need or facility to add credit to the card.
An Oyster card can have a longer-term "season" ticket loaded onto it (either at a ticket office or online). Such a ticket can start on any day and be valid for a minimum of seven days and a maximum of one year. Unlike an Oyster card, a contactless card can automatically apply a seven-day Travelcard rate: if the card is regularly used within any Monday to Sunday period, an automatic cap is applied. The seven-day period is fixed from Monday to Sunday; it cannot be any seven-day period, unlike a seven-day ticket loaded onto an Oyster card. There is currently no automatic cap for longer periods.
Since Oyster readers cannot write to a contactless card, the reader is unable to display the fare charged when touching out, as the card does not store the starting point of the journey. The fare is calculated overnight, once the touch-in and touch-out information has been downloaded from the gates and collated. When a touch in is made with a contactless card, the validity of the card is checked by debiting the card account with 10 pence; the final fare charged includes this initial charge. As with Oyster, a failure to touch either in or out results in the maximum possible fare being charged. Transport for London state that if ticket inspection is taking place, it is necessary to present the contactless card to the ticket inspector's portable Oyster card reader. As the reader at the starting station cannot write to the contactless card, and the card's use is not downloaded until the following night, the portable card reader cannot determine whether the card was used to touch into the system. However, a nightly reconciliation takes place to ensure that the maximum fare is charged to any card that was presented to an inspector without having been used to touch in for that journey.
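The overnight settlement described here can be sketched as follows; the fare figures, the data layout and the omission of the 10p authorisation step are assumptions for illustration only.

```python
def settle_day(taps, fares, cap):
    """Overnight settlement for one contactless card (illustrative).

    taps: (entry, exit) pairs collated from the gates, with exit None when
    the passenger failed to touch out. Fares are computed only after this
    download, since nothing is written to the card during travel."""
    MAX_FARE = 890  # assumed maximum fare, in pence
    capped = uncapped = 0
    for entry, exit_ in taps:
        fare = fares.get((entry, exit_))
        if fare is None:
            uncapped += MAX_FARE  # incomplete journey: maximum fare, outside the cap
        else:
            capped += fare
    # One charge for the whole day is then submitted to the card issuer.
    return min(capped, cap) + uncapped

fares = {("Oxford Circus", "Bank"): 240}
assert settle_day([("Oxford Circus", "Bank"), ("Bank", None)], fares, 740) == 240 + 890
```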
App
In late 2017, TfL introduced the Oyster card app which allows users to check their balance on a compatible Android or iOS smartphone. The app is free to download and users can top up their Oyster card on the go or check journey history. Top ups are available to collect at any London Underground station or bus within 30 minutes. The app also has a notification feature which alerts users when their balance is below a specified amount.
First-generation Oyster cards are not compatible with the app, and TfL recommends that users upgrade to newer cards in order to use them with it.
Visual design
Designs
Trial versions, Transport for London staff versions and the first version of the standard Oyster card for the public were released with the roundels on the front of the cards in red. Standard issues of the Oyster card have been updated since the first public release in order to meet TfL's Design Standards.
There have been three issues of the standard Oyster card, including the original red roundel issue, but all three have retained the original dimensions of 85 mm × 55 mm, with the Oyster card number and reference number located in the top right-hand corner and bottom right-hand corner of the back of the card respectively, along with the terms and conditions.
The second issue of the standard Oyster card had 'Transport for London' branding on the back of the card, with Mayor of London branding having replaced the 'LONDON' branding in the blue segment of the card's back. The roundel on the front of the card was changed from red to white, as white was seen to represent Transport for London, whereas a red roundel is better known to represent London Buses.
The most recent issue of the standard Oyster card has TfL branding on its front; this branding had been removed from the back of the card in the previous issue. The Mayor of London branding has also been moved from the blue segment on the back of the card to underneath the terms and conditions, where it is more prominent.
Oyster card holder/wallet
With the release of the Oyster card, TfL released an accompanying Oyster card holder to replace the existing designs, which had previously been sponsored by companies such as Yellow Pages, Direct Line and IKEA, as well as London Underground's and London Buses' own releases of the holder, which came without advertising.
The official Oyster branded holders have been redesigned on several occasions, keeping up with various iterations of the card and to increase service awareness. The initial version mimicked the blue design of the card itself, and was later modified to include the line "Please reuse your card" on the front.
In March 2007 the Oyster card wallet was designed by British designers including Katharine Hamnett, Frostfrench and Gharani Strok for Oxfam's I'm In campaign to end world poverty. The designer wallets were available for a limited period from Oxfam's street teams in London, who handed them out to people who signed up to the I'm In movement. Also, to celebrate 100 years of the Piccadilly line, a series of limited edition Oyster card wallets was commissioned from selected artists from the Thin Cities Platform for Art project. The previous wallets handed out were sponsored by IKEA, which also sponsored the tube map, and did not display the Oyster or London Underground logos.
In late 2007 the standard issue wallets were redesigned; the only changes were a switch in colour scheme from blue to black and the removal of the resemblance to the Oyster card.
The most recent variation of the wallet came with the introduction of contactless payment acceptance on the network in 2012, when light-green "Watch out for card clash" wallets were issued to raise awareness of "card clash" and replace the previous simpler designs. The inside of these wallets reads "Only touch one card on the reader" on the clear plastic.
In 2015 Mel Elliot won the London Design Awards with her "Girls Night Out" themed wallet.
In addition to the official wallets distributed by TfL, which may or may not carry advertising for a sponsor, Oyster card holders and wallets are sometimes used as a marketing tool by other organisations seeking to promote their identity or activities. Such items are normally given away free, either with products or handed out to the public.
Although customers are usually given a free wallet when purchasing a card, the wallets themselves, including the most recent design issue, can be picked up for free at most stations or newsagents.
In September 2019 TfL announced that it was discontinuing its free Oyster card wallets, citing the cost and the use of plastic.
Staff cards
The standard public Oyster card is blue but colour variants are used by transport staff. Similar cards are issued to police officers.
Variants
The standard Oyster card designs are as follows:
Standard Oyster card, Blue: The design has remained mostly the same since its introduction in 2003, but very minor text changes on the reverse continue to occur. These are issued when limited-edition cards are not in circulation.
One Day Bus and Tram Pass, Green: Introduced in January 2015, this card carries the "Oyster" branding and can only be used for a maximum of one day, as it cannot be reloaded with credit. It is half the thickness of a standard Oyster card, as it is meant to be discarded once it expires. The card allows the user unlimited bus and tram travel until the next day.
Visitor Oyster Card: A visitor card is designed for use by tourists to London and can be delivered to their home address before they arrive. Tourists can benefit from special offers and discounts and save money in leading London restaurants, shops and entertainment venues on presentation of this card. A discount is also offered on the Emirates cable car service.
A number of limited edition Oyster card variant designs exist. These are produced in limited quantities but otherwise function as standard Oyster cards. These include:
2011 Wedding of Prince William and Catherine Middleton.
2012 Queen Elizabeth II's Diamond Jubilee.
2012 London 2012 Games.
2013 150th Anniversary of London Underground.
2014 TfL Year of the Bus.
2015 London Design Awards "Girls Night Out"
2018 Adidas x TfL (only 500 produced).
In 2012, TfL also released various cards to mark the Olympic Games taking place in London that year. The cards functioned the same as any other card and included all the same text, apart from a differentiating line (listed below) and the London 2012 logo. These cards were distributed solely to selected 2012 volunteers who took part in the opening and closing ceremonies.
They were used for the duration of the games and therefore are no longer valid for use on the transport system. The colour of these Oyster cards is pink with a coloured stripe:
"London Olympic Games", Pink stripe.
"London Paralympic Games", Blue stripe.
"Olympic Volunteer", Green stripe.
"Paralympic Volunteer" , Orange Stripe.
"2012 Ceremonies Volunteer", Purple stripe.
Three design variations of the Oyster visitor cards also exist:
2007 Tutankhamun and the Golden Age of the Pharaohs exhibition at the O2.
2007 Standard version showing the London Eye, St Paul's Cathedral, 30 St Mary Axe and the Millennium Bridge.
2012 Visitor Oyster card containing letters made up of landmarks spelling LONDON.
Collaborations
In October 2018, TfL partnered with Adidas to celebrate 15 years of the Oyster card. A limited number of trainers from the "Oyster Club pack" went on sale, with each of the three designs costing £80 and based on an element of the Tube's history. The designs comprise the Temper Run, ZX 500RM and Continental 80. Only 500 limited-edition Oyster cards were produced, and each type of trainer contains a different card design in the box. Also included with the trainers are a genuine leather case (with TfL and Adidas logo engraving) and £80 of credit preloaded on the Oyster card.
Issues and criticisms
Touching out penalties
Card users sometimes forget to touch in or touch out, are unable to find the yellow readers, or find it too crowded to touch out. Such card users have either received penalty fares from revenue inspectors, been charged a maximum cash fare, or been prosecuted in courts, which can impose high penalties. Card users are also penalised for touching in and out at the same station within a two-minute period, being charged the maximum possible fare from that station.
The system also applies two penalty fares (one for touching in, and one for touching out) to passengers who touch in and touch out at the same station more than 20 minutes apart. This is because the system assumes that, after such a long delay, the passenger has travelled to another station and returned without touching in or out at the other station, when in reality the passenger might simply have been waiting for a train, baulked at the long waiting time, and exited.
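A minimal sketch of these same-station rules follows; the two-minute and 20-minute thresholds come from the text above, while the fare amounts are placeholders:

```python
# Sketch of the same-station touch rules; only the 2- and 20-minute
# thresholds come from the text, the fare amounts are placeholders.
MAX_FARE = 8.90            # hypothetical maximum possible fare
SAME_STATION_FARE = 2.80   # placeholder for an ordinary same-station charge

def same_station_charge(minutes_between_touches):
    """Charge when a card touches in and out at the same station."""
    if minutes_between_touches <= 2:
        return MAX_FARE        # treated as a maximum-fare journey
    if minutes_between_touches >= 20:
        return 2 * MAX_FARE    # assumed to be two unresolved journeys
    return SAME_STATION_FARE   # an ordinary same-station exit

print(same_station_charge(1))   # 8.9
print(same_station_charge(25))  # 17.8
```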
Extension fares
Holders of Travelcards can add pay-as-you-go credit to their Oyster cards. This credit is used as an 'extension fare' when users travel beyond the zones in which their Travelcard is valid. The extension fare equals the regular Oyster fare for a journey between the station outside the Travelcard's area of validity and the closest zone still covered by the Travelcard. To distinguish between peak and off-peak fares, however, only the start of the journey is taken into account. This means travellers might be charged the (more expensive) peak fare as an extension fare even if they had not yet left the Travelcard's area of validity by the end of peak time. Conversely, a journey starting in the covered zones shortly before the start of peak time is charged as off-peak.
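A short sketch of this start-time rule follows; the peak windows and fares are assumptions for illustration only:

```python
# Sketch of the extension-fare rule above: peak or off-peak is decided by
# the journey's start time alone. Peak windows and fares are assumptions.
from datetime import time

PEAK_WINDOWS = [(time(6, 30), time(9, 30)), (time(16, 0), time(19, 0))]

def extension_fare(start, peak_fare, offpeak_fare):
    """Pick the extension fare from the journey's start time alone."""
    in_peak = any(lo <= start < hi for lo, hi in PEAK_WINDOWS)
    return peak_fare if in_peak else offpeak_fare

# A journey starting at 09:20 pays the peak extension fare even if the
# traveller only leaves the Travelcard zones after 09:30.
print(extension_fare(time(9, 20), peak_fare=3.60, offpeak_fare=2.40))  # 3.6
```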
There is an exploitable feature of the system: if a touch-in (or touch-out) is made in a zone where the Oyster card is loaded with a valid season ticket or Travelcard but there is no associated touch-out (or touch-in), the system does not treat this as an unresolved journey. Although encouraged to do so, such ticket holders are not obliged to touch in or out within the zones of their ticket's validity (other than to operate a barrier). This means that a passenger holding, say, a valid zone 1–2 Travelcard can touch in at a zone 1 station (to open the ticket barrier) and then travel to a zone 3, 4, 5 or 6 station that does not have a barrier without touching out or paying the extension fare. Ticket inspectors frequently operate at such locations to catch these fare-dodging passengers. Since the system maintains a record of every touch the card does make (even with a valid Travelcard), TfL will assiduously seek to recover all unpaid fares when a passenger who is caught is prosecuted for fare evasion.
Privacy
The system has been criticised as a threat to the privacy of its users. Each Oyster card is uniquely numbered, and registration is required for monthly or longer tickets, which are no longer available on paper. Limited usage data is stored on the card. Journey and transaction history is held centrally by Transport for London for up to eight weeks, after which the transactions and journey history are disassociated from the Oyster card and cannot be re-associated; full registration details are held centrally and not on individual Oyster cards; recent usage can be checked by anyone in possession of the card at some ticket machines.
The police have used Oyster card data as an investigative tool, and this use is increasing. On 13 April 2006, TfL stated that "Between August 2004 and March 2006 TfL's Information Access and Compliance Team received 436 requests from the police for Oyster card information. Of these, 409 requests were granted and the data were released to the police." However, in response to another request in February 2012, "TfL said this had happened 5,295 times in 2008, 5,359 in 2009, 5,046 in 2010, and a record 6,258 in 2011".
Additionally, in 2008 news reports indicated that the security services were seeking access to all Oyster card data for the purposes of counter-terrorism. Such access is not provided to the security services.
As yet, there have been no reports of customer data being misused outside the terms of the registration agreement. There have been no reports of Oyster data being lost.
Design
The system has been criticised for usability issues in the design of the overall system, the website, and the top-up machines.
Oyster pay-as-you-go users on London Underground, DLR and National Rail (including London Overground) services are required always to "touch in" and "touch out" so that the correct fare is charged. This requirement is less obviously enforced at stations where there are only standalone yellow readers rather than ticket barriers. Without a physical barrier, pay-as-you-go users may simply forget to "touch in" or may fail to touch their card correctly, which results in a maximum fare being charged. Equally, if the barriers do not function (reading 'SEEK ASSISTANCE') and a TfL or train operating company staff member has to open the gates manually, the maximum fare may be charged. If this occurs, a refund may be requested by telephoning the Oyster helpline the day after the incident (to allow time for the central computers to be updated); the overcharged amount can be added back to the pay-as-you-go balance from the following day, when the Oyster card is used to make a journey.
The use of Oyster cards on buses has been subject to criticism following a number of successful criminal prosecutions by TfL of bus passengers whose Oyster card, when checked by Revenue Protection Inspectors, did not show that the passenger had "touched in" correctly on boarding. In particular, problems have been highlighted in connection with the quality of error messages given to passengers when touching in has failed for any reason. In one case, a passenger successfully appealed against his conviction for fare evasion when the court noted that the passenger believed he had paid for his journey because the Oyster reader did not give sufficient error warning.
In 2011, London Assembly member Caroline Pidgeon obtained figures from the Mayor of London which revealed that in 2010 £60 million had been taken by TfL in maximum Oyster fares. The statistics also detailed a "top ten" of stations where maximum fares were being collected, most notably Waterloo. In her criticism of the figures, Pidgeon claimed that "structural problems" with the Oyster system were to blame, such as faulty equipment failing to register cards and difficulty in obtaining refunds. A report by BBC London highlighted the system of "autocomplete" (in which Oyster card journeys are automatically completed without the need to physically touch out, used exceptionally when large crowds are exiting stations) as particularly problematic.
Technical faults
In January 2004, on the day that the pay-as-you-go system went live on all Oyster cards, some season ticket passengers were prevented from making a second journey on their travelcard. Upon investigation each had a negative prepay balance. This was widely reported as a major bug in the system. However, the reason for the "bug" was that some season ticket holders were passing through zones not included on their tickets. The existing paper system could not prevent this kind of misuse as the barriers only checked if a paper ticket was valid in the zone the barrier was in.
On 10 March 2005 an incorrect data table meant that the Oyster system was inoperable during the morning rush hour. Ticket barriers had to be left open and pay as you go fares could not be collected.
On 12 July 2008 an incorrect data table disabled an estimated 72,000 Oyster cards, including Travelcards, staff passes, Freedom Passes, child Oyster cards and other electronic tickets. The Oyster system was shut down and later restarted during traffic hours. Some customers already in the system were overcharged. Refunds were given to those affected and all disabled cards were replaced. Freedom Pass holders had to apply to their local authority for replacement passes (as these are not managed by TfL).
A further system failure occurred two weeks later on 25 July 2008, when pay as you go cards were not read properly.
On 2 January 2016 the Oyster system failed, with readers failing to process Oyster cards but continuing to process contactless cards and Apple Pay transactions.
The difference between pay as you go and Travelcards
Transport for London promoted the Oyster card at launch with many adverts seeking to portray it as an alternative to the paper Travelcard. In late 2005 the Advertising Standards Authority ordered the withdrawal of one such poster which claimed that Oyster pay as you go was "more convenient" than Travelcards with "no need to plan in advance". The ASA ruled that the two products were not directly comparable, mainly because the pay as you go facility was not valid on most National Rail routes at the time.
Transport for London has made a significant surplus from excess fares deducted from those travelling using PAYG and failing to touch out as they exit stations. According to information obtained under the Freedom of Information Act, TfL made £32m from pay-as-you-go cards, of which £18m was maximum fares for failing to touch out. Only £803,000 was paid in refunds, showing that whilst customers can apply for a refund, most do not. The Oyster online site does not list all penalty fares eligible for refunds on its front page; users must search for fares charged on a particular day to discover all penalty fares that have been charged. The maximum fares for failing to touch out were introduced in late 2006.
Validity on National Rail
Until the availability of Oyster pay-as-you-go on the whole of the National Rail suburban network in January 2010, the validity of PAYG was not consistent across different modes of transport within London, and this gave rise to confusion for Oyster pay-as-you-go users. Many passengers were caught out trying to use pay-as-you-go on rail routes where it was not valid.
On some National Rail routes where pay-as-you-go was valid, Oyster validators had not been installed at some intermediate stations. While Oyster pay-as-you-go users could legally travel along those lines to certain destinations, they were not permitted to board or alight at intermediate stations. If their journey began or ended at an intermediate station, they would be unable to touch out and consequently be liable for penalty fares or prosecution.
The complexity of Oyster validity on these routes was criticised for increasing the risk of passengers inadvertently failing to pay the correct fare. Criticism was also levelled at train operating companies for failing to provide adequate warnings to passengers about Oyster validity on their routes and for not installing Oyster readers at certain stations.
TfL published guides to the limitations of pay-as-you-go validity, and diagrammatic maps illustrating PAYG validity were published in November 2006 by National Rail, but these were rarely on display at stations and had to be obtained from transport websites.
Online and telesales
Oyster card ticket renewals and pay-as-you-go top-ups made online allow users to make purchases without the need to go to a ticket office or vending machine. However, there are certain limitations to this system:
tickets and pay-as-you-go funds can only be added to the Oyster card from 30 minutes after purchase (if bought online);
users must select a station or tram stop where they must touch in or out as part of a normal journey to complete the purchase (as cards cannot be credited remotely);
users must nominate the station in advance – failure to enter or exit via this station means that the ticket is not added to the card;
tickets purchased in this way cannot be added at a bus reader (as buses are not fixed in a permanent location).
Security issues
In June 2008, researchers at the Radboud University in Nijmegen, the Netherlands, who had previously succeeded in hacking the OV-chipkaart (the Dutch public transport chip card), hacked an Oyster card, which is also based on the MIFARE Classic chip. They scanned a card reader to obtain its cryptographic key, then used a wireless antenna attached to a laptop computer to brush up against passengers on the London Underground and extract the information from their cards. With that information they were able to clone a card, add credit to it, and use it to travel on the Underground for at least a day. The MIFARE chip's manufacturer, NXP Semiconductors, sought a court injunction to prevent publication of the details of this security breach, but this was overturned on appeal.
The Mifare Classic, which is also used as a security pass for controlling entry into buildings, has been criticised as having very poor security, and NXP has been criticised for trying to ensure security by obscurity rather than strong encryption. Breaching the security of Oyster cards should not allow unauthorised use for more than a day, as TfL promises to deactivate any cloned cards within 24 hours, but a cloned Mifare Classic can allow entry into buildings that use this system for security.
Strategic research
Transport for London, in partnership with academic institutions such as MIT, has begun to use the data captured by the Oyster smartcard system for strategic research purposes, with the general goal of using Oyster data to gain cheap and accurate insights into the behaviour and experience of passengers. Specific projects include estimation of Origin-Destination Matrices for the London Underground, analysis of bus-to-bus and bus-to-tube interchange behaviour, modelling and analysis of TfL-wide fare policy changes, and measurement of service quality on the London Overground.
See also
Oyster card (pay as you go) on National Rail
Smartcards on National Rail
List of smart cards
Octopus Card
OV-chipkaart, a similar smart card system used in the Netherlands
Clipper Card, used in the San Francisco Bay Area
Presto card
Troika card
Radio-frequency identification
Opal card, used in New South Wales and based on the Oyster card
Myki card, used in Victoria (Australia)
References
External links
Contactless smart cards
Fare collection systems in London
Transport for London |
509070 | https://en.wikipedia.org/wiki/Ripping | Ripping | Ripping is extracting all or parts of digital content from a container. Originally, it meant ripping music out of Amiga games. Later, the term was used for extracting WAV or MP3 format files from digital audio CDs, and it has since been applied to extracting the contents of any media, most notably DVD and Blu-ray discs.
Despite the name, neither the media nor the data is damaged after extraction. Ripping is often used to shift formats, and to edit, duplicate or back up media content. A rip is the extracted content, in its destination format, along with accompanying files, such as a cue sheet or log file from the ripping software.
To rip the contents out of a container is different from simply copying the whole container or a file. When creating a copy, nothing inspects the transferred data, checks whether it is encrypted, or interprets its file format. One can copy a DVD byte by byte with a program like the Linux dd command onto a hard disk, and play the resulting ISO file just as one would play the original DVD.
To rip contents is also different from grabbing an analog signal and re-encoding it, as was done with early CD-ROM drives that were not capable of digital audio extraction (DAE). Sometimes even encoding, i.e. digitizing audio and video originally stored on analog formats such as vinyl records, is incorrectly referred to as ripping.
Ripping software
A CD ripper, CD grabber or CD extractor is a piece of software designed to extract or "rip" raw digital audio (in a format commonly called CDDA) from a compact disc to a file or other output. Some all-in-one ripping programs can simplify the entire process by ripping and burning the audio to disc in one step, possibly re-encoding the audio on-the-fly in the process.
For example, audio CDs contain 16-bit, 44.1 kHz LPCM-encoded audio samples interleaved with secondary data streams and synchronization and error-correction information. The ripping software tells the CD drive's firmware to read this data and parse out just the LPCM samples. The software then dumps them into a WAV or AIFF file, or feeds them to another codec to produce, for example, a FLAC or MP3 file. Depending on the capabilities of the ripping software, ripping may be done on a track-by-track basis, all tracks at once, or over a custom range. The ripping software may also have facilities for detecting and correcting errors during or after the rip, as the process is not always reliable, especially when the CD itself is damaged or the drive containing it is defective.
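As an illustration of that final step, the sketch below (a toy, not any particular ripper's code) wraps raw CDDA samples in a WAV container using Python's standard wave module; the input file name and the assumption that it already holds bare 16-bit little-endian stereo PCM are hypothetical:

```python
# Toy version of a ripper's last step: wrapping raw LPCM samples from
# digital audio extraction in a WAV container. Assumes "track.cdda" holds
# bare 16-bit little-endian stereo PCM (2352-byte CDDA sectors).
import wave

with open("track.cdda", "rb") as raw:
    pcm = raw.read()

with wave.open("track.wav", "wb") as out:
    out.setnchannels(2)      # stereo
    out.setsampwidth(2)      # 16-bit samples
    out.setframerate(44100)  # Red Book sampling rate
    out.writeframes(pcm)     # WAV stores little-endian PCM unchanged
```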
There are also DVD rippers which operate in a similar fashion. Unlike CDs, DVDs do contain data formatted in files for use in computers. However, commercial DVDs are often encrypted (for example, using Content Scramble System/ARccOS Protection), preventing access to the files without using the ripping software's decryption ability, which may not be legal to distribute or use. DVD files are often larger than is convenient to distribute or copy to CD-R or ordinary (not dual-layer) DVD-R, so DVD ripping software usually offers the ability to re-encode the content, with some quality loss, so that it fits in smaller files.
Legality
When the material being ripped is not in the public domain and the person making the rip does not have the copyright owner's permission, such ripping may be regarded as copyright infringement. However, some countries either explicitly allow it in certain circumstances or at least do not forbid it. Some countries also have fair use-type laws which allow unauthorized copies to be made under certain conditions.
Asia
Europe
A directive of the European Union allows its member nations to introduce into their legal frameworks a private copy exception to authors' and editors' rights. If a member state chooses to do so, it must also introduce compensation for the copyright holders. Most European countries have introduced a private copying levy; Norway instead compensates the owners directly from the country's budget. In 2009 the sum awarded to them was $55 million. In all but a few of these countries (exceptions include the UK and Malta), the levy is charged on all machines and blank media capable of copying copyrighted works.
Under the directive, making copies for other people is forbidden, and if done for profit can lead to a jail sentence.
Netherlands
In the Netherlands, citizens are allowed to make copies of their legally bought audio and video. This covers, for example, CD and SACD, as well as DVD and Blu-ray. These copies are called "home copies" and may only be used by the ripper. Public distribution of ripped files is not allowed.
Spain
In Spain, anyone is allowed to make a private copy of copyrighted material for oneself, provided that the copier has accessed the original material legally.
United Kingdom
Private copying of copyrighted material is illegal in the United Kingdom. According to a 2009 survey, 59% of British consumers believed ripping a CD to be legal, and 55% admitted to doing it.
In 2010, the UK government sought input on modernizing copyright exceptions for the digital age, and commissioned the Hargreaves Review of Intellectual Property and Growth. The review asserted that a private copying exception was overdue, noting that users were unaware of what was even legally allowed, and that a copyright law under which "millions of citizens are in daily breach of copyright, simply for shifting a piece of music or video from one device to another" was not "fit for the digital age". The review recommended, among other things, that the government consider adopting the EU Copyright Directive's recommendation that member states enact an exception for private, noncommercial copying so long as the rights holders receive "fair compensation". Other EU member states chose to implement the exception paired with a tax on music purchases or widely varying levies on copying equipment and blank media. However, the Review reasoned that no such collections are necessary when implementing a copyright exception for format-shifting.
In August 2011, the government broadly accepted the recommendations of the Hargreaves Review. At the end of 2012, the government published "Modernising Copyright", a document outlining specific changes the government intends to make, including the Hargreaves-recommended exception for private, noncommercial copying.
Following each milestone in the reform process, press reports circulated that ripping non-DRM-protected CDs and DVDs was no longer illegal. However, the actual legislation to implement the changes was not yet in force; the Intellectual Property Office had only begun seeking review of draft legislation in June 2013, and the resulting Statutory Instruments (SIs) were not laid before Parliament until March 27, 2014, and were not actually approved until July 14 (Commons) and July 27 (Lords), with an effective date of October 1, 2014. Anticipating approval, the Intellectual Property Office published a guide for consumers to explain the forthcoming changes and to clarify what would remain illegal. The private copying exception may seem to conflict with the existing Copyright Directive prohibition on overriding or removing any DRM or TPM (technical protection measures) that are sometimes used on optical media to protect the content from ripping. However, the "Modernising Copyright" report makes clear that any workarounds to allow access will not involve a relaxation of the prohibition.
On 17 July 2015, the private copying exemption was overturned by the High Court of Justice following a complaint by BASCA, Musicians' Union, and UK Music, making private copying once again illegal. The groups objected to the exclusion of a compensation scheme, presenting evidence contradicting an assertion that an exemption would cause "zero or insignificant harm" to copyright holders and thus did not require compensation.
North America
United States
U.S. copyright law (Title 17 of the United States Code) generally says that making a copy of an original work, if conducted without the consent of the copyright owner, is infringement. The law makes no explicit grant or denial of a right to make a "personal use" copy of another's copyrighted content on one's own digital media and devices. For example, space shifting, by making a copy of a personally owned audio CD for transfer to an MP3 player for that person's personal use, is not explicitly allowed or forbidden.
Existing copyright statutes may apply to specific acts of personal copying, as determined in cases in the civil or criminal court systems, building up a body of case law. Consumer copyright infringement cases in this area, to date, have only focused on issues related to consumer rights and the applicability of the law to the sharing of ripped files, not to the act of ripping, per se.
Canada
The Copyright Act of Canada generally says that it is legal to make a backup copy of any copyrighted work if the user owns or has a licence to use a copy of the work or subject-matter as long as the user does not circumvent a technological protection measure or give any of the reproductions away. This means that in most cases, ripping DVDs in Canada is most likely illegal.
Oceania
Australia and New Zealand
In Australia and New Zealand a copy of any legally purchased music may be made by its owner, as long as it is not distributed to others and its use remains personal. In Australia, this was extended in 2006 to also include photographs and films.
Opinions of ripping
Recording industry representatives have made conflicting statements about ripping.
Executives claimed (in the context of Atlantic v. Howell) that ripping may be regarded as copyright infringement. In oral arguments before the Supreme Court in MGM Studios, Inc. v. Grokster, Ltd., MGM attorney Don Verrilli (later appointed United States Solicitor General by the Obama administration), stated: "The record companies, my clients, have said, for some time now, and it's been on their Website for some time now, that it's perfectly lawful to take a CD that you've purchased, upload it onto your computer, put it onto your iPod. There is a very, very significant lawful commercial use for that device, going forward."
Nevertheless, in lawsuits against individuals accused of copyright infringement for making files available via file-sharing networks, RIAA lawyers and PR officials have characterized CD ripping as "illegal" and "stealing".
Asked directly about the issue, RIAA president Cary Sherman asserted that the lawyers misspoke, and that the RIAA has never said whether it was legal or illegal, and he emphasized that the RIAA had not yet taken anyone to court over that issue alone.
Fair use
Although certain types of infringement scenarios are allowed as fair use and thus are effectively considered non-infringing, "personal use" copying is not explicitly mentioned as a type of fair use, and case law has not yet established otherwise.
Personal copying acknowledgments
According to Congressional reports, part of the Audio Home Recording Act (AHRA) of 1992 was intended to resolve the debate over home taping. However, 17 USC 1008, the relevant text of the legislation, did not fully indemnify consumers for noncommercial, private copying. Such copying is broadly permitted using analog devices and media, but digital copying is only permitted with certain technologies like DAT, MiniDisc, and "audio" CD-Rs, not with computer hard drives, portable media players, and general-purpose CD-Rs.
The AHRA was partially tested in RIAA v. Diamond Multimedia, Inc., a late-1990s case which broached the subject of a consumer's right to copy and format-shift, but which ultimately only ascertained that one of the first portable MP3 players was not even a "digital recording device" covered by the law, so its maker was not required to pay royalties to the recording industry under other terms of the AHRA.
Statements made by the court in that case, and by both the House and Senate in committee reports about the AHRA, do interpret the legislation as being intended to permit private, noncommercial copying with any digital technology. However, these interpretations may not be binding.
In 2007, the Federal Trade Commission (FTC), a government agency that requires businesses to engage in consumer-friendly trade practices, acknowledged that consumers normally expect to be able to rip audio CDs. Specifically, in response to the Sony BMG copy protection rootkit scandal, the FTC declared that the marketing and sale of audio CDs which surreptitiously installed digital rights management (DRM) software constituted deceptive and unfair trade practices, in part because the record company "represented, expressly or by implication, that consumers will be able to use the CDs as they are commonly used on a computer: to listen to, transfer to playback devices, and copy the audio files contained on the CD for personal use."
A DVD ripper is a computer program that facilitates copying the content of a DVD to a hard disk drive. DVD rippers are mainly used to transfer video on DVDs to different formats, to edit or back up DVD content, and to convert DVD video for playback on media players and mobile devices. Some DVD rippers include additional features such as Blu-ray support, DVD and Blu-ray Disc decryption, copy protection removal and the ability to make discs unrestricted and region-free. While most DVD rippers only convert video to highly compressed MP4 video files, others can convert DVDs to higher-quality compressed video. These types of DVD rippers are used by the television and film industry to create broadcast-quality video from DVD. Video ripped by these professional DVD rippers is an exact duplicate of the original DVD video.
Circumvention of DVD copy protection
In cases where media content is protected by an effective copy protection scheme, the Digital Millennium Copyright Act (DMCA) of 1998 makes it illegal to manufacture or distribute circumvention tools or to use those tools for infringing purposes. In the 2009 case RealNetworks v. DVD CCA, the final injunction reads, "while it may well be fair use for an individual consumer to store a backup copy of a personally owned DVD on that individual's computer, a federal law has nonetheless made it illegal to manufacture or traffic in a device or tool that permits a consumer to make such copies." This case made clear that manufacturing and distribution of circumvention tools was illegal, but use of those tools for non-infringing purposes, including fair use purposes, was not.
The Library of Congress periodically issues rulings to exempt certain classes of works from the DMCA's prohibition on the circumvention of copy protection for non-infringing purposes. One such ruling in 2010 declared, among other things, that the Content Scramble System (CSS) commonly employed on commercial DVDs could be circumvented to enable non-infringing uses of the DVD's content. The Electronic Frontier Foundation (EFF) hailed the ruling as enabling DVD excerpts to be used for the well-established fair-use activities of criticism and commentary, and for the creation of derivative works by video remix artists. However, the text of the ruling says the exemption can only be exercised by professional educators and their students, not the general public.
See also
References
Copyright law
Audio storage
Video storage
Video conversion software
DVD |
510278 | https://en.wikipedia.org/wiki/Infineon%20Technologies | Infineon Technologies | Infineon Technologies AG is a German semiconductor manufacturer founded in 1999, when the semiconductor operations of the former parent company Siemens AG were spun off. Infineon has about 46,665 employees and is one of the ten largest semiconductor manufacturers worldwide. It is the market leader in automotive and power semiconductors. In fiscal year 2020, the company achieved sales of €8.6 billion. Infineon bought Cypress in April 2020.
Markets
Infineon markets semiconductors and systems for automotive, industrial, and multimarket sectors, as well as chip card and security products. Infineon has subsidiaries in the US in Milpitas, California, and in the Asia-Pacific region, in Singapore and Tokyo, Japan.
Infineon has a number of facilities in Europe, including one in Dresden. Infineon's high-power segment is based in Warstein, Germany; Villach and Graz in Austria; Cegléd in Hungary; and Italy. It also runs R&D centers in France, Singapore, Romania, Taiwan, the UK, Ukraine and India, as well as fabrication units in Singapore, Malaysia, Indonesia, and China. There is also a Shared Service Center in Maia, Portugal.
Infineon is listed in the DAX index of the Frankfurt Stock Exchange.
In 2010, a proxy contest broke out in advance of the impending shareholders' meeting over whether board member Klaus Wucherer would be allowed to step into the chairman's office upon the retirement of the then-current chairman Max Dietrich Kley.
After several restructurings, Infineon today comprises four business areas:
Automotive (ATV)
Infineon provides semiconductor products for use in powertrains (engine and transmission control), comfort electronics (e.g., steering, shock absorbers, air conditioning) as well as in safety systems (ABS, airbags, ESP). The product portfolio includes microcontrollers, power semiconductors and sensors. In fiscal year 2018 (ending September), sales amounted to €3,284 million for the ATV segment.
Industrial Power Control (IPC)
The industrial division of the company includes power semiconductors and modules which are used for generation, transmission and consumption of electrical energy. Its application areas include control of electric drives for industrial applications and household appliances, modules for renewable energy production, conversion and transmission. This segment achieved sales of €1,323 million in fiscal year 2018.
Power & Sensor Systems (PSS)
The Power & Sensor Systems division comprises the business with semiconductor components for efficient power management and high-frequency applications. These are used in lighting management systems and LED lighting, power supplies for servers, PCs, notebooks and consumer electronics, custom devices for peripherals and game consoles, applications in medical technology, high-frequency components with a protective function for communication and tuner systems, and silicon MEMS microphones. In fiscal year 2018 PSS generated €2,318 million.
Connected Secure Systems (CSS)
The CSS business provides microcontrollers for mobile phone SIM cards, payment cards, security chips and chip-based solutions for passports, identity cards and other official documents. Infineon delivers a significant number of chips for the new German identity card. In addition, CSS provides solutions for applications with high security requirements such as pay television and Trusted Computing. CSS achieved €664 million in fiscal year 2018. "Infineon is the number 1 in embedded security" (IHS, 2016 – IHS Embedded Digital Security Report).
Acquisitions and divestitures
ADMTek acquisition
Infineon bought ADMtek in 2004.
Qimonda carve out
The former Memory Products division was carved out in 2006 as Infineon's subsidiary Qimonda AG, of which Infineon last held a little over three-quarters. At its height Qimonda employed about 13,500 people; it was listed on the New York Stock Exchange until it filed for bankruptcy with the district court in Munich in January 2009.
Lantiq carve out
On 7 July 2009, Infineon Technologies AG agreed by contract with the U.S. investor Golden Gate Capital on the sale of its Wireline Communications for €250 million. The resulting company was named Lantiq and had around 1,000 employees. It was acquired by Intel in 2015 for US$345 million.
Mobile Communications sale to Intel
On 31 January 2011, the sale of the business segment of wireless solutions to Intel was completed for US$1.4 billion. The resulting new company had approximately 3,500 employees and operated as Intel Mobile Communications (IMC). The smartphone modem business of IMC was announced to be acquired by Apple Inc. in 2019.
International Rectifier acquisition
Infineon Technologies agreed on 20 August 2014 to buy the International Rectifier Corporation (IR) for about $3 billion, one-third in cash and two-thirds via a credit line. The acquisition of International Rectifier was officially closed on 13 January 2015.
Wolfspeed acquisition attempt
In July 2016, Infineon announced it had agreed to buy the North Carolina-based company Wolfspeed from Cree Inc. for $850 million in cash. However, the deal was blocked due to US security concerns.
Innoluce BV acquisition
In October 2016, Infineon acquired the company Innoluce, which has expertise in MEMS and LiDAR systems for use in autonomous cars. The MEMS LiDAR system can scan up to 5,000 data points a second at a range of 250 meters, with an expected unit cost of $250 in mass production.
RF Power sale to Cree
In March 2018, Infineon Technologies AG sold its RF Power Business Unit to Cree Inc. for €345 Million.
Cypress Semiconductor acquisition
In June 2019, Infineon announced it would acquire Cypress Semiconductor for $9.4 billion. The acquisition closed on 17 April 2020.
Financial data
Management
The board of directors consists of Reinhard Ploss (Chief Executive Officer), Helmut Gassel (Chief Marketing Officer), Jochen Hanebeck (Chief Operations Officer), Constanze Hufenbecher (Chief Digital Transformation Officer) and Sven Schneider (Chief Financial Officer).
Litigation
In 2004–2005, an investigation was carried out into a DRAM price-fixing conspiracy during 1999–2002 that damaged competition and raised PC prices. As a result, Samsung paid a $300 million fine, Hynix paid $185 million, and Infineon paid $160 million.
Security flaw
In October 2017, it was reported that a flaw, dubbed ROCA, in a code library developed by Infineon, which had been in widespread use in security products such as smartcards and TPMs, enabled private keys to be inferred from public keys. As a result, all systems depending upon the privacy of such keys were vulnerable to compromise, such as identity theft or spoofing. Affected systems include 750,000 Estonian national ID cards, 300,000 Slovak national ID cards, and computers that use Microsoft BitLocker drive encryption in conjunction with an affected TPM. Microsoft released a patch that works around the flaw via Windows Update immediately after the disclosure.
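The flaw is detectable from a public key alone. Below is a simplified, illustrative version of the published ROCA fingerprint test: moduli from the flawed library have the property that, for each of a set of small primes p, n mod p lies in the subgroup generated by 65537 modulo p. The abbreviated prime list here is an assumption, and a real detector pairs a carefully chosen list with a false-positive analysis:

```python
# Simplified sketch of the ROCA fingerprint test; the prime list is an
# abbreviated assumption and this toy omits the false-positive analysis
# a real detector performs.
SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79, 97, 103, 107, 109]

def powers_of_65537(p):
    """The cyclic subgroup {65537^i mod p} of (Z/pZ)*."""
    seen, x = set(), 1
    while x not in seen:
        seen.add(x)
        x = (x * 65537) % p
    return seen

def looks_like_roca(n):
    """True if n mod p is a power of 65537 for every tested prime p."""
    return all(n % p in powers_of_65537(p) for p in SMALL_PRIMES)
```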
Notes
External links
Smart cards
Semiconductor companies of Germany
Multinational companies headquartered in Germany
Companies based in Bavaria
Electronics companies established in 1999
Computer memory companies
Companies listed on the Frankfurt Stock Exchange
German companies established in 1999
Neubiberg
Siemens
Corporate spin-offs
Company in the TecDAX |
513506 | https://en.wikipedia.org/wiki/Krohn%E2%80%93Rhodes%20theory | Krohn–Rhodes theory | In mathematics and computer science, the Krohn–Rhodes theory (or algebraic automata theory) is an approach to the study of finite semigroups and automata that seeks to decompose them in terms of elementary components. These components correspond to finite aperiodic semigroups and finite simple groups that are combined in a feedback-free manner (called a "wreath product" or "cascade").
Krohn and Rhodes found a general decomposition for finite automata. In doing their research, though, the authors discovered and proved an unexpected major result in finite semigroup theory, revealing a deep connection between finite automata and semigroups.
Definitions and description of the Krohn–Rhodes theorem
Let T be a semigroup. A semigroup S that is a homomorphic image of a subsemigroup of T is said to be a divisor of T.
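In standard notation (a restatement, not a quotation from the original papers), this reads:

```latex
% Divisibility of semigroups: S divides T when S is a homomorphic image
% of a subsemigroup of T.
S \prec T \quad\Longleftrightarrow\quad
\exists\, T' \le T \ \text{(a subsemigroup)} \ \text{and a surjection}\
\varphi \colon T' \twoheadrightarrow S .
```

Divisibility is reflexive and transitive, so it is a preorder on finite semigroups.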
The Krohn–Rhodes theorem for finite semigroups states that every finite semigroup S is a divisor of a finite alternating wreath product of finite simple groups, each a divisor of S, and finite aperiodic semigroups (which contain no nontrivial subgroups).
In the automata formulation, the Krohn–Rhodes theorem for finite automata states that, given a finite automaton A with states Q, input set I, and output alphabet U, one can expand the states to Q' such that the new automaton A' embeds into a cascade of "simple", irreducible automata: in particular, A is emulated by a feed-forward cascade of (1) automata whose transition semigroups are finite simple groups and (2) automata that are banks of flip-flops running in parallel. The new automaton A' has the same input and output symbols as A. Here, both the states and inputs of the cascaded automata have a very special hierarchical coordinate form.
Moreover, each simple group (prime) or non-group irreducible semigroup (subsemigroup of the flip-flop monoid) that divides the transformation semigroup of A must divide the transition semigroup of some component of the cascade, and the only primes that can occur as divisors of the components are those that divide A's transition semigroup.
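The flip-flop mentioned above is simply a one-bit set-reset memory; the following toy model is illustrative only, not from the original papers:

```python
# Toy model of the "flip-flop": a one-bit memory whose inputs are "read"
# (identity), "set0" and "set1" (the two constant maps). Its transition
# monoid is the canonical aperiodic prime of the decomposition.
FLIP_FLOP = {
    "read": lambda state: state,  # leave the bit alone
    "set0": lambda state: 0,      # constant map to 0
    "set1": lambda state: 1,      # constant map to 1
}

def run(inputs, state=0):
    """Feed a sequence of input symbols through the flip-flop."""
    for symbol in inputs:
        state = FLIP_FLOP[symbol](state)
    return state

print(run(["set1", "read", "set0", "set1"]))  # 1
```

Its three transformations (the identity and the two constants) form a monoid containing no nontrivial group, i.e. an aperiodic semigroup.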
Group complexity
The Krohn–Rhodes complexity (also called group complexity or just complexity) of a finite semigroup S is the least number of groups in a wreath product of finite groups and finite aperiodic semigroups of which S is a divisor.
All finite aperiodic semigroups have complexity 0, while non-trivial finite groups have complexity 1. In fact, there are semigroups of every non-negative integer complexity. For example, for any n greater than 1, the multiplicative semigroup of all (n+1) × (n+1) upper-triangular matrices over any fixed finite field has complexity n (Kambites, 2007).
A major open problem in finite semigroup theory is the decidability of complexity: is there an algorithm that will compute the Krohn–Rhodes complexity of a finite semigroup, given its multiplication table?
Upper bounds and ever more precise lower bounds on complexity have been obtained (see, e.g. Rhodes & Steinberg, 2009). Rhodes has conjectured that the problem is decidable.
History and applications
At a conference in 1962, Kenneth Krohn and John Rhodes announced a method for decomposing a (deterministic) finite automaton into "simple" components that are themselves finite automata. This joint work, which has implications for philosophy, comprised both Krohn's doctoral thesis at Harvard University and Rhodes' doctoral thesis at MIT (see Morris W. Hirsch, "Foreword to Rhodes' Applications of Automata Theory and Algebra", in J. Rhodes, Applications of Automata Theory and Algebra: Via the Mathematical Theory of Complexity to Biology, Physics, Philosophy and Games, ed. C. L. Nehaniv, World Scientific Publishing Co., 2010). Simpler proofs, and generalizations of the theorem to infinite structures, have been published since then (see Chapter 4 of Steinberg and Rhodes' 2009 book The q-Theory of Finite Semigroups for an overview).
In the 1965 paper by Krohn and Rhodes, the proof of the theorem on the decomposition of finite automata (or, equivalently, sequential machines) made extensive use of the algebraic semigroup structure. Later proofs contained major simplifications using finite wreath products of finite transformation semigroups. The theorem generalizes the Jordan–Hölder decomposition for finite groups (in which the primes are the finite simple groups) to all finite transformation semigroups (for which the primes are again the finite simple groups plus all subsemigroups of the "flip-flop"; see above). Both the group case and the more general finite automata decomposition require expanding the state-set of the automaton being decomposed, but allow for the same number of input symbols. In the general case, these are embedded in a larger structure with a hierarchical "coordinate system".
One must be careful in understanding the notion of "prime" as Krohn and Rhodes explicitly refer to their theorem as a "prime decomposition theorem" for automata. The components in the decomposition, however, are not prime automata (with prime defined in a naïve way); rather, the notion of prime is more sophisticated and algebraic: the semigroups and groups associated to the constituent automata of the decomposition are prime (or irreducible) in a strict and natural algebraic sense with respect to the wreath product (Eilenberg, 1976). Also, unlike earlier decomposition theorems, the Krohn–Rhodes decompositions usually require expansion of the state-set, so that the expanded automaton covers (emulates) the one being decomposed. These facts have made the theorem difficult to understand, and challenging to apply in a practical way—until recently, when computational implementations became available (Egri-Nagy & Nehaniv 2005, 2008).
H.P. Zeiger (1967) proved an important variant called the holonomy decomposition (Eilenberg 1976). The holonomy method appears to be relatively efficient and has been implemented computationally by A. Egri-Nagy (Egri-Nagy & Nehaniv 2005).
Meyer and Thompson (1969) give a version of Krohn–Rhodes decomposition for finite automata that is equivalent to the decomposition previously developed by Hartmanis and Stearns, but for useful decompositions, the notion of expanding the state-set of the original automaton is essential (for the non-permutation automata case).
Many proofs and constructions now exist of Krohn–Rhodes decompositions (e.g., [Krohn, Rhodes & Tilson 1968], [Ésik 2000], [Diekert et al. 2012]), with the holonomy method the most popular and efficient in general (although not in all cases). Owing to the close relation between monoids and categories, a version of the Krohn–Rhodes theorem is applicable to category theory. This observation and a proof of an analogous result were offered by Wells (1980).
The Krohn–Rhodes theorem for semigroups/monoids is an analogue of the Jordan–Hölder theorem for finite groups (for semigroups/monoids rather than groups). As such, the theorem is a deep and important result in semigroup/monoid theory. The theorem was also surprising to many mathematicians and computer scientists since it had previously been widely believed that the semigroup/monoid axioms were too weak to admit a structure theorem of any strength, and prior work (Hartmanis & Stearns) was only able to show much more rigid and less general decomposition results for finite automata.
Work by Egri-Nagy and Nehaniv (2005, 2008–) continues to further automate the holonomy version of the Krohn–Rhodes decomposition extended with the related decomposition for finite groups (so-called Frobenius–Lagrange coordinates) using the computer algebra system GAP.
Applications outside of the semigroup and monoid theories are now computationally feasible. They include computations in biology and biochemical systems (e.g. Egri-Nagy & Nehaniv 2008), artificial intelligence, finite-state physics, psychology, and game theory (see, for example, Rhodes 2009).
See also
Semigroup action
Transformation semigroup
Green's relations
Notes
References
Egri-Nagy, A.; and Nehaniv, C. L. (2005), "Algebraic Hierarchical Decomposition of Finite State Automata: Comparison of Implementations for Krohn–Rhodes Theory", in 9th International Conference on Implementation and Application of Automata (CIAA 2004), Kingston, Canada, July 22–24, 2004, Revised Selected Papers, Editors: Domaratzki, M.; Okhotin, A.; Salomaa, K.; et al.; Springer Lecture Notes in Computer Science, Vol. 3317, pp. 315–316, 2005
Eilenberg, S. (1976), Automata, Languages and Machines, Vol. B, Academic Press. Two chapters are written by Bret Tilson.
Krohn, K. R.; and Rhodes, J. L. (1962), "Algebraic theory of machines", Proceedings of the Symposium on Mathematical Theory of Automata'' (editor: Fox, J.), (Wiley–Interscience)
Erratum: Information and Control 11(4): 471 (1967).
Further reading
External links
Prof. John L. Rhodes, University of California at Berkeley webpage
SgpDec: Hierarchical Composition and Decomposition of Permutation Groups and Transformation Semigroups, developed by A. Egri-Nagy and C. L. Nehaniv. Open-source software package for the GAP computer algebra system.
An introduction to the Krohn-Rhodes Theorem (Section 5); part of the Santa Fe Institute Complexity Explorer MOOC Introduction to Renormalization, by Simon DeDeo.
Semigroup theory
Category theory
Finite automata |
514259 | https://en.wikipedia.org/wiki/Citizen%20Lab | Citizen Lab | The Citizen Lab is an interdisciplinary laboratory based at the Munk School of Global Affairs at the University of Toronto, Canada. It was founded by Ronald Deibert in 2001. The laboratory studies information controls that impact the openness and security of the Internet and that pose threats to human rights. The organization uses a "mixed methods" approach which combines computer-generated interrogation, data mining, and analysis with intensive field research, qualitative social science, and legal and policy analysis methods.
The Citizen Lab was a founding partner of the OpenNet Initiative (2002–2013) and the Information Warfare Monitor (2002–2012) projects. The organization also developed the original design of the Psiphon censorship circumvention software, which was spun out of the Lab into a private Canadian corporation (Psiphon Inc.) in 2008.
History
In a 2009 report, "Tracking GhostNet", researchers uncovered a suspected cyber espionage network of 1,295 infected hosts in 103 countries between 2007 and 2009, a high percentage of them high-value targets, including ministries of foreign affairs, embassies, international organizations, news media, and NGOs. The study was one of the first public reports to reveal a cyber espionage network that targeted civil society and government systems internationally.
In Shadows in the Cloud (2010), researchers documented a complex ecosystem of cyber espionage that systematically compromised government, business, academic, and other computer network systems in India, the offices of the Dalai Lama, the United Nations, and several other countries. According to a January 24, 2019 AP News report, Citizen Lab researchers were "being targeted" by "international undercover operatives" for the Lab's work on NSO Group.
In Million Dollar Dissident, published in August 2016, researchers discovered that Ahmed Mansoor, one of the UAE Five and a human rights defender in the United Arab Emirates, was targeted with Pegasus software developed by the Israeli "cyber war" company NSO Group. Prior to the release of the report, researchers contacted Apple, which issued a security update that patched the vulnerabilities exploited by the spyware operators. Mansoor was imprisoned one year later and, as of 2021, is still in jail.
Researchers reported in October 2018 that NSO Group surveillance software had been used to spy on the "inner circle" of Jamal Khashoggi just before his murder. The Citizen Lab's October report revealed that NSO's "signature spy software" had been placed on the iPhone of Saudi dissident Omar Abdulaziz, one of Khashoggi's confidants, months before. Abdulaziz said that Saudi Arabian spies used the hacking software to reveal Khashoggi's "private criticisms of the Saudi royal family", and that this "played a major role" in his death.
In March 2019, The New York Times reported that Citizen Lab had been a target of the UAE contractor DarkMatter.
In November 2019, Ronan Farrow released a Podcast called "Catch and Kill," an extension of his book of the same name. The first episode includes Farrow's reporting on an instance in which a source of Farrow's was involved in a counter-espionage incident while operatives from Black Cube were targeting Citizen Lab.
Funding
Financial support for the Citizen Lab has come from the Ford Foundation, the Open Society Institute, the Social Sciences and Humanities Research Council of Canada, the International Development Research Centre (IDRC), the Government of Canada, the Canada Centre for Global Security Studies at the University of Toronto's Munk School of Global Affairs, the John D. and Catherine T. MacArthur Foundation, the Donner Canadian Foundation, the Open Technology Fund, and The Walter and Duncan Gordon Foundation. The Citizen Lab has received donations of software and support from VirusTotal, RiskIQ and Cisco's AMP Threat Grid Team.
Research areas
Digital attacks
The Citizen Lab's Targeted Threats research stream seeks to gain a better understanding of the technical and social nature of digital attacks against civil society groups and the political context that may motivate them. The Citizen Lab conducts comparative analysis of various online threats, including Internet filtering, denial-of-service attacks, and targeted malware. Targeted Threats reports have covered a number of espionage campaigns and information operations against the Tibetan community and diaspora, phishing attempts made against journalists, human rights defenders, political figures, international investigators and anti-corruption advocates in Mexico, and a prominent human rights advocate who was the focus of government surveillance in the United Arab Emirates.
Citizen Lab researchers and collaborators like the Electronic Frontier Foundation have also revealed several different malware campaigns targeting Syrian activists and opposition groups in the context of the Syrian Civil War. Many of these findings were translated into Arabic and disseminated along with recommendations for detecting and removing malware.
The group reports that their work analyzing spyware used to target opposition figures in South America has triggered death threats. In September 2015 members of the group received a pop-up that said: "We're going to analyze your brain with a bullet—and your family's, too ... You like playing the spy and going where you shouldn't, well you should know that it has a cost—your life!"
Internet censorship
The OpenNet Initiative studied Internet filtering in 74 countries and found that 42 of them—including both authoritarian and democratic regimes—implement some level of filtering.
The Citizen Lab continued this research area through the Internet Censorship Lab (ICLab), a project aimed at developing new systems and methods for measuring Internet censorship. It was a collaborative effort between the Citizen Lab, Professor Phillipa Gill's group at Stony Brook University's Department of Computer Science, and Professor Nick Feamster's Network Operations and Internet Security Group at Princeton University.
Application-level information controls
The Citizen Lab studies censorship and surveillance implemented in popular applications including social networks, instant messaging, and search engines.
Previous work includes investigations of censorship practices of search engines provided by Google, Microsoft, and Yahoo! for the Chinese market, along with the domestic Chinese search engine Baidu. In 2008, Nart Villeneuve found that TOM-Skype (the Chinese version of Skype at the time) had collected and stored millions of chat records on a publicly accessible server based in China. In 2013, Citizen Lab researchers collaborated with Professor Jedidiah Crandall and Ph.D. student Jeffrey Knockel at the University of New Mexico to reverse engineer TOM-Skype and Sina UC, another instant messaging application used in China. The team was able to obtain the URLs and encryption keys for various versions of these two programs and downloaded the keyword blacklists daily. This work analyzed over a year and a half of data from tracking the keyword lists, examined the social and political contexts behind the content of these lists, and analyzed the times when the lists were updated, including correlations with current events.
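The daily-tracking workflow described above can be illustrated with a brief sketch: fetch the current blacklist, archive it with a date stamp, and diff it against the previous day's copy to surface additions and removals that can then be correlated with current events. The snippet below is a hypothetical illustration only, assuming a placeholder URL for an already-decrypted list; it is not the researchers' actual tooling.

    import datetime
    import pathlib
    import urllib.request

    # Placeholder endpoint for an already-decrypted keyword blacklist; the real
    # TOM-Skype URLs and decryption keys were recovered by reverse engineering.
    LIST_URL = "http://example.com/keyword_blacklist.txt"
    ARCHIVE = pathlib.Path("blacklist_archive")

    def fetch_and_diff():
        """Download today's list, archive it, and report changes vs. yesterday."""
        ARCHIVE.mkdir(exist_ok=True)
        today = datetime.date.today()
        raw = urllib.request.urlopen(LIST_URL).read()
        (ARCHIVE / f"{today}.txt").write_bytes(raw)

        new_words = set(raw.decode("utf-8").splitlines())
        yesterday_file = ARCHIVE / f"{today - datetime.timedelta(days=1)}.txt"
        if yesterday_file.exists():
            old_words = set(yesterday_file.read_text("utf-8").splitlines())
            print("added:", sorted(new_words - old_words))
            print("removed:", sorted(old_words - new_words))

    if __name__ == "__main__":
        fetch_and_diff()

Run once a day (for example, from cron), a script of this shape accumulates the kind of longitudinal record of list changes on which such an analysis depends.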
Current research focuses on monitoring information controls on the popular Chinese microblogging service Sina Weibo, Chinese online encyclopedias, and mobile messaging applications popular in Asia. The Asia Chats project utilizes technical investigation of censorship and surveillance, assessment on the use and storage of user data, and comparison of the terms of service and privacy policies of the applications. The first report released from this project examined regional keyword filtering mechanisms that LINE applies to its Chinese users.
Analysis of a popular cellphone app called "Smart Sheriff", by Citizen Lab and the German group Cure53, asserted that the app represented a security hole that betrayed the privacy of the children it was meant to protect and that of their parents. South Korean law required all cellphones sold to those under 18 to contain software designed to protect children, and Smart Sheriff was the most popular government-approved app, with 380,000 subscribers. The Citizen Lab/Cure53 report described Smart Sheriff's security holes as "catastrophic".
Commercial surveillance
The Citizen Lab conducts research on the global proliferation of targeted surveillance software and toolkits, including FinFisher, Hacking Team and NSO Group.
FinFisher is a suite of remote intrusion and surveillance software developed by Munich-based Gamma International GmbH and marketed and sold exclusively to law enforcement and intelligence agencies by the UK-based Gamma Group. In 2012, Morgan Marquis-Boire and Bill Marczak provided the first public identification of FinFisher's software. The Citizen Lab and collaborators have done extensive investigations into FinFisher, including revealing its use against Bahraini activists, analyzing variants of the FinFisher suite that target mobile phone operating systems, uncovering targeted spying campaigns against political dissidents in Malaysia and Ethiopia, and documenting FinFisher command and control servers in 36 countries. Citizen Lab's FinFisher research has informed and inspired responses from civil society organizations in Pakistan, Mexico, and the United Kingdom. In Mexico, for example, local activists and politicians collaborated to demand an investigation into the state's acquisition of surveillance technologies. In the UK, the research led to a crackdown on the sale of the software over worries of misuse by repressive regimes.
Hacking Team is a company based in Milan, Italy, that provides intrusion and surveillance software called Remote Control System (RCS) to law enforcement and intelligence agencies. The Citizen Lab and collaborators have mapped out RCS network endpoints in 21 countries and have revealed evidence of RCS being used to target a human rights activist in the United Arab Emirates, a Moroccan citizen journalist organization, and an independent news agency run by members of the Ethiopian diaspora. Following the publication of "Hacking Team and the Targeting of Ethiopian Journalists", the Electronic Frontier Foundation and Privacy International both took legal action related to allegations that the Ethiopian government had compromised the computers of Ethiopian expatriates in the United States and the UK.
In 2018, the Citizen Lab released an investigation into the global proliferation of Internet filtering systems manufactured by the Canadian company Netsweeper, Inc. The laboratory identified Netsweeper installations designed to filter Internet content operating on networks in 30 countries and focused on 10 with past histories of human rights challenges: Afghanistan, Bahrain, India, Kuwait, Pakistan, Qatar, Somalia, Sudan, the United Arab Emirates, and Yemen. Websites blocked in these countries include religious content, political campaigns, and media websites. Of particular interest was Netsweeper's "Alternative Lifestyles" category, which appears to have as one of its principal purposes the blocking of non-pornographic LGBTQ content, including that offered by civil rights and advocacy organizations, HIV/AIDS prevention organizations, and LGBTQ media and cultural groups. The Citizen Lab called on government agencies to abandon the filtering of LGBT content.
Since 2016, Citizen Lab has published a number of reports on "Pegasus", spyware for mobile devices developed by the NSO Group, an Israel-based cyber intelligence firm. Citizen Lab's ten-part series on the NSO Group ran from 2016 through 2018. The August 2018 report was timed to coordinate with Amnesty International's in-depth report on the NSO Group.
In 2017, the group released several reports showcasing phishing attempts in Mexico that used NSO Group technology. The products were used in multiple attempts to gain control of the mobile devices of Mexican government officials, journalists, lawyers, human rights advocates, and anti-corruption workers. The operations used SMS messages as bait in an attempt to trick targets into clicking on links to the NSO Group's exploit infrastructure; clicking on the links would lead to the remote infection of a target's phone. In one case, the son of one of the journalists, a minor at the time, was also targeted. NSO, which purports to sell its products only to governments, also came under the group's focus when the mobile phone of prominent UAE human rights defender Ahmed Mansoor was targeted. The report on these attempts marked the first reported instance of iOS zero-day exploits seen in the wild and prompted Apple to release the iOS 9.3.5 security update, affecting more than 1 billion Apple users worldwide.
In 2020, the Citizen Lab released a report unveiling Dark Basin, a hack-for-hire group based in India.
The Citizen Lab's research on commercial surveillance technologies has resulted in legal and policy impacts. In December 2013, the Wassenaar Arrangement was amended to include two new categories of surveillance systems on its Dual Use control list—"intrusion software" and "IP Network surveillance systems". The Wassenaar Arrangement seeks to limit the export of conventional arms and dual-use technologies by calling on signatories to exchange information and provide notification on export activities of goods and munitions included in its control lists. The amendments in December 2013 were the product of intense lobbying by civil society organizations and politicians in Europe, whose efforts were informed by Citizen Lab's research on intrusion software like FinFisher and surveillance systems developed and marketed by Blue Coat Systems.
Commercial filtering
The Citizen Lab studies the commercial market for censorship and surveillance technologies, which consists of a range of products that are capable of content filtering as well as passive surveillance.
The Citizen Lab has been developing and refining methods for performing Internet-wide scans to measure Internet filtering and detect externally visible installations of URL filtering products. The goal of this work is to develop simple, repeatable methodologies for identifying instances of internet filtering and installations of devices used to conduct censorship and surveillance.
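In outline, such a scan probes a candidate device with an HTTP request for a known test URL and compares the response against block-page signatures associated with specific filtering products. The sketch below is a minimal, hypothetical illustration of that idea; the signature string, test path, and address are placeholders, and real measurement systems use curated vendor fingerprints, many protocols, and careful ethical controls.

    import socket

    # Placeholder fingerprint of a filtering product's block page; actual
    # scans match curated, vendor-specific signatures.
    BLOCK_SIGNATURE = b"This page has been blocked"

    def probe(host: str, test_path: str = "/test-url") -> bool:
        """Return True if the host answers with a recognizable block page."""
        request = (f"GET {test_path} HTTP/1.1\r\n"
                   f"Host: {host}\r\nConnection: close\r\n\r\n").encode()
        with socket.create_connection((host, 80), timeout=5) as sock:
            sock.sendall(request)
            response = b""
            while chunk := sock.recv(4096):
                response += chunk
        return BLOCK_SIGNATURE in response

    if __name__ == "__main__":
        # 198.51.100.7 is a documentation-range address, used purely as
        # an illustration.
        print(probe("198.51.100.7"))

Repeating such a probe across large address ranges and clustering the responses is what makes externally visible filtering installations detectable at Internet scale.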
The Citizen Lab has conducted research into companies such as Blue Coat Systems, Netsweeper, and SmartFilter. Major reports include "Some Devices Wander by Mistake: Planet Blue Coat Redux" (2013), "O Pakistan, We Stand on Guard for Thee: An Analysis of Canada-based Netsweeper’s Role in Pakistan’s Censorship Regime" (2013), and "Planet Blue Coat: Mapping Global Censorship and Surveillance Tools" (2013).
Following the 2011 publication of "Behind Blue Coat: Investigations of Commercial Filtering in Syria and Burma", Blue Coat Systems officially announced that it would no longer provide "support, updates, or other services" to software in Syria. In December 2011, the U.S. Department of Commerce's Bureau of Industry and Security reacted to the Blue Coat evidence and imposed a $2.8 million fine on the Emirati company responsible for purchasing filtering products from Blue Coat and exporting them to Syria without a license.
Citizen Lab's Netsweeper research has been cited by Pakistani civil society organizations Bytes for All and Bolo Bhi in public interest litigation against the Pakistani government and in formal complaints to the High Commission (Embassy) of Canada to Pakistan.
Corporate transparency and governmental accountability
The Citizen Lab examines transparency and accountability mechanisms relevant to the relationship between corporations and state agencies regarding personal data and other surveillance activities. This research has included an investigation of the use of artificial intelligence in Canada's immigration and refugee systems (co-authored with the International Human Rights Program at the University of Toronto's Faculty of Law), an analysis of ongoing encryption debates in the Canadian context (co-authored with the Canadian Internet Policy and Public Interest Clinic), and a close look at consumer personal data requests in Canada.
In the summer of 2017, the Government of Canada introduced new national security legislation, Bill C-59 (the National Security Act). It proposed to significantly change Canada's national security agencies and practices, including Canada's signals intelligence agency (the Communications Security Establishment). Since the Bill was first proposed, a range of civil society groups and academics have called for significant amendments to the proposed Act. A co-authored paper by the Citizen Lab and the Canadian Internet Policy and Public Interest Clinic represented the most detailed and comprehensive analysis of CSE-related reforms to date.
Policy engagement
The Citizen Lab participates in various global discussions on Internet governance, such as the Internet Governance Forum, ICANN, and the United Nations Group of Governmental Experts on Information and Telecommunications.
Since 2010, the Citizen Lab has helped organize the annual Cyber Dialogue conference, hosted by the Munk School of Global Affairs’ Canada Centre, which convenes over 100 individuals from countries around the world who work in government, civil society, academia, and private enterprise in an effort to better understand the most pressing issues in cyberspace. The Cyber Dialogue has a participatory format that engages all attendees in a moderated dialogue on Internet security, governance, and human rights. Other conferences around the world, including a high-level meeting by the Hague-based Scientific Council for Government Policy and the Swedish government's Stockholm Internet Forum, have taken up themes inspired by discussions at the Cyber Dialogue.
Reception
The Citizen Lab has won a number of awards for its work. It is the first Canadian institution to win the MacArthur Foundation’s MacArthur Award for Creative and Effective Institutions (2014) and the only Canadian institution to receive a "New Digital Age" Grant (2014) from Google Executive Chairman Eric Schmidt. Past awards include the Electronic Frontier Foundation Pioneer award (2015), the Canadian Library Association's Advancement of Intellectual Freedom in Canada Award (2013), the Canadian Committee for World Press Freedom's Press Freedom Award (2011), and the Canadian Journalists for Free Expression’s Vox Libera Award (2010).
The Citizen Lab's work is often cited in media stories relating to digital security, privacy controls, government policy, human rights, and technology. Since 2006, they have been featured in publications such as The Washington Post, The New York Times, The Globe and Mail, BusinessWeek, Al Jazeera, Forbes, Wired, BBC, Bloomberg News, Slate, Salon, and the Jakarta Post.
List of Alias characters
The following is a partial list of characters from the TV series Alias.
Overview
Main characters
Recurring characters
McKenas Cole
McKenas Cole was portrayed by Quentin Tarantino. Formerly an operative for SD-6, he defected from the organization, first to work for "The Man", and later to assume a leading position in The Covenant.
Cole first appears in the season one episodes "The Box" (Parts I and II). He and a group of agents infiltrate the SD-6 office in Los Angeles and take the entire cell hostage, with the exception of Sydney and Jack Bristow, who are in the parking garage of the Credit Dauphine building at the time. It is revealed that he is a former SD-6 agent who was tasked by Arvin Sloane with destroying a pipeline in Chechnya five years earlier, but was captured and tortured by rebels. He is ultimately released and becomes an agent for the crime syndicate operated by "The Man," later revealed to be Irina Derevko, and he tries to procure a vial of liquid from the vault of SD-6 in order to expose the ink of a document written by Milo Rambaldi. However, after valiant efforts by the Bristows and CIA agent Michael Vaughn, he is apprehended and taken into custody by the United States government.
He next appears in the season three episode "After Six." After Julian Sark and Lauren Reed kill the six leaders of the Covenant cells, they attempt to use this as leverage in order to ascend to more powerful positions within the terrorist organization. To Sark's chagrin, Lauren went behind his back and alerted Cole to Sark's intentions. This was of value to Cole and the Covenant, however, because the CIA was at the time looking for a document called The Doleac Agenda, which specified the most important information about the Covenant's operations, including the cell leaders' names. Thus Sark's actions, while a betrayal, were beneficial to the Covenant, and Cole promoted Sark and Lauren to co-commanders of the North American cell of the Covenant.
Cole also oversaw a demonstration of Sydney's apparent brainwashing as Julia Thorne.
Cole's current whereabouts and how he escaped from CIA custody are unknown.
Gordon Dean
Gordon Dean was portrayed by Tyrees Allen.
Dean was the head of The Shed, a clandestine criminal organization similar to SD-6. As at SD-6, most of the operatives of the Shed believed they were working for a black ops division of the Central Intelligence Agency. Apparently only Dean and his associate Kelly Peyton knew the truth, that the Shed was actually a division of Prophet Five.
Dean initially appeared in the first episode of season 5, claiming to be with the CIA's Office of Special Investigations. He claimed to be investigating Michael Vaughn as a double agent; however, APO quickly determined that Dean was a rogue agent. He had faked his death two years earlier and had since been living "off the grid."
When Shed operative Rachel Gibson learned the Shed's actual affiliation, she turned on it and at the behest of APO and Sydney stole the data from the Shed's servers. When Dean discovered what she was doing he destroyed the Shed, killing everyone except Peyton and Gibson.
Following the destruction of the Shed, Dean and Peyton undertook a series of attempts to recover or kill Gibson, with no success. Dean was instrumental in securing the release of Arvin Sloane from federal custody and getting Sloane's security clearance reinstated so Sloane could work for Dean inside APO. He had an undetermined connection to Sloane prior to this.
Eventually Dean was captured by Gibson and taken into APO custody. While Dean was in custody, Sloane murdered him at the behest of Peyton, who, with Dean's death, took control of his operations.
Elena Derevko
Elena Derevko was played by Sônia Braga.
Elena is the sister of Irina and Katya Derevko and was one of the KGB's foremost assassins, responsible for the murder of countless diplomats and politicians throughout Eastern Europe. She was described by her sister Irina as "volatile and quick-tempered". Considered the most ruthless of the Derevko sisters, Elena is the secret leader of The Covenant. She was first mentioned by Jack in the season three episode "Crossings". She is not referenced again until the fourth season, when in "Welcome to Liberty Village" it is revealed that an unidentified Russian group is trying to find her.
Thirty years before Season 4, Elena disappeared and could not be found, even by Irina. Around a year before Season 4, Arvin Sloane and Jack Bristow had been investigating Elena's whereabouts, until Jack received a message from an old associate that led him to one of Elena's safehouses in Warsaw. Inside was detailed information on the lives of Sydney Bristow and Nadia Santos, her nieces, with evidence suggesting she had had them under surveillance for at least a decade.
Soon Nadia's former caretaker Sophia Vargas arrived in Los Angeles, claiming that her reason for coming to the United States was an attack by an unidentified group at her home in Lisbon, Portugal. They quickly assumed Elena was responsible for the attack, not realizing that she was in fact "Sophia".
Under the alias of "Sophia Vargas", Elena used her former connection to Nadia to stay with her and Sydney to monitor their work for the CIA. She also monitored Nadia's conversations through a necklace equipped with a microphone.
She later hacked into Nadia's laptop and requested to move the Hydrosek, a deadly weapon, to an unsecured location away from APO. Elena and her associate Avian then stole the weapon before Nadia and Weiss could stop them, not realizing who the thief was.
In order to complete her endgame, Elena, fully aware that Sloane would at some point double-cross her, also ordered the creation of "Arvin Clone": a corporal in the U.S. Army named Ned Bolger, who was captured and brainwashed to believe that he was, in fact, Arvin Sloane. In the process Bolger committed numerous criminal acts, such as setting Irina up to appear as if she had contracted an assassin to kill Sydney.
Elena sent a team into the Department of Special Research to steal all the Rambaldi artifacts necessary to complete "Rambaldi's endgame". On the same night, she left Sydney and Nadia's house, saying that she had been informed by the Lisbon police that it was safe for her to return. Hours later, Hayden Chase entered the house and accused Nadia of working against the government and allowing the team to break into the DSR. It was at this point that Jack and Sloane began to realize that Sophia was Elena, a suspicion supported by the microphone found in the necklace Sophia had given Nadia.
Sloane deduced that Elena would go to Lazlo Drake with the Rambaldi artifacts. Sydney and Sloane went to meet him, but he was already dead. Sydney found a hidden camera and rewound the feed to see Elena talking with Drake. Before she could see where they planned to take the Rambaldi artifacts, Sloane tranquilized her and left. He met up with Elena in her office in Prague and offered his services to help complete her task. Jack discovered through Katya Derevko that The Covenant was a major terrorist organization set up by Elena in order to secretly acquire Rambaldi artifacts.
Dixon, Nadia and Sydney were sent on a mission to stop them. Dixon found Elena and Sloane, but Elena shot him before he could react. While in the hospital, Dixon told Sydney that he had seen Irina Derevko before Elena shot him. It was discovered that Elena had used Project Helix to create a double of Irina, then, through the use of "Arvin Clone", set up a hit on Sydney, sparking the events that led to the double's execution, ultimately leading the intelligence and criminal worlds to believe that Irina was dead. The double was described as an ardent follower of Elena Derevko, fully aware of the fact that she was being set up to die.
The real Irina, however, was imprisoned by Elena and tortured for her knowledge of a Rambaldi prophecy known as "Il Diluvio", the Flood. She was kept for eighteen months in Prague, then moved to a remote location in Guatemala, Tikal. Sydney, Jack and Nadia located her through Lucian Nisard and saved her from Elena's guards.
Elena and Sloane constructed a massive Mueller device over the Russian city of Sovogda which, in conjunction with the tainted water supply from The Orchid, infected the people of Sovogda and made them extremely aggressive. When Sloane learned that APO had sent a team into the city to stop them, he offered to head out with the team himself. While he was doing this, Elena began to transmit a subaudible frequency via the Russian mil-sat network to spread the infection worldwide. At the moment the bombers were scheduled to destroy Sovogda, the frequency would be broadcast, causing the effects seen in Sovogda to occur worldwide.
One of Elena's men found Nadia walking around the streets and brought her to Elena, who told Nadia that she never wanted to hurt her. She said that she always thought of Nadia as her daughter, then offered her a place alongside her. She showed Nadia a map of the world with pinpoints all over it, saying "the flood has begun, just a few will survive." Nadia refused to join her, saying that she would never kill Sydney as Rambaldi's prophecy had stated. Elena then injected Nadia with the tainted water, which infected her with the aim of forcing her to kill Sydney.
Elena noticed Sloane on the monitor. He was back and carrying a supposedly unconscious Vaughn, but as soon as they arrived in the base they began to shoot Elena's men. They were joined by Jack and Irina. They got to Elena, and Irina knocked her to the floor with her gun. Sloane put Elena in a chair, and Jack and Vaughn went to the computers to stop the mil-sat network from uploading the frequency. However, they could not retract the transmission. Elena told them that she had planned the attack for five years; it would not be that simple to stop her. Sydney managed to get into the main circuit board but found a blue and a white wire. Irina did not know which to cut, so she asked Elena. When Elena refused to answer, Irina told Jack to start torturing her. Before he could do so, Elena told him to stop and said to cut the white wire. Irina immediately shot Elena in the head and told Sydney to cut the blue wire.
Elena's past with Nadia
Sydney and Vaughn discover footage of a very young Nadia being tested with Rambaldi fluid in a Russian bunker. Katya Derevko tells them Nadia was born to Irina in a Soviet prison after Irina left Jack. Sydney was born in 1975 and Irina left in late 1981, which means Nadia must have been born in 1982. The footage proves Nadia was in Soviet custody for at least the first few years of her life; conservatively, based on the footage, she could have been in Soviet hands until at least 1985.
Elena, as Sophia Vargas, tells Michael Vaughn that the only time she met his father Bill was when he left Nadia in her care at the orphanage in Buenos Aires. However, Michael Vaughn locates Sophia's address in Lisbon through an encoded entry in a journal purported to have belonged to his father, which indicated Bill Vaughn had survived past 1979, when Irina Derevko was supposed to have killed him. Since this journal was revealed to be a forgery, there is no reason to believe the information given by Elena supporting the journal's authenticity was true.
Nadia ran away from the orphanage in 1992. The footage is fairly dark, but it is possible she was as young as 10 when she ran away; she is seen in a dormitory with other girls of about that age. She is recruited by Roberto Fox when she is 17 or 18, which would make the year 1999 or 2000. She dies circa 2007.
In this timeline, the only loose ends are how Nadia ended up at the orphanage in the first place and why the mercenary Roberts would tell Michael Vaughn his father was involved. Since the orphanage was headed by Elena Derevko, the most obvious explanation is that Elena kidnapped Nadia after she was taken from Irina, lied to Vaughn about his father's involvement, and paid Roberts to do the same.
Katya Derevko
Yekaterina "Katya" Derevko, alias the Black Sparrow, was the second of the three Derevko sisters to be introduced. She was played by Isabella Rossellini.
Katya first appeared during the third season, sent by Irina in response to Jack's request for help in extricating Sydney and Vaughn from a North Korean prison. Following the mission, Katya kisses Jack passionately.
In the course of searching for The Passenger after her abduction in season 3, Sydney and Vaughn discover video files of Nadia as a small child in an abandoned Russian military bunker, formerly the center of Soviet Rambaldi research. Katya and several Russian guards confront them, but Katya turns on the guards, subduing them with tranquilizer darts, and the three of them travel to St. Petersburg. Katya fills them in on Nadia's background, claiming that Nadia was born to Irina in a Russian prison after Irina left Jack. Since Sydney was born in April 1975 and Irina left Jack in late 1981, Nadia must have been born in 1982. This timeline is irreconcilable with reports that Vaughn's father had rescued Nadia as a young girl, since he was killed by Irina in 1979, confirming the belief that Elena Derevko had fabricated the story of Bill Vaughn's involvement. Katya and Jack work together to trace Sloane's movements through bank activity, and Katya flirts outrageously with Jack. Jack initially rebuffs her advances but later kisses her deeply.
After Nadia's rescue, Vaughn tracks down his wife, Lauren Reed. He has captured her, beaten her, and is about to kill her when Katya intervenes, stabbing Vaughn in the back and releasing Lauren. While engaged in reconnaissance of a Covenant operation in Palermo, Sydney encounters Katya again. Katya attempts to shoot her (thereby confirming to Sydney that Katya is allied with the Covenant), but Sydney had removed the bullets from Katya's gun. Sydney hits Katya with a tranquilizer dart and Katya is taken into federal custody.
Nothing is seen of Katya until Nadia visits her in prison in season 4, seeking to learn more about the mother Nadia never knew. Katya eats some chocolate Nadia brought her, deliberately triggering a near-fatal allergic reaction in order to force a meeting with Sydney. Sydney visits Katya in the prison infirmary to warn her against continuing to see Nadia. Katya tells her that she never intended to harm Sydney in Palermo, and that Irina never intended to harm Sydney either. Irina had tried to contact Katya, concerned that someone was setting her up. Katya asks Sydney to retrieve the message from Irina, which is contained in a music box. Sydney retrieves the box and recovers the message, which leads her to bank records in the name of Arvin. Sydney informs Katya that the records exonerate Irina and implicate Sloane (it was later discovered that Sloane's imposter, a pawn of Elena Derevko, was behind it).
After Elena's raid on the Department of Special Research and with Katya being the only Derevko he has access to, Jack visits her in prison seeking information on the whereabouts and operations of Elena. In exchange, he promises to do everything in his power to secure her release. Katya tells him Elena was the true power behind the Covenant and advises him that Elena's base of operations is in Prague, but only Irina knew Elena's full endgame and how to stop it.
Whether Jack was successful in securing Katya's release is unknown, and no further information about her status is given.
Anna Espinosa
Anna Espinosa was initially played by Gina Torres. Born in Cuba but raised in Russia, Anna was a Soviet spy at some point, until the collapse of the Soviet Union. She is a devout follower of Rambaldi, as evidenced by the eye tattoo on her hand, and as such is one of the many characters obsessed with carrying on the late prophet's work, though her appearances on the show are scant compared to those of the other zealots.
In season one, Espinosa is a ruthless and persistent nemesis to Sydney while the latter goes on missions for SD-6. She worked for K-Directorate, a rival organization of SD-6, composed of former operatives from the Eastern bloc, and was the go-to officer for Wetworks and Active Measures, until The Man, aka Irina Derevko, took down the organization and seized control of its assets. Upon the fall of the organization, Anna staged her death and went freelance, selling her skills as an agent to the highest bidder.
She is not seen again until the fourth season, when she is in the employ of the Cadmus Revolutionary Front (an organization believed to be composed of former Covenant operatives until it is revealed that the Covenant is still alive under Elena Derevko). She is tasked with stealing a chemical bomb but instead eliminates the CRF's leader upon retrieving it. She rescues Julian Sark and plans to sell the bomb, but Sark betrays her and she is captured by Sydney and taken into federal custody.
She returned for the 100th episode of the series, "There's Only One Sydney Bristow". Prophet Five, using its government contacts, was able to secure her release. She then kidnaps Will in an attempt to blackmail Sydney. She is able to obtain Page 47 of the Rambaldi prophecy, depicting the Chosen One, a woman bearing Sydney's likeness. At the end of the episode, possibly using Project Helix, the controlling elements of Prophet Five genetically transformed Espinosa into Sydney's double, after sampling Sydney's DNA. At this point Jennifer Garner took over the portrayal of the character.
The first encounter APO personnel had with the transformed Anna occurred in Ghana in the episode "30 Seconds" and resulted in her killing freelance operative Renée Rienne before escaping. In the following episode ("I See Dead People"), Anna is ordered to intercept the real Sydney, who is en route to contact Michael Vaughn. Anna tries to deceive Vaughn, but he realizes she is an imposter, though he does not know her true identity. They fight, but as Anna gains the upper hand, she is shot in the back and head by Sydney, who has just arrived at the scene. Sydney subsequently impersonates Anna within Prophet Five for a brief time, initially convincing Sloane that she (as Anna) had killed Sydney by shooting her in the back. Sloane conspired to kill Anna, feeling that Sydney did not deserve to die that way, but he came to realize that Sydney was impersonating Anna: he first grew suspicious of the passion with which "Anna" spoke to him, and his suspicion was confirmed when Sydney told him "I don't die that easy" in a prison cell after meeting with The Rose. Apparently, Prophet Five considered Anna either disposable or too dangerous to their plans to keep alive, as they planned to kill her once she had fulfilled the duties foreseen by Rambaldi for the Chosen One (namely, retrieving an artifact from San Cielo).
Thomas Grace
Thomas Grace was portrayed by Balthazar Getty.
The character was introduced in the second episode of the fifth season. Thomas Grace is recruited into Authorized Personnel Only, a black-ops division of the CIA, in order to replace agent Michael Vaughn after Vaughn is supposedly killed (he also replaces Eric Weiss after Weiss transfers out of APO). This led to friction between Grace and Vaughn's partner and fiancée, Sydney. Grace also bonded with another new APO recruit, Rachel Gibson, and began training the novice agent to fight.
Not much is known about the character's background, although his file, the details of which were never revealed, caused several colleagues to hesitate before welcoming him into APO with open arms. In "The Horizon", Marshall Flinkman discovers that Grace used to be married, though he initially believes it is a remnant from an undercover assignment. Grace reacts with anger at Marshall's nosiness.
In "S.O.S.", Grace takes part in an APO mission to infiltrate CIA headquarters in Langley, Virginia in order to locate Sydney, who has been kidnapped by Prophet Five. During the mission, he takes advantage of the opportunity to obtain information from the Witness Protection Program regarding the man who, it is revealed, murdered Grace's wife Amanda. Later, Grace tracks the man down, but instead of killing him, asks to be put in touch with someone called "The Cardinal." His "off-book" mission does not go unnoticed, as Rachel discovers Grace's investigation. Grace, in response, lies to her and says he intended to kill the man, but was unable to do so once he saw that the man had started a family of his own.
In "No Hard Feelings", Rachel discovers the truth: The Cardinal is the man who ordered the death of Grace's wife, and the man Grace had pursued was the hit man assigned to do the job. The hit man agrees to tell Grace why his wife was targeted in return for his vehicle, which had been impounded by the government. Upon the vehicle's return, the hit man tells Grace that Amanda was simply "in the wrong place at the wrong time" and that the intent was actually to kill Grace. The hit man then drives off in his vehicle, only to die when Grace detonates a bomb inside. The identity of "The Cardinal" and why he had targeted Grace for assassination are never revealed.
In the penultimate episode of Alias, Grace is sent to defuse a bomb that Sark had placed onto a subway to destroy APO headquarters. Unable to defuse the bomb, he repeatedly sprays liquid nitrogen on it, which slows down the mechanism for a short period of time but does not disable it. He is instructed to leave when the timer gets to 60 seconds, but ultimately stays with the bomb beyond the one-minute threshold in order to keep it from going off. His last words are to Rachel Gibson, telling her that he wished they had more time because he would have asked her out. Starting to cry, Rachel responds that she would have said yes just as the bomb detonates, killing Grace.
Assistant Director Kendall
Assistant Director Kendall was played by Terry O'Quinn.
He first appeared in the first season as the head of an FBI tribunal brought in from Washington, D.C. to interrogate Sydney after she is arrested when evidence links her to a prophecy by the 15th-century prophet Milo Rambaldi, suggesting she could be a national security risk to the United States. Kendall questions her on every aspect of her life, until she is broken out of FBI custody.
Kendall then reappeared in the second episode of season two, assuming command of a joint FBI/CIA task force working to bring down the criminal organization the Alliance and, primarily, its SD-6 cell, though his reappearance coincided specifically with the surrender of Irina Derevko to the CIA. He immediately became a source of antagonism for the characters with his no-nonsense, by-the-book approach. The character effectively became a second-season regular, appearing in almost every episode, although almost nothing was learned about him (to this day his first name remains a mystery) and he mainly served as a source of exposition.
Following the two year time jump between the end of season two and beginning of the third season, Kendall was replaced as head of the task force by Marcus Dixon, though later reappeared again midway through the season when he met secretly with Sydney and informed her of the truth: he was never an FBI Assistant Director as he claimed, and had in fact always been affiliated with the Department of Special Research (DSR). He ran their facility in Nevada, Project Black Hole, which held everything the US government had collected on Rambaldi. He informs Sydney about what happened during the two years she was missing, and how they overcame their past differences and formed a friendship. The episode is Kendall's last appearance in the series.
Alexander Khasinau
Alexander Khasinau was played by Derrick O'Connor.
Alexander Khasinau is second-in-command of a criminal organization headed by a mysterious figure known as "The Man." Julian Sark, who undertakes a number of missions on behalf of this organization, reports to Khasinau, leading SD-6 to believe that Khasinau is "The Man." Khasinau owns a nightclub in Paris which serves as a front for his criminal activities.
Following a raid on SD-6 headquarters led by McKenas Cole, who says his employer is "The Man", Arvin Sloane seeks a declaration of war by the Alliance of Twelve on Khasinau. In an attempt to secure the swing vote of SD-9 leader Edward Poole, Sloane assassinates fellow SD cell leader Jean Briault at Poole's behest, after Poole shows Sloane evidence of Briault's complicity with Khasinau. When Poole still votes against war, Sloane realizes that Poole framed Briault and manipulated him into the assassination; Poole is the one actually in league with Khasinau.
Sloane tells Sydney that following the disappearance of Sydney's mother, Irina Derevko, Sloane served on a commission that investigated her activities. He states that Khasinau had been Irina's superior at the KGB. In a later episode, videotape footage of Irina shows her stating that Khasinau actually recruited her.
Khasinau, through his mole in the CIA, FBI agent Steven Haladki, learns that Sydney Bristow is a double agent for the CIA. He uses this knowledge to formulate a plan to expose SD-6 by feeding information to Sydney's reporter friend Will Tippin. When Tippin, at the urging of Jack, mentions "The Circumference," Khasinau has Tippin come to Paris and has him interrogated under sodium pentothal. Tippin is rescued by Sydney, who is in Paris with Marcus Dixon to retrieve a Rambaldi document from Khasinau's office.
Khasinau arranges for Tippin to be kidnapped by Sark from a CIA safehouse, transported to Taipei and tortured by an evil dentist for more information about the Circumference. Tippin knows nothing about it other than the name. Ironically, it turns out that Khasinau already had the Circumference and didn't realize it. It is the Rambaldi document that SD-6 had previously retrieved from his office. Jack trades the Circumference to Sark to secure Tippin's release.
In the season 1 finale, Khasinau reveals to Sydney that he is not "The Man." "The Man" is in fact Sydney's mother, Irina. It is unclear how widespread the knowledge that Khasinau is not the leader is within the organization. Khasinau is apparently able to inspire fanatical devotion among some members of the organization. Haladki, once exposed by Jack as a double agent, entreats Jack to join with Khasinau, saying Khasinau is "the future."
Khasinau is killed by Irina early in Season 2 in the course of a mission to retrieve "the Bible," a book containing the protocols of Irina's criminal organization.
Andrian Lazarey
Andrian Lazarey was played by Mark Bramhall.
Andrian Lazarey was a Russian diplomat. He was the father of Julian Sark, a descendant of the Romanovs and a member of the Magnific Order of Rambaldi. He was first seen in a videotape in the season 3 premiere episode, the apparent victim of an assassination by Sydney. Sydney had been missing for almost two years, and the only record she had of that time was the tape.
With Lazarey's apparent death, his fortune of $800,000,000 in gold bullion passed to Sark. Sark was in CIA custody, and a shadowy organization known as The Covenant freed him in exchange for seizing that fortune. Lauren Reed, an agent of the National Security Council and the wife of Michael Vaughn, was assigned to learn the identity of Lazarey's murderer.
As Sydney continued to search for clues to her missing two years, she deciphered a coded message that had been left in a penthouse in Rome, Italy. The message was a set of coordinates which led her and Jack to a box buried in the California desert. The box contained a severed hand marked with the Eye of Rambaldi. Tests determined that it belonged to Lazarey and the time since it was severed proved that Sydney did not kill Lazarey.
In an attempt to recover her memories, Sydney submitted herself to a chemical procedure and received a number of cryptic visions. Among the recurring images were the name "St. Aidan" and the face of her friend Will Tippin, who was at the time in the Witness Protection Program. ("Conscious")
Sydney contacted Will and learned that St. Aidan was the name of one of the contacts he had cultivated during his time as a CIA analyst. Will contacted St. Aidan, who turned out to be Lazarey. Lazarey gave him information that led them to a Rambaldi artifact, a sample of Rambaldi's DNA. Lazarey was captured by the Covenant during that meeting and tortured by Sark for the information he had given Will and Sydney. ("Remnants")
FBI Assistant Director Kendall finally filled Sydney in on her lost time. She had been captured by The Covenant, which had tried to condition her into an assassin operating under the name Julia Thorne. Because of her Project Christmas training, the Covenant's conditioning was unsuccessful, and Sydney operated within The Covenant as a double agent. She was tasked to find out from Lazarey the location of the Rambaldi cube containing the DNA sample and then kill him. Instead, Sydney and Lazarey faked his death and searched for the cube under the auspices of the CIA.
Together they located the cube in the Fish River Gorge in Namibia. They recovered the cube from the vault in which it was housed but Lazarey became trapped, necessitating the severing of his hand by Sydney to free him. Per Lazarey's agreement with the CIA, he vanished.
Sydney learned that the Covenant planned to combine the Rambaldi DNA sample with her eggs to create the "second coming" of Rambaldi. Horrified, she hid the DNA sample and had her memory wiped so that the Covenant could not use her to locate it.
The CIA undertook a mission to recover the genetic material, although Sydney decided to destroy it instead. The CIA team recovered Lazarey from Covenant custody. He was transported to a hospital and while in transit asked Sydney if she knew about "The Passenger." Before Lazarey could reveal any additional information about The Passenger, he was assassinated by Lauren, a double agent for the Covenant. ("Full Disclosure")
Dr. Zhang Lee
Dr. Zhang Lee was played by Ric Young.
Lee is a torturer, always remaining eerily calm and distant from the pain he inflicts. He was always seen wearing a suit and glasses, which led to his being referred to by the nickname "Suit 'N' Glasses" before his name was finally revealed in the third season.
Lee first appeared in the pilot episode, "Truth Be Told". He tortured Sydney by extracting a molar, trying to find out who her employer was. Sydney was eventually able to escape after stabbing him with his own equipment. He returned in the first-season finale, "Almost Thirty Years", this time to find out what Will Tippin knew about a Rambaldi artifact known as The Circumference. Lee used a truth serum that was known to cause paralysis in 1 in 5 recipients, after which he determined that Tippin knew nothing and could be released. Tippin subsequently turned the serum on Lee, screaming, "One in five, you little bitch!". While Tippin was not paralyzed as a result of receiving the serum, Lee was not so fortunate.
He resurfaced in the season two episode "A Higher Echelon", now in an electric wheelchair. His new victim was Marshall Flinkman, whom he threatened with an epoxy that would expand, harden, and puncture his organs when ingested. Marshall seemed to cave under the threat and agreed to reconstruct a computer program for him. However, this was simply to buy time for Sydney to rescue him, and Lee was left empty-handed again, though this time he got away with only being kicked out of his chair.
In the third-season episode "Legacy", Lee was discovered to have been in charge of the Soviet Union's Rambaldi research in the early 1980s, including overseeing experiments on Sydney's sister Nadia Santos. Whether he was a follower of Milo Rambaldi or simply following orders is unknown. Sydney and Michael Vaughn tracked him down and captured him. Now in a position of no power at all, his calm demeanor evaporated and he became a terrified, sobbing wreck. When he was unable to give any pertinent information, Vaughn poured a powerfully corrosive liquid over his unfeeling legs and then the rest of his body. Lee made no further appearances, so his final fate is unknown.
Milo Rambaldi
Milo Giacomo Rambaldi is a fictional historical figure within the series. The work of Rambaldi, often centuries ahead of its time and tied to prophecy, plays a central role in the show.
According to Alias creator J. J. Abrams, in a feature on Rambaldi included on the season 5 DVD box set, the Mueller device and Milo Rambaldi were originally intended simply to be MacGuffins.
Rambaldi's technological developments are sought after by numerous governments and rogue organizations in the series. Arvin Sloane is generally obsessed with obtaining Rambaldi's work and unlocking its secrets.
There seems to be no limit to Rambaldi's genius; he was highly adept in automatism, life extension, protein engineering, mathematics, cryptography and cartography. Rambaldi is said to have predicted the digital Information Age. He invented a machine code language around 1489 and devised cryptographic algorithms, and he sketched the designs of a portable vocal communicator and a prototype that reflected the properties of a transistor.
The character draws its inspiration from real-life historical figures including Leonardo da Vinci and Nostradamus. Rambaldi's artistic-looking manuscripts, written in code, are a direct reference to Leonardo's method of recording his work.
The Rambaldi subplot, which dominated the first two seasons of the series and was greatly explored in the second half of the third season, was virtually nonexistent in the early episodes of season four. However, it continued to lurk in the background of the series and, as series creator J. J. Abrams promised, resurfaced in full later in the season. Rambaldi and his works continued to figure heavily through the end of the series.
Fictional biography
Rambaldi (1444–1496), an artist, alchemist, engineer, mystic, and Renaissance man in the vein of Da Vinci, served as chief architect to Pope Alexander VI.
Rambaldi was born in Parma, educated by Vespertine monks, and worked as a student of the arts until he was 12. During his travels to Rome when he was 18, he met Cardinal Rodericus and was retained privately as an architect, consultant and prophet when Rodericus of Borgia (Borja) became Pope in 1492.
His writings and plans are written in multiple languages, ranging from Italian and Demotic hybrids to elusive mixtures of symbols (pre-Masonic cipher encryptions). Rambaldi also created the earliest known watermark, which he used on all of his documents: a naked eye, visible only when viewed under black light, known as the eye of Rambaldi, which helped distinguish original works from forgeries many years later. His watermarked papers were all handmade from a unique polymer fiber.
Despite Borgia's benevolence, Rambaldi and his works never became famous, owing to Archdeacon Claudio Vespertini, who feared the revolutionary implications of the technologies defined in Rambaldi's belief system: Rambaldi believed that science would someday allow us to know God. Vespertini sought to destroy everything of Rambaldi's he could find and to keep the name of Rambaldi "invisible".
When Alexander VI died in 1503, Vespertini ordered that the name Rambaldi be erased from all inscriptions between 1470 and 1496 and his workshop in Rome destroyed. Rambaldi himself was excommunicated for heresy and sentenced to death by flame. However, Rambaldi died in the winter of 1496, a lonely man without a known surviving heir.
Not long after his death, a second workshop was discovered in San Lazzaro and was destroyed by the Vatican. Plans and sketches were sold and traded as if without value during a private auction. These plans have since been located, from the 15th century through recent years, across Italy, France, Eastern Europe and the former Soviet Union. Plans were also found in private collections and museum warehouses. During the Third Reich, documents interpreting his designs and teachings were highly sought after. It was during this time that the nickname Nostravinci arose in auctioneering circles. The design directive for many of these drawings remains unclear to this day and has inspired some impressive forgeries, even prime examples of digital piracy. However, the eye of Rambaldi proved to be the only accurate method of identifying genuine artifacts.
Known Rambaldi artifacts
Mueller device - Designed by Rambaldi and built by Oskar Mueller. Known effects include increasing the aggression of certain bees and emitting a sub-sonic frequency which combined with chemical contaminants causes heightened aggression in humans. Five Mueller devices are known to have been built. One small-scale device was seized by Sydney and handed over to Sloane. Its status is unknown as the CIA found no Rambaldi artifacts when it raided SD-6. One large-scale device was destroyed by Sydney in Taipei. "Arvin Clone" had completed one small-scale device, the status of which is unknown, and one large-scale device located in his Santiago compound, which was seized and/or destroyed by APO. There were also incomplete components of at least one more device in the compound, also presumably seized and/or destroyed by APO. Finally, Elena Derevko constructed an enormous Mueller device in Sovogda, Russia, which Sydney destroyed.
Notebook - Recovered from San Lazzaro. Not seen on-screen; an analysis by SD-6 indicates that the notebook contains rudimentary schematics for a cellular phone.
Sketches - Two sketches, each of which contains half of a binary code. The second sketch is destroyed by acid in Berlin. When the codes are combined with a compression scheme, they reveal the location of the Sol D'Oro.
Sol D'Oro (Golden Sun) - A yellow disk constructed of synthetic polymers, made to resemble stained glass.
Clock - Commissioned and designed by Rambaldi and built by Giovanni Donato (the only man Rambaldi ever collaborated with). When combined with the Golden Sun disk, the clock reveals a star chart identifying the location of Rambaldi's journal.
Rambaldi's journal - Contains instructions on how to assemble Rambaldi's designs. Page 47 was written in invisible ink and became known as "The Prophecy", since, when decoded, it revealed a text and a drawing identifying Sydney as the "Chosen One". Page 94 included a listing of "apocalyptic dates and times".
Ampule - Filled with a "Rambaldi liquid" used to make the text of Page 47 and the Circumference visible.
Code key - Inscribed on the frame of a portrait of Pope Alexander VI painted by Rambaldi, housed in The Vatican. Used to decode page 47 of Rambaldi's journal.
The Circumference - A page of text describing the construction and application of the Mueller device. Like Page 47, it was written in invisible ink. It was obtained from the CIA for Irina Derevko in exchange for Will Tippin. It is next seen in the possession of "Arvin Clone".
Music box - Played a sequence that when each note was translated to its corresponding frequency, revealed an equation for zero-point energy. Destroyed by Sydney after recording the equation to prevent it from falling into the hands of SD-6.
Flower - Found in an egg-shaped container, it was apparently 400–600 years old, Rambaldi's proof of eternal life.
Firebomb - A neutron bomb designed by Rambaldi, it delivered micropulses that disintegrated organic matter but left inorganic matter unharmed.
Cut-out manuscript page - A manuscript page with a section cut out of the center. Sloane recovers the missing piece from inside a 15th-century statue of an arhat.
Study of the Human Heart - Recovered by Jack and Irina. The CIA intended to use this manuscript and Irina to flush Sloane out, but Irina double-crossed Jack, stole the manuscript and turned it over to Sloane in exchange for extracting her from CIA custody. Included the DNA "fingerprint" of Proteo di Regno, which also served as a code key for page 94 of the Rambaldi journal.
Di Regno heart - Found inside Proteo di Regno's body, apparently keeping him alive. Used to power Il Dire.
Pages - Loose pages seen in the possession of the man in Nepal who gives Sloane The Restoration.
Il Dire (The Telling) - A machine made of 47 Rambaldi artifacts, it wrote the word eirene ("peace" in Greek) along with the DNA sequence of The Passenger.
Medication - Derived from a formula found in Rambaldi's journal and used on Allison Doren. It helped her heal from the extensive wounds she received at Sydney's hands and may have imparted lasting rapid regenerative abilities. Not seen on-screen.
Keys - A dozen keys collected by Andrian Lazarey (although one may have been collected by Sydney in the animated episode included on the season 3 DVD set). Used to open the vault which housed the Cube holding Rambaldi's DNA.
Cube - Housed a live sample of Rambaldi's DNA. Collected by Sydney and Lazarey and later concealed by Sydney, which prompted her to erase her memory. The Cube would eventually be retrieved by Sydney for the CIA and was then stolen by The Covenant, who planned to fuse the DNA with Sydney's eggs to engineer Rambaldi's second coming. Destroyed by Sydney.
Kaleidoscope - This artifact required three discs. Once the discs were inserted into the kaleidoscope, they formed a map of an underwater formation in the Sea of Japan, where four additional discs were found.
Discs - Four discs recovered from the Sea of Japan that were required to open the "Irina" box.
"Irina" box - A box with the name Irina inscribed on it. It was rumored to contain "the Passenger", which the CIA believed to be a bioweapon. Supposedly it had not been opened since Rambaldi's time but apparently Sloane had found it open, because he hid the di Regno heart inside it.
The Restoration - A manuscript which references "The Passenger" and the Hourglass. Contains the formula for "Rambaldi fluid".
Code key - Used to decode The Restoration. Not seen on-screen. Lauren Reed stole a false code key created by the CIA.
Hourglass - Contained a fluid that powered the battery for the EEG machine.
EEG machine - A device that reveals the identity of "The Passenger" by sketching her brainwaves.
The Passenger - First believed to be a bioweapon, but discovered to be actually a living person with a "direct conduit" to Milo Rambaldi and the only one to know the exact location of the Sphere of Life. The Passenger is revealed to be Nadia Santos, daughter of Arvin Sloane and Irina Derevko and half-sister of Sydney Bristow.
"Rambaldi fluid" - A chemical containing "protein strains" which, when injected into Nadia Santos (The Passenger), made her a "direct conduit to Rambaldi." This allowed her to experience visions and through muscle memory transcribe a complex algebraic equation for longitude and latitude, the location of the Sphere of Life.
The Sphere of Life - A vessel that supposedly houses Rambaldi's consciousness, placed over a glass floor that only the Passenger can walk across. When "the Passenger" came into contact with it, she saw terrifying flashes of the future.
The Vespertine Papers - Texts, rumored to have been destroyed during World War II, which refer to properties of the Rambaldi orchid. Not seen on-screen. Pages obtained from the Department of Special Research (which may or may not have been genuine Rambaldi pages) were placed in an auction as The Vespertine Papers to flush out "Arvin Clone".
Orchid - Paphiopedilum khan, a rare lady slipper orchid brought to Italy from China in 1269. Recovered by "Arvin Clone" from the Monte Inferno monastery, it is the source of a chemical contaminant that (when combined with other substances seeded in various water supplies by Sloane) encourages human qualities like empathy and harmonic coexistence; in conjunction with the Mueller device, it causes heightened aggression in humans.
Vade Mecum - A Rambaldi manuscript, translated by Lazlo Drake. Described by Sloane as "a template describing how Rambaldi's creations were to be assembled in order to bring forth his final prophecy". Not shown on-screen.
Il Diluvio (The Flood) - A manuscript that described Rambaldi's vision of a moment when the world would be cleansed and everything would begin anew. Not seen on-screen; Irina states that she destroyed it.
The Profeta Cinque (Fifth Prophet) - A manuscript written in apparently unbreakable code that speaks of advanced genetics.
Horizon - A small orb which, when placed on the stone altar in Rambaldi's tomb, forms a levitating sphere which generates a red fluid which has the power to convey immortality.
Amulet - Recovered from "The Rose", it is considered Rambaldi's greatest gift and also his greatest curse. A defiance to the natural order and the end of nature itself. The Amulet apparently contains a map to Rambaldi's tomb, which is revealed by sunlight shining through it while in a cavern on Mt. Subasio.
Rambaldi's Tomb - Contains what appears to be the coffin of Rambaldi and a stone altar, which works in conjunction with "The Horizon".
Symbol
The symbol generally referred to as the "Eye of Rambaldi" is the symbol of The Magnific Order of Rambaldi. In the episode Time Will Tell, a direct descendant of Giovanni Donato (who may, in fact, have been Donato himself) describes the Order as "Rambaldi's most trusted followers, entrusted with safeguarding his creations. Sadly, like most things that once were pure, criminals now use this symbol to infiltrate the Order." Some followers of Rambaldi have the mark tattooed on their hands, including Anna Espinosa.
The glyph is also said by Irina Derevko to represent the struggle between "The Chosen One" and "The Passenger", whom Irina believes to be her two daughters. (See Rambaldi's prophecies below.) The circle in the center of the design is said to be the object around which they will do great battle.
The Eye appears in one frame of the Alias title sequence (seasons 1-3) after all the letters in the word Alias appear. In seasons one and three, it flashes as ALIAS is spelled out and Victor Garber's credit appears. In season two, it appears on ALIAS when "With Lena Olin" appears. Season four does not include the symbol, but it returns in season five when Balthazar Getty's credit appears.
Rambaldi's prophecies
One of Rambaldi's prophecies is central to the Alias story arc. Written on the forty-seventh page of one of his manuscripts, it reads:
This woman here depicted will possess unseen marks, signs that she will be the one to bring forth my works: bind them with fury, a burning anger. Unless prevented, at vulgar cost, this woman will render the greatest power unto utter desolation. This woman, without pretense, will have had her effect, never having seen the beauty of my sky behind Mt. Subasio. Perhaps a single glance would have quelled her fire.
Included with the prophecy is a drawing of a woman who strongly resembles CIA Agent Sydney Bristow; because the text stated that she would "render the greatest power unto utter desolation", Sydney was apprehended by the Department of Special Research. The prophecy also mentioned three physical anomalies, all of which Sydney has. Sydney is eventually rescued before her cover as a CIA agent is blown, and she attempts to disprove that the prophecy refers to her by climbing Mt. Subasio and seeing the sky.
After Sydney seemingly eliminated the possibility that she was 'The Chosen One', she realized that if she had managed to fool them, her mother could have done the same. The Department of Special Research eventually refocused its efforts on tracking down Sydney's mother, Irina Derevko, after it was revealed she was still alive.
In the season two finale, Irina reveals that Sydney, not Irina as Sydney had wanted to believe, is actually Rambaldi's Chosen One, and that Sydney is the only one capable of stopping Sloane.
During Season Three, an organization known as The Covenant tried to use a Rambaldi artifact known as "The Cube", which contained a live sample of Rambaldi's DNA, to fulfill the prophecy by fusing Rambaldi's DNA with Sydney's eggs to engineer Rambaldi's second coming. Sydney prevented this by destroying the lab where the procedure was underway.
When FBI Assistant Director Kendall tells Sydney the truth about the two years she was missing, he mentions that she is a celebrity within the Department of Special Research, having recovered more of Rambaldi's works than anyone else.
During Season Three (Episode 20), another prophecy was revealed to Michael Vaughn. It says that the "Chosen One", the woman referred to on Page 47, and "the Passenger", revealed to be Sydney's half-sister Nadia Santos, would fight and kill each other in battle. Irina Derevko recites a similar prophecy at the finale of season 4 that Rambaldi wrote:
When blood-red horses wander the streets and angels fall from the sky, the Chosen One and the Passenger will clash . . . and only one will survive.
When Elena Derevko put in motion her scheme to destroy civilization using the contaminated water in conjunction with a giant Mueller device, the team spotted a horse that under the light of the device appeared blood red (which prompted Irina to recite the prophecy); Nadia passed a statue of an angel which had fallen from a building right before she was captured and later becomes infected with the contaminated water.
As predicted by Rambaldi, "The Chosen One" and "The Passenger" clashed over the Mueller device, and Sydney would have been killed had Arvin Sloane not intervened by shooting Nadia when she began strangling Sydney. Sydney then destroyed the Mueller device, and Nadia was left in a coma.
In Season Five, it is revealed that page 47 holds a secret message, briefly shown when Nadia, after being cured, tries to burn the page to prevent her father from finding it; as predicted in the prophecy, this results in Nadia's death. Arvin Sloane eventually translates the message, which reads:
The circle will be complete when the Chosen One finds The Rose in San Chielo.
Prophet Five, using its government contacts, was able to secure Anna Espinosa's release and later was able to genetically transform Espinosa into Sydney's double after the latter's DNA was sampled by Prophet Five. Sydney discovers Anna's impersonation and kills her, then begins impersonating Anna within Prophet Five.
Sydney is sent to Italy where she is approached by Sark at a betting parlor. Believing Sydney to be Anna, he explains that the former San Chielo monastery is now the La Fossa prison. After faking a robbery, Sark and Sydney surrender to the police and are taken to La Fossa. Sark fakes an illness and is taken to the infirmary, where he fights off the guards and accesses the security system to open the door to Sydney's cell. Sydney finds the basement, where an old man says that he has been waiting for her for a very long time. He indicates that he knows she's the real Sydney and quotes the message from page 47 and says that he is "The Rose". He then shows Sydney a wall where a drawing similar to the one on page 47 was painted, and gives her an amulet before declaring ominously and prophetically that "it is only a matter of time until the stars fall from the sky, until the end of light".
In the finale of Season Five, it is revealed that Sydney had misinterpreted the part of the prophecy that refers to "the beauty of my sky". Arvin Sloane shoots the ice beneath Sydney to prevent her (and the audience) from seeing a specific pattern on the wall inside Mt. Subasio as the rising sunlight seeps into the cavern and filters through the amulet that Sydney recovered from the Rose. Later, while facing her mother, Sydney discovers that Irina has sent a virus to the defense satellites, causing them to plunge to Earth like stars falling from the sky; the end of light follows, marking the end of the series.
Rambaldi's endgame
As revealed in the final three episodes of season four, the first part of Rambaldi's endgame was the Mueller Device (which could be used to turn human nature toward hatred or peace, depending on the desire of the possessor). The second part of the endgame was The Horizon (immortality), as revealed at the end of season five. Together, they would result in the possessor living forever and, with the help of the Mueller Device, changing the world the way they see fit for all eternity. However, this was averted when all known Mueller Devices were destroyed and Sloane, the only known beneficiary of The Horizon, was trapped in Rambaldi's own tomb by an explosion, left to exist but never be free for all eternity.
Emily Sloane
Emily Sloane was portrayed by Amy Irving.
Emily Sloane was married to SD-6 head Arvin Sloane for more than 30 years. Little is known of her life outside of her marriage to Arvin. She had worked for the State Department for several years but whether this was before or during her marriage is unknown. She was an avid gardener. She and Arvin had one child, Jacquelyn, who died in infancy. Her grief was so great that she asked Arvin never to mention their child again.
When Irina Derevko, Sydney's mother, faked her death when Sydney was six, Sydney's father, Jack, was taken into federal custody. He named Arvin and Emily as Sydney's temporary guardians. Why Sydney would later state in multiple episodes she did not meet Arvin or Emily until after she had started working for SD-6 is unknown. Sydney stated she considered Emily her real mother.
During season 1, Emily is near death from lymphoma. Sydney visits her in an SD-6 hospital, where Emily reveals she knows Arvin doesn't really work for a bank and she knows of the existence of SD-6 (although she apparently believes it is affiliated with the CIA). Under security protocols, Sydney is required to report her to SD-6 Security Section but does not. SD-6 finds her out anyway and Arvin is ordered to execute her. He pleads for a reprieve on the grounds her cancer will soon kill her anyway and his request is granted. Emily's cancer goes into total remission and Arvin is again ordered to kill her. Instead, over the end of season 1 and the beginning of season 2, he hatches an elaborate scheme to fake her death and spirits her away to an isolated island. There, he confesses to her the true nature of SD-6. Initially horrified, Emily's love for Arvin leads her to forgive him and they remain together.
With the downfall of SD-6 and the Alliance, Sloane (now an internationally wanted terrorist) tells Emily they're "free" and leads her to believe he has finally left behind his deceptions. Emily and Arvin relocate to a villa in Tuscany, but because of the possibility they have been compromised, Sloane relocates them again. Emily learns Irina, whom she knew as Laura Bristow and thought was dead, is still alive and working with her husband. Arvin tells Emily everything he's done is to assure she remains cancer-free forever.
Distraught, Emily goes to the American consulate in Florence, announces she is Sloane's wife and demands to speak to Sydney. Sydney makes contact with Emily, who tells her she won't be the excuse for Arvin's crimes. She volunteers to help bring Arvin in but demands a guarantee Arvin will not be sentenced to death.
The CIA makes the deal, and Emily returns to Tuscany wearing a wire. Unbeknownst to her, Arvin is making a deal to sell all of his assets, contacts and Rambaldi artifacts to Irina. At the last moment, after an emotional talk with Arvin, Emily switches sides again. She reveals the wire to Arvin and then disconnects it, and they attempt to escape from the CIA. Marcus Dixon, aiming for Sloane, accidentally shoots and kills Emily instead.
Emily's final appearance in the series is in a flashback in the season 4 episode In Dreams..., which is when the existence of Jacquelyn is revealed.
Notable guest stars
Edward Atterton - Dr. Danny Hecht (season 1)
Jonathan Banks - Frederick Brandon (season 2)
Raymond J. Barry - Senator George Reed (season 3)
Patrick Bauchau - Dr. Aldo Desantis (season 5)
Angela Bassett - CIA Director Hayden Chase (season 4)
Tobin Bell - Karl Dreyer (season 1)
Peter Berg - SD-6 Agent Noah Hicks (season 1)
Agnes Bruckner - Kelly McNeil (season 1)
David Carradine - Conrad (seasons 2, 3)
David Cronenberg - Dr. Brezzel (season 3)
Faye Dunaway - Ariana Kane (season 2)
Griffin Dunne - Leonid Lisenker (season 3)
Amanda Foreman - Carrie Bowman (seasons 2, 3, 4, 5)
Vivica A. Fox - Toni Cummings (season 3)
Kurt Fuller - NSC Director Robert Lindsey (season 3)
Ricky Gervais - Daniel Ryan (season 3)
Joel Grey - 'Arvin Clone'/Corporal Ned Bolger (season 4)
John Hannah - Martin Shepard (season 1)
Rutger Hauer - Anthony Geiger (season 2)
James Handy - CIA Director Ben Devlin (seasons 1, 2, 5)
Ethan Hawke - CIA Agent James L. Lennox (season 2)
Ira Heiden - CIA Agent Rick McCarthy (season 2)
Djimon Hounsou - Kazari Bomani (season 3)
Aharon Ipalé - Ineni Hassan (season 1)
James Lesure - Craig Blair (season 2)
Richard Lewis - CIA Agent Mitchell Yeagher (season 2)
Peggy Lipton - Olivia Reed (season 3)
Tracy Middendorf - Elsa Caplan (season 2)
Roger Moore - Edward Poole (season 1)
Wolf Muser - Ramon Veloso (seasons 1, 2)
Ken Olin - David McNeil (season 1)
Evan Parke - Charlie Bernard (season 1)
Richard Roundtree - Thomas Brill (season 3)
Miguel Sandoval - Anthony Russek (season 1)
Angus Scrimm - Calvin McCullough (seasons 1, 2, 4)
Jason Segel - Sam Hauser (season 4)
Sarah Shahi - Jenny (season 1)
Christian Slater - Dr. Neil Caplan (season 2)
Joey Slotnick - Steven Haladki (season 1)
Justin Theroux - Simon Walker (season 3)
Danny Trejo - Emilio Vargas (season 2)
Patricia Wettig - Dr. Judy Barnett (seasons 1, 2, 3)
Rick Yune - Kazu Tamazaki (season 4)
Keone Young - Professor Choy (seasons 1, 5)
William Wellman Jr. - Priest (season 1)
Ravil Isyanov - Luri Karpachev (seasons 1, 2)
Alexander Kuznetsov - Kazimur Scherbakov (season 1) / Assault Team Leader (season 4)
John Aylward - Jeffrey Davenport (seasons 1, 5)
Mark Rolston - Seth Lambert (season 1)
Faran Tahir - Mokhtar (season 1)
Bernard White - Malik Sawari (season 1)
Jeff Chase - Sawari’s bodyguard (season 1) / Large Russian (season 4)
Alex Veadov - K-Directorate officer (season 1) / The Chemist (season 4)
Tom Everett - Paul Kelvin (season 1)
Norbert Weisser - Jeroen Schiller (season 1)
Lori Heuring - Eloise Kurtz (season 1)
Robert Bailey Jr. - Steven Dixon (season 1)
Tristin Mays - Robin Dixon (seasons 1, 3)
Nancy Dussault - Helen Calder (season 1)
Evgeniy Lazarev - Dr. Kreshnik (season 1)
Maurice Godin - Fisher (season 1)
Paul Lieber - Igor Sergei Valenko / Bentley Calder (season 1)
James Hong - Joey (season 1)
Derek Mears - Romanian Orderly (season 1)
Kristof Konrad - Endo (season 1)
Igor Jijikine - Chopper (season 1)
James Lew - Quan Li (season 1)
Michelle Arthur - Abigail (seasons 1, 2)
Lindsay Crouse - Carson Evans (season 1)
Castulo Guerra - Jean Briault (season 1)
Lilyan Chauvin - Signora Ventutti (season 1)
Stephen Spinella - Mr. Kishnell (seasons 1, 3) / Boyd Harkin (season 5)
Tony Amendola - Tambor Barcelo (seasons 1, 2)
Joseph Ruskin - Alain Christophe (seasons 1, 2)
Amy Aquino - Virginia Kerr (season 2)
Pasha D. Lychnikoff - Zoran Sokolov (season 2)
Shishir Kurup - Saeed Akhtar (season 2)
Derek de Lint - Gerard Cuvee (season 2)
Iqbal Theba - General Arshad (season 2)
Courtney Gains - Holden Gemler (season 2)
Olivia d’Abo - Emma Wallace (season 2)
Michael Enright - Antonyn Vasilly (season 2)
Tom Urb - Ilya Shtuka (season 2)
Robert Joy - Dr. Hans Jürgens (season 2)
Oleg Taktarov - Gordei Volkov (season 3)
Michael Berry Jr. - Scott Kingsley (season 3)
Al Sapienza - Tom (season 3)
Ilia Volok - Ushek San'ko (seasons 3, 4)
Bill Bolender - Oleg Madrczyk (season 3)
Mark Ivanir - Boris Oransky (season 3)
Clifton Collins Jr. - Javier Parez (season 3)
Timothy V. Murphy - Avery Russet (season 3)
Erick Avari - Dr. Vasson (season 3)
Pruitt Taylor Vince - Schapker (season 3)
Erica Leerhsen - Kaya (season 3)
Arnold Vosloo - Mr. Zisman (season 3)
Byron Chung - Colonel Yu (season 3)
Francois Chau - Mr. Cho (season 3)
Randall Park - Korean Soldier (season 3)
Stana Katic - Flight Attendant (season 3)
Dmitri Diatchenko - Vilmos (season 3)
Herman Sinitzyn - Petr Berezovsky (season 3)
Geno Silva - Diego Machuca
Vincent Riotta - Dr. Robert Viadro
Morgan Weisser - Cypher
Glenn Morshower - Marlon Bell
Cotter Smith - Hank Foster
Rob Benedict - Brodien (season 4)
Corey Stoll - Sasha Korjev (season 4)
Kevin Alejandro - Cesar Martinez (season 4)
José Zúñiga - Roberto Fox (season 4)
Ulrich Thomsen - Ulrich Kottor (season 4)
Elya Baskin - Dr. Josef Vlachko (season 4)
Robin Sachs - Hans Dietrich (season 4)
Michael K. Williams - Roberts (season 4)
Izabella Scorupco - Sabina (season 4)
Paul Ben-Victor - Carter (season 4)
Vladimir Mashkov - Milos Kradic (season 4)
Nestor Serrano - Thomas Raimes (season 4)
Oz Perkins - Avian (season 4)
John Benjamin Hickey - Father Kampinski (season 4)
Nick Jameson - Lazlo Drake (season 4)
Jeff Yagher - Greyson Wells (season 4)
Andrew Divoff - Lucian Nisard (season 4)
Joel Bissonnette - Keach (season 5)
Angus Macfadyen - Joseph Ehrmann (season 5)
Caroline Goodall - Elizabeth Powell (season 5)
Ntare Mwine - Benjamin Masari (season 5)
Michael Massee - Dr. Gonzalo Burris (season 5)
Oleg Vidov - Laborer (season 5)
Sterling K. Brown - Agent Rance (season 5)
Navid Negahban - Foreman (season 5)
Lists of action television characters
Lists of American drama television series characters |
518155 | https://en.wikipedia.org/wiki/Ken%20Sakamura | Ken Sakamura | Ken Sakamura, as of April 2017, is a Japanese professor and dean of the Faculty of Information Networking for Innovation and Design at Toyo University, Japan. He is a former professor in information science at the University of Tokyo (through March 2017). He is the creator of the real-time operating system (RTOS) architecture TRON.
In 2001, he shared the Takeda Award for Social/Economic Well-Being with Richard Stallman and Linus Torvalds.
Career
As of 2006, Sakamura leads the ubiquitous networking laboratory (UNL), located in Gotanda, Tokyo, and the T-Engine forum for consumer electronics. The joint goal of Sakamura's ubiquitous networking specification and the T-Engine forum is to enable any everyday device to broadcast and receive information. It is essentially a TRON variant, paired with a standard that competes with radio-frequency identification (RFID).
Since the foundation of the T-Engine forum, Sakamura has been working on opening Japanese technology to the world. His prior brainchild, TRON, the universal RTOS used in Japanese consumer electronics, has had limited adoption in other countries. Sakamura has signed deals with Chinese and Korean universities to work together on ubiquitous networking. He has also worked with French software component manufacturer NexWave Solutions, Inc. He is an external board member for Nippon Telegraph and Telephone (NTT), Japan.
Ubiquitous Communicator
The Ubiquitous Communicator (UC) is a mobile computing device designed by Sakamura for use in ubiquitous computing. On 15 September 2004, YRP-UNL announced in Japan that it had begun producing a new model after creating five prototypes over three years. The model was used in trial tests circa late 2004. The new model, weighing about 196 grams, adds several features: an RFID reader compatible with ucode, a two-megapixel charge-coupled device (CCD) camera, a secondary 300,000-pixel camera for videotelephony, support for the wireless technologies Bluetooth, Wi-Fi, and IrDA, a VoIP phone feature, SD and mini-SD memory card slots, and fingerprint authentication and an encryption coprocessor as options. It was expected to sell for about ¥300,000 (roughly US$2,700).
Honors
In May 2015, Sakamura received the prestigious ITU150 award from the International Telecommunication Union (ITU), along with Bill Gates, Robert E. Kahn, Thomas Wiegand, Mark I. Krivocheev, and Martin Cooper. The following is the citation given by ITU:
... Today, the real-time operating systems based on the TRON specifications are used for engine control on automobiles, mobile phones, digital cameras, and many other appliances, and are believed to be among the most popular operating systems for embedded computers around the world. The R&D results from TRON Project are useful for ubiquitous computing. For example, UNL joined the standardization efforts at ITU-T and helped produce a series of Recommendations, including H.642 “Multimedia information access triggered by tag-based identification”. The idea behind the H.642 series is based on the de facto “ucode” standard developed by UNL for communication in the age of the Internet of Things ... For his achievements, Sakamura has won many awards: Takeda Award, the Medal with Purple Ribbon from the Japanese government, Okawa Prize, Prime Minister Award, and Japan Academy Prize. He is a fellow and the golden core member of the IEEE Computer Society.
External links
Biography
ITU150 Awards
YRP-UNL release a ubiquitous-computing device
Japanese educators
Japanese computer scientists
Keio University alumni
TRON project
University of Tokyo faculty
Living people
Recipients of the Medal with Purple Ribbon
Ubiquitous computing
Videotelephony
1951 births |
519460 | https://en.wikipedia.org/wiki/List%20of%20archive%20formats | List of archive formats | This is a list of file formats used by archivers and compressors to create archive files.
Archiving only
Compression only
Archiving and compression
Data recovery
Comparison
Containers and compression
Notes
While the original tar format uses the ASCII character encoding, current implementations use the UTF-8 (Unicode) encoding, which is backwards compatible with ASCII.
Supports the external Parchive program (par2).
From release 3.20 onward, RAR can store modification, creation, and last-access times with precision up to 0.0000001 second (= 0.1 µs).
The PAQ family (with its lighter weight derivative LPAQ) went through many revisions, each revision suggested its own extension. For example: ".paq9a".
WIM can store the ciphertext of encrypted files on an NTFS volume, but such files can only be decrypted if an administrator extracts the file to an NTFS volume and the decryption key is available (typically from the file's original owner on the same Windows installation). Microsoft has also distributed some download versions of the Windows operating system as encrypted WIM files, but via an external encryption process and not a feature of WIM.
Purpose: Archive formats are used for backups, mobility, and archiving. Many archive formats compress the data to consume less storage space and result in quicker transfer times as the same data is represented by fewer bytes. Another benefit is that files are combined into one archive file which has less overhead for managing or transferring. There are numerous compression algorithms available to losslessly compress archived data and some algorithms work better (smaller archive or faster compression) with particular data types. Archive formats are also used by most operating systems to package software for easier distribution and installation than binary executables.
Filename extension: The DOS and Windows operating systems required filenames to include an extension (of at least one, and typically 3 characters) to identify the file type and use. Filename extensions must be unique for each type of file. Many operating systems identify a file's type from its contents without the need for an extension in its name. However, the use of three-character extensions has been embraced as a useful and efficient shorthand for identifying file types.
Integrity check: Archive files are often stored on magnetic media, which is subject to data storage errors. Early tape media had a higher rate of errors than they do today. Many archive formats contain extra error-correction information to detect storage or transmission errors, and the software used to read the archive files contains logic to detect and possibly correct errors.
Recovery record: Many archive formats contain redundant data embedded in the files in order to detect data storage or transmission errors, and the software used to read the archive files contains logic to detect and correct errors.
Encryption: In order to protect the data being stored or transferred from being read if intercepted, many archive formats include the capability to encrypt the data. There are multiple mathematical algorithms available to encrypt data.
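Several of the properties above (archiving, compression, and integrity checking) can be combined with standard tooling. The following Python sketch is illustrative only: the path names are hypothetical, and the external SHA-256 checksum stands in for the integrity records that many formats embed internally.

```python
# Minimal sketch: pack a directory into a compressed tar archive and record
# a checksum for a later integrity check. Path names are illustrative.
import hashlib
import tarfile

def make_archive(src_dir: str, archive_path: str) -> str:
    """Create a gzip-compressed tar archive and return its SHA-256 digest."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(src_dir, arcname=".")
    sha256 = hashlib.sha256()
    with open(archive_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # hash in 64 KiB chunks
            sha256.update(chunk)
    return sha256.hexdigest()

if __name__ == "__main__":
    digest = make_archive("my_data", "my_data.tar.gz")  # hypothetical paths
    print("archive written; sha256 =", digest)
```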
Software packaging and distribution
Notes
Not to be confused with the archiver JAR written by Robert K. Jung, which produces ".j" files.
Features
See also
Archive file
Comparison of file archivers
Comparison of file systems
List of file systems
Solid compression
zlib
Notes
Compression is not a built-in feature of these formats; however, the resulting archive can be compressed with any algorithm of choice. Several implementations include functionality to do this automatically.
Most implementations can optionally produce a self-extracting executable
Per-file compression with gzip, bzip2, lzo, xz, or lzma (as opposed to compressing the whole archive). Files that are already compressed can be excluded from compression based on their filename suffix.
Lists of file formats |
520066 | https://en.wikipedia.org/wiki/History%20of%20cryptography | History of cryptography | Cryptography, the use of codes and ciphers to protect secrets, began thousands of years ago. Until recent decades, it has been the story of what might be called classic cryptography — that is, of methods of encryption that use pen and paper, or perhaps simple mechanical aids. In the early 20th century, the invention of complex mechanical and electromechanical machines, such as the Enigma rotor machine, provided more sophisticated and efficient means of encryption; and the subsequent introduction of electronics and computing has allowed elaborate schemes of still greater complexity, most of which are entirely unsuited to pen and paper.
The development of cryptography has been paralleled by the development of cryptanalysis — the "breaking" of codes and ciphers. The discovery and application, early on, of frequency analysis to the reading of encrypted communications has, on occasion, altered the course of history. Thus the Zimmermann Telegram triggered the United States' entry into World War I; and the Allies' reading of Nazi Germany's ciphers shortened World War II, in some evaluations by as much as two years.
Until the 1960s, secure cryptography was largely the preserve of governments. Two events have since brought it squarely into the public domain: the creation of a public encryption standard (DES), and the invention of public-key cryptography.
Antiquity
The earliest known use of cryptography is found in non-standard hieroglyphs carved into the wall of a tomb from the Old Kingdom of Egypt circa 1900 BC. These are not thought to be serious attempts at secret communications, however, but rather to have been attempts at mystery, intrigue, or even amusement for literate onlookers.
Some clay tablets from Mesopotamia somewhat later are clearly meant to protect information—one dated near 1500 BC was found to encrypt a craftsman's recipe for pottery glaze, presumably commercially valuable. Furthermore, Hebrew scholars made use of simple monoalphabetic substitution ciphers (such as the Atbash cipher) beginning perhaps around 600 to 500 BC.
In India around 400 BC to 200 AD, Mlecchita vikalpa or "the art of understanding writing in cypher, and the writing of words in a peculiar way" was documented in the Kama Sutra for the purpose of communication between lovers. This was also likely a simple substitution cipher. Parts of the Egyptian demotic Greek Magical Papyri were written in a cypher script.
The ancient Greeks are said to have known of ciphers. The scytale transposition cipher was used by the Spartan military, but it is not definitively known whether the scytale was for encryption, authentication, or avoiding bad omens in speech. Herodotus tells us of secret messages physically concealed beneath wax on wooden tablets or as a tattoo on a slave's head concealed by regrown hair, although these are not properly examples of cryptography per se as the message, once known, is directly readable; this is known as steganography. Another Greek method was developed by Polybius (now called the "Polybius Square"). The Romans knew something of cryptography (e.g., the Caesar cipher and its variations).
Medieval cryptography
David Kahn notes in The Codebreakers that modern cryptology originated among the Arabs, the first people to systematically document cryptanalytic methods. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.
The invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers, by Al-Kindi, an Arab mathematician, sometime around AD 800, proved to be the single most significant cryptanalytic advance until World War II. Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering Cryptographic Messages), in which he described the first cryptanalytic techniques, including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and most importantly, gave the first descriptions on frequency analysis. He also covered methods of encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis.
In early medieval England between the years 800–1100, substitution ciphers were frequently used by scribes as a playful and clever way to encipher notes, solutions to riddles, and colophons. The ciphers tend to be fairly straightforward, but sometimes they deviate from an ordinary pattern, adding to their complexity, and possibly also to their sophistication. This period saw vital and significant cryptographic experimentation in the West.
Ahmad al-Qalqashandi (AD 1355–1418) wrote the Subh al-a 'sha, a 14-volume encyclopedia which included a section on cryptology. This information was attributed to Ibn al-Durayhim who lived from AD 1312 to 1361, but whose writings on cryptography have been lost. The list of ciphers in this work included both substitution and transposition, and for the first time, a cipher with multiple substitutions for each plaintext letter (later called homophonic substitution). Also traced to Ibn al-Durayhim is an exposition on and a worked example of cryptanalysis, including the use of tables of letter frequencies and sets of letters which cannot occur together in one word.
The earliest example of the homophonic substitution cipher is the one used by Duke of Mantua in the early 1400s. Homophonic cipher replaces each letter with multiple symbols depending on the letter frequency. The cipher is ahead of the time because it combines monoalphabetic and polyalphabetic features.
Essentially all ciphers remained vulnerable to the cryptanalytic technique of frequency analysis until the development of the polyalphabetic cipher, and many remained so thereafter. The polyalphabetic cipher was most clearly explained by Leon Battista Alberti around AD 1467, for which he was called the "father of Western cryptology". Johannes Trithemius, in his work Polygraphia, invented the tabula recta, a critical component of the Vigenère cipher. Trithemius also wrote the Steganographia. The French cryptographer Blaise de Vigenère devised a practical polyalphabetic system which bears his name, the Vigenère cipher.
In Europe, cryptography became (secretly) more important as a consequence of political competition and religious revolution. For instance, in Europe during and after the Renaissance, citizens of the various Italian states—the Papal States and the Roman Catholic Church included—were responsible for rapid proliferation of cryptographic techniques, few of which reflect understanding (or even knowledge) of Alberti's polyalphabetic advance. "Advanced ciphers", even after Alberti, were not as advanced as their inventors / developers / users claimed (and probably even they themselves believed). They were frequently broken. This over-optimism may be inherent in cryptography, for it was then – and remains today – difficult in principle to know how vulnerable one's own system is. In the absence of knowledge, guesses and hopes are predictably common.
Cryptography, cryptanalysis, and secret-agent/courier betrayal featured in the Babington plot during the reign of Queen Elizabeth I which led to the execution of Mary, Queen of Scots. Robert Hooke suggested in the chapter Of Dr. Dee's Book of Spirits, that John Dee made use of Trithemian steganography, to conceal his communication with Queen Elizabeth I.
The chief cryptographer of King Louis XIV of France was Antoine Rossignol; he and his family created what is known as the Great Cipher because it remained unsolved from its initial use until 1890, when French military cryptanalyst, Étienne Bazeries solved it. An encrypted message from the time of the Man in the Iron Mask (decrypted just prior to 1900 by Étienne Bazeries) has shed some, regrettably non-definitive, light on the identity of that real, if legendary and unfortunate, prisoner.
Outside of Europe, after the Mongols brought about the end of the Islamic Golden Age, cryptography remained comparatively undeveloped. Cryptography in Japan seems not to have been used until about 1510, and advanced techniques were not known until after the opening of the country to the West beginning in the 1860s.
Cryptography from 1800 to World War I
Although cryptography has a long and complex history, it wasn't until the 19th century that cryptography developed anything more than ad hoc approaches to either encryption or cryptanalysis (the science of finding weaknesses in crypto systems). Examples of the latter include Charles Babbage's Crimean War era work on mathematical cryptanalysis of polyalphabetic ciphers, redeveloped and published somewhat later by the Prussian Friedrich Kasiski. Understanding of cryptography at this time typically consisted of hard-won rules of thumb; see, for example, Auguste Kerckhoffs' cryptographic writings in the latter 19th century. Edgar Allan Poe used systematic methods to solve ciphers in the 1840s. In particular, he placed a notice of his abilities in the Philadelphia paper Alexander's Weekly (Express) Messenger, inviting submissions of ciphers, of which he proceeded to solve almost all. His success created a public stir for some months. He later wrote an essay on methods of cryptography which proved useful as an introduction for novice British cryptanalysts attempting to break German codes and ciphers during World War I, and a famous story, The Gold-Bug, in which cryptanalysis was a prominent element.
Cryptography, and its misuse, were involved in the execution of Mata Hari and in Dreyfus' conviction and imprisonment, both in the early 20th century. Cryptographers were also involved in exposing the machinations which had led to the Dreyfus affair; Mata Hari, in contrast, was shot.
In World War I the Admiralty's Room 40 broke German naval codes and played an important role in several naval engagements during the war, notably in detecting major German sorties into the North Sea that led to the battles of Dogger Bank and Jutland as the British fleet was sent out to intercept them. However its most important contribution was probably in decrypting the Zimmermann Telegram, a cable from the German Foreign Office sent via Washington to its ambassador Heinrich von Eckardt in Mexico which played a major part in bringing the United States into the war.
In 1917, Gilbert Vernam proposed a teleprinter cipher in which a previously prepared key, kept on paper tape, is combined character by character with the plaintext message to produce the cyphertext. This led to the development of electromechanical devices as cipher machines, and to the only unbreakable cipher, the one time pad.
During the 1920s, Polish naval officers assisted the Japanese military with code and cipher development.
Mathematical methods proliferated in the period prior to World War II (notably in William F. Friedman's application of statistical techniques to cryptanalysis and cipher development and in Marian Rejewski's initial break into the German Army's version of the Enigma system in 1932).
World War II cryptography
By World War II, mechanical and electromechanical cipher machines were in wide use, although—where such machines were impractical—code books and manual systems continued in use. Great advances were made in both cipher design and cryptanalysis, all in secrecy. Information about this period has begun to be declassified as the official British 50-year secrecy period has come to an end, as US archives have slowly opened, and as assorted memoirs and articles have appeared.
Germany
The Germans made heavy use, in several variants, of an electromechanical rotor machine known as Enigma. Mathematician Marian Rejewski, at Poland's Cipher Bureau, in December 1932 deduced the detailed structure of the German Army Enigma, using mathematics and limited documentation supplied by Captain Gustave Bertrand of French military intelligence acquired from a German clerk. This was the greatest breakthrough in cryptanalysis in a thousand years and more, according to historian David Kahn. Rejewski and his mathematical Cipher Bureau colleagues, Jerzy Różycki and Henryk Zygalski, continued reading Enigma and keeping pace with the evolution of the German Army machine's components and encipherment procedures for some time. As the Poles' resources became strained by the changes being introduced by the Germans, and as war loomed, the Cipher Bureau, on the Polish General Staff's instructions, on 25 July 1939, at Warsaw, initiated French and British intelligence representatives into the secrets of Enigma decryption.
Soon after the invasion of Poland by Germany on 1 September 1939, key Cipher Bureau personnel were evacuated southeastward; on 17 September, as the Soviet Union attacked Poland from the East, they crossed into Romania. From there they reached Paris, France; at PC Bruno, near Paris, they continued working toward breaking Enigma, collaborating with British cryptologists at Bletchley Park as the British got up to speed on their work breaking Enigma. In due course, the British cryptographers, whose ranks included many chess masters and mathematics dons such as Gordon Welchman, Max Newman, and Alan Turing (the conceptual founder of modern computing), made substantial breakthroughs in the scale and technology of Enigma decryption.
German code breaking in World War II also had some success, most importantly by breaking the Naval Cipher No. 3. This enabled them to track and sink Atlantic convoys. It was only Ultra intelligence that finally persuaded the admiralty to change their codes in June 1943. This is surprising given the success of the British Room 40 code breakers in the previous world war.
At the end of the War, on 19 April 1945, Britain's highest level civilian and military officials were told that they could never reveal that the German Enigma cipher had been broken because it would give the defeated enemy the chance to say they "were not well and fairly beaten".
The German military also deployed several teleprinter stream ciphers. Bletchley Park called them the Fish ciphers; Max Newman and colleagues designed and deployed the Heath Robinson, and then the world's first programmable digital electronic computer, the Colossus, to help with their cryptanalysis. The German Foreign Office began to use the one-time pad in 1919; some of this traffic was read in World War II partly as the result of recovery of some key material in South America that was discarded without sufficient care by a German courier.
The Schlüsselgerät 41 was developed late in the war as a more secure replacement for Enigma, but only saw limited use.
Japan
A US Army group, the SIS, managed to break the highest security Japanese diplomatic cipher system (an electromechanical stepping switch machine called Purple by the Americans) in 1940, before the attack on Pearl Harbor. The locally developed Purple machine replaced the earlier "Red" machine used by the Japanese Foreign Ministry and a related machine, the M-1, used by naval attachés, which was broken by the U.S. Navy's Agnes Driscoll. All the Japanese machine ciphers were broken, to one degree or another, by the Allies.
The Japanese Navy and Army largely used code book systems, later with a separate numerical additive. US Navy cryptographers (with cooperation from British and Dutch cryptographers after 1940) broke into several Japanese Navy crypto systems. The break into one of them, JN-25, famously led to the US victory in the Battle of Midway; and to the publication of that fact in the Chicago Tribune shortly after the battle, though the Japanese seem not to have noticed for they kept using the JN-25 system.
Allies
The Americans referred to the intelligence resulting from cryptanalysis, perhaps especially that from the Purple machine, as 'Magic'. The British eventually settled on 'Ultra' for intelligence resulting from cryptanalysis, particularly that from message traffic protected by the various Enigmas. An earlier British term for Ultra had been 'Boniface' in an attempt to suggest, if betrayed, that it might have an individual agent as a source.
Allied cipher machines used in World War II included the British TypeX and the American SIGABA; both were electromechanical rotor designs similar in spirit to the Enigma, albeit with major improvements. Neither is known to have been broken by anyone during the War. The Poles used the Lacida machine, but its security was found to be less than intended (by Polish Army cryptographers in the UK), and its use was discontinued. US troops in the field used the M-209 and the still less secure M-94 family machines. British SOE agents initially used 'poem ciphers' (memorized poems were the encryption/decryption keys), but later in the War, they began to switch to one-time pads.
The VIC cipher (used at least until 1957 in connection with Rudolf Abel's NY spy ring) was a very complex hand cipher, and is claimed to be the most complicated known to have been used by the Soviets, according to David Kahn in Kahn on Codes. For the decrypting of Soviet ciphers (particularly when one-time pads were reused), see Venona project.
Role of women
The UK and US employed large numbers of women in their code-breaking operations, with close to 7,000 reporting to Bletchley Park and 11,000 to the separate US Army and Navy operations around Washington, DC. By tradition in Japan and Nazi doctrine in Germany, women were excluded from war work, at least until late in the war. Even after encryption systems were broken, large amounts of work were needed to respond to changes made, recover daily key settings for multiple networks, and intercept, process, translate, prioritize and analyze the huge volume of enemy messages generated in a global conflict. A few women, including Elizabeth Friedman and Agnes Meyer Driscoll, had been major contributors to US code-breaking in the 1930s and the Navy and Army began actively recruiting top graduates of women's colleges shortly before the attack on Pearl Harbor. Liza Mundy argues that this disparity in utilizing the talents of women between the Allies and Axis made a strategic difference in the war.
Modern cryptography
Encryption in modern times is achieved by using algorithms that have a key to encrypt and decrypt information. These keys convert the messages and data into "digital gibberish" through encryption and then return them to the original form through decryption. In general, the longer the key is, the more difficult it is to crack the code. This holds true because deciphering an encrypted message by brute force would require the attacker to try every possible key. To put this in context, each binary unit of information, or bit, has a value of 0 or 1. An 8-bit key would then have 256 or 2^8 possible keys. A 56-bit key would have 2^56, or 72 quadrillion, possible keys to try and decipher the message. With modern technology, cyphers using keys with these lengths are becoming easier to decipher. DES, an early US Government approved cypher, has an effective key length of 56 bits, and test messages using that cypher have been broken by brute force key search. However, as technology advances, so does the quality of encryption. Since World War II, one of the most notable advances in the study of cryptography is the introduction of the asymmetric key cyphers (sometimes termed public-key cyphers). These are algorithms which use two mathematically related keys for encryption of the same message. Some of these algorithms permit publication of one of the keys, due to it being extremely difficult to determine one key simply from knowledge of the other.
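The keyspace arithmetic above is easy to reproduce. The short script below assumes an arbitrary rate of one billion guesses per second, a figure chosen purely for illustration:

```python
# Back-of-the-envelope brute-force arithmetic for the key sizes above.
GUESSES_PER_SECOND = 10**9  # assumed attacker speed, for illustration only

for bits in (8, 56, 128):
    keys = 2**bits
    avg_seconds = keys / 2 / GUESSES_PER_SECOND  # success expected halfway through
    years = avg_seconds / (3600 * 24 * 365)
    print(f"{bits:3d}-bit key: {keys:.3e} keys, ~{years:.3e} years on average")
```

Running it shows 2^56 is roughly 7.2e16 (the "72 quadrillion" in the text), which is why 56-bit keys are within reach of determined attackers while 128-bit keys are not.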
Beginning around 1990, the use of the Internet for commercial purposes and the introduction of commercial transactions over the Internet called for a widespread standard for encryption. Before the introduction of the Advanced Encryption Standard (AES), information sent over the Internet, such as financial data, was encrypted, if at all, most commonly using the Data Encryption Standard (DES). This had been approved by NBS (a US Government agency) for its security, after public call for, and a competition among, candidates for such a cypher algorithm. DES was approved for a short period, but saw extended use due to complex wrangles over the use by the public of high quality encryption. DES was finally replaced by the AES after another public competition organized by the NBS successor agency, NIST. Around the late 1990s to early 2000s, the use of public-key algorithms became a more common approach for encryption, and soon a hybrid of the two schemes became the most accepted way for e-commerce operations to proceed. Additionally, the creation of a new protocol known as the Secure Sockets Layer, or SSL, led the way for online transactions to take place. Transactions ranging from purchasing goods to online bill pay and banking used SSL. Furthermore, as wireless Internet connections became more common among households, the need for encryption grew, as a level of security was needed in these everyday situations.
Claude Shannon
Claude E. Shannon is considered by many to be the father of mathematical cryptography. Shannon worked for several years at Bell Labs, and during his time there, he produced an article entitled "A mathematical theory of cryptography". This article was written in 1945 and eventually was published in the Bell System Technical Journal in 1949. It is commonly accepted that this paper was the starting point for development of modern cryptography. Shannon was inspired during the war to address "[t]he problems of cryptography [because] secrecy systems furnish an interesting application of communication theory". Shannon identified the two main goals of cryptography: secrecy and authenticity. His focus was on exploring secrecy and thirty-five years later, G.J. Simmons would address the issue of authenticity. Shannon wrote a further article entitled "A mathematical theory of communication" which highlights one of the most significant aspects of his work: cryptography's transition from art to science.
In his works, Shannon described the two basic types of systems for secrecy. The first are those designed with the intent to protect against hackers and attackers who have infinite resources with which to decode a message (theoretical secrecy, now unconditional security), and the second are those designed to protect against hackers and attacks with finite resources with which to decode a message (practical secrecy, now computational security). Most of Shannon's work focused around theoretical secrecy; here, Shannon introduced a definition for the "unbreakability" of a cipher. If a cipher was determined "unbreakable", it was considered to have "perfect secrecy". In proving "perfect secrecy", Shannon determined that this could only be obtained with a secret key whose length given in binary digits was greater than or equal to the number of bits contained in the information being encrypted. Furthermore, Shannon developed the "unicity distance", defined as the "amount of plaintext that… determines the secret key."
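Shannon's condition for perfect secrecy is exactly what the one-time pad satisfies. A toy sketch (the message text is arbitrary) shows the key being drawn uniformly at random and at least as long as the plaintext:

```python
# Toy one-time pad: the key is as long as the message, used once, and the
# ciphertext alone gives an eavesdropper no information about the plaintext.
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "perfect secrecy needs key length >= message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))     # uniformly random, never reused
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message  # XOR is its own inverse
```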
Shannon's work influenced further cryptography research in the 1970s, as the public-key cryptography developers, M. E. Hellman and W. Diffie cited Shannon's research as a major influence. His work also impacted modern designs of secret-key ciphers. At the end of Shannon's work with cryptography, progress slowed until Hellman and Diffie introduced their paper involving "public-key cryptography".
An encryption standard
The mid-1970s saw two major public (i.e., non-secret) advances. First was the publication of the draft Data Encryption Standard in the U.S. Federal Register on 17 March 1975. The proposed DES cipher was submitted by a research group at IBM, at the invitation of the National Bureau of Standards (now NIST), in an effort to develop secure electronic communication facilities for businesses such as banks and other large financial organizations. After advice and modification by the NSA, acting behind the scenes, it was adopted and published as a Federal Information Processing Standard Publication in 1977 (currently at FIPS 46-3). DES was the first publicly accessible cipher to be 'blessed' by a national agency such as the NSA. The release of its specification by NBS stimulated an explosion of public and academic interest in cryptography.
The aging DES was officially replaced by the Advanced Encryption Standard (AES) in 2001 when NIST announced FIPS 197. After an open competition, NIST selected Rijndael, submitted by two Belgian cryptographers, to be the AES. DES, and more secure variants of it (such as Triple DES), are still used today, having been incorporated into many national and organizational standards. However, its 56-bit key-size has been shown to be insufficient to guard against brute force attacks (one such attack, undertaken by the cyber civil-rights group Electronic Frontier Foundation in 1998, succeeded in 56 hours.) As a result, use of straight DES encryption is now without doubt insecure for use in new cryptosystem designs, and messages protected by older cryptosystems using DES, and indeed all messages sent since 1976 using DES, are also at risk. Regardless of DES' inherent quality, the DES key size (56-bits) was thought to be too small by some even in 1976, perhaps most publicly by Whitfield Diffie. There was suspicion that government organizations even then had sufficient computing power to break DES messages; clearly others have achieved this capability.
Public key
The second development, in 1976, was perhaps even more important, for it fundamentally changed the way cryptosystems might work. This was the publication of the paper New Directions in Cryptography by Whitfield Diffie and Martin Hellman. It introduced a radically new method of distributing cryptographic keys, which went far toward solving one of the fundamental problems of cryptography, key distribution, and has become known as Diffie–Hellman key exchange. The article also stimulated the almost immediate public development of a new class of enciphering algorithms, the asymmetric key algorithms.
Prior to that time, all useful modern encryption algorithms had been symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. All of the electromechanical machines used in World War II were of this logical class, as were the Caesar and Atbash ciphers and essentially all cipher systems throughout history. The 'key' for a code is, of course, the codebook, which must likewise be distributed and kept secret, and so shares most of the same problems in practice.
Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system (the term usually used is 'via a secure channel') such as a trustworthy courier with a briefcase handcuffed to a wrist, or face-to-face contact, or a loyal carrier pigeon. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels aren't available for key exchange, or when, as is sensible cryptographic practice, keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users. A system of this kind is known as a secret key, or symmetric key cryptosystem. D-H key exchange (and succeeding improvements and variants) made operation of these systems much easier, and more secure, than had ever been possible before in all of history.
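The mechanics of Diffie–Hellman key exchange can be shown with toy-sized numbers. In the sketch below, the Mersenne prime 2^127 - 1 and the generator 3 are arbitrary illustrative choices, far too small for real use:

```python
# Toy Diffie-Hellman key exchange. Both parties end up with g^(ab) mod p
# without the exponents a or b ever crossing the insecure channel.
import secrets

p = 2**127 - 1   # a prime (Mersenne), toy-sized by modern standards
g = 3            # arbitrary generator choice for illustration

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)   # Alice -> Bob, sent in the clear
B = pow(g, b, p)   # Bob -> Alice, sent in the clear

assert pow(B, a, p) == pow(A, b, p)   # identical shared secret on both sides
```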
In contrast, asymmetric key encryption uses a pair of mathematically related keys, each of which decrypts the encryption performed using the other. Some, but not all, of these algorithms have the additional property that one of the paired keys cannot be deduced from the other by any known method other than trial and error. An algorithm of this kind is known as a public key or asymmetric key system. Using such an algorithm, only one key pair is needed per user. By designating one key of the pair as private (always secret), and the other as public (often widely available), no secure channel is needed for key exchange. So long as the private key stays secret, the public key can be widely known for a very long time without compromising security, making it safe to reuse the same key pair indefinitely.
For two users of an asymmetric key algorithm to communicate securely over an insecure channel, each user will need to know their own public and private keys as well as the other user's public key. Take this basic scenario: Alice and Bob each have a pair of keys they've been using for years with many other users. At the start of their message, they exchange public keys, unencrypted over an insecure line. Alice then encrypts a message using her private key, and then re-encrypts that result using Bob's public key. The double-encrypted message is then sent as digital data over a wire from Alice to Bob. Bob receives the bit stream and decrypts it using his own private key, and then decrypts that bit stream using Alice's public key. If the final result is recognizable as a message, Bob can be confident that the message actually came from someone who knows Alice's private key (presumably actually her if she's been careful with her private key), and that anyone eavesdropping on the channel will need Bob's private key in order to understand the message.
Asymmetric algorithms rely for their effectiveness on a class of problems in mathematics called one-way functions, which require relatively little computational power to execute, but vast amounts of power to reverse, if reversal is possible at all. A classic example of a one-way function is multiplication of very large prime numbers. It's fairly quick to multiply two large primes, but very difficult to find the factors of the product of two large primes. Because of the mathematics of one-way functions, most possible keys are bad choices as cryptographic keys; only a small fraction of the possible keys of a given length are suitable, and so asymmetric algorithms require very long keys to reach the same level of security provided by relatively shorter symmetric keys. The need to both generate the key pairs, and perform the encryption/decryption operations make asymmetric algorithms computationally expensive, compared to most symmetric algorithms. Since symmetric algorithms can often use any sequence of (random, or at least unpredictable) bits as a key, a disposable session key can be quickly generated for short-term use. Consequently, it is common practice to use a long asymmetric key to exchange a disposable, much shorter (but just as strong) symmetric key. The slower asymmetric algorithm securely sends a symmetric session key, and the faster symmetric algorithm takes over for the remainder of the message.
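The asymmetry of the multiplication example can be felt even at toy scale. In the sketch below, the two primes are arbitrary six-digit choices; naive trial division already needs about half a million steps to undo a single multiplication, and the gap becomes astronomical at real key sizes:

```python
# The factoring asymmetry in miniature: multiplying two primes is one
# operation, while recovering them from the product by trial division grinds.
def trial_factor(n: int) -> int:
    """Smallest odd prime factor of an odd composite n, by brute force."""
    f = 3
    while n % f:
        f += 2
    return f

p, q = 1_000_003, 1_000_033   # two toy-sized primes
n = p * q                     # easy direction: a single multiplication
assert trial_factor(n) == p   # hard direction: ~500,000 divisions here,
                              # and utterly infeasible at 1024+ bits
```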
Asymmetric key cryptography, Diffie–Hellman key exchange, and the best known of the public key / private key algorithms (i.e., what is usually called the RSA algorithm), all seem to have been independently developed at a UK intelligence agency before the public announcement by Diffie and Hellman in 1976. GCHQ has released documents claiming they had developed public key cryptography before the publication of Diffie and Hellman's paper. Various classified papers were written at GCHQ during the 1960s and 1970s which eventually led to schemes essentially identical to RSA encryption and to Diffie–Hellman key exchange in 1973 and 1974. Some of these have now been published, and the inventors (James H. Ellis, Clifford Cocks, and Malcolm Williamson) have made public (some of) their work.
Hashing
Hashing is a common technique used in cryptography to map data of arbitrary size to a fixed-size value quickly. Generally, an algorithm is applied to a string of text, and the resulting string becomes the "hash value". This creates a "digital fingerprint" of the message, as the specific hash value is used to identify a specific message. The output from the algorithm is also referred to as a "message digest" or a "checksum". Hashing is good for determining if information has been changed in transmission: if the hash value computed on reception differs from the one computed before sending, there is evidence the message has been altered. Once the algorithm has been applied to the data to be hashed, the hash function produces a fixed-length output; anything passed through the hash function resolves to an output of the same length as anything else passed through the same function. It is important to note that hashing is not the same as encrypting. Hashing is a one-way operation that transforms data into a compressed message digest, and it can be used to check the integrity of a message. Encryption, by contrast, is a two-way operation that transforms plaintext into ciphertext and back again, and it protects the confidentiality of a message.
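A minimal sketch of these properties uses the well-known FNV-1a hash; it is not collision-resistant, so it stands in here only for the fixed-length "fingerprint" idea, where cryptographic use would require a function such as SHA-256:

#include <stdio.h>
#include <stdint.h>

/* 64-bit FNV-1a: a simple non-cryptographic hash. */
static uint64_t fnv1a(const char *s) {
    uint64_t h = 0xcbf29ce484222325ULL;      /* offset basis */
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 0x100000001b3ULL;               /* FNV prime */
    }
    return h;
}

int main(void) {
    /* Any input, short or long, produces a 64-bit digest; changing a single
       character changes the digest, flagging alteration in transit. */
    printf("%016llx\n", (unsigned long long)fnv1a("pay Bob 100 euros"));
    printf("%016llx\n", (unsigned long long)fnv1a("pay Bob 900 euros"));
    return 0;
}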
Hash functions can be used to verify digital signatures: when signing documents via the Internet, the signature is computed over the hash of the document, binding that particular document to the signer's key much as a hand-written signature binds a document to a person. Furthermore, hashing is applied to passwords for computer systems. Hashing for passwords began with the UNIX operating system: a user on the system would first create a password, that password would be hashed and then stored in a password file, and the plaintext was never kept. This is still prominent today, as web applications that require passwords will often hash users' passwords and store them in a database.
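A minimal sketch of the storage scheme follows, reusing the FNV-1a toy hash from above; the salt and password shown are illustrative assumptions, and production systems use deliberately slow, salted functions such as bcrypt, scrypt, or Argon2 rather than a fast hash:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint64_t fnv1a(const char *s) {
    uint64_t h = 0xcbf29ce484222325ULL;
    while (*s) { h ^= (unsigned char)*s++; h *= 0x100000001b3ULL; }
    return h;
}

/* Store salt and hash(salt || password); the plaintext password is never kept. */
struct record { char salt[9]; uint64_t hash; };

static uint64_t salted_hash(const char *salt, const char *password) {
    char buf[128];
    snprintf(buf, sizeof buf, "%s%s", salt, password);
    return fnv1a(buf);
}

int main(void) {
    struct record r = { "a19x0f3q", 0 };   /* the salt would be random in practice */
    r.hash = salted_hash(r.salt, "correct horse");

    const char *attempt = "correct horse";  /* a login attempt to verify */
    printf("login %s\n",
           salted_hash(r.salt, attempt) == r.hash ? "accepted" : "rejected");
    return 0;
}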
Cryptography politics
The public developments of the 1970s broke the near monopoly on high quality cryptography held by government organizations (see S Levy's Crypto for a journalistic account of some of the policy controversy of the time in the US). For the first time ever, those outside government organizations had access to cryptography not readily breakable by anyone (including governments). Considerable controversy, and conflict, both public and private, began more or less immediately, sometimes called the crypto wars. They have not yet subsided. In many countries, for example, export of cryptography is subject to restrictions. Until 1996 export from the U.S. of cryptography using keys longer than 40 bits (too small to be very secure against a knowledgeable attacker) was sharply limited. As recently as 2004, former FBI Director Louis Freeh, testifying before the 9/11 Commission, called for new laws against public use of encryption.
One of the most significant people favoring strong encryption for public use was Phil Zimmermann. He wrote and then in 1991 released PGP (Pretty Good Privacy), a very high quality crypto system. He distributed a freeware version of PGP when he felt threatened by legislation then under consideration by the US Government that would require backdoors to be included in all cryptographic products developed within the US. PGP spread worldwide shortly after its US release, and that began a long criminal investigation of Zimmermann by the US Government Justice Department for the alleged violation of export restrictions. The Justice Department eventually dropped its case against Zimmermann, and the freeware distribution of PGP has continued around the world. PGP eventually became an open Internet standard (RFC 2440 or OpenPGP).
Modern cryptanalysis
While modern ciphers like AES and the higher quality asymmetric ciphers are widely considered unbreakable, poor designs and implementations are still sometimes adopted and there have been important cryptanalytic breaks of deployed crypto systems in recent years. Notable examples of broken crypto designs include the first Wi-Fi encryption scheme WEP, the Content Scrambling System used for encrypting and controlling DVD use, the A5/1 and A5/2 ciphers used in GSM cell phones, and the CRYPTO1 cipher used in the widely deployed MIFARE Classic smart cards from NXP Semiconductors, a spun-off division of Philips Electronics. All of these are symmetric ciphers. Thus far, not one of the mathematical ideas underlying public key cryptography has been proven to be 'unbreakable', and so some future mathematical analysis advance might render systems relying on them insecure. While few informed observers foresee such a breakthrough, the key size recommended for security as best practice keeps increasing as the computing power required for breaking codes becomes cheaper and more available. Quantum computers, if ever constructed with enough capacity, could break existing public key algorithms, and efforts are underway to develop and standardize post-quantum cryptography.
Even without breaking encryption in the traditional sense, side-channel attacks can be mounted that exploit information gained from the way a computer system is implemented, such as cache memory usage, timing information, power consumption, electromagnetic leaks or even sounds emitted. Newer cryptographic algorithms are being developed that make such attacks more difficult.
See also
NSA encryption systems
Steganography
Timeline of cryptography
Topics in cryptography
Japanese cryptology from the 1500s to Meiji
World War I cryptography
World War II cryptography
List of cryptographers
:Category:Undeciphered historical codes and ciphers
References
External links
Helger Lipmaa's cryptography pointers
Timeline of cipher machines
Classical cryptography
Military communications
History of telecommunications |
520254 | https://en.wikipedia.org/wiki/Microsoft%20ergonomic%20keyboards | Microsoft ergonomic keyboards | Microsoft has designed and sold a variety of ergonomic keyboards for computers. The oldest is the Microsoft Natural Keyboard, released in 1994, the company's first computer keyboard. The newest models are the Sculpt ergonomic keyboard (2013), the Surface Ergonomic Keyboard (2016) and the Microsoft Ergonomic Keyboard (2019).
Models
In general, ergonomic keyboards are designed to keep the user's arms and wrists in a near-neutral position, which means the slant angle (the lateral rotation angle for the keys in each half relative to the axis of the home row in a conventional keyboard) is approximately 10 to 12.5°, the slope (the angle of the keytop surfaces starting from the front edge closer to the user towards the top of the keyboard, relative to a horizontal plane) is -7.5°, and the tent or gable angle of each half (the angle of the keytops from the center of the keyboard towards its left and right edges, relative to the horizontal desk surface) is 20 to 30°.
Notes
Natural Keyboard
The first generation of the Microsoft ergonomic keyboards, named the Natural Keyboard, was released in September 1994, designed for Microsoft Windows 95 and Novell Netware. It was designed for Microsoft by Ziba Design with assistance and manufacturing by Key Tronic. The Microswitch division of Honeywell, which was responsible for that company's keyboards and was acquired by Key Tronic in early 1994, is also credited with design input.
The keyboard uses a fixed-split design, with each half of the alphanumeric section separated, laterally rotated, and tilted upwards and down from the center of the keyboard. This key arrangement was ergonomically designed to prevent carpal tunnel syndrome and other repetitive strain injuries associated with typing for long periods of time. Another innovation was the integrated wrist pad helping to ensure correct posture while sitting at the computer and further reducing strain on the neck, arms and wrists.
This keyboard also introduced three new keys purposed for Microsoft's upcoming operating system: two Windows logo keys, one on each side between the Ctrl and Alt keys, and a menu key between the right Windows key and the right Ctrl key. The three Num Lock/Caps Lock/Scroll Lock status lights are arranged vertically between the two halves of the alphanumeric section.
Although it was not the first ergonomic keyboard, it was the first widely available sub-$100 offering. The keyboard gained popularity quickly, exceeding Microsoft's forecast of 100,000 units sold by the end of 1994. Microsoft soon asked Key Tronic to ramp up production to 100,000 per month in 1995, and the Natural Keyboard sold over 600,000 per month at its peak. Over 3 million units had been sold by February 1998, when its successor, the Natural Keyboard Elite, was introduced.
As with most Microsoft keyboards, software (Microsoft IntelliType) is bundled with the keyboard for both Mac OS X and Windows, allowing users to customize the function keys and modify keys fairly extensively.
Natural Keyboard Elite
The Microsoft Natural Keyboard has had several upgrades and refreshes since its introduction. The first of these was the Natural Keyboard Elite, introduced in February 1998 at a retail price of . Like the original Natural Keyboard, the Elite was manufactured by Key Tronic, who also assisted in its development.
The Elite features a nonstandard layout of the six-key navigation/edit key cluster normally found above the cursor keys (Insert/Delete, Home/End, and Page Up/Page Down). Another common criticism of the Elite is that the arrow keys' inverted-T layout has been changed into a cross-like layout, with left/right arrow keys side by side and up/down keys bracketing them from above and below, increasing the distance between the vertical keys. Another significant change was the keyboard's adjustable feet. While the original Microsoft Natural Keyboard had feet in the front to generate reverse tilt, the Elite and its successors have their feet in the back. The Natural Keyboard Elite was manufactured in at least two different color schemes; white with black lettering and black with white lettering.
Natural Keyboard Pro
The third iteration was the Natural Keyboard Pro, introduced in June 1999 at a retail price of . The Natural Keyboard Pro restored the standard inverted-T layout of the cursor keys and six-key nav/edit cluster 2×3 layout, and added a row of program shortcut keys along the top edge of the keyboard (above the function keys and the numeric keypad), including multimedia keys and power management keys. Windows Vista and Windows 7 have the ability to customize shortcut key behavior without additional software when using the "internet keyboard" keyboard layout. Some other modern operating systems, such as FreeBSD and most Linux distributions, have comparable configuration options. The Natural Keyboard Pro also included an internal two-port USB hub, which was commonly used to connect other input devices such as a mouse or trackball, but this was dropped in subsequent iterations.
Natural Multimedia Keyboard
In September 2002, Microsoft introduced the redesigned Natural Multimedia Keyboard (sometimes styled as MultiMedia) at a retail price of . The Natural Multimedia Keyboard reworked the row of multimedia buttons and included the controversial F Lock feature, which originally debuted in the Microsoft Office Keyboard. Another common criticism is that although the bunched arrow keys of the previous generation were fixed by returning to the standard inverted-T layout, the six-key nav/edit cluster was changed to a 2×3 vertical layout, with the Insert key relegated to an F-Lock function and the Delete key expanded to double height. On the Natural Multimedia Keyboard, the status indicator lights for Num lock, Scroll lock, and Caps lock were moved from between the banks of alphanumeric keys to a more traditional location above and to the right of the backspace key. The Natural Multimedia Keyboard was manufactured in at least three different color schemes, including white with blue accents, black with silver accents, and white with black accents.
Wireless Optical Desktop Pro
The Wireless Optical Desktop Pro was introduced alongside the Natural MultiMedia Keyboard in September 2002 at a retail price of . That Desktop bundle included a wireless version of that keyboard, a wireless optical mouse (sold separately as the Wireless Optical Mouse blue), a USB receiver, and an adapter to convert the USB plug to PS/2 for older systems. The finish of the mouse and keyboard was changed to black with silver accents, and the indicator lights (Num Lock/Caps Lock/Scroll Lock) were moved to the receiver rather than the keyboard to conserve power.
Natural Ergonomic Keyboard 4000
In September 2005, Microsoft introduced the Natural Ergonomic Keyboard 4000 at a retail price of . This keyboard provides a significantly changed ergonomic design, including an integrated leatherette wrist rest, noticeably concave key banks, and a removable front stand to generate negative slope, which helps to straighten the wrist and allows the fingers to drop naturally onto the keys. The multimedia keys have again been redesigned, and the six-key nav/edit cluster returns to the standard 3×2 horizontal rectangular layout. The F Lock key now defaults to "on", providing the original function key features rather than the new "enhanced" functions, and retains its setting across reboots. The 4000 has the indicator lights for Caps lock, etc. moved back to between the banks of keys, although they are now below the space bar, rather than above. The 4000 is also significantly quieter to type on, with less of the distinctive "click clack" noise that is common with older keyboards, as it likely uses cheaper membrane key switches (as opposed to mechanical ones), which tend to be quieter but have twice as much travel before a press registers. The space bar, however, has been reported by several reviewers to be unusually noisy and difficult to depress. The 4000 is available in two variants, Business and Retail. The exact differences are not known, although product descriptions imply that the packaging is different, and prices are often slightly higher for the Business edition.
Natural Ergonomic 7000 keyboard
In June 2007, Microsoft introduced the Natural Wireless Ergonomic Keyboard 7000 as part of the Natural Ergonomic Desktop 7000 bundle, which includes the Natural Wireless Laser Mouse 7000 at a retail price of . The biggest difference between the 7000 and 4000 keyboards, aside from wireless functionality, is the position of the status lights (Num Lock, Caps Lock, Scroll Lock, and F Lock). On the Natural Ergonomic 4000, these lights are on the wrist rest, centered vertically under the spacebar. In their place, on the Natural Ergonomic 7000 keyboard, resides a single battery indicator light.
The Microsoft Natural Ergonomic Desktop 7000 comes with a USB wireless dongle that connects both the mouse and the keyboard. The attachment to elevate the front of the keyboard is separate in the box. The Natural Ergonomic Desktop 7000 bundle also comes with batteries, a very brief user guide, a disk containing the Microsoft Intellitype and Intellipoint software, and the Natural Wireless Laser Mouse 7000, which uses the same physical design as the Natural Wireless Laser Mouse 6000 with a different radio frequency.
Sculpt ergonomic keyboard
Microsoft introduced the Sculpt Ergonomic keyboard in August 2013 as part of the Sculpt Ergonomic Desktop bundle with the Sculpt Ergonomic mouse at a retail price of . The keyboard was made available separately in "Business" packaging for .
The wireless keyboard uses a scissor-switch mechanism and features a detached number pad. The arrangement of the six-key nav/edit cluster is nonstandard, although the arrow keys are still laid out as an inverted-T. The Sculpt Ergonomic keyboard and mouse connect to the computer wirelessly using a 2.4 GHz radio through a proprietary USB dongle. The receiver and keyboard communicate using 128-bit AES encryption and are permanently paired at the factory. Because of this, the dongle cannot be replaced and must occupy a USB port; this feature has attracted criticism as a Bluetooth connection would free up a USB port and ensure the keyboard could still be used even if the dongle was misplaced. Microsoft state the proprietary wireless connection eliminates any delay in waking the computer. Rather than using feet to elevate the back of the keyboard, the Sculpt Ergonomic keyboard comes with a reverse-tilt riser that snaps on to the bottom front edge of the keyboard using magnets.
During development, the Sculpt keyboard was codenamed "Manta Ray" for its resemblance to the animal.
Surface Ergonomic Keyboard
The Microsoft Surface Ergonomic keyboard was announced in October 2016 at a retail price of , alongside other accessories for the Surface Studio.
The shape of the Surface Ergonomic keyboard is similar to the Sculpt, but the six-key navigation block returns to the standard two-row, three-column arrangement, and the number pad is integrated into the right side of the Surface Ergonomic. The Surface is gray and the wrist pad is covered with Alcantara instead of the gloss black finish and vinyl wrist pad of the Sculpt. The Surface has also dropped the option to add the magnetic front riser for negative slope. In addition, the Surface Ergonomic connects wirelessly over Bluetooth instead of using a proprietary dongle.
One reviewer noted the typing action of the Surface scissor switches was "more satisfying with improved quality" than the Sculpt. Paul Thurrott criticized the Surface Ergonomic keyboard for dropping the front riser option and the increased width, which extends the reach needed to use the mouse for right-handed users.
Microsoft Ergonomic Keyboard
The Microsoft Ergonomic keyboard was introduced in 2019 and is the successor to the 4000 keyboard. Like the 4000, it is black, wired, contains three sections (from left to right, alphanumeric, navigation, and numeric keypad), and is not backlit. It drops the zoom toggle and the back/forward buttons under the spacebar, and the gaps that separated the function keys into groups are gone.
See also
Ergonomic keyboard
List of repetitive strain injury software
Maltron
Repetitive strain injury
References
External links
Official website
Technical datasheet for Natural Ergonomic Keyboard (pdf)
PC World review written by Michael S. Lasky (posted October 25, 2005)
CNet Asia review written by Felisa Yang (posted September 21, 2005)
Review written by Xah Lee, 2006
Coding horror review written by Jeff Atwood
Computer keyboard models
Microsoft peripherals
Physical ergonomics |
522938 | https://en.wikipedia.org/wiki/Ciphertext-only%20attack | Ciphertext-only attack | In cryptography, a ciphertext-only attack (COA) or known ciphertext attack is an attack model for cryptanalysis where the attacker is assumed to have access only to a set of ciphertexts. While the attacker has no channel providing access to the plaintext prior to encryption, in all practical ciphertext-only attacks, the attacker still has some knowledge of the plaintext. For instance, the attacker might know the language in which the plaintext is written or the expected statistical distribution of characters in the plaintext. Standard protocol data and messages are commonly part of the plaintext in many deployed systems and can usually be guessed or known efficiently as part of a ciphertext-only attack on these systems.
Attack
The attack is completely successful if the corresponding plaintexts can be deduced, or even better, the key. The ability to obtain any information at all about the underlying plaintext beyond what was pre-known to the attacker is still considered a success. For example, if an adversary is sending ciphertext continuously to maintain traffic-flow security, it would be very useful to be able to distinguish real messages from nulls. Even making an informed guess of the existence of real messages would facilitate traffic analysis.
In the history of cryptography, early ciphers, implemented using pen-and-paper, were routinely broken using ciphertexts alone. Cryptographers developed statistical techniques for attacking ciphertext, such as frequency analysis. Mechanical encryption devices such as Enigma made these attacks much more difficult (although, historically, Polish cryptographers were able to mount a successful ciphertext-only cryptanalysis of the Enigma by exploiting an insecure protocol for indicating the message settings). More advanced ciphertext-only attacks on the Enigma were mounted in Bletchley Park during World War II, by intelligently guessing plaintexts corresponding to intercepted ciphertexts.
Modern
Every modern cipher attempts to provide protection against ciphertext-only attacks. The vetting process for a new cipher design standard usually takes many years and includes exhaustive testing of large quantities of ciphertext for any statistical departure from random noise. See: Advanced Encryption Standard process. Also, the field of steganography evolved, in part, to develop methods like mimic functions that allow one piece of data to adopt the statistical profile of another. Nonetheless, poor cipher usage or reliance on home-grown proprietary algorithms that have not been subject to thorough scrutiny has resulted in many computer-age encryption systems that are still subject to ciphertext-only attack.
Examples
Early versions of Microsoft's PPTP virtual private network software used the same RC4 key for the sender and the receiver (later versions had other problems). In any case where a stream cipher like RC4 is used twice with the same key it is open to ciphertext-only attack, as the sketch after this list shows. See: stream cipher attack
Wired Equivalent Privacy (WEP), the first security protocol for Wi-Fi, proved vulnerable to several attacks, most of them ciphertext-only.
GSM's A5/1 and A5/2
Some modern cipher designs have later been shown to be vulnerable to ciphertext-only attacks. For example, Akelarre.
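The key-reuse failure behind the PPTP example above can be sketched in a few lines of C; the messages and the keystream-generating formula are illustrative assumptions standing in for actual RC4 output:

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *p1 = "ATTACK AT DAWN";     /* two same-length messages */
    const char *p2 = "RETREAT AT SIX";
    size_t n = strlen(p1);
    unsigned char ks[14], c1[14], c2[14];

    /* Stand-in keystream bytes; with RC4 this would be the cipher output,
       identical for both messages because the same key was used twice. */
    for (size_t i = 0; i < n; i++) ks[i] = (unsigned char)(37 * i + 101);

    for (size_t i = 0; i < n; i++) { c1[i] = p1[i] ^ ks[i]; c2[i] = p2[i] ^ ks[i]; }

    /* XORing the ciphertexts cancels the keystream, leaving p1 XOR p2. */
    for (size_t i = 0; i < n; i++) printf("%02x", (unsigned)(c1[i] ^ c2[i]));
    printf("\n");
    for (size_t i = 0; i < n; i++)
        printf("%02x", (unsigned)((unsigned char)p1[i] ^ (unsigned char)p2[i]));
    printf("\n");
    return 0;
}

The two printed lines are identical: the keystream cancels, handing the attacker the XOR of the two plaintexts without any knowledge of the key, from which crib-dragging can recover both messages.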
A cipher whose key space is too small is subject to brute force attack with access to nothing but ciphertext by simply trying all possible keys. All that is needed is some way to distinguish valid plaintext from random noise, which is easily done for natural languages when the ciphertext is longer than the unicity distance. One example is DES, which only has 56-bit keys. All too common current examples are commercial security products that derive keys for otherwise impregnable ciphers like AES from a user-selected password. Since users rarely employ passwords with anything close to the entropy of the cipher's key space, such systems are often quite easy to break in practice using only ciphertext. The 40-bit CSS cipher used to encrypt DVD video discs can always be broken with this method, as all that is needed is to look for MPEG-2 video data.
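A minimal sketch of such a search, using a toy one-byte XOR "cipher" as a stand-in for any algorithm with a too-small key space, shows how little is needed to distinguish valid plaintext from random noise:

#include <stdio.h>
#include <ctype.h>
#include <string.h>

int main(void) {
    /* Toy cipher: every byte XORed with one secret byte, so the key space
       is only 256 -- a stand-in for any cipher whose key space is too small. */
    const unsigned char secret_key = 0x5A;
    const char *plain = "the quick brown fox";
    unsigned char ct[64];
    size_t i, n = strlen(plain);
    for (i = 0; i < n; i++) ct[i] = plain[i] ^ secret_key;

    /* Ciphertext-only brute force: try every key and keep the candidate
       whose output looks most like natural-language text. */
    int best_key = 0, best_score = -1;
    for (int k = 0; k < 256; k++) {
        int score = 0;
        for (i = 0; i < n; i++) {
            unsigned char c = ct[i] ^ k;
            if (isalpha(c) || c == ' ') score++;
        }
        if (score > best_score) { best_score = score; best_key = k; }
    }
    printf("recovered key 0x%02X: ", best_key);
    for (i = 0; i < n; i++) putchar(ct[i] ^ best_key);
    putchar('\n');
    return 0;
}

The same search structure applies to DES; only the size of the key space (2⁵⁶ instead of 2⁸) and the plaintext test change.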
References
Alex Biryukov and Eyal Kushilevitz, "From Differential Cryptanalysis to Ciphertext-Only Attacks", CRYPTO 1998, pp. 72–88.
Cryptographic attacks |
523166 | https://en.wikipedia.org/wiki/Free%20monoid | Free monoid | In abstract algebra, the free monoid on a set is the monoid whose elements are all the finite sequences (or strings) of zero or more elements from that set, with string concatenation as the monoid operation and with the unique sequence of zero elements, often called the empty string and denoted by ε or λ, as the identity element. The free monoid on a set A is usually denoted A∗. The free semigroup on A is the subsemigroup of A∗ containing all elements except the empty string. It is usually denoted A+.
More generally, an abstract monoid (or semigroup) S is described as free if it is isomorphic to the free monoid (or semigroup) on some set.
As the name implies, free monoids and semigroups are those objects which satisfy the usual universal property defining free objects, in the respective categories of monoids and semigroups. It follows that every monoid (or semigroup) arises as a homomorphic image of a free monoid (or semigroup). The study of semigroups as images of free semigroups is called combinatorial semigroup theory.
Free monoids (and monoids in general) are associative, by definition; that is, they are written without any parenthesis to show grouping or order of operation. The non-associative equivalent is the free magma.
Examples
Natural numbers
The monoid (N0,+) of natural numbers (including zero) under addition is a free monoid on a singleton free generator, in this case the natural number 1.
According to the formal definition, this monoid consists of all sequences like "1", "1+1", "1+1+1", "1+1+1+1", and so on, including the empty sequence.
Mapping each such sequence to its evaluation result and the empty sequence to zero establishes an isomorphism from the set of such sequences to N0.
This isomorphism is compatible with "+", that is, for any two sequences s and t, if s is mapped (i.e. evaluated) to a number m and t to n, then their concatenation s+t is mapped to the sum m+n.
Kleene star
In formal language theory, usually a finite set of "symbols" A (sometimes called the alphabet) is considered. A finite sequence of symbols is called a "word over A", and the free monoid A∗ is called the "Kleene star of A".
Thus, the abstract study of formal languages can be thought of as the study of subsets of finitely generated free monoids.
For example, assuming an alphabet A = {a, b, c}, its Kleene star A∗ contains all concatenations of a, b, and c:
{ε, a, ab, ba, caa, ...}.
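The first few layers of this Kleene star can be enumerated mechanically; a short C sketch lists every word of length at most 2 over this alphabet:

#include <stdio.h>

/* Enumerate the lowest layers of {a,b,c}*: the empty word and all words
   of length 1 and 2; longer words continue the same pattern. */
int main(void) {
    const char alpha[] = "abc";
    printf("eps");                                /* the empty word */
    for (int i = 0; i < 3; i++)
        printf(" %c", alpha[i]);
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            printf(" %c%c", alpha[i], alpha[j]);
    printf("\n");
    return 0;
}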
If A is any set, the word length function on A∗ is the unique monoid homomorphism from A∗ to (N0,+) that maps each element of A to 1. A free monoid is thus a graded monoid. (A graded monoid is a monoid that can be written as M = M0 ⊕ M1 ⊕ M2 ⊕ ⋯. Each Mn is a grade; the grading here is just the length of the string. That is, Mn contains those strings of length n. The ⊕ symbol here can be taken to mean "set union"; it is used instead of the symbol ∪ because, in general, set unions might not be monoids, and so a distinct symbol is used. By convention, gradations are always written with the ⊕ symbol.)
There are deep connections between the theory of semigroups and that of automata. For example, every formal language has a syntactic monoid that recognizes that language. For the case of a regular language, that monoid is isomorphic to the transition monoid associated to the semiautomaton of some deterministic finite automaton that recognizes that language. The regular languages over an alphabet A are the closure of the finite subsets of A*, the free monoid over A, under union, product, and generation of submonoid.
For the case of concurrent computation, that is, systems with locks, mutexes or thread joins, the computation can be described with history monoids and trace monoids. Roughly speaking, elements of the monoid can commute, (e.g. different threads can execute in any order), but only up to a lock or mutex, which prevent further commutation (e.g. serialize thread access to some object).
Conjugate words
We define a pair of words in A∗ of the form uv and vu as conjugate: the conjugates of a word are thus its circular shifts. Two words are conjugate in this sense if they are conjugate in the sense of group theory as elements of the free group generated by A.
Equidivisibility
A free monoid is equidivisible: if the equation mn = pq holds, then there exists an s such that either m = ps, sn = q or ms = p, n = sq. This result is also known as Levi's lemma.
A monoid is free if and only if it is graded and equidivisible.
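In a free monoid of strings, the witness s of Levi's lemma is just the overhang of the longer prefix; a minimal C illustration follows, with four arbitrary example words:

#include <stdio.h>
#include <string.h>

/* Given words with m.n == p.q and |m| >= |p|, Levi's lemma yields
   an s with m = p.s and q = s.n; here s is the overhang of m past p. */
int main(void) {
    const char *m = "abcde", *n = "fg";
    const char *p = "abc",   *q = "defg";    /* same product "abcdefg" */

    size_t lm = strlen(m), lp = strlen(p);
    if (lm >= lp) {
        const char *s = m + lp;              /* s = "de" */
        printf("s = \"%s\": m = p.s and q = s.n\n", s);
        printf("check: %d %d\n",
               strncmp(m, p, lp) == 0,        /* m begins with p */
               strncmp(q, s, lm - lp) == 0);  /* q begins with s */
    }
    return 0;
}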
Free generators and rank
The members of a set A are called the free generators for A∗ and A+. The superscript * is then commonly understood to be the Kleene star. More generally, if S is an abstract free monoid (semigroup), then a set of elements which maps onto the set of single-letter words under an isomorphism to a semigroup A+ (monoid A∗) is called a set of free generators for S.
Each free semigroup (or monoid) S has exactly one set of free generators, the cardinality of which is called the rank of S.
Two free monoids or semigroups are isomorphic if and only if they have the same rank. In fact, every set of generators for a free semigroup or monoid S contains the free generators (see definition of generators in Monoid) since a free generator has word length 1 and hence can only be generated by itself. It follows that a free semigroup or monoid is finitely generated if and only if it has finite rank.
A submonoid N of A∗ is stable if u, v, ux, xv in N together imply x in N. A submonoid of A∗ is stable if and only if it is free.
For example, using the set of bits { "0", "1" } as A, the set N of all bit strings containing an even number of "1"s is a stable submonoid because if u contains an even number of "1"s, and ux as well, then x must contain an even number of "1"s, too. While N cannot be freely generated by any set of single bits, it can be freely generated by the set of bit strings { "0", "11", "101", "1001", "10001", ... } – the set of strings of the form "10n1" for some integer n.
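The claimed free generation can be checked mechanically: because the generating set is a prefix code, a greedy left-to-right scan recovers the unique factorization of any element of N. A short C sketch, with an arbitrary example string:

#include <stdio.h>

/* Factor a bit string with an even number of 1s into the free generators
   "0" and "10...01"; the parse is unique because the set is a prefix code. */
int main(void) {
    const char *w = "0110100100";   /* four 1s: an element of N */
    const char *p = w;
    while (*p) {
        if (*p == '0') {
            printf("[0]");
            p++;
        } else {                     /* consume a factor of the form "1 0^n 1" */
            const char *j = p + 1;
            while (*j == '0') j++;
            if (*j != '1') { printf(" <- not in N\n"); return 1; }
            printf("[%.*s]", (int)(j - p + 1), p);
            p = j + 1;
        }
    }
    printf("\n");                    /* prints [0][11][0][1001][0][0] */
    return 0;
}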
Codes
A set of free generators for a free monoid P is referred to as a basis for P: a set of words C is a code if C* is a free monoid and C is a basis. A set X of words in A∗ is a prefix, or has the prefix property, if it does not contain a proper (string) prefix of any of its elements. Every prefix in A+ is a code, indeed a prefix code.
A submonoid N of A∗ is right unitary if x, xy in N implies y in N. A submonoid is generated by a prefix if and only if it is right unitary.
Factorization
A factorization of a free monoid is a sequence of subsets of words with the property that every word in the free monoid can be written as a concatenation of elements drawn from the subsets. The Chen–Fox–Lyndon theorem states that the Lyndon words furnish a factorization. More generally, Hall words provide a factorization; the Lyndon words are a special case of the Hall words.
Free hull
The intersection of free submonoids of a free monoid A∗ is again free. If S is a subset of a free monoid A* then the intersection of all free submonoids of A* containing S is well-defined, since A* itself is free, and contains S; it is a free monoid and called the free hull of S. A basis for this intersection is a code.
The defect theorem states that if X is finite and C is the basis of the free hull of X, then either X is a code and C = X, or
|C| ≤ |X| − 1 .
Morphisms
A monoid morphism f from a free monoid B∗ to a monoid M is a map such that f(xy) = f(x)⋅f(y) for words x,y and f(ε) = ι, where ε and ι denote the identity elements of B∗ and M, respectively. The morphism f is determined by its values on the letters of B and conversely any map from B to M extends to a morphism. A morphism is non-erasing or continuous if no letter of B maps to ι and trivial if every letter of B maps to ι.
A morphism f from a free monoid B∗ to a free monoid A∗ is total if every letter of A occurs in some word in the image of f; cyclic or periodic if the image of f is contained in {w}∗ for some word w of A∗. A morphism f is k-uniform if the length |f(a)| is constant and equal to k for all a in A. A 1-uniform morphism is strictly alphabetic or a coding.
A morphism f from a free monoid B∗ to a free monoid A∗ is simplifiable if there is an alphabet C of cardinality less than that of B such that the morphism f factors through C∗, that is, it is the composition of a morphism from B∗ to C∗ and a morphism from that to A∗; otherwise f is elementary. The morphism f is called a code if the image of the alphabet B under f is a code: every elementary morphism is a code.
Test sets
For L a subset of B∗, a finite subset T of L is a test set for L if morphisms f and g on B∗ agree on L if and only if they agree on T. The Ehrenfeucht conjecture is that any subset L has a test set: it has been proved independently by Albert and Lawrence; McNaughton; and Guba. The proofs rely on Hilbert's basis theorem.
Map and fold
The computational embodiment of a monoid morphism is a map followed by a fold. In this setting, the free monoid on a set A corresponds to lists of elements from A with concatenation as the binary operation. A monoid homomorphism from the free monoid to any other monoid (M,•) is a function f such that
f(x1...xn) = f(x1) • ... • f(xn)
f() = e
where e is the identity on M. Computationally, every such homomorphism corresponds to a map operation applying f to all the elements of a list, followed by a fold operation which combines the results using the binary operator •. This computational paradigm (which can be generalized to non-associative binary operators) has inspired the MapReduce software framework.
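A minimal C rendering of this map-then-fold view, with (N0,+) as the target monoid and every letter mapped to 1, so that the resulting morphism computes word length:

#include <stdio.h>

/* A monoid morphism realized as map-then-fold: map each letter a to f(a)
   in the target monoid, then fold the results with the target operation. */
static int f(char a) { (void)a; return 1; }   /* every letter maps to 1 */

static int hom(const char *word) {
    int acc = 0;                       /* identity element of (N0, +) */
    for (; *word; word++)
        acc = acc + f(*word);          /* fold with the binary operation + */
    return acc;
}

int main(void) {
    printf("hom(\"abc\") = %d\n", hom("abc"));                   /* 3 */
    printf("hom(\"ab\") + hom(\"c\") = %d\n", hom("ab") + hom("c")); /* 3 */
    return 0;
}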
Endomorphisms
An endomorphism of A∗ is a morphism from A∗ to itself. The identity map I is an endomorphism of A∗, and the endomorphisms form a monoid under composition of functions.
An endomorphism f is prolongable if there is a letter a such that f(a) = as for a non-empty string s.
String projection
The operation of string projection is an endomorphism. That is, given a letter a ∈ Σ and a string s ∈ Σ∗, the string projection pa(s) removes every occurrence of a from s; it is formally defined by pa(ε) = ε, pa(ta) = pa(t), and pa(tb) = pa(t)b for letters b ≠ a.
Note that string projection is well-defined even if the rank of the monoid is infinite, as the above recursive definition works for all strings of finite length. String projection is a morphism in the category of free monoids, so that pa : Σ∗ → (Σ − a)∗, where (Σ − a)∗ is understood to be the free monoid of all finite strings that don't contain the letter a. Projection commutes with the operation of string concatenation, so that pa(st) = pa(s)pa(t) for all strings s and t. There are many right inverses to string projection, and thus it is a split epimorphism.
The identity morphism is defined as pε, with pε(s) = s for all strings s.
String projection is commutative, as clearly pa(pb(s)) = pb(pa(s)) for any two letters a and b.
For free monoids of finite rank, this follows from the fact that free monoids of the same rank are isomorphic, as projection reduces the rank of the monoid by one.
String projection is idempotent, as pa(pa(s)) = pa(s) for all strings s. Thus, projection is an idempotent, commutative operation, and so it forms a bounded semilattice or a commutative band.
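A direct C implementation of pa, removing a letter in place and illustrating idempotence (the input word is an arbitrary example):

#include <stdio.h>

/* p_a removes every occurrence of the letter a, compacting the string. */
static void project(char *s, char a) {
    char *w = s;
    for (; *s; s++)
        if (*s != a) *w++ = *s;
    *w = '\0';
}

int main(void) {
    char s[] = "banana";
    project(s, 'a');
    printf("p_a(banana) = %s\n", s);        /* "bnn" */
    project(s, 'a');                        /* idempotent: nothing left to remove */
    printf("p_a(p_a(banana)) = %s\n", s);   /* still "bnn" */
    return 0;
}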
The free commutative monoid
Given a set A, the free commutative monoid on A is the set of all finite multisets with elements drawn from A, with the monoid operation being multiset sum and the monoid unit being the empty multiset.
For example, if A = {a, b, c}, elements of the free commutative monoid on A are of the form
{ε, a, ab, a²b, ab³c⁴, ...}.
The fundamental theorem of arithmetic states that the monoid of positive integers under multiplication is a free commutative monoid on an infinite set of generators, the prime numbers.
The free commutative semigroup is the subset of the free commutative monoid which contains all multisets with elements drawn from A except the empty multiset.
The free partially commutative monoid, or trace monoid, is a generalization that encompasses both the free and free commutative monoids as instances. This generalization finds applications in combinatorics and in the study of parallelism in computer science.
See also
String operations
Notes
References
External links
Semigroup theory
Formal languages
Free algebraic structures
Combinatorics on words |
528720 | https://en.wikipedia.org/wiki/Tiny%20Encryption%20Algorithm | Tiny Encryption Algorithm | In cryptography, the Tiny Encryption Algorithm (TEA) is a block cipher notable for its simplicity of description and implementation, typically a few lines of code. It was designed by David Wheeler and Roger Needham of the Cambridge Computer Laboratory; it was first presented at the Fast Software Encryption workshop in Leuven in 1994, and first published in the proceedings of that workshop.
The cipher is not subject to any patents.
Properties
TEA operates on two 32-bit unsigned integers (could be derived from a 64-bit data block) and uses a 128-bit key. It has a Feistel structure with a suggested 64 rounds, typically implemented in pairs termed cycles. It has an extremely simple key schedule, mixing all of the key material in exactly the same way for each cycle. Different multiples of a magic constant are used to prevent simple attacks based on the symmetry of the rounds. The magic constant, 2654435769 or 0x9E3779B9, is chosen to be ⌊2³²/ϕ⌋, where ϕ is the golden ratio (as a nothing-up-my-sleeve number).
TEA has a few weaknesses. Most notably, it suffers from equivalent keys; each key is equivalent to three others, which means that the effective key size is only 126 bits. As a result, TEA is especially bad as a cryptographic hash function. This weakness led to a method for hacking Microsoft's Xbox game console, where the cipher was used as a hash function. TEA is also susceptible to a related-key attack which requires 2²³ chosen plaintexts under a related-key pair, with 2³² time complexity. Because of these weaknesses, the XTEA cipher was designed.
Versions
The first published version of TEA was supplemented by a second version that incorporated extensions to make it more secure. Block TEA (which was specified along with XTEA) operates on arbitrary-size blocks in place of the 64-bit blocks of the original.
A third version (XXTEA), published in 1998, described further improvements for enhancing the security of the Block TEA algorithm.
Reference code
Following is an adaptation of the reference encryption and decryption routines in C, released into the public domain by David Wheeler and Roger Needham:
#include <stdint.h>
void encrypt (uint32_t v[2], const uint32_t k[4]) {
uint32_t v0=v[0], v1=v[1], sum=0, i; /* set up */
uint32_t delta=0x9E3779B9; /* a key schedule constant */
uint32_t k0=k[0], k1=k[1], k2=k[2], k3=k[3]; /* cache key */
for (i=0; i<32; i++) { /* basic cycle start */
sum += delta;
v0 += ((v1<<4) + k0) ^ (v1 + sum) ^ ((v1>>5) + k1);
v1 += ((v0<<4) + k2) ^ (v0 + sum) ^ ((v0>>5) + k3);
} /* end cycle */
v[0]=v0; v[1]=v1;
}
void decrypt (uint32_t v[2], const uint32_t k[4]) {
uint32_t v0=v[0], v1=v[1], sum=0xC6EF3720, i; /* set up; sum is (delta << 5) & 0xFFFFFFFF */
uint32_t delta=0x9E3779B9; /* a key schedule constant */
uint32_t k0=k[0], k1=k[1], k2=k[2], k3=k[3]; /* cache key */
for (i=0; i<32; i++) { /* basic cycle start */
v1 -= ((v0<<4) + k2) ^ (v0 + sum) ^ ((v0>>5) + k3);
v0 -= ((v1<<4) + k0) ^ (v1 + sum) ^ ((v1>>5) + k1);
sum -= delta;
} /* end cycle */
v[0]=v0; v[1]=v1;
}
Note that the reference implementation acts on multi-byte numeric values. The original paper does not specify how to derive the numbers it acts on from binary or other content.
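A minimal round-trip check, compiled together with the routines above, confirms that decryption inverts encryption; the key and plaintext words below are arbitrary illustrative values, not official test vectors:

#include <stdio.h>
#include <stdint.h>

/* Declarations for the reference routines given above. */
void encrypt (uint32_t v[2], const uint32_t k[4]);
void decrypt (uint32_t v[2], const uint32_t k[4]);

int main(void) {
    uint32_t v[2] = { 0x01234567, 0x89ABCDEF };   /* arbitrary 64-bit block */
    uint32_t k[4] = { 0xDEADBEEF, 0x00C0FFEE, 0x12345678, 0x9E3779B9 };   /* arbitrary key */

    encrypt(v, k);
    printf("ciphertext: %08X %08X\n", v[0], v[1]);
    decrypt(v, k);
    printf("plaintext : %08X %08X\n", v[0], v[1]);   /* restores 01234567 89ABCDEF */
    return 0;
}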
See also
RC4 – A stream cipher that, just like TEA, is designed to be very simple to implement.
XTEA – First version of Block TEA's successor.
XXTEA – Corrected Block TEA's successor.
Treyfer – A simple and compact encryption algorithm with 64-bit key size and block size.
Notes
References
External links
Test vectors for TEA
JavaScript implementation of XXTEA with Base64
PHP implementation of XTEA (German language)
JavaScript implementation of XXTEA
JavaScript and PHP implementations of XTEA (Dutch text)
AVR ASM implementation
SEA Scalable Encryption Algorithm for Small Embedded Applications (Standaert, Piret, Gershenfeld, Quisquater - July 2005 UCL Belgium & MIT USA)
Broken block ciphers
Feistel ciphers
Free ciphers
University of Cambridge Computer Laboratory
Articles with example C code |
528914 | https://en.wikipedia.org/wiki/Bank%20of%20Ireland | Bank of Ireland | Bank of Ireland Group plc () is a commercial bank operation in Ireland and one of the traditional Big Four Irish banks. Historically the premier banking organisation in Ireland, the Bank occupies a unique position in Irish banking history. At the core of the modern-day group is the old Bank of Ireland, the ancient institution established by Royal Charter in 1783.
History
Bank of Ireland is the oldest bank in continuous operation (apart from closures due to bank strikes in 1950, 1966, 1970, and 1976) in Ireland.
In 1781, the Bank of Ireland Act was passed by the Parliament of Ireland, establishing Bank of Ireland. On 25 June 1783, Bank of Ireland opened for business at Mary's Abbey in a private house previously owned by one Charles Blakeney.
On 6 June 1808, Bank of Ireland moved to 2 College Green. In 1864, Bank of Ireland paid its first interest on deposits.
In 1926, Bank of Ireland took control of the National Land Bank. In 1948, The Bank of Ireland 1783–1946 by F.G. Hall was published jointly by Hodges Figgis (Dublin) and Blackwell's (Oxford).
In 1958, the Bank took over the Hibernian Bank Limited. In 1965, The National Bank Ltd, a bank founded by Daniel O'Connell in 1835, had branches in Ireland and Britain. The Irish branches were acquired by Bank of Ireland and rebranded temporarily as National Bank of Ireland, before being fully incorporated into Bank of Ireland. The British branches were acquired by Williams & Glyn's Bank.
In 1980, Bank of Ireland introduced its first Pass card and Pass machine (ATM). In 1983, Bank of Ireland celebrated its bicentenary and a commemorative stamp was issued. The Bank also commissioned the publication of "An Irish Florilegium" that year. In 1995, Bank of Ireland merged First New Hampshire Bank with Royal Bank of Scotland's Citizens Financial Group. Only branches in cities and major towns had ATMs in the 1980s, but branches in most medium and small towns installed them in the 1990s.
In 1996, Bank of Ireland bought the Bristol and West building society for UK£600 million (€882 million), which kept its own brand. In 1999, the bank held merger talks with Alliance & Leicester, but they were called off. In 2000, it was announced that Bank of Ireland was acquiring Chase de Vere.
In 2002, Bank of Ireland acquired Iridian, a US investment manager, which doubled the size of its asset management business. In 2005, Bank of Ireland completed the sale of the Bristol and West branch and Direct Savings (Contact Centre) to Britannia Building Society.
In 2008, Moody's Investors Service changed its rating of Bank of Ireland from stable to negative. Moody's pinpointed concerns over weakening asset quality and the impact of a more challenging economic environment on profitability at Bank of Ireland. A share price collapse followed. In 2009, The Irish government announced a €7 billion rescue package for the bank and Allied Irish Banks plc in February. The biggest bank robbery in the history of the state took place at Bank of Ireland at College Green. Consultants Oliver Wyman validated Bank of Ireland's bad debt levels at €6 billion over three years to March 2011, a bad debt level which was exceeded by almost €1 billion within a matter of months.
In 2010, The European Commission ordered the disposal of Bank of Ireland Asset Management, New Ireland Assurance, ICS Building Society, its US Foreign Exchange business and the stakes held in the Irish Credit Bureau and in an American Asset Manager followed the receipt of Irish Government State aid. In 2011, the Securities Services Division of the bank was sold to Northern Trust Corporation.
In 2013, Bank of Ireland more than doubled interest rates on mortgages tracking Bank of England rates (which had remained stable for four years), citing the need to hold more reserves and the 'increased cost of funding mortgages'. Although Ray Boulger of broker John Charcol described the move as 'having shot the reputation of its mortgages to smithereens', the bank continues to offer highly competitive mortgages through the Post Office.
In 2014, regulation of the bank was transferred to the European Central Bank. Also in 2014, the bank entered into a marketing alliance with EVO Payments International and re-enters the card acquiring market. BOI Payment Acceptance launches in December 2014.
Role as government banker
Bank of Ireland is not, and was never, the Irish central bank. However, as well as being a commercial bank – a deposit-taker and a credit institution – it performed many central bank functions, much like the earlier-established Bank of Scotland and Bank of England. Bank of Ireland operated the Exchequer Account and during the nineteenth century acted as something of a banker of last resort. Even the titles of the chairman of the board of directors (the Governor) and the title of the board itself (the Court of Directors) suggest a central bank status. From the foundation of the Irish Free State in 1922 until 31 December 1971, Bank of Ireland was the banker of the Irish Government.
Headquarters
The headquarters of the bank until the 1970s was the impressive Parliament House on College Green, Dublin. This building was originally designed by Sir Edward Lovett Pearce in 1729 to host the Irish Parliament, and it was the world's first purpose-built bicameral parliament building.
The bank had planned to commission a building designed by Sir John Soane to be constructed on the site bounded by Westmoreland Street, Fleet Street, College Street and D'Olier Street (now occupied by the Westin Hotel). However, the project was cancelled following the Act of Union in 1800, when the newly defunct Parliament House was bought by Bank of Ireland in 1803. The former Parliament House continues today as a working branch. Today, visitors can still view the impressive Irish House of Lords chamber within the old headquarters building. The Oireachtas, the modern parliament of the Republic of Ireland, is now housed in Leinster House in Dublin. In 2011, the Irish Government set out proposals to acquire the building as a venue for the state to use as a cultural venue.
In the 1970s the bank moved its headquarters to a modern building, now known as Miesian Plaza, on Lower Baggot Street, Dublin 2. As Frank McDonald notes in his book Destruction of Dublin, when these headquarters were built, it caused the world price of copper to rise – such was the usage in the building.
In 2010 the bank moved to its current, smaller headquarters on Mespil Road.
Banking services
Republic of Ireland
The Group provides a broad range of financial services in Ireland to the personal, commercial, industrial and agricultural sectors. These include checking and deposit services, overdrafts, term loans, mortgages, international asset financing, leasing, instalment credit, debt financing, foreign exchange facilities, interest and exchange rate hedging instruments, executor and trustee services.
International Operations
The bank is headquartered in Dublin, and has operations throughout the Republic of Ireland. It also operates in Northern Ireland, where it prints its own banknotes in Pounds Sterling (see section on banknotes below). In Great Britain, the bank expanded largely through the takeover of the Bristol and West Building Society in 1996. Bank of Ireland also provides financial services for the British Post Office throughout the UK and AA Savings. Operations in the rest of the world are primarily undertaken by Bank of Ireland Corporate Banking who provide services in France, Germany, Spain and the United States.
Banknotes
Although the Bank of Ireland is not a central bank, it does have sterling note-issuing rights in the United Kingdom. While the Bank has its headquarters in Dublin, it also has operations in Northern Ireland, where it retains the legal right (dating from before the partition of Ireland) to print its own banknotes. These are pound sterling notes and equal in value to Bank of England notes, and should not be confused with banknotes of the former Irish pound.
The obverse side of Bank of Ireland banknotes features the Bank of Ireland logo, below which is a line of heraldic shields each representing one of the six counties of Northern Ireland. Below this is a depiction of a seated Hibernia figure, surrounded by the Latin motto of the Bank, Bona Fides Reipublicae Stabilitas ("Good Faith is the Cornerstone of the State"). The current series of £5, £10 and £20 notes, issued in April 2008, all feature an illustration of the Old Bushmills Distillery on the reverse side. Prior to 2008, all Bank of Ireland notes featured an image of the Queen's University of Belfast on the reverse side.
The principal difference between the denominations is their colour and size:
£5 note, blue
£10 note, pink
£20 note, green
£50 note, blue-green
£100 note, red.
The Bank of Ireland has never issued its own banknotes in the Republic of Ireland. Section 60 of the Currency Act 1927 removed the right of Irish banks to issue banknotes, however "consolidated banknotes", of a common design issued by all "Shareholder Banks" under the Act, were issued between 1929 and 1953. These notes were not legal tender.
Controversies
Michael Soden
Michael Soden abruptly quit as group chief executive on 29 May 2004 when it was discovered that adult material that contravened company policy was found on his Bank PC. Soden issued a personal statement explaining that the high standards of integrity and behaviour in an environment of accountability, transparency and openness, which he espoused, would cause embarrassment to the Bank.
DIRT controversy
An IR£30.5 million tax arrears liability was settled by Bank of Ireland in July 2000. The Bank told the Oireachtas Public Accounts Committee Inquiry that its liability was in the region of £1.5 million. The settlement figure was 'dictated' by the Revenue Commissioners following an audit by the Commissioners. It was in Bank of Ireland that some of the most celebrated of the "celebrated cases" of non-compliance and bogus non-resident accounts have to date been discovered and disclosed: Thurles, Boyle, Roscrea (1990), Milltown Malbay (1991), Dundalk (1989–90), Killester (1992), Tullamore (1993), Mullingar (1996), Castlecomer, Clonmel, Ballybricken, Ballinasloe, Skibbereen (1988), Dungarvan and, disclosed to the Oireachtas Public Accounts Sub-Committee, Ballaghaderreen (1998) and Ballygar (1999). The Public Accounts Sub-Committee Inquiry concluded that "the most senior executives in the Bank of Ireland did seek to set an ethical tone for the bank and unsuccessfully sought Revenue Commissioners assistance to promote an industry-wide Code of Practice".
Stolen laptops
In April 2008 it was announced that four laptops with data pertaining to 10,000 customers were stolen between June and October 2007. This customer information included names, addresses, bank details, medical and pension details.
The thefts were initially reported to the Garda Síochána; however, the Bank's senior management did not know about the problem until February 2008, after an internal audit uncovered the theft, and the Bank did not advise the Data Protection Commissioner and the Central Bank of Ireland until mid-April 2008. It also came to light that none of the laptops used encryption to protect the sensitive data. The Bank has since released a press release detailing the seven branches affected and its initial response; later in the month the Bank confirmed that 31,500 customer records were affected as well as an increased number of branches.
Record bank robbery
On 27 February 2009, it was reported that a criminal gang from Dublin had stolen €7 million from the Bank of Ireland's main branch in College Green. The robbery was the biggest in the history of the Republic of Ireland, during which a girlfriend of an employee, her mother and her mother's five-year-old granddaughter were held hostage at gunpoint. Gardaí arrested six men the next day, and recovered €1.8 million. A spokesperson for the bank said: "Bank of Ireland's priority is for the safety and well-being of the staff member and the family involved in this incident and all of the bank's support services have been made available to them."
Wrong information on recapitalisation and bonuses
The information provided to the Department of Finance in 2009 in advance of a recapitalisation of the bank which cost the taxpayer €3.5 billion "was incomplete and misleading". It also gave wrong information to the Minister for Finance who in turn misled the Dáil on €66 million in bonuses it paid since receiving a State guarantee. External examiners found it used "a restrictive and uncommon interpretation of what constituted a performance bonus". Their report also found that there had been "a catalogue of errors" and that the information supplied by Bank of Ireland to the Department of Finance was "presented in a manner which minimised the level of additional payments made". The Bank paid €2 million by way of compensation to the Exchequer for providing "misleading" information.
Relationship with outsourcing companies
The Bank has forged strong links with IT outsourcing companies since 2004 or earlier. On 1 November 2010, IBM won the $450M full-scope outsourcing contract to manage BoI Group's Information Technology (IT) infrastructure services (e.g. mainframe, servers, desktops and print services) in a competitive bid against HP (the incumbent outsource provider) and HCL. This followed the natural expiration of the Bank's previous agreement with HP, which had been signed in 2004.
Following a competitive bid process with a number of parties, IBM was selected for exclusive contract negotiations in July 2011. During the intervening period, an extensive due diligence phase was undertaken and relevant regulatory approval was granted.
IBM will manage the Group's entire IT infrastructure, including desktop systems, servers, mainframes, local area networks and service desk. Since then, BOI has given HCL a €30m Business Process Outsourcing contract and has selected them as strategic local resourcing partner in Ireland. In addition to that, HCL have opened a software factory for Bank of Ireland in India and has started to outsource production support for the retail banking and payments applications in BOI. This exclusive relationship with HCL has been seen as controversial in the context of the substantial Irish taxpayer investment in Bank of Ireland – and the lack of any significant investment by HCL in Ireland. A banking analyst said in July 2011 that BOI's IT system is "very antiquated."
Closing accounts associated with Palestine
Bank of Ireland closed the accounts of the Irish Palestine Solidarity Campaign, citing that the bank considered Palestine a high-risk country. Sinn Féin TD Mary Lou McDonald called this outrageous and an insult to the Palestinian people.
2008 share price collapse
On 5 March 2009, the shares reached €0.12 during the day, thereby reducing the value of the company by over 99% from its 2007 high. At the 2009 AGM, shareholders criticised the performance of their auditors, PricewaterhouseCoopers.
The Central Bank told the Oireachtas Enterprise Committee that shareholders who lost their money in the banking collapse were to blame for their fate and got what was coming to them for not keeping bank chiefs in check, but did admit that the Central Bank had failed to give sufficient warning about reckless lending to property developers.
References
Sources
External links
Official Republic of Ireland Site
Official UK Site
BOI Payment Acceptance Official Site
Historical banknotes of the Bank of Ireland
Companies formerly listed on the New York Stock Exchange
Banks of Ireland
Companies listed on Euronext Dublin
Ireland
Financial services companies based in Dublin (city)
Banks of Northern Ireland
Banks established in 1783
1783 establishments in Ireland
Banks under direct supervision of the European Central Bank
Irish brands |
530786 | https://en.wikipedia.org/wiki/Codebook | Codebook | A codebook is a type of document used for gathering and storing cryptography codes. Originally codebooks were often literally books, but today codebook is a byword for the complete record of a series of codes, regardless of physical format.
Cryptography
In cryptography, a codebook is a document used for implementing a code. A codebook contains a lookup table for coding and decoding; each word or phrase has one or more strings which replace it. To decipher messages written in code, corresponding copies of the codebook must be available at either end. The distribution and physical security of codebooks presents a special difficulty in the use of codes, compared to the secret information used in ciphers, the key, which is typically much shorter.
United States National Security Agency documents sometimes use codebook to refer to block ciphers; compare their use of combiner-type algorithm to refer to stream ciphers.
Codebooks come in two forms, one-part and two-part:
In one-part codes, the plaintext words and phrases and the corresponding code words appear in the same alphabetical order, organized like a standard dictionary. Such codes are half the size of two-part codes but are more vulnerable, since an attacker who recovers some code word meanings can often infer the meaning of nearby code words. One-part codes may be used simply to shorten messages for transmission, or their security may be enhanced with superencryption methods, such as adding a secret number to numeric code words.
In two-part codes, one part is for converting plaintext to ciphertext and the other for the opposite purpose. They are usually organized like a language translation dictionary, with plaintext words (in the first part) and ciphertext words (in the second part) presented like dictionary headwords. Both forms are illustrated in the sketch below.
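The difference between the two forms, together with the superencryption of numeric code words, can be shown in a short sketch; the vocabulary and code numbers below are invented for illustration and are not taken from any historical codebook:

```python
# Minimal sketch of one-part vs. two-part codebooks (invented code words).

# One-part code: plaintext terms and numeric code words share a single
# alphabetical/numerical ordering, so one sorted table serves both
# encoding and decoding.
one_part = {"attack": "0231", "harbor": "1760", "retreat": "5410"}

# Two-part code: code words are assigned in random order, so separate
# tables are needed for encoding (plain -> code) and decoding.
encode_table = {"attack": "8821", "harbor": "0405", "retreat": "3317"}
decode_table = {code: plain for plain, code in encode_table.items()}

def superencrypt(code_word: str, additive: int) -> str:
    """Superencryption: add a secret number to a numeric code word."""
    return f"{(int(code_word) + additive) % 10000:04d}"

message = ["attack", "harbor"]
ciphertext = [superencrypt(encode_table[w], additive=1234) for w in message]
print(ciphertext)  # ['0055', '1639']
```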
The earliest known use of a codebook system was by Gabriele de Lavinde in 1379, working for the Antipope Clement VII. Two-part codebooks go back at least as far as Antoine Rossignol in the 1600s. From the 15th century until the middle of the 19th century, nomenclators (named after the nomenclator) were the most widely used cryptographic method.
Codebooks with superencryption were the most widely used cryptographic method of World War I.
The JN-25 code, used in World War II, employed a codebook of 30,000 code groups superencrypted with 30,000 random additives.
The book used in a book cipher or the book used in a running key cipher can be any book shared by sender and receiver and is different from a cryptographic codebook.
Social sciences
In social sciences, a codebook is a document containing a list of the codes used in a set of data to refer to variables and their values, for example locations, occupations, or clinical diagnoses.
Data compression
Codebooks were also used in 19th- and 20th-century commercial codes for the non-cryptographic purpose of data compression.
Codebooks are used in relation to precoding and beamforming in mobile networks such as 5G and LTE. The usage is standardized by 3GPP, for example in the document TS 38.331, "NR; Radio Resource Control (RRC); Protocol specification".
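A rough sketch of how such a precoding codebook is used follows; the four two-antenna vectors resemble a simple rank-1 codebook and are illustrative stand-ins, not copied from the 3GPP specification tables:

```python
# Codebook-based precoding sketch: the receiver picks the codeword that
# maximizes received power and reports its index back to the transmitter.
import numpy as np

codebook = [
    np.array([1,  1]) / np.sqrt(2),
    np.array([1, -1]) / np.sqrt(2),
    np.array([1,  1j]) / np.sqrt(2),
    np.array([1, -1j]) / np.sqrt(2),
]

h = np.array([0.8 + 0.3j, -0.5 + 0.9j])  # example channel estimate

# Report the index of the codeword with the largest |h . w|^2.
best = max(range(len(codebook)), key=lambda i: abs(h @ codebook[i]) ** 2)
print(best, codebook[best])
```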
See also
Block cipher modes of operation
The Code Book
References
Cryptography
Social research |
532475 | https://en.wikipedia.org/wiki/Camellia%20%28cipher%29 | Camellia (cipher) | In cryptography, Camellia is a symmetric key block cipher with a block size of 128 bits and key sizes of 128, 192 and 256 bits. It was jointly developed by Mitsubishi Electric and NTT of Japan. The cipher has been approved for use by the ISO/IEC, the European Union's NESSIE project and the Japanese CRYPTREC project. The cipher has security levels and processing abilities comparable to the Advanced Encryption Standard.
The cipher was designed to be suitable for both software and hardware implementations, from low-cost smart cards to high-speed network systems. It is part of the Transport Layer Security (TLS) cryptographic protocol designed to provide communications security over a computer network such as the Internet.
The cipher was named after the flower Camellia japonica, both because the flower is known for being long-lived and because the cipher was developed in Japan.
Design
Camellia is a Feistel cipher with either 18 rounds (when using 128-bit keys) or 24 rounds (when using 192- or 256-bit keys). Every six rounds, a logical transformation layer is applied: the so-called "FL-function" or its inverse. Camellia uses four 8×8-bit S-boxes with input and output affine transformations and logical operations. The cipher also uses input and output key whitening. The diffusion layer uses a linear transformation based on a matrix with a branch number of 5.
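The round structure can be sketched as follows. This shows only the skeleton (key whitening, Feistel rounds, an FL layer every six rounds); the F and FL functions here are placeholders, not Camellia's actual S-boxes, diffusion layer, or key schedule:

```python
# Structural sketch of a Camellia-style Feistel network (placeholders only).

def F(half: int, subkey: int) -> int:
    # Real Camellia applies key addition, four 8x8-bit S-boxes, and a
    # linear diffusion layer here; this stand-in just mixes the bits.
    return ((half ^ subkey) * 0x9E3779B97F4A7C15) % 2**64

def FL(half: int, subkey: int) -> int:
    # Stand-in for the FL logical transformation (the real FL uses AND,
    # OR, XOR and rotations on the 32-bit halves of its input).
    return half ^ subkey

def encrypt_block(left: int, right: int, subkeys: list, kw_in, kw_out):
    left ^= kw_in[0]; right ^= kw_in[1]          # input key whitening
    for i, k in enumerate(subkeys):              # 18 or 24 Feistel rounds
        left, right = right ^ F(left, k), left
        if (i + 1) % 6 == 0 and (i + 1) < len(subkeys):
            # FL / FL-inverse layer inserted every six rounds
            left, right = FL(left, k), FL(right, k)
    return right ^ kw_out[0], left ^ kw_out[1]   # swap + output whitening
```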
Security analysis
Camellia is considered a modern, secure cipher. Even with the smaller key size option (128 bits), it is considered infeasible to break by brute-force attack on the key with current technology, and there are no known successful attacks that weaken the cipher considerably.
Camellia is a block cipher which can be completely defined by minimal systems of multivariate polynomials:
The Camellia (as well as AES) S-boxes can be described by a system of 23 quadratic equations in 80 terms.
The key schedule can be described by 1,120 equations in 768 variables using 3,328 linear and quadratic terms.
The entire block cipher can be described by 5,104 equations in 2,816 variables using 14,592 linear and quadratic terms.
In total, 6,224 equations in 3,584 variables using 17,920 linear and quadratic terms are required.
The number of free terms is 11,696, which is approximately the same number as for AES.
Theoretically, such properties might make it possible to break Camellia (and AES) using an algebraic attack, such as extended sparse linearisation, if such an attack ever becomes feasible.
Patent status
Although Camellia is patented, it is available under a royalty-free license. This has allowed the Camellia cipher to become part of the OpenSSL Project, under an open-source license, since November 2006. It has also allowed it to become part of Mozilla's NSS (Network Security Services) module.
Adoption
Support for Camellia was added to the final release of Mozilla Firefox 3 in 2008 (disabled by default as of Firefox 33 in 2014, in the spirit of the "Proposal to Change the Default TLS Ciphersuites Offered by Browsers", and dropped entirely from version 37 in 2015). Pale Moon, a fork of Mozilla/Firefox, continues to offer Camellia and had extended its support to include Galois/Counter Mode (GCM) suites with the cipher, but removed the GCM modes again in release 27.2.0, citing the apparent lack of interest in them.
Later in 2008, the FreeBSD Release Engineering Team announced that the cipher had also been included in FreeBSD 6.4-RELEASE. Support for the Camellia cipher was also added to geli, the disk encryption storage class of FreeBSD, by Yoshisato Yanagisawa.
In September 2009, GNU Privacy Guard added support for Camellia in version 1.4.10.
VeraCrypt (a fork of TrueCrypt) included Camellia as one of its supported encryption algorithms.
Moreover, various popular security libraries, such as Crypto++, GnuTLS, mbed TLS and OpenSSL also include support for Camellia.
On March 26, 2013, Camellia was announced as having been selected again for adoption in Japan's new e-Government Recommended Ciphers List as the only 128-bit block cipher encryption algorithm developed in Japan. This coincided with the CRYPTREC list being updated for the first time in 10 years. The selection was based on Camellia's high reputation for ease of procurement and its security and performance features, comparable to those of the Advanced Encryption Standard (AES). Camellia remains unbroken in its full implementation; an impossible differential attack on 12-round Camellia without the FL/FL⁻¹ layers does exist.
Performance
The S-boxes used by Camellia share a similar structure to AES's S-box. As a result, it is possible to accelerate Camellia software implementations using CPU instruction sets designed for AES, such as x86 AES-NI, by affine isomorphism.
Standardization
Camellia has been certified as a standard cipher by several standardization organizations:
CRYPTREC
NESSIE
IETF
Algorithm
: A Description of the Camellia Encryption Algorithm
Block cipher mode
: Camellia Counter Mode and Camellia Counter with CBC-MAC Mode Algorithms
S/MIME
: Use of the Camellia Encryption Algorithm in Cryptographic Message Syntax (CMS)
XML Encryption
: Additional XML Security Uniform Resource Identifiers (URIs)
TLS/SSL
: Addition of Camellia Cipher Suites to Transport Layer Security (TLS)
: Camellia Cipher Suites for TLS
: Addition of the Camellia Cipher Suites to Transport Layer Security (TLS)
IPsec
: The Camellia Cipher Algorithm and Its Use With IPsec
: Modes of Operation for Camellia for Use with IPsec
Kerberos
: Camellia Encryption for Kerberos 5
OpenPGP
: The Camellia Cipher in OpenPGP
RSA-KEM in CMS
: Use of the RSA-KEM Key Transport Algorithm in the Cryptographic Message Syntax (CMS)
PSKC
: Portable Symmetric Key Container (PSKC)
Smart grid
: Internet Protocols for the Smart Grid
ISO/IEC
ISO/IEC 18033-3:2010 Information technology—Security techniques—Encryption algorithms—Part 3: Block ciphers
ITU-T
Security mechanisms and procedures for NGN (Y.2704)
RSA Laboratories
Approved cipher in the PKCS#11
TV-Anytime Forum
Approved cipher in TV-Anytime Rights Management and Protection Information for Broadcast Applications
Approved cipher in Bi-directional Metadata Delivery Protection
References
General
External links
Camellia's English home page by NTT
256 bit ciphers – CAMELLIA reference implementation and derived code
Use of the Camellia Encryption Algorithm in Cryptographic Message Syntax (CMS)
A Description of the Camellia Encryption Algorithm
Additional XML Security Uniform Resource Identifiers (URIs)
Addition of Camellia Cipher Suites to Transport Layer Security (TLS)
The Camellia Cipher Algorithm and Its Use With IPsec
Camellia Counter Mode and Camellia Counter with CBC-MAC Mode Algorithms
Modes of Operation for Camellia for Use with IPsec
Certification of Camellia Cipher as IETF standard for OpenPGP
Camellia Cipher Suites for TLS
Use of the RSA-KEM Key Transport Algorithm in the Cryptographic Message Syntax (CMS)
Portable Symmetric Key Container (PSKC)
Internet Protocols for the Smart Grid
Addition of the Camellia Cipher Suites to Transport Layer Security (TLS)
ISO/IEC 18033-3:2010 Information technology—Security techniques—Encryption algorithms—Part 3: Block ciphers
Feistel ciphers
Mitsubishi Electric products, services and standards
2000 introductions |
532495 | https://en.wikipedia.org/wiki/David%20Wheeler%20%28computer%20scientist%29 | David Wheeler (computer scientist) | David John Wheeler FRS (9 February 1927 – 13 December 2004) was a computer scientist and professor of computer science at the University of Cambridge.
Education
Wheeler was born in Birmingham, England, the second of the three children of (Agnes) Marjorie, née Gudgeon, and Arthur Wheeler, a press tool maker, engineer, and proprietor of a small shopfitting firm. He was educated at a local primary school in Birmingham and then went on to King Edward VI Camp Hill School after winning a scholarship in 1938. His education was disrupted by World War II, and he completed his sixth form studies at Hanley High School. In 1945 he gained a scholarship to study the Cambridge Mathematical Tripos at Trinity College, Cambridge, graduating in 1948. He was awarded the world's first PhD in computer science in 1951.
Career
Wheeler's contributions to the field included work on the Electronic Delay Storage Automatic Calculator (EDSAC) in the 1950s and the Burrows–Wheeler transform (published 1994). Along with Maurice Wilkes and Stanley Gill, he is credited with the invention of the subroutine around 1951 (which they referred to as the closed subroutine), and gave the first explanation of how to design software libraries; as a result, the jump-to-subroutine instruction was often called a Wheeler Jump. Wilkes published a paper in 1953 discussing relative addressing to facilitate the use of subroutines. (However, Turing had discussed subroutines in a 1945 paper on design proposals for the NPL ACE, going so far as to invent the concept of a return address stack.)
He was responsible for the implementation of the CAP computer, the first computer based on capability-based security. In cryptography, he was the designer of WAKE and the co-designer of the TEA and XTEA encryption algorithms together with Roger Needham. In 1950, with Maurice Wilkes, he used EDSAC to solve a differential equation relating to gene frequencies in a paper by Ronald Fisher, representing the first use of a computer for a problem in the field of biology.
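TEA itself is famously compact; the following is a straightforward Python transcription of the published reference encryption routine, where masking with 0xFFFFFFFF stands in for C's 32-bit unsigned arithmetic (decryption runs the same steps in reverse):

```python
# TEA block encryption (Wheeler & Needham), transcribed from the
# well-known C reference code.
MASK32 = 0xFFFFFFFF
DELTA = 0x9E3779B9  # key schedule constant derived from the golden ratio

def tea_encrypt(v0: int, v1: int, k):
    """Encrypt a 64-bit block (two 32-bit words) under a 128-bit key
    (four 32-bit words)."""
    s = 0
    for _ in range(32):  # 32 cycles, i.e. 64 Feistel rounds
        s = (s + DELTA) & MASK32
        v0 = (v0 + (((v1 << 4) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & MASK32
        v1 = (v1 + (((v0 << 4) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & MASK32
    return v0, v1

key = (0xDEADBEEF, 0x01020304, 0x05060708, 0x090A0B0C)  # example key
print([hex(x) for x in tea_encrypt(0x01234567, 0x89ABCDEF, key)])
```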
In August 1957 Wheeler married Joyce Blackler, who had used EDSAC for her own mathematical investigations as a research student from 1955. He became a Fellow of Darwin College, Cambridge in 1964 and formally retired in 1994, although he continued to be an active member of the University of Cambridge Computer Laboratory until his death.
Recognition and legacy
Wheeler was elected a fellow of the Royal Society in 1981, and received a Computer Pioneer Award in 1985 for his contributions to assembly language programming. In 1994 he was inducted as a Fellow of the Association for Computing Machinery. In 2003, he was named a Computer History Museum Fellow Award recipient "for his invention of the closed subroutine, and for his architectural contributions to ILLIAC, the Cambridge Ring, and computer testing."
The Computer Laboratory at the University of Cambridge annually holds the "Wheeler Lecture", a series of distinguished lectures named after him.
Personal life
On 24 August 1957 Wheeler married astrophysics research student Joyce Margaret Blackler. Together they had two daughters and a son. He died of a heart attack on 13 December 2004 while cycling home from the Computer Laboratory.
Quotes
Wheeler is often quoted as saying "All problems in computer science can be solved by another level of indirection." or "All problems in computer science can be solved by another level of indirection, except for the problem of too many layers of indirection." This has been called the fundamental theorem of software engineering.
Another quotation attributed to him is "Compatibility means deliberately repeating other people's mistakes."
References
External links
Oral history interview with David Wheeler, 14 May 1987. Charles Babbage Institute, University of Minnesota. Wheeler discusses projects that were run on EDSAC, user-oriented programming methods, and the influence of EDSAC on the ILLIAC, the ORDVAC, and the IBM 701. He also notes visits by Douglas Hartree, Nelson Blackman (of ONR), Peter Naur, Aad van Wijngarden, Arthur van der Poel, Friedrich Bauer, and Louis Couffignal.
Oral history interview with Gene H. Golub. Charles Babbage Institute, University of Minnesota. Golub discusses the construction of the ILLIAC computer, the work of Ralph Meager and David Wheeler on the ILLIAC design, British computer science, programming, and the early users of the ILLIAC at the University of Illinois.
1927 births
2004 deaths
Alumni of Trinity College, Cambridge
British computer scientists
British information theorists
Fellows of the Association for Computing Machinery
Fellows of the British Computer Society
Fellows of Darwin College, Cambridge
Fellows of the Royal Society
History of computing in the United Kingdom
Members of the University of Cambridge Computer Laboratory
Modern cryptographers
People educated at Hanley High School
People from Birmingham, West Midlands |