Theodore Olson
https://en.wikipedia.org/wiki/Theodore%20Olson

Theodore Bevry Olson (born September 11, 1940) is an American lawyer, practicing at the Washington, D.C. office of Gibson, Dunn & Crutcher. Olson served as United States Solicitor General (2001–2004) under President George W. Bush.
Early life
Theodore Olson was born in Chicago, the son of Yvonne Lucy (Bevry) and Lester W. Olson. He grew up in Mountain View, California, in the San Francisco Bay Area. He attended Los Altos High School, graduating in 1958. In 1962, Olson graduated cum laude from the University of the Pacific with a degree in communications and history; there he was a charter member of the Phi Kappa Tau fraternity chapter. He earned his J.D. degree from the UC Berkeley School of Law in 1965. At Berkeley, Olson was a contributor to the California Law Review and a member of the Order of the Coif.
Legal career
Early legal career: 1965 to 2000
In 1965, Olson joined the Los Angeles, California, office of Gibson, Dunn & Crutcher as an associate. In 1972, he was named a partner.
From 1981 to 1984, Olson served as an Assistant Attorney General (Office of Legal Counsel) in the Reagan administration. While serving in the Reagan administration, Olson was Legal Counsel to President Reagan during the Iran-Contra Affair's investigation phase.
Olson was also the Assistant Attorney General for the Office of Legal Counsel when President Ronald Reagan ordered the Administrator of the EPA to withhold documents subpoenaed by Congress on the ground that they contained "enforcement sensitive information." This led to an investigation by the House Judiciary Committee, which later produced a report suggesting Olson had given false and misleading testimony before a House subcommittee during the investigation. The Judiciary Committee forwarded a copy of the report to the Attorney General, requesting the appointment of an independent counsel to investigate.
Olson argued that the independent counsel statute took executive powers away from the office of the President of the United States and created a hybrid "fourth branch" of government that was ultimately answerable to no one. He argued that the broad powers of the Independent Counsel could easily be abused or corrupted by partisanship. In the resulting Supreme Court case, Morrison v. Olson, the Court disagreed with Olson, upholding the constitutionality of the independent counsel and ruling in favor of independent counsel Alexia Morrison.
He returned to private law practice as a partner in the Washington, D.C., office of his firm, Gibson Dunn.
A high-profile client in the 1980s was Jonathan Pollard, who had been convicted of selling government secrets to Israel. Olson handled the appeal to United States Court of Appeals for the D.C. Circuit. Olson argued the life sentence Pollard received was in violation of the plea bargain agreement, which had specifically excluded a life sentence. Olson also argued that the violation of the plea bargain was grounds for a mistrial. The Court of Appeals ruled (2‑1) that no grounds for mistrial existed.
Olson argued a dozen cases before the Supreme Court prior to becoming Solicitor General. In one case, he argued against federal sentencing guidelines; and, in a case in New York state, he defended a member of the press who had first leaked the Anita Hill story. Olson successfully represented presidential candidate George W. Bush in the Supreme Court case Bush v. Gore, which effectively ended the recount of the contested 2000 Presidential election.
Later legal career: 2001 to present
Olson was nominated for the office of Solicitor General by President Bush on February 14, 2001. He was confirmed by the United States Senate on May 24, 2001, and took office on June 11, 2001. In 2002, Olson argued for the federal government in the Supreme Court case Christopher v. Harbury (536 U.S. 403). Olson maintained that the government had an inherent right to lie: “There are lots of different situations where the government quite legitimately may have reasons to give false information out.” In July 2004, Olson retired as Solicitor General and returned to private practice at the Washington office of Gibson Dunn.
In 2006, Olson represented a defendant journalist in the civil case filed by Wen Ho Lee and pursued the appeal to the Supreme Court. Lee sued the federal government to discover which public officials had named him as a suspect to journalists before he had been charged. Olson wrote a brief on behalf of one of the journalists involved in the case, saying that journalists should not have to identify confidential sources, even if subpoenaed by a court. In 2011, Olson represented the National Football League Players Association in the 2011 NFL lockout.
In 2009, he joined together with President Clinton's former attorney David Boies, who was also his opposing counsel in Bush v. Gore, to bring a federal lawsuit, Perry v. Schwarzenegger, challenging Proposition 8, a California state constitutional amendment banning same-sex marriage. His work on the lawsuit earned him a place among the Time 100's greatest thinkers. In 2011, Olson and David Boies were awarded the ABA Medal, the highest award of the American Bar Association. In 2014, Olson received the Golden Plate Award of the American Academy of Achievement presented by Awards Council member Brendan V. Sullivan, Jr.
Apple Inc. hired Olson to fight a court order in the FBI–Apple encryption dispute that would have compelled Apple to unlock an iPhone; the matter ended with the government withdrawing its case.
Olson also represented New England Patriots quarterback Tom Brady in the Deflategate scandal, which ended with Brady electing not to pursue a Supreme Court appeal of his four-game suspension.
In 2017, Olson represented a group of billboard advertisers in a lawsuit against the City of San Francisco. The group challenged a city law requiring soda companies to include in their advertisements warnings that consumption of sugar-sweetened beverages is associated with serious health risks like diabetes. The suit claimed that the law is an unconstitutional restriction on commercial speech. In September 2017, a panel of the 9th Circuit Court of Appeals agreed with Olson and provisionally barred the city's mandated warnings.
In March 2018, Olson turned down an offer to represent Donald Trump in the probe of Russian interference in the 2016 election.
In November 2019, Olson represented DACA recipients in the Supreme Court case Department of Homeland Security v. Regents of the University of California. On June 18, 2020, the Supreme Court upheld the program, holding that the Trump administration had failed to comply with the Administrative Procedure Act in rescinding DACA.
Personal life
Olson has been married four times. His first marriage was to Karen Beatie, whom he met in college at the University of the Pacific. Olson's second wife was Jolie Ann Bales, an attorney and a liberal Democrat. Olson's third wife, Barbara Kay Olson (née Bracher), an attorney and conservative commentator, was a passenger aboard the hijacked American Airlines Flight 77 that crashed into the Pentagon on September 11, 2001. She had originally planned to fly to California on September 10, but delayed her departure until the next morning so she could wake up with her husband on his birthday. Before she died, she called her husband to warn him about the flight; part of the call was recorded and can still be heard today. On October 21, 2006, Olson married Lady Evelyn Booth, a tax attorney from Kentucky and a lifelong Democrat.
Politics
Olson was a founding member of the Federalist Society. He has served on the board of directors of The American Spectator magazine. Olson was a prominent critic of Bill Clinton's presidency, and he helped prepare Paula Jones's attorneys prior to their Supreme Court appearance. Olson served Rudy Giuliani's 2008 presidential campaign as judicial committee chairman. In 2012 he took part in Paul Ryan's preparation for the vice presidential debate, portraying Joe Biden. He is one of the most outspoken advocates of same-sex marriage in the Republican Party.
Executive appointment speculation
Prior to President Bush's nomination of D.C. Circuit Court of Appeals Judge John G. Roberts, Olson was considered a potential nominee to the Supreme Court of the United States to fill Sandra Day O'Connor's post. Following the withdrawal of Harriet Miers' nomination for that post, and prior to the nomination of Third Circuit Court of Appeals Judge Samuel Alito, Olson's name was again mentioned as a possible nominee.
In September 2007, Olson was considered by the Bush administration for the post of Attorney General to succeed Alberto Gonzales. The Democrats, however, were so vehemently opposed that Bush nominated Michael Mukasey instead.
XOR (disambiguation)
https://en.wikipedia.org/wiki/XOR%20%28disambiguation%29

XOR may mean:
Exclusive or (logic)
XOR cipher, an encryption algorithm
XOR gate
bitwise XOR, an operator used in computer programming
XOR (video game)
XOR, an x86 instruction
Xor DDoS
See also
Exor (disambiguation)
Initialization vector
https://en.wikipedia.org/wiki/Initialization%20vector

In cryptography, an initialization vector (IV) or starting variable (SV) is an input to a cryptographic primitive being used to provide the initial state. The IV is typically required to be random or pseudorandom, but sometimes an IV only needs to be unpredictable or unique. Randomization is crucial for some encryption schemes to achieve semantic security, a property whereby repeated usage of the scheme under the same key does not allow an attacker to infer relationships between (potentially similar) segments of the encrypted message. For block ciphers, the use of an IV is described by the modes of operation.
Some cryptographic primitives require the IV only to be non-repeating, and the required randomness is derived internally. In this case, the IV is commonly called a nonce (number used once), and the primitives (e.g. CBC) are considered stateful rather than randomized. This is because an IV need not be explicitly forwarded to a recipient but may be derived from a common state updated at both sender and receiver side. (In practice, a short nonce is still transmitted along with the message to consider message loss.) An example of stateful encryption schemes is the counter mode of operation, which has a sequence number for a nonce.
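To make the stateful pattern concrete, here is a minimal sketch of counter-mode encryption in which the nonce is derived from a shared message sequence number rather than sent as a random IV. It assumes the pycryptodome package (Crypto.Cipher.AES); the helper names encrypt_nth and decrypt_nth are illustrative.

```python
import os
from Crypto.Cipher import AES

key = os.urandom(16)  # shared secret key

def encrypt_nth(n: int, plaintext: bytes) -> bytes:
    # The 8-byte nonce is derived from the shared message number n,
    # so it never needs to be random -- only unique per key.
    nonce = n.to_bytes(8, "big")
    return AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(plaintext)

def decrypt_nth(n: int, ciphertext: bytes) -> bytes:
    nonce = n.to_bytes(8, "big")  # the receiver re-derives the same nonce
    return AES.new(key, AES.MODE_CTR, nonce=nonce).decrypt(ciphertext)
```

Both sides update n after each message; as noted above, in practice a short nonce or sequence number is still transmitted alongside the message to resynchronize after message loss.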
The IV size depends on the cryptographic primitive used; for block ciphers it is generally the cipher's block-size. In encryption schemes, the unpredictable part of the IV has at best the same size as the key to compensate for time/memory/data tradeoff attacks. When the IV is chosen at random, the probability of collisions due to the birthday problem must be taken into account. Traditional stream ciphers such as RC4 do not support an explicit IV as input, and a custom solution for incorporating an IV into the cipher's key or internal state is needed. Some designs realized in practice are known to be insecure; the WEP protocol is a notable example, and is prone to related-IV attacks.
Motivation
A block cipher is one of the most basic primitives in cryptography, and frequently used for data encryption. However, by itself, it can only be used to encode a data block of a predefined size, called the block size. For example, a single invocation of the AES algorithm transforms a 128-bit plaintext block into a ciphertext block of 128 bits in size. The key, which is given as one input to the cipher, defines the mapping between plaintext and ciphertext. If data of arbitrary length is to be encrypted, a simple strategy is to split the data into blocks each matching the cipher's block size, and encrypt each block separately using the same key. This method is not secure as equal plaintext blocks get transformed into equal ciphertexts, and a third party observing the encrypted data may easily determine its content even when not knowing the encryption key.
To hide patterns in encrypted data while avoiding the re-issuing of a new key after each block cipher invocation, a method is needed to randomize the input data. In 1980, the NIST published a national standard document designated Federal Information Processing Standard (FIPS) PUB 81, which specified four so-called block cipher modes of operation, each describing a different solution for encrypting a set of input blocks. The first mode implements the simple strategy described above, and was specified as the electronic codebook (ECB) mode. In contrast, each of the other modes describe a process where ciphertext from one block encryption step gets intermixed with the data from the next encryption step. To initiate this process, an additional input value is required to be mixed with the first block, and which is referred to as an initialization vector. For example, the cipher-block chaining (CBC) mode requires an unpredictable value, of size equal to the cipher's block size, as additional input. This unpredictable value is added to the first plaintext block before subsequent encryption. In turn, the ciphertext produced in the first encryption step is added to the second plaintext block, and so on. The ultimate goal for encryption schemes is to provide semantic security: by this property, it is practically impossible for an attacker to draw any knowledge from observed ciphertext. It can be shown that each of the three additional modes specified by the NIST are semantically secure under so-called chosen-plaintext attacks.
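The chaining just described is short enough to write out directly. The following is a minimal sketch, assuming the pycryptodome package for the raw AES block operation; it shows where the IV enters the computation and omits padding and authentication, so it is not a production implementation.

```python
import os
from Crypto.Cipher import AES

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    """CBC-encrypt a plaintext whose length is a multiple of 16 bytes."""
    block_cipher = AES.new(key, AES.MODE_ECB)  # raw block encryption, one block at a time
    previous = iv                              # the IV seeds the chain
    ciphertext = b""
    for i in range(0, len(plaintext), 16):
        mixed = xor_bytes(plaintext[i:i + 16], previous)  # mix in previous ciphertext (or IV)
        previous = block_cipher.encrypt(mixed)
        ciphertext += previous
    return ciphertext

key = os.urandom(16)
iv = os.urandom(16)  # a fresh, unpredictable IV for each message
```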
Properties
Properties of an IV depend on the cryptographic scheme used. A basic requirement is uniqueness, which means that no IV may be reused under the same key. For block ciphers, repeated IV values devolve the encryption scheme into electronic codebook mode: equal IV and equal plaintext result in equal ciphertext. In stream cipher encryption uniqueness is crucially important as plaintext may be trivially recovered otherwise.
Example: Stream ciphers encrypt plaintext P to ciphertext C by deriving a key stream K from a given key and IV and computing C as C = P xor K. Assume that an attacker has observed two messages C1 and C2 both encrypted with the same key and IV. Then knowledge of either P1 or P2 reveals the other plaintext since
C1 xor C2 = (P1 xor K) xor (P2 xor K) = P1 xor P2.
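The cancellation is easy to verify in a few lines; the sketch below stands in for a real stream cipher by using random bytes as the key stream K, since any cipher of this form reduces to C = P xor K.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn!!"
p2 = b"retreat at nine!"
k = os.urandom(16)        # key stream derived from one (key, IV) pair

c1 = xor_bytes(p1, k)     # the same (key, IV) pair is reused...
c2 = xor_bytes(p2, k)

# ...so the key stream cancels: C1 xor C2 = P1 xor P2,
# and knowledge of p1 immediately yields p2.
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)
assert xor_bytes(xor_bytes(c1, c2), p1) == p2
```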
Many schemes require the IV to be unpredictable by an adversary. This is effected by selecting the IV at random or pseudo-randomly. In such schemes, the chance of a duplicate IV is negligible, but the effect of the birthday problem must be considered. As for the uniqueness requirement, a predictable IV may allow recovery of (partial) plaintext.
Example: Consider a scenario where a legitimate party called Alice encrypts messages using the cipher-block chaining mode. Consider further that there is an adversary called Eve that can observe these encryptions and is able to forward plaintext messages to Alice for encryption (in other words, Eve is capable of a chosen-plaintext attack). Now assume that Alice has sent a message consisting of an initialization vector IV1 and starting with a ciphertext block CAlice. Let further PAlice denote the first plaintext block of Alice's message, let E denote encryption, and let PEve be Eve's guess for the first plaintext block. Now, if Eve can determine the initialization vector IV2 of the next message she will be able to test her guess by forwarding a plaintext message to Alice starting with (IV2 xor IV1 xor PEve); if her guess was correct this plaintext block will get encrypted to CAlice by Alice. This is because of the following simple observation:
CAlice = E(IV1 xor PAlice) = E(IV2 xor (IV2 xor IV1 xor PAlice)).
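Here is a sketch of Eve's test, assuming the pycryptodome package; the names alice_encrypt and p_guess are illustrative, and Eve's ability to predict IV2 is modeled by simply handing her the value before she submits her chosen plaintext.

```python
import os
from Crypto.Cipher import AES

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)  # known only to Alice

def alice_encrypt(iv: bytes, first_block: bytes) -> bytes:
    """Alice CBC-encrypts one block; Eve observes both the IV and the result."""
    return AES.new(key, AES.MODE_CBC, iv).encrypt(first_block)

iv1 = os.urandom(16)
p_alice = b"launch at dawn!!"            # Alice's 16-byte first plaintext block
c_alice = alice_encrypt(iv1, p_alice)

iv2 = os.urandom(16)                     # Eve learns/predicts the next IV
p_guess = b"launch at dawn!!"            # Eve's guess for p_alice
probe = xor_bytes(iv2, xor_bytes(iv1, p_guess))

# Alice computes E(iv2 xor probe) = E(iv1 xor p_guess);
# a match with c_alice confirms Eve's guess.
assert alice_encrypt(iv2, probe) == c_alice
```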
Depending on whether the IV for a cryptographic scheme must be random or only unique, the scheme is called either randomized or stateful. While randomized schemes always require the IV chosen by a sender to be forwarded to receivers, stateful schemes allow the sender and receiver to share a common IV state, which is updated in a predefined way at both sides.
Block ciphers
Block cipher processing of data is usually described as a mode of operation. Modes are primarily defined for encryption as well as authentication, though newer designs exist that combine both security solutions in so-called authenticated encryption modes. While encryption and authenticated encryption modes usually take an IV matching the cipher's block size, authentication modes are commonly realized as deterministic algorithms, and the IV is set to zero or some other fixed value.
Stream ciphers
In stream ciphers, IVs are loaded into the keyed internal secret state of the cipher, after which a number of cipher rounds are executed prior to releasing the first bit of output. For performance reasons, designers of stream ciphers try to keep that number of rounds as small as possible. But determining the minimal secure number of rounds is not a trivial task, and other issues, such as entropy loss unique to each cipher construction, must be considered. Related-IV and other IV-related attacks are therefore a known security issue for stream ciphers, which makes IV loading in stream ciphers a serious concern and a subject of ongoing research.
WEP IV
The 802.11 encryption algorithm called WEP (short for Wired Equivalent Privacy) used a short, 24-bit IV, leading to IV reuse under the same key, which allowed WEP to be easily cracked. Packet injection allowed WEP to be cracked in as little as several seconds. This ultimately led to the deprecation of WEP.
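To see why a 24-bit IV is far too small, here is a back-of-the-envelope birthday-bound calculation; it assumes IVs are drawn uniformly at random, whereas real WEP implementations often used counters that repeat even sooner after a reset.

```python
import math

iv_space = 2 ** 24            # WEP's 24-bit IV
packets = 5000

# Birthday bound: probability that at least two packets share an IV.
p_collision = 1 - math.exp(-packets * (packets - 1) / (2 * iv_space))
print(f"{p_collision:.2f}")   # about 0.53 -- a repeat is already likely after ~5,000 packets
```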
SSL 2.0 IV
In cipher-block chaining mode (CBC mode), the IV need not be secret, but it must be unpredictable at encryption time: in particular, for any given plaintext, it must not be possible to predict the IV that will be associated with that plaintext before the IV is generated. Additionally, for the output feedback mode (OFB mode), the IV must be unique.
In particular, the once-common practice of reusing the last ciphertext block of a message as the IV for the next message is insecure; this method was used by SSL 2.0, for example. If an attacker knows the IV (or the previous block of ciphertext) before specifying the next plaintext, they can check a guess about the plaintext of some block that was encrypted with the same key earlier. This is known as the TLS CBC IV attack, also called the BEAST attack.
See also
Cryptographic nonce
Padding (cryptography)
Random seed
Salt (cryptography)
Block cipher modes of operation
CipherSaber (RC4 with IV)
Poem code
https://en.wikipedia.org/wiki/Poem%20code

The poem code is a simple, and insecure, cryptographic method which was used during World War II by the British Special Operations Executive (SOE) to communicate with their agents in Nazi-occupied Europe.
The method works by the sender and receiver pre-arranging a poem to use. The sender chooses a set number of words at random from the poem and gives each letter in the chosen words a number. The numbers are then used as a key for a transposition cipher to conceal the plaintext of the message. The cipher used was often double transposition. To indicate to the receiver which words had been chosen, an indicator group of letters is sent at the start of the message.
Description
To encrypt a message, the agent would select words from the poem as the key. Every poem code message commenced with an indicator group of five letters, whose position in the alphabet indicated which five words of an agent's poem would be used to encrypt the message. For instance, suppose the poem is the first stanza of Jabberwocky:
’Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe.
We could select the five words THE WABE TOVES TWAS MOME, which are at positions 4, 13, 6, 1, and 21 in the poem, and describe them with the corresponding indicator group DMFAU.
The five words are written sequentially, and their letters numbered to create a transposition key to encrypt a message. Numbering proceeds by first numbering the A's in the five words starting with 1, then continuing with the B's, then the C's, and so on; any absent letters are simply skipped. In our example of THE WABE TOVES TWAS MOME, the two A's are numbered 1, 2; the B is numbered 3; there are no C's or D's; the four E's are numbered 4, 5, 6, 7; there are no G's; the H is numbered 8; and so on through the alphabet. This results in a transposition key of 15 8 4, 19 1 3 5, 16 11 18 6 13, 17 20 2 14, 9 12 10 7.
This defines a permutation which is used for encryption. First, the plaintext message is written in the rows of a grid that has as many columns as the transposition key is long. Then the columns are read out in the order given by the transposition key. For example, the plaintext "THE OPERATION TO DEMOLISH THE BUNKER IS TOMORROW AT ELEVEN RENDEZVOUS AT SIX AT FARMER JACQUES" would be written on grid paper, along with the transposition key numbers, like this:
15 8 4 19 1 3 5 16 11 18 6 13 17 20 2 14 9 12 10 7
T H E O P E R A T I O N T O D E M O L I
S H T H E B U N K E R I S T O M O R R O
W A T E L E V E N R E N D E Z V O U S A
T S I X A T F A R M E R J A C Q U E S X
The columns would then be read out in the order specified by the transposition key numbers:
PELA DOZC EBET ETTI RUVF OREE IOAX HHAS MOOU LRSS TKNR ORUE NINR EMVQ TSWT ANEA TSDJ IERM OHEX OTEA
The indicator group (DMFAU) would then be prepended, resulting in this ciphertext:
DMFAU PELAD OZCEB ETETT IRUVF OREEI OAXHH ASMOO ULRSS TKNRO RUENI NREMV QTSWT ANEAT SDJIE RMOHE XOTEA
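The whole procedure is mechanical enough to express in a short program. The sketch below (in Python, purely as an illustration; agents of course worked by hand) derives the transposition key from the five chosen words and performs a single transposition, reproducing the ciphertext above apart from the prepended indicator group.

```python
def transposition_key(words: str) -> list[int]:
    """Number the letters alphabetically, ties broken left to right."""
    letters = words.replace(" ", "")
    order = sorted(range(len(letters)), key=lambda i: (letters[i], i))
    key = [0] * len(letters)
    for rank, position in enumerate(order, start=1):
        key[position] = rank
    return key

def transpose(plaintext: str, key: list[int], pad: str = "X") -> str:
    """Write the text into rows, then read out whole columns in key order."""
    text = "".join(c for c in plaintext.upper() if c.isalpha())
    cols = len(key)
    while len(text) % cols:
        text += pad                              # fill out the last row
    rows = [text[i:i + cols] for i in range(0, len(text), cols)]
    columns = []
    for rank in range(1, cols + 1):
        col = key.index(rank)                    # the column carrying this number
        columns.append("".join(row[col] for row in rows))
    return " ".join(columns)

key = transposition_key("THE WABE TOVES TWAS MOME")
# key == [15, 8, 4, 19, 1, 3, 5, 16, 11, 18, 6, 13, 17, 20, 2, 14, 9, 12, 10, 7]

message = ("THE OPERATION TO DEMOLISH THE BUNKER IS TOMORROW "
           "AT ELEVEN RENDEZVOUS AT SIX AT FARMER JACQUES")
print(transpose(message, key))
# PELA DOZC EBET ETTI RUVF OREE IOAX HHAS MOOU LRSS ...
```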
In most uses of code poems, this process of selecting an indicator group and transposing the text would be repeated once (double transposition) to further scramble the letters. As an additional security measure, the agent would add prearranged errors into the text as security checks, such as an intentional error in every 18th letter. These allowed the receiver to verify that a message was genuine: if the agent was captured or the poem was found, an enemy transmitting in the agent's place would likely do so without the security checks.
Analysis
The code's advantage is that it provides relatively strong security while requiring no codebook.
However, the encryption process is error-prone when done by hand, and for security reasons, messages should be at least 200 words long.
The security check was usually not effective: if a code was used after being intercepted and decoded by the enemy, any security checks were revealed. Further, the security check could often be tortured out of the agent.
There are a number of other weaknesses:
Because the poem is re-used, if one message is broken by any means (including threat, torture, or even cryptanalysis), past and future messages will be readable.
If the agent used the same poem code words to send a number of similar messages, these words could be discovered easily by enemy cryptographers. If the words could be identified as coming from a famous poem or quotation, then all of the future traffic submitted in that poem code could be read. The German cryptologic units were successful in decoding many of the poems by searching through collections of poems.
Since the poems used must be memorable for ease of use by an agent, there is a temptation to use well-known poems or poems from well-known poets, further weakening the encryption (e.g., SOE agents often used verses by Shakespeare, Racine, Tennyson, Molière, Keats, etc.).
Development
When Leo Marks was appointed codes officer of the Special Operations Executive (SOE) in London during World War II, he very quickly recognized the weakness of the technique, and the consequent damage to agents and to their organizations on the Continent, and began to press for changes. Eventually, the SOE began using original compositions (thus not in any published collection of poems from any poet) to give added protection (see The Life That I Have, an example). Frequently, the poems were humorous or overtly sexual to make them memorable ("Is de Gaulle's prick//Twelve inches thick//Can it rise//To the size//Of a proud flag-pole//And does the sun shine//From his arse-hole?"). Another improvement was to use a new poem for each message, where the poem was written on fabric rather than memorized.
Gradually the SOE replaced the poem code with more secure methods. Worked-out keys (WOKs) were the first major improvement, an invention of Marks. WOKs are pre-arranged transposition keys given to agents, which made the poem unnecessary. Each message would be encrypted on one key, which was printed on special silk; once the message was sent, the key was disposed of by tearing its piece off the silk.
A project of Marks, named by him "Operation Gift-Horse", was a deception scheme aimed to disguise the more secure WOK code traffic as poem code traffic, so that German cryptographers would think "Gift-Horsed" messages were easier to break than they actually were. This was done by adding false duplicate indicator groups to WOK-keys, to give the appearance that an agent had repeated the use of certain words of their code poem. The aim of Gift Horse was to waste the enemy's time, and was deployed prior to D-Day, when code traffic increased dramatically.
The poem code was ultimately replaced with the one-time pad, specifically the letter one-time pad (LOP). In LOP, the agent was provided with a string of letters and a substitution square. The plaintext was written under the string on the pad. The pairs of letters in each column (such as P,L) indicated a unique letter on the square (Q). The pad was never reused while the substitution square could be reused without loss of security. This enabled rapid and secure encoding of messages.
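Here is a minimal sketch of the LOP table lookup described above. The substitution square is modeled as a standard Vigenère-style tableau; the actual SOE squares need not have had this layout (the text's P,L → Q pairing came from SOE's own square), so treat the table construction as an assumption.

```python
import string

ALPHA = string.ascii_uppercase

# A Vigenère-style tableau: SQUARE[r][c] is the letter in row r, column c.
SQUARE = [[ALPHA[(r + c) % 26] for c in range(26)] for r in range(26)]

def lop_encrypt(pad: str, plaintext: str) -> str:
    """Pair each pad letter with the plaintext letter written beneath it."""
    return "".join(SQUARE[ALPHA.index(k)][ALPHA.index(p)]
                   for k, p in zip(pad, plaintext))

print(lop_encrypt("XMCKL", "HELLO"))  # -> EQNVZ; each output letter uses one pad letter
```

Because the pad string is used only once and then destroyed, the ciphertext reveals nothing about the plaintext without the pad, which is what makes the one-time pad secure.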
Bibliography
Between Silk and Cyanide by Leo Marks, HarperCollins (1998) ; Marks was the Head of Codes at SOE and this book is an account of his struggle to introduce better encryption for use by field agents; it contains more than 20 previously unpublished code poems by Marks, as well as descriptions of how they were used and by whom.
See also
Book cipher
The Life That I Have (also known as Yours, arguably the most famous code poem)
Streator, Illinois
https://en.wikipedia.org/wiki/Streator%2C%20Illinois

Streator is a city in LaSalle and Livingston counties in the U.S. state of Illinois. The city is situated on the Vermilion River southwest of Chicago, in the prairie and farm land of north-central Illinois. According to the 2010 census, the population of Streator was 13,710.
History
Although settlements had occasionally existed in the area, they were not permanent. In 1824, surveyors for the Illinois and Michigan Canal, which would extend from Chicago's Bridgeport neighborhood to the Illinois River, a tributary of the Mississippi River, arrived in this area of the Vermilion River; homesteaders followed by the 1830s. In 1861, miner John O'Neill established a trading post called "Hardscrabble" (ironically, an early name for the Bridgeport neighborhood), supposedly because he watched loaded animals struggle up the river's banks. Another name for the new settlement was "Unionville".
Streator received its current name to honor Worthy S. Streator, an Ohio industrialist who financed the region's first coal mining operation. Streator received a town charter in 1868 and incorporated as a city in 1882. In 1882 Col. Ralph Plumb was elected as its first mayor. Streator's early growth was due to the coal mine, as well as a major glass manufacturer and its status as a midwestern railroad hub. Today Streator's economy is led by heavy-equipment manufacturer Vactor, food distributor U.S. Foodservice and glass bottle manufacturer Owens-Illinois.
The city is the hometown of Clyde Tombaugh, who in 1930 discovered the dwarf planet Pluto, the first object to be discovered in what would later be identified as the Kuiper belt, and of George "Honey Boy" Evans, who wrote "In the Good Old Summer Time." Streator hosts annual events including Streator Park Fest, an Independence Day celebration, the Roamer Cruise Night, and the Light Up Streator celebration. Streator is governed by a council–manager form of government. It maintains police and fire departments as well as a public works system. Its current mayor is Jimmie Lansford.
Pre-settlement
Settlement in the region began with the Kaskaskia tribe of the Illiniwek Confederation. This Native American tribe's Grand Village was located on the north bank of the Illinois River in nearby Utica, Illinois. The Kaskaskia "were hunters and gatherers, farmers, warriors and traders." The Illiniwek were the last remnants of the Mississippian culture.
French explorers Father Jacques Marquette and Louis Jolliet were the first Europeans to enter this region during a visit to the Grand Village in 1673. Marquette established a mission at the village in 1675. In 1679, French explorer Robert de La Salle ordered a fortification to be built at the site that was later known as Starved Rock. Later that year the Iroquois attacked the Kaskaskia village and its 8,000 villagers dispersed. The French and local tribes again fortified the village and created Fort St. Louis, but the Iroquois continued to attack. The settlement was abandoned by 1691.
In the years after the initial exploration, the French settled their newly claimed territory as La Louisiane. During much of the 18th century the region was sparsely populated by French, British and American fur traders. The French ceded control of the part of the La Louisiane territory east of the Mississippi River to the British at the end of the French and Indian War in 1763. Of this territory ceded by the French to Britain, the part extending down to the Ohio River was added to Britain's Quebec Province when the British Parliament passed the Quebec Act in 1774. During the American Revolutionary War (1775–83), this region that had been added to Quebec was claimed by Virginia in 1778, after a victory over the British by George Rogers Clark at Kaskaskia; Virginia named the region Illinois County.
After the war, the area was included in the territory ceded by Britain to the United States under the Treaty of Paris (1783); in 1784, Virginia ceded its claim over Illinois County to the Congress of the Confederation of the United States. This area, south of what remained of Britain's Quebec but north of the Ohio River, became part of the Northwest Territory created by the Congress on July 13, 1787. From part of the Northwest Territory, the Indiana Territory was formed by the United States Congress on July 4, 1800; from part of the Indiana Territory, the Illinois Territory was created by Congress on March 1, 1809; and from part of the Illinois Territory, the state of Illinois was admitted to the Union on December 3, 1818.
The city of Chicago served as the main impetus of growth in the area throughout the early 19th century, and more importantly to the region around Streator was the development of the Illinois and Michigan Canal in 1821. This canal connected Lake Michigan to the Mississippi River, greatly increasing shipping traffic in the region. Land speculation in areas lining the canal and rivers ensued and towns sprouted quickly. Individual settlements in the Bruce Township region started as early as 1821. In 1861, John O'Neil established the first settlement in what was to become the city of Streator when he opened a small grocery and trading business.
Coal and cityhood
Streator began with coal. Vast beds of coal lie just beneath the surface throughout much of Illinois. The demand for coal was increasing in the mid-19th century, and East Coast capitalists were willing to invest in this region. The area was originally known as Hardscrabble, "because it was a hard scrabble to cross the Vermilion River and get up the hill to where the town was first located". The town was renamed Unionville in honor of the local men who fought for the Union during the Civil War.
In 1866 Worthy S. Streator, a prominent railroad promoter from Cleveland, Ohio, financed the region's first mining operation. Streator approached his nephew Col. Ralph Plumb at a railway station in December 1865 about overseeing a mining operation in central Illinois for him and several investors. Colonel Plumb agreed and arrived in the town then called Hardscrabble in February 1866. Success of the project required a rail line near the mines. Plumb and Streator "invited" Streator's friend, then Ohio Congressman James A. Garfield, to sign on as an investor. In return, Garfield was expected to work with Robert C. Schenck, then the president of the American Central railroad, to get the railroad to "bend their lines" to Streator. The plan ultimately fell through, and the Vermilion Company instead made arrangements with the Fox River line for its needed rail service.
Included in Col. Plumb's duties was overseeing the platting and incorporation of the quickly growing area. Plumb served two terms as Streator's first mayor, and his mark on the early development of Streator was notable. The main hotel and the local opera house bore his name, and he financed the construction of the city's first high school. Earlier in his life he had served as an Ohio state representative and as an officer in the Union Army; later he served Illinois as a representative in Congress.
Streator grew rapidly due to a number of factors: the need for coal in Chicago, the desire of European immigrants to come to America, and the investments made by East Coast capitalists willing to invest in coal operations. Plumb needed laborers for his mines, but the Vermilion Coal Company was unable to afford European employment agents. Instead, it alerted steamship offices of the new job opportunities and convinced local railroads to carry notices of Streator's promise.
Land was sold to incoming miners at discounted prices as another enticement, but the company retained mineral rights to the land. In 1870, Streator's population was 1,486, but by 1880 its population tripled.
Scottish, English, Welsh, German and Irish immigrants came to the area first, followed later by scores of mostly Slovaks; Czechs, Austrians and Hungarians came in lesser numbers.
Today many of the residents are direct descendants of these original miners.
The success of the local mining operations and the introduction of the new glass making industry allowed for improvements in the living conditions and personal wealth of its miners and laborers. An 1884 survey by the Illinois Bureau of Labor Statistics showed that 20 percent of Streator's miners owned their houses. Labor movements like the Miners National Association and the United Mine Workers of America began to flourish, as did ethnic churches and social institutions such as the Masons and Knights of Pythias. In his 1877 History of LaSalle County, author H.F. Kett states:
Perhaps no city...in Illinois, outside of the great city of Chicago, presents an instance of such rapid and substantial growth as the city of Streator. From a single small grocery house... the locality has grown to be a city of 6,000 prosperous and intelligent people. Churches, school-houses, large, substantial business houses and handsome residences, with elegant grounds and surroundings, now beautify the waste of ten years ago, while the hum of machinery and thronged streets are unmistakable evidences of business importance and prosperity.
In addition to coal, the area around Streator contained rich clay and shale, which gave rise to Streator's brick, tile and pipe industries. In time, these supplanted coal as Streator's leading exports, but Streator was best known for its glass bottle industry. In the early 20th century Streator held the title of "Glass Manufacturing Capital of the World." Streator continued to flourish for much of the early 20th century. Ultimately the demand for coal was replaced by the growing need for gas and oil. Many of the underground mines in Streator closed during the 1920s, and the last of the mines shut down in 1958. While other areas of LaSalle County continued to grow, Streator's population peaked at about 17,000 residents in 1960 and has since declined. Many of the original downtown buildings have been demolished, but few have been replaced. Another reason for static growth is Streator's distance from any major Interstate highway: when the federal highway system was built in the 1950s and 1960s, no interstate was routed near the city, and Streator lies some distance from Interstate 55, Interstate 80 and Interstate 39.
2007 Comprehensive plan
Streator and the North Central Illinois Council of Governments (NCICG) finalized the Streator Comprehensive Plan in February 2007. The plan, if approved, is a roadmap for civic, transportation, housing, commercial and recreational improvements in the city through 2027.
Geography
Streator is located at (41.1208668, −88.8353520).
According to the 2010 census, 99.8% of Streator's total area is land and 0.2% is water.
Topography and geology
Streator lies within the Vermilion River/Illinois River Basin Assessment Area (VRAA) defined by the watershed of the Vermilion River, a major tributary to the Illinois River in Central Illinois, an area of mostly flat prairie. The topography of the basin is a complex collection of buried valleys, lowlands and uplands carved by repeated episodes of continental glaciation.
Underneath the topsoil, the region's bedrock contains vast amounts of coal. About 68% of Illinois has coal-bearing strata of the Pennsylvanian geologic period. According to the Illinois State Geological Survey, 211 billion tons of bituminous coal are estimated to lie under the surface, having a total heating value greater than the estimated oil deposits in the Arabian Peninsula. However, this coal has a high sulfur content, which causes acid rain.
Streator's coal mining history closely parallels Illinois', with a great push in coal production from 1866 until the 1920s, when many of the mines closed. The low-sulfur coal of the Powder River Basin and the growing demands for oil caused a decline in demand for Streator's high-sulfur coal.
The St. Peter sandstone is an Ordovician formation in the Chazyan stage of the Champlainian series. This layer runs east–west from Illinois to South Dakota. The stone consists of 99.44% silica, which is used for the manufacture of glass. Its purity is especially important to glassmakers. Streator, which lies within the St. Peter sandstone formation, has mined this mineral since the late 19th century for use in its glass manufacturing industries.
Climate
Streator has a continental climate, influenced by the Great Lakes, with cold winters and warm summers.
Streator receives an average annual snowfall of 22.0 in (55.88 cm). The city's highest recorded temperature was set in July 1936, and its lowest in January 1985.
1951 flood
The worst flood in Streator's history occurred in 1951, when the Vermilion River reached record levels.
2010 tornado
At approximately 8:50 pm (CST) on June 5, 2010, an EF2 tornado swept through southern Streator. The tornado initially touched down east of Magnolia, causing EF0 and EF1 damage as it traveled east; EF2 damage began as the tornado passed East 15th Road. No fatalities were reported, but there were reports of leveled houses and extensive damage throughout the area. The National Weather Service reported that there were two tornadoes; the second was reported to have touched down one mile west of Streator, with a base of 50 feet.
Demographics
As of the census of 2010, there were 13,710 people, 5,621 households, and 3,481 families residing in the city, with 6,271 housing units. The racial makeup of the city was 91.2% White, 2.5% African American, 0.3% Native American, 0.4% Asian, 0.01% Pacific Islander, 3.5% from other races, and 2.1% from two or more races. Hispanic or Latino residents of any race were 10.4% of the population.
There were 5,621 households, out of which 26.7% had children under the age of 18 living with them, 41.8% were married couples living together, 14.3% had a female householder with no husband present, and 38.1% were non-families. 33.1% of all households were made up of individuals, and 16.3% had someone living alone who was 65 years of age or older. The average household size was 2.41 and the average family size was 3.04.
In the city, the age distribution of the population shows 27% under the age of 19, 6.0% from 20 to 24, 22.7% from 25 to 44, 26.5% from 45 to 64, and 17.7% who were 65 years of age or older. The median age was 39.9 years. For every 100 females, there were 91.7 males. For every 100 females age 18 and over, there were 87.3 males.
The median income for a household in the city was $39,597, and the median income for a family was $46,417. Males had a median income of $34,932 versus $24,621 for females. The per capita income for the city was $19,980. About 9.4% of families and 14.8% of the population were below the poverty line, including 21.4% of those under age 18 and 9.5% of those age 65 or over.
Streator is a principal city of the Ottawa Micropolitan Statistical Area, which was the tenth-most populous Micropolitan Statistical Area in the United States as of 2009. The small Livingston County portion of Streator is part of the Pontiac Micropolitan Statistical Area.
Historically, the population of LaSalle County has increased 75% between 1870 and 1990, while the statewide population has grown 350%.
Economy
Streator's economic history has been tied with its natural resources. Coal was the initial catalyst of the city's economy from 1866 until the late 1920s. As the community matured, silica deposits provided the resource for Streator's next industry leader: glass-container manufacturing. While the coal industry eventually died, glass manufacturing remains a presence in Streator. Agriculture and related agri-business in the farmlands of LaSalle County and nearby Livingston County are also a strong influence in Streator's economic engine. Though manufacturing provides the greatest share of earnings, the service industry now accounts for the largest share of jobs.
Coal
Coal production in LaSalle County and Illinois peaked in the 1910s. Wyoming's Powder River Basin coal reserves, which have a much lower sulfur content, were discovered in 1889, with full-scale mining beginning in the 1920s.
Glass manufacturing
Glassmaking and, more specifically, glass blowing was a highly skilled craft. Most of America's glassblowers came from Europe, or were trained there, and many of Streator's immigrant coal miners were trained in glass blowing. High-grade silica, the main ingredient in glass, was abundant in the Streator region and nearby Ottawa. The combination of silica, coal to fire the furnaces and skilled craftsmen was a perfect match for Streator's second major industry, which began in 1887 with the Streator Bottle and Glass Company. Other companies soon followed, including Thatcher Glass Manufacturing Corp (later Anchor Glass Containers), which began manufacturing milk bottles in 1909; the American Bottle Company in 1905; the Streator Cathedral Glass Company in 1890; and Owens-Illinois. Through the 20th century Streator was known as the "Glass Container Capital of the World."
Major employers
Three of Streator's largest companies are also among its longest-lasting. Vactor Manufacturing began in 1911 as the Myers-Sherman Company, manufacturing milking machines and conveyors for the agricultural industry. In the 1960s Myers-Sherman patented a sewer cleaning vehicle for the municipal public works market, and the company was renamed Vactor when it became a subsidiary of the Federal Signal Corporation. It is the world's leading producer of heavy-duty sewer cleaning equipment and the second-largest employer in Streator, with 530 employees.
Owens-Illinois' Streator plant produces Duraglas XL bottles, a lightweight, stronger beer bottle for the Miller Brewing Company. The Owens Bottle Company opened in Streator in 1916. Production peaked in the 1960s with 3,500 employees working in its facility. Today it is Streator's fifth-largest employer, with 210 employees. In 2006, the plant was honored by the Miller Brewing Company for producing 650 million bottles for the brewer.
St. Mary's Hospital is the city's largest employer, with 550 employees. In late 2015, the OSF Healthcare system purchased the hospital from HSHS Medical Group; OSF Healthcare's plans for the hospital remain undetermined. Founded in 1886 by the Hospital Sisters of the Third Order of St. Francis, this 251-bed hospital serves Streator and its outlying areas.
Streator was briefly home to the Erie Motor Carriage Company (which became Barley Motor Car Co.).
Current products of Streator include building and paving brick, milk, soda bottles, auto parts, sewer pipe, clothing, drain tile, auto truck dump bodies, and hydraulic hoists. Its major agricultural crops include corn and soybeans.
Arts, culture and media
Streator's parks and events reflect its heritage and prairie locale. A number of its residents have distinguished themselves in the art world.
Arts
The Community Players of Streator offer summer stock theatre performances each year at the William C. Schiffbauer Center for the Performing Arts at Engle Lane Theatre.
The Majestic Theatre, an art deco style movie house, originally opened in 1907 as a vaudeville house. It went through many changes, openings, and closings throughout its history, most recently reopening in 2002 to show recently released movies and host live musical acts. It has since closed due to deterioration of the building.
The Walldogs painted 17 murals in the summer of 2018. The downtown now is home to more than 20 murals.
Museums and historical buildings
The Streatorland Historical Society Museum houses displays of Streator history and memorabilia of some of its famous citizens. One of the displays is a tribute to the Free Canteen. The Canteen was a group of local volunteers who served over 1.5 million soldiers during World War II who briefly stopped at the city's old Santa Fe Train Depot while traveling by troop trains. Other features include a homemade telescope used by astronomer Clyde Tombaugh and a Burlington Northern caboose rail car.
During World War II the Streator Santa Fe Train Depot was a busy way-station for millions of soldiers and sailors who passed through the town on the way to or from training for the war. Beginning in 1943, the Streator Parents Service Club, a group of parents of veterans of the war, created the Streator Free Canteen. The volunteers handed out sandwiches and coffee and presented a friendly face to the servicemen during their stopover in Streator. During the 2½ years that the canteen operated, volunteers hosted over 1.5 million servicemen and women. Thirty other service groups from Streator joined to assist the Parents Service Club as well as 43 other organizations throughout the central-Illinois region. On Veterans Day, November 10, 2006 a bronze statue commemorating the "Coffee Pot Ladies" of Streator was dedicated at the Santa Fe Railroad Station.
The Streator Public Library was made possible with a $35,000 grant from Andrew Carnegie. With its two-story high domed ceiling, Ionic columns and oak staircases, it was considered too extravagant by critics when it opened in 1903. The library was added to the National Register of Historic Places in 1996. The Ruffin Drew Fletcher House, located on East Broadway Street, is an example of Stick style architecture. It was placed on the National Register of Historic Places in August 1991. The Silas Williams House is a Queen Anne style home built in 1893. It was placed on the National Register of Historic Places in June 1976. Founded in 1883, St. Stephens Catholic Church was the first Slovak Catholic church in the United States. In September 2010, the four Roman Catholic churches in Streator were consolidated into one new parish named St. Michael the Archangel. Currently all masses are conducted at St. Stephen's Church, and discussions are continuing to decide if a new church will be built or if one of the existing churches will be rebuilt.
Among Streator's other notable buildings are the ornate Bauhaus-inspired National Guard Armory near the Vermilion River and the town's turn-of-the-20th-century City Hall on Park Street (now a business). These facilities are accessible to the public, with some limitations. Streator is also home to many private residences of significant historical interest and value, including the Kennedy Home on Pleasant Avenue.
Events
The Streator Food Truck Festival is held annually in May.
Park Fest is held during the Memorial Day weekend through Sunday. Park Fest activities are held in City Park (the main public park in the downtown Streator area).
A Memorial Day observance is held on the morning of Memorial Day at the Veterans Plaza at the southeast part of City Park.
Streator is a designated stop each year in the annual "Heritage Tractor Adventure" along the Illinois and Michigan Canal. This three-day tractor ride/rally attracts hundreds of antique tractor owners.
The annual Fourth of July celebration runs for over four days with events throughout the city, with most of the events held in City Park; the park-based events include a carnival, 5K run and a talent contest. Other Fourth of July events include the annual parade which runs through downtown and the fireworks display which is held at Streator High School.
"Roamer Cruise Night" is an annual cruise / car show held on Labor Day weekend in the downtown district that attracts over 600 cars and 18,000 attendees. Special features of the Cruise Night include a display of a Roamer which was built at a factory in Streator in 1917. Cruise Night was rained out in 2011 and 2012, leaving Streatorites hungry for 2013.
A Veterans Day observance is held on the morning of Veterans Day at Veterans Plaza.
Streator also has an annual event called Light Up Streator held the first Saturday after Thanksgiving. Light Up Streator is a group of volunteers who place holiday decorations throughout the Streator area, most notably in City Park.
The Keeping Christmas Close to Home Parade of Lights is held the weekend after Thanksgiving in downtown Streator.
Media
Streator has one daily newspaper, The Times. The paper, published by the Small Newspaper Group Inc. in nearby Ottawa, provides local news for the Ottawa, IL Micropolitan Statistical Area. Streator's original daily, The Times-Press News, merged with the Ottawa Daily Times in 2005. Television broadcasts are provided by stations in nearby Bloomington and Peoria, and local cable providers also carry Chicago stations. Streator has three local radio stations: WSPL 1250 AM, which has a news/talk format; WSTQ 97.7 FM, which has a contemporary pop format; and WYYS 106.1 FM, which broadcasts a classic hits format. The three stations are owned by the Mendota Broadcast Group, Inc. One of the longest-running programs on WSPL was "Polka Party", which was broadcast live on Saturday mornings for more than thirty years until its host, Edward Nowotarski, retired in 2001.
Parks and recreation
The city of Streator maintains eight local parks and one public golf course.
Spring Lake Park is a city-owned park west of the Streator city limits (and north of Illinois Route 18). The park has two creeks, waterfalls and six trails. It offers hiking, horseback riding and picnicking. In September 2008, Spring Lake Park received the Governor's Hometown Award from the state of Illinois in recognition of its volunteer-led restoration project.
City Park is the main park in Streator's downtown area; a section of Streator City Park called Veterans Plaza contains memorials bearing the names of citizens who gave their lives for their country in the Civil War and in later wars. The park is also home to the Reuben G. Soderstrom Plaza, a monument dedicated to former Illinois AFL-CIO President and Streator native Reuben Soderstrom. City Park is the site of annual events including Streator Park Fest (successor to Heritage Days), held on Memorial Day weekend; the Roamer Cruise Night, held on Labor Day weekend; and the annual Light Up Streator celebration and display held each November. Patriotic observances use the park's Veterans Plaza on Memorial Day and Veterans Day. The park is also the site of other events, including concerts. In 2012 construction began, in the southwest quadrant of City Park, on a new venue suitable for concerts; it was later announced that this would be called Plumb Pavilion (in honor of Streator's first mayor, Ralph Plumb).
Marilla Park, located at the northeast end of Streator, is among Streator's larger parks, and includes picnic areas and a playground area. In 2012, a Disc Golf Course was added to Marilla Park.
Other city parks in Streator include Oakland Park, Central Park, Bodznick Park, Merriner Park, and Southside Athletic Park.
Local sports
Organized local sports activities include a youth football league, the American Youth Soccer Organization, Little League Baseball, and American Legion Baseball. The Streator High School "Bulldogs" and Woodland High School "Warriors" participate in the Interstate Eight Conference and the Tri-County Conference, respectively, both part of the Illinois High School Association. Local golf is played at the city-owned Anderson Field Municipal Golf Course and the Eastwood Golf Course.
The Streator Zips won the Illinois State Championship for Mickey Mantle baseball in both 2003 and 2004.
Streator was represented in the Illinois–Missouri League, an American minor league baseball league, from 1912 through 1914. The Streator Speedboys had a record of 45–65 and finished last in 1912. In 1913, The Streator Boosters were in fourth place with a 30–57 record, and in 1914 the Boosters had a record of 40–48, again finishing in fourth place. The Streator Boosters competed in the Bi-State League in 1915. When the league disbanded in the middle of the season, the Streator Boosters were in first place with a record of 30 wins and 18 losses.
In 2008, the Streator Reds, an age 16-and-under team, won the Senior League Illinois State Tournament defeating the team from Burbank, Illinois. The Reds then qualified for the Senior League Regional Tournament in Columbia, Missouri, where they were eliminated in the first round with a 2–2 record.
Three local residents have had notable success in professional sports. Doug Dieken played 14 seasons for the Cleveland Browns in the National Football League from 1971 to 1984. He was selected to the Pro Bowl in 1980, and named a "Cleveland Brown Legend" by the team in 2006. He serves as a color commentator on Browns radio broadcasts. Bob Tattersall (1924–1971) was known as the "King of Midget Car Racing" in the 1950s and 1960s in both the US and Australia. Tattersall had a long list of victories, including the 1960, 1962, 1966 and 1969 Australian Speedcar Grand Prix (Midgets are known as Speedcars in Australia), while his crowning achievement was winning the 1969 USAC National Midget Series. He died of cancer at his home in Streator in 1971. In 2009, Clay Zavada made his professional debut as a relief pitcher for the Arizona Diamondbacks. Other local residents who have enjoyed careers in Major League Baseball include Andy Bednar (pitcher, Pittsburgh Pirates), Rube Novotney (catcher, Chicago Cubs) and Adam Shabala (outfielder, San Francisco Giants).
The Streator 10-year-old All-Stars took home the city's first Little League state baseball championship in 2002, defeating Chicago (Ridge Beverly). After winning district and sectional championships, the state tournament finals were held in Utica, Illinois. The 12-year-old team from Streator competed in the World Series in 2012.
Outdoor recreation
Outdoor recreation in the Streator area centers on the Vermilion River, Spring Lake Park (located on the west side of the city) and nearby state parks. Fishing, kayaking and canoeing are popular activities along the Vermilion River. Matthiessen State Park and Starved Rock State Park offer hiking, hunting, camping and other amenities in their geologically diverse areas.
Law and government
The city operates under a city manager form of government. Elected officials include its mayor, Jimmie Lansford, and the four members of the city council: Brian Crouch, Ed Brozak, Joe Scarbeary, and Tara Bedei. The council meets monthly.
The Streator Police Department is headquartered in City Hall. The first chief of police was Martin Malloy (1840–1911). Led by Chief of Police Kurt Pastirik, the department currently has a staff of 19 patrol officers, one school resource officer, three investigators, and one administrative assistant, who together carry out the city's law enforcement operations. The city's 911 center has since been consolidated with Livingston County Dispatch.
The Streator Fire Department is headed by Chief Garry Bird and staffed by fifteen firefighters, who work a traditional 24-on/48-off schedule.
Streator's Public Works Department oversees the maintenance and operation of the city's public infrastructure including roadways, sanitation, parks and fleet.
The unincorporated portions of South Streator are served by the Livingston County Sheriff's Office in Pontiac. The unincorporated portions of Otter Creek and Eagle Townships in LaSalle County are served by LaSalle County Sheriff's Office in Ottawa. Fire protection services for unincorporated portions of Streator are provided by Reading Township Fire Department in the south, east and west. Grand Ridge Fire Department covers fire services for the northern unincorporated areas.
Streator is in Illinois' 16th congressional district, currently represented by Adam Kinzinger. The city is in the 38th legislative district and the 76th representative district, represented by Senator Sue Rezin and Representative Jerry Lee Long, respectively.
Education
Streator is served by three school districts. Streator Elementary School District operates two elementary schools, Centennial Elementary School and Kimes Elementary School, and one junior high school, Northlawn Junior High School. Streator Township High School District serves a single school, Streator Township High School. Woodland Community Unit School District #5, which serves the Livingston County portion of Streator, operates one high school, Woodland High School, and one combined elementary/junior high school. Streator has one parochial elementary school, St. Anthony's Catholic School, now known as St. Michael the Archangel. Nearby Illinois Valley Community College is located in Oglesby, Illinois.
The Carnegie Foundation funded the Streator Public Library, which opened in 1903. It was added to the National Register of Historic Places in 1996.
Infrastructure
Health care
St. Mary's Hospital provides medical service to the Streator region. It is an affiliate of the Hospital Sisters Health System (HSHS). Advanced Medical Transport of Central Illinois, headquartered in Peoria, has a satellite office in Streator and provides paramedic advanced life support. Lifeflight from St. Francis Medical Center in Peoria, Illinois, and MedForce from Colona, Illinois, provide aeromedical transportation for more advanced care from St. Mary's Hospital. In January 2010, St. Mary's Hospital announced the addition of SAINTS Flight 2, a helicopter transport service and the first to be dedicated to the Illinois Valley. SAINTS Flight 2 was owned and operated by Air Methods, which based its helicopter on the helipad at St. Mary's Hospital. On October 1, 2010, Air Methods announced it would discontinue SAINTS Flight 2 because flight volume was insufficient to sustain operations.
Transportation
Streator is served by Illinois State Routes 23 and 18, which intersect downtown. The city is relatively isolated, lying at least a 15-minute drive from the nearest US Interstate highway. Rail service is provided by the Norfolk Southern Railway, BNSF Railway and the Illinois Railway. The city of Streator does not provide a mass transit system. Amtrak and the AT&SF previously served Streator at Streator Station.
Notable people
Burt Baskin, who co-established the Baskin-Robbins chain of ice cream parlors with Irv Robbins, was born in Streator in 1913.
Kevin Chalfant, lead singer of The Storm and former live performance member of Journey.
Mary Lee Robb Cline, actress, known as Marjorie in the radio program The Great Gildersleeve
Phillipe Cunningham, Minneapolis City Council Member, one of the first openly transgender men to be elected to public office in the United States. He was born in Streator and lived there until the age of 18.
"Mad Sam" DeStefano, infamous Chicago Outfit gangster, was born in Streator
Doug Dieken, an offensive tackle who played 14 seasons in the National Football League with the Cleveland Browns, was born in Streator in 1949
Doriot Anthony Dwyer, flutist, born in Streator (1922), first woman named Principal Chair of a major US Orchestra (Boston Symphony Orchestra in 1952)
Thurlow Essington (1886–1964), Illinois state senator, lawyer and mayor of Streator; was born in Streator
George "Honey Boy" Evans, songwriter (In the Good Old Summer Time)
Fred J. Hart (1908–1983), Illinois state legislator and businessman
Edward Hugh Hebern, an early inventor of rotor machines, devices for encryption
Dick Jamieson, pro football coach, was born in Streator
William Jungers, chairman of Department of Anatomical Sciences and professor in Interdepartmental Doctoral Program in Anthropological Sciences at Stony Brook University Medical Center.
Patrick Lucey, Illinois Attorney General, Mayor of Streator, was born in Streator
Clarence E. Mulford, author (Hopalong Cassidy)
Ed Plumb, musical director for Disney's Fantasia and score composer for Bambi, multiple Academy Award nominee
Ralph Plumb, first mayor of Streator (1882–1885), and a U.S. Representative from Illinois (1885–1889)
Ken Sears, catcher for the New York Yankees and St. Louis Browns; born in Streator
Adam Shabala, outfielder for the San Francisco Giants
Reuben G. Soderstrom, President of the Illinois State Federation of Labor and Illinois AFL-CIO from 1930 to 1970. He moved to Streator in 1901 and resided in the city until his death in 1970.
Clyde Tombaugh, astronomer, discovered Pluto in 1930. He was born in Streator in 1906 and lived there until his family moved to Burdett, Kansas in 1922.
Clay Zavada, pitcher for the St. Louis Cardinals and Arizona Diamondbacks
References
Further reading
External links
Official City Website
Streator Area Chamber of Commerce and Industry
Cities in Illinois
Cities in Livingston County, Illinois
Ottawa, IL Micropolitan Statistical Area
Cities in LaSalle County, Illinois
Populated places established in 1868
1868 establishments in Illinois |
126844 | https://en.wikipedia.org/wiki/Internationalization%20and%20localization | Internationalization and localization | In computing, internationalization and localization (American) or internationalisation and localisation (British English), often abbreviated i18n and L10n, are means of adapting computer software to different languages, regional peculiarities and technical requirements of a target locale. Internationalization is the process of designing a software application so that it can be adapted to various languages and regions without engineering changes. Localization is the process of adapting internationalized software for a specific region or language by translating text and adding locale-specific components. Localization (which is potentially performed multiple times, for different locales) uses the infrastructure or flexibility provided by internationalization (which is ideally performed only once before localization, or as an integral part of ongoing development).
Naming
The terms are frequently abbreviated to the numeronyms i18n (where 18 stands for the number of letters between the first i and the last n in the word internationalization, a usage coined at Digital Equipment Corporation in the 1970s or 1980s) and L10n for localization, due to the length of the words. Some writers have the latter acronym capitalized to help distinguish the two.
Some companies, like IBM and Oracle, use the term globalization, g11n, for the combination of internationalization and localization.
Microsoft defines internationalization as a combination of world-readiness and localization. World-readiness is a developer task, which enables a product to be used with multiple scripts and cultures (globalization) and to have its user interface resources separated into a localizable format (localizability, abbreviated to L12y).
Hewlett-Packard created a system for HP-UX called "National Language Support" or "Native Language Support" (NLS) to produce localizable software.
Scope
According to Software without frontiers, the design aspects to consider when internationalizing a product are "data encoding, data and documentation, software construction, hardware device support, user interaction"; while the key design areas to consider when making a fully internationalized product from scratch are "user interaction, algorithm design and data formats, software services, documentation".
Translation is typically the most time-consuming component of language localization. This may involve:
For film, video, and audio, translation of spoken words or music lyrics, often using either dubbing or subtitles
Text translation for printed materials, digital media (possibly including error messages and documentation)
Potentially altering images and logos containing text to contain translations or generic icons
Differences in translation length and in character sizes (e.g. between Latin alphabet letters and Chinese characters), which can cause layouts that work well in one language to work poorly in others
Consideration of differences in dialect, register or variety
Writing conventions like:
Formatting of numbers (especially decimal separator and digit grouping)
Date and time format, possibly including use of different calendars (a formatting sketch follows this list)
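Much of this formatting is mechanical enough that libraries can apply it from locale data. A minimal sketch, assuming the third-party Babel library (whose formatting rules come from the Unicode CLDR discussed below); the locale identifiers are illustrative:

```python
# Locale-sensitive number and date formatting with Babel (pip install Babel).
from datetime import date

from babel.dates import format_date
from babel.numbers import format_decimal

n = 1234567.89
d = date(2021, 3, 4)

# Same number, different decimal separator and digit grouping per locale.
print(format_decimal(n, locale="en_US"))  # 1,234,567.89
print(format_decimal(n, locale="de_DE"))  # 1.234.567,89

# Same date, different ordering and month names per locale.
print(format_date(d, format="long", locale="en_US"))  # March 4, 2021
print(format_date(d, format="long", locale="fr_FR"))  # 4 mars 2021
```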
Standard locale data
Computer software can encounter differences above and beyond straightforward translation of words and phrases, because computer programs can generate content dynamically. These differences may need to be taken into account by the internationalization process in preparation for translation. Many of these differences are so regular that a conversion between languages can be easily automated. The Common Locale Data Repository by Unicode provides a collection of such differences. Its data is used by major operating systems, including Microsoft Windows, macOS and Debian, and by major Internet companies or projects such as Google and the Wikimedia Foundation. Examples of such differences include:
Different "scripts" in different writing systems use different characters – a different set of letters, syllograms, logograms, or symbols. Modern systems use the Unicode standard to represent many different languages with a single character encoding.
Writing direction is left-to-right in most European languages, right-to-left in Hebrew and Arabic, or both in boustrophedon scripts, and optionally vertical in some Asian languages.
Complex text layout, for languages where characters change shape depending on context
Capitalization exists in some scripts and not in others
Different languages and writing systems have different text sorting rules
Different languages have different numeral systems, which might need to be supported if Western Arabic numerals are not used
Different languages have different pluralization rules, which can complicate programs that dynamically display numerical content (a plural-handling sketch follows this list). Other grammar rules might also vary, e.g. genitive case.
Different languages use different punctuation (e.g. quoting text using double-quotes (" ") as in English, or guillemets (« ») as in French)
Keyboard shortcuts can only make use of buttons that are actually on the keyboard layout for which the software is being localized. If a shortcut corresponds to a word in a particular language (e.g. Ctrl-s stands for "save" in English), it may need to be changed.
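Plural handling in particular is hard to bolt on after the fact, because each language contributes its own rules. A minimal sketch using Python's standard gettext module; the "myapp" domain and locale directory are hypothetical, and the plural rules themselves would live in each language's compiled catalog:

```python
# Plural-aware message selection with the standard gettext module.
import gettext

# Fall back to the untranslated strings if no catalog is installed.
t = gettext.translation("myapp", localedir="locale",
                        languages=["pl"], fallback=True)

def report(n: int) -> str:
    # ngettext picks the correct plural form for the catalog's language;
    # Polish, for example, distinguishes one/few/many forms.
    return t.ngettext("%d file deleted", "%d files deleted", n) % n

for count in (1, 3, 5):
    print(report(count))
```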
National conventions
Different countries have different economic conventions, including variations in:
Paper sizes
Broadcast television systems and popular storage media
Telephone number formats
Postal address formats, postal codes, and choice of delivery services
Currency (symbols, positions of currency markers, and reasonable amounts due to different inflation histories) – ISO 4217 codes are often used for internationalization
System of measurement
Battery sizes
Voltage and current standards
In particular, the United States and Europe differ in most of these cases. Other areas often follow one of these.
Specific third-party services, such as online maps, weather reports, or payment service providers, might not be available worldwide from the same carriers, or at all.
Time zones vary across the world, and this must be taken into account if a product originally only interacted with people in a single time zone. For internationalization, UTC is often used internally and then converted into a local time zone for display purposes.
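A minimal sketch of that convention in Python (zoneinfo is in the standard library from Python 3.9); the stored value and zone names are illustrative:

```python
# Store and compute timestamps in UTC; convert to a zone only for display.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

event_utc = datetime(2021, 7, 1, 12, 0, tzinfo=timezone.utc)  # stored value

for tz in ("America/Chicago", "Europe/Paris", "Asia/Tokyo"):
    print(tz, event_utc.astimezone(ZoneInfo(tz)).isoformat())
```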
Different countries have different legal requirements, meaning for example:
Regulatory compliance may require customization for a particular jurisdiction, or a change to the product as a whole, such as:
Privacy law compliance
Additional disclaimers on a web site or packaging
Different consumer labelling requirements
Compliance with export restrictions and regulations on encryption
Compliance with an Internet censorship regime or subpoena procedures
Requirements for accessibility
Collecting different taxes, such as sales tax, value added tax, or customs duties
Sensitivity to different political issues, like geographical naming disputes and disputed borders shown on maps (e.g., India has proposed a bill that would make failing to show Kashmir and other areas as intended by the government a crime)
Government-assigned numbers have different formats (such as passports, Social Security Numbers and other national identification numbers)
Localization also may take into account differences in culture, such as:
Local holidays
Personal name and title conventions
Aesthetics
Comprehensibility and cultural appropriateness of images and color symbolism
Ethnicity, clothing, and socioeconomic status of people and architecture of locations pictured
Local customs and conventions, such as social taboos, popular local religions, or superstitions such as blood types in Japanese culture vs. astrological signs in other cultures
Business process for internationalizing software
In order to internationalize a product, it is important to look at the variety of markets that the product will foreseeably enter. Details such as field length for street addresses, unique address formats, the ability to make the postal code field optional for countries that do not have postal codes, or the state field for countries that do not have states, plus the introduction of new registration flows that adhere to local laws, are just some of the examples that make internationalization a complex project. A broader approach takes into account cultural factors regarding, for example, the adaptation of the business process logic or the inclusion of individual cultural (behavioral) aspects.
As early as the 1990s, companies such as Bull used machine translation (Systran) on a large scale for all their translation activity: human translators handled pre-editing (making the input machine-readable) and post-editing.
Engineering
Whether re-engineering existing software or designing new internationalized software, the first step of internationalization is to split each potentially locale-dependent part (whether code, text or data) into a separate module. Each module can then either rely on a standard library/dependency or be independently replaced as needed for each locale.
The current prevailing practice is for applications to place text in resource strings which are loaded during program execution as needed. These strings, stored in resource files, are relatively easy to translate. Programs are often built to reference resource libraries depending on the selected locale data.
The storage for translatable and translated strings is sometimes called a message catalog as the strings are called messages. The catalog generally comprises a set of files in a specific localization format and a standard library to handle said format. One software library and format that aids this is gettext.
Thus to get an application to support multiple languages one would design the application to select the relevant language resource file at runtime. The code required to manage data entry verification and many other locale-sensitive data types also must support differing locale requirements. Modern development systems and operating systems include sophisticated libraries for international support of these types, see also Standard locale data above.
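A minimal sketch of that runtime selection using gettext, mentioned above; the "myapp" domain and directory layout are hypothetical (real deployments ship compiled .mo files under locale/<lang>/LC_MESSAGES/):

```python
# Select and install a message catalog for the user's locale at startup.
import gettext
import locale

lang, _encoding = locale.getlocale()  # e.g. ("fr_FR", "UTF-8")
t = gettext.translation("myapp", localedir="locale",
                        languages=[lang or "en"], fallback=True)
t.install()  # makes _() available as a builtin

print(_("Welcome!"))  # looked up in the selected catalog at run time
```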
Many localization issues (e.g. writing direction, text sorting) require more profound changes in the software than text translation. For example, OpenOffice.org achieves this with compilation switches.
Process
A globalization method includes, after planning, three implementation steps: internationalization, localization and quality assurance.
To some degree (e.g. for quality assurance), development teams include someone who handles the basic/central stages of the process which then enable all the others. Such persons typically understand foreign languages and cultures and have some technical background. Specialized technical writers are required to construct a culturally appropriate syntax for potentially complicated concepts, coupled with engineering resources to deploy and test the localization elements.
Once properly internationalized, software can rely on more decentralized models for localization: free and open source software usually relies on self-localization by end-users and volunteers, sometimes organized in teams. The KDE3 project, for example, has been translated into over 100 languages; MediaWiki has been translated into 270 languages, roughly 100 of which are mostly complete.
When translating existing text to other languages, it is difficult to maintain the parallel versions of texts throughout the life of the product. For instance, if a message displayed to the user is modified, all of the translated versions must be changed.
Commercial considerations
In a commercial setting, the benefit from localization is access to more markets. In the early 1980s, Lotus 1-2-3 took two years to separate program code from text and lost its European market lead to Microsoft Multiplan. MicroPro found that using an Austrian translator for the West German market caused its WordStar documentation to, as an executive put it, not "have the tone it should have had".
However, there are considerable costs involved, which go far beyond engineering. Further, business operations must adapt to manage the production, storage and distribution of multiple discrete localized products, which are often being sold in completely different currencies, regulatory environments and tax regimes.
Finally, sales, marketing and technical support must also facilitate their own operations in the new languages, in order to support customers for the localized products. Particularly for relatively small language populations, it may never be economically viable to offer a localized product. Even where large language populations could justify localization for a given product, and a product's internal structure already permits localization, a given software developer or publisher may lack the size and sophistication to manage the ancillary functions associated with operating in multiple locales.
See also
Subcomponents and standards
Bidirectional script support
International Components for Unicode
Language code
Language localization
Website localization
Related concepts
Computer accessibility
Computer Russification, localization into Russian language
Separation of concerns
Methods and examples
Game localization
Globalization Management System
Pseudolocalization, a software testing method for testing a software product's readiness for localization.
Other
Input method editor
Language industry
References
Further reading
External links
Instantly Learn Localization Testing
Localization vs. Internationalization by The World Wide Web Consortium
Business terms
Globalization
Information and communication technologies for development
International trade
Natural language and computing
Technical communication
Translation
Transliteration
Word coinage |
127759 | https://en.wikipedia.org/wiki/End%20user | End user | In product development, an end user (sometimes end-user) is a person who ultimately uses or is intended to ultimately use a product. The end user stands in contrast to users who support or maintain the product, such as sysops, system administrators, database administrators, information technology experts, software professionals and computer technicians. End users typically do not possess the technical understanding or skill of the product designers, a fact easily overlooked and forgotten by designers, leading to features that create low customer satisfaction. In information technology, end users are not "customers" in the usual sense—they are typically employees of the customer. For example, if a large retail corporation buys a software package for its employees to use, even though the large retail corporation was the "customer" which purchased the software, the end users are the employees of the company, who will use the software at work.
Certain American defense-related products and information require export approval from the United States Government under the ITAR and EAR. In order to obtain a license to export, the exporter must specify both the end user and the end use, typically by obtaining an end-user certificate. In end-user license agreements (EULAs), the end user is distinguished from the value-added reseller, who installs the software, and from the organization that purchases and manages the software. In the UK, there exist documents that accompany licenses for products, named end user undertaking statements (EUU).
Context
End users are one of the three major factors contributing to the complexity of managing information systems. The end user's position has changed from one in the 1950s (where end users did not interact with the mainframe; computer experts programmed and ran the mainframe) to one in the 2010s where the end user collaborates with and advises the management information system and Information Technology department about his or her needs regarding the system or product. This raises new questions, such as: Who manages each resource? What is the role of the MIS Department? and What is the optimal relationship between the end-user and the MIS Department?
Empowerment
The concept of "end-user" first surfaced in the late 1980s and has since raised much debate. One challenge was the tension between giving users more freedom, by adding advanced features and functions (for more advanced users), and adding more constraints (to prevent a neophyte user from accidentally erasing an entire company's database). This phenomenon appeared as a consequence of the "consumerization" of computer products and software. In the 1960s and 1970s, computer users were generally programming experts and computer scientists. However, in the 1980s, and especially in the mid-to-late 1990s and the early 2000s, everyday, regular people began using computer devices and software for personal and work use. IT specialists needed to cope with this trend in various ways. In the 2010s, users want to have more control over the systems they operate, to solve their own problems, and to be able to change, customize and "tweak" the systems to suit their needs. The apparent drawbacks were the risk of corruption of the systems and data the users had control of, due to their lack of knowledge of how to properly operate the computer or software at an advanced level.
To appeal to the user, companies took primary care to accommodate and think of end-users in their new products, software launches, and updates. A partnership needed to be formed between the programmer-developers and the everyday end users so both parties could maximize the use of the products effectively. A major example of the public's effect on end-user requirements were the public libraries. They have been affected by new technologies in many ways, ranging from the digitalization of their card catalogs, to the shift to e-books and e-journals, to offering online services. Libraries have had to undergo many changes in order to cope, including training existing librarians in Web 2.0 and database skills and hiring IT and software experts.
End user documentation
The aim of end user documentation (e.g., manuals and guidebooks for products) is to help the user understand certain aspects of the systems and to provide all the answers in one place. A lot of documentation is available for users to help them understand and properly use a certain product or service. Because the information available is usually very vast, inconsistent or ambiguous (e.g., a user manual with hundreds of pages, including guidance on using advanced features), many users suffer from information overload. Therefore, they become unable to take the right course of action. This needs to be kept in mind when developing products and services and the necessary documentation for them.
Well-written documentation is needed for a user to reference. Some key aspects of such a documentation are:
Specific titles and subtitles for subsections to aid the reader in finding sections
Use of videos, annotated screenshots, text and links to help the reader understand how to use the device or program
Structured provision of information, which goes from the most basic instructions, written in plain language, without specialist jargon or acronyms, progressing to the information that intermediate or advanced users will need (these sections can include jargon and acronyms, but each new term should be defined or spelled out upon its first use)
Easy to search the help guide, find information and access information
Clear end results are described to the reader (e.g., "When the program is installed properly, an icon will appear in the left-hand corner of your screen and the LED will turn on...")
Detailed, numbered steps, to enable users with a range of proficiency levels (from novice to advanced) to go step-by-step to install, use and troubleshoot the product or service
Unique Uniform Resource Locators (URLs), so that the user can go to the product website to find additional help and resources.
At times users do not refer to the documentation available to them, for reasons ranging from finding the manual too large to not understanding the jargon and acronyms it contains. In other cases, the users may find that the manual makes too many assumptions about a user having pre-existing knowledge of computers and software, and thus the directions may "skip over" these initial steps (from the users' point of view). Thus, frustrated users may report false problems because of their inability to understand the software or computer hardware. This in turn causes the company to focus on "perceived" problems instead of focusing on the "actual" problems of the software.
Security
In the 2010s, there is a lot of emphasis on users' security and privacy. With the increasing role that computers are playing in people's lives, people are carrying laptops and smartphones with them and using them for scheduling appointments, making online purchases using credit cards and searching for information. These activities can potentially be observed by companies, governments or individuals, which can lead to breaches of privacy, identity theft, blackmail and other serious concerns. As well, many businesses, ranging from small business startups to huge corporations, are using computers and software to design, manufacture, market and sell their products and services, and businesses also use computers and software in their back office processes (e.g., human resources, payroll, etc.). As such, it is important for people and organizations to know that the information and data they are storing, using, or sending over computer networks or storing on computer systems is secure.
However, developers of software and hardware are faced with many challenges in developing a system that can be both user friendly, accessible 24/7 on almost any device and truly secure. Security leaks happen, even to individuals and organizations that have security measures in place to protect their data and information (e.g., firewalls, encryption, strong passwords). The complexities of creating such a secure system come from the fact that the behaviour of humans is not always rational or predictable. Even in a very well secured computer system, a malicious individual can telephone a worker and pretend to be a private investigator working for the software company, and ask for the individual's password, a deceptive practice known as social engineering. As well, even with a well-secured system, if a worker decides to put the company's electronic files on a USB drive to take them home to work on over the weekend (against many companies' policies), and then loses this USB drive, the company's data may be compromised. Therefore, developers need to make systems that are intuitive to the user in order to have information security and system security.
Another key step in end user security is informing people and employees about security threats and what they can do to avoid them or protect themselves and the organization. Clearly underlining the capabilities and risks makes users more aware and informed whilst they are using the products.
Some situations that could put the user at risk are:
Auto-logon as administrator options
Auto-fill options, in which a computer or program "remembers" a user's personal information and HTTP "cookies"
Opening junk or suspicious emails and/or opening or running attachments or computer files contained in them
Email can be monitored by third parties, especially when using Wi-Fi connections
Unsecure Wi-Fi or use of a public Wi-Fi network at a coffee shop or hotel
Weak passwords (using a person's own name, own birthdate, name or birthdate of children, or easy-to-guess passwords such as "1234"); a toy check for such patterns follows this list
Malicious programs such as viruses
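A toy sketch that flags a few of the weak-password patterns above; the checks are illustrative only and are no substitute for a real password policy or strength estimator:

```python
# Flag obviously weak passwords (short, common, or built from personal data).
def is_weak(password: str, name: str, birthdate: str) -> bool:
    p = password.lower()
    common = {"1234", "12345", "password", "qwerty"}
    return (
        len(password) < 8
        or p in common
        or name.lower() in p
        or birthdate.replace("-", "") in p
    )

print(is_weak("jane1980", name="Jane", birthdate="1980-05-02"))    # True
print(is_weak("T7#kq!v9zL", name="Jane", birthdate="1980-05-02"))  # False
```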
Even if the security measures in place are strong, the choices the user makes, and his or her behaviour, have a major impact on how secure their information really is. Therefore, an informed user is one who can protect and achieve the best security out of the system they use. Because of the importance of end-user security and the impact it can have on organisations, the UK government issued guidance for the public sector, to help civil servants learn how to be more security aware when using government networks and computers. While this is targeted at a certain sector, this type of educational effort can be informative to any type of user. It helps developers meet security norms and end users become aware of the risks involved.
Reimers and Andersson have conducted a number of studies on end user security habits and found that the same type of repeated education/training in security "best practices" can have a marked effect on the perception of compliance with good end user network security habits, especially concerning malware and ransomware.
Undertaking
An end user undertaking (EUU) is a document stating who the user is, why they are using a product and where they live (or where they work). This document must be completed and signed by a person in a position of authority at the end user's business. All documents should be in English, or, if not, accompanied by a valid English translation. Usually the EUU is sent together with the product license.
See also
End-user certificate
End-user computing
End-user development
End-user license agreement
Voice of the customer
Notes
References
Computing terminology
Export and import control
Consumer |
141916 | https://en.wikipedia.org/wiki/Magma%20%28algebra%29 | Magma (algebra) | In abstract algebra, a magma, binar or, rarely, groupoid is a basic kind of algebraic structure. Specifically, a magma consists of a set equipped with a single binary operation that must be closed by definition. No other properties are imposed.
History and terminology
The term groupoid was introduced in 1927 by Heinrich Brandt describing his Brandt groupoid (translated from the German). The term was then appropriated by B. A. Hausmann and Øystein Ore (1937) in the sense (of a set with a binary operation) used in this article. In a couple of reviews of subsequent papers in Zentralblatt, Brandt strongly disagreed with this overloading of terminology. The Brandt groupoid is a groupoid in the sense used in category theory, but not in the sense used by Hausmann and Ore. Nevertheless, influential books in semigroup theory, including Clifford and Preston (1961) and Howie (1995), use groupoid in the sense of Hausmann and Ore. Hollings (2014) writes that the term groupoid is "perhaps most often used in modern mathematics" in the sense given to it in category theory.
According to Bergman and Hausknecht (1996): "There is no generally accepted word for a set with a not necessarily associative binary operation. The word groupoid is used by many universal algebraists, but workers in category theory and related areas object strongly to this usage because they use the same word to mean 'category in which all morphisms are invertible'. The term magma was used by Serre [Lie Algebras and Lie Groups, 1965]." It also appears in Bourbaki's Éléments de mathématique.
Definition
A magma is a set M matched with an operation • that sends any two elements a, b ∈ M to another element, a • b. The symbol • is a general placeholder for a properly defined operation. To qualify as a magma, the set and operation must satisfy the following requirement (known as the magma or closure axiom):

For all a, b in M, the result of the operation a • b is also in M.

And in mathematical notation:

a, b ∈ M ⟹ a • b ∈ M.

If • is instead a partial operation, then M is called a partial magma or, more often, a partial groupoid.
Morphism of magmas
A morphism of magmas is a function f : M → N mapping magma M to magma N that preserves the binary operation:

f(x •M y) = f(x) •N f(y),

where •M and •N denote the binary operation on M and N respectively.
Notation and combinatorics
The magma operation may be applied repeatedly, and in the general, non-associative case, the order matters, which is notated with parentheses. Also, the operation • is often omitted and notated by juxtaposition:

(a • (b • c)) • d = (a(bc))d

A shorthand is often used to reduce the number of parentheses, in which the innermost operations and pairs of parentheses are omitted, being replaced just with juxtaposition: xy • z = (x • y) • z. For example, the expression above is abbreviated to the following, still containing parentheses:

(a • bc)d

A way to avoid completely the use of parentheses is prefix notation, in which the same expression would be written ••a•bcd. Another way, familiar to programmers, is postfix notation (reverse Polish notation), in which the same expression would be written abc••d•, in which the order of execution is simply left-to-right (no currying).
The set of all possible strings consisting of symbols denoting elements of the magma, and sets of balanced parentheses, is called the Dyck language. The total number of different ways of writing n applications of the magma operator is given by the Catalan number Cn. Thus, for example, C2 = 2, which is just the statement that (ab)c and a(bc) are the only two ways of pairing three elements of a magma with two operations. Less trivially, C3 = 5: ((ab)c)d, (a(bc))d, (ab)(cd), a((bc)d), and a(b(cd)).
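A small sketch enumerating every way to pair up a sequence of magma elements, confirming the Catalan counts above:

```python
# Enumerate all full parenthesizations of a sequence under a binary operation.
def pairings(elems):
    if len(elems) == 1:
        return [elems[0]]
    out = []
    for i in range(1, len(elems)):       # split point between left and right
        for left in pairings(elems[:i]):
            for right in pairings(elems[i:]):
                out.append(f"({left}{right})")
    return out

print(pairings(list("abc")))        # ['(a(bc))', '((ab)c)']  -> C_2 = 2
print(len(pairings(list("abcd"))))  # 5                       -> C_3 = 5
```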
There are n^(n²) magmas with n elements, so there are 1, 1, 16, 19683, 4294967296, ... magmas with 0, 1, 2, 3, 4, ... elements. The corresponding numbers of non-isomorphic magmas are 1, 1, 10, 3330, 178981952, ..., and the numbers of simultaneously non-isomorphic and non-antiisomorphic magmas are 1, 1, 7, 1734, 89521056, ... .
Free magma
A free magma MX on a set X is the "most general possible" magma generated by X (i.e., there are no relations or axioms imposed on the generators; see free object). The binary operation on MX is formed by wrapping each of the two operands in parentheses and juxtaposing them in the same order. For example:

a • b = (a)(b)
a • (a • b) = (a)((a)(b))
(a • a) • b = ((a)(a))(b)
MX can be described as the set of non-associative words on X with parentheses retained.
It can also be viewed, in terms familiar in computer science, as the magma of binary trees with leaves labelled by elements of X. The operation is that of joining trees at the root. It therefore has a foundational role in syntax.
A free magma has the universal property such that if f : X → N is a function from X to any magma N, then there is a unique extension of f to a morphism of magmas

f′ : MX → N.
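In programming terms, the universal property says that evaluating a binary tree is determined entirely by a value for each leaf and an interpretation of the node-joining operation. A minimal sketch, representing free-magma elements as nested 2-tuples:

```python
# The free magma as binary trees, with the unique morphism extension f'.
def op(s, t):
    """The free magma operation: join two trees at a new root."""
    return (s, t)

def extend(f, dot):
    """Extend f : X -> N to the magma morphism f' : M_X -> N."""
    def f_prime(tree):
        if isinstance(tree, tuple):
            left, right = tree
            return dot(f_prime(left), f_prime(right))
        return f(tree)  # a leaf, i.e. a generator from X
    return f_prime

# Interpret words in the magma (int, -), which is non-associative,
# so the two tree shapes below evaluate differently.
val = {"a": 10, "b": 4, "c": 1}
f_prime = extend(lambda x: val[x], lambda m, n: m - n)
print(f_prime(op("a", op("b", "c"))))  # 10 - (4 - 1) = 7
print(f_prime(op(op("a", "b"), "c")))  # (10 - 4) - 1 = 5
```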
Types of magma
Magmas are not often studied as such; instead there are several different kinds of magma, depending on what axioms the operation is required to satisfy. Commonly studied types of magma include:
Quasigroup: A magma where division is always possible.
Loop: A quasigroup with an identity element.
Semigroup: A magma where the operation is associative.
Monoid: A semigroup with an identity element.
Inverse semigroup: A semigroup with inverses. (Also a quasigroup with associativity)
Group: A magma with inverses, associativity, and an identity element.
Note that each of divisibility and invertibility implies the cancellation property.
Magmas with commutativity
Commutative magma: A magma with commutativity.
Semilattice: A monoid with commutativity.
Abelian group: A group with commutativity.
Classification by properties
A magma (M, •), with x, y, u, z ∈ M, is called

Medial: if it satisfies the identity xy • uz ≡ xu • yz
Left semimedial: if it satisfies the identity xx • yz ≡ xy • xz
Right semimedial: if it satisfies the identity yz • xx ≡ yx • zx
Semimedial: if it is both left and right semimedial
Left distributive: if it satisfies the identity x • yz ≡ xy • xz
Right distributive: if it satisfies the identity yz • x ≡ yx • zx
Autodistributive: if it is both left and right distributive
Commutative: if it satisfies the identity xy ≡ yx
Idempotent: if it satisfies the identity xx ≡ x
Unipotent: if it satisfies the identity xx ≡ yy
Zeropotent: if it satisfies the identities xx • y ≡ xx ≡ y • xx
Alternative: if it satisfies the identities xx • y ≡ x • xy and x • yy ≡ xy • y
Power-associative: if the submagma generated by any element is associative
Flexible: if it satisfies the identity xy • x ≡ x • yx
A semigroup, or associative: if it satisfies the identity xy • z ≡ x • yz
A left unar: if it satisfies the identity xy ≡ xz
A right unar: if it satisfies the identity yx ≡ zx
Semigroup with zero multiplication, or null semigroup: if it satisfies the identity xy ≡ uv
Unital: if it has an identity element
Left-cancellative: if, for all x, y, z, the relation xy = xz implies y = z
Right-cancellative: if, for all x, y, z, the relation yx = zx implies y = z
Cancellative: if it is both right-cancellative and left-cancellative
A semigroup with left zeros: if it is a semigroup and, for all x and y, the identity x ≡ xy holds
A semigroup with right zeros: if it is a semigroup and, for all x and y, the identity x ≡ yx holds
Trimedial If any triple of (not necessarily distinct) elements generates a medial submagma
Entropic If it is a homomorphic image of a medial cancellation magma.
Category of magmas
The category of magmas, denoted Mag, is the category whose objects are magmas and whose morphisms are magma homomorphisms. The category Mag has direct products, and there is an inclusion functor Set → Mag regarding sets as trivial magmas, with operations given by the projection x • y = x.
An important property is that an injective endomorphism can be extended to an automorphism of a magma extension, just the colimit of the (constant sequence of the) endomorphism.
Because the singleton is the terminal object of Mag, and because Mag is algebraic, Mag is pointed and complete.
Generalizations
See n-ary group.
See also
Magma category
Auto magma object
Universal algebra
Magma computer algebra system, named after the object of this article.
Commutative magma
Algebraic structures whose axioms are all identities
Groupoid algebra
Hall set
References
Further reading
Non-associative algebra
Binary operations
Algebraic structures |
142983 | https://en.wikipedia.org/wiki/IBM%20Db2%20Family | IBM Db2 Family | Db2 is a family of data management products, including database servers, developed by IBM. They initially supported the relational model, but were extended to support object–relational features and non-relational structures like JSON and XML. The brand name was originally styled as DB/2, then DB2 until 2017 and finally changed to its present form.
History
Historically, and unlike other database vendors, IBM produced a platform-specific Db2 product for each of its major operating systems. However, in the 1990s IBM changed track and produced a Db2 common product, designed with a mostly common code base for L-U-W (Linux-Unix-Windows); DB2 for System z and DB2 for IBM i are different. As a result, they use different drivers.
DB2 traces its roots back to the beginning of the 1970s when Edgar F. Codd, a researcher working for IBM, described the theory of relational databases, and in June 1970 published the model for data manipulation.
In 1974, the IBM San Jose Research center developed a relational DBMS, System R, to implement Codd's concepts. A key development of the System R project was the Structured Query Language (SQL). To apply the relational model, Codd needed a relational-database language he named DSL/Alpha. At the time, IBM didn't believe in the potential of Codd's ideas, leaving the implementation to a group of programmers not under Codd's supervision. This led to an inexact interpretation of Codd's relational model, that matched only part of the prescriptions of the theory; the result was Structured English QUEry Language or SEQUEL.
When IBM released its first relational-database product, they wanted to have a commercial-quality sublanguage as well, so it overhauled SEQUEL, and renamed the revised language Structured Query Language (SQL) to differentiate it from SEQUEL and also because the acronym "SEQUEL" was a trademark of the UK-based Hawker Siddeley aircraft company.
IBM bought Metaphor Computer Systems to utilize their GUI interface and encapsulating SQL platform, which had already been in use since the mid-1980s.
In parallel with the development of SQL, IBM also developed Query by Example (QBE), the first graphical query language.
IBM's first commercial relational-database product, SQL/DS, was released for the DOS/VSE and VM/CMS operating systems in 1981. In 1976, IBM released Query by Example for the VM platform where the table-oriented front-end produced a linear-syntax language that drove transactions to its relational database. Later, the QMF feature of DB2 produced real SQL, and brought the same "QBE" look and feel to DB2. The inspiration for the mainframe version of DB2's architecture came in part from IBM IMS, a hierarchical database, and its dedicated database-manipulation language, IBM DL/I.
The name DB2 (IBM Database 2) was first given to the Database Management System or DBMS in 1983 when IBM released DB2 on its MVS mainframe platform.
For some years DB2, as a full-function DBMS, was exclusively available on IBM mainframes. Later, IBM brought DB2 to other platforms, including OS/2, UNIX, and MS Windows servers, and then Linux (including Linux on IBM Z) and PDAs. This process occurred through the 1990s. An implementation of DB2 is also available for z/VSE and z/VM. An earlier version of the code that would become DB2 LUW (Linux, Unix, Windows) was part of an Extended Edition component of OS/2 called Database Manager.
IBM extended the functionality of Database Manager a number of times, including the addition of distributed database functionality by means of Distributed Relational Database Architecture (DRDA) that allowed shared access to a database in a remote location on a LAN. (Note that DRDA is based on objects and protocols defined by Distributed Data Management Architecture (DDM).)
Eventually, IBM took the decision to completely rewrite the software. The new version of Database Manager was called DB2/2 and DB2/6000 respectively. Other versions of DB2, with different code bases, followed the same '/' naming convention and became DB2/400 (for the AS/400), DB2/VSE (for the DOS/VSE environment) and DB2/VM (for the VM operating system). IBM lawyers stopped this handy naming convention from being used, and decided that all products needed to be called "product FOR platform" (for example, DB2 for OS/390). The next iteration of the mainframe and the server-based products were named DB2 Universal Database (or DB2 UDB).
In the mid-1990s, IBM released a clustered DB2 implementation called DB2 Parallel Edition, which initially ran on AIX. This edition allowed scalability by providing a shared-nothing architecture, in which a single large database is partitioned across multiple DB2 servers that communicate over a high-speed interconnect. This DB2 edition was eventually ported to all Linux, UNIX, and Windows (LUW) platforms, and was renamed to DB2 Extended Enterprise Edition (EEE). IBM now refers to this product as the Database Partitioning Feature (DPF) and bundles it with their flagship DB2 Enterprise product.
When Informix Corporation acquired Illustra and made their database engine an object-SQL DBMS by introducing their Universal Server, both Oracle Corporation and IBM followed suit by changing their database engines to be capable of object–relational extensions. In 2001, IBM bought Informix Software, and in the following years incorporated Informix technology into the DB2 product suite. DB2 can technically be considered to be an object–SQL DBMS.
In mid-2006, IBM announced "Viper," which is the codename for DB2 9 on both distributed platforms and z/OS. DB2 9 for z/OS was announced in early 2007. IBM claimed that the new DB2 was the first relational database to store XML "natively". Other enhancements include OLTP-related improvements for distributed platforms, business intelligence/data warehousing-related improvements for z/OS, more self-tuning and self-managing features, additional 64-bit exploitation (especially for virtual storage on z/OS), stored procedure performance enhancements for z/OS, and continued convergence of the SQL vocabularies between z/OS and distributed platforms.
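A hedged sketch of what querying natively stored XML can look like from Python via the ibm_db driver; the table, column and connection details are hypothetical, and XMLQUERY/XMLEXISTS are the SQL/XML functions available alongside native XML storage:

```python
# Query XML stored natively in Db2 (pureXML) using the ibm_db driver.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=SAMPLE;HOSTNAME=localhost;PORT=50000;"
    "PROTOCOL=TCPIP;UID=db2inst1;PWD=secret;", "", "")

sql = """
SELECT XMLQUERY('$d/customer/name/text()' PASSING info AS "d")
FROM customers
WHERE XMLEXISTS('$d/customer[city = "Chicago"]' PASSING info AS "d")
"""
stmt = ibm_db.exec_immediate(conn, sql)
row = ibm_db.fetch_tuple(stmt)
while row:
    print(row[0])
    row = ibm_db.fetch_tuple(stmt)
ibm_db.close(conn)
```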
In October 2007, IBM announced "Viper 2," the codename for DB2 9.5 on the distributed platforms. There were three key themes for the release: simplified management, business-critical reliability and agile XML development.
In June 2009, IBM announced "Cobra," the codename for DB2 9.7 for LUW. DB2 9.7 added data compression for database indexes, temporary tables, and large objects. DB2 9.7 also supported native XML data in hash partitioning (database partitioning), range partitioning (table partitioning), and multi-dimensional clustering. These native XML features allow users to work directly with XML in data warehouse environments. DB2 9.7 also added several features that make it easier for Oracle Database users to work with DB2. These include support for the most commonly used SQL syntax, PL/SQL syntax, scripting syntax, and data types from Oracle Database. DB2 9.7 also enhanced its concurrency model to exhibit behavior that is familiar to users of Oracle Database and Microsoft SQL Server.
In October 2009, IBM introduced its second major release of the year when it announced DB2 pureScale. DB2 pureScale is a cluster database for non-mainframe platforms, suitable for Online transaction processing (OLTP) workloads. IBM based the design of DB2 pureScale on the Parallel Sysplex implementation of DB2 data sharing on the mainframe. DB2 pureScale provides a fault-tolerant architecture and shared-disk storage. A DB2 pureScale system can grow to 128 database servers, and provides continuous availability and automatic load balancing.
In 2009, it was announced that DB2 could serve as an engine in MySQL. This allows users on the IBM i platform and users on other platforms to access these files through the MySQL interface. On IBM i and its predecessor OS/400, DB2 is tightly integrated into the operating system and comes as part of it, providing journaling, triggers and other features.
In early 2012, IBM announced the next version of DB2, DB2 10.1 (code name Galileo), for Linux, UNIX, and Windows. DB2 10.1 contained a number of new data management capabilities, including row and column access control, which enables fine-grained control of the database, and multi-temperature data management, which moves data to cost-effective storage based on how frequently it is accessed ("hot" versus "cold" data). IBM also introduced an "adaptive compression" capability in DB2 10.1, a new approach to compressing data tables.
In June 2013, IBM released DB2 10.5 (code name "Kepler").
On 12 April 2016, IBM announced DB2 LUW 11.1, and in June 2016, it was released.
In mid-2017, IBM re-branded its DB2 and dashDB product offerings and amended their names to "Db2".
On June 27, 2019, IBM released Db2 11.5, the AI Database. It added AI functionality to improve query performance as well as capabilities to facilitate AI application development.
Db2 (LUW) Family
Db2 embraces a "hybrid data" strategy to unify and simplify the entire ecosystem of data management, integration and analytical engines for both on-premises and cloud environments, to gain value from typically siloed data sources. The strategy allows accessing, sharing and analyzing all types of data - structured, semi-structured or unstructured - wherever they are stored or deployed.
Db2 Database
Db2 Database is a relational database that delivers advanced data management and analytics capabilities for transactional workloads. This operational database is designed to deliver high performance, actionable insights, data availability and reliability, and it is supported across Linux, Unix and Windows operating systems.
The Db2 database software includes advanced features such as in-memory technology (IBM BLU Acceleration), advanced management and development tools, storage optimization, workload management, actionable compression and continuous data availability (IBM pureScale).
Db2 Warehouse
"Data warehousing" was first mentioned in a 1988 IBM Systems Journal article entitled, "An Architecture for Business Information Systems." This article illustrated the first use-case for data warehousing in a business setting as well as the results of its application.
Traditional transaction processing databases were not able to provide the insight business leaders needed to make data-informed decisions. A new approach was needed to aggregate and analyze data from multiple transactional sources to deliver new insights, uncover patterns and find hidden relationships among the data. Db2 Warehouse, with capabilities to normalize data from multiple sources, performs sophisticated analytic and statistical modeling and provides businesses these capabilities at speed and scale.
Increases in computational power resulted in an explosion of data inside businesses generally and data warehouses specifically. Warehouses grew from being measured in GBs to TBs and PBs. As both the volume and variety of data grew, Db2 Warehouse adapted as well. Initially purposed for star and snowflake schemas, Db2 Warehouse now includes support for the following data types and analytical models, among others:
Relational data
Non-Relational data
XML data
Geospatial data
RStudio
Apache Spark
Embedded Spark Analytics engine
Massively parallel processing
In-memory analytical processing
Predictive Modeling algorithms
Db2 Warehouse uses Docker containers to run in multiple environments: on-premises, private cloud and a variety of public clouds, both managed and unmanaged. Db2 Warehouse can be deployed as software only or as an appliance, on Intel x86, Linux and mainframe platforms. Built upon IBM's Common SQL engine, Db2 Warehouse queries data from multiple sources, such as Oracle, Microsoft SQL Server, Teradata, open source databases and Netezza. Users write a query once and data returns from multiple sources quickly and efficiently.
Db2 on Cloud/Db2 Hosted
Db2 on Cloud: Formerly named “dashDB for Transactions”, Db2 on Cloud is a fully managed, cloud SQL database with a high-availability option featuring a 99.99 percent uptime SLA. Db2 on Cloud offers independent scaling of storage and compute, and rolling security updates.
Db2 on Cloud is deployable on both IBM Cloud and Amazon Web Services (AWS).
Key features include:
Elasticity: Db2 on Cloud offers independent scaling of storage and compute through the user interface and API, so businesses can burst on compute during peak demand and scale down when demand falls. Storage is also scalable, so organizations can scale up as their storage needs grow.
Backups and recovery: Db2 on Cloud provides several disaster recovery options: (1) fourteen days' worth of backups, (2) point-in-time restore options, and (3) one-click failover to a DR node at an offsite data center of the user's choice.
Encryption: Db2 on Cloud complies with data protection laws and includes at-rest database encryption and SSL connections. The Db2 on Cloud high availability plans offer rolling security updates and all database instances include daily backups. Security patching and maintenance is managed by the database administrator.
High availability options: Db2 on Cloud provides a 99.99% uptime service level agreement on the high availability option. Highly available option allows for updates and scaling operations without downtime to applications running on Db2 on Cloud, using Db2's HADR technology.
Data federation: A single query displays a view of all your data by accessing data distributed across Db2 on-premises and/or Db2 Warehouse on-premises or in the cloud.
Private networking: Db2 on Cloud can be deployed on an isolated network that is accessible through a secure Virtual Private Network (VPN).
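A hedged sketch of an application connecting to a Db2 on Cloud instance over SSL with the ibm_db driver; the hostname, port and credentials are placeholders for the values shown in the service's connection settings:

```python
# Connect to Db2 on Cloud over SSL and run a trivial query.
import ibm_db

dsn = (
    "DATABASE=BLUDB;"
    "HOSTNAME=example.databases.appdomain.cloud;"
    "PORT=31198;"
    "PROTOCOL=TCPIP;"
    "UID=myuser;"
    "PWD=mypassword;"
    "SECURITY=SSL;"
)
conn = ibm_db.connect(dsn, "", "")

stmt = ibm_db.exec_immediate(
    conn, "SELECT CURRENT DATE FROM SYSIBM.SYSDUMMY1")
print(ibm_db.fetch_tuple(stmt)[0])
ibm_db.close(conn)
```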
Db2 Hosted: Formerly named "DB2 on Cloud", Db2 Hosted is an unmanaged, hosted version of Db2 on Cloud's transactional, SQL cloud database.
Key features:
Server control: Db2 Hosted provides custom software for direct server installation. This reduces application latency and integrates with a business's current data management set up. Db2 Hosted offers exact server configuration based on the needs of the business.
Encryption: Db2 Hosted supports SSL connections.
Elasticity: Db2 Hosted allows for independent scaling of compute and storage to meet changing business needs.
Db2 Warehouse on Cloud
Formerly named “dashDB for Analytics”, Db2 Warehouse on Cloud is a fully managed, elastic, cloud data warehouse built for high-performance analytics and machine learning workloads.
Key features include:
Autonomous cloud service: Db2 Warehouse on Cloud runs on an autonomous platform-as-a-service and is powered by Db2's autonomous self-tuning engine. Day-to-day operations, including database monitoring, uptime checks and failovers, are fully automated. Operations are supplemented by a DevOps team that is on call to handle unexpected system failures.
Optimized for analytics: Db2 Warehouse on Cloud delivers high performance on complex analytics workloads by utilizing IBM BLU Acceleration, a collection of technologies pioneered by IBM Research that features four key optimizations: (1) a columnar organized storage model, (2) in-memory processing, (3) querying of compressed data sets, and (4) data skipping.
Manage highly concurrent workloads: Db2 Warehouse on Cloud includes an Adaptive Workload Management technology that automatically manages resources between concurrent workloads, given user-defined resource targets. This technology ensures stable and reliable performance when tackling highly concurrent workloads.
Built-in machine learning and geospatial capabilities: Db2 Warehouse on Cloud comes with in-database machine learning capabilities that allow users to train and run machine learning models on Db2 Warehouse data without the need for data movement. Examples of algorithms include Association Rules, ANOVA, k-means, Regression, and Naïve Bayes. Db2 Warehouse on Cloud also supports spatial analytics with Esri compatibility, supporting Esri data types such as GML, and supports native Python drivers and native Db2 Python integration into Jupyter Notebooks.
Elasticity: Db2 Warehouse on Cloud offers independent scaling of storage and compute, so organizations can customize their data warehouses to meet the needs of their businesses. For example, customers can burst on compute during peak demand, and scale down when demand falls. Users can also expand storage capacity as their data volumes grow. Customers can scale their data warehouse through the Db2 Warehouse on Cloud web console or API.
Data security: Data is encrypted at-rest and in-motion by default. Administrators can also restrict access to sensitive data through data masking, row permissions, and role-based security, and can utilize database audit utilities to maintain audit trails for their data warehouse.
Polyglot persistence: Db2 Warehouse on Cloud is optimized for polyglot persistence of data, and supports relational (columnar and row-oriented tables), geospatial, and NoSQL document (XML, JSON, BSON) models. All data is subject to advanced data compression.
Deployable on multiple cloud providers: Db2 Warehouse on Cloud is currently deployable on IBM Cloud and Amazon Web Services (AWS).
Db2 BigSQL
In 2018 the IBM SQL product was renamed and is now known as IBM Db2 Big SQL (Big SQL). Big SQL is an enterprise-grade, hybrid, ANSI-compliant SQL-on-Hadoop engine delivering massively parallel processing (MPP) and advanced data queries. Additional benefits include low latency, high performance, security, SQL compatibility and federation capabilities.
Big SQL offers a single database connection or query for disparate sources such as HDFS, RDBMS, NoSQL databases, object stores and WebHDFS. It can exploit Hive, HBase and Spark and, whether on the cloud, on premises or both, access data across Hadoop and relational databases.
Users (data scientists and analysts) can run smarter ad hoc and complex queries supporting more concurrent users with less hardware compared to other SQL options for Hadoop. Big SQL provides an ANSI-compliant SQL parser to run queries from unstructured streaming data using new APIs.
Through the integration with the IBM Common SQL Engine, Big SQL was designed to work with all the Db2 family of offerings, as well as with the IBM Integrated Analytics System. Big SQL is a part of the IBM Hybrid Data Management Platform, a comprehensive IBM strategy for flexibility and portability, strong data integration and flexible licensing.
Db2 Event Store
Db2 Event Store targets the needs of the Internet of things (IoT), industrial, telecommunications, financial services, online retail and other industries needing to perform real-time analytics on streamed high-volume, high-velocity data. It became publicly available in June 2017. With its high-speed data capture and analytics capabilities, it can store and analyze 250 billion events in a day with just three server nodes. The need to support AI and machine learning was envisioned from the start by including IBM Watson Studio in the product and integrating Jupyter notebooks for collaborative app and model development. Typically combined with streaming tools, it provides persistent data by writing the data out to object storage in an open data format (Apache Parquet). Built on Spark, Db2 Event Store is compatible with Spark Machine Learning, Spark SQL and other open technologies, as well as the Db2 family Common SQL Engine and all supported languages, including Python, Go, JDBC, ODBC and more.
Db2 for IBM i
In 1994, IBM renamed the integrated relational database of the OS/400 to DB2/400 to indicate comparable functionality to DB2 on other platforms. Despite this name, it is not based on DB2 code, but instead it evolved from the IBM System/38 integrated database. The product is currently named IBM Db2 for i.
Other Platforms
Db2 for Linux, UNIX and Windows (informally known as Db2 LUW)
Db2 for z/OS (mainframe)
Db2 for VSE & VM
Db2 on IBM Cloud
Db2 on Amazon Web Services (AWS)
Db2 for z/OS is available in its traditional product packaging, or in the Value Unit Edition, which allows customers to instead pay a one-time charge.
Db2 also powers IBM InfoSphere Warehouse, which offers data warehouse capabilities. InfoSphere Warehouse is available for z/OS. It includes several BI features such as ETL, data mining, OLAP acceleration, and in-line analytics.
Db2 11.5 for Linux, UNIX and Windows contains all of the functionality and tools offered in the prior generation of DB2 and InfoSphere Warehouse on Linux, UNIX and Windows.
Technical information
Db2 can be administered from either the command line or a GUI. The command-line interface requires more knowledge of the product but can be more easily scripted and automated. The GUI is a multi-platform Java client that contains a variety of wizards suitable for novice users. Db2 supports both SQL and XQuery. Db2 has a native implementation of XML data storage, where XML data is stored as XML (not as relational data or CLOB data) for faster access using XQuery.
Db2 has APIs for Rexx, PL/I, COBOL, RPG, Fortran, C++, C, Delphi, .NET CLI, Java, Python, Perl, PHP, Ruby, and many other programming languages. Db2 also supports integration into the Eclipse and Visual Studio integrated development environments.
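As an illustration of the Python support, the following minimal sketch connects and runs a catalog query with the ibm_db driver; the hostname, port and credentials are placeholders, not defaults.

    import ibm_db

    # Hypothetical connection details; substitute real values.
    conn_str = (
        "DATABASE=sample;HOSTNAME=db2.example.com;PORT=50000;"
        "PROTOCOL=TCPIP;UID=db2user;PWD=secret;"
    )
    conn = ibm_db.connect(conn_str, "", "")

    # Run a simple catalog query and iterate over the result rows.
    stmt = ibm_db.exec_immediate(
        conn, "SELECT tabname FROM syscat.tables FETCH FIRST 5 ROWS ONLY")
    row = ibm_db.fetch_assoc(stmt)
    while row:
        print(row["TABNAME"])
        row = ibm_db.fetch_assoc(stmt)

    ibm_db.close(conn)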
pureQuery is IBM's data access platform focused on applications that access data. pureQuery supports both Java and .NET. pureQuery provides access to data in databases and in-memory Java objects via its tools, APIs, and runtime environment as delivered in IBM Data Studio Developer and IBM Data Studio pureQuery Runtime.
Error processing
An important feature of Db2 computer programs is error handling. The SQL communications area (SQLCA) structure was once used exclusively within a Db2 program to return error information to the application program after every SQL statement was executed. The primary, but not the only, error diagnostic is held in the field SQLCODE within the SQLCA block.
The SQL return code values are:
0 means successful execution.
A positive number means successful execution with one or more warnings. An example is +100, which means no rows found.
A negative number means unsuccessful with an error. An example is -911, which means a lock timeout (or deadlock) has occurred, triggering a rollback.
Later versions of Db2 added functionality and complexity to the execution of SQL. Multiple errors or warnings could be returned by the execution of an SQL statement; it may, for example, have initiated a database trigger and other SQL statements. Instead of the original SQLCA, error information should now be retrieved by successive executions of a GET DIAGNOSTICS statement.
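As a worked illustration of the return-code conventions above, here is a toy classifier (plain Python, not part of any Db2 API) encoding the three cases:

    def classify_sqlcode(sqlcode):
        """Interpret a Db2 SQLCODE per the conventions listed above."""
        if sqlcode == 0:
            return "successful execution"
        if sqlcode > 0:
            return "success with warnings (+100 = no rows found)"
        return "error (-911 = lock timeout/deadlock, rollback triggered)"

    assert classify_sqlcode(0) == "successful execution"
    print(classify_sqlcode(100))
    print(classify_sqlcode(-911))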
See SQL return codes for a more comprehensive list of common SQLCODEs.
See also
Comparison of relational database management systems
Comparison of database tools
List of relational database management systems
List of column-oriented DBMSes
Data Language Interface
References
External links
IBM Db2 trial and downloads
Db2 - IBM Data for developers
Made in IBM Labs: New IBM Software Accelerates Decision Making in the Era of Big Data
What's new in DB2 10.5 for Linux, UNIX, and Windows
Db2 Tutorial
IBM DB2
Cross-platform software
Relational database management systems
IBM software
RDBMS software for Linux
Client-server database management systems
Proprietary database management systems
Db2 Express-C |
143327 | https://en.wikipedia.org/wiki/Fibre%20Channel | Fibre Channel | Fibre Channel (FC) is a high-speed data transfer protocol providing in-order, lossless delivery of raw block data. Fibre Channel is primarily used to connect computer data storage to servers in storage area networks (SAN) in commercial data centers.
Fibre Channel networks form a switched fabric because the switches in a network operate in unison as one big switch. Fibre Channel typically runs on optical fiber cables within and between data centers, but can also run on copper cabling. Supported data rates include 1, 2, 4, 8, 16, 32, 64, and 128 gigabits per second, resulting from improvements in successive technology generations.
There are various upper-level protocols for Fibre Channel, including two for block storage. Fibre Channel Protocol (FCP) is a protocol that transports SCSI commands over Fibre Channel networks. FICON is a protocol that transports ESCON commands, used by IBM mainframe computers, over Fibre Channel. Fibre Channel can be used to transport data from storage systems that use solid-state flash memory storage medium by transporting NVMe protocol commands.
Etymology
When the technology was originally devised, it ran over optical fiber cables only and, as such, was called "Fiber Channel". Later, the ability to run over copper cabling was added to the specification. In order to avoid confusion and to create a unique name, the industry decided to change the spelling and use the British English fibre for the name of the standard.
History
Fibre Channel is standardized in the T11 Technical Committee of the International Committee for Information Technology Standards (INCITS), an American National Standards Institute (ANSI)-accredited standards committee. Fibre Channel started in 1988, with ANSI standard approval in 1994, to merge the benefits of multiple physical layer implementations including SCSI, HIPPI and ESCON.
Fibre Channel was designed as a serial interface to overcome limitations of the SCSI and HIPPI physical-layer parallel-signal copper wire interfaces. Such interfaces face the challenge of, among other things, maintaining signal timing coherence across all the data-signal wires (8, 16 and finally 32 for SCSI, 50 for HIPPI) so that a receiver can determine when all the electrical signal values are "good" (stable and valid for simultaneous reception sampling). This challenge becomes ever more difficult in a mass-manufactured technology as data signal frequencies increase, with part of the technical compensation being ever-shorter supported lengths of parallel copper cable. See Parallel SCSI. FC was developed with leading-edge multi-mode optical fiber technologies that overcame the speed limitations of the ESCON protocol. By appealing to the large base of SCSI disk drives and leveraging mainframe technologies, Fibre Channel developed economies of scale for advanced technologies and deployments became economical and widespread.
Commercial products were released while the standard was still in draft. By the time the standard was ratified lower speed versions were already growing out of use. Fibre Channel was the first serial storage transport to achieve gigabit speeds where it saw wide adoption, and its success grew with each successive speed. Fibre Channel has doubled in speed every few years since 1996.
Fibre Channel has seen active development since its inception, with numerous speed improvements on a variety of underlying transport media. The following table shows the progression of native Fibre Channel speeds:
In addition to a modern physical layer, Fibre Channel also added support for any number of "upper layer" protocols, including ATM, IP (IPFC) and FICON, with SCSI (FCP) being the predominant usage.
Characteristics
Two major characteristics of Fibre Channel networks are in-order delivery and lossless delivery of raw block data. Lossless delivery of raw data block is achieved based on a credit mechanism.
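The credit mechanism works like buffer-to-buffer flow control: a transmitter tracks how many receive buffers the other end has advertised and may only send while credits remain, so frames are delayed rather than dropped. A simplified Python sketch of the idea (a toy model, not the actual Fibre Channel state machine):

    class CreditedLink:
        """Toy model of buffer-to-buffer credit flow control."""

        def __init__(self, initial_credits):
            self.credits = initial_credits  # receive buffers advertised by peer

        def send_frame(self, frame):
            if self.credits == 0:
                return False        # must wait; the frame is never dropped
            self.credits -= 1       # one receive buffer is now in use
            # ... transmit frame on the link ...
            return True

        def receive_r_rdy(self):
            # Peer freed a buffer and signalled readiness (R_RDY primitive).
            self.credits += 1

    link = CreditedLink(initial_credits=2)
    assert link.send_frame("f1") and link.send_frame("f2")
    assert not link.send_frame("f3")   # blocked, not dropped: lossless
    link.receive_r_rdy()
    assert link.send_frame("f3")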
Topologies
There are three major Fibre Channel topologies, describing how a number of ports are connected together. A port in Fibre Channel terminology is any entity that actively communicates over the network, not necessarily a hardware port. This port is usually implemented in a device such as disk storage, a Host Bus Adapter (HBA) network connection on a server or a Fibre Channel switch.
Point-to-point (see FC-FS-3). Two devices are connected directly to each other using N_ports. This is the simplest topology, with limited connectivity. The bandwidth is dedicated.
Arbitrated loop (see FC-AL-2). In this design, all devices are in a loop or ring, similar to Token Ring networking. Adding or removing a device from the loop causes all activity on the loop to be interrupted. The failure of one device causes a break in the ring. Fibre Channel hubs exist to connect multiple devices together and may bypass failed ports. A loop may also be made by cabling each port to the next in a ring.
A minimal loop containing only two ports, while appearing to be similar to point-to-point, differs considerably in terms of the protocol.
Only one pair of ports can communicate concurrently on a loop.
Maximum speed of 8GFC.
Arbitrated Loop has rarely been used after 2010 and its support is being discontinued in new generations of switches.
Switched Fabric (see FC-SW-6). In this design, all devices are connected to Fibre Channel switches, similar conceptually to modern Ethernet implementations. Advantages of this topology over point-to-point or Arbitrated Loop include:
The Fabric can scale to tens of thousands of ports.
The switches manage the state of the Fabric, providing optimized paths via Fabric Shortest Path First (FSPF) data routing protocol.
The traffic between two ports flows through the switches and not through any other ports, unlike in Arbitrated Loop.
Failure of a port is isolated to a link and should not affect operation of other ports.
Multiple pairs of ports may communicate simultaneously in a Fabric.
Layers
Fibre Channel does not follow the OSI model layering, and is split into five layers:
FC-4 – Protocol-mapping layer, in which upper level protocols such as NVM Express (NVMe), SCSI, IP, and FICON are encapsulated into Information Units (IUs) for delivery to FC-2. Current FC-4s include FCP-4, FC-SB-5, and FC-NVMe.
FC-3 – Common services layer, a thin layer that could eventually implement functions like encryption or RAID redundancy algorithms; multiport connections;
FC-2 – Signaling Protocol, defined by the Fibre Channel Framing and Signaling 4 (FC-FS-5) standard, consists of the low level Fibre Channel network protocols; port to port connections;
FC-1 – Transmission Protocol, which implements line coding of signals;
FC-0 – physical layer, includes cabling, connectors etc.;
A diagram in FC-FS-4 defines the layers.
Layer FC-0 is defined in Fibre Channel Physical Interfaces (FC-PI-6), the physical layer of Fibre Channel.
Fibre Channel products are available at 1, 2, 4, 8, 10, 16, 32 and 128 Gbit/s; these protocol flavors are called accordingly 1GFC, 2GFC, 4GFC, 8GFC, 10GFC, 16GFC, 32GFC or 128GFC. The 32GFC standard was approved by the INCITS T11 committee in 2013, and those products became available in 2016. The 1GFC, 2GFC, 4GFC and 8GFC designs all use 8b/10b encoding, while the 10GFC and 16GFC standards use 64b/66b encoding. Unlike the 10GFC standards, 16GFC provides backward compatibility with 4GFC and 8GFC since it provides exactly twice the throughput of 8GFC or four times that of 4GFC.
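The encoding choice matters for throughput: 8b/10b carries 8 data bits in every 10 line bits (80% efficient), while 64b/66b carries 64 in 66 (about 97% efficient). A short Python check of the resulting payload rates, using the commonly cited line rates of 8.5 GBaud for 8GFC and 14.025 GBaud for 16GFC (treat these figures as illustrative):

    def payload_gbps(line_rate_gbaud, data_bits, line_bits):
        """Usable data rate after line-coding overhead."""
        return line_rate_gbaud * data_bits / line_bits

    # 8GFC uses 8b/10b; 16GFC uses 64b/66b at roughly double the line rate,
    # which is how 16GFC delivers exactly twice the throughput of 8GFC.
    print(payload_gbps(8.5, 8, 10))      # ~6.8 Gbit/s usable for 8GFC
    print(payload_gbps(14.025, 64, 66))  # ~13.6 Gbit/s usable for 16GFC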
Ports
Fibre Channel ports come in a variety of logical configurations. The most common types of ports are:
N_Port (Node port) An N_Port is typically an HBA port that connects to a switch's F_Port or another N_Port; formally, it is an Nx_Port communicating through a PN_Port that is not operating a Loop Port State Machine.
F_Port (Fabric port) An F_Port is a switch port that is connected to an N_Port.
E_Port (Expansion port) Switch port that attaches to another E_Port to create an Inter-Switch Link.
Fibre Channel Loop protocols create multiple types of Loop Ports:
L_Port (Loop port) FC_Port that contains Arbitrated Loop functions associated with the Arbitrated Loop topology.
FL_Port (Fabric Loop port) L_Port that is able to perform the function of an F_Port, attached via a link to one or more NL_Ports in an Arbitrated Loop topology.
NL_Port (Node Loop port) PN_Port that is operating a Loop port state machine.
If a port can support loop and non-loop functionality, the port is known as:
Fx_Port switch port capable of operating as an F_Port or FL_Port.
Nx_Port end point for Fibre Channel frame communication, having a distinct address identifier and Name_Identifier, providing an independent set of FC-2V functions to higher levels, and having the ability to act as an Originator, a Responder, or both.
Ports have virtual components and physical components and are described as:
PN_Port entity that includes a Link_Control_Facility and one or more Nx_Ports.
VF_Port (Virtual F_Port) instance of the FC-2V sublevel that connects to one or more VN_Ports.
VN_Port (Virtual N_Port) instance of the FC-2V sublevel. VN_Port is used when it is desired to emphasize support for multiple Nx_Ports on a single Multiplexer (e.g., via a single PN_Port).
VE_Port (Virtual E_Port) instance of the FC-2V sublevel that connects to another VE_Port or to a B_Port to create an Inter-Switch Link.
The following types of ports are also used in Fibre Channel:
A_Port (Adjacent port) combination of one PA_Port and one VA_Port operating together.
B_Port (Bridge Port) Fabric inter-element port used to connect bridge devices with E_Ports on a Switch.
D_Port (Diagnostic Port) A configured port used to perform diagnostic tests on a link with another D_Port.
EX_Port A type of E_Port used to connect to an FC router fabric.
G_Port (Generic Fabric port) Switch port that may function either as an E_Port, A_Port, or as an F_Port.
GL_Port (Generic Fabric Loop port) Switch port that may function either as an E_Port, A_Port, or as an Fx_Port.
PE_Port LCF within the Fabric that attaches to another PE_Port or to a B_Port through a link.
PF_Port LCF within a Fabric that attaches to a PN_Port through a link.
TE_Port (Trunking E_Port) A trunking expansion port that expands the functionality of E ports to support VSAN trunking, Transport quality of service (QoS) parameters, and Fibre Channel trace (fctrace) feature.
U_Port (Universal port) A port waiting to become another port type
VA_Port (Virtual A_Port) instance of the FC-2V sublevel of Fibre Channel that connects to another VA_Port.
VEX_Port VEX_Ports are no different from EX_Ports, except that the underlying transport is IP rather than FC.
Media and modules
The Fibre Channel physical layer is based on serial connections that use fiber optics or copper between corresponding pluggable modules. The modules may have a single lane, dual lanes or quad lanes that correspond to the SFP, SFP-DD and QSFP form factors. Fibre Channel has not used 8 or 16 lane modules (like CFP8, QSFP-DD, or COBO) used in 400GbE and has no plans to use these expensive and complex modules.
The small form-factor pluggable transceiver (SFP) module and its enhanced version SFP+, SFP28 and SFP56 are common form factors for Fibre Channel ports. SFP modules support a variety of distances via multi-mode and single-mode optical fiber as shown in the table below. The SFP module uses duplex fiber cabling that has LC connectors.
The SFP-DD module is used for high density applications that need to double the throughput of an SFP port. The SFP-DD is defined by the SFP-DD MSA and enables breakout to two SFP ports. Two rows of electrical contacts enable the doubling of the throughput of the module in a similar fashion as the QSFP-DD.
The quad small form-factor pluggable (QSFP) module began being used for switch inter-connectivity and was later adopted for use in 4-lane implementations of Gen 6 Fibre Channel supporting 128GFC. The QSFP uses either the LC connector for 128GFC-CWDM4 or an MPO connector for 128GFC-SW4 or 128GFC-PSM4. The MPO cabling uses 8- or 12-fiber cabling infrastructure that connects to another 128GFC port or may be broken out into four duplex LC connections to 32GFC SFP+ ports. Fibre Channel switches use either SFP or QSFP modules.
Modern Fibre Channel devices support the SFP+ transceiver, mainly with the LC (Lucent Connector) fiber connector. Older 1GFC devices used the GBIC transceiver, mainly with the SC (Subscriber Connector) fiber connector.
Storage area networks
The goal of Fibre Channel is to create a storage area network (SAN) to connect servers to storage.
The SAN is a dedicated network that enables multiple servers to access data from one or more storage devices. Enterprise storage uses the SAN to back up to secondary storage devices, including disk arrays, tape libraries and other backup devices, while the storage is still accessible to the server. Servers may access storage from multiple storage devices over the network as well.
SANs are often designed with dual fabrics to increase fault tolerance. Two completely separate fabrics are operational and if the primary fabric fails, then the second fabric becomes the primary.
Switches
Fibre Channel switches can be divided into two classes. These classes are not part of the standard, and the classification of every switch is a marketing decision of the manufacturer:
Directors offer a high port-count in a modular (slot-based) chassis with no single point of failure (high availability).
Switches are typically smaller, fixed-configuration (sometimes semi-modular), less redundant devices.
A fabric consisting entirely of one vendor's products is considered to be homogeneous. This is often referred to as operating in its "native mode" and allows the vendor to add proprietary features which may not be compliant with the Fibre Channel standard.
If multiple switch vendors are used within the same fabric, it is heterogeneous; the switches may only achieve adjacency if all switches are placed into their interoperability modes. This is called the "open fabric" mode as each vendor's switch may have to disable its proprietary features to comply with the Fibre Channel standard.
Some switch manufacturers offer a variety of interoperability modes above and beyond the "native" and "open fabric" states. These "native interoperability" modes allow switches to operate in the native mode of another vendor and still maintain some of the proprietary behaviors of both. However, running in native interoperability mode may still disable some proprietary features and can produce fabrics of questionable stability.
Host bus adapters
Fibre Channel HBAs, as well as CNAs, are available for all major open systems, computer architectures, and buses, including PCI and SBus. Some are OS dependent. Each HBA has a unique World Wide Name (WWN), which is similar to an Ethernet MAC address in that it uses an Organizationally Unique Identifier (OUI) assigned by the IEEE. However, WWNs are longer (8 bytes). There are two types of WWNs on an HBA; a World Wide Node Name (WWNN), which can be shared by some or all ports of a device, and a World Wide Port Name (WWPN), which is necessarily unique to each port.
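For illustration, here is a small sketch that formats an 8-byte WWN and extracts the OUI, assuming the common IEEE NAA 1 layout (a WWN of the form 10:00 followed by the 48-bit address); other NAA formats place the OUI differently:

    def parse_wwn(wwn_bytes):
        """Format an 8-byte WWN and pull out the OUI (NAA 1 layout assumed)."""
        assert len(wwn_bytes) == 8
        text = ":".join(f"{b:02x}" for b in wwn_bytes)
        naa = wwn_bytes[0] >> 4            # high nibble selects the NAA format
        oui = wwn_bytes[2:5].hex() if naa == 1 else None
        return text, naa, oui

    wwn, naa, oui = parse_wwn(bytes.fromhex("10000000c9abcdef"))
    print(wwn)   # 10:00:00:00:c9:ab:cd:ef
    print(oui)   # 0000c9, an IEEE-assigned OUI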
See also
Arbitrated loop
8b/10b encoding, 64b/66b encoding
Converged network adapter (CNA)
Fibre Channel electrical interface
Fibre Channel fabric
Fabric Application Interface Standard
Fabric Shortest Path First – routing algorithm
Fibre Channel zoning
Registered State Change Notification
Virtual Storage Area Network
Fibre Channel frame
Fibre Channel Logins (FLOGI)
Fibre Channel network protocols
Fibre Channel over Ethernet (FCoE)
Fibre Channel over IP (FCIP), contrast with Internet Fibre Channel Protocol (iFCP)
Fibre Channel switch
Fibre Channel time-out values
Gen 5 Fibre Channel
Host Bus Adapter (HBA)
Interconnect bottleneck
FATA, IDE, ATA, SATA, SAS, AoE, SCSI, iSCSI, PCI Express
IP over Fibre Channel (IPFC)
List of Fibre Channel standards
List of device bandwidths
N_Port ID Virtualization
Optical communication
Optical fiber cable
Parallel optical interface
Serial Storage Architecture (SSA)
Storage Area Network
Storage Hypervisor
World Wide Name
References
INCITS Fibre Channel standards
Sources
Clark, T. Designing Storage Area Networks, Addison-Wesley, 1999.
Further reading
– IP and ARP over Fibre Channel
– Definitions of Managed Objects for the Fabric Element in Fibre Channel Standard
– Securing Block Storage Protocols over IP
– Fibre Channel Management MIB
– Fibre Channel Routing Information MIB
– MIB for Fibre Channel's Fabric Shortest Path First (FSPF) Protocol
External links
Fibre Channel Industry Association (FCIA)
INCITS technical committee responsible for FC standards(T11)
IBM SAN Survival Guide
Introduction to Storage Area Networks
Fibre Channel overview
Fibre Channel tutorial (UNH-IOL)
Storage Networking Industry Association (SNIA)
Virtual fibre Channel in Hyper V
FC Switch Configuration Tutorial
Computer storage buses |
144676 | https://en.wikipedia.org/wiki/Man-in-the-middle%20attack | Man-in-the-middle attack | In cryptography and computer security, a man-in-the-middle, monster-in-the-middle, machine-in-the-middle, monkey-in-the-middle, meddler-in-the-middle (MITM) or person-in-the-middle (PITM) attack is a cyberattack where the attacker secretly relays and possibly alters the communications between two parties who believe that they are directly communicating with each other, as the attacker has inserted themselves between the two parties. One example of a MITM attack is active eavesdropping, in which the attacker makes independent connections with the victims and relays messages between them to make them believe they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker. The attacker must be able to intercept all relevant messages passing between the two victims and inject new ones. This is straightforward in many circumstances; for example, an attacker within the reception range of an unencrypted Wi-Fi access point could insert themselves as a man-in-the-middle. As it aims to circumvent mutual authentication, a MITM attack can succeed only when the attacker impersonates each endpoint sufficiently well to satisfy their expectations. Most cryptographic protocols include some form of endpoint authentication specifically to prevent MITM attacks. For example, TLS can authenticate one or both parties using a mutually trusted certificate authority.
Example
Suppose Alice wishes to communicate with Bob. Meanwhile, Mallory wishes to intercept the conversation to eavesdrop and optionally to deliver a false message to Bob.
First, Alice asks Bob for his public key. If Bob sends his public key to Alice, but Mallory is able to intercept it, an MITM attack can begin. Mallory sends Alice a forged message that appears to originate from Bob, but instead includes Mallory's public key.
Alice, believing this public key to be Bob's, encrypts her message with Mallory's key and sends the enciphered message back to Bob. Mallory again intercepts, deciphers the message using her private key, possibly alters it if she wants, and re-enciphers it using the public key she intercepted from Bob when he originally tried to send it to Alice. When Bob receives the newly enciphered message, he believes it came from Alice.
Alice sends a message to Bob, which is intercepted by Mallory:
Alice "Hi Bob, it's Alice. Give me your key." → Mallory Bob
Mallory relays this message to Bob; Bob cannot tell it is not really from Alice:
Alice Mallory "Hi Bob, it's Alice. Give me your key." → Bob
Bob responds with his encryption key:
Alice Mallory ← [Bob's key] Bob
Mallory replaces Bob's key with her own, and relays this to Alice, claiming that it is Bob's key:
Alice ← [Mallory's key] Mallory Bob
Alice encrypts a message with what she believes to be Bob's key, thinking that only Bob can read it:
Alice "Meet me at the bus stop!" [encrypted with Mallory's key] → Mallory Bob
However, because it was actually encrypted with Mallory's key, Mallory can decrypt it, read it, modify it (if desired), re-encrypt with Bob's key, and forward it to Bob:
Alice Mallory "Meet me at the van down by the river!" [encrypted with Bob's key] → Bob
Bob thinks that this message is a secure communication from Alice.
This example shows the need for Alice and Bob to have some way to ensure that they are truly each using each other's public keys, rather than the public key of an attacker. Otherwise, such attacks are generally possible, in principle, against any message sent using public-key technology. A variety of techniques can help defend against MITM attacks.
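The exchange above can be sketched as a toy simulation in Python, where "encryption" merely tags a message with the key used; the point is Mallory's key substitution, not real cryptography:

    class Party:
        """Toy participant with a labelled key (no real cryptography)."""
        def __init__(self, name):
            self.name = name
            self.public_key = name + "-pub"

    def encrypt(message, public_key):
        return (message, public_key)       # tag the message with the key used

    def decrypt(box, party):
        message, key = box
        assert key == party.public_key, "cannot decrypt: wrong key"
        return message

    alice, bob, mallory = Party("Alice"), Party("Bob"), Party("Mallory")

    # Mallory intercepts Bob's key in transit and hands Alice her own instead.
    key_alice_believes_is_bobs = mallory.public_key

    box = encrypt("Meet me at the bus stop!", key_alice_believes_is_bobs)
    stolen = decrypt(box, mallory)          # Mallory reads (and may alter) it
    forged = encrypt("Meet me at the van down by the river!", bob.public_key)
    print(decrypt(forged, bob))             # Bob accepts it as Alice's message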
Defense and detection
MITM attacks can be prevented or detected by two means: authentication and tamper detection. Authentication provides some degree of certainty that a given message has come from a legitimate source. Tamper detection merely shows evidence that a message may have been altered.
Authentication
All cryptographic systems that are secure against MITM attacks provide some method of authentication for messages. Most require an exchange of information (such as public keys) in addition to the message over a secure channel. Such protocols, often using key-agreement protocols, have been developed with different security requirements for the secure channel, though some have attempted to remove the requirement for any secure channel at all.
A public key infrastructure, such as Transport Layer Security, may harden Transmission Control Protocol against MITM attacks. In such structures, clients and servers exchange certificates which are issued and verified by a trusted third party called a certificate authority (CA). If the original key to authenticate this CA has not been itself the subject of a MITM attack, then the certificates issued by the CA may be used to authenticate the messages sent by the owner of that certificate. Use of mutual authentication, in which both the server and the client validate the other's communication, covers both ends of a MITM attack. If the server or client's identity is not verified or deemed as invalid, the session will end. However, the default behavior of most connections is to only authenticate the server, which means mutual authentication is not always employed and MITM attacks can still occur.
Attestments, such as verbal communications of a shared value (as in ZRTP), or recorded attestments such as audio/visual recordings of a public key hash are used to ward off MITM attacks, as visual media is much more difficult and time-consuming to imitate than simple data packet communication. However, these methods require a human in the loop in order to successfully initiate the transaction.
In a corporate environment, successful authentication (as indicated by the browser's green padlock) does not always imply secure connection with the remote server. Corporate security policies might contemplate the addition of custom certificates in workstations' web browsers in order to be able to inspect encrypted traffic. As a consequence, a green padlock does not indicate that the client has successfully authenticated with the remote server but just with the corporate server/proxy used for SSL/TLS inspection.
HTTP Public Key Pinning (HPKP), sometimes called "certificate pinning", helps prevent a MITM attack in which the certificate authority itself is compromised, by having the server provide a list of "pinned" public key hashes during the first transaction. Subsequent transactions then require that one or more of the keys in the list be used by the server in order to authenticate that transaction.
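An HPKP pin is the Base64 encoding of a SHA-256 digest of the server certificate's public key information, so the client-side check reduces to a set-membership test. A minimal sketch using only the standard library, taking the DER-encoded SubjectPublicKeyInfo bytes as given:

    import base64, hashlib

    def spki_pin(spki_der):
        """Compute an HPKP-style pin: base64(sha256(SubjectPublicKeyInfo))."""
        return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

    def pin_ok(spki_der, pinned):
        # At least one key presented by the server must match a stored pin.
        return spki_pin(spki_der) in pinned

    pinned = {spki_pin(b"example-spki-bytes")}     # learned on first contact
    print(pin_ok(b"example-spki-bytes", pinned))   # True
    print(pin_ok(b"attacker-spki-bytes", pinned))  # False: reject connection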
DNSSEC extends the DNS protocol to use signatures to authenticate DNS records, preventing simple MITM attacks from directing a client to a malicious IP address.
Tamper detection
Latency examination can potentially detect the attack in certain situations, such as long calculations that take tens of seconds, as with hash functions. To detect potential attacks, parties check for discrepancies in response times. For example: say that two parties normally take a certain amount of time to perform a particular transaction. If one transaction, however, were to take an abnormal length of time to reach the other party, this could be indicative of a third party's interference inserting additional latency in the transaction.
Quantum cryptography, in theory, provides tamper-evidence for transactions through the no-cloning theorem. Protocols based on quantum cryptography typically authenticate part or all of their classical communication with an unconditionally secure authentication scheme, such as Wegman-Carter authentication.
Forensic analysis
Captured network traffic from what is suspected to be an attack can be analyzed in order to determine whether there was an attack and, if so, determine the source of the attack. Important evidence to analyze when performing network forensics on a suspected attack includes:
IP address of the server
DNS name of the server
X.509 certificate of the server
Whether the certificate has been self-signed
Whether the certificate has been signed by a trusted certificate authority
Whether the certificate has been revoked
Whether the certificate has been changed recently
Whether other clients, elsewhere on the Internet, received the same certificate
Notable instances
A notable non-cryptographic MITM attack was perpetrated by a Belkin wireless network router in 2003. Periodically, it would take over an HTTP connection being routed through it: this would fail to pass the traffic on to its destination, but instead the router itself would respond as the intended server. The reply it sent, in place of the web page the user had requested, was an advertisement for another Belkin product. After an outcry from technically literate users, this feature was removed from later versions of the router's firmware.
In 2011, a security breach of the Dutch certificate authority DigiNotar resulted in the fraudulent issuing of certificates. Subsequently, the fraudulent certificates were used to perform MITM attacks.
In 2013, Nokia's Xpress Browser was revealed to be decrypting HTTPS traffic on Nokia's proxy servers, giving the company clear text access to its customers' encrypted browser traffic. Nokia responded by saying that the content was not stored permanently, and that the company had organizational and technical measures to prevent access to private information.
In 2017, Equifax withdrew its mobile phone apps following concern about MITM vulnerabilities.
Other notable real-life implementations include the following:
DSniff the first public implementation of MITM attacks against SSL and SSHv1
Fiddler2 HTTP(S) diagnostic tool
NSA impersonation of Google
Qaznet Trust Certificate
Superfish malware
Forcepoint Content Gateway used to perform inspection of SSL traffic at the proxy
Comcast uses MITM attacks to inject JavaScript code into third-party web pages, showing their own ads and messages on top of the pages
See also
ARP spoofing – a technique by which an attacker sends Address Resolution Protocol messages onto a local area network
Aspidistra transmitter a British radio transmitter used for World War II "intrusion" operations, an early MITM attack.
Babington Plot the plot against Elizabeth I of England, where Francis Walsingham intercepted the correspondence.
Computer security the design of secure computer systems.
Cryptanalysis the art of deciphering encrypted messages with incomplete knowledge of how they were encrypted.
Digital signature a cryptographic guarantee of the authenticity of a text, usually the result of a calculation only the author is expected to be able to perform.
Evil maid attack attack used against full disk encryption systems
Interlock protocol a specific protocol to circumvent an MITM attack when the keys may have been compromised.
Key management how to manage cryptographic keys, including generation, exchange and storage.
Key-agreement protocol a cryptographic protocol for establishing a key in which both parties can have confidence.
Man-in-the-browser a type of web browser MITM
Man-on-the-side attack a similar attack, giving only regular access to a communication channel.
Mutual authentication how communicating parties establish confidence in one another's identities.
Password-authenticated key agreement a protocol for establishing a key using a password.
Quantum cryptography the use of quantum mechanics to provide security in cryptography.
Secure channel a way of communicating resistant to interception and tampering.
References
External links
Finding Hidden Threats by Decrypting SSL (PDF). SANS Institute.
Cryptographic attacks
Computer network security
Transport Layer Security |
145035 | https://en.wikipedia.org/wiki/SIGABA | SIGABA | In the history of cryptography, the ECM Mark II was a cipher machine used by the United States for message encryption from World War II until the 1950s. The machine was also known as the SIGABA or Converter M-134 by the Army, or CSP-888/889 by the Navy, and a modified Navy version was termed the CSP-2900.
Like many machines of the era it used an electromechanical system of rotors to encipher messages, but with a number of security improvements over previous designs. No successful cryptanalysis of the machine during its service lifetime is publicly known.
History
It was clear to US cryptographers well before World War II that the single-stepping mechanical motion of rotor machines (e.g. the Hebern machine) could be exploited by attackers. In the case of the famous Enigma machine, these attacks were supposed to be thwarted by moving the rotors to random locations at the start of each new message. This, however, proved not to be secure enough, and German Enigma messages were frequently broken by cryptanalysis during World War II.
William Friedman, director of the US Army's Signals Intelligence Service, devised a system to correct for this attack by truly randomizing the motion of the rotors. His modification consisted of a paper tape reader from a teletype machine attached to a small device with metal "feelers" positioned to pass electricity through the holes. When a letter was pressed on the keyboard the signal would be sent through the rotors as it was in the Enigma, producing an encrypted version. In addition, the current would also flow through the paper tape attachment, and any holes in the tape at its current location would cause the corresponding rotor to turn, and then advance the paper tape one position. In comparison, the Enigma rotated its rotors one position with each key press, a much less random movement. The resulting design went into limited production as the M-134 Converter, and its message settings included the position of the tape and the settings of a plugboard that indicated which line of holes on the tape controlled which rotors. However, there were problems using fragile paper tapes under field conditions.
Friedman's associate, Frank Rowlett, then came up with a different way to advance the rotors, using another set of rotors. In Rowlett's design, each rotor had to be constructed such that between one and four output signals were generated, advancing one or more of the rotors (rotors normally have one output for every input). There was little money for encryption development in the US before the war, so Friedman and Rowlett built a series of "add on" devices called the SIGGOO (or M-229) that were used with the existing M-134s in place of the paper tape reader. These were external boxes containing a three-rotor setup in which five of the inputs were live, as if someone had pressed five keys at the same time on an Enigma, and the outputs were "gathered up" into five groups as well — that is, all the letters from A to E would be wired together, for instance. That way the five signals on the input side would be randomized through the rotors, and come out the far side with power in one of five lines. Now the movement of the rotors could be controlled with a day code, and the paper tape was eliminated. They referred to the combination of machines as the M-134-C.
In 1935 they showed their work to Joseph Wenger, a cryptographer in the OP-20-G section of the U.S. Navy. He found little interest for it in the Navy until early 1937, when he showed it to Commander Laurance Safford, Friedman's counterpart in the Office of Naval Intelligence. He immediately saw the potential of the machine, and he and Commander Seiler then added a number of features to make the machine easier to build, resulting in the Electric Code Machine Mark II (or ECM Mark II), which the navy then produced as the CSP-889 (or 888).
Oddly, the Army was unaware of either the changes or the mass production of the system, but were "let in" on the secret in early 1940. In 1941 the Army and Navy joined in a joint cryptographic system, based on the machine. The Army then started using it as the SIGABA. Just over 10,000 machines were built.
On 26 June 1942, the Army and Navy agreed not to allow SIGABA machines to be placed in foreign territory except where armed American personnel were able to protect the machine. The SIGABA would be made available to another Allied country only if personnel of that country were denied direct access to the machine or its operation by an American liaison officer who would operate it.
Description
SIGABA was similar to the Enigma in basic theory, in that it used a series of rotors to encipher every character of the plaintext into a different character of ciphertext. Unlike Enigma's three rotors however, the SIGABA included fifteen, and did not use a reflecting rotor.
The SIGABA had three banks of five rotors each; the action of two of the banks controlled the stepping of the third.
The main bank of five rotors was termed the cipher rotors (Army) or alphabet maze (Navy) and each rotor had 26 contacts. This assembly acted similarly to other rotor machines, such as the Enigma; when a plaintext letter was entered, a signal would enter one side of the bank and exit the other, denoting the ciphertext letter. Unlike the Enigma, there was no reflector.
The second bank of five rotors was termed the control rotors or stepping maze. These were also 26-contact rotors. The control rotors received four signals at each step. After passing through the control rotors, the outputs were divided into ten groups of various sizes, ranging from 1–6 wires. Each group corresponded to an input wire for the next bank of rotors.
The third bank of rotors was called the index rotors. These rotors were smaller, with only ten contacts, and did not step during the encryption. After travelling through the index rotors, one to four of five output lines would have power. These then turned the cipher rotors.
The SIGABA advanced one or more of its main rotors in a complex, pseudorandom fashion. This meant that attacks which could break other rotor machines with more simple stepping (for example, Enigma) were made much more complex. Even with the plaintext in hand, there were so many potential inputs to the encryption that it was difficult to work out the settings.
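A heavily simplified Python sketch of this cascaded control idea, in which one bank's outputs decide which rotors of another bank step; the rotor wirings here are random stand-ins, and the index-rotor grouping is replaced by a simple mod-5 reduction, so this illustrates the principle only, not SIGABA itself:

    import random

    random.seed(1)
    ALPHA = 26

    def make_rotor():
        perm = list(range(ALPHA))
        random.shuffle(perm)               # stand-in wiring, not SIGABA's
        return {"perm": perm, "pos": 0}

    def through(rotor, signal):
        # Pass a signal through a rotor at its current position.
        return (rotor["perm"][(signal + rotor["pos"]) % ALPHA]
                - rotor["pos"]) % ALPHA

    cipher_bank = [make_rotor() for _ in range(5)]
    control_bank = [make_rotor() for _ in range(5)]

    def step_cipher_rotors():
        # Four energized lines pass through the control bank; their outputs,
        # grouped into five lines, select which cipher rotors advance.
        selected = set()
        for signal in (0, 1, 2, 3):
            for rotor in control_bank:
                signal = through(rotor, signal)
            selected.add(signal % 5)
        for i in selected:
            cipher_bank[i]["pos"] = (cipher_bank[i]["pos"] + 1) % ALPHA
        control_bank[2]["pos"] = (control_bank[2]["pos"] + 1) % ALPHA

    step_cipher_rotors()
    print([r["pos"] for r in cipher_bank])  # between one and four rotors moved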
On the downside, the SIGABA was also large, heavy, expensive, difficult to operate, mechanically complex, and fragile. It was nowhere near as practical a device as the Enigma, which was smaller and lighter than the radios with which it was used. It found widespread use in the radio rooms of US Navy ships, but as a result of these practical problems the SIGABA simply couldn't be used in the field. In most theatres other systems were used instead, especially for tactical communications. One of the most famous was the use of Navajo code talkers for tactical field communications in the Pacific Theater. In other theatres, less secure, but smaller, lighter, and sturdier machines were used, such as the M-209. SIGABA, impressive as it was, was overkill for tactical communications. This said, new speculative evidence emerged more recently that the M-209 code was broken by German cryptanalysts during World War II.
Operation
Because SIGABA did not have a reflector, a 26+ pole switch was needed to change the signal paths through the alphabet maze between the encryption and decryption modes. The long “controller” switch was mounted vertically, with its knob on the top of the housing. It had five positions, O, P, R, E and D. Besides encrypt (E) and decrypt (D), it had a plain text position (P) that printed whatever was typed on the output tape, and a reset position (R) that was used to set the rotors and to zeroize the machine. The O position turned the machine off. The P setting was used to print the indicators and date/time groups on the output tape. It was the only mode that printed numbers. No printing took place in the R setting, but digit keys were active to increment rotors.
During encryption, the Z key was connected to the X key and the space bar produced a Z input to the alphabet maze. A Z was printed as a space on decryption. The reader was expected to understand that a word like “xebra” in a decrypted message was actually “zebra.” The printer automatically added a space between each group of five characters during encryption.
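This space and Z handling is easy to state as code; a small sketch, purely illustrative of the convention:

    def prepare_plaintext(text):
        # On encryption: any literal Z is typed as X, and spaces become Z.
        return text.upper().replace("Z", "X").replace(" ", "Z")

    def render_decryption(text):
        # On decryption: Z prints as a space; "xebra" must be read as "zebra".
        return text.replace("Z", " ")

    enc_input = prepare_plaintext("zebra at noon")
    print(enc_input)                      # XEBRAZATZNOON
    print(render_decryption(enc_input))   # XEBRA AT NOON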
The SIGABA was zeroized when all the index rotors read zero in their low order digit and all the alphabet and code rotors were set to the letter O. Each rotor had a cam that caused the rotor to stop in the proper position during the zeroize process.
SIGABA’s rotors were all housed in a removable frame held in place by four thumb screws. This allowed the most sensitive elements of the machine to be stored in more secure safes and to be quickly thrown overboard or otherwise destroyed if capture was threatened. It also allowed a machine to quickly switch between networks that used different rotor orders. Messages had two 5-character indicators: an exterior indicator that specified the system being used and the security classification, and an interior indicator that determined the initial settings of the code and alphabet rotors. The key list included separate index rotor settings for each security classification. This prevented lower classification messages from being used as cribs to attack higher classification messages.
The Navy and Army had different procedures for the interior indicator. Both started by zeroizing the machine and having the operator select a random 5-character string for each new message. This was then encrypted to produce the interior indicator. Army key lists included an initial setting for the rotors that was used to encrypt the random string. The Navy operators used the keyboard to increment the code rotors until they matched the random character string. The alphabet rotor would move during this process and their final position was the internal indicator. In case of joint operations, the Army procedures were followed.
The key lists included a “26-30” check string. After the rotors were reordered according to the current key, the operator would zeroize the machine, encrypt 25 characters and then encrypt “AAAAA”. The ciphertext resulting from the five A’s had to match the check string. The manual warned that typographical errors were possible in key lists and that a four character match should be accepted.
The manual also gave suggestions on how to generate random strings for creating indicators. These ranged from using playing cards and poker chips to selecting characters from ciphertexts and using the SIGABA itself as a random character generator.
Security
Although the SIGABA was extremely secure, the US continued to upgrade its capability throughout the war, for fear of the Axis cryptanalytic ability to break SIGABA's code. When the Germans' ENIGMA messages and Japan's Type B Cipher Machine were broken, the messages were closely scrutinized for signs that Axis forces were able to read the US cryptography codes. Axis prisoners of war (POWs) were also interrogated with the goal of finding evidence that US cryptography had been broken. However, neither the Germans nor the Japanese were making any progress in breaking the SIGABA code. A decrypted JN-A-20 message, dated 24 January 1942, sent from the naval attaché in Berlin to the vice chief of the Japanese Naval General Staff in Tokyo, declared “joint Jap[anese]-German cryptanalytical efforts” to be “highly satisfactory,” since the “German[s] have exhibited commendable ingenuity and recently experienced some success on English Navy systems,” but were “encountering difficulty in establishing successful techniques of attack on ‘enemy’ code setup.” In another decrypted JN-A-20 message, the Germans admitted that their progress in breaking US communications was unsatisfactory. The Japanese also admitted in their own communications that they had made no real progress against the American cipher system. In September 1944, when the Allies were advancing steadily on the Western front, the war diary of the German Signal Intelligence Group recorded: "U.S. 5-letter traffic: Work discontinued as unprofitable at this time".
SIGABA systems were closely guarded at all times, with separate safes for the system base and the code-wheel assembly, but there was one incident where a unit was lost for a time. On February 3, 1945, a truck carrying a SIGABA system in three safes was stolen while its guards were visiting a brothel in recently-liberated Colmar, France. General Eisenhower ordered an extensive search, which finally discovered the safes six weeks later in a nearby river.
Interoperability with Allied counterparts
The need for cooperation among the US/British/Canadian forces in carrying out joint military operations against Axis forces gave rise to the need for a cipher system that could be used by all Allied forces. This functionality was achieved in three different ways. Firstly, the ECM Adapter (CSP 1000), which could be retrofitted on Allied cipher machines, was produced at the Washington Naval Yard ECM Repair Shop. A total of 3,500 adapters were produced. The second method was to adapt the SIGABA for interoperation with a modified British machine, the Typex. The common machine was known as the Combined Cipher Machine (CCM), and was used from November 1943. Because of the high cost of production, only 631 CCMs were made. The third way was the most common and most cost-effective. It was the "X" Adapter manufactured by the Teletype Corporation in Chicago. A total of 4,500 of these adapters were installed at depot-level maintenance facilities.
See also
Mercury — British machine which also used rotors to control other rotors
SIGCUM — teleprinter encryption system which used SIGABA-style rotors
References
Notes
Sources
Mark Stamp, Wing On Chan, "SIGABA: Cryptanalysis of the Full Keyspace", Cryptologia v 31, July 2007, pp 201–222
Rowlett wrote a book about SIGABA (Aegean Press, Laguna Hills, California).
Michael Lee, "Cryptanalysis of the Sigaba", Masters Thesis, University of California, Santa Barbara, June 2003 (PDF) (PS).
John J. G. Savard and Richard S. Pekelney, "The ECM Mark II: Design, History and Cryptology", Cryptologia, Vol 23(3), July 1999, pp211–228.
Crypto-Operating Instructions for ASAM 1, 1949.
CSP 1100(C), Operating Instructions for ECM Mark 2 (CSP 888/889) and CCM Mark 1 (CSP 1600), May 1944.
George Lasry, "A Practical Meet-in-the-Middle Attack on SIGABA", 2nd International Conference on Historical Cryptology, HistoCrypt 2019.
George Lasry, "Cracking SIGABA in less than 24 hours on a consumer PC", Cryptologia, 2021.
External links
Electronic Cipher Machine (ECM) Mark II by Rich Pekelney
SIGABA simulator for Windows
Code Book Tool for the Sigaba Simulator (Windows 2000-XP)
The ECM Mark II, also known as SIGABA, M-134-C, and CSP-889 — by John Savard
Cryptanalysis of SIGABA, Michael Lee, University of California Santa Barbara Masters Thesis, 2003
The SIGABA ECM Cipher Machine - A Beautiful Idea
A Practical Meet-in-the-Middle Attack on SIGABA by George Lasry
World War II military equipment of the United States
Rotor machines
Encryption devices
Cryptographic hardware
United States Army Signals Intelligence Service |
145140 | https://en.wikipedia.org/wiki/Back%20Orifice%202000 | Back Orifice 2000 | Back Orifice 2000 (often shortened to BO2k) is a computer program designed for remote system administration. It enables a user to control a computer running the Microsoft Windows operating system from a remote location. The name is a pun on Microsoft BackOffice Server software.
BO2k debuted on July 10, 1999, at DEF CON 7, a computer security convention in Las Vegas, Nevada. It was originally written by Dildog, a member of US hacker group Cult of the Dead Cow. It was a successor to the cDc's Back Orifice remote administration tool, released the previous year, and continued to be actively developed after its launch.
Whereas the original Back Orifice was limited to the Windows 95 and Windows 98 operating systems, BO2k also supports Windows NT, Windows 2000 and Windows XP. Some BO2k client functionality has also been implemented for Linux systems. In addition, BO2k was released as free software, which allows one to port it to other operating systems.
Plugins
BO2k has a plugin architecture. The optional plugins include:
communication encryption with AES, Serpent, CAST-256, IDEA or Blowfish encryption algorithms
network address altering notification by email and CGI
total remote file control
remote Windows registry editing
watching at the desktop remotely by streaming video
remote control of both the keyboard and the mouse
a chat feature, allowing the administrator to converse with users
option to hide things from system (rootkit behavior, based on FU Rootkit)
accessing systems hidden by a firewall (the administrated system can form a connection outward to the administrator's computer. Optionally, to escape even more connection problems, the communication can be done by a web browser the user uses to surf the web.)
forming connection chains through a number of administrated systems
client-less remote administration over IRC
on-line keypress recording
Controversy
Back Orifice and Back Orifice 2000 are widely regarded as malware, tools intended to be used as a combined rootkit and backdoor. For example, at present many antivirus software packages identify them as Trojan horses. This classification is justified by the fact that BO2k can be installed by a Trojan horse, in cases where it is used by an unauthorized user, unbeknownst to the system administrator.
There are several reasons for this, including: the association with cDc; the tone of the initial product launch at DEF CON (including that the first distribution of BO2k by cDc was infected by the CIH virus); the existence of tools (such as "Silk Rope") designed to add BO2k dropper capability to self-propagating malware; and the fact that it has actually widely been used for malicious purposes. The most common criticism is that BO2k installs and operates silently, without warning a logged-on user that remote administration or surveillance is taking place. According to the official BO2k documentation, the person running the BO2k server is not supposed to know that it is running on their computer.
BO2k developers counter these concerns in their Note on Product Legitimacy and Security, pointing out—among other things—that some remote administration tools widely recognized as legitimate also have options for silent installation and operation.
See also
Sub7
MiniPanzer and MegaPanzer
File binder
External links
References
Windows remote administration software
Cult of the Dead Cow software
Remote administration software
145630 | https://en.wikipedia.org/wiki/GNOME%20Evolution | GNOME Evolution | GNOME Evolution (formerly Novell Evolution and Ximian Evolution, prior to Novell's 2003 acquisition of Ximian) is the official personal information manager for GNOME. It has been an official part of GNOME since Evolution 2.0 was included with the GNOME 2.8 release in September 2004. It combines e-mail, address book, calendar, task list and note-taking features. Its user interface and functionality is similar to Microsoft Outlook. Evolution is free software licensed under the terms of the GNU Lesser General Public License (LGPL).
Features
Evolution delivers the following features:
E-mail retrieval with the POP and IMAP protocols and e-mail transmission with SMTP
Secure network connections encrypted with SSL, TLS and STARTTLS
E-mail encryption with GPG and S/MIME
E-mail filters
Search folders: saved searches that look like normal mail folders as an alternative to using filters and search queries
Automatic spam filtering with SpamAssassin and Bogofilter
Connectivity to Microsoft Exchange Server, Novell GroupWise and Kolab (provided in separate packages as plug-ins)
Calendar support for the iCalendar file format, the WebDAV and CalDAV standards and Google Calendar
Contact management with local address books, CardDAV, LDAP and Google address books
Synchronization via SyncML with SyncEvolution and with Palm OS devices via gnome-pilot
Address books that can be used as a data source in LibreOffice
User avatar loading from the address book, from the X-Face and Face e-mail headers, or by automatic lookup from the Gravatar service using a hash of the e-mail address (see the sketch after this list)
An RSS reader plug-in
A news client
Import from Microsoft Outlook archives (dbx, pst) and Berkeley Mailbox
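The Gravatar lookup mentioned above is a simple convention: the avatar URL embeds the MD5 hash of the trimmed, lower-cased e-mail address. A short sketch:

    import hashlib

    def gravatar_url(email, size=80):
        """Build a Gravatar avatar URL from an e-mail address."""
        digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
        return f"https://www.gravatar.com/avatar/{digest}?s={size}"

    print(gravatar_url("User@Example.com"))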
The Novell GroupWise plug-in is no longer in active development. A Scalix plug-in is also available, but its development stopped in 2009.
Evolution Data Server
Evolution Data Server (EDS) is a collection of libraries and session services for storing address books and calendars. Other software such as California and GNOME Calendar depends on EDS as well.
Some documentation about the software architecture is available in the GNOME wiki.
Connecting to Microsoft Exchange Server
Depending on which version of Microsoft Exchange Server is used, different packages need to be installed to be able to connect to it. The documentation recommends the evolution-ews package (which uses Exchange Web Services) for Exchange Server 2007, 2010 and newer. If evolution-ews does not work well, it is advised to try the evolution-mapi package. This supports Exchange Server 2010, 2007 and possibly older versions supporting MAPI. For Exchange Server 2003, 2000 and possibly earlier versions supporting Outlook Web App the package evolution-exchange is recommended.
History
Ximian decided to develop Evolution in 2000. It felt there were no e-mail clients for Linux at the time that could provide the functionality and interoperability necessary for corporate users. Ximian saw an opportunity for Linux to penetrate the corporate environment if the right enterprise software was available for it. It released Evolution 1.0 in December 2001 and offered the paid Ximian Connector plug-in which allowed users to connect with Microsoft Exchange Server. Evolution itself has been free software from the start, but Ximian Connector was sold as proprietary software so that Ximian could generate revenue. This changed after Novell's acquisition of Ximian in August 2003. Novell decided to integrate the Exchange plug-in as free software in Evolution 2.0 in May 2004.
Novell was in turn acquired by The Attachmate Group in 2011. It transferred Novell's former Evolution developers to its subsidiary SUSE. In 2012 SUSE decided to stop its funding of Evolution's development and assigned its developers elsewhere. As a consequence only two full-time developers employed by Red Hat remained. Later in 2013 Red Hat dedicated more developers to the project, reinvigorating its development. The reasons given for the decision were the cessation of active development on Mozilla Thunderbird and the need for an e-mail client with good support for Microsoft Exchange.
Distribution
As a part of GNOME, Evolution is released as source code. Linux distributions provide packages of GNOME for end-users. Evolution is used as the default personal information manager on several Linux distributions which use GNOME by default, most notably Debian and Fedora. Ubuntu has replaced Evolution with Mozilla Thunderbird as the default e-mail client since Ubuntu 11.10 Oneiric Ocelot.
Defunct Mac OS X and Windows ports
In the past, Evolution was ported to Apple Mac OS X and Microsoft Windows, but these ports are no longer developed.
In 2006, Novell released an installer for Evolution 2.6 on Mac OS X. In January 2005, Novell's Nat Friedman announced in his blog that the company had hired Tor Lillqvist, the programmer who ported GIMP to Microsoft Windows, to do the same with Evolution. Prior to this announcement, several projects with the same goal had been started but none of them reached alpha status. In 2008 DIP Consultants released a Windows installer for Evolution 2.28.1-1 for Microsoft Windows XP and newer. Currently it is only available for download from the project's page on SourceForge.
A slightly more recent (2010/2011) experimental installer for Evolution 3.0.2 is provided by openSUSE. Users have faced difficulties getting this version working.
See also
Geary – another email client for GNOME
List of personal information managers
List of applications with iCalendar support
Comparison of e-mail clients
Freeview (UK)
Freeview is the United Kingdom's sole digital terrestrial television platform. It is operated by DTV Services Ltd, a joint venture between the BBC, ITV, Channel 4, Sky and transmitter operator Arqiva. It was launched on 30 October 2002, taking over the licence from ITV Digital, which collapsed that year. The service provides consumer access via an aerial to the seven DTT multiplexes covering the United Kingdom. As of July 2020, it has 85 TV channels, 26 digital radio channels, 10 HD channels, six text services, 11 streamed channels, and one interactive channel.
DTV Services' delivery of standard-definition television and radio is labelled Freeview, while its delivery of HDTV is called Freeview HD. Reception of Freeview requires a DVB-T/DVB-T2 tuner, either in a separate set-top box or built into the TV set. Since 2008 all new TV sets sold in the United Kingdom have a built-in Freeview tuner. Freeview HD requires a HDTV-capable tuner. Digital video recorders (DVRs) with a built-in Freeview tuner are labelled Freeview+. Depending on model, DVRs and HDTV sets with a Freeview tuner may offer standard Freeview or Freeview HD. Freeview Play is a more recent addition which adds direct access to catch-up services via the Internet.
The technical specification for Freeview is published and maintained by the Digital TV Group, the industry association for digital TV in the UK, which also provides the test and conformance regime for Freeview, Freeview+ and Freeview HD products. DMOL (DTT Multiplex Operators Ltd.), a company owned by the operators of the six DTT multiplexes (BBC, ITV, Channel 4, and Arqiva), is responsible for technical platform management and policy, including the electronic programme guide and channel numbering.
History
Freeview officially launched on 30 October 2002 at 5 am, when the BBC and Crown Castle (now Arqiva) officially took over the digital terrestrial television (DTT) licences to broadcast on the three multiplexes from the defunct ITV Digital. The founding members of DTV Services, who trade as Freeview, were the BBC, Crown Castle UK and British Sky Broadcasting. On 11 October 2006, ITV plc and Channel 4 became equal shareholders. Since then, the Freeview model has been copied in Australia and New Zealand.
Although all pay channels had closed down with ITV Digital, many free-to-air channels continued broadcasting, including the five analogue channels and digital channels such as ITV2, the ITN News Channel, S4C2, TV Travel Shop and QVC. Further free-to-air channels were available from the launch of Freeview, including Sky Travel, UK History, Sky News, Sky Sports News, The Hits (now 4Music) and TMF (renamed Viva, now defunct). BBC Four and the interactive BBC streams were moved to multiplex B. Under the initial plans, the two multiplexes operated by Crown Castle would carry eight channels altogether. The seventh stream became shared by UK Bright Ideas and Ftn, which launched in February 2003. The eighth stream was left unused until April 2004, when the shopping channel Ideal World launched on Freeview. There are now 14 streams carried by the two multiplexes, with Multiplex C carrying six streams and Multiplex D carrying eight. It has since been announced that more streams are available on the multiplexes and that bidding is under way.
2009 retune
The Freeview service underwent a major upgrade on 30 September 2009, which required 18 million households to retune their Freeview receiving equipment. The changes, meant to ensure proper reception of Channel 5, led to several thousand complaints from people who lost channels (notably ITV3 and ITV4) as a result of retuning their equipment. The Freeview website crashed and the call centre was inundated as a result of the problems. The change involved an update to the NIT (Network Information Table), which some receivers could not accommodate, leaving many thousands of people unable to receive some channels; this included 460,000 households fed from relay stations, who lost access to ITV3 and ITV4. Updates were broadcast to enable firmware changes, but in some cases the receiver had to be left on and receiving broadcasts to accept the updates, and not everyone was aware of this.
2014 retune
The Freeview service underwent a major upgrade on 3 September 2014 which required 18 million households to retune their Freeview receiving equipment. The changes included a reshuffle of the Children's, News, and Interactive genres.
A number of new HD channels launched in 2014, from a new group of multiplexes awarded to Arqiva. The new HD channels were launched in selected areas on 10 December 2013 with a further roll-out during 2014.
Temporary multiplex removal
The temporary multiplexes are Arqiva-owned DVB-T2 multiplexes called COM7 and COM8, receivable on Freeview HD capable devices and carrying a number of channels, including HD channels. COM7 is made up mostly of +1 and HD services such as More4+1 and BBC News HD. COM8 consisted of +1s, HDs and other channels such as Now 80s, PBS America+1 and BBC Four HD. These multiplexes are gradually being switched off: COM8 closed on 6 June 2020, with many +1 and HD channels such as 5Star+1 and 4seven HD closing and others (like Now 80s) moving to COM7.
Technical problems
On 10 August 2021, the 315-metre Bilsdale transmitter caught fire leaving up to a million homes in the North East of England without a TV or radio signal.
Work is ongoing to restore services, but delays to the granting of planning permission for an 80-metre temporary mast sited at Bilsdale, and the lack of safe access to the site, have left up to half a million homes without a service as of 8 September 2021.
On the evening of 25 September 2021, transmissions of Freeview channels operated by the BBC, Channel 4 and ViacomCBS (Channel 5) were disrupted by the activation of a fire suppressant system at the premises of Red Bee Media. The BBC moved its playout from White City to Salford, and Channel 5 went into 'recovery mode', with viewers seeing an additional black-and-white symbol at the top of the screen. Channel 4's channels went off air for a number of hours, with E4+1 and 4Music still off air on Monday 27 September (though 4Music's channel 30 slot was relaying the output of The Box, with its back-to-back music video format, on that date).
Channels
The Freeview service broadcasts free-to-air television channels, radio stations and interactive services from the existing public service broadcasters. Channels on the service include the BBC, ITV, Channel 4 and Channel 5 terrestrial channels, as well as their digital services. In addition, channels from other commercial operators, such as Sky and UKTV, are available, as well as radio services from a number of broadcasters.
The full range of channels broadcast via digital terrestrial television includes some pay television services such as BoxNation and Racing UK. These channels are listed in the on-screen electronic programme guides displayed by many Freeview receivers, but can be viewed only by subscribers with appropriate equipment.
As of January 2020, excluding channels such as S4C and the many local TV services (one such service is included in the count), the channels total 105 on Freeview, 17 on Freeview HD and 33 radio stations.
Reception equipment
Receivers
To receive Freeview, either a television with an integrated digital tuner or an older analogue television with a suitable Freeview-branded set-top box is required.
Aerial
An aerial is required for viewing any broadcast television transmissions. For all transmissions indoor, loft-mounted, and external aerials are available. In regions of strong signal an indoor aerial may be adequate; in marginal areas a high-gain external aerial mounted high above the ground with an electronic amplifier at its top may be needed.
Aerial requirements for analogue (the old standard) and digital reception in the UK are identical; there is no such thing as a special "digital aerial", although installers and suppliers often falsely say one is necessary. As the signal degrades, an analogue picture degrades gradually, but a digital picture holds up well and then suddenly becomes unwatchable; an aerial which gave poor analogue viewing may therefore give unwatchable, rather than poor, digital viewing and need replacing, at a typical cost of £80 to £180, most of which is fitting cost. An aerial intended for external use may be fitted indoors if there is space and the signal is strong enough.
Services
The Digital TV Group, the industry association for digital television in the UK, is responsible for co-ordination between Freeview and other digital services.
The original Freeview was later expanded with additional facilities (Freeview+), high-definition channels (Freeview HD), and Internet connectivity (Freeview Play). All services remain available; the original Freeview equipment will work (unenhanced) in the same way it always did.
Freeview
The original Freeview service allowed a large number of digital television channels to be received on a compatible television receiver, set-top box, or personal video recorder. An electronic programme guide was available. Freeview channels are not encrypted and can be received by anyone in the UK. There is no additional charge to receive Freeview but it is a legal obligation to hold a current television licence to watch or record TV as it is being broadcast.
A subscription-based DTT service, Top Up TV, launched in March 2004. The Top Up TV service was not connected with the Freeview service, but ran alongside it on the DTT platform and was included in the Freeview EPG; programmes could be received on some Freeview set-top boxes and televisions equipped with a card slot or CI slot. Top Up TV was replaced in 2006, by a service that did not run on Freeview equipment.
The Freeview logo certification for standard definition (SD) receivers and recorders was withdrawn in January 2017.
Freeview HD
Freeview HD comprises a number of high-definition versions of existing channels. It requires a different high-definition tuner, and does not supersede or replace standard Freeview.
On 20 August 2020, Freeview announced that it would phase out the Freeview HD brand in 2022.
Channels
With two channels (BBC HD and ITV HD) Freeview HD completed a "technical launch" on 2 December 2009 from Winter Hill (as a full power service) and Crystal Palace (as a reduced power temporary service). It operates on multiplex BBC B (aka Multiplex B or PSB3). The service was broadcast to all regions by the end of 2012. Channel 4 HD commenced test broadcasts on 25 March 2010 with an animated caption, ahead of its full launch on 30 March 2010, coinciding with the commercial launch of Freeview HD. S4C Clirlun launched on 30 April 2010, in Wales, where Channel 4 HD did not broadcast. STV HD launched in Scotland, where ITV HD does not broadcast, on 6 June 2010. S4C Clirlun closed on 1 December 2012, allowing Channel 4 HD to begin broadcasting in Wales.
Five HD was due to launch during 2010 but was unable to reach 'key criteria' to keep its slot. Spare allocation on multiplex B was handed over to the BBC, two years from the date when it was anticipated that further capacity on multiplex B would revert to the control of the BBC Trust. On 3 November 2010, BBC One HD launched on Freeview HD. Initially it was available in addition to the existing BBC HD channel, which continued to show the "best of the rest" of the BBC in HD. However, BBC HD was replaced by BBC Two HD on 26 March 2013.
Until 17 October 2011, the Commercial Public Service Broadcasters had the opportunity to apply to Ofcom to provide an additional HD service from between 28 November 2011 and 1 April 2012. Channel 5 HD was the sole applicant, with the aim of launching in spring or early summer 2012. On 15 December 2011, Channel 5 dropped its bid to take the fifth slot after being unable to resolve "issues of commercial importance". Subject to any future Ofcom decision to re-advertise the slot, the capacity will remain with the BBC and can be used by it for BBC services or services provided by a third party via a commercial arrangement. The BBC temporarily used the space to broadcast a high definition simulcast of their main Freeview red button feed for the duration of the 2012 Summer Olympics, followed by a channel from Channel 4 for the 2012 Summer Paralympics. On 13 June 2013, the BBC temporarily launched a high-definition red button stream in the vacant space.
On 16 July 2013, Ofcom announced that up to 10 new HD channels would be launched by early 2014, using new capacity made available by the digital switchover. This provided additional spectrum in the 600 MHz band for additional DVB-T2 multiplexes, reaching up to 70% of the UK population. At the same time, the BBC announced that they would provide five new HD channels using the newly available capacity: BBC Three HD, BBC Four HD, CBBC HD, CBeebies HD and BBC News HD. BBC Three HD and CBBC HD launched to all viewers on 10 December 2013 using the capacity released by the Red Button HD service, and the other BBC channels launched in some regions, expanding to 70% UK coverage by June 2014.
Channel 5 HD launched on Freeview on 4 May 2016.
Technical
The Digital TV Group publishes and maintains the UK technical specification for high-definition services on digital terrestrial television (Freeview), based on the DVB-T2 standard; the specification is known as the D-Book. Freeview HD is the first operational TV service in the world using the DVB-T2 standard. This standard is incompatible with DVB-T and can only be received using compatible reception equipment. Some television receivers sold before the HD launch claimed to be "HD-ready", but this usually means only that the screen can display HD, not that DVB-T2 signals can be received; a suitable tuner (typically built into a set-top box or PVR) is additionally required. Freeview HD set-top boxes and televisions are available. To qualify for the Freeview HD logo, receivers need to be IPTV-capable and display Freeview branding, including the logo, on the electronic programme guide screen. The Freeview HD trademark requirements state that any manufacturer applying for the Freeview HD logo should submit their product to the Digital TV Group's test centre (DTG Testing) for conformance testing.
On 2 February 2010, Vestel became the first manufacturer to gain Freeview HD certification, for the Vestel T8300 set top box. Humax released the first Freeview HD reception equipment, the Humax HD-FOX T2, on 13 February 2010.
It was announced on 10 February 2009 that the signal would be encoded with MPEG-4 AVC High Profile Level 4, which supports up to 1080i30/1080p30, so 1080p50 cannot be used. The system has been designed from the start to allow regional variations in the broadcast schedule. Services are statistically multiplexed: bandwidth is dynamically allocated between channels depending on the complexity of the images, with the aim of maintaining a consistent quality rather than a specific bit rate. Video for each channel can range between 3 Mbit/s and 17 Mbit/s. AAC or Dolby Digital Plus audio is transmitted at 384 kbit/s for 5.1 surround sound, with stereo audio at 128–192 kbit/s; audio description takes up 64 kbit/s, subtitles 200 kbit/s, and the data stream for interactive applications 50 kbit/s. Recording sizes for Freeview HD television transmissions average around 3 GB per hour. Between 22 and 23 March 2011, an encoder software change allowed the Freeview version of BBC HD to automatically detect progressive material and change encoding mode appropriately, meaning the channel can switch to 1080p25. This was extended to all of the other Freeview HD channels in October 2011.
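The quoted figures are mutually consistent; a quick back-of-the-envelope check (treating 1 GB as 10^9 bytes, an assumption on our part) shows that about 3 GB per hour corresponds to an average rate near the middle of the stated video range:
```python
# What average bit rate does ~3 GB per hour of recording imply?
gigabytes_per_hour = 3
bits_per_hour = gigabytes_per_hour * 1e9 * 8   # 1 GB taken as 10^9 bytes
avg_mbit_per_s = bits_per_hour / 3600 / 1e6    # spread over one hour, in Mbit/s
print(f"{avg_mbit_per_s:.1f} Mbit/s")          # -> 6.7, within the 3-17 Mbit/s range
```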
To ensure provision of audio description, broadcasters typically use the AAC codec. Hardware restrictions allow only a single type of audio decoder to operate at any one time, so the main audio and the audio description must use the same encoding family for them to be successfully combined at the receiver. In the case of BBC HD, the main audio is coded as AAC-LC and only the audio description is encoded as HE-AAC. Neither the AAC nor the Dolby Digital Plus codec is supported by most home AV equipment, which typically accepts Dolby Digital or DTS, leaving owners with stereo, rather than surround sound, output. Transcoding from AAC to Dolby Digital or DTS and multi-channel output via HDMI was not originally necessary for Freeview HD certification. As of June 2010 the DTG D-Book includes the requirement for mandatory transcoding when sending audio via S/PDIF, and for either transcoding or multi-channel PCM audio when sending it via HDMI, in order for manufacturers to gain Freeview HD certification from April 2011. Thus equipment sold as Freeview HD before April 2011 may not deliver surround sound to audio equipment (some equipment may, but this is not mandatory); later equipment must be capable of surround sound compatible with most suitable audio equipment.
In early February 2011, it was announced that one million Freeview HD set-top boxes had been sold.
Copy protection
In August 2009 the BBC wrote to Ofcom after third-party content owners asked the BBC to undertake measures to ensure that all Freeview HD boxes would include copy protection systems as required by the Digital TV Group's D-Book, which sets technical standards for digital terrestrial television in the UK. The BBC proposed to ensure compliance with copy-protection standards on the upgraded Freeview HD multiplex by compressing the service information (SI) data, which receivers need to understand the TV services in the data stream. To encourage boxes to adopt copy protection, the BBC made its own look-up tables and decompression algorithm, necessary for decoding the EPG data on high-definition channels, available without charge only to manufacturers who implement the copy-protection technology. This technology would control the way HD films and TV shows are copied onto, for example Blu-ray discs, and shared with others over the internet. No restrictions will be placed on standard-definition services. In a formal written response, Ofcom principal advisor Greg Bensberg said that wording of the licence would probably need to be changed to reflect the fact that this new arrangement is permitted. The BBC had suggested that as an alternative to the SI compression scheme, the Freeview HD multiplex may have to adopt encryption. Bensberg said that it would appear "inappropriate to encrypt public service broadcast content on DTT".
On 14 June 2010, Ofcom agreed to allow the BBC to limit the full availability of its own and other broadcasters' high definition (HD) Freeview services to receivers that control how HD content can be used. Ofcom concluded that the decision to accept the BBC's request will deliver net benefits to licence-holders by ensuring they have access to the widest possible range of HD television content on DTT.
Freeview HD Recorder
Freeview HD Recorder (formerly Freeview+, originally named Freeview Playback) is the marketing name for Freeview-capable digital video recorders with some enhancements over the original Freeview.
All recorders are required to include the following features in addition to standard Freeview:
At least eight-day electronic programme guide (EPG)
Series link (one timer to record whole series)
Record split programmes as one programme
Offer to record related programme
Record alternative showing if there is a time conflict
Schedule changes updated in standby (e.g. scheduled recording starting early)
Accurate Recording (AR, equivalent to PDC) – programmes are recorded based on signals from the broadcaster rather than scheduled time. (Since this is based on signals from the broadcaster, the broadcaster can prevent recording by sending nonsense signals as a form of copy protection, as already happens on music channels. However, this can be circumvented by specifying a timer recording instead of a programme recording or by connecting the receiver to a traditional videocassette recorder.)
Pace plc introduced the first DTT DVR in the UK in September 2002, called the Pace Twin. However this was before the Freeview brand and its Playback and + marketing names were introduced.
Freeview Play
Freeview Play combines the existing live television service with catch-up TV (BBC iPlayer, ITV Hub, STV Player, All 4, My5, UKTV Play, CBS Catchup Channels UK) on a variety of compatible TV and set-top boxes via the user’s standard broadband Internet connection. Its main purpose is to provide easy access to catch-up services by scrolling backwards on the traditional electronic programming guide (EPG); YouView is a similar but competing combination of live Freeview and catch-up using the EPG.
The technology is an open standard, but with prominent Freeview Play branding. The service launched in October 2015 on compliant equipment, initially 2015 Panasonic TV receivers and Humax set-top boxes, including existing models with a software update. Other manufacturers were announcing new models "later this [2015] year". The 2017 specification for Freeview Play includes support for HDR video using hybrid log–gamma (HLG), when playing on demand broadband content.
Mobile app
In 2019 Freeview released an app for iOS and Android devices. The app provides a centralised TV guide for 23 channels and the ability to watch them through BBC iPlayer, ITV Hub, STV Player, All 4, My5 and UKTV Play.
See also
YouView
BT TV
Virgin Media
Freesat
Freesat from Sky
Now
High-definition television in the United Kingdom
Saorview
Military intelligence
Military intelligence is a military discipline that uses information collection and analysis approaches to provide guidance and direction to assist commanders in their decisions. This aim is achieved by providing an assessment of data from a range of sources, directed towards the commanders' mission requirements or responding to questions as part of operational or campaign planning. To provide an analysis, the commander's information requirements are first identified, which are then incorporated into intelligence collection, analysis, and dissemination.
Areas of study may include the operational environment, hostile, friendly and neutral forces, the civilian population in an area of combat operations, and other broader areas of interest. Intelligence activities are conducted at all levels, from tactical to strategic, in peacetime, the period of transition to war, and during a war itself.
Most governments maintain a military intelligence capability to provide analytical and information collection personnel in both specialist units and from other arms and services. The military and civilian intelligence capabilities collaborate to inform the spectrum of political and military activities.
Personnel performing intelligence duties may be selected for their analytical abilities and personal intelligence before receiving formal training.
Levels
Intelligence operations are carried out throughout the hierarchy of political and military activity.
Strategic
Strategic intelligence is concerned with broad issues such as economics, political assessments, military capabilities and intentions of foreign nations (and, increasingly, non-state actors). Such intelligence may be scientific, technical, tactical, diplomatic or sociological, but these changes are analyzed in combination with known facts about the area in question, such as geography, demographics and industrial capacities.
Strategic intelligence is formally defined as "intelligence required for the formation of policy and military plans at national and international levels", and corresponds to the strategic level of warfare, which is formally defined as "the level of warfare at which a nation, often as a member of a group of nations, determines national or multinational (alliance or coalition) strategic security objectives and guidance, then develops and uses national resources to achieve those objectives."
Operational
Operational intelligence is focused on support to, or denial of, intelligence at the operational tier, the tier below the strategic level of leadership that concerns the planning and conduct of campaigns. It is formally defined as "intelligence that is required for planning and conducting campaigns and major operations to accomplish strategic objectives within theaters or operational areas", and aligns with the operational level of warfare, defined as "the level of warfare at which campaigns and major operations are planned, conducted, and sustained to achieve strategic objectives within theaters or other operational areas."
The term operational intelligence is also used within law enforcement to refer to intelligence that supports long-term investigations into multiple, similar targets. In the discipline of law enforcement intelligence, operational intelligence is concerned primarily with identifying, targeting, detecting and intervening in criminal activity. This law-enforcement usage is narrower in scope than the term's use in general intelligence or military/naval intelligence.
Tactical
Tactical intelligence is focused on support to operations at the tactical level and would be attached to the battlegroup. At the tactical level, briefings are delivered to patrols on current threats and collection priorities. These patrols are then debriefed to elicit information for analysis and communication through the reporting chain.
Tactical Intelligence is formally defined as "intelligence required for the planning and conduct of tactical operations", and corresponds with the Tactical Level of Warfare, itself defined as "the level of warfare at which battles and engagements are planned and executed to achieve military objectives assigned to tactical units or task forces".
Tasking
Intelligence should respond to the needs of leadership, based on the military objective and operational plans. The military objective provides a focus for the estimate process, from which a number of information requirements are derived. Information requirements may be related to terrain and impact on vehicle or personnel movement, disposition of hostile forces, sentiments of the local population and capabilities of the hostile order of battle.
In response to the information requirements, analysts examine existing information, identifying gaps in the available knowledge. Where gaps in knowledge exist, the staff may be able to task collection assets to target the requirement.
Analysis reports draw on all available sources of information, whether drawn from existing material or collected in response to the requirement. The analysis reports are used to inform the remaining planning staff, influencing planning and seeking to predict adversary intent.
This process is described as Collection Co-ordination and Intelligence Requirement Management (CCIRM).
Process
The process of intelligence has four phases: collection, analysis, processing and dissemination.
In the United Kingdom these are known as direction, collection, processing and dissemination.
In the U.S. military, Joint Publication 2-0 (JP 2-0) states: "The six categories of intelligence operations are: planning and direction; collection; processing and exploitation; analysis and production; dissemination and integration; and evaluation and feedback."
Collection
Many of the most important facts are well known or may be gathered from public sources. This form of information collection is known as open-source intelligence. For example, the population, ethnic make-up and main industries of a region are extremely important to military commanders, and this information is usually public. It is, however, imperative that the collector of information understands that what is collected is "information", which does not become intelligence until an analyst has evaluated and verified it. Collected material on the composition, disposition, strength, training, tactics and personalities (leaders) of units and elements contributes to the overall intelligence value after careful analysis.
The tonnage and basic weaponry of most capital ships and aircraft are also public, and their speeds and ranges can often be reasonably estimated by experts, often just from photographs. Ordinary facts like the lunar phase on particular days or the ballistic range of common military weapons are also very valuable to planning, and are habitually collected in an intelligence library.
A great deal of useful intelligence can be gathered from photointerpretation of detailed high-altitude pictures of a country. Photointerpreters generally maintain catalogs of munitions factories, military bases and crate designs in order to interpret munition shipments and inventories.
Most intelligence services maintain or support groups whose only purpose is to keep maps. Since maps also have valuable civilian uses, these agencies are often publicly associated or identified with other parts of the government. Some historic counterintelligence services, especially in Russia and China, have intentionally restricted public maps or placed disinformation in them; good intelligence can identify this disinformation.
It is commonplace for the intelligence services of large countries to read every published journal of the nations in which it is interested, and the main newspapers and journals of every nation. This is a basic source of intelligence.
It is also common for diplomatic and journalistic personnel to have a secondary goal of collecting military intelligence. For western democracies, it is extremely rare for journalists to be paid by an official intelligence service, but they may still patriotically pass on tidbits of information they gather as they carry on their legitimate business. Also, much public information in a nation may be unavailable from outside the country. This is why most intelligence services attach members to foreign service offices.
Some industrialized nations also eavesdrop continuously on the entire radio spectrum, interpreting it in real time. This includes not only broadcasts of national and local radio and television, but also local military traffic, radar emissions and even microwaved telephone and telegraph traffic, including satellite traffic.
The U.S. in particular is known to maintain satellites that can intercept cell-phone and pager traffic, usually referred to as the ECHELON system. Analysis of bulk traffic is normally performed by complex computer programs that parse natural language and phone numbers looking for threatening conversations and correspondents. In some extraordinary cases, undersea or land-based cables have been tapped as well.
More exotic secret information, such as encryption keys, diplomatic message traffic, policy and orders of battle are usually restricted to analysts on a need-to-know basis in order to protect the sources and methods from foreign traffic analysis.
Analysis
Analysis consists of assessment of an adversary's capabilities and vulnerabilities. In a real sense, these are threats and opportunities. Analysts generally look for the least defended or most fragile resource that is necessary for important military capabilities. These are then flagged as critical vulnerabilities. For example, in modern mechanized warfare, the logistics chain for a military unit's fuel supply is often the most vulnerable part of a nation's order of battle.
Human intelligence, gathered by spies, is usually carefully tested against unrelated sources. It is notoriously prone to inaccuracy. In some cases, sources will just make up imaginative stories for pay, or they may try to settle grudges by identifying personal enemies as enemies of the state that is paying for the intelligence. However, human intelligence is often the only form of intelligence that provides information about an opponent's intentions and rationales, and it is therefore often uniquely valuable to successful negotiation of diplomatic solutions.
In some intelligence organizations, analysis follows a procedure. First, general media and sources are screened to locate items or groups of interest, and then their location, capabilities, inputs and environment are systematically assessed for vulnerabilities using a continuously-updated list of typical vulnerabilities.
Filing
Critical vulnerabilities are then indexed in a way that makes them easily available to advisors and line intelligence personnel who package this information for policy-makers and war-fighters. Vulnerabilities are usually indexed by the nation and military unit with a list of possible attack methods.
Critical threats are usually maintained in a prioritized file, with important enemy capabilities analyzed on a schedule set by an estimate of the enemy's preparation time. For example, nuclear threats between the USSR and the U.S. were analyzed in real time by continuously on-duty staffs. In contrast, analysis of tank or army deployments is usually triggered by accumulations of fuel and munitions, which are monitored every few days. In some cases, automated analysis is performed in real time on automated data traffic.
Packaging threats and vulnerabilities for decision-makers is a crucial part of military intelligence. A good intelligence officer will stay very close to the policy-maker or war fighter to anticipate their information requirements and tailor the information needed. A good intelligence officer will also ask a fairly large number of questions in order to help anticipate needs. For an important policy-maker, the intelligence officer will have a staff to which research projects can be assigned.
Developing a plan of attack is not the responsibility of intelligence, though it helps an analyst to know the capabilities of common types of military units. Generally, policy-makers are presented with a list of threats and opportunities. They approve some basic action, and then professional military personnel plan the detailed act and carry it out. Once hostilities begin, target selection often moves into the upper end of the military chain of command. Once ready stocks of weapons and fuel are depleted, logistic concerns are often exported to civilian policy-makers.
Dissemination
The processed intelligence information is disseminated through database systems, intelligence bulletins and briefings to the different decision-makers. The bulletins may also identify resulting information requirements, thus concluding the intelligence cycle.
Military intelligence organisations
Defence Intelligence Organisation (Australia)
Intelligence Branch (Canadian Forces)
Military Intelligence (Czech Republic)
Direction du Renseignement Militaire (France)
Bundesnachrichtendienst (BND - German Federal Intelligence Service) and Militärischer Abschirmdienst (MAD- German Military Counter-Intelligence)
Strategic Intelligence Agency (Indonesia)
Defence Intelligence Agency (India)
Inter-Services Intelligence and Military Intelligence of Pakistan
Centro de Informações e Segurança Militares (CISMIL - Portugal)
Glavnoye Razvedyvatel'noye Upravleniye (GRU - Russian Military Intelligence)
Military Intelligence Agency (VOA - Serbia)
Secret Intelligence Service (MI6), Defence Intelligence and Intelligence Corps (United Kingdom)
United States Intelligence Community
Defense Intelligence Agency
G-2, US Army unit
Defence Intelligence (SANDF) (South Africa)
See also
Intelligence gathering disciplines
List of intelligence gathering disciplines
External links
Office of the Director of National Intelligence
Intelligence Resource Program of the Federation of American Scientists
Joint Publication 2-0
S2 Creating Intelligence
Virtual private network
A virtual private network (VPN) extends a private network across a public network and enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. The benefits of a VPN include increases in functionality, security, and management of the private network. It provides access to resources that are inaccessible on the public network and is typically used for telecommuting workers. Encryption is common, although not an inherent part of a VPN connection.
A VPN is created by establishing a virtual point-to-point connection through the use of dedicated circuits or with tunneling protocols over existing networks. A VPN available from the public Internet can provide some of the benefits of a wide area network (WAN). From a user perspective, the resources available within the private network can be accessed remotely.
Types
Virtual private networks may be classified by several categories:
Remote access
A host-to-network configuration is analogous to connecting a computer to a local area network. This type provides access to an enterprise network, such as an intranet. This may be employed for telecommuting workers who need access to private resources, or to enable a mobile worker to access important tools without exposing them to the public Internet.
Site-to-site
A site-to-site configuration connects two networks. This configuration expands a network across geographically disparate offices, or connects a group of offices to a data center installation. The interconnecting link may run over a dissimilar intermediate network, such as two IPv6 networks connected over an IPv4 network.
Extranet-based site-to-site
In the context of site-to-site configurations, the terms intranet and extranet are used to describe two different use cases. An intranet site-to-site VPN describes a configuration where the sites connected by the VPN belong to the same organization, whereas an extranet site-to-site VPN joins sites belonging to multiple organizations.
Typically, individuals interact with remote access VPNs, whereas businesses tend to make use of site-to-site connections for business-to-business, cloud computing, and branch office scenarios. Despite this, these technologies are not mutually exclusive and, in a significantly complex business network, may be combined to enable remote access to resources located at any given site, such as an ordering system that resides in a datacenter.
VPN systems also may be classified by:
the tunneling protocol used to tunnel the traffic
the tunnel's termination point location, e.g., on the customer edge or network-provider edge
the type of topology of connections, such as site-to-site or network-to-network
the levels of security provided
the OSI layer they present to the connecting network, such as Layer 2 circuits or Layer 3 network connectivity
the number of simultaneous connections
Security mechanisms
VPNs cannot make online connections completely anonymous, but they can usually increase privacy and security. To prevent disclosure of private information, VPNs typically allow only authenticated remote access using tunneling protocols and encryption techniques.
The VPN security model provides:
confidentiality such that even if the network traffic is sniffed at the packet level (see network sniffer and deep packet inspection), an attacker would see only encrypted data
sender authentication to prevent unauthorized users from accessing the VPN
message integrity to detect any instances of tampering with transmitted messages.
Secure VPN protocols include the following:
Internet Protocol Security (IPsec) was initially developed by the Internet Engineering Task Force (IETF) for IPv6, and support for it was required in all standards-compliant implementations of IPv6 before that requirement was downgraded to a recommendation. This standards-based security protocol is also widely used with IPv4 and the Layer 2 Tunneling Protocol. Its design meets most security goals: availability, integrity, and confidentiality. IPsec uses encryption, encapsulating an IP packet inside an IPsec packet. De-encapsulation happens at the end of the tunnel, where the original IP packet is decrypted and forwarded to its intended destination.
Transport Layer Security (SSL/TLS) can tunnel an entire network's traffic (as it does in the OpenVPN project and SoftEther VPN project) or secure an individual connection. A number of vendors provide remote-access VPN capabilities through SSL. An SSL VPN can connect from locations where IPsec runs into trouble with Network Address Translation and firewall rules. A minimal client configuration is sketched after this list.
Datagram Transport Layer Security (DTLS) – used in Cisco AnyConnect VPN and in OpenConnect VPN to solve the issues SSL/TLS has with tunneling over TCP (tunneling TCP over TCP can lead to big delays and connection aborts).
Microsoft Point-to-Point Encryption (MPPE) works with the Point-to-Point Tunneling Protocol and in several compatible implementations on other platforms.
Microsoft Secure Socket Tunneling Protocol (SSTP) tunnels Point-to-Point Protocol (PPP) or Layer 2 Tunneling Protocol traffic through an SSL/TLS channel (SSTP was introduced in Windows Server 2008 and in Windows Vista Service Pack 1).
Multi Path Virtual Private Network (MPVPN). Ragula Systems Development Company owns the registered trademark "MPVPN".
Secure Shell (SSH) VPN – OpenSSH offers VPN tunneling (distinct from port forwarding) to secure remote connections to a network or to inter-network links. OpenSSH server provides a limited number of concurrent tunnels. The VPN feature itself does not support personal authentication. A command-line sketch follows this list.
WireGuard is a protocol. In 2020, WireGuard support was added to both the Linux and Android kernels, opening it up to adoption by VPN providers. By default, WireGuard uses Curve25519 for key exchange and ChaCha20 for encryption, but also includes the ability to pre-share a symmetric key between the client and server. A sample configuration is sketched after this list.
IKEv2 is an acronym that stands for Internet Key Exchange version 2. It was created by Microsoft and Cisco and is used in conjunction with IPsec for encryption and authentication. Its primary use is in mobile devices, whether on 3G or 4G LTE networks, since it is effective at re-establishing a connection when one is lost.
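To make the SSL/TLS item above concrete, here is a minimal OpenVPN client configuration of the kind such VPNs commonly use; the host name and file names are placeholders, not a real deployment:
```
client
dev tun                      # route IP traffic through a virtual tunnel interface
proto udp
remote vpn.example.com 1194  # placeholder server; 1194 is OpenVPN's default port
ca ca.crt                    # CA certificate used to verify the server
cert client.crt              # this client's certificate and private key
key client.key
remote-cert-tls server       # require the peer to present a server certificate
```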
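The OpenSSH tunneling mentioned above is driven from the command line. A minimal sketch, assuming a server whose sshd_config sets PermitTunnel yes, with illustrative host name and addresses:
```
# Client: request layer-3 tun device 0 on both ends (-w local:remote)
ssh -w 0:0 root@vpn.example.com

# Then give each end of the new tun0 interface an address and bring it up:
ip addr add 10.0.0.1/30 dev tun0 && ip link set tun0 up   # on the client
ip addr add 10.0.0.2/30 dev tun0 && ip link set tun0 up   # on the server
```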
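A WireGuard endpoint is likewise defined by a short configuration file. A minimal client-side sketch (keys, addresses and the endpoint are placeholders), including the optional pre-shared symmetric key noted above:
```
[Interface]
PrivateKey = <client private key>        # generated with `wg genkey`
Address = 10.8.0.2/32

[Peer]
PublicKey = <server public key>
PresharedKey = <optional symmetric key>  # optional extra key, from `wg genpsk`
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0                   # send all IPv4 traffic into the tunnel
```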
Authentication
Tunnel endpoints must be authenticated before secure VPN tunnels can be established. User-created remote-access VPNs may use passwords, biometrics, two-factor authentication or other cryptographic methods. Network-to-network tunnels often use passwords or digital certificates. They permanently store the key to allow the tunnel to establish automatically, without intervention from the administrator.
Routing
Tunneling protocols can operate in a point-to-point network topology that would theoretically not be considered a VPN because a VPN by definition is expected to support arbitrary and changing sets of network nodes. But since most router implementations support a software-defined tunnel interface, customer-provisioned VPNs often are simply defined tunnels running conventional routing protocols.
Provider-provisioned VPN building-blocks
Depending on whether a provider-provisioned VPN (PPVPN) operates in Layer 2 (L2) or Layer 3 (L3), the building blocks described below may be L2 only, L3 only, or a combination of both. Multi-protocol label switching (MPLS) functionality blurs the L2-L3 identity.
Later standards documents generalized the following terms to cover L2 MPLS VPNs and L3 (BGP) VPNs, but the terms were introduced in the original BGP/MPLS VPN specification.
Customer (C) devices
A device that is within a customer's network and not directly connected to the service provider's network. C devices are not aware of the VPN.
Customer Edge device (CE)
A device at the edge of the customer's network which provides access to the PPVPN. Sometimes it is just a demarcation point between provider and customer responsibility. Other providers allow customers to configure it.
Provider edge device (PE)
A device, or set of devices, at the edge of the provider network which connects to customer networks through CE devices and presents the provider's view of the customer site. PEs are aware of the VPNs that connect through them, and maintain VPN state.
Provider device (P)
A device that operates inside the provider's core network and does not directly interface to any customer endpoint. It might, for example, provide routing for many provider-operated tunnels that belong to different customers' PPVPNs. While the P device is a key part of implementing PPVPNs, it is not itself VPN-aware and does not maintain VPN state. Its principal role is allowing the service provider to scale its PPVPN offerings, for example, by acting as an aggregation point for multiple PEs. P-to-P connections, in such a role, often are high-capacity optical links between major locations of providers.
User-visible PPVPN services
OSI Layer 2 services
Virtual LAN
Virtual LAN (VLAN) is a Layer 2 technique that allows for the coexistence of multiple local area network (LAN) broadcast domains interconnected via trunks using the IEEE 802.1Q trunking protocol. Other trunking protocols have been used but have become obsolete, including Inter-Switch Link (ISL), IEEE 802.10 (originally a security protocol but a subset was introduced for trunking), and ATM LAN Emulation (LANE).
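On Linux, for instance, an 802.1Q tagged sub-interface can be created with iproute2; a minimal sketch with illustrative interface names and addresses:
```
# Create VLAN 100 as a tagged (802.1Q) virtual interface on top of eth0
ip link add link eth0 name eth0.100 type vlan id 100
ip addr add 192.0.2.1/24 dev eth0.100
ip link set eth0.100 up
```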
Virtual private LAN service (VPLS)
Developed by the Institute of Electrical and Electronics Engineers, VLANs allow multiple tagged LANs to share common trunking. VLANs frequently comprise only customer-owned facilities. Whereas VPLS as described above supports emulation of both point-to-point and point-to-multipoint topologies, the method discussed here extends Layer 2 technologies such as 802.1d and 802.1q LAN trunking to run over transports such as Metro Ethernet.
As used in this context, a VPLS is a Layer 2 PPVPN, emulating the full functionality of a traditional LAN. From a user standpoint, a VPLS makes it possible to interconnect several LAN segments over a packet-switched, or optical, provider core, a core transparent to the user, making the remote LAN segments behave as one single LAN.
In a VPLS, the provider network emulates a learning bridge, which optionally may include VLAN service.
Pseudo wire (PW)
PW is similar to VPLS, but it can provide different L2 protocols at both ends. Typically, its interface is a WAN protocol such as Asynchronous Transfer Mode or Frame Relay. In contrast, when aiming to provide the appearance of a LAN contiguous between two or more locations, the Virtual Private LAN service or IPLS would be appropriate.
Ethernet over IP tunneling
EtherIP is an Ethernet-over-IP tunneling protocol specification. EtherIP has only a packet encapsulation mechanism; it has no confidentiality or message integrity protection. EtherIP was introduced in the FreeBSD network stack and the SoftEther VPN server program.
IP-only LAN-like service (IPLS)
A subset of VPLS in which the CE devices must have Layer 3 capabilities; IPLS presents packets rather than frames. It may support IPv4 or IPv6.
OSI Layer 3 PPVPN architectures
This section discusses the main architectures for PPVPNs, one where the PE disambiguates duplicate addresses in a single routing instance, and the other, virtual router, in which the PE contains a virtual router instance per VPN. The former approach, and its variants, have gained the most attention.
One of the challenges of PPVPNs involves different customers using the same address space, especially the IPv4 private address space. The provider must be able to disambiguate overlapping addresses in the multiple customers' PPVPNs.
BGP/MPLS PPVPN
In this method, BGP extensions advertise routes in the IPv4 VPN address family, which take the form of 12-byte strings, beginning with an 8-byte route distinguisher (RD) and ending with a 4-byte IPv4 address. RDs disambiguate otherwise duplicate addresses in the same PE.
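To make the address layout concrete, the following sketch packs a type-0 route distinguisher (2-byte type, 2-byte ASN, 4-byte assigned number) ahead of an IPv4 address; the ASN and numbers are illustrative:
```python
import socket
import struct

def vpn_ipv4(asn: int, assigned: int, ipv4: str) -> bytes:
    """Build a 12-byte VPN-IPv4 value: an 8-byte type-0 route
    distinguisher followed by the 4-byte IPv4 address."""
    rd = struct.pack("!HHI", 0, asn, assigned)  # type 0: 2-byte ASN admin field
    return rd + socket.inet_aton(ipv4)

addr = vpn_ipv4(65000, 100, "10.1.1.1")         # illustrative RD "65000:100"
print(len(addr), addr.hex())                    # 12 0000fde8000000640a010101
```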
PEs understand the topology of each VPN, which are interconnected with MPLS tunnels either directly or via P routers. In MPLS terminology, the P routers are label switch routers without awareness of VPNs.
Virtual router PPVPN
The virtual router architecture, as opposed to BGP/MPLS techniques, requires no modification to existing routing protocols such as BGP. By the provisioning of logically independent routing domains, the customer operating a VPN is completely responsible for the address space. In the various MPLS tunnels, the different PPVPNs are disambiguated by their label but do not need routing distinguishers.
Unencrypted tunnels
Some virtual networks use tunneling protocols without encryption for protecting the privacy of data. While VPNs often do provide security, an unencrypted overlay network does not neatly fit within the secure or trusted categorization. For example, a tunnel set up between two hosts with Generic Routing Encapsulation (GRE) is a virtual private network but is neither secure nor trusted.
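On Linux such a GRE tunnel can be created with iproute2; a minimal sketch with illustrative addresses (note that nothing here encrypts or authenticates the traffic):
```
# Point-to-point GRE tunnel between 198.51.100.1 and 203.0.113.1
ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.1 ttl 255
ip addr add 10.9.0.1/30 dev gre1
ip link set gre1 up
```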
Native plaintext tunneling protocols include Layer 2 Tunneling Protocol (L2TP) when it is set up without IPsec, and Point-to-Point Tunneling Protocol (PPTP) when used without Microsoft Point-to-Point Encryption (MPPE).
Trusted delivery networks
Trusted VPNs do not use cryptographic tunneling; instead they rely on the security of a single provider's network to protect the traffic.
Multi-Protocol Label Switching (MPLS) often underpins trusted VPNs, frequently with quality-of-service control over the trusted delivery network.
L2TP is a standards-based replacement for, and a compromise taking the good features from, two proprietary VPN protocols: Cisco's Layer 2 Forwarding (L2F, now obsolete) and Microsoft's Point-to-Point Tunneling Protocol (PPTP).
From the security standpoint, VPNs either trust the underlying delivery network or must enforce security with mechanisms in the VPN itself. Unless the trusted delivery network runs among physically secure sites only, both trusted and secure models need an authentication mechanism for users to gain access to the VPN.
VPNs in mobile environments
Mobile virtual private networks are used in settings where an endpoint of the VPN is not fixed to a single IP address, but instead roams across various networks, such as data networks from cellular carriers or between multiple Wi-Fi access points, without dropping the secure VPN session or losing application sessions. Mobile VPNs are widely used in public safety, where they give law-enforcement officers access to applications such as computer-assisted dispatch and criminal databases, and in other organizations with similar requirements, such as field service management and healthcare.
Networking limitations
A limitation of traditional VPNs is that they are point-to-point connections and do not tend to support broadcast domains; therefore communication, software and networking based on layer 2 and broadcast packets, such as NetBIOS in Windows networking, may not be fully supported as they would be on a local area network. Variants on VPN such as Virtual Private LAN Service (VPLS) and layer 2 tunneling protocols are designed to overcome this limitation.
Common misconceptions
A VPN does not by itself make one's Internet use "private": users can still be tracked through tracking cookies and device fingerprinting, even if their IP address is hidden.
A VPN does not make its user immune to hackers.
See also
Anonymizer
Dynamic Multipoint Virtual Private Network
Ethernet VPN
Internet privacy
Mediated VPN
Opportunistic encryption
Split tunneling
Virtual private server
VPN service
References
Further reading
Network architecture
Internet privacy |
147399 | https://en.wikipedia.org/wiki/WinZip | WinZip | WinZip (not to be confused with WinRAR) is a trialware file archiver and compressor for Windows, macOS, iOS and Android. It is developed by WinZip Computing (formerly Nico Mak Computing), which is owned by Corel Corporation. The program can create archives in Zip file format, unpack some other archive file formats and it also has various tools for system integration.
Features
Pack (create) ZIP and Zipx archive files.
Unpack BZ2, LHA, LZH, RAR, ZIP, Zipx, 7Z.
Decode B64, HQX, UUE files.
Configurable Microsoft Windows Shell integration
Direct write of archives to CD/DVD
Automation of backup jobs
Integrated FTP upload
Email archives
Support for ARC and ARJ archives if suitable external programs are installed.
History
WinZip 1.0 was released in April 1991 as a graphical user interface (GUI) front-end for PKZIP. Earlier, in January 1991, Nico Mak Computing had released PMZIP, a GUI front-end for OS/2 Presentation Manager that used the OS/2 versions of the PKWARE, Inc. PKZIP and PKUNZIP programs. Originally released on CompuServe, WinZip soon became available across other major online services, including GEnie and Prodigy. In 1993, WinZip announced official customer support on the Windows Utility Forum, which served over 100,000 members with updates and related information. The freely downloadable WinZip was soon included on the companion disks of best-selling Windows computing titles, including the all-time best-selling Windows 3.0 book, Windows Secrets, by Brian Livingston. By 1994, WinZip had become the official and required compression tool used by system operators on CompuServe for forum file libraries.
Starting from version 5.0 in 1993, the creators of WinZip incorporated compression code from the Info-ZIP project, thus eliminating the need for the PKZIP executable to be present.
From version 6.0 until version 9.0, registered users could download the newest versions of the software, enter their original registration information or install over the top of their existing registered version, and thereby obtain a free upgrade. This upgrade scheme was discontinued as of version 10.0. WinZip is available in standard and professional versions. However, the shell in Windows ME and later versions of Microsoft Windows has the ability to open and create .zip files (titled "compressed folders"), which reduces the need for extra compression software.
On May 2, 2006, WinZip Computing was acquired by Corel Corporation using the proceeds from its initial public offering.
WinZip 1.0 for Mac OS X was released in November 2010. This version is compatible with Mac OS X 10.6 "Snow Leopard" and Intel-based v10.5 "Leopard" Macs. WinZip Mac Edition 2 includes support for OS X 10.8 "Mountain Lion".
Supported .ZIP archive features
128- and 256-bit key AES encryption, in addition to the less secure PKZIP 2.0 encryption method used in earlier versions. The AES implementation, using Brian Gladman's code, was FIPS-197 certified on March 27, 2003. However, the Central Directory Encryption feature is not supported.
Beginning with WinZip 9.0, ZIP64 archives are supported, eliminating both the 65,535-member limit per archive and the 4-gigabyte size limit on the archive and on each member file (a minimal code sketch follows this list).
Support of additional compression methods: bzip2 (9.0), PPMd (10.0), WavPack (11.0), LZMA (12.0), JPEG (12.0), Zipx (12.1), xz (18.0), MP3 (21.0).
Unicode support to ensure international characters are displayed for filenames in a Zip file. (WinZip prior to 11.2 does not support Unicode characters in filenames. Attempting to add these files to an archive results in the error message "Warning: Could not open for reading: ...")
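A minimal sketch of the ZIP64 behavior noted above, using Python's standard zipfile module rather than WinZip itself; the file names are illustrative:

```python
import zipfile

# Create an archive with ZIP64 extensions enabled, lifting the classic
# 65,535-member and 4 GiB limits of the original ZIP format.
with zipfile.ZipFile("big.zip", "w", compression=zipfile.ZIP_DEFLATED,
                     allowZip64=True) as zf:
    zf.writestr("hello.txt", "hello, world")

# Read it back to confirm the archive is well formed.
with zipfile.ZipFile("big.zip") as zf:
    print(zf.namelist())  # ['hello.txt']
```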
Release history
Windows
The ZIP file archive format (ZIP) was originally invented for MS-DOS in 1989 by Phil Katz.
Seeing the need for an archive application with a graphical user interface, Nico Mak (then employed by Mansfield Software Group, Inc.) developed the WinZip application for Microsoft Windows.
WinZip 1.0 was the initial version for Windows.
Mac
WinZip 1.0 for Mac OS X (November 16, 2010): Initial release is compatible with Intel Macs and can be run on v10.5 'Leopard.'
iOS
The iOS version was first released on February 17, 2012. The free English app is designed for iOS 4.2 on iPhone, iPad, and iPod Touch, and is available on Apple's App Store.
Android
WinZip Android was first released on June 19, 2012. The free English app is designed for Android operating system versions 2.1 (Eclair), 2.2 (Froyo), 2.3 (Gingerbread), 3.x (Honeycomb), 4.x (Ice Cream Sandwich) and higher; it was initially available on Google Play.
See also
ZIP (file format)
Comparison of file archivers
Comparison of archive formats
List of archive formats
References
External links
Windows compression software
Data compression
Proprietary software
Corel software
1991 software
File archivers
2006 mergers and acquisitions
Data compression software |
147748 | https://en.wikipedia.org/wiki/Jimmy%20Doolittle | Jimmy Doolittle | James Harold Doolittle (December 14, 1896 – September 27, 1993) was an American military general and aviation pioneer who received the Medal of Honor for his daring raids on Japan during World War II. He also made early coast-to-coast flights, record-breaking speed flights, won many flying races, and helped develop and flight-test instrument flying.
Raised in Nome, Alaska, Doolittle studied as an undergraduate at University of California, Berkeley, graduating with a Bachelor of Arts in 1922. He also earned a doctorate in aeronautics from the Massachusetts Institute of Technology in 1925, the first issued in the United States. In 1929, he pioneered the use of "blind flying", where a pilot relies on flight instruments alone, which later won him the Harmon Trophy and made all-weather airline operations practical. He was a flying instructor during World War I and a reserve officer in the United States Army Air Corps, but he was recalled to active duty during World War II. He was awarded the Medal of Honor for personal valor and leadership as commander of the Doolittle Raid, a bold long-range retaliatory air raid on some of the Japanese main islands on April 18, 1942, four months after the attack on Pearl Harbor. The raid used 16 B-25B Mitchell medium bombers with reduced armament to decrease weight and increase range, each with a crew of five and no escort fighter aircraft. It was a major morale booster for the United States and Doolittle was celebrated as a hero, making him one of the most important national figures of the war.
Doolittle was promoted to lieutenant general and commanded the Twelfth Air Force over North Africa, the Fifteenth Air Force over the Mediterranean, and the Eighth Air Force over Europe. Doolittle retired from the Air Force in 1959 but remained active in many technical fields. He was inducted into the National Aviation Hall of Fame in 1967, eight years after retirement and only five years after the Hall was founded. He was eventually promoted to general in 1985, presented to him by President Ronald Reagan 43 years after the Doolittle Raid. In 2003, he topped Air & Space/Smithsonian magazine's list of the greatest pilots of all time, and ten years later, Flying magazine ranked Doolittle sixth on its list of the 51 Heroes of Aviation. He died in 1993 at the age of 96, and was buried at Arlington National Cemetery.
Early life and education
Doolittle was born in Alameda, California, and spent his youth in Nome, Alaska, where he earned a reputation as a boxer. His parents were Frank Henry Doolittle and Rosa (Rose) Cerenah Shephard. By 1910, Jimmy Doolittle was attending school in Los Angeles. When his school attended the 1910 Los Angeles International Air Meet at Dominguez Field, Doolittle saw his first airplane. He attended Los Angeles City College after graduating from Manual Arts High School in Los Angeles, and later won admission to the University of California, Berkeley where he studied at the College of Mines. He was a member of Theta Kappa Nu fraternity, which would merge into Lambda Chi Alpha during the later stages of the Great Depression.
Doolittle took a leave of absence in October 1917 to enlist in the Signal Corps Reserve as a flying cadet; he received ground training at the School of Military Aeronautics (an Army school) on the campus of the University of California, and flight-trained at Rockwell Field, California. Doolittle received his Reserve Military Aviator rating and was commissioned a second lieutenant in the Signal Officers Reserve Corps of the U.S. Army on March 11, 1918.
Military career
During World War I, Doolittle stayed in the United States as a flight instructor and performed his war service at Camp John Dick Aviation Concentration Center ("Camp Dick"), Texas; Wright Field, Ohio; Gerstner Field, Louisiana; Rockwell Field, California; Kelly Field, Texas and Eagle Pass, Texas.
Doolittle served at Rockwell as a flight leader and gunnery instructor. At Kelly Field, he served with the 104th Aero Squadron and with the 90th Aero Squadron of the 1st Surveillance Group. His detachment of the 90th Aero Squadron was based at Eagle Pass, patrolling the Mexican border. Recommended by three officers for retention in the Air Service during demobilization at the end of the war, Doolittle qualified by examination and received a Regular Army commission as a 1st Lieutenant, Air Service, on July 1, 1920.
On May 10, 1921, he was engineering officer and pilot for an expedition recovering a plane that had force-landed in a Mexican canyon on February 10 during a transcontinental flight attempt by Alexander Pearson Jr. Doolittle reached the plane on May 3 and found it serviceable, then returned May 8 with a replacement motor and four mechanics. The oil pressure of the new motor was inadequate and Doolittle requested two pressure gauges, using carrier pigeons to communicate. The additional parts were dropped by air and installed, and Doolittle flew the plane to Del Rio, Texas himself, taking off from a 400-yard airstrip hacked out of the canyon floor.
Subsequently, he attended the Air Service Mechanical School at Kelly Field and the Aeronautical Engineering Course at McCook Field, Ohio. Having at last returned to complete his college degree, he earned a Bachelor of Arts from the University of California, Berkeley in 1922, and joined the Lambda Chi Alpha fraternity.
Doolittle was one of the most famous pilots during the inter-war period. On September 4, 1922, he made the first of many pioneering flights, flying a de Havilland DH-4 – which was equipped with early navigational instruments – in the first cross-country flight completed within a single day, from Pablo Beach (now Jacksonville Beach), Florida, to Rockwell Field, San Diego, California, in 21 hours and 19 minutes, making only one refueling stop at Kelly Field. The U.S. Army awarded him the Distinguished Flying Cross.
Within days after the transcontinental flight, he was at the Air Service Engineering School (a precursor to the Air Force Institute of Technology) at McCook Field, Dayton, Ohio. For Doolittle, the school assignment had special significance: "In the early '20s, there was not complete support between the flyers and the engineers. The pilots thought the engineers were a group of people who zipped slide rules back and forth, came out with erroneous results and bad aircraft; and the engineers thought the pilots were crazy – otherwise they wouldn't be pilots. So some of us who had previous engineering training were sent to the engineering school at old McCook Field. After a year's training there in practical aeronautical engineering, some of us were sent on to MIT where we took advanced degrees in aeronautical engineering. I believe that the purpose was served, that there was thereafter a better understanding between pilots and engineers."
In July 1923, after serving as a test pilot and aeronautical engineer at McCook Field, Doolittle entered MIT. In March 1924, he conducted aircraft acceleration tests at McCook Field, which became the basis of his master's thesis and led to his second Distinguished Flying Cross. He received his MS degree in Aeronautics from MIT in June 1924. Because the Army had given him two years to get his degree and he had done it in just one, he immediately started working on his Sc.D. in Aeronautics, which he received in June 1925. His doctorate in aeronautical engineering was the first issued in the United States. He said that he considered his master's work more significant than his doctorate.
Following graduation, Doolittle attended special training in high-speed seaplanes at Naval Air Station Anacostia in Washington, D.C. He also served with the Naval Test Board at Mitchel Field, Long Island, New York, and was a familiar figure in air speed record attempts in the New York area. He won the Schneider Cup race in a Curtiss R3C in 1925 with an average speed of 232 mph. For that feat, Doolittle was awarded the Mackay Trophy in 1926.
In April 1926, Doolittle was given a leave of absence to go to South America to perform demonstration flights for Curtiss Aircraft. In Chile, he broke both ankles while demonstrating his acrobatic abilities in an incident that was known as Night of the Pisco Sours. Despite having both ankles in casts, Doolittle put his Curtiss P-1 Hawk through aerial maneuvers that outdid the competition. He returned to the United States, and was confined to Walter Reed Army Hospital for his injuries until April 1927. He was then assigned to McCook Field for experimental work, with additional duty as an instructor pilot to the 385th Bomb Squadron of the Air Corps Reserve. During this time, in 1927 he was the first to perform an outside loop, previously thought to be a fatal maneuver. Carried out in a Curtiss fighter at Wright Field in Ohio, Doolittle executed the dive from 10,000 feet, reached 280 mph, bottomed out upside down, then climbed and completed the loop.
Instrument flight
Doolittle's most important addition to aeronautical technology was his early advancement of instrument flying. He was the first to recognize that true operational freedom in the air could not be achieved until pilots developed the ability to control and navigate aircraft in flight from takeoff run to landing rollout, regardless of the range of vision from the cockpit. Doolittle was the first to envision that a pilot could be trained to use instruments to fly through fog, clouds, precipitation of all forms, darkness, or any other impediment to visibility; and in spite of the pilot's own possibly convoluted motion sense inputs. Even at this early stage, the ability to control aircraft was getting beyond the motion sense capability of the pilot. That is, as aircraft became faster and more maneuverable, pilots could become seriously disoriented without visual cues from outside the cockpit, because aircraft could move in ways that pilots' senses could not accurately decipher.
Doolittle was also the first to recognize these psycho-physiological limitations of the human senses (particularly the motion sense inputs, i.e., up, down, left, right). He initiated the study of the relationships between the psychological effects of visual cues and motion senses. His research resulted in programs that trained pilots to read and understand navigational instruments. A pilot learned to "trust his instruments," not his senses, as visual cues and his motion sense inputs (what he sensed and "felt") could be incorrect or unreliable.
In 1929, he became the first pilot to take off, fly and land an airplane using instruments alone, without a view outside the cockpit. Having returned to Mitchel Field that September, he helped develop blind-flying equipment. He helped develop, and was then the first to test, the now universally used artificial horizon and directional gyroscope. He attracted wide newspaper attention with this feat of "blind" flying and later received the Harmon Trophy for conducting the experiments. These accomplishments made all-weather airline operations practical.
Reserve status
In January 1930, he advised the Army on the construction of Floyd Bennett Field in New York City. Doolittle resigned his regular commission on February 15, 1930, and was commissioned a Major in the Air Reserve Corps a month later, being named manager of the Aviation Department of Shell Oil Company, in which capacity he conducted numerous aviation tests. While in the Reserve, he also returned to temporary active duty with the Army frequently to conduct tests.
Doolittle helped influence Shell Oil Company to produce the first quantities of 100 octane aviation gasoline. High octane fuel was crucial to the high-performance planes that were developed in the late 1930s.
In 1931, Doolittle won the first Bendix Trophy race from Burbank, California, to Cleveland, in a Laird Super Solution biplane.
In 1932, Doolittle set the world's high-speed record for land planes at 296 miles per hour in the Shell Speed Dash. Later, he took the Thompson Trophy race at Cleveland in the notorious Gee Bee R-1 racer with a speed averaging 252 miles per hour. After having won the three big air racing trophies of the time, the Schneider, Bendix, and Thompson, he officially retired from air racing stating, "I have yet to hear anyone engaged in this work dying of old age."
In April 1934, Doolittle was selected to be a member of the Baker Board. Chaired by former Secretary of War Newton D. Baker, the board was convened during the Air Mail scandal to study Air Corps organization. In 1940, he became president of the Institute of Aeronautical Science.
The development of 100-octane aviation gasoline on an economic scale was due in part to Doolittle, who had become aviation manager of Shell Oil Company. Around 1935, he convinced Shell to invest in refining capacity to produce 100-octane fuel on a scale that nobody yet needed: no aircraft existed that required the fuel, in part because no one made it. Some fellow employees called his effort "Doolittle's million-dollar blunder", but time proved him correct. Before this, the Army had considered tests using pure octane, but at $25 a gallon the idea went nowhere. By 1936, tests at Wright Field using a cheaper alternative to pure octane proved the value of the fuel, and both Shell and Standard Oil of New Jersey won contracts to supply test quantities for the Army. By 1938 the price was down to 17.5 cents a gallon, only 2.5 cents more than 87-octane fuel. By the end of World War II, the price was down to 16 cents a gallon and the U.S. armed forces were consuming 20 million gallons a day.
Doolittle returned to active duty in the U.S. Army Air Corps on July 1, 1940, with the rank of Major. He was assigned as the assistant district supervisor of the Central Air Corps Procurement District at Indianapolis and Detroit, where he worked with large auto manufacturers on the conversion of their plants to aircraft production. The following August, he went to England as a member of a special mission and brought back information about other countries' air forces and military build-ups.
Doolittle Raid
Following the reorganization of the Army Air Corps into the USAAF in June 1941, Doolittle was promoted to lieutenant colonel on January 2, 1942, and assigned to Army Air Forces Headquarters to plan the first retaliatory air raid on the Japanese homeland following the attack on Pearl Harbor. He volunteered for and received General H.H. Arnold's approval to lead the top secret attack of 16 B-25 medium bombers from the aircraft carrier USS Hornet, with targets in Tokyo, Kobe, Yokohama, Osaka and Nagoya.
After training at Eglin Field and Wagner Field in northwest Florida, Doolittle, his aircraft, and volunteer flight crews proceeded to McClellan Field, California, for aircraft modifications at the Sacramento Air Depot, followed by a short final flight to Naval Air Station Alameda, California, for embarkation aboard the aircraft carrier USS Hornet. On April 18, Doolittle and his 16 B-25 crews took off from the Hornet, reached Japan, and bombed their targets. Fifteen of the planes then headed for their recovery airfield in China, while one crew chose to land in Russia because of their bomber's unusually high fuel consumption. Like most of the other crewmen who participated in the one-way mission, Doolittle and his crew bailed out safely over China when their B-25 ran out of fuel. By then, they had been flying for about 12 hours, it was nighttime, the weather was stormy, and Doolittle was unable to locate their landing field. Doolittle came down in a rice paddy (saving a previously injured ankle from breaking) near Chuchow (Quzhou). He and his crew linked up after the bailout and were helped through Japanese lines by Chinese guerrillas and American missionary John Birch. Other aircrews were not so fortunate, although most eventually reached safety with the help of friendly Chinese. Seven crew members lost their lives, four as a result of being captured and murdered by the Japanese and three due to an aircraft crash or while parachuting. Doolittle thought he would be court-martialed because the raid had been launched ahead of schedule after the task force was spotted by Japanese patrol boats, and because all of the aircraft had been lost.
Doolittle went on to fly more combat missions as commander of the 12th Air Force in North Africa, for which he was awarded four Air Medals. He later commanded the 12th, 15th and 8th Air Forces in Europe. The other surviving members of the Doolittle raid also went on to new assignments.
Doolittle received the Medal of Honor from President Franklin D. Roosevelt at the White House for planning and leading his raid on Japan. His citation reads: "For conspicuous leadership above and beyond the call of duty, involving personal valor and intrepidity at an extreme hazard to life. With the apparent certainty of being forced to land in enemy territory or to perish at sea, Lt. Col. Doolittle personally led a squadron of Army bombers, manned by volunteer crews, in a highly destructive raid on the Japanese mainland." He was also promoted to brigadier general.
The Doolittle Raid is viewed by historians as a major morale-building victory for the United States. Although the damage done to Japanese war industry was minor, the raid showed the Japanese that their homeland was vulnerable to air attack, and forced them to withdraw several front-line fighter units from Pacific war zones for homeland defense. More significantly, Japanese commanders considered the raid deeply embarrassing, and their attempt to close the perceived gap in their Pacific defense perimeter led directly to the decisive American victory at the Battle of Midway in June 1942.
When asked from where the Tokyo raid was launched, President Roosevelt coyly said its base was Shangri-La, a fictional paradise from the popular novel Lost Horizon. In the same vein, the U.S. Navy named one of its Essex-class fleet carriers the USS Shangri-La.
World War II, post-raid
In July 1942, as a brigadier general—he had been promoted by two grades on the day after the Tokyo attack, bypassing the rank of full colonel—Doolittle was assigned to the nascent Eighth Air Force. This followed his rejection by General Douglas MacArthur as commander of the South West Pacific Area to replace Major General George Brett. Major General Frank Andrews first turned down the position, and, offered a choice between George Kenney and Doolittle, MacArthur chose Kenney. In September, Doolittle became commanding general of the Twelfth Air Force, soon to be operating in North Africa. He was promoted to major general in November 1942, and in March 1943 became commanding general of the Northwest African Strategic Air Force, a unified command of U.S. Army Air Force and Royal Air Force units. In September, he commanded a raid against the Italian town of Battipaglia that was so thorough in its destruction that General Carl Andrew Spaatz sent him a joking message: "You're slipping Jimmy. There's one crabapple tree and one stable still standing."
Maj. Gen. Doolittle took command of the Fifteenth Air Force in the Mediterranean Theater of Operations in November 1943. On June 10, he flew as co-pilot with Jack Sims, a fellow Tokyo Raider, in a B-26 Marauder of the 320th Bombardment Group, 442nd Bombardment Squadron, on a mission to attack gun emplacements at Pantelleria. Doolittle continued to fly, despite the risk of capture, while being privy to the Ultra secret: that the German encryption systems had been broken by the British. From January 1944 to September 1945, he held his largest command, the Eighth Air Force (8 AF) in England, as a lieutenant general, his promotion dating from March 13, 1944; this was the highest rank ever held by an active reserve officer in modern times.
Escort fighter tactics
Doolittle's major influence on the European air war occurred late in 1943, and primarily after he took command of the 8th Air Force on January 6, 1944, when he changed the policy of requiring escorting fighters to remain with their bombers at all times. Instead, he permitted escort fighters to fly far ahead of the bombers' combat box formations, allowing them to freely engage the German fighters lying in wait for the bombers. Throughout most of 1944, this tactic negated the effectiveness of the twin-engined Zerstörergeschwader heavy fighter wings and the single-engined Sturmgruppen of heavily armed Fw 190As by clearing the Luftwaffe's bomber destroyers from ahead of the bomber formations. After the bombers had hit their targets, the American fighters were free to strafe German airfields, transportation, and other "targets of opportunity" on their return flight to base. These tasks were initially performed with Lockheed P-38 Lightnings and Republic P-47 Thunderbolts through the end of 1943; they were progressively replaced with long-range North American P-51 Mustangs as the spring of 1944 wore on.
Post-VE Day
After Germany surrendered, the Eighth Air Force was re-equipped with B-29 Superfortress bombers and started to relocate to Okinawa in southern Japan. Two bomb groups had begun to arrive on August 7. However, the 8th was not scheduled to be at full strength until February 1946 and Doolittle declined to rush 8th Air Force units into combat saying that "If the war is over, I will not risk one airplane nor a single bomber crew member just to be able to say the 8th Air Force had operated against the Japanese in Asia."
Postwar
Doolittle Board
Secretary of War Robert P. Patterson asked Doolittle on March 27, 1946, to head a commission on the relationships between officers and enlisted men in the Army called the "Doolittle Board" or the "GI Gripes Board". The Army implemented many of the board's recommendations in the postwar volunteer Army, though many professional officers and noncommissioned officers thought that the Board "destroyed the discipline of the Army". Columnist Hanson Baldwin said that the Doolittle Board "caused severe damage to service effectiveness by recommendations intended to 'democratize' the Army—a concept that is self-contradictory".
U.S. space program
Doolittle became acquainted with the field of space science in its infancy. He wrote in his autobiography, "I became interested in rocket development in the 1930s when I met Robert H. Goddard, who laid the foundation [in the US]. ... While with Shell [Oil] I worked with him on the development of a type of [rocket] fuel. ... " Harry Guggenheim, whose foundation sponsored Goddard's work, and Charles Lindbergh, who encouraged Goddard's efforts, arranged for (then Major) Doolittle to discuss with Goddard a special blend of gasoline. Doolittle piloted himself to Roswell, New Mexico in October 1938 and was given a tour of Goddard's workshop and a "short course" in rocketry and space travel. He then wrote a memo, including a rather detailed description of Goddard's rocket. In closing he said, "interplanetary transportation is probably a dream of the very distant future, but with the moon only a quarter of a million miles away—who knows!" In July 1941 he wrote Goddard that he was still interested in rocket propulsion research. The Army, however, was interested only in JATO at this point. Doolittle was concerned about the state of rocketry in the US and remained in touch with Goddard.
Shortly after World War II, Doolittle spoke to an American Rocket Society conference attended by a large number of people interested in rocketry. The topic was Robert Goddard's work. He later stated that at that time "... we [the aeronautics field in the US] had not given much credence to the tremendous potential of rocketry."
In 1956, Doolittle was appointed chairman of the National Advisory Committee for Aeronautics (NACA) because the previous chairman, Jerome C. Hunsaker, thought Doolittle to be more sympathetic to the rocket, which was increasing in importance as a scientific tool as well as a weapon. The NACA Special Committee on Space Technology was organized in January 1958 and chaired by Guy Stever to determine the requirements of a national space program and what additions were needed to NACA technology. Doolittle, Dr. Hugh Dryden and Stever selected committee members including Dr. Wernher von Braun from the Army Ballistic Missile Agency, Sam Hoffman of Rocketdyne, Abe Hyatt of the Office of Naval Research and Colonel Norman Appold from the USAF missile program, considering their potential contributions to US space programs and ability to educate NACA people in space science.
Reserve status
On January 5, 1946, Doolittle reverted to inactive reserve status in the Army Air Forces in the grade of lieutenant general, a rarity in those days when reserve officers were usually limited to the rank of major general or rear admiral, a restriction that would not end in the US armed forces until the 21st century. He retired from the United States Army on May 10, 1946. On September 18, 1947, his reserve commission as a general officer was transferred to the newly established United States Air Force. Doolittle returned to Shell Oil as a vice president, and later as a director.
In the summer of 1946, Doolittle went to Stockholm where he consulted about the "ghost rockets" that had been observed over Scandinavia.
In 1947, Doolittle became the first president of the Air Force Association, an organization which he helped create.
In 1948, Doolittle advocated the desegregation of the US military. He wrote "I am convinced that the solution to the situation is to forget that they are colored." Industry was in the process of integrating, Doolittle said, "and it is going to be forced on the military. You are merely postponing the inevitable and you might as well take it gracefully."
In March 1951, Doolittle was appointed a special assistant to the Chief of Staff of the Air Force, serving as a civilian in scientific matters which led to Air Force ballistic missile and space programs. In 1952, following a string of three air crashes in two months at Elizabeth, New Jersey, the President of the United States, Harry S. Truman, appointed him to lead a presidential commission examining the safety of urban airports. The report "Airports and Their Neighbors" led to zoning requirements for buildings near approaches, early noise control requirements, and initial work on "super airports" with 10,000 ft runways, suited to 150 ton aircraft.
Doolittle was appointed a life member of the MIT Corporation, the university's board of trustees, an uncommon permanent appointment, and served as an MIT Corporation Member for 40 years.
In 1954, President Dwight D. Eisenhower asked Doolittle to perform a study of the Central Intelligence Agency; the resulting work was known as the Doolittle Report, 1954, and was classified for a number of years.
In January 1956, Eisenhower asked Doolittle to serve as a member on the first edition of the President's Board of Consultants on Foreign Intelligence Activities which, years later, would become known as the President's Intelligence Advisory Board.
From 1957 to 1958, he was chairman of the National Advisory Committee for Aeronautics (NACA). This period was during the events of Sputnik, Vanguard and Explorer. He was the last person to hold this position, as the NACA was superseded by NASA. Doolittle was asked to serve as the first NASA administrator, but he turned it down.
Doolittle retired from Air Force Reserve duty on February 28, 1959. He remained active in other capacities, including chairman of the board of TRW Space Technology Laboratories.
Honors and awards
On April 4, 1985, the U.S. Congress promoted Doolittle to the rank of full four-star general (O-10) on the U.S. Air Force retired list. In a later ceremony, President Ronald Reagan and U.S. Senator and retired Air Force Reserve Major General Barry Goldwater pinned on Doolittle's four-star insignia.
In addition to his Medal of Honor for the Tokyo raid, Doolittle received the Presidential Medal of Freedom, two Distinguished Service Medals, the Silver Star, three Distinguished Flying Crosses, the Bronze Star Medal, four Air Medals, and decorations from Belgium, China, Ecuador, France, Great Britain, and Poland. He was the first American to be awarded both the Medal of Honor and the Medal of Freedom. He is also one of only two persons (the other being Douglas MacArthur) to receive both the Medal of Honor and a British knighthood, when he was appointed an honorary Knight Commander of the Order of the Bath.
In 1972, Doolittle received the Tony Jannus Award for his distinguished contributions to commercial aviation, in recognition of the development of instrument flight.
Doolittle was awarded the Public Welfare Medal from the National Academy of Sciences in 1959. In 1983, he was awarded the United States Military Academy's Sylvanus Thayer Award. He was inducted in the Motorsports Hall of Fame of America as the only member of the air racing category in the inaugural class of 1989, and into the Aerospace Walk of Honor in the inaugural class of 1990.
Namesakes
Many US Air Force bases have facilities and streets named for Doolittle, such as the Jimmy Doolittle Event Center at Minot Air Force Base and the Doolittle Lounge at Goodfellow Air Force Base.
The headquarters of the United States Air Force Academy Association of Graduates (AOG) on the grounds of the United States Air Force Academy is named Doolittle Hall.
On May 9, 2007, the new 12th Air Force Combined Air Operations Center (CAOC), Building 74, at Davis-Monthan Air Force Base in Tucson, Arizona, was named the "General James H. Doolittle Center". Several surviving members of the Doolittle Raid were in attendance during the ribbon-cutting ceremony.
Personal life
Doolittle married Josephine "Joe" E. Daniels on December 24, 1917. At a dinner celebration after Jimmy Doolittle's first all-instrument flight in 1929, Josephine Doolittle asked her guests to sign her white damask tablecloth. Later, she embroidered the names in black. She continued this tradition, collecting hundreds of signatures from the aviation world. The tablecloth was donated to the Smithsonian Institution. Married for exactly 71 years, Josephine Doolittle died on December 24, 1988, five years before her husband.
The Doolittles had two sons, James Jr., and John. Both became military officers and pilots. James Jr. was an A-26 Invader pilot in the U.S. Army Air Forces during World War II and later a fighter pilot in the U.S. Air Force in the late 1940s through the late 1950s. He died by suicide in 1958, aged 38. At the time of his death, James Jr. was a Major and commander of the 524th Fighter-Bomber Squadron, piloting the F-101 Voodoo.
The other son, John P. Doolittle, retired from the Air Force as a colonel, and his grandson, Colonel James H. Doolittle III, was the vice commander of the Air Force Flight Test Center at Edwards Air Force Base, California.
James H. "Jimmy" Doolittle died at the age of 96 in Pebble Beach, California, on September 27, 1993, and is buried at Arlington National Cemetery in Virginia, near Washington, D.C., next to his wife. In his honor at the funeral, there was also a flyover of Miss Mitchell, a lone B-25 Mitchell, and USAF Eighth Air Force bombers from Barksdale Air Force Base, Louisiana. After a brief graveside service, fellow Doolittle Raider Bill Bower began the final tribute on the bugle. When emotion took over, Doolittle's great-grandson, Paul Dean Crane, Jr., played Taps.
Doolittle was initiated into Scottish Rite Freemasonry, where he received the 33rd degree; he also became a Shriner.
Dates of military rank
Military and civilian awards
Doolittle's military and civilian decorations include the following:
Medal of Honor citation
Rank and organization: Brigadier General, U.S. Army Air Corps
Place and date: Over Japan
Entered service at: Berkeley, Calif.
Birth: Alameda, Calif.
G.O. No.: 29, 9 June 1942
Citation: For conspicuous leadership above the call of duty, involving personal valor and intrepidity at an extreme hazard to life. With the apparent certainty of being forced to land in enemy territory or to perish at sea, Gen. Doolittle personally led a squadron of Army bombers, manned by volunteer crews, in a highly destructive raid on the Japanese mainland.
Other awards and honors
Doolittle also received the following awards and honors:
Awards
In 1972, he was awarded the Horatio Alger Award, given to dedicated community leaders who demonstrate individual initiative and a commitment to excellence, as exemplified by remarkable achievements accomplished through honesty, hard work, self-reliance and perseverance over adversity. The Horatio Alger Association of Distinguished Americans, Inc. bears the name of the renowned author Horatio Alger, Jr., whose tales of overcoming adversity through unyielding perseverance and basic moral principles captivated the public in the late 19th century.
In 1977, Doolittle received the Golden Plate Award of the American Academy of Achievement.
On December 11, 1981, Doolittle was awarded Honorary Naval Aviator wings in recognition of his many years of support of military aviation by Chief of Naval Operations Admiral Thomas B. Hayward.
Honors
The city of Doolittle, Missouri, located 5 miles west of Rolla, was named in his honor after World War II.
Doolittle was invested into the Sovereign Order of Cyprus and his medallion is now on display at the Smithsonian National Air and Space Museum.
His Bolivian Order of the Condor of the Andes is in the collection of the Smithsonian National Air and Space Museum.
In 1967, James H. Doolittle was inducted into the National Aviation Hall of Fame.
The Society of Experimental Test Pilots annually presents the James H. Doolittle Award in his memory. The award is for "outstanding accomplishment in technical management or engineering achievement in aerospace technology".
Doolittle was inducted into the International Air & Space Hall of Fame at the San Diego Air & Space Museum in 1966.
The oldest residence hall on Embry-Riddle Aeronautical University's campus, Doolittle Hall (1968), was named in his honor.
Air & Space/Smithsonian ranked him the greatest aviator in history.
Flying magazine ranked him 6th on its list of the 51 Heroes of Aviation.
Doolittle Avenue, a residential street in Arcadia, California, is named for Jimmy Doolittle, according to a longtime resident.
Doolittle Drive (California State Route 61) runs along the east side of the Oakland Airport (OAK) in Oakland, California. It parallels Earhart Road (another aviation-themed name), then heads toward Hayward, California.
A television special, All-Star Tribute to General Jimmy Doolittle, aired in 1986 to honor his 90th birthday. Celebrity appearances included Bob Hope, Gerald Ford, and Ronald Reagan.
General Doolittle was named as the inaugural class exemplar at the United States Air Force Academy for the Class of 2000.
In popular culture
Spencer Tracy played Doolittle in Mervyn LeRoy's 1944 film Thirty Seconds Over Tokyo. This portrayal has received much praise.
Alec Baldwin played Doolittle in Michael Bay's 2001 film Pearl Harbor.
Vincent Riotta played Jimmy Doolittle in Bille August's 2017 film The Chinese Widow aka The Hidden Soldier.
Aaron Eckhart played Doolittle in Roland Emmerich's 2019 film Midway.
Bob Clampett's 1946 cartoon Baby Bottleneck briefly portrays a dog named "Jimmy Do-quite-a-little", who invents a failed rocket ship.
Spike Jones' wartime song "Casey Jones" commemorates the raid, and refers to the "Shangri-La" origin story in the following lyrics:
In Shangri-La they got to board the plane
The ceiling was high, not a sign of rain
They revved up the motors and got set to go
To pay an unexpected visit down on Tokyo.
In climbed Casey, he's the bombardier,
Doolittle read the orders and they gave a cheer.
See also
Aviation history
List of Medal of Honor recipients for World War II
References
Citations
General bibliography
External links
Media
"Doolittle Tames the Gee Bee"—Story of the 1932 Thompson Trophy race. Includes quotes, photos, video
1896 births
1993 deaths
Aerobatic record holders
Air Corps Tactical School alumni
American air racers
American aviation record holders
American flight instructors
American test pilots
Aviators from California
Burials at Arlington National Cemetery
Chief Scientists of the United States Air Force
Doolittle Raiders
Grand Croix of the Légion d'honneur
Harmon Trophy winners
Honorary Knights Commander of the Order of the Bath
Knights of the Order of Polonia Restituta
Mackay Trophy winners
Military personnel from California
MIT School of Engineering alumni
People from Alameda, California
People from Nome, Alaska
People from Pebble Beach, California
Presidential Medal of Freedom recipients
Recipients of the Air Medal
Recipients of the Croix de guerre (Belgium)
Recipients of the Croix de Guerre 1939–1945 (France)
Recipients of the Distinguished Flying Cross (United States)
Recipients of the Distinguished Service Medal (US Army)
Recipients of the Silver Star
Schneider Trophy pilots
UC Berkeley College of Engineering alumni
United States Army Air Forces bomber pilots of World War II
United States Army Air Forces generals of World War II
United States Army Air Forces generals
United States Army Air Forces Medal of Honor recipients
United States Army Air Service pilots of World War I
United States Army personnel of World War I
World War II recipients of the Medal of Honor
American Freemasons |
148285 | https://en.wikipedia.org/wiki/64-bit%20computing | 64-bit computing | In computer architecture, 64-bit integers, memory addresses, or other data units are those that are 64 bits (8 octets) wide. Also, 64-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on processor registers, address buses, or data buses of that size. 64-bit microcomputers are computers in which 64-bit microprocessors are the norm. From the software perspective, 64-bit computing means the use of machine code with 64-bit virtual memory addresses. However, not all 64-bit instruction sets support full 64-bit virtual memory addresses; x86-64 and ARMv8, for example, support only 48 bits of virtual address, with the remaining 16 bits of the virtual address required to be all zeros or all ones, and several 64-bit instruction sets support fewer than 64 bits of physical memory address.
The term 64-bit describes a generation of computers in which 64-bit processors are the norm. 64 bits is a word size that defines certain classes of computer architecture, buses, memory, and CPUs and, by extension, the software that runs on them. 64-bit CPUs have been used in supercomputers since the 1970s (Cray-1, 1975) and in reduced instruction set computers (RISC) based workstations and servers since the early 1990s, notably the MIPS R4000, R8000, and R10000, the Digital Equipment Corporation (DEC) Alpha, the Sun Microsystems UltraSPARC, and the IBM RS64 and POWER3 and later IBM Power microprocessors. In 2003, 64-bit CPUs were introduced to the (formerly 32-bit) mainstream personal computer market in the form of x86-64 processors and the PowerPC G5, and were introduced in 2012 into the ARM architecture targeting smartphones and tablet computers, first sold on September 20, 2013, in the iPhone 5S powered by the ARMv8-A Apple A7 system on a chip (SoC).
A 64-bit register can hold any of 2^64 (over 18 quintillion, or about 1.8×10^19) different values. The range of integer values that can be stored in 64 bits depends on the integer representation used. With the two most common representations, the range is 0 through 18,446,744,073,709,551,615 (2^64 − 1) for representation as an (unsigned) binary number, and −9,223,372,036,854,775,808 (−2^63) through 9,223,372,036,854,775,807 (2^63 − 1) for representation as two's complement. Hence, a processor with 64-bit memory addresses can directly access 2^64 bytes (16 exbibytes, or EiB) of byte-addressable memory.
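These bounds follow directly from the register width; a quick check in Python, whose arbitrary-precision integers make the arithmetic exact:

```python
BITS = 64

unsigned_max = 2**BITS - 1          # largest unsigned value
signed_min = -(2**(BITS - 1))       # two's-complement minimum
signed_max = 2**(BITS - 1) - 1      # two's-complement maximum

print(f"{unsigned_max:,}")  # 18,446,744,073,709,551,615
print(f"{signed_min:,}")    # -9,223,372,036,854,775,808
print(f"{signed_max:,}")    # 9,223,372,036,854,775,807

# 2^64 byte addresses correspond to 16 EiB of byte-addressable memory.
assert 2**BITS == 16 * 2**60
```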
With no further qualification, a 64-bit computer architecture generally has integer and addressing processor registers that are 64 bits wide, allowing direct support for 64-bit data types and addresses. However, a CPU might have external data buses or address buses with different sizes from the registers, even larger (the 32-bit Pentium had a 64-bit data bus, for instance).
The term may also refer to the size of low-level data types, such as 64-bit floating-point arithmetic numbers.
Architectural implications
Processor registers are typically divided into several groups: integer, floating-point, single instruction, multiple data (SIMD), control, and often special registers for address arithmetic which may have various uses and names such as address, index, or base registers. However, in modern designs, these functions are often performed by more general purpose integer registers. In most processors, only integer or address-registers can be used to address data in memory; the other types of registers cannot. The size of these registers therefore normally limits the amount of directly addressable memory, even if there are registers, such as floating-point registers, that are wider.
Most high performance 32-bit and 64-bit processors (some notable exceptions are older or embedded ARM architecture (ARM) and 32-bit MIPS architecture (MIPS) CPUs) have integrated floating point hardware, which is often, but not always, based on 64-bit units of data. For example, although the x86/x87 architecture has instructions able to load and store 64-bit (and 32-bit) floating-point values in memory, the internal floating point data and register format is 80 bits wide, while the general-purpose registers are 32 bits wide. In contrast, the 64-bit Alpha family uses a 64-bit floating-point data and register format, and 64-bit integer registers.
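As a small illustration of these data widths, the IEEE 754 formats that such hardware exchanges with memory can be inspected in Python via the struct module (the x87's 80-bit internal format has no standard struct code and is not shown):

```python
import struct

# A value stored as a 64-bit IEEE 754 double occupies 8 bytes in memory,
# while the single-precision encoding of the same value occupies 4.
as_double = struct.pack("<d", 1.5)
as_single = struct.pack("<f", 1.5)
assert len(as_double) == 8
assert len(as_single) == 4

# Round-tripping through the 64-bit format is exact for this value.
(value,) = struct.unpack("<d", as_double)
assert value == 1.5
```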
History
Many computer instruction sets are designed so that a single integer register can store the memory address to any location in the computer's physical or virtual memory. Therefore, the total number of addresses to memory is often determined by the width of these registers. The IBM System/360 of the 1960s was an early 32-bit computer; it had 32-bit integer registers, although it only used the low order 24 bits of a word for addresses, resulting in a 16 MiB (2^24 bytes) address space. 32-bit superminicomputers, such as the DEC VAX, became common in the 1970s, and 32-bit microprocessors, such as the Motorola 68000 family and the 32-bit members of the x86 family starting with the Intel 80386, appeared in the mid-1980s, making 32 bits something of a de facto consensus as a convenient register size.
A 32-bit address register meant that 2^32 addresses, or 4 GiB of random-access memory (RAM), could be referenced. When these architectures were devised, 4 GiB of memory was so far beyond the typical amounts (4 MiB) in installations that this was considered to be enough headroom for addressing. 4.29 billion addresses were considered an appropriate size to work with for another important reason: 4.29 billion integers are enough to assign unique references to most entities in applications like databases.
Some supercomputer architectures of the 1970s and 1980s, such as the Cray-1, used registers up to 64 bits wide and supported 64-bit integer arithmetic, although they did not support 64-bit addressing. In the mid-1980s, Intel began development of the i860, culminating in its release in 1989 (too late for Windows NT); the i860 had 32-bit integer registers and 32-bit addressing, so it was not a fully 64-bit processor, although its graphics unit supported 64-bit integer arithmetic. However, 32 bits remained the norm until the early 1990s, when continual reductions in the cost of memory led to installations with amounts of RAM approaching 4 GiB, and the use of virtual memory spaces exceeding the 4 GiB ceiling became desirable for handling certain types of problems. In response, MIPS and DEC developed 64-bit microprocessor architectures, initially for high-end workstation and server machines. By the mid-1990s, HAL Computer Systems, Sun Microsystems, IBM, Silicon Graphics, and Hewlett Packard had developed 64-bit architectures for their workstation and server systems. A notable exception to this trend was IBM's mainframes, which then used 32-bit data and 31-bit address sizes; the IBM mainframes did not include 64-bit processors until 2000. During the 1990s, several low-cost 64-bit microprocessors were used in consumer electronics and embedded applications. Notably, the Nintendo 64 and the PlayStation 2 had 64-bit microprocessors before their introduction in personal computers. High-end printers, network equipment, and industrial computers also used 64-bit microprocessors, such as the Quantum Effect Devices R5000. 64-bit computing started to trickle down to the personal computer desktop from 2003 onward, when some models in Apple's Macintosh lines switched to PowerPC 970 processors (termed G5 by Apple) and Advanced Micro Devices (AMD) released its first 64-bit x86-64 processor.
64-bit data timeline
1961 IBM delivers the IBM 7030 Stretch supercomputer, which uses 64-bit data words and 32- or 64-bit instruction words.
1974 Control Data Corporation launches the CDC Star-100 vector supercomputer, which uses a 64-bit word architecture (prior CDC systems were based on a 60-bit architecture).
International Computers Limited launches the ICL 2900 Series with 32-bit, 64-bit, and 128-bit two's complement integers; 64-bit and 128-bit floating point; 32-bit, 64-bit, and 128-bit packed decimal and a 128-bit accumulator register. The architecture has survived through a succession of ICL and Fujitsu machines. The latest is the Fujitsu Supernova, which emulates the original environment on 64-bit Intel processors.
1976 Cray Research delivers the first Cray-1 supercomputer, which is based on a 64-bit word architecture and will form the basis for later Cray vector supercomputers.
1983 Elxsi launches the Elxsi 6400 parallel minisupercomputer. The Elxsi architecture has 64-bit data registers but a 32-bit address space.
1989 Intel introduces the Intel i860 reduced instruction set computer (RISC) processor. Marketed as a "64-Bit Microprocessor", it had essentially a 32-bit architecture, enhanced with a 3D graphics unit capable of 64-bit integer operations.
1993 Atari introduces the Atari Jaguar video game console, which includes some 64-bit wide data paths in its architecture.
64-bit address timeline
1991 MIPS Computer Systems produces the first 64-bit microprocessor, the R4000, which implements the MIPS III architecture, the third revision of its MIPS architecture. The CPU is used in SGI graphics workstations starting with the IRIS Crimson. Kendall Square Research delivers its first KSR1 supercomputer, based on a proprietary 64-bit RISC processor architecture running OSF/1.
1992 Digital Equipment Corporation (DEC) introduces the pure 64-bit Alpha architecture, which was born from the PRISM project.
1994 Intel announces plans for the 64-bit IA-64 architecture (jointly developed with Hewlett-Packard) as a successor to its 32-bit IA-32 processors. A 1998 to 1999 launch date was targeted.
1995 Sun launches a 64-bit SPARC processor, the UltraSPARC. Fujitsu-owned HAL Computer Systems launches workstations based on a 64-bit CPU, HAL's independently designed first-generation SPARC64. IBM releases the A10 and A30 microprocessors, the first 64-bit PowerPC AS processors. IBM also releases a 64-bit AS/400 system upgrade, which can convert the operating system, database and applications.
1996 Nintendo introduces the Nintendo 64 video game console, built around a low-cost variant of the MIPS R4000. HP releases the first implementation of its 64-bit PA-RISC 2.0 architecture, the PA-8000.
1998 IBM releases the POWER3 line of full-64-bit PowerPC/POWER processors.
1999 Intel releases the instruction set for the IA-64 architecture. AMD publicly discloses its set of 64-bit extensions to IA-32, called x86-64 (later branded AMD64).
2000 IBM ships its first 64-bit z/Architecture mainframe, the zSeries z900. z/Architecture is a 64-bit version of the 32-bit ESA/390 architecture, a descendant of the 32-bit System/360 architecture.
2001 Intel ships its IA-64 processor line, after repeated delays in getting to market. Now branded Itanium and targeting high-end servers, sales fail to meet expectations.
2003 AMD introduces its Opteron and Athlon 64 processor lines, based on its AMD64 architecture, the first x86-based 64-bit processor architecture. Apple also ships the 64-bit "G5" PowerPC 970 CPU produced by IBM. Intel maintains that its Itanium chips will remain its only 64-bit processors.
2004 Intel, reacting to the market success of AMD, admits it has been developing a clone of the AMD64 extensions named IA-32e (later renamed EM64T, then yet again renamed to Intel 64). Intel ships updated versions of its Xeon and Pentium 4 processor families supporting the new 64-bit instruction set.
VIA Technologies announces the Isaiah 64-bit processor.
2006 Sony, IBM, and Toshiba begin manufacturing the 64-bit Cell processor for use in the PlayStation 3, servers, workstations, and other appliances. Intel releases the Core 2 Duo, the first mainstream x86-64 processor for its mobile, desktop, and workstation lines. Earlier 64-bit-extension processor lines were not widely available in the consumer retail market (most 64-bit Pentium 4 and Pentium D chips were sold to OEMs); 64-bit Pentium 4, Pentium D, and Celeron parts did not enter mass production until late 2006 because of poor yields (most good-yield wafers were reserved for server and mainframe parts, while the mainstream remained on 130 nm 32-bit lines until 2006), and they soon became low-end offerings after the Core 2 debuted. AMD releases its first 64-bit mobile processor, manufactured on a 90 nm process.
2011 ARM Holdings announces ARMv8-A, the first 64-bit version of the ARM architecture.
2012 ARM Holdings announced their Cortex-A53 and Cortex-A57 cores, their first cores based on their 64-bit architecture, on 30 October 2012.
2013 Apple announces the iPhone 5S, with the world's first 64-bit processor in a smartphone, which uses its A7 ARMv8-A-based system-on-a-chip.
2014 Google announces the Nexus 9 tablet, the first Android device to run on the 64-bit Tegra K1 chip.
64-bit operating system timeline
1985 Cray releases UNICOS, the first 64-bit implementation of the Unix operating system.
1993 DEC releases the 64-bit DEC OSF/1 AXP Unix-like operating system (later renamed Tru64 UNIX) for its systems based on the Alpha architecture.
1994 Support for the R8000 processor is added by Silicon Graphics to the IRIX operating system in release 6.0.
1995 DEC releases OpenVMS 7.0, the first full 64-bit version of OpenVMS for Alpha. First 64-bit Linux distribution for the Alpha architecture is released.
1996 Support for the R4x00 processors in 64-bit mode is added by Silicon Graphics to the IRIX operating system in release 6.2.
1998 Sun releases Solaris 7, with full 64-bit UltraSPARC support.
2000 IBM releases z/OS, a 64-bit operating system descended from MVS, for the new zSeries 64-bit mainframes; 64-bit Linux on z Systems follows the CPU release almost immediately.
2001 Linux becomes the first OS kernel to fully support x86-64 (on a simulator, as no x86-64 processors had been released yet).
2001 Microsoft releases Windows XP 64-Bit Edition for the Itanium's IA-64 architecture; it could run 32-bit applications through an execution layer.
2003 Apple releases its Mac OS X 10.3 "Panther" operating system which adds support for native 64-bit integer arithmetic on PowerPC 970 processors. Several Linux distributions release with support for AMD64. FreeBSD releases with support for AMD64.
2005 On January 4, Microsoft discontinues Windows XP 64-Bit Edition, as no PCs with IA-64 processors had been available since the previous September, and announces that it is developing x86-64 versions of Windows to replace it. On January 31, Sun releases Solaris 10 with support for AMD64 and EM64T processors. On April 29, Apple releases Mac OS X 10.4 "Tiger" which provides limited support for 64-bit command-line applications on machines with PowerPC 970 processors; later versions for Intel-based Macs supported 64-bit command-line applications on Macs with EM64T processors. On April 30, Microsoft releases Windows XP Professional x64 Edition and Windows Server 2003 x64 Edition for AMD64 and EM64T processors.
2006 Microsoft releases Windows Vista, including a 64-bit version for AMD64/EM64T processors that retains 32-bit compatibility. In the 64-bit version, all Windows applications and components are 64-bit, although many also have their 32-bit versions included for compatibility with plug-ins.
2007 Apple releases Mac OS X 10.5 "Leopard", which fully supports 64-bit applications on machines with PowerPC 970 or EM64T processors.
2009 Microsoft releases Windows 7, which, like Windows Vista, includes a full 64-bit version for AMD64/Intel 64 processors; most new computers are loaded by default with a 64-bit version. Microsoft also releases Windows Server 2008 R2, which is the first 64-bit only server operating system. Apple releases Mac OS X 10.6, "Snow Leopard", which ships with a 64-bit kernel for AMD64/Intel64 processors, although only certain recent models of Apple computers will run the 64-bit kernel by default. Most applications bundled with Mac OS X 10.6 are now also 64-bit.
2011 Apple releases Mac OS X 10.7, "Lion", which runs the 64-bit kernel by default on supported machines. Older machines that are unable to run the 64-bit kernel run the 32-bit kernel, but, as with earlier releases, can still run 64-bit applications; Lion does not support machines with 32-bit processors. Nearly all applications bundled with Mac OS X 10.7 are now also 64-bit, including iTunes.
2012 Microsoft releases Windows 8, which supports UEFI Class 3 (UEFI without CSM) and Secure Boot.
2013 Apple releases iOS 7, which, on machines with AArch64 processors, has a 64-bit kernel that supports 64-bit applications.
2014 Google releases Android Lollipop, the first version of the Android operating system with support for 64-bit processors.
2017 Apple releases iOS 11, supporting only machines with AArch64 processors. It has a 64-bit kernel that only supports 64-bit applications. 32-bit applications are no longer compatible.
2019 Apple releases macOS 10.15 "Catalina", dropping support for 32-bit Intel applications.
2021 Google releases Android 12, dropping support for 32-bit applications. Microsoft releases Windows 11 on October 5, which only supports 64-bit systems, dropping support for IA-32 systems.
Limits of processors
In principle, a 64-bit microprocessor can address 16 EiB (2^64 bytes, or about 18.4 exabytes) of memory. However, not all instruction sets, and not all processors implementing those instruction sets, support a full 64-bit virtual or physical address space.
The x86-64 architecture allows 48 bits for virtual memory and, for any given processor, up to 52 bits for physical memory. These limits allow memory sizes of 256 TiB (2^48 bytes) and 4 PiB (2^52 bytes), respectively. A PC cannot currently contain 4 pebibytes of memory (due to the physical size of the memory chips), but AMD envisioned large servers, shared memory clusters, and other uses of physical address space that might approach this in the foreseeable future. Thus the 52-bit physical address provides ample room for expansion while not incurring the cost of implementing full 64-bit physical addresses. Similarly, the 48-bit virtual address space was designed to provide 65,536 (2^16) times the 32-bit limit of 4 GiB (2^32 bytes), allowing room for later expansion and incurring no overhead of translating full 64-bit addresses.
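These sizes, and the x86-64 rule that a virtual address is "canonical" only when bits 63 through 47 are all copies of one another, can be checked with a short self-contained program. Example in C (an illustrative sketch; the helper name is ours, not from any library):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* A 48-bit x86-64 virtual address is canonical when bits 63..47
   are all zero (lower half) or all one (upper half). */
static bool is_canonical_48(uint64_t va)
{
    uint64_t top17 = va >> 47;
    return top17 == 0 || top17 == 0x1FFFF;
}

int main(void)
{
    printf("48-bit virtual space:  %llu TiB\n", (1ULL << 48) >> 40);  /* 256 */
    printf("52-bit physical space: %llu PiB\n", (1ULL << 52) >> 50);  /* 4   */
    printf("%d\n", is_canonical_48(0x00007FFFFFFFFFFFULL));  /* 1: last lower-half address */
    printf("%d\n", is_canonical_48(0x0000800000000000ULL));  /* 0: inside the non-canonical hole */
    return 0;
}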
The Power ISA v3.0 allows 64 bits for an effective address, mapped to a segmented address of between 65 and 78 bits for virtual memory, and, for any given processor, up to 60 bits for physical memory.
The Oracle SPARC Architecture 2015 allows 64 bits for virtual memory and, for any given processor, between 40 and 56 bits for physical memory.
The ARM AArch64 Virtual Memory System Architecture allows 48 bits for virtual memory and, for any given processor, from 32 to 48 bits for physical memory.
The DEC Alpha specification requires a minimum of 43 bits of virtual memory address space (8 TiB) to be supported, and the hardware must check the remaining unsupported bits and trap if they are nonzero (to maintain compatibility with future processors). The Alpha 21064 supported 43 bits of virtual memory address space (8 TiB) and 34 bits of physical memory address space (16 GiB). The Alpha 21164 supported 43 bits of virtual memory address space (8 TiB) and 40 bits of physical memory address space (1 TiB). The Alpha 21264 supported a user-configurable 43 or 48 bits of virtual memory address space (8 TiB or 256 TiB) and 44 bits of physical memory address space (16 TiB).
64-bit applications
32-bit vs 64-bit
A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture, because that software has to manage the actual memory-addressing hardware. Other software must also be ported to use the new abilities; older 32-bit software may be supported in one of three ways: by virtue of the 64-bit instruction set being a superset of the 32-bit instruction set, so that processors supporting the 64-bit set can also run 32-bit code natively; through software emulation; or by implementing an actual 32-bit processor core within the 64-bit processor, as with some Itanium processors from Intel, which included an IA-32 core to run 32-bit x86 applications. The operating systems for those 64-bit architectures generally support both 32-bit and 64-bit applications.
One significant exception to this is the IBM AS/400, software for which is compiled into a virtual instruction set architecture (ISA) called Technology Independent Machine Interface (TIMI); TIMI code is then translated to native machine code by low-level software before being executed. The translation software is all that must be rewritten to move the full OS and all software to a new platform, as when IBM transitioned the native instruction set for AS/400 from the older 32/48-bit IMPI to the newer 64-bit PowerPC-AS, codenamed Amazon. The IMPI instruction set was quite different from even 32-bit PowerPC, so this transition was even bigger than moving a given instruction set from 32 to 64 bits.
On 64-bit hardware with x86-64 architecture (AMD64), most 32-bit operating systems and applications can run with no compatibility issues. While the larger address space of 64-bit architectures makes working with large data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate on whether they or their 32-bit compatibility modes will be faster than comparably priced 32-bit systems for other tasks.
A compiled Java program can run on a 32- or 64-bit Java virtual machine with no modification. The lengths and precision of all the built-in types, such as char, short, int, long, float, and double, and the types that can be used as array indices, are specified by the standard and are not dependent on the underlying architecture. Java programs that run on a 64-bit Java virtual machine have access to a larger address space.
Speed is not the only factor to consider in comparing 32-bit and 64-bit processors. Applications such as multi-tasking, stress testing, and clustering – for high-performance computing (HPC) – may be more suited to a 64-bit architecture when deployed appropriately. For this reason, 64-bit clusters have been widely deployed in large organizations, such as IBM, HP, and Microsoft.
Summary:
A 64-bit processor performs best with 64-bit software.
A 64-bit processor may be backward compatible, allowing it to run both 32-bit application software and 32-bit operating systems written for the 32-bit version of its instruction set.
A 32-bit processor is incompatible with 64-bit software.
Pros and cons
A common misconception is that 64-bit architectures are no better than 32-bit architectures unless the computer has more than 4 GiB of random-access memory. This is not entirely true:
Some operating systems and certain hardware configurations limit the physical memory space to 3 GiB on IA-32 systems, due to much of the 3–4 GiB region being reserved for hardware addressing (see 3 GiB barrier); 64-bit architectures can address far more than 4 GiB. However, IA-32 processors from the Pentium Pro onward allow a 36-bit physical memory address space, using Physical Address Extension (PAE), which gives a 64 GiB physical address range, of which up to 62 GiB may be used by main memory; operating systems that support PAE may not be limited to 4 GiB of physical memory, even on IA-32 processors. However, drivers and other kernel-mode software, especially older versions, may be incompatible with PAE; this has been cited as the reason for 32-bit versions of Microsoft Windows being limited to 4 GiB of physical RAM (although the validity of this explanation has been disputed).
Some operating systems reserve portions of process address space for OS use, effectively reducing the total address space available for mapping memory for user programs. For instance, 32-bit Windows reserves 1 or 2 GiB (depending on the settings) of the total address space for the kernel, which leaves only 3 or 2 GiB (respectively) of the address space available for user mode. This limit is much higher on 64-bit operating systems.
Memory-mapped files are becoming more difficult to implement on 32-bit architectures as files larger than 4 GiB become more common; such large files cannot easily be memory-mapped on 32-bit architectures, as only part of the file can be mapped into the address space at a time, and to access such a file by memory mapping, the mapped parts must be swapped into and out of the address space as needed. This is a problem, as memory mapping, if properly implemented by the OS, is one of the most efficient disk-to-memory methods (a windowed-mapping sketch follows the register example below).
Some 64-bit programs, such as encoders, decoders and encryption software, can benefit greatly from 64-bit registers, while the performance of other programs, such as 3D graphics-oriented ones, remains unaffected when switching from a 32-bit to a 64-bit environment.
Some 64-bit architectures, such as x86-64 and AArch64, support more general-purpose registers than their 32-bit counterparts (although this is not due specifically to the word length). This leads to a significant speed increase for tight loops since the processor does not have to fetch data from the cache or main memory if the data can fit in the available registers.
Example in C:
int a, b, c, d, e;
for (a = 0; a < 100; a++)
{
    /* Each assignment copies the previous variable; with enough
       general-purpose registers, a through e can all stay in
       registers for the entire loop. */
    b = a;
    c = b;
    d = c;
    e = d;
}
This code first declares five variables: a, b, c, d, and e. On each pass of the loop, it sets b to the value of a, c to the value of b, d to the value of c, and e to the value of d, which has the same net effect as setting all of the variables to the value of a.
If a processor can keep only two or three values in registers, it would need to move some values between memory and registers to be able to process variables d and e as well, a process that takes many CPU cycles. A processor that can hold all of the values and variables in registers can loop through them with no need to move data between registers and memory on each iteration. This behavior is analogous to paging in virtual memory, although the actual effect depends on how the compiler allocates registers.
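Returning to the memory-mapped-file point above, the windowed access that 32-bit systems are forced into can be sketched with POSIX mmap. Example in C (illustrative only; huge.dat is a hypothetical file larger than 4 GiB, and _FILE_OFFSET_BITS=64 is needed to get a 64-bit off_t on 32-bit builds):

#define _FILE_OFFSET_BITS 64   /* 64-bit off_t even on 32-bit builds */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t window = 64UL * 1024 * 1024;             /* 64 MiB view     */
    const off_t  offset = (off_t)5 * 1024 * 1024 * 1024;  /* 5 GiB into file */

    int fd = open("huge.dat", O_RDONLY);  /* hypothetical large file */
    if (fd < 0) { perror("open"); return 1; }

    /* Map only a window of the file; the offset must be page-aligned. */
    char *view = mmap(NULL, window, PROT_READ, MAP_PRIVATE, fd, offset);
    if (view == MAP_FAILED) { perror("mmap"); return 1; }

    printf("byte at 5 GiB: %d\n", view[0]);

    munmap(view, window);
    close(fd);
    return 0;
}

On a 64-bit system the entire file could instead be mapped once, with the OS paging it in on demand.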
The main disadvantage of 64-bit architectures is that, relative to 32-bit architectures, the same data occupies more space in memory (due to longer pointers and possibly other types, and alignment padding). This increases the memory requirements of a given process and can have implications for efficient processor cache use. Maintaining a partial 32-bit model is one way to handle this, and is in general reasonably effective. For example, the z/OS operating system takes this approach, requiring program code to reside in 31-bit address spaces (the high order bit is not used in address calculation on the underlying hardware platform) while data objects can optionally reside in 64-bit regions. Not all such applications require a large address space or manipulate 64-bit data items, so these applications do not benefit from these features.
Software availability
x86-based 64-bit systems sometimes lack equivalents of software that is written for 32-bit architectures. The most severe problem in Microsoft Windows is incompatible device drivers for obsolete hardware. Most 32-bit application software can run on a 64-bit operating system in a compatibility mode, also termed an emulation mode, e.g., Microsoft WoW64 Technology for IA-64 and AMD64. The 64-bit Windows Native Mode driver environment runs atop 64-bit NTDLL.DLL, which cannot call 32-bit Win32 subsystem code (often devices whose actual hardware function is emulated in user-mode software, like Winprinters). Because 64-bit drivers for most devices were unavailable until early 2007 (Vista x64), using a 64-bit version of Windows was considered a challenge. However, the trend has since moved toward 64-bit computing, especially as memory prices dropped and the use of more than 4 GiB of RAM increased. Most manufacturers started to provide both 32-bit and 64-bit drivers for new devices, so unavailability of 64-bit drivers ceased to be a problem. 64-bit drivers were not provided for many older devices, which could consequently not be used in 64-bit systems.
Driver compatibility was less of a problem with open-source drivers, as 32-bit ones could be modified for 64-bit use. Support for hardware made before early 2007 was problematic for open-source platforms, due to the relatively small number of users.
64-bit versions of Windows cannot run 16-bit software; however, most 32-bit applications will work well. 64-bit users are forced to install a virtual machine running a 16- or 32-bit operating system to run 16-bit applications.
Mac OS X 10.4 "Tiger" and Mac OS X 10.5 "Leopard" had only a 32-bit kernel, but they could run 64-bit user-mode code on 64-bit processors. Mac OS X 10.6 "Snow Leopard" had both 32- and 64-bit kernels, and, on most Macs, used the 32-bit kernel even on 64-bit processors. This allowed those Macs to support 64-bit processes while still supporting 32-bit device drivers, although not 64-bit drivers and the performance advantages that can come with them. Mac OS X 10.7 "Lion" ran with a 64-bit kernel on more Macs, and OS X 10.8 "Mountain Lion" and later macOS releases have only a 64-bit kernel. On systems with 64-bit processors, both the 32- and 64-bit macOS kernels can run 32-bit user-mode code, and all versions of macOS up to macOS Mojave (10.14) include 32-bit versions of libraries that 32-bit applications would use, so 32-bit user-mode software for macOS will run on those systems. The 32-bit versions of libraries were removed by Apple in macOS Catalina (10.15).
Linux and most other Unix-like operating systems, and the C and C++ toolchains for them, have supported 64-bit processors for many years. Many applications and libraries for those platforms are open-source software, written in C and C++, so that if they are 64-bit-safe, they can be compiled into 64-bit versions. This source-based distribution model, with an emphasis on frequent releases, makes availability of application software for those operating systems less of an issue.
64-bit data models
In 32-bit programs, pointers and data types such as integers generally have the same length. This is not necessarily true on 64-bit machines. Mixing data types in programming languages such as C and its descendants such as C++ and Objective-C may thus work on 32-bit implementations but not on 64-bit implementations.
In many programming environments for C and C-derived languages on 64-bit machines, int variables are still 32 bits wide, but long integers and pointers are 64 bits wide. These are described as having an LP64 data model, which is an abbreviation of "Long, Pointer, 64". Other models are the ILP64 data model in which all three data types are 64 bits wide, and even the SILP64 model where short integers are also 64 bits wide. However, in most cases the modifications required are relatively minor and straightforward, and many well-written programs can simply be recompiled for the new environment with no changes. Another alternative is the LLP64 model, which maintains compatibility with 32-bit code by leaving both int and long as 32-bit. LL refers to the long long integer type, which is at least 64 bits on all platforms, including 32-bit environments.
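The model in effect on a given platform can be verified directly. Example in C (a minimal, platform-neutral sketch):

#include <stdio.h>

int main(void)
{
    /* LP64 prints 4/8/8/8; LLP64 prints 4/4/8/8; ILP32 prints 4/4/8/4. */
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));
    printf("void *:    %zu bytes\n", sizeof(void *));
    return 0;
}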
There are also systems with 64-bit processors using an ILP32 data model, with the addition of 64-bit long long integers; this is also used on many platforms with 32-bit processors. This model reduces code size and the size of data structures containing pointers, at the cost of a much smaller address space, a good choice for some embedded systems. For instruction sets such as x86 and ARM, in which the 64-bit version of the instruction set has more registers than the 32-bit version does, it provides access to the additional registers without the space penalty. It is common on 64-bit RISC machines, has been explored on x86 as the x32 ABI, and has recently been used in the Apple Watch Series 4 and 5.
Many 64-bit platforms today use an LP64 model (including Solaris, AIX, HP-UX, Linux, macOS, BSD, and IBM z/OS). Microsoft Windows uses an LLP64 model. The disadvantage of the LP64 model is that storing a long into an int may truncate. On the other hand, converting a pointer to a long will “work” in LP64. In the LLP64 model, the reverse is true. These are not problems which affect fully standard-compliant code, but code is often written with implicit assumptions about the widths of data types. C code should prefer (u)intptr_t instead of long when casting pointers into integer objects.
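That advice can be illustrated with a short sketch: uintptr_t round-trips a pointer under any of these models, whereas a 32-bit integer type would truncate the address under LP64 and LLP64 alike. Example in C (illustrative only):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int x = 42;
    int *p = &x;

    /* Portable: uintptr_t is defined to hold any object pointer. */
    uintptr_t bits = (uintptr_t)p;
    int *q = (int *)bits;
    printf("%d\n", *q);  /* prints 42 */

    /* Not portable: a cast such as (unsigned)(uintptr_t)p keeps only
       the low 32 bits of the address on 64-bit models, so converting
       it back would not reliably yield the original pointer. */
    return 0;
}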
A programming model is a choice made to suit a given compiler, and several can coexist on the same OS. However, the programming model chosen as the primary model for the OS application programming interface (API) typically dominates.
Another consideration is the data model used for device drivers. Drivers make up the majority of the operating system code in most modern operating systems (although many may not be loaded when the operating system is running). Many drivers use pointers heavily to manipulate data, and in some cases have to load pointers of a certain size into the hardware they support for direct memory access (DMA). As an example, a driver for a 32-bit PCI device asking the device to DMA data into upper areas of a 64-bit machine's memory could not satisfy requests from the operating system to load data from the device to memory above the 4 gibibyte barrier, because the pointers for those addresses would not fit into the DMA registers of the device. This problem is solved by having the OS take the memory restrictions of the device into account when generating requests to drivers for DMA, or by using an input–output memory management unit (IOMMU).
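As a rough driver-side illustration, a hypothetical helper might test whether a buffer is reachable by a 32-bit DMA engine before the OS falls back to a bounce buffer or an IOMMU mapping. Example in C (a sketch under stated assumptions, not a real driver API):

#include <stdbool.h>
#include <stdint.h>

/* A device with 32-bit DMA registers can only reach physical
   addresses below 4 GiB; assumes phys + len does not overflow. */
static bool dma32_reachable(uint64_t phys, uint64_t len)
{
    return phys + len <= (1ULL << 32);
}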
Current 64-bit architectures
64-bit architectures for which processors are being manufactured include:
The 64-bit extension created by Advanced Micro Devices (AMD) to Intel's x86 architecture (later licensed by Intel); commonly termed x86-64, AMD64, or x64:
AMD's AMD64 extensions (used in Athlon 64, Opteron, Sempron, Turion 64, Phenom, Athlon II, Phenom II, APU, FX, Ryzen, and Epyc processors)
Intel's Intel 64 extensions, used in Intel Core 2/i3/i5/i7/i9, some Atom, and newer Celeron, Pentium, and Xeon processors
Intel's K1OM architecture, a variant of Intel 64 with no CMOV, MMX, and SSE instructions, used in first-generation Xeon Phi (Knights Corner) coprocessors, binary incompatible with x86-64 programs
VIA Technologies' 64-bit extensions, used in the VIA Nano processors
IBM's PowerPC/Power ISA:
IBM's POWER4, POWER5, POWER6, POWER7, POWER8, POWER9, and IBM A2 processors
SPARC V9 architecture:
Oracle's M8 and S7 processors
Fujitsu's SPARC64 XII and SPARC64 XIfx processors
IBM's z/Architecture, a 64-bit version of the ESA/390 architecture, used in IBM's eServer zSeries and System z mainframes:
IBM z13 and z14
Hitachi AP8000E
HP–Intel's IA-64 architecture:
Intel's Itanium processors (discontinued)
MIPS Technologies' MIPS64 architecture
ARM Holdings' AArch64 architecture
Elbrus architecture:
Elbrus-8S
NEC SX architecture
SX-Aurora TSUBASA
RISC-V
Most 64-bit architectures that are derived from a 32-bit architecture can execute code written for the 32-bit version natively, with no performance penalty. This kind of support is commonly called bi-arch support or, more generally, multi-arch support.
See also
Computer memory
External links
64-bit Transition Guide, Mac Developer Library
Lessons on development of 64-bit C/C++ applications
64-Bit Programming Models: Why LP64?
AMD64 (EM64T) architecture
64-bit computers
Data unit
148487 | https://en.wikipedia.org/wiki/Scytale | Scytale | In cryptography, a scytale (also transliterated skytale; from Greek skutálē, "baton, cylinder", also skútalon) is a tool used to perform a transposition cipher, consisting of a cylinder with a strip of parchment wound around it on which is written a message. The ancient Greeks, and the Spartans in particular, are said to have used this cipher to communicate during military campaigns.
The recipient uses a rod of the same diameter on which the parchment is wrapped to read the message.
Encrypting
Suppose the rod allows one to write four letters around in a circle and five letters down the side of it.
The plaintext could be: "I am hurt very badly help".
To encrypt, one simply writes across the leather:
       |   |   |   |   |   |  |
       | I | a | m | h | u |  |
     __| r | t | v | e | r |__|
    |  | y | b | a | d | l |
    |  | y | h | e | l | p |
       |   |   |   |   |   |  |
so the ciphertext becomes "Iryyatbhmvaehedlurlp" after unwinding.
Decrypting
To decrypt, all one must do is wrap the leather strip around the rod and read across.
The ciphertext is: "Iryyatbhmvaehedlurlp"
Every fifth letter will appear on the same line, so the plaintext (after re-insertion of spaces) becomes: "I am hurt very badly help".
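Viewed as a columnar transposition with four rows (turns of the rod) and five columns, the whole scheme fits in a few lines. Example in C (an illustrative sketch; the function name and fixed dimensions are ours):

#include <stdio.h>

/* Writing across `cols` columns on `rows` turns of the strip and then
   unwinding reads the text column by column; decryption inverts it. */
static void scytale(const char *in, char *out, int rows, int cols, int encrypt)
{
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            if (encrypt)
                out[c * rows + r] = in[r * cols + c];
            else
                out[r * cols + c] = in[c * rows + r];
    out[rows * cols] = '\0';
}

int main(void)
{
    char ct[21], pt[21];
    scytale("Iamhurtverybadlyhelp", ct, 4, 5, 1);
    printf("%s\n", ct);  /* Iryyatbhmvaehedlurlp */
    scytale(ct, pt, 4, 5, 0);
    printf("%s\n", pt);  /* Iamhurtverybadlyhelp */
    return 0;
}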
History
From indirect evidence, the scytale was first mentioned by the Greek poet Archilochus, who lived in the 7th century BC. Other Greek and Roman writers during the following centuries also mentioned it, but it was not until Apollonius of Rhodes (middle of the 3rd century BC) that a clear indication of its use as a cryptographic device appeared. A description of how it operated is not known from before Plutarch (50–120 AD).
Due to difficulties in reconciling the description of Plutarch with the earlier accounts, and circumstantial evidence such as the cryptographic weakness of the device, several authors have suggested that the scytale was used for conveying messages in plaintext and that Plutarch's description is mythological.
Message authentication hypothesis
An alternative hypothesis is that the scytale was used for message authentication rather than encryption. Only if the sender wrote the message around a scytale of the same diameter as the receiver's would the receiver be able to read it. It would therefore be difficult for enemy spies to inject false messages into the communication between two commanders.
See also
Caesar cipher
Classical ciphers
Encryption devices
Military history of Sparta
151154 | https://en.wikipedia.org/wiki/Reed%20Richards | Reed Richards | Mister Fantastic (Reed Richards) is a fictional superhero appearing in American comic books published by Marvel Comics. The character is a founding member of the Fantastic Four. Richards possesses a mastery of mechanical, aerospace and electrical engineering, chemistry, all levels of physics, and human and alien biology. BusinessWeek listed Mister Fantastic as one of the top ten most intelligent fictional characters in American comics. He is the inventor of the spacecraft that was bombarded by cosmic radiation on its maiden voyage, granting the Fantastic Four their powers. Richards gained the ability to stretch his body into any shape he desires.
Mister Fantastic acts as the leader and father figure of the Fantastic Four, and although his cosmic ray powers are primarily stretching abilities, his presence on the team is heavily defined by his scientific acumen, as he is officially acknowledged as the smartest man in the Marvel Universe. This is particularly a point of tragedy in regard to his best friend, Ben Grimm, whom he has constantly tried to turn back into his human form, but who typically remains a large, rocky creature called the Thing. He is the husband of Susan Storm, father of Franklin Richards and Valeria Richards, and mentor of his brother-in-law, Johnny Storm.
The character of Reed Richards has been portrayed by Alex Hyde-White in the 1994 film The Fantastic Four, Ioan Gruffudd in the 2005 film Fantastic Four and its 2007 sequel Fantastic Four: Rise of the Silver Surfer, and Miles Teller in the 2015 film Fantastic Four.
Publication history
Created by writer Stan Lee and artist/co-plotter Jack Kirby, the character first appeared in The Fantastic Four #1 (Nov. 1961). He was one of the four main characters in the title. Lee has stated the stretch powers were inspired by DC's Plastic Man, which had no equivalent in Marvel.
Reed Richards has continued to appear regularly in the Fantastic Four comic for most of its publication run.
Fictional character biography
Pre-Fantastic Four
Born in Central City, California, Reed Richards is the son of Evelyn and Nathaniel Richards. Nathaniel was a scientific genius, and Reed inherited a similar level of intellect and interests. A child prodigy with special aptitude in mathematics, physics, and mechanics, Reed Richards was taking college-level courses when he was 14. He attended such prestigious universities as the Massachusetts Institute of Technology, California Institute of Technology, Harvard University, Columbia University, and the fictional Empire State University. By the age of 20, he had several degrees in the sciences under his belt.
It was at Empire State University that he met Benjamin J. Grimm. Reed had already begun designing a starship capable of traveling in hyperspace. Sharing his plans with his new roommate, Grimm jokingly volunteered to pilot the craft.
Also while at Empire State University, he met a brilliant fellow student, Victor Von Doom. In Richards, Doom met the first person who could match him intellectually; regarding Richards as his ultimate rival, Doom became increasingly jealous of him. Determined to prove he was better, Doom conducted reckless experiments which eventually scarred his face and would lead him to become Doctor Doom.
During the summer months, Reed rented a room in a boarding house owned by Marygay, the aunt of a young woman named Susan Storm, who was an undergraduate student at the time. Reed fell in love with Sue instantly and began courting her. Ultimately, Reed became too distracted from his dissertation work by the romance and decided that the best thing for both of them was for him to move out of Marygay's home.
Moving on to Harvard, Reed earned Ph.D.s in Physics and Electrical Engineering while working as a military scientist, all this by the age of 22. He also worked in communications for the Army. Three years later, in his mid-20s, Reed used his inheritance, along with government funding, to finance his research. Determined to go to Mars and beyond, Richards based the fateful project in Central City. Susan Storm moved into the area, and within a short time, found herself engaged to Reed. Reed's old college roommate, Ben Grimm, now a successful test pilot and astronaut, was indeed slated to pilot the craft.
All seemed well; however, when the government threatened to cut funding and cancel the project, Reed, Ben, Sue, and Sue's younger brother Johnny, agreed to sneak aboard the starship and take it up immediately. They knew they had not completed all the testing that had been planned, but Reed was confident they would be safe. Ben was initially skeptical about the unknown effects of radiation, while Reed theorized that their ship's shielding would be adequate to protect them.
It was on Reed's initiative that the fateful mission which had Susan Storm, Johnny Storm and Ben Grimm accompanying him into space took place. When their ship passed through the Van Allen belt they found their cockpit bombarded with nearly lethal doses of cosmic radiation. Reed had neglected to account for the abnormal radiation levels in the belt's atmosphere. The cosmic rays wreaked havoc on the starship's insufficient shielding and they were forced to return to Earth immediately. When they crash-landed they found that their bodies were changed dramatically. Reed's body was elastic and he could reshape any portion of his body at will. At his suggestion, they decided to use their new abilities to serve mankind as the Fantastic Four. Reed was chosen to lead the group, under the name "Mr. Fantastic". He later told his daughter, by way of a bedtime story, that the reason he suggested they become adventurers and gave them such outlandish costumes and names as "Mister Fantastic" and "The Thing" was that he knew they would likely be hated and feared for their powers without such an over-the-top public image.
This history has been changed over the years in order to keep it current. In the original comics, Richards was a veteran of World War II who had served behind enemy lines in occupied France, and the goal of his space mission was a crewed space flight to the Moon before the Communists could achieve one. This was later changed to getting there before the Chinese Communists, and to exploring the interstellar areas of the red planet and beyond. Also, Reed originally states that he and Sue "were kids together" and that Sue was the "girl next door" whom Reed left behind to go fight in the war. This origin story was, many years later, altered so that Sue was thirteen years old when she first met Reed, who was in his early twenties, at her Aunt Marygay's boardinghouse. Current official Marvel canon has since altered this origin story by combining elements of both: it eliminated the large age gap but maintained that Reed and Sue did not meet until their late teens at Aunt Marygay's boardinghouse. The change in age was an editorial decision made in 2013, as Reed developing, several years later, a romantic interest in a girl he had first met when she was thirteen was deemed inappropriate.
Leadership of the Fantastic Four
The early career of the Fantastic Four brought Mister Fantastic a number of discoveries, and the team had its first encounters with many unusual characters. In the team's first appearance, they battled the Mole Man. They then battled the Skrulls. Soon after that, the team encountered the Sub-Mariner. They then had the first of many battles with Doctor Doom. They later journeyed to a subatomic world. Soon after that, they encountered Rama-Tut. They then battled the Molecule Man.
As the team leader, Mr. Fantastic created numerous exotic devices and vehicles for the team to use, such as clothing made of 'unstable molecules' that could be used safely with their powers. Furthermore, in addition to opposing evil, he often leads the team on daring expeditions, such as into the Negative Zone. He has also felt personally responsible for Ben Grimm's grotesque change and has labored off and on to reverse it permanently.
Under his guidance, the team went on to become Earth's most celebrated band of heroes. Together, they would save the world countless times. Ever driven by his quest for knowledge, Reed is believed by most to be the Earth's foremost intelligence. There is little he cannot create, fix, or understand given time. The patents and royalties on his inventions alone have funded the group over the years.
However, there are drawbacks to his association with the team. Chief among them is the team's violent encounters with Doctor Doom, who believes that Reed was responsible for the accident that scarred him. Doom has never forgiven Reed and has sworn revenge. Doom has even gone as far as transforming Reed into a monstrous freak, attacking Reed's children and attempting to seduce Sue.
Subplots and story arcs
After many adventures as the Fantastic Four, Reed married Sue. Not long after that, the team encountered the Inhumans for the first time. They next encountered Galactus and the Silver Surfer. Reed then opened a portal to the Negative Zone for the first time. Soon, the team first battled the Psycho-Man. Before long Reed and Sue had a baby, young Franklin Richards; the team battled Annihilus right before Franklin's birth. Franklin was a mutant with incredible powers, but, due to the cosmic ray alteration to his parents' DNA, they manifested while he was still very young (in the Marvel world, most mutants like the X-Men get their powers while teenagers). Franklin appears to have power that can rival a member of the Celestials; the power of a god in the body of a small child. The couple briefly separated, and Reed further alienated Sue by shutting down Franklin's mind in order to prevent his power from causing global catastrophe; Sue Storm initiated divorce proceedings but the two were reconciled soon after.
Reed also saved Galactus's life during the course of his adventures. He then bought the Baxter Building. Later, he was tried by the Shi'ar for saving Galactus's life, but the charges were dropped when Eternity briefly granted all those attending the trial a moment of 'cosmic awareness' that allowed them to understand that Galactus was necessary for the continued well-being of the universe. Reed's battle with Gormuu prior to getting his powers was recounted. Just after that, Reed was reunited with his father in an alternate timeline. Some time later, Reed and Sue retired from the Fantastic Four, and then joined the Avengers, although Reed's past experience as leader of the Fantastic Four meant that he had trouble adjusting to following Captain America's lead regardless of his respect for the other man. Eventually, Reed and Sue did rejoin the Fantastic Four.
In the course of fighting an alien called Hunger, Doctor Doom was seriously injured. The Fantastic Four were also part of the battle against Hunger, and Doom asked his old enemy to take his hand. At that point, they both disappeared in a flash, leaving nothing but ashes. It appeared as if the two sworn enemies had fittingly died in each other's hands.
However, unknown to anyone else at the time, Reed and Doom had actually been thrown back into the time of barbarians and onto an alien world by a being called Hyperstorm, Reed's grandchild from an alternate future, the child of Franklin Richards and Rachel Summers, daughter of Scott Summers and Jean Grey. They were so far into the past, and with no technology, that even their brilliant minds couldn't find a way back home. Doom was captured and held prisoner by Hyperstorm. Reed wandered aimlessly for about a year.
Meanwhile, the rest of the FF recruited Scott Lang as scientific advisor. They even confronted an insane alternative version of Reed called the Dark Raider who was traveling from reality to reality, destroying all the various versions of himself after his own failure to save his world from Galactus in their original confrontation.
A while later, the remaining members of the FF, along with the Sub-Mariner, Lyja, and Kristoff Vernard, found themselves trapped in the same era as their Reed and Doom. They found Reed but faced a new problem: during his time alone, Reed had resigned himself to the belief that it was impossible for his old friends to stage a rescue, and he attacked them before realizing that these truly were his friends. After returning to their own time period, he sought out Galactus, the only being in the universe who could defeat Hyperstorm. Upon Hyperstorm's defeat at the hands of Galactus, the FF returned to the present day, where they continued their lives, not only as a team, but as a family.
Onslaught and Heroes Reborn
Shortly after their return, the FF were confronted by a being called Onslaught. Onslaught took control of an army of Sentinels and invaded New York City, hunting down every mutant being he could find. Onslaught wished to add the abilities of the godlike Franklin Richards to his own. Only through the apparent sacrifice of the Fantastic Four's own lives, and those of many of the heroes in the Marvel universe, was Onslaught finally vanquished. The heroes would have died then and there if not for Franklin, who created an alternate reality for them to reside in. Completely oblivious to what had taken place, Reed and his compatriots relived most of their lives. In their absence, the Fantastic Four's headquarters, Four Freedoms Plaza, was annihilated by the Masters of Evil, a supervillain group posing as the heroic Thunderbolts. One year later, Franklin returned with his family along with the other heroes from the parallel reality. Reed was overjoyed to see his son again, but he and the rest of the FF found themselves without a home, moving into Reed's storage warehouse on Pier 4, overlooking the East River. Making this their home, the Fantastic Four continued with their lives, eventually managing to move back into the Baxter Building.
It has emerged that Reed is one of the members of the "Illuminati", unbeknownst to his wife. He is also in possession of the "Power" gem of the Infinity Gauntlet.
Pro-Registration
In Marvel's Civil War miniseries and crossover event, Reed Richards is one of the leading figures, along with Iron Man, on the side in favor of the Superhuman Registration Act. He speculates that this will lead to conflict with his wife, a prediction borne out in issue #4 of the miniseries, when a clone of Thor, created by him and Tony Stark, goes out of control, kills Goliath, and nearly kills the rest of the Secret Avengers before Sue Storm steps in and saves them. Soon after, Sue leaves Reed, along with Johnny, to join the Secret Avengers in hopes that it will drive Reed to end the conflict quickly.
In The Amazing Spider-Man #535, which takes place shortly before the events of Civil War #5, Peter Parker demands to see the conditions inside the detention facility designed by Reed to hold unregistered superhumans. After being escorted to and from the prison by Iron Man, Parker returns with more doubts than ever about whether he is on the right side and asks Reed why he supports the Superhuman Registration Act, a question Reed answers by telling the story of his paternal uncle, Ted. A professional writer, Reed remembers his uncle as "funny", "colorful", and "accepting". As a boy, Reed loved spending time with Ted. Unfortunately, Ted was also "an eccentric" and "stubborn". Because he had a career in the arts and because he stood out, Ted was called before HUAC, imprisoned on contempt of Congress charges for six months, and was unable to find work after he'd served his sentence. He was even shunned by Reed's father. Ted lost everything, which Reed says finally "killed him" without going into greater detail. Richards opines that his uncle was wrong to take such a stand, to pick a fight he couldn't win, and to fail to respect the law.
However, Fantastic Four #542 reveals the real reason for his supporting the registration act: his development of a working version of Isaac Asimov's fictional psychohistory concept. His application of this science indicates to him that billions will die in escalating conflicts without the presence of the act. In the final battle of the war, he is shot by Taskmaster while shielding Sue Storm, saving her life. With Reed on the brink of death, a furious Sue crushes Taskmaster with a telekinetic field. Reed survives, however, and Sue returns to him in the aftermath of the battle, having been granted amnesty. Seeking to repair the damage done to their marriage as a result of the war, Sue and Reed take time off from the Fantastic Four, but ask Storm and the Black Panther to take their places in the meantime.
World War Hulk
In the midst of Civil War, Reed Richards learns from a brief conversation with Mastermind Excello that the Hulk is not on the planet where the Illuminati attempted to exile him. After a conversation about the good the Hulk has done for humanity, Reed tells Iron Man what happened to the Hulk and also states that the Hulk has friends, and "may God help us if they find him before we do".
In the aftermath of Civil War, Reed Richards keeps tabs on Mastermind Excello, and when the She-Hulk learns about the Hulk's exile, Reed sends out Doc Samson to confront her upon seeing her meet with Excello.
In World War Hulk #1, Reed is shown with Tony Stark as Iron Man. Both men try to convince the Sentry to fight the Hulk, thinking that the calm aura the Sentry produces may be able to stop the Hulk's rampage. In World War Hulk #2, with the aid of the rest of the Fantastic Four, Storm, and the Black Panther, Reed is able to create a machine that projects an image of the Sentry and recreates the hero's aura of calm. He uses the machine on the Hulk just as the Hulk is about to defeat the Thing, but the Hulk knows it is not the real Sentry and destroys the machine. In a last line of defense, Sue Storm tries to protect her husband by encapsulating the Hulk in an energy field while pleading with him to spare Reed. The Hulk does not listen and easily exerts enough strength against her force field to make Susan collapse with a nosebleed from the stress, leaving Reed to the Hulk's wrath.
Reed is later seen among the various heroes the Hulk has defeated so far, in the depths of the Hulk's gladiatorial arena in Madison Square Garden. He and all of the heroes are implanted with "obedience disks" that suppress their powers, the same kind used on the Hulk during his time on Sakaar. The Hulk orders Iron Man and Mister Fantastic to face off in battle. After Richards gains the upper hand on Stark, the Hulk gives the thumbs down, instructing Richards to kill Stark. However, the Hulk spares their lives, showing them that he has proved his point to the world. They survive the encounter through the Hulk's mercy and the timely intervention of the Sentry. The Illuminati are partially cleared of responsibility for Sakaar's destruction when Miek, one of the Hulk's alien allies, admits he saw the Red King's forces breach the ship's warp core. Miek kept quiet to initiate what he felt was the Hulk's destiny as the "Worldbreaker".
Secret Invasion
Mister Fantastic is at the Illuminati's meeting discussing the threat of the Skrulls when the Black Bolt present is revealed to be a Skrull in disguise.
Mister Fantastic and Hank Pym autopsy the body of the Skrull who impersonated Elektra (with Reed pretending to be seeing the corpse for the first time, thus keeping the secret of the Illuminati). After completing the dissection, Reed claims to have discovered the secret of how the Skrulls have been able to conceal their identities. Before he can elaborate, "Hank Pym" reveals himself to be a Skrull and shoots Richards with a weapon that violently leaves his elastic body in a state of chaotic disarray similar to Silly String. In Secret Invasion: Fantastic Four #1, it is shown that a Skrull assumed Reed's form in order to successfully ambush and capture Sue Richards, to facilitate an attack on the Fantastic Four's headquarters.
A conscious but still mostly shapeless Reed Richards is seen being forcibly stretched in all directions to cover the floor of a medium-sized arena aboard a Skrull ship, with all of the seats filled by Skrull onlookers. He is freed by Abigail Brand, and then travels with her to the Savage Land and uses a device to reveal all the Skrull invaders present. After helping the Avengers to defeat the imposters and return to New York, Reed aids the heroes and villains of Earth in their battle against the Skrulls.
Dark Reign
Mister Fantastic aids the New Avengers in the search for the daughter of Luke Cage and Jessica Jones. He and the rest of the Fantastic Four are magically reduced to television signals by the chaotic activities of the Elder God Chthon, to prevent them from intervening, though after Chthon is defeated, he and the other three are turned back.
Norman Osborn sends H.A.M.M.E.R. agents to shut down the Fantastic Four and capture them, expelling the team from the Initiative and stripping them of all their rights. The story takes place a week after the Secret Invasion, which has caused Richards to seriously reevaluate his own life and the life he has built for his family, resulting in turbulent internal conflicts. Taking a long, hard look at that life, Richards is prompted to construct a machine capable of bending reality itself. Agents from H.A.M.M.E.R., sent by Osborn (a man well aware that Reed is one of the few whose intellect exceeds even his own and who thus poses a great threat to his carefully constructed shadow world), arrive just as Reed activates the machine; it interfaces with the Baxter Building's power supply, and the resulting energy fluctuation sends Sue, Ben, and Johnny back to the prehistoric era, fraught with dangers that manifest in the form of the First Celestial Host. While the three find themselves in a superhero Hyborian-age civil war, Franklin and Valeria are the only ones available to confront the agents Osborn sent. Meanwhile, Reed searches for answers that can only be found in alternate timelines, studying parallel Earths to see whether any of them found a peaceful solution to the Civil War that resulted from the Superhuman Registration Act. Reed peers into different worlds, some more bizarre than his own, to see what they did differently, offering an insightful look at where the Marvel Universe has gone in the past "year" (in Marvel time) and at who, if anyone, was at fault. Reed meets with the other five Illuminati to handle the problem. Ultimately, after seeing about a million alternate Earths, he concludes that there is no way for the Civil War to be resolved, but that he, as the cleverest man in the Marvel Universe, has a responsibility to put things right. Before turning the machine off, he searches for other realities that have machines like the one he is using; the machine locates them, and the people he finds tell him that they can help him.
Reed Richards reappeared in The Mighty Avengers #24, refusing to give Hank Pym an invention previously left in the care of the Fantastic Four following Bill Foster's untimely death, causing the Wasp to lead the Mighty Avengers to retrieve the device. In the next issue, Pym's attempt to knock down the Baxter Building (likely due to Loki's manipulations) sows further displeasure and leads to a direct conflict between the Fantastic Four and Loki's Mighty Avengers, stretching across to The Mighty Avengers #26. This conflict eventually ends in a new base for the Mighty Avengers and a shockingly disturbing alliance of the Dark Reign; the next issue of The Mighty Avengers introduces a new character, a ruler known as the Unspoken, one more powerful than any other in the universe, to the point that he had to be erased from history, and shows how his return will impact the planet and the cosmos beyond in the War of Kings.
After the Secret Invasion and some time into the Dark Reign, during the War of Kings, Blastaar is about to open the portal to Earth when, on the other side at Camp Hammond, Star-Lord and the Guardians tell Reed Richards and the others never to open the portal, or they will face the wrath of Blastaar. Mister Fantastic agrees, says they will not open the portal, and then asks who they are. Star-Lord replies, "We're the Guardians of the Galaxy" before leaving.
Future Foundation
Tragedy strikes the team when Johnny is apparently killed in the Negative Zone. As the Fantastic Four recover from Johnny's apparent death, Mister Fantastic grows disillusioned at how scientists see science and its applications. Therefore, he creates a new team, the Future Foundation, to help create a better future for mankind. However, the team's initial tasks are complicated when they are forced to deal with the 'Council of Reeds', a group of alternate versions of Reed Richards who lack his morality or family that became trapped in the FF's reality after an accident with Reed's dimensional portal.
The Quiet Man
After Johnny is returned and the team resumes using the Fantastic Four name, they are systematically attacked by the mysterious Quiet Man, who reveals that he has been behind many of the villain attacks the FF have faced over the years. Now stepping forward to take a more active role, he shuts down Johnny's powers, frames the Thing for murder, has Reed and Sue Richards declared unfit guardians of their children, and then abducts Reed with the intention of framing him for a series of attacks committed by the heroes created in Franklin's pocket universe. However, Reed defeats the Quiet Man's plan to proclaim himself the hero who stopped Reed's 'attack' on the world when he realizes that the Psycho-Man, one of the Quiet Man's allies, intends to betray him, forcing Reed and the Quiet Man to work together to deactivate the Psycho-Man's equipment.
Secret Wars
During the Secret Wars storyline, Reed is one of the few survivors of Earth-616 to come through with his memories intact, hiding in a 'life raft' with various other heroes and eventually released by Doctor Strange. Learning that the new 'Battleworld' is now ruled by Doctor Doom, who has absorbed the power of various Beyonders and Molecule Men to become a virtual god, Reed and the other heroes disperse across Battleworld to come up with a plan.
Reed eventually starts working with his counterpart from the Ultimate Marvel universe, who calls himself the Maker. The two find themselves disagreeing on their methods, with Reed preferring to find a way to save the world while the Maker is more focused on killing their enemy, Reed countering his other self's accusations that he is weak by musing that he simply has things to care for outside of himself. In the final confrontation, Reed and the Maker discover that the source of Doom's power is the Molecule Man (Owen Reece), but although the Maker attempts to betray Reed by forcibly devolving him into an ape, Reece reverses the Maker's attack and turns him into pepperoni pizza. When Doom comes down to confront Reed, Reece temporarily deprives Doom of his new god-level powers, and Reed challenges him, proclaiming Doom to be nothing more than a coward for taking control of what was left of existence rather than trying to reshape it. Inspired by Reed's words, Reece transfers Doom's power to Reed, who recreates the former Earth-616 before he, Susan, their children, and the rest of the Future Foundation set out to use Franklin's and Reed's powers to rebuild the multiverse.
Return to Prime Earth
Mister Fantastic, the Invisible Woman, and the Future Foundation were later confronted by the Griever at the End of All Things after Reed had expended most of his Beyonder-based energies and Franklin was left with only limited reality-warping abilities. Because the Griever had caused the collapse of 100 worlds, the Future Foundation had to make a stand even after Molecule Man was killed. When Mister Fantastic tricked the Griever into bringing their Fantastic Four teammates to them, the Thing and the Human Torch were reunited with the family, and every other superhero who had been part of the Fantastic Four showed up as well. Faced with these numbers, the Griever was eventually driven back when most of her equipment was destroyed, forcing her to return to her universe or be trapped in this one, while the heroes repaired the damaged equipment to create a new teleporter to send them all home.
Powers and abilities
Reed Richards gained the power of elasticity from irradiation by cosmic rays. He has the ability to convert his entire body into a highly malleable state at will, allowing him to stretch, deform, and reform himself into virtually any shape. Richards has been observed as being able to utilize his stretching form in a variety of offensive and defensive manners, such as compressing himself into a ball and ricocheting into enemies, flattening himself into a trampoline or a parachute to catch a falling teammate, or inflating himself into a life raft to aid in a water rescue. He can willfully reduce his body's cohesion until he reaches a fluid state, which can flow through minute openings or into tiny pipes. Reed is also able to shape his hands into hammer and mace style weapons, and concentrate his mass into his fists to increase their density and effectiveness as weapons.
Having an elastic-like texture gives Reed protection from damage. He can be punched with incredible force, flattened, or squished, and still re-form or survive without significant injury.
Reed's control over his shape has been developed to such a point that he has been able to radically alter his facial features and his entire physical form to pass among human and non-humans unnoticed and unrecognized. He has even molded himself into the shapes of inanimate objects, such as a mailbox. He rarely uses his powers in such an undignified fashion. However, he appears to have no qualms about stretching out his ears, taking the shape of a dinosaur, becoming a human trampoline, or inflating his hands into pool toys to entertain his children.
The most extreme demonstration of Reed's powers came when he was able to increase his size and mass to Thing-like proportions, which also increased his physical strength.
Assuming and maintaining these shapes used to require extreme effort. Due to years of mental and physical training, Reed can now perform these feats at will. His powers (and those of the Red Ghost) were also increased when they were exposed to a second dose of cosmic rays. Maintaining his body's normal human shape requires a certain degree of ongoing concentration; when Reed is relaxed and distracted, his body appears to "melt in slow motion", according to Susan Storm. Being forcefully stretched to extremes during a short span of time (by a taffy-puller-type machine or a strong character, for example) causes Reed to suffer intense pain and the temporary loss of his natural elastic resiliency. He possesses other weaknesses too: a great shock to his body (for example, one inflicted by Doctor Doom) can make it so rubbery that he loses motor control, and enough energy can reduce him to a liquid state in which he is immobile.
Mister Fantastic's strength comes more from the powers of his mind than the powers of his body; indeed, he once told Spider-Man that he considers his stretching powers to be expendable compared to his intellect. Some stories have implied that his intellect may have been boosted by his powers, as he once visited an alternate universe where his other self had never been exposed to cosmic rays and was notably less intelligent than him, though purely human versions of Reed who are as intelligent as or even more intelligent than he is have been shown, particularly among the Council of Reeds. Tony Stark has commented that Reed's ability to make his brain physically larger (via his elastic powers) gives him an advantage, though this seems to be meant more as a joke. That said, scenes from the same issue show Reed "inflating" his skull as he calculates the power output of Tony's Repulsor-battery heart implant.
For virtually his entire publication history, Mister Fantastic has been depicted as one of the most intelligent characters in the Marvel Universe. A visionary theoretician and an inspired machine smith, he has made breakthroughs in such varied fields as space travel, time travel, extra-dimensional travel, biochemistry, robotics, computers, synthetic polymers, communications, mutations, transportation, holography, energy generation, and spectral analysis, among others. However, he is never afraid to admit when others have greater expertise in certain fields than him, such as recognizing that Doctor Octopus possesses greater knowledge of radiation, that Hank Pym is a superior biochemist, or that Spider-Man can think of a problem from a biology perspective where he would be unable to do so, since his expertise is in physics. Richards has earned Ph.D.s in Mathematics, Physics, and Engineering. His patents are so valuable that he is able to bankroll the Fantastic Four, Inc., without any undue financial stress. Mind control is rarely effective on him and when it does work, it wears off sooner than it would on a normal person, due to what he describes as an "elastic consciousness". However, this intelligence can be a handicap in his dealings with magic, as it required an intense lesson from Doctor Strange and facing the threat of his son being trapped in Hell for Reed to fully acknowledge that the key to using magic was to accept that he would never understand it.
Richards is also an accomplished fighter due to his years of combat experience with the Fantastic Four, and has earned a black belt in judo.
Following the Battleworld crisis, Reed has acquired the powers of the Beyonders that were once wielded by Doom, but he relies on his son Franklin's creativity and new powers to help him recreate the multiverse after the incursions destroyed the other parallel universes.
Equipment and technology
Although the Fantastic Four have numerous devices, crafts, and weapons, there are some items that Reed Richards carries with him at all times.
Fantastiflare: Launches a fiery "4" into the sky that is used during combat situations to let other members of the group know their location.
Uniform Computer: Like all the Fantastic Four's costumes and the rest of Reed's wardrobe, his suit is made of "unstable molecules". This means that the suit is attuned to his powers, which is why Johnny's costume doesn't burn when he "flames on", Sue's costume turns invisible when she does, and Reed's costume stretches with him. The costume also insulates them from electrical assaults. In addition, the team's uniforms are also, in essence, wearable computers. Their costumes have a complete data processing and telemetry system woven into the material of the uniform on a molecular level. This forms a network with the entire team, providing a constant, real-time uplink of everyone's physical condition as well as their location and current situation. The suit is capable of displaying data and touch-pad controls on the gauntlets. Its sensors can track all of the team's uniforms and provide a picture of their immediate vicinity. The suit has an intricate scanner system which can detect things around the wearer, from how many people are in the next room to what dimension or planet they are on. Reed can also up-link the bodysuit to any computer by stretching his fingertips to filament size and plugging them into an I/O data-port. With this, Reed can establish a fairly comprehensive database of any computer's cybernetic protocols and encryption algorithms.
Other versions
Age of Apocalypse
In the alternate reality known as the Age of Apocalypse, Richards never received superpowers as he was never bombarded with cosmic radiation in space. Instead, he attempted to evacuate a large group of humans from Manhattan during Apocalypse's regime. Along with Ben Grimm as the pilot and his friends Johnny and Susan Storm as crew, Richards used one of his prototype rockets to fly off the island. Unfortunately, a mutant sabotaged the launch and both Reed and Johnny sacrificed themselves to let the others blast off safely.
Following the rise of Weapon Omega, it is revealed that when Apocalypse came into power, Reed became the world's foremost authority on the Celestials and collected all the information he could gather about these cosmic beings in several journals. Apocalypse, known to fear Reed Richards' knowledge, had him targeted and created a special taskforce to locate the journals; while the taskforce succeeded in killing Reed, it was unable to find the journals, which eventually came into Victor von Doom's possession.
Amalgam Comics
Amalgam Comics is a 1997-98 shared imprint of DC Comics and Marvel, which features composites of characters from the two publishers. Two alternate versions of Reed Richards appear in this series.
The one-shot issue Challengers of the Fantastic #1 features Reed "Prof" Richards (a composite of Marvel's Reed Richards and DC Comics' Prof Haley), a non-superpowered scientist and leader of the eponymous team of adventurers.
In Spider-Boy Team-Up #1, Elastic Lad makes a cameo appearance as a member of the Legion of Galactic Guardians 2099 (a composite of DC's Legion of Super-Heroes and Marvel's Guardians of the Galaxy and "2099" imprint). Elastic Lad is a composite of Richards and Jimmy Olsen's Elastic Lad character.
Bullet Points
In Bullet Points, Dr. Reed Richards is drafted by the government to act as technical support to Steve Rogers, who in this reality is Iron Man. Along with Sue, Ben and Johnny, he later attempts the rocket flight that in the mainstream continuity saw the creation of the Fantastic Four, but the flight is sabotaged and the rocket crashes, killing everyone aboard except Reed. He thus never develops superpowers, and following the tragedy, he accepts the position as Director of S.H.I.E.L.D. Having lost his eye in the rocket crash, Reed wears an eyepatch, giving him a strong resemblance to Nick Fury.
Council of Reeds
The Interdimensional Council of Reeds first appeared in Fantastic Four #570 (Oct. 2009). The council is composed of multiple versions of Reed Richards from alternate universes, each with different powers, intellects, and abilities. Reeds join the council when they are able to invent a device (called "the Bridge") that allows them to cross into the nothingness between realities. The leaders of the council are the three Reeds that have acquired their reality's Infinity Gauntlet. The 616 Reed discovers that the other Reeds have one thing in common: each of them grew up without their father Nathaniel Richards, whose influence made the 616 Reed a more compassionate man. Reed declines membership to the council after realizing he would have to sacrifice his family ties to join. Nearly all the Council members are killed when the mad Celestials of Universe-4280 gain entry to the Council headquarters and attack the Reeds.
Due to an accident caused by Valeria Richards, four Reeds gain access to the Earth-616 reality. 616-Reed is forced to assemble a team of his old enemies - including Doctor Doom, the Wizard, and the Mad Thinker - to try to outthink his alternate selves before they destroy their world.
Counter Earth
The Counter-Earth version of Reed Richards is from a world created by the High Evolutionary; Counter-Earth originally existed within the reality of Earth-616 and is thus, technically, not an alternate Earth. His exposure to cosmic rays gives him the ability to transform into a savage purple-skinned behemoth called the Brute. The Brute makes his way to Earth, where he traps Mister Fantastic in the Negative Zone and replaces him. He manages to trap the Human Torch and the Thing shortly thereafter but is found out by the Invisible Woman, who rescues her teammates and leaves the Brute trapped in their place. The Brute later joins the Frightful Four and assists them in fighting the Fantastic Four. He first appeared in Marvel Premiere #2 (May 1972).
Dark Raider
An alternate Reed Richards from Earth-944, he first appeared in Fantastic Four #387 (April 1994). He is driven mad when he fails to save his reality's Earth from Galactus. Taking the identity of the Dark Raider, he travels from reality to reality on a quest to destroy every possible version of himself. The Fantastic Four first encounter him when they travel to an alternate past and see younger versions of themselves die at his hands. When the Dark Raider comes to the Fantastic Four's reality, he attempts to activate the Ultimate Nullifier but is apparently destroyed by Uatu. This appearance of Uatu is later revealed to be Aron, the Rogue Watcher, who had simply teleported the Raider away. The Dark Raider returns, and is finally killed by the Invisible Woman in the Negative Zone.
Earth-A
In this reality, only Reed and Ben Grimm go up in the experimental spacecraft. Reed is transformed by cosmic radiation into The Thing, while Ben gains the stretching powers of Mr. Fantastic and the flaming powers of the Human Torch.
Exiles
An Earth dominated by hedonistic Skrulls since the late nineteenth century is attacked by Galactus. This Reed Richards is portrayed as an inventive genius with nothing to confirm that he possesses the powers of his 616 counterpart. He leads the super-human effort to drive off Galactus and save the planet. He becomes one of the caretakers of Thunderbird, a dimension-hopping hero who had been severely injured in the battle.
Heroes Reborn (2021)
In the 2021 "Heroes Reborn" reality, Reed Richards is a scientist at S.H.I.E.L.D. Labs. He alerts Hyperion that the inmates of the Negative Zone have escaped.
Marvel 1602
Set in the 17th century Marvel 1602 universe, Reed (apparently called Sir Richard Reed, although he is often addressed as "Sir Reed" or "Master Richards") is the leader of 'The Four from the Fantastick', and his pliability is compared to water. Sharing the genius of his counterpart, he has devised uses for electrical force, categorized the sciences, and speculated as to whether light has a speed.
According to Peter David, who is writing a Marvel 1602 miniseries about the Four, Gaiman describes Sir Richard as even more pedantic than the mainstream Mr. Fantastic. During a trip to Atlantis, Richard Reed had trouble accepting the idea that the Atlanteans had a connection to Poseidon or their brief encounter with the Watcher, until Susan helped him realise that just because such things could not be understood by their present standards did not mean that humanity could not come to understand them later.
Marvel Apes
The Reed of this reality is an intelligent ape given stretching powers by exposure to cosmic radiation, as in the mainstream Marvel universe. He tries to find a way to send Marty "The Gibbon" Blank back to his home reality. He is also one of the few that realizes Captain America is really a disguised Baron Blood. Reed is impaled and killed by Blood in an attempt to stop the vampire from trying to invade Earth-616 for a new source of blood. The rest of the Marvel Ape-verse heroes are led to believe Marty is responsible for Reed's death and pursue him until discovering the truth.
Marvel Mangaverse
In the Marvel Mangaverse comics, Reed Richards leads the Megascale Metatalent Response Team Fantastic Four as a commander, not a field operative like Jonatha, Sioux, and Benjamin. In Mangaverse, Richards has been re-imagined as a long-haired intellectual with a laid-back attitude. The other members of the team often describe him as a "smartass". His team used power packs in order to manifest their talents on mecha-sized levels so that they may fight the Godzilla-sized monsters from alien cultures that attack Earth for performing experiments which endanger all of reality. Along with assigning battle tactics, Richards okayed the amount of power his team was allowed to use. He has stretching talents which he considered "near useless" except for stretching his neurons, allowing him to brainstorm new ideas. In the New Mangaverse, Richards (along with the rest of the Fantastic Four with the exception of the Human Torch) was murdered by ninja assassins.
Marvel Zombies
This version of Reed Richards deliberately infects his team and himself with the zombie virus after suffering a mental breakdown due to the murder of his children at the hands of a zombified She-Hulk. Regarding the zombies as a superior form of life, Reed sets out "to spread the Gospel", a twisted plan to start turning the survivors of the Marvel Universe into zombies. Reed later assists his fellow zombies in tracking down several Latverian human survivors; they escape to alternate dimensions but Doctor Doom does not. Using a dimensional crossing device created by Tony before his infection, Reed makes contact with his Ultimate counterpart. The Zombie FF attempt to escape into the Ultimate Marvel universe, but the zombie Reed is neutralized when the Ultimate Invisible Girl destroys a chunk of his brain, allowing the Ultimate team to contain their counterparts. When the Zombie FF try to escape after a brief period of imprisonment, Ultimate Reed Richards (in Doctor Doom's body) destroys them by covering them with maggots, and their corpses are returned to their universe. It is suggested in Marvel Zombies: Evil Evolution that Richards was inadvertently responsible for allowing the zombie virus to infect this reality through the construction of a device allowing access to alternate dimensions; specifically, he was responsible for bringing in the zombie Sentry, the only zombie left.
MC2
In the MC2 continuity, Reed Richards designs a small robot into which he claims to have transferred his brain after his body was scarred in an accident; in reality, Richards' injuries are minor, and he controls the robot remotely from an outpost in the Negative Zone. This robot, called Big Brain, is a member of the Fantastic Five, and is capable of projecting force fields and can hover or fly. When Reed solves the problem keeping Susan in stasis in the Negative Zone, the mental block preventing his scars from healing dissolves, and his appearance returns to normal.
In an alternate universe in the MC2 line, the Red Skull conquered the world and killed Captain America. The Skull is later killed by Doctor Doom, for whom Reed serves as an advisor. After Doom and Crimson Curse fall into a portal, Richards turns on fellow advisor Helmut Zemo. Later, Reed becomes a mad scientist, aided by evil versions of Ben Grimm, Franklin Richards and Peter Parker. They are defeated by Spider-Girl, Thunderstrike and Stinger.
When Dr. Doom returns, Reed is forced into a mental duel with the villain, which ends in a tie that banishes both their minds to the "Crossroads of Infinity". He is currently in Latveria, under the care of his wife.
Mutant X
In the alternate universe visited by Alex Summers, a.k.a. Havok, the Fantastic Four have no powers, though Reed still has his genius-level intelligence. Reed generally wears a battle suit with two extra arms. In his first appearance, he is attempting to build a machine that will allow the Goblin Queen to summon demons from another dimension. In the final issue of the series, Reed joins a makeshift team of villains and heroes in order to stop the Goblin Queen's threat against the entire multiverse. He is interrupted in his work by Dracula, who slices open his throat, killing him.
Spider-Gwen
In the universe featuring Gwen Stacy as Spider-Woman, Reed is an African-American child genius who shares his inventions with other kids around his age. He is asked for help by Jessica Drew of the 616 universe in trying to get her home, and reveals that he has encountered other dimensional hoppers before her. He aids the Spider-Women by altering their weaponry for their final battle against his earth's Cindy Moon and later serves as an ally to Spider-Gwen.
Spider-Man: Life Story
Spider-Man: Life Story features an alternate continuity where the characters naturally age after Peter Parker becomes Spider-Man in 1962. In the 1970s, Peter Parker and Otto Octavius began working for Reed at Future Foundations. He and Peter got into several arguments over the Vietnam War and how superheroes should serve humanity, eventually leading Peter to quit. In the 1980s, Peter and Reed made amends while participating in the Secret Wars, though Peter laments that Richards is a ghost of his past self, having pushed away all his friends and loved ones out of a misplaced sense of responsibility for Doctor Doom. In this continuity, Sue left Reed to be with Namor.
Spider-Man Unlimited
In the comic book based on the Spider-Man Unlimited animated series, Peter is assigned by the Daily Byte to investigate the Counter-Earth version of Reed Richards, as Richards is suspected of knowing about a mysterious creature called the Brute. After fighting the Brute as Spider-Man, it is revealed that the Brute is actually Reed Richards himself, who is helping the rebels fight the Beastials, while Reed is actually a spy. He is also assisted in this mission by his friend, Ben Grimm, who gathers data held by the High Evolutionary. Reed reveals that after a test flight similar to the one that gave the mainstream Fantastic Four their powers, the cosmic rays transformed Reed into the Brute, leaving Grimm unaffected, Johnny Storm dead, and Susan Storm in a coma.
Ultimate Marvel
Maker, previously known as Ultimate Reed Richards, is the Ultimate Marvel version of Mister Fantastic. The origin of Ultimate Reed's powers is different from the original Reed's: he gains the superpower to stretch when he is engulfed in a malfunctioning teleporter experiment. He founds the Ultimate Fantastic Four, which explores the N-Zone and fights various villains. After the Fantastic Four disbands due to the damage caused by "Ultimatum", Reed begins to change his worldview, eventually calling himself Maker and becoming a nemesis of the Ultimates. When Galactus arrives in the Ultimate universe due to a temporal distortion, the Ultimates are forced to approach the Maker for help. Maker travels to Earth-616 to access his counterpart's files. Using the information gained, Reed defeats Galactus by sending him to the Negative Zone.
After the events of Secret Wars, Maker moves to the main Marvel universe, also known as Earth-616, where he continues his nefarious plans.
What If?
Marvel's What If? comic book series featured several alternate versions of Reed Richards and the Fantastic Four.
Spider-Man in the FF
On the world designated Earth-772, in What If?, Spider-Man joined the Fantastic Four, but his presence left Sue feeling increasingly sidelined in favour of the four male members of the team, leading her to leave the team and marry the Sub-Mariner. Although Reed was briefly driven insane and declared war on Atlantis, he eventually recovered and the two apparently reconciled, and the resulting 'Fantastic Five' reformed once again in time to confront Annihilus in the Negative Zone and help Susan give birth.
Vol. I #6
In What If? #6 (Dec. 1977), after the team are exposed to cosmic rays, they develop powers based on their personalities. Reed Richards' vast intellect causes him to become a giant floating brain, and he takes to calling himself "Big Brain". Reed's brain is destroyed during a battle with Doctor Doom, but not before he manages to transfer his mind into Doom's body. This version of the Fantastic Four reappeared in the Volume II story arc 'Timestorm', summoned by the Watcher to persuade the man who would become Kang/Immortus not to become a threat. Richards and the other members of his Fantastic Four are killed by Immortus.
Vol. I #11
In What If? #11 (May 1978), an alternate universe is shown wherein the original 1960s staff of Marvel Comics are exposed to cosmic rays. Stan Lee gains the powers of Mister Fantastic, and is described as slowly gaining Reed's scientific intellect as well. Lee continues to write and edit Marvel Comics by day, but fights evil along with his fellow members of the Fantastic Four. The story was written by Lee, and drawn and co-written by Jack Kirby, who in this reality became the Thing.
Vol. II #11
In What If? vol. 2 #11 (March 1990), the origins of the Fantastic Four are retold, showing how the heroes' lives would have changed if all four had gained the same powers as one individual member of the original Fantastic Four.
Fire Powers: In this alternate history, the cosmic rays give the four the powers of the Human Torch. They decide to use their powers for good, and become the Fantastic Four. They battle such menaces as the Mole Man and the alien race Skrulls. During a battle with the mystic Miracle Man, the villain brings to life a statue advertising a monster movie called "The Monster from Mars." When the heroes set fire to the statue, the fire spreads to a local apartment building, killing young Angelica Parsons. Feeling responsible for Parsons's death, the team disbands, with Reed devoting his life to science.
Elastic powers: In this alternate history, Reed, Sue, Johnny, and Ben develop the ability to stretch. Deciding not to become superheroes, Ben and Sue discover their love for one another and settle down to raise a family, never using their stretching powers again. Reed devotes his life to science, while Johnny becomes the celebrity Mister Fantastic.
Monstrous forms: The cosmic rays in this alternate history transform the four into monstrous creatures, with Reed taking on a purple-skinned form similar to the Brute. When the public reacts with fright at their appearances, Reed convinces the others to leave civilization and live on Monster Isle.
Invisibility powers: In the final What If? story, Ben Grimm, Reed Richards, Johnny Storm, and Sue Storm gain different aspects of the mainstream Sue Storm's power. Reed can project invisibility onto other objects. Reed and his three associates join Colonel Nick Fury's new C.I.A. unit, codenamed S.H.I.E.L.D., where he worked as Head of Laboratories. The story retells their initial encounter with Doctor Doom under these circumstances.
In other media
Television
Mister Fantastic appeared in the 1967 Fantastic Four TV series, voiced by Gerald Mohr.
Mister Fantastic also led the team in the 1978 Fantastic Four TV series, voiced by Mike Road.
Mister Fantastic appeared in the 1994 animated series voiced by Beau Weaver. He and Sue Storm are already married before they get their powers.
Beau Weaver reprises his role as Mister Fantastic in The Incredible Hulk episode "Fantastic Fortitude". He and the rest of the Fantastic Four take their vacation prior to Hulk, She-Hulk, and Thing fighting the Leader's Gamma Soldiers.
Mister Fantastic appears toward the end of the 1994 Spider-Man TV series, voiced by Cam Clarke. He and the Fantastic Four are among the heroes Spider-Man summons to a planet to help him against the villains the Beyonder brought there. Mister Fantastic helps to awaken the dormant part of Curt Connors' mind within the Lizard.
Mister Fantastic is in the 2006 Fantastic Four TV series voiced by Hiro Kanagawa.
Mister Fantastic is featured in The Super Hero Squad Show voiced by James Marsters.
Mister Fantastic appeared in The Avengers: Earth's Mightiest Heroes, voiced by Dee Bradley Baker. He made a brief cameo appearance in the episode "The Man Who Stole Tomorrow". He reappears in the episode "The Private War of Doctor Doom" where the Avengers and the Fantastic Four team up to battle Doctor Doom and his Doombots.
Mister Fantastic appears in the Hulk and the Agents of S.M.A.S.H. episode "Monsters No More", voiced by Robin Atkin Downes. He teams up with the Agents of S.M.A.S.H. to stop the Tribbitite invasion.
Film
The unreleased 1994 film The Fantastic Four featured Alex Hyde-White as Mister Fantastic.
Mister Fantastic was played by actor Ioan Gruffudd in the 2005 film Fantastic Four and its 2007 sequel Fantastic Four: Rise of the Silver Surfer, both directed by Tim Story. In the film continuity, Reed Richards is initially a brilliant yet timid and pedantic scientist who, despite his genius-level understanding of the sciences and being (as he is described in the second film) "one of the greatest minds of the 21st century", is fiscally incompetent and nearing bankruptcy, forcing him to seek investment from Victor von Doom (in the film continuity a rival scientist and successful businessman) to further his projects.
By the events of Fantastic Four: Rise of the Silver Surfer, Reed, along with his teammates, is an internationally recognized superhero and celebrity. Reed's celebrity status sometimes goes to his head, as when he gives in to the seduction of three women he meets at a bar. Reed and Sue are now engaged, although Reed has trouble keeping himself from being distracted from his imminent wedding (which is established as the fifth attempt they have made).
Mister Fantastic makes a non-speaking cameo appearance in the animated direct-to-video film Planet Hulk. His face was shadowed because the rights to the character were held by 20th Century Fox. He and the members of the Illuminati regretfully inform Hulk of the decisions made to ensure his removal from Earth.
Miles Teller portrayed Reed in Fantastic Four, directed by Josh Trank. At a young age, Reed Richards and Ben Grimm work on a teleporter project which catches the attention of the Baxter Foundation's director, Franklin Storm. Reed helps to create the Quantum Gate, which takes him, Ben, Johnny Storm and Victor von Doom to Planet Zero. The effects of Planet Zero give Reed the ability to stretch. Blaming himself for the incident while being held at a government facility, Reed escapes and remains incognito. After being found by the military one year later, Reed is taken to Area 57, where he is persuaded to help repair the Quantum Gate. Things get worse when Victor resurfaces and plans to use Planet Zero to reshape Earth. After he, Ben, Johnny and Susan defeat Victor, they remain together, and Reed is the one who comes up with their group name.
Video games
Mister Fantastic is a playable character in the Fantastic Four PlayStation game.
Mister Fantastic has a cameo appearance in the Spider-Man game based on the 1990s animated series for Sega Genesis and Super NES. By reaching certain levels of the game, Mr. Fantastic can be called a limited number of times for assistance.
Mister Fantastic appears as a playable character in Marvel: Ultimate Alliance voiced by David Naughton. He has special dialogue with Bruce Banner, Uatu, Black Bolt, Karnak, Crystal, Arcade, and Colonel Fury. A simulation disk has Mister Fantastic fight Bulldozer in Murderworld. Another simulation disk has Thing protect Mister Fantastic, when he's frozen by Rhino. His classic, New Marvel, original and Ultimate costumes are available.
Mister Fantastic appears in the Fantastic Four video game based on the 2005 film voiced by Ioan Gruffudd with his classic appearance voiced by Robin Atkin Downes in bonus levels.
Mister Fantastic appears in the Fantastic Four: Rise of the Silver Surfer video game based on the film voiced by Matthew Kaminsky.
Mister Fantastic appears in Marvel: Ultimate Alliance 2 voiced by Robert Clotworthy. His classic design is his default costume and his Ultimate design is his alternate costume. Since the game is based on Civil War, he is locked onto the Pro-Registration side along with Iron Man and Songbird.
Mister Fantastic is a playable character in Marvel Super Hero Squad Online.
Reed Richards is one of the four scientists Spider-Man tries to call in the 2008 video game Spider-Man: Web of Shadows, along with Tony Stark, Hank McCoy and Hank Pym. The answering machine states that the Fantastic Four are out of the galaxy.
Mister Fantastic makes a cameo in Ultimate Marvel vs. Capcom 3 in Frank West's ending. In the ending, he tells Frank about the Marvel Zombies, and that as soon as they are done consuming their own world, they will be coming to theirs. Not wanting that to happen, Frank and Mr. Fantastic team up to stop them.
Mister Fantastic appeared in the virtual pinball game Fantastic Four for Pinball FX 2 voiced by Travis Willingham.
Mister Fantastic is a playable character in the Facebook game Marvel: Avengers Alliance.
Mister Fantastic appears in Marvel Heroes as an NPC and as a playable character, voiced by Wally Wingert. However, due to legal reasons, he was removed from the game on July 1, 2017.
Mister Fantastic appears as a playable character in Lego Marvel Super Heroes, voiced by Dee Bradley Baker. While working on the mysteries of the Cosmic Bricks, Mister Fantastic and Captain America end up fighting Doctor Octopus until they are assisted by Spider-Man.
Mister Fantastic is a playable character in the mobile game Marvel: Future Fight.
Mister Fantastic is a playable character in the mobile game Marvel Puzzle Quest.
Mister Fantastic appears in the "Shadow of Doom" DLC of Marvel Ultimate Alliance 3: The Black Order, voiced again by Wally Wingert. This version resembles his current bearded appearance.
In popular culture
A parody of Mr. Fantastic is shown on the Adult Swim cartoon, The Venture Bros. The show features a character named Professor Richard Impossible (voiced by Stephen Colbert in seasons 1, 2, and "All This and Gargantua-2," Peter McCulloch in "The Terrible Secret of Turtle Bay," Christopher McCulloch in season 3, Bill Hader in season 4), who attains the same powers as Mr. Fantastic.
In the season 4 Stargate Atlantis episode "Travelers", Lt. Col. John Sheppard uses the alias Reed Richards when kidnapped.
In The Simpsons "Treehouse of Horror" episode segment titled "Stop the World, I Want to Goof Off!", there is a moment where the family is transformed to resemble members of the Fantastic Four; Bart is Mr. Fantastic. He exhibits the same ability as Stretch Dude in a previous "Treehouse of Horror" episode titled "Desperately Xeeking Xena".
Mister Fantastic appears in the Robot Chicken episode "Monstourage" voiced by Seth Green.
Norm Macdonald plays Reed Richards in a skit appearing in his comedy album Ridiculous. In it, the members of the Fantastic Four are deciding on their names; after Reed comes up with "The Thing", "The Invisible Woman," and "The Human Torch" for his teammates, he decides to call himself "Mr. Fantastic". His teammates become upset, because unlike the other names, "Mr. Fantastic" does not really describe his powers.
Mister Fantastic's genitalia, along with that of fellow Fantastic Four member The Thing, is discussed in the film Mallrats in a scene guest-starring Stan Lee.
Reception
Mister Fantastic was ranked as the 41st Greatest Comic Book Character of All Time by Wizard magazine. IGN ranked Reed Richards as the 40th Greatest Comic Book Hero of All Time, stating that "Mister Fantastic numbers among the very smartest men in the Marvel Universe" and that "Sure, his obsession with science sometimes comes at the detriment of his family life, but a kinder and nobler hero you'll rarely find." Mr. Fantastic was also listed as #50 on IGN's list of the "Top 50 Avengers".
References
External links
Mister Fantastic Bio at Marvel.com
Ultimate Mister Fantastic on the Marvel Universe Character Bio Wiki
Marc Singer on Reed Richards and the Galactus Saga.
Reed Richards at Marvel Wiki
Avengers (comics) characters
Characters created by Jack Kirby
Characters created by Stan Lee
Comics characters introduced in 1961
Fantastic Four characters
Fictional astronauts
Fictional California Institute of Technology people
Fictional characters from California
Fictional characters from Connecticut
Fictional characters from New York City
Fictional characters who can stretch themselves
Fictional explorers
Fictional Harvard University people
Fictional inventors
Fictional Massachusetts Institute of Technology people
Fictional professors
Fictional schoolteachers
Fictional scientists
Fictional theoretical physicists
Male characters in film
Marvel Comics American superheroes
Marvel Comics characters who are shapeshifters
Marvel Comics film characters
Marvel Comics male superheroes
Marvel Comics mutates |
151452 | https://en.wikipedia.org/wiki/Burnside%20problem | Burnside problem | The Burnside problem asks whether a finitely generated group in which every element has finite order must necessarily be a finite group. It was posed by William Burnside in 1902, making it one of the oldest questions in group theory, and it was influential in the development of combinatorial group theory. It is known to have a negative answer in general, as Evgeny Golod and Igor Shafarevich provided a counter-example in 1964. The problem has many refinements and variants (see bounded and restricted below) that differ in the additional conditions imposed on the orders of the group elements, some of which are still open questions.
Brief history
Initial work pointed towards the affirmative answer. For example, if a group G is finitely generated and the order of each element of G is a divisor of 4, then G is finite. Moreover, A. I. Kostrikin was able to prove in 1958 that among the finite groups with a given number of generators and a given prime exponent, there exists a largest one. This provides a solution for the restricted Burnside problem for the case of prime exponent. (Later, in 1989, Efim Zelmanov was able to solve the restricted Burnside problem for an arbitrary exponent.) Issai Schur had shown in 1911 that any finitely generated periodic group that was a subgroup of the group of invertible n × n complex matrices was finite; he used this theorem to prove the Jordan–Schur theorem.
Nevertheless, the general answer to the Burnside problem turned out to be negative. In 1964, Golod and Shafarevich constructed an infinite group of Burnside type without assuming that all elements have uniformly bounded order. In 1968, Pyotr Novikov and Sergei Adian supplied a negative solution to the bounded exponent problem for all odd exponents larger than 4381. In 1982, A. Yu. Ol'shanskii found some striking counterexamples for sufficiently large odd exponents (greater than 10^10), and supplied a considerably simpler proof based on geometric ideas.
The case of even exponents turned out to be much harder to settle. In 1992, S. V. Ivanov announced the negative solution for sufficiently large even exponents divisible by a large power of 2 (detailed proofs were published in 1994 and occupied some 300 pages). Later joint work of Ol'shanskii and Ivanov established a negative solution to an analogue of Burnside problem for hyperbolic groups, provided the exponent is sufficiently large. By contrast, when the exponent is small and different from 2, 3, 4 and 6, very little is known.
General Burnside problem
A group G is called periodic if every element has finite order; in other words, for each g in G, there exists some positive integer n such that g^n = 1. Clearly, every finite group is periodic. There exist easily defined groups, such as the Prüfer p^∞-group, which are infinite periodic groups; but the latter group cannot be finitely generated.
General Burnside problem. If G is a finitely generated, periodic group, then is G necessarily finite?
This question was answered in the negative in 1964 by Evgeny Golod and Igor Shafarevich, who gave an example of an infinite p-group that is finitely generated (see Golod–Shafarevich theorem). However, the orders of the elements of this group are not a priori bounded by a single constant.
Bounded Burnside problem
Part of the difficulty with the general Burnside problem is that the requirements of being finitely generated and periodic give very little information about the possible structure of a group. Therefore, we pose more requirements on G. Consider a periodic group G with the additional property that there exists a least integer n such that for all g in G, g^n = 1. A group with this property is said to be periodic with bounded exponent n, or just a group with exponent n. The Burnside problem for groups with bounded exponent asks:
Burnside problem I. If G is a finitely generated group with exponent n, is G necessarily finite?
It turns out that this problem can be restated as a question about the finiteness of groups in a particular family. The free Burnside group of rank m and exponent n, denoted B(m, n), is a group with m distinguished generators x_1, ..., x_m in which the identity x^n = 1 holds for all elements x, and which is the "largest" group satisfying these requirements. More precisely, the characteristic property of B(m, n) is that, given any group G with m generators g_1, ..., g_m and of exponent n, there is a unique homomorphism from B(m, n) to G that maps the ith generator x_i of B(m, n) into the ith generator g_i of G. In the language of group presentations, the free Burnside group B(m, n) has m generators x_1, ..., x_m and the relations x^n = 1 for each word x in x_1, ..., x_m, and any group G with m generators of exponent n is obtained from it by imposing additional relations. The existence of the free Burnside group and its uniqueness up to an isomorphism are established by standard techniques of group theory. Thus if G is any finitely generated group of exponent n, then G is a homomorphic image of B(m, n), where m is the number of generators of G. The Burnside problem can now be restated as follows:
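In symbols (LaTeX notation; a standard formulation rather than a quotation from any particular source), the presentation and the universal property just described read:

B(m,n) = \bigl\langle\, x_1, \dots, x_m \;\bigm|\; w^n = 1 \text{ for every word } w \text{ in } x_1, \dots, x_m \,\bigr\rangle,

\text{and for every } G = \langle g_1, \dots, g_m \rangle \text{ of exponent } n \text{ there is a unique } \varphi \colon B(m,n) \to G \text{ with } \varphi(x_i) = g_i.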
Burnside problem II. For which positive integers m, n is the free Burnside group B(m, n) finite?
The full solution to Burnside problem in this form is not known. Burnside considered some easy cases in his original paper:
B(1, n) is the cyclic group of order n.
B(m, 2) is the direct product of m copies of the cyclic group of order 2 and hence finite.
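The second of these admits a one-line verification worth recording here (in LaTeX notation): exponent 2 forces commutativity, so B(m, 2) is generated by m commuting elements of order at most 2 and is therefore (\mathbb{Z}/2\mathbb{Z})^m, of order 2^m:

ab = a\,(ab)^2\,b = (a^2)\,ba\,(b^2) = ba, \qquad \text{since } a^2 = b^2 = (ab)^2 = 1.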
The following additional results are known (Burnside, Sanov, M. Hall):
B(m, 3), B(m, 4), and B(m, 6) are finite for all m.
The particular case of B(2, 5) remains open: it is not known whether this group is finite.
The breakthrough in solving the Burnside problem was achieved by Pyotr Novikov and Sergei Adian in 1968. Using a complicated combinatorial argument, they demonstrated that for every odd number n with n > 4381, there exist infinite, finitely generated groups of exponent n. Adian later improved the bound on the odd exponent to 665. The latest improvement to the bound on the odd exponent is 101, obtained by Adian himself in 2015. The case of even exponent turned out to be considerably more difficult. It was only in 1994 that Sergei Vasilievich Ivanov was able to prove an analogue of the Novikov–Adian theorem: for any m > 1 and an even n ≥ 2^48 with n divisible by 2^9, the group B(m, n) is infinite; together with the Novikov–Adian theorem, this implies infiniteness for all m > 1 and n ≥ 2^48. This was improved in 1996 by I. G. Lysënok to m > 1 and n ≥ 8000. Novikov–Adian, Ivanov and Lysënok established considerably more precise results on the structure of the free Burnside groups. In the case of the odd exponent, all finite subgroups of the free Burnside groups were shown to be cyclic groups. In the even exponent case, each finite subgroup is contained in a product of two dihedral groups, and there exist non-cyclic finite subgroups. Moreover, the word and conjugacy problems were shown to be effectively solvable in B(m, n) both for the cases of odd and even exponents n.
A famous class of counterexamples to the Burnside problem is formed by finitely generated non-cyclic infinite groups in which every nontrivial proper subgroup is a finite cyclic group, the so-called Tarski Monsters. First examples of such groups were constructed by A. Yu. Ol'shanskii in 1979 using geometric methods, thus affirmatively solving O. Yu. Schmidt's problem. In 1982 Ol'shanskii was able to strengthen his results to establish existence, for any sufficiently large prime number p (one can take p > 10^75), of a finitely generated infinite group in which every nontrivial proper subgroup is a cyclic group of order p. In a paper published in 1996, Ivanov and Ol'shanskii solved an analogue of the Burnside problem in an arbitrary hyperbolic group for sufficiently large exponents.
Restricted Burnside problem
Formulated in the 1930s, it asks another, related, question:
Restricted Burnside problem. If it is known that a group G with m generators and exponent n is finite, can one conclude that the order of G is bounded by some constant depending only on m and n? Equivalently, are there only finitely many finite groups with m generators of exponent n, up to isomorphism?
This variant of the Burnside problem can also be stated in terms of certain universal groups with m generators and exponent n. By basic results of group theory, the intersection of two subgroups of finite index in any group is itself a subgroup of finite index. Let M be the intersection of all subgroups of the free Burnside group B(m, n) which have finite index; then M is a normal subgroup of B(m, n) (otherwise, there exists a subgroup g^{-1}Mg with finite index containing elements not in M). One can therefore define a group B0(m, n) to be the factor group B(m, n)/M. Every finite group of exponent n with m generators is a homomorphic image of B0(m, n).
The restricted Burnside problem then asks whether B0(m, n) is a finite group.
In the case of the prime exponent p, this problem was extensively studied by A. I. Kostrikin during the 1950s, prior to the negative solution of the general Burnside problem. His solution, establishing the finiteness of B0(m, p), used a relation with deep questions about identities in Lie algebras in finite characteristic. The case of arbitrary exponent has been completely settled in the affirmative by Efim Zelmanov, who was awarded the Fields Medal in 1994 for his work.
Notes
References
Bibliography
S. I. Adian (1979) The Burnside problem and identities in groups. Translated from the Russian by John Lennox and James Wiegold. Ergebnisse der Mathematik und ihrer Grenzgebiete [Results in Mathematics and Related Areas], 95. Springer-Verlag, Berlin-New York.
A. I. Kostrikin (1990) Around Burnside. Translated from the Russian and with a preface by James Wiegold. Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], 20. Springer-Verlag, Berlin.
A. Yu. Ol'shanskii (1989) Geometry of defining relations in groups. Translated from the 1989 Russian original by Yu. A. Bakhturin (1991). Mathematics and its Applications (Soviet Series), 70. Dordrecht: Kluwer Academic Publishers Group.
External links
History of the Burnside problem at MacTutor History of Mathematics archive
Group theory
Unsolved problems in mathematics |
151713 | https://en.wikipedia.org/wiki/One-instruction%20set%20computer | One-instruction set computer | A one-instruction set computer (OISC), sometimes called an ultimate reduced instruction set computer (URISC), is an abstract machine that uses only one instruction, obviating the need for a machine language opcode. With a judicious choice for the single instruction and given infinite resources, an OISC is capable of being a universal computer in the same manner as traditional computers that have multiple instructions. OISCs have been recommended as aids in teaching computer architecture and have been used as computational models in structural computing research. The first carbon nanotube computer is a 1-bit one-instruction set computer (and has only 178 transistors).
Machine architecture
In a Turing-complete model, each memory location can store an arbitrary integer, and, depending on the model, there may be arbitrarily many locations. The instructions themselves reside in memory as a sequence of such integers.
There exists a class of universal computers with a single instruction based on bit manipulation such as bit copying or bit inversion. Since their memory model is finite, as is the memory structure used in real computers, those bit manipulation machines are equivalent to real computers rather than to Turing machines.
Currently known OISCs can be roughly separated into three broad categories:
Bit-manipulating machines
Transport triggered architecture machines
Arithmetic-based Turing-complete machines
Bit-manipulating machines
Bit-manipulating machines are the simplest class.
FlipJump
The FlipJump machine has one instruction, a;b, which flips the bit a and then jumps to b. This is the most primitive OISC, but it is still useful: with the help of its standard library it can perform mathematical and logical calculations, branching, pointer operations, and function calls.
BitBitJump
A bit copying machine, called BitBitJump, copies one bit in memory and passes the execution unconditionally to the address specified by one of the operands of the instruction. This process turns out to be capable of universal computation (i.e. being able to execute any algorithm and to interpret any other universal machine) because copying bits can conditionally modify the code that will be subsequently executed.
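To make the mechanics concrete, here is a minimal BitBitJump interpreter sketch in Python. The word width, the flat bit-array memory, and the bounded-run convention are choices of this sketch rather than part of any particular BitBitJump specification:

W = 16                            # word width chosen for this sketch
mem = [0] * (1 << W)              # bit-addressable memory: one bit per entry

def word_at(addr):
    # assemble a W-bit word from W consecutive bits, least significant first
    return sum(mem[addr + i] << i for i in range(W))

def run(ip, steps):
    # each instruction is three W-bit words: (a, b, c)
    for _ in range(steps):        # bounded run; the machine itself never halts
        a = word_at(ip)           # source bit address
        b = word_at(ip + W)       # destination bit address
        c = word_at(ip + 2 * W)   # jump target
        mem[b] = mem[a]           # the single operation: copy one bit
        ip = c                    # unconditional jump
    return ip

Self-modification arises naturally here: since code and data share the same bit array, copying bits into upcoming instruction words is precisely how a program makes decisions.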
Toga computer
Another machine, called the Toga Computer, inverts a bit and passes the execution conditionally depending on the result of inversion. The unique instruction is TOGA(a,b) which stands for TOGgle a And branch to b if the result of the toggle operation is true.
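The semantics fit in a few lines of Python; the fall-through convention for the untaken branch is an assumption of this sketch:

def toga_step(mem, a, b, next_ip):
    # TOGA(a, b): toggle the bit at address a, then branch to b
    # if the toggled bit is now 1, otherwise fall through
    mem[a] ^= 1
    return b if mem[a] else next_ip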
Multi-bit copying machine
Similar to BitBitJump, a multi-bit copying machine copies several bits at the same time. The problem of computational universality is solved in this case by keeping predefined jump tables in the memory.
Transport triggered architecture
Transport triggered architecture (TTA) is a design in which computation is a side effect of data transport. Usually, some memory registers (triggering ports) within common address space perform an assigned operation when the instruction references them. For example, in an OISC using a single memory-to-memory copy instruction, this is done by triggering ports that perform arithmetic and instruction pointer jumps when written to.
Arithmetic-based Turing-complete machines
Arithmetic-based Turing-complete machines use an arithmetic operation and a conditional jump. Like the two previous universal computers, this class is also Turing-complete. The instruction operates on integers which may also be addresses in memory.
Currently there are several known OISCs of this class, based on different arithmetic operations:
addition (addleq, add and branch if less than or equal to zero)
decrement (DJN, Decrement and branch (Jump) if Nonzero)
increment (P1eq, Plus 1 and branch if equal to another value)
subtraction (subleq, subtract and branch if less than or equal to zero)
positive subtraction when possible, else branch (Arithmetic machine)
Instruction types
Common choices for the single instruction are:
Subtract and branch if less than or equal to zero
Subtract and branch if negative
Subtract if positive else branch
Reverse subtract and skip if borrow
Move (used as part of a transport triggered architecture)
Subtract and branch if non zero (SBNZ a, b, c, destination)
Cryptoleq (heterogeneous encrypted and unencrypted computation)
Only one of these instructions is used in a given implementation. Hence, there is no need for an opcode to identify which instruction to execute; the choice of instruction is inherent in the design of the machine, and an OISC is typically named after the instruction it uses (e.g., an SBN OISC, the SUBLEQ language, etc.). Each of the above instructions can be used to construct a Turing-complete OISC.
This article presents only subtraction-based instructions among those that are not transport triggered. However, it is possible to construct Turing-complete machines using an instruction based on other arithmetic operations, e.g., addition. For example, one variation known as DLN (Decrement and jump if not zero) has only two operands and uses decrement as the base operation. For more information see Subleq derivative languages.
Subtract and branch if not equal to zero
The SBNZ a, b, c, d instruction ("subtract and branch if not equal to zero") subtracts the contents at address a from the contents at address b, stores the result at address c, and then, if the result is not 0, transfers control to address d (if the result is equal to zero, execution proceeds to the next instruction in sequence).
Subtract and branch if less than or equal to zero
The instruction ("subtract and branch if less than or equal to zero") subtracts the contents at address from the contents at address , stores the result at address , and then, if the result is not positive, transfers control to address (if the result is positive, execution proceeds to the next instruction in sequence). Pseudocode:
Instruction subleq a, b, c
Mem[b] = Mem[b] - Mem[a]
if (Mem[b] ≤ 0)
goto c
Conditional branching can be suppressed by setting the third operand equal to the address of the next instruction in sequence. If the third operand is not written, this suppression is implied.
A variant is also possible with two operands and an internal accumulator, where the accumulator is subtracted from the memory location specified by the first operand. The result is stored in both the accumulator and the memory location, and the second operand specifies the branch address:
Instruction subleq2 a, b
Mem[a] = Mem[a] - ACCUM
ACCUM = Mem[a]
if (Mem[a] ≤ 0)
goto b
Although this uses only two (instead of three) operands per instruction, correspondingly more instructions are then needed to effect various logical operations.
Synthesized instructions
It is possible to synthesize many types of higher-order instructions using only the subleq instruction.
Unconditional branch:
subleq Z, Z, c
Addition can be performed by repeated subtraction, with no conditional branching; e.g., the following instructions result in the content at location a being added to the content at location b:
subleq a, Z
subleq Z, b
subleq Z, Z
The first instruction subtracts the content at location a from the content at location Z (which is 0) and stores the result (which is the negative of the content at a) in location Z. The second instruction subtracts this result from b, storing in b this difference (which is now the sum of the contents originally at a and b); the third instruction restores the value 0 to Z.
A copy instruction can be implemented similarly; e.g., the following instructions result in the content at location b getting replaced by the content at location a, again assuming the content at location Z is maintained as 0:
subleq b, b
subleq a, Z
subleq Z, b
subleq Z, Z
Any desired arithmetic test can be built. For example, a branch-if-zero condition can be assembled from the following instructions:
subleq b, Z, L1
subleq Z, Z, OUT
L1:
subleq Z, Z
subleq Z, b, c
OUT:
...
Subleq2 can also be used to synthesize higher-order instructions, although it generally requires more operations for a given task. For example, no fewer than 10 subleq2 instructions are required to flip all the bits in a given byte:
subleq2 tmp ; tmp = 0 (tmp = temporary register)
subleq2 tmp
subleq2 one ; acc = -1
subleq2 a ; a' = a + 1
subleq2 Z ; Z = - a - 1
subleq2 tmp ; tmp = a + 1
subleq2 a ; a' = 0
subleq2 tmp ; load tmp into acc
subleq2 a ; a' = - a - 1 ( = ~a )
subleq2 Z ; set Z back to 0
Emulation
The following program (written in pseudocode) emulates the execution of a subleq-based OISC:
int memory[], program_counter, a, b, c
program_counter = 0
while (program_counter >= 0):
a = memory[program_counter]
b = memory[program_counter+1]
c = memory[program_counter+2]
if (a < 0 or b < 0):
program_counter = -1
else:
memory[b] = memory[b] - memory[a]
if (memory[b] > 0):
program_counter += 3
else:
program_counter = c
This program assumes that memory[] is indexed by nonnegative integers. Consequently, for a subleq instruction (a, b, c), the program interprets a < 0, b < 0, or an executed branch to a negative address as a halting condition. Similar interpreters written in a subleq-based language (i.e., self-interpreters, which may use self-modifying code as allowed by the nature of the instruction) can be found in the external links below.
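For readers who want to run it, the sketch below restates the interpreter in ordinary Python and exercises it on the three-instruction addition macro from the section on synthesized instructions. The memory layout (code at address 0, data at addresses 9-11) is a choice made for this example:

def run_subleq(memory):
    # direct Python version of the pseudocode above
    pc = 0
    while pc >= 0:
        a, b, c = memory[pc], memory[pc + 1], memory[pc + 2]
        if a < 0 or b < 0:
            break                          # halting convention
        memory[b] -= memory[a]
        pc = c if memory[b] <= 0 else pc + 3

A, B, Z = 9, 10, 11                        # data addresses for this example
prog = [
    A, Z, 3,     # subleq a, Z : Mem[Z] = -Mem[a]
    Z, B, 6,     # subleq Z, b : Mem[b] += Mem[a]
    Z, Z, -1,    # subleq Z, Z : restore Z to 0, then branch to -1 (halt)
    7, 35, 0,    # data: Mem[A] = 7, Mem[B] = 35, Mem[Z] = 0
]
run_subleq(prog)
print(prog[B])                             # prints 42

Note that the second instruction branches to address 6 whether or not the branch is taken, illustrating how conditional branching is suppressed by pointing the third operand at the next instruction.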
Compilation
There is a compiler called Higher Subleq, written by Oleg Mazonka, that compiles a simplified C program into subleq code.
Subtract and branch if negative
The instruction ("subtract and branch if negative"), also called , is defined similarly to :
Instruction subneg a, b, c
Mem[b] = Mem[b] - Mem[a]
if (Mem[b] < 0)
goto c
Conditional branching can be suppressed by setting the third operand equal to the address of the next instruction in sequence. If the third operand is not written, this suppression is implied.
Synthesized instructions
It is possible to synthesize many types of higher-order instructions using only the subneg instruction. For simplicity, only one synthesized instruction is shown here to illustrate the difference between subleq and subneg.
Unconditional branch:
subneg POS, Z, c
where Z and POS are locations previously set to contain 0 and a positive integer, respectively;
Unconditional branching is assured only if Z initially contains 0 (or a value less than the integer stored in POS). A follow-up instruction is required to clear Z after the branching, assuming that the content of Z must be maintained as 0.
subneg4
A variant is also possible with four operands – subneg4. The reversal of minuend and subtrahend eases implementation in hardware. The non-destructive result simplifies the synthetic instructions.
Instruction subneg s, m, r, j
(* subtrahend, minuend, result and jump addresses *)
Mem[r] = Mem[m] - Mem[s]
if (Mem[r] < 0)
goto j
Arithmetic machine
In an attempt to make the Turing machine more intuitive, Z. A. Melzak considered the task of computing with positive numbers. The machine has an infinite abacus and an infinite number of counters (pebbles, tally sticks) initially at a special location S. The machine is able to do one operation:
Take from location X as many counters as there are in location Y and transfer them to location Z and proceed to next instruction.
If this operation is not possible because there is not enough counters in Y, then leave the abacus as it is and proceed to instruction T.
This is essentially a subneg where the test is done before, rather than after, the subtraction, in order to keep all numbers positive and to mimic a human operator computing on a real-world abacus. Pseudocode:
Instruction melzak X, Y, Z, T
if (Mem[Y] < Mem[X])
goto T
Mem[Z] = Mem[Y] - Mem[X]
After giving a few example programs (multiplication, gcd, computing the n-th prime number, representation of an arbitrary number in base b, sorting in order of magnitude), Melzak shows explicitly how to simulate an arbitrary Turing machine on his arithmetic machine.
He mentions that it can easily be shown, using the elements of recursive functions, that every number calculable on the arithmetic machine is computable. A proof was given by Lambek on an equivalent two-instruction machine: X+ (increment X) and X− else T (decrement X if it is not empty, else jump to T).
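Lambek's two-instruction machine is easy to exercise directly. In the Python sketch below, the tuple-based program encoding is an invention of this example; the program adds register a into register b, using a permanently empty register for an unconditional jump:

def run_abacus(prog, regs):
    ip = 0
    while 0 <= ip < len(prog):
        op = prog[ip]
        if op[0] == "inc":                 # X+ : increment and continue
            regs[op[1]] += 1
            ip += 1
        elif regs[op[1]] > 0:              # X- else T : decrement if possible
            regs[op[1]] -= 1
            ip += 1
        else:                              # ...else jump to T
            ip = op[2]

prog = [("dec", "a", 3),                   # while a > 0:
        ("inc", "b"),                      #     b += 1
        ("dec", "zero", 0)]                # unconditional jump back to start

regs = {"a": 5, "b": 7, "zero": 0}
run_abacus(prog, regs)
print(regs["b"])                           # prints 12

The jump at the end works because "zero" is never incremented, so its decrement always fails and therefore always jumps.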
Reverse subtract and skip if borrow
In a reverse subtract and skip if borrow (RSSB) instruction, the accumulator is subtracted from the memory location and the next instruction is skipped if there was a borrow (memory location was smaller than the accumulator). The result is stored in both the accumulator and the memory location. The program counter is mapped to memory location 0. The accumulator is mapped to memory location 1.
Instruction rssb x
ACCUM = Mem[x] - ACCUM
Mem[x] = ACCUM
if (ACCUM < 0)
goto PC + 2
Example
To set x to the value of y minus z:
# First, move z to the destination location x.
RSSB temp # Three instructions required to clear acc, temp [See Note 1]
RSSB temp
RSSB temp
RSSB x # Two instructions clear acc, x, since acc is already clear
RSSB x
RSSB y # Load y into acc: no borrow
RSSB temp # Store -y into acc, temp: always borrow and skip
RSSB temp # Skipped
RSSB x # Store y into x, acc
# Second, perform the operation.
RSSB temp # Three instructions required to clear acc, temp
RSSB temp
RSSB temp
RSSB z # Load z
RSSB x # x = y - z [See Note 2]
[Note 1] If the value stored at "temp" is initially a negative value and the instruction that executed right before the first "RSSB temp" in this routine borrowed, then four "RSSB temp" instructions will be required for the routine to work.
[Note 2] If the value stored at "z" is initially a negative value then the final "RSSB x" will be skipped and thus the routine will not work.
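The routine can be checked mechanically. The emulator below follows the pseudocode above; the address assignments, the start-of-program address, and the halt-by-running-off-the-end convention are assumptions of this sketch, and the initial accumulator and temp are zero so that Note 1 does not apply:

TEMP, X, Y, Z = 16, 17, 18, 19    # data addresses chosen for this sketch
code = [TEMP, TEMP, TEMP, X, X, Y, TEMP, TEMP, X,
        TEMP, TEMP, TEMP, Z, X]   # the 14 operands of the routine above

mem = [0] * 20
mem[0] = 2                        # mem[0] is the program counter
mem[2:2 + len(code)] = code       # program begins at address 2
mem[Y], mem[Z] = 10, 3            # compute x = y - z = 10 - 3

while 2 <= mem[0] < 2 + len(code):
    x = mem[mem[0]]               # fetch the single operand
    mem[1] = mem[x] - mem[1]      # ACCUM = Mem[x] - ACCUM (mem[1] is ACCUM)
    mem[x] = mem[1]
    mem[0] += 2 if mem[1] < 0 else 1   # skip the next instruction on borrow

print(mem[X])                     # prints 7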
Transport triggered architecture
A transport triggered architecture uses only the move instruction, hence it was originally called a "move machine". This instruction moves the contents of one memory location to another memory location, combining it with the current content of the destination location:
Instruction movx a, b (also written a -> b)
OP = GetOperation(Mem[b])
Mem[b] := OP(Mem[a], Mem[b])
The operation performed is defined by the destination memory cell. Some cells are specialized for addition, some for multiplication, and so on. Memory cells are thus not simple stores; each is coupled with an arithmetic logic unit (ALU) set up to perform a single operation on the current value of the cell. Some of the cells are control flow instructions that alter the program execution with jumps, conditional execution, subroutines, if-then-else, for-loops, etc.
A commercial transport triggered architecture microcontroller called MAXQ has been produced; it hides the apparent inconvenience of an OISC by using a "transfer map" that represents all possible destinations for the move instructions.
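The following Python toy shows the flavor of computation-by-move. The port addresses and their behaviour are invented for this sketch and do not reflect the MAXQ's actual register map:

ADD_TRIGGER, SUM_PORT = 100, 101   # invented port addresses

def move(mem, a, b):
    # the machine's only operation: move mem[a] to address b,
    # where certain destination addresses trigger side effects
    if b == ADD_TRIGGER:
        mem[SUM_PORT] += mem[a]    # writing here performs an addition
    else:
        mem[b] = mem[a]            # ordinary store

mem = {0: 7, 1: 35, SUM_PORT: 0}
move(mem, 0, ADD_TRIGGER)          # "7 -> add"
move(mem, 1, ADD_TRIGGER)          # "35 -> add"
print(mem[SUM_PORT])               # prints 42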
Cryptoleq
Cryptoleq is a language consisting of one eponymous instruction; it is capable of performing general-purpose computation on encrypted programs and is a close relative of Subleq. Cryptoleq works on continuous cells of memory using direct and indirect addressing, and performs two operations, O1 and O2, on three values A, B, and C:
Instruction cryptoleq a, b, c
Mem[b] = O1(Mem[a], Mem[b])
if O2(Mem[b]) ≤ 0
IP = c
else
IP = IP + 3
where a, b and c are addressed by the instruction pointer, IP, with the value of IP addressing a, IP + 1 pointing to b and IP + 2 to c.
In Cryptoleq, operations O1 and O2 are defined as follows (with N the modulus of the underlying Paillier key, so that ciphertext arithmetic is done modulo N^2):

O1(x, y) = x^(-1) · y mod N^2
O2(x) = ⌊(x − 1) / N⌋
The main difference from Subleq is that in Subleq, O1(x, y) simply subtracts x from y and O2(x) equals x. Cryptoleq is also homomorphic to Subleq: modular inversion and multiplication are homomorphic to subtraction, and the operation of O2 corresponds to the Subleq test if the values were unencrypted. A program written in Subleq can run on a Cryptoleq machine, meaning backwards compatibility. Cryptoleq, though, implements fully homomorphic calculations, since the model is able to do multiplication. Multiplication on an encrypted domain is assisted by a unique function G that is assumed to be difficult to reverse engineer and allows re-encryption of a value based on the O2 operation:
where is the re-encrypted value of and is encrypted zero. is the encrypted value of a variable, let it be , and equals .
The multiplication algorithm is based on addition and subtraction, uses the function G, and has no conditional jumps or branches. Cryptoleq encryption is based on the Paillier cryptosystem.
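A sketch of the Paillier arithmetic behind O1 in Python, with toy primes chosen for readability (real deployments use moduli thousands of bits long, and Python 3.8+ is assumed for modular inverses via pow(x, -1, m)). The O2 sign test on ciphertexts additionally requires the keyed function G and is not reproduced here:

import math, secrets

p, q = 1009, 1013                     # toy primes (illustration only)
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1)               # a valid private exponent for g = n + 1

def enc(m):
    while True:                        # pick r coprime to n
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2   # g^m * r^n mod n^2

def dec(c):
    L = (pow(c, lam, n2) - 1) // n     # the Paillier L function
    return L * pow(lam, -1, n) % n

def O1(x, y):                          # modular inversion and multiplication
    return pow(x, -1, n2) * y % n2

a, b = 5, 42
print(dec(O1(enc(a), enc(b))))         # 37, i.e. b - a (mod n)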
See also
FRACTRAN
Register machine
Turing tarpit
Zero instruction set computer
References
External links
Subleq on the esoteric programming languages wiki – interpreters, compilers, examples and derivative languages
by Christopher Domas
Laboratory subleq computer – FPGA implementation using VHDL
The Retrocomputing Museum – SBN emulator and sample programs
Laboratory SBN computer – implemented with 7400 series integrated circuits
RSSB on the esoteric programming languages wiki – interpreters and examples
Dr. Dobb's 32-bit OISC implementation – transport triggered architecture (TTA) on an FPGA using Verilog
Introduction to the MAXQ Architecture – includes transfer map diagram
OISC-Emulator – graphical version
TrapCC (recent Intel x86 MMUs are actually Turing-complete OISCs.)
SBN simulator – simulator and design inspired by CARDboard Illustrative Aid to Computation
One-bit Computing at 60 Hertz – intermediate between a computer and a state machine
The NOR Machine – info on building a CPU with only one instruction
Cryptoleq – Cryptoleq resources repository
CAAMP – Computer Architecture: A Minimalist Perspective
DawnOS – an operating system for the SUBLEQ architecture
Unileq – a variant of SUBLEQ using unsigned integers
Models of computation
Esoteric programming languages |
151762 | https://en.wikipedia.org/wiki/Zu%20Chongzhi | Zu Chongzhi | Zu Chongzhi (; 429–500 AD), courtesy name Wenyuan (), was a Chinese astronomer, mathematician, politician, inventor, and writer during the Liu Song and Southern Qi dynasties. He was most notable for calculating pi as between 3.1415926 and 3.1415927, a record in accuracy which would not be surpassed for over 800 years.
Life and works
Chongzhi's ancestry was from modern Baoding, Hebei. To flee from the ravages of war, Zu's grandfather Zu Chang moved to the Yangtze, as part of the massive population movement during the Eastern Jin. Zu Chang () at one point held the position of Chief Minister for the Palace Buildings () within the Liu Song and was in charge of government construction projects. Zu's father, Zu Shuozhi (), also served the court and was greatly respected for his erudition.
Zu was born in Jiankang. His family had historically been involved in astronomical research, and from childhood Zu was exposed to both astronomy and mathematics. When he was only a youth his talent earned him much repute. When Emperor Xiaowu of Liu Song heard of him, he was sent to the Hualin Xuesheng () academy, and later the Imperial Nanjing University (Zongmingguan) to perform research. In 461 in Nanxu (today Zhenjiang, Jiangsu), he was engaged in work at the office of the local governor.
Zu Chongzhi, along with his son Zu Gengzhi, wrote a mathematical text entitled Zhui Shu (; "Methods for Interpolation"). It is said that the treatise contained formulas for the volume of a sphere, cubic equations and an accurate value of pi. This book has been lost since the Song Dynasty.
His mathematical achievements included
the Daming calendar () introduced by him in 465.
distinguishing the sidereal year and the tropical year. He measured 45 years and 11 months per degree between those two; today we know the difference is 70.7 years per degree.
calculating one year as 365.24281481 days, which is very close to 365.24219878 days as we know today.
calculating the number of overlaps between sun and moon as 27.21223, which is very close to 27.21222 as we know today; using this number he successfully predicted an eclipse four times during 23 years (from 436 to 459).
calculating the Jupiter year as about 11.858 Earth years, which is very close to 11.862 as we know of today.
deriving two approximations of π (3.1415926535897932...), which held as the most accurate approximations of π for over nine hundred years. His best approximation was between 3.1415926 and 3.1415927, with 355/113 (密率, milü, close ratio) and 22/7 (约率, yuelü, approximate ratio) being the other notable approximations. He obtained the result by approximating a circle with a 24,576 (= 2¹³ × 3) sided polygon. This was an impressive feat for the time, especially considering that the counting rods he used for recording intermediate results were merely a pile of wooden sticks laid out in certain patterns. Japanese mathematician Yoshio Mikami pointed out, "22/7 was nothing more than the value obtained several hundred years earlier by the Greek mathematician Archimedes, however milü = 355/113 could not be found in any Greek, Indian or Arabian manuscripts, not until 1585 Dutch mathematician Adriaan Anthoniszoon obtained this fraction; the Chinese possessed this most extraordinary fraction over a whole millennium earlier than Europe". Hence Mikami strongly urged that the fraction be named after Zu Chongzhi as Zu's fraction. In Chinese literature, this fraction is known as "Zu's ratio". Zu's ratio is a best rational approximation to π, and is the closest rational approximation to π among all fractions with denominator less than 16600; a short verification appears after this list.
finding the volume of a sphere as πD³/6 where D is the diameter (equivalent to 4πr³/3).
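The optimality claim can be checked mechanically. A minimal sketch in Python, using the standard fractions module and a 15-decimal value of π (sufficient for the first few continued-fraction convergents):

from fractions import Fraction

x = Fraction(3141592653589793, 10**15)   # pi to 15 decimal places
h0, h1, k0, k1 = 0, 1, 1, 0              # convergent recurrences
for _ in range(5):
    a = int(x)                           # next continued-fraction term
    h0, h1 = h1, a * h1 + h0
    k0, k1 = k1, a * k1 + k0
    print(f"{h1}/{k1} = {h1 / k1:.10f}")
    x = 1 / (x - a)
# prints 3/1, 22/7 (yuelü), 333/106, 355/113 (milü), 103993/33102;
# per the statement above, no fraction with denominator under 16600
# comes closer to pi than 355/113.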
Astronomy
Zu was an accomplished astronomer who calculated time values with unprecedented precision. His methods of interpolation and his use of integration were far ahead of his time. Even the results of the astronomer Yi Xing (who was beginning to utilize foreign knowledge) were not comparable. The Sung dynasty calendar appeared backward next to that of the "Northern barbarians", who were conducting their daily lives by the Da Ming Li. It is said that his methods of calculation were so advanced that the scholars of the Sung dynasty and the Indian-influenced astronomers of the Tang dynasty found them confusing.
Mathematics
The majority of Zu's great mathematical works are recorded in his lost text, the Zhui Shu. Most schools of thought argue about its complexity, since traditionally the Chinese had developed mathematics as algebraic and equational. Logically, scholars assume that the Zhui Shu yields methods for cubic equations. His work on the accurate value of pi describes the lengthy calculations involved. Zu used Liu Hui's π algorithm, described earlier by Liu Hui, to inscribe a 12,288-gon. Zu's value of pi is precise to six decimal places, and for a thousand years thereafter no subsequent mathematician computed a value this precise. Zu also worked on deducing the formula for the volume of a sphere.
Inventions and innovations
Hammer mills
In 488, Zu Chongzhi was responsible for erecting water-powered trip hammer mills, which were inspected by Emperor Wu of Southern Qi during the early 490s.
Paddle boats
Zu is also credited with inventing Chinese paddle steamers, or Qianli chuan, in the late 5th century AD during the Southern Qi Dynasty. The boats made sailing a more reliable form of transportation, and, building on the shipbuilding technology of the day, numerous paddle-wheel ships were constructed during the Tang era, as the boats were able to cruise at faster speeds than the existing vessels of the time and could cover hundreds of kilometers of distance without the aid of wind.
South pointing chariot
The south-pointing chariot device was first invented by the Chinese mechanical engineer Ma Jun (c. 200–265 AD). It was a wheeled vehicle that incorporated an early use of differential gears to operate a fixed figurine that would constantly point south, hence enabling one to accurately measure their directional bearings. This effect was achieved not by magnetics (like in a compass), but through intricate mechanics, the same design that allows equal amounts of torque applied to wheels rotating at different speeds for the modern automobile. After the Three Kingdoms period, the device fell out of use temporarily. However, it was Zu Chongzhi who successfully re-invented it in 478, as described in the texts of the Book of Song and the Book of Qi, with a passage from the latter below:
When Emperor Wu of Liu Song subdued Guanzhong he obtained the south-pointing carriage of Yao Xing, but it was only the shell with no machinery inside. Whenever it moved it had to have a man inside to turn (the figure). In the Sheng-Ming reign period, Gao Di commissioned Zi Zu Chongzhi to reconstruct it according to the ancient rules. He accordingly made new machinery of bronze, which would turn round about without a hitch and indicate the direction with uniformity. Since Ma Jun's time such a thing had not been. (Book of Qi, 52.905)
Literature
Zu's paradoxographical work Accounts of Strange Things survives.
Named after him
π ≈ 355/113 as Zu Chongzhi's ratio.
The lunar crater Tsu Chung-Chi
1888 Zu Chong-Zhi is the name of asteroid 1964 VO1.
The ZUC stream cipher is an encryption algorithm.
Notes
References
Needham, Joseph (1986). Science and Civilization in China: Volume 4, Part 2. Cambridge University Press
Du Shiran and He Shaogeng, "Zu Chongzhi". Encyclopedia of China (Mathematics Edition), 1st ed.
Further reading
External links
Encyclopædia Britannica's description of Zu Chongzhi
Zu Chongzhi at Chinaculture.org
Zu Chongzhi at the University of Maine
429 births
500 deaths
5th-century Chinese mathematicians
5th-century Chinese astronomers
Ancient Chinese mathematicians
Chinese inventors
Liu Song dynasty people
Liu Song politicians
Liu Song writers
Pi
Politicians from Nanjing
Scientists from Nanjing
Southern Qi politicians
Writers from Nanjing |
151950 | https://en.wikipedia.org/wiki/Eeg%20%28disambiguation%29 | Eeg (disambiguation) | Eeg or EEG may refer to:
People
Harald Rosenløw Eeg (born 1970), Norwegian writer
Sinne Eeg (born 1977), Danish musician
Syvert Omundsen Eeg (1757–1838), Norwegian politician
Other uses
Eastern European Group, in the United Nations
Electroencephalography
Emirates Environmental Group
Enterprise encryption gateway
Emperor Entertainment Group, a Hong Kong-based entertainment company of Emperor Group
German Renewable Energy Sources Act (German: Erneuerbare-Energien-Gesetz), of the Government of Germany |
152420 | https://en.wikipedia.org/wiki/Passphrase | Passphrase | A passphrase is a sequence of words or other text used to control access to a computer system, program or data. It is similar to a password in usage, but a passphrase is generally longer for added security. Passphrases are often used to control both access to, and the operation of, cryptographic programs and systems, especially those that derive an encryption key from a passphrase. The origin of the term is by analogy with password. The modern concept of passphrases is believed to have been invented by Sigmund N. Porter in 1982.
Security
Considering that the entropy of written English is less than 1.1 bits per character, passphrases can be relatively weak. NIST has estimated that the 23-character passphrase "IamtheCapitanofthePina4" contains 45 bits of strength. The equation employed here is:
4 bits (1st character) + 14 bits (characters 2–8) + 18 bits (characters 9–20) + 3 bits (characters 21–23) + 6 bits (bonus for upper case, lower case, and alphanumeric) = 45 bits
(This calculation does not take into account that this is a well-known quote from the operetta H.M.S. Pinafore. An MD5 hash of this passphrase can be cracked in 4 seconds using crackstation.net, indicating that the phrase is found in password cracking databases.)
Using this guideline, to achieve the 80-bit strength recommended for high security (non-military) by NIST, a passphrase would need to be 58 characters long, assuming a composition that includes uppercase and alphanumeric.
There is room for debate regarding the applicability of this equation, depending on the number of bits of entropy assigned. For example, the characters in five-letter words each contain 2.3 bits of entropy, which would mean only a 35-character passphrase is necessary to achieve 80 bit strength.
If the words or components of a passphrase may be found in a language dictionary—especially one available as electronic input to a software program—the passphrase is rendered more vulnerable to dictionary attack. This is a particular issue if the entire phrase can be found in a book of quotations or phrase compilations. However, the required effort (in time and cost) can be made impracticably high if there are enough words in the passphrase and if they are randomly chosen and ordered in the passphrase. The number of combinations which would have to be tested under sufficient conditions make a dictionary attack so difficult as to be infeasible. These are difficult conditions to meet, and selecting at least one word that cannot be found in any dictionary significantly increases passphrase strength.
If passphrases are chosen by humans, they are usually biased by the frequency of particular words in natural language. In the case of four word phrases, actual entropy rarely exceeds 30 bits. On the other hand, user-selected passwords tend to be much weaker than that, and encouraging users to use even 2-word passphrases may be able to raise entropy from below 10 bits to over 20 bits.
For example, the widely used cryptography standard OpenPGP requires that a user make up a passphrase that must be entered whenever decrypting or signing messages. Internet services like Hushmail provide free encrypted e-mail or file sharing services, but the security present depends almost entirely on the quality of the chosen passphrase.
Compared to passwords
Passphrases differ from passwords. A password is usually short—six to ten characters. Such passwords may be adequate for various applications (if frequently changed, if chosen using an appropriate policy, if not found in dictionaries, if sufficiently random, and/or if the system prevents online guessing, etc.) such as:
Logging onto computer systems
Negotiating keys in an interactive setting (e.g. using password-authenticated key agreement)
Enabling a smart-card or PIN for an ATM card (e.g. where the password data (hopefully) cannot be extracted)
But passwords are typically not safe to use as keys for standalone security systems (e.g., encryption systems) that expose data to enable offline password guessing by an attacker. Passphrases are theoretically stronger, and so should make a better choice in these cases. First, they usually are (and always should be) much longer—20 to 30 characters or more is typical—making some kinds of brute force attacks entirely impractical. Second, if well chosen, they will not be found in any phrase or quote dictionary, so such dictionary attacks will be almost impossible. Third, they can be structured to be more easily memorable than passwords without being written down, reducing the risk of hardcopy theft. However, if a passphrase is not protected appropriately by the authenticator and the clear-text passphrase is revealed its use is no better than other passwords. For this reason it is recommended that passphrases not be reused across different or unique sites and services.
In 2012, two Cambridge University researchers analyzed passphrases from the Amazon PayPhrase system and found that a significant percentage are easy to guess due to common cultural references such as movie names and sports teams, losing much of the potential of using long passwords.
When used in cryptography, commonly the password protects a long (machine generated) key, and the key protects the data. The key is so long a brute force attack (directly on the data) is impossible. A key derivation function is used, involving many thousands of iterations (salted & hashed), to slow down password cracking attacks.
Passphrase selection
Typical advice about choosing a passphrase includes suggestions that it should be:
Long enough to be hard to guess
Not a famous quotation from literature, holy books, et cetera
Hard to guess by intuition—even by someone who knows the user well
Easy to remember and type accurately
For better security, any easily memorable encoding at the user's own level can be applied.
Not reused between sites, applications and other different sources
Example methods
One method to create a strong passphrase is to use dice to select words at random from a long list, a technique often referred to as diceware. While such a collection of words might appear to violate the "not from any dictionary" rule, the security is based entirely on the large number of possible ways to choose from the list of words and not from any secrecy about the words themselves. For example, if there are 7776 words in the list and six words are chosen randomly, then there are 7776⁶ = 221,073,919,720,733,357,899,776 combinations, providing about 78 bits of entropy. (The number 7776 was chosen to allow words to be selected by throwing five dice: 7776 = 6⁵.) Random word sequences may then be memorized using techniques such as the memory palace.
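A minimal diceware-style generator sketch in Python; the wordlist filename is a placeholder (any list of 7776 words will do), and the secrets module supplies the cryptographically strong randomness on which the scheme's security argument rests:

import math
import secrets

words = open("diceware_wordlist.txt").read().split()   # assumed: 7776 words
assert len(words) == 7776                              # 6**5, five dice per word
phrase = " ".join(secrets.choice(words) for _ in range(6))
print(phrase)
print(f"~{6 * math.log2(7776):.1f} bits of entropy")   # ~77.5, as computed above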
Another is to choose two phrases, turn one into an acronym, and include it in the second, making the final passphrase. For instance, using two English language typing exercises, we have the following. The quick brown fox jumps over the lazy dog, becomes tqbfjotld. Including it in, Now is the time for all good men to come to the aid of their country, might produce, Now is the time for all good tqbfjotld to come to the aid of their country as the passphrase.
There are several points to note here, all relating to why this example passphrase is not a good one.
It has appeared in public and so should be avoided by everyone.
It is long (which is a considerable virtue in theory) and requires a good typist as typing errors are much more likely for extended phrases.
Individuals and organizations serious about cracking computer security have compiled lists of passwords derived in this manner from the most common quotations, song lyrics, and so on.
The PGP Passphrase FAQ suggests a procedure that attempts a better balance between theoretical security and practicality than this example. All procedures for picking a passphrase involve a tradeoff between security and ease of use; security should be at least "adequate" while not "too seriously" annoying users. Both criteria should be evaluated to match particular situations.
Another supplementary approach to frustrating brute-force attacks is to derive the key from the passphrase using a deliberately slow hash function, such as PBKDF2 as described in RFC 2898.
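A sketch of such a derivation using only Python's standard library; the passphrase, salt size and iteration count are illustrative placeholders rather than recommendations:

import hashlib
import os

passphrase = b"correct horse battery staple"       # placeholder passphrase
salt = os.urandom(16)                              # fresh random salt per user
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 600_000, dklen=32)
print(key.hex())                                   # 256-bit derived key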
Windows support
If backward compatibility with Microsoft LAN Manager is not needed, in versions of Windows NT (including Windows 2000, Windows XP and later), a passphrase can be used as a substitute for a Windows password. If the passphrase is longer than 14 characters, this will also avoid the generation of a very weak LM hash.
Unix support
In recent versions of Unix-like operating systems such as Linux, OpenBSD, NetBSD, Solaris and FreeBSD, up to 255-character passphrases can be used.
See also
Keyfile
Password-based cryptography
Password psychology
References
External links
Diceware page
xkcd Password Strength – a commonly viewed explanation of the concept
Cryptography
Password authentication |
152701 | https://en.wikipedia.org/wiki/BESK | BESK | BESK (Binär Elektronisk SekvensKalkylator, Swedish for "Binary Electronic Sequence Calculator") was Sweden's first electronic computer, using vacuum tubes instead of relays. It was developed by Matematikmaskinnämnden (Swedish Board for Computing Machinery) and for a short time it was the fastest computer in the world. The computer was completed in 1953 and in use until 1966. The technology behind BESK was later continued with the transistorized FACIT EDB and FACIT EDB-3 machines, both software compatible with BESK. Non-compatible machines highly inspired by BESK were SMIL made for the University of Lund, SAABs räkneautomat SARA, "SAAB's calculating machine", and DASK made in Denmark.
BESK was developed by the Swedish Board for Computing Machinery (Matematikmaskinnämnden) a few years after the mechanical relay computer BARK (Binär Aritmetisk Relä-Kalkylator, Swedish for "Binary Arithmetic Relay Calculator"). The team was initially led by Conny Palm, who died in December 1951, after which Stig Comét took over. The hardware was developed by Erik Stemme. Gösta Neovius and Olle Karlqvist were responsible for architecture and instruction set. It was closely modeled on the IAS machine for which the design team had retrieved drawings during a scholarship to Institute for Advanced Study (IAS) and Massachusetts Institute of Technology, U.S.
During the development of the BESK magnetic drum memory, Olle Karlqvist discovered a magnetic phenomenon, which has been called the Karlqvist gap.
Performance
BESK was a 40-bit machine; it could perform an addition in 56 μs and a multiplication took 350 μs. The electrostatic memory could store 512 words. The instruction length was 20 bits, so each word could store two instructions. BESK contained 2400 "radio tubes" (vacuum tubes) and 400 germanium diodes (so it was partly solid state). The power consumption was 15 kVA.
Initially an average runtime of 5 minutes was achieved before hardware problems appeared. In 1954 the system became more stable. Breakpoints were introduced to allow software restart after hardware failures.
Originally BESK had a British Williams tube memory of 512 words × 40 bits, based on 40 cathode-ray tubes, plus eight spare tubes. The memory was from the beginning found to be insufficient, and Carl-Ivar Bergman was given just a few weeks to build and install a ferrite core memory in 1956. To get finished before the deadline they hired housewives with knitting experience to make the memory. One of the new memory bits did not work at first, but it was easily cut out and replaced.
Usage
BESK was inaugurated on 1 April 1954 and handled weather data for Carl-Gustaf Rossby and the Swedish Meteorological and Hydrological Institute, statistics for the telecommunications service provider Televerket, wing profiles for the attack aircraft Saab Lansen, and road profiles for the road authority Vägverket. During the nights Swedish National Defence Radio Establishment (FRA) used BESK for cracking encryption of radio messages (by Per-Erik Persson et al.). BESK was also used for calculations for the Swedish nuclear energy industry, for example Monte Carlo simulations of neutron spectrum (by Per-Erik Persson et al.), and for the Swedish nuclear weapon program, but most of those calculations were done by SMIL. In 1957 Hans Riesel used BESK to discover a Mersenne prime with 969 digits - the largest prime known at the time.
SAAB rented computer time on the BESK to (probably, much was secret) make calculations of the strength of the Saab Lansen attack aircraft. In the fall of 1955 SAAB thought the capacity was insufficient and started working on SAABs räkneautomat SARA, "SAAB's calculating machine", which was going to be twice as fast as BESK. Some former SARA employees went to Facit and worked with the FACIT EDB.
In the spring of 1956, eighteen of the BESK developers were hired by office equipment manufacturer Facit and housed in an office at Karlavägen 62 in Stockholm, where they started to build copies of BESK called Facit EDB (models 1, 2, and 3), led by Carl-Ivar Bergman. A total of nine machines were built, of which four were used internally by Facit Electronics and five were sold to customers. On 1 July 1960 Facit Electronics, then with 135 employees, moved to Solna, just north of Stockholm.
In 1960 BESK was used to create an animation of a car driving down a planned highway from the driver's perspective. This was one of the earliest computer animations ever made. The short clip was broadcast on Swedish national television on 9 November 1961.
Trivia
"Besk" is Swedish for the taste "bitter". Bäsk is also the name of a traditional bitters made from distilled alcohol seasoned with the herb Artemisia absinthium L. local to the province of Skåne, in which Lund is located. Reportedly this was an intentional and unnoticed pun after officials denied usage of the name CONIAC (Conny [Palm] Integrator And Calculator, compare Cognac and ENIAC) for the predecessor BARK.
See also
BARK - Binär Aritmetisk Relä-Kalkylator - Sweden's first computer.
Elsa-Karin Boestad-Nilsson, a programmer on BARK and BESK
SMIL - SifferMaskinen I Lund (The Number Machine in Lund)
History of computing hardware
List of vacuum tube computers
References
External links
Datorhistoriska nedslag (in Swedish), Google translation
BESK Binär Elektronisk Sekvens Kalkylator (in Swedish), Google translation
BESK programmers manual (in Swedish)
IAS architecture computers
Vacuum tube computers
1950s computers
Science and technology in Sweden |
154405 | https://en.wikipedia.org/wiki/CCM | CCM | CCM may refer to:
Cubic centimetre (ccm), metric unit of volume
Climate change mitigation (CCM), climate change topic
Biology and medicine
Calcium concentration microdomains, part of a cell's cytoplasm
Photosynthesis#Carbon concentrating mechanisms
Cardiac contractility modulation, a therapy for heart failure
Cerebral cavernous malformation
Computing
CCM mode, an encryption algorithm
Client Configuration Manager, a component of Microsoft System Center Configuration Manager
Combined Cipher Machine, a WWII-era cipher system
Community Climate Model, predecessor of the Community Climate System Model
Constrained conditional model, a machine-learning framework
CORBA Component Model
Customer communications management, a type of software
Government
Center for Countermeasures, a US White Sands Proving Grounds operation
Chama Cha Mapinduzi, the ruling political party in Tanzania
Command Chief Master Sergeant, a US Air Force position
Convention on Cluster Munitions, a 2010 international treaty prohibiting cluster bombs
Music
Aspects of mid-20th century American Christian evangelicalism:
Contemporary Christian music
CCM Magazine
Contemporary commercial music
University of Cincinnati – College-Conservatory of Music
Schools and organizations
City College of Manila, Philippines
Council of Churches of Malaysia
County College of Morris, New Jersey, United States
Sport
CCM (bicycle company), a Canadian bicycle manufacturer
CCM (ice hockey), a Canadian sporting goods brand
Central Coast Mariners FC, an Australian A-League football team
Clews Competition Motorcycles, a British motorcycle manufacturer
Other uses
Cardinal Courier Media, an overseeing body at St. John Fisher College
Catherine Cortez Masto (born 1964), United States Senator from Nevada
Cape Cod Mall, a shopping mall in Hyannis, Massachusetts
CCM Airlines, an airline of Corsica, France
Certified Construction Manager, an accreditation by the Construction Management Association of America
Certified Consulting Meteorologist
China Chemicals Market, part of Kcomber
Convergent cross mapping, a statistical test
Core Cabin Module, a part of the Chinese space station
Macao Cultural Centre, Macau, China |
154457 | https://en.wikipedia.org/wiki/Internet%20censorship%20in%20China | Internet censorship in China | Internet censorship in the People's Republic of China (PRC) affects both publishing and viewing online material. Many controversial events are censored from news coverage, preventing many Chinese citizens from knowing about the actions of their government, and severely restricting freedom of the press. Such measures, including the complete blockage of various websites, inspired the policy's nickname, the "Great Firewall of China", which blocks websites such as Wikipedia, YouTube, and Google. Methods used to block websites and pages include DNS spoofing, blocking access to IP addresses, analyzing and filtering URLs, packet inspection, and resetting connections.
China's Internet censorship is more comprehensive and sophisticated than that of any other country in the world. The government blocks website content and monitors Internet access. As required by the government, major Internet platforms in China have established elaborate self-censorship mechanisms. As of 2019, more than sixty online restrictions had been created by the Government of China and implemented by provincial branches of state-owned ISPs, companies and organizations. Some companies hire teams of censors and invest in powerful artificial intelligence algorithms to police and remove illegal online content.
Amnesty International states that China has "the largest recorded number of imprisoned journalists and cyber-dissidents in the world" and Reporters Without Borders stated in 2010 and 2012 that "China is the world's biggest prison for netizens."
About 904 million people have access to the Internet in China. Commonly alleged user offenses include communicating with organized groups abroad, signing controversial online petitions, and forcibly calling for government reform. The government has escalated its efforts to reduce coverage and commentary that is critical of the regime after a series of large anti-pollution and anti-corruption protests, and in the regions of Xinjiang and Tibet, which have been subject to terrorism. Many of these protests, as well as ethnic riots, were organized or publicized using instant messaging services, chat rooms, and text messages. China's Internet police force was reported by official state media to be 2 million strong in 2013.
China's special administrative regions of Hong Kong and Macau are outside the Great Firewall. However, it was reported that the central government authorities have been closely monitoring Internet use in these regions (see Internet censorship in Hong Kong).
Background
The political and ideological background of Internet censorship is considered to be one of Deng Xiaoping's favorite sayings in the early 1980s: "If you open a window for fresh air, you have to expect some flies to blow in." The saying is related to a period of the Chinese economic reform that became known as the "socialist market economy". Superseding the political ideologies of the Cultural Revolution, the reform led China towards a market economy, opening it up to foreign investors. Nonetheless, the Chinese Communist Party (CCP) wished to protect its values and political ideas by "swatting flies" of other ideologies, with a particular emphasis on suppressing movements that could potentially threaten the stability of the country.
The Internet first arrived in the country in 1994. Since its arrival and the gradual rise of availability, the Internet has become a common communication platform and an important tool for sharing information. Just as the Chinese government had expected, the number of Internet users in China soared from less than one percent in 1994, when the Internet was introduced, to 28.8 percent by 2009.
In 1998, the CCP feared that the China Democracy Party (CDP), organized in contravention of the "Four Cardinal Principles", would breed a powerful new network that CCP party elites might not be able to control, and the CDP was immediately banned. That same year, the "Golden Shield project" was created. The first part of the project lasted eight years and was completed in 2006. The second part began in 2006 and ended in 2008. The Golden Shield project was a database project in which the government could access the records of each citizen and connect China's security organizations. The government had the power to delete any comments online that were considered harmful.
On 6 December 2002, 300 members in charge of the Golden Shield project came from 31 provinces and cities across China to participate in a four-day inaugural "Comprehensive Exhibition on Chinese Information System". At the exhibition, many Western technology products including Internet security, video monitoring, and facial recognition systems were purchased. According to Amnesty International, around 30,000–50,000 Internet police have been employed by the Chinese government to enforce Internet laws.
The Chinese government has described censorship as the method to prevent and eliminate "risks in the ideological field from the Internet".
Legislative basis
The government of China defends its right to censor the Internet by claiming that this right extends from the country's own rules inside its borders. A white paper released in June 2010 reaffirmed the government's determination to govern the Internet within its borders under the jurisdiction of Chinese sovereignty. The document states, "Laws and regulations prohibit the spread of information that contains content subverting state power, undermining national unity [or] infringing upon national honor and interests." It adds that foreign individuals and firms can use the Internet in China, but they must abide by the country's laws.
The Central Government of China started its Internet censorship with three regulations. The first regulation was called the Temporary Regulation for the Management of Computer Information Network International Connection. The regulation was passed in the 42nd Standing Convention of the State Council on 23 January 1996. It was formally announced on 1 February 1996, and updated again on 20 May 1997. The content of the first regulation stated that Internet service providers be licensed and that Internet traffic goes through ChinaNet, GBNet, CERNET or CSTNET. The second regulation was the Ordinance for Security Protection of Computer Information Systems. It was issued on 18 February 1994 by the State Council to give the responsibility of Internet security protection to the Ministry of Public Security.
Article 5 of the Computer Information Network and Internet Security, Protection, and Management Regulations
The Ordinance regulation further led to the Security Management Procedures in Internet Accessing issued by the Ministry of Public Security in December 1997. The regulation defined "harmful information" and "harmful activities" regarding Internet usage. Section Five of the Computer Information Network and Internet Security, Protection, and Management Regulations approved by the State Council on 11 December 1997 stated the following:
(The "units" stated above refer to work units () or more broadly, workplaces). As of 2021, the regulations are still active and govern the activities of Internet users online.
Interim Regulations of the PRC on the Management of International Networking of Computer Information
In 1996, the Ministry of Commerce created a set of regulations which prohibit connection to "international networks", or use of channels outside of those provided by official government service providers, without prior approval or license from authorities. The Ministry of Posts and Telecommunications has since been superseded by the Ministry of Industry and Information Technology (MIIT). To this date this regulation is still used to prosecute and fine users who connect to international networks or use VPNs.
State Council Order No. 292
In September 2000, State Council Order No. 292 created the first set of content restrictions for Internet content providers. China-based websites cannot link to overseas news websites or distribute news from overseas media without separate approval. Only "licensed print publishers" have the authority to deliver news online. These sites must obtain approval from state information offices and the State Council Information Agency. Non-licensed websites that wish to broadcast news may only publish information already released publicly by other news media. Article 11 of this order mentions that "content providers are responsible for ensuring the legality of any information disseminated through their services." Article 14 gives Government officials full access to any kind of sensitive information they wish from providers of Internet services.
Cybersecurity Law of the People's Republic of China
On 7 November 2016, the Standing Committee of the National People's Congress promulgated a cybersecurity law, effective 1 June 2017, which among other things requires "network operators" to store data locally and hand over information when requested by state security organs, and subjects software and hardware used by "critical information infrastructure" operators to national security review, potentially compromising source code and the security of encryption used by communications service providers. The law is an amalgamation of all previous regulations related to Internet use and online censorship, and it unifies and institutionalises the legislative framework governing cyber control and content censorship within the country. Article 12 states that persons using networks shall not "overturn the socialist system, incite separatism" or "break national unity", further institutionalising the suppression of dissent online.
Enforcement
In December 1997, The Public Security Minister, Zhu Entao, released new regulations to be enforced by the ministry that inflicted fines for "defaming government agencies, splitting the nation, and leaking state secrets." Violators could face a fine of up to CNY 15,000 (roughly US$1,800). Banning appeared to be mostly uncoordinated and ad hoc, with some websites allowed in one city, yet similar sites blocked in another. The blocks were often lifted for special occasions. For example, The New York Times was unblocked when reporters in a private interview with CCP General Secretary Jiang Zemin specifically asked about the block and he replied that he would look into the matter. During the APEC summit in Shanghai during 2001, normally-blocked media sources such as CNN, NBC, and the Washington Post became accessible. Since 2001, blocks on Western media sites have been further relaxed, and all three of the sites previously mentioned were accessible from mainland China. However, access to the New York Times was denied again in December 2008.
In the middle of 2005, China purchased over 200 routers from an American company, Cisco Systems, which enabled the Chinese government to use more advanced censor technology. In February 2006, Google, in exchange for equipment installation on Chinese soil, blocked websites which the Chinese government deemed illegal. Google reversed this policy in 2010, after they suspected that a Google employee passed information to the Chinese Government and inserted backdoors into their software.
In May 2011, the State Council Information Office announced the transfer of its offices which regulated the Internet to a new subordinate agency, the State Internet Information Office which would be responsible for regulating the Internet in China. The relationship of the new agency to other Internet regulation agencies in China was unclear from the announcement.
On 26 August 2014, the State Internet Information Office (SIIO) was formally authorized by the state council to regulate and supervise all Internet content. It later launched a website called the Cyberspace Administration of China (CAC) and the Office of the Central Leading Group for Cyberspace Affairs. In February 2014, the Central Internet Security and Informatization Leading Group was created in order to oversee cybersecurity and receive information from the CAC. Chairing the 2018 China Cyberspace Governance Conference on 20 and 21 April 2018, Xi Jinping, General Secretary of the Chinese Communist Party, committed to "fiercely crack down on criminal offenses including hacking, telecom fraud, and violation of citizens' privacy." The Conference comes on the eve of the First Digital China Summit, which was held at the Fuzhou Strait International Conference and Exhibition Centre in Fuzhou, the capital of Fujian Province.
On 4 January 2019, the CAC started a project to take down pornography, violence, bloody content, horror, gambling, defrauding, Internet rumors, superstition, invectives, parody, threats, and proliferation of "bad lifestyles" and "bad popular culture". On 10 January 2019, China Network Audiovisual Program Service Association announced a new regulation to censor short videos with controversial political or social content such as a "pessimistic outlook of millennials", "one night stands", "non-mainstream views of love and marriage" as well as previously prohibited content deemed politically sensitive.
China is planning to make deepfakes illegal, a move described as a way to prevent "parody and pornography".
In July 2019, the CAC announced a regulation under which Internet information providers and users in China who seriously violate related laws and regulations will be placed on a Social Credit System blocklist. It also announced that Internet information providers and users who fall short of the standard through milder violations will be recorded in the "List to Focus".
Self-regulation
Internet censorship in China has been called "a panopticon that encourages self-censorship through the perception that users are being watched." The enforcement (or threat of enforcement) of censorship creates a chilling effect where individuals and businesses willingly censor their own communications to avoid legal and economic repercussions. ISPs and other service providers are legally responsible for customers' conduct. The service providers have assumed an editorial role concerning customer content, thus becoming publishers and legally responsible for libel and other torts committed by customers. Some hotels in China advise Internet users to obey local Chinese Internet access rules by leaving a list of Internet rules and guidelines near the computers. These rules, among other things, forbid linking to politically unacceptable messages and inform Internet users that if they do, they will have to face legal consequences.
On 16 March 2002, the Internet Society of China, a self-governing Chinese Internet industry body, launched the Public Pledge on Self-Discipline for the Chinese Internet Industry, an agreement between the Chinese Internet industry regulator and companies that operate sites in China. In signing the agreement, web companies pledge to identify and prevent the transmission of information that Chinese authorities deem objectionable, including information that "breaks laws or spreads superstition or obscenity", or that "may jeopardize state security and disrupt social stability". As of 2006, the pledge had been signed by more than 3,000 entities operating websites in China.
Use of service providers
Although the government does not have the physical resources to monitor all Internet chat rooms and forums, the threat of being shut down has caused Internet content providers to employ internal staff, colloquially known as "big mamas", who stop and remove forum comments which may be politically sensitive. In Shenzhen, these duties are partly taken over by a pair of police-created cartoon characters, Jingjing and Chacha, who help extend the online "police presence" of the Shenzhen authorities. These cartoons spread across the nation in 2007 reminding Internet users that they are being watched and should avoid posting "sensitive" or "harmful" material on the Internet.
However, Internet content providers have adopted some counter-strategies. One is to post politically sensitive stories and remove them only when the government complains. In the hours or days in which the story is available online, people read it, and by the time the story is taken down, the information is already public. One notable case in which this occurred was in response to a school explosion in 2001, when local officials tried to suppress the fact the explosion resulted from children illegally producing fireworks.
On 11 July 2003, the Chinese government started granting licenses to businesses to open Internet cafe chains. Business analysts and foreign Internet operators regard the licenses as intended to clamp down on information deemed harmful to the Chinese government. In July 2007, the city of Xiamen announced it would ban anonymous online postings after text messages and online communications were used to rally protests against a proposed chemical plant in the city. Internet users will be required to provide proof of identity when posting messages on the more than 100,000 Web sites registered in Xiamen.
The Chinese government issued new rules on 28 December 2012, requiring Internet users to provide their real names to service providers, while assigning Internet companies greater responsibility for deleting forbidden postings and reporting them to the authorities. The new regulations, issued by the Standing Committee of the National People's Congress, allow Internet users to continue to adopt pseudonyms for their online postings, but only if they first provide their real names to service providers, a measure that could chill some of the vibrant discourse on the country's Twitter-like microblogs. The authorities periodically detain and even jail Internet users for politically sensitive comments, such as calls for a multiparty democracy or accusations of impropriety by local officials.
Arrests
Fines and short arrests are becoming an optional punishment to whoever spreads undesirable information through the different Internet formats, as this is seen as a risk to social stability.
In 2001, Wang Xiaoning and other Chinese activists were arrested and sentenced to 10 years in prison for using a Yahoo! email account to post anonymous writing to an Internet mailing list. On 23 July 2008, the family of Liu Shaokun was notified that he had been sentenced to one year re-education through labor for "inciting a disturbance". As a teacher in Sichuan province, he had taken photographs of collapsed schools and posted these photos online. On 18 July 2008, Huang Qi was formally arrested on suspicion of illegally possessing state secrets. Huang had spoken with the foreign press and posted information on his website about the plight of parents who had lost children in collapsed schools. Shi Tao, a Chinese journalist, used his Yahoo! email account to send a message to a U.S.-based pro-democracy website. In his email, he summarized a government order directing media organizations in China to downplay the upcoming 15th anniversary of the 1989 crackdown on pro-democracy activists. Police arrested him in November 2004, charging him with "illegally providing state secrets to foreign entities". In April 2005, he was sentenced to 10 years' imprisonment and two years' subsequent deprivation of his political rights.
In mid-2013 police across China arrested hundreds of people accused of spreading false rumors online. The arrest targeted microbloggers who accused CCP officials of corruption, venality, and sexual escapades. The crackdown was intended to disrupt online networks of like-minded people whose ideas could challenge the authority of the CCP. Some of China's most popular microbloggers were arrested. In September 2013, China's highest court and prosecution office issued guidelines that define and outline penalties for publishing online rumors and slander. The rules give some protection to citizens who accuse officials of corruption, but a slanderous message forwarded more than 500 times or read more than 5,000 times could result in up to three years in prison.
According to the 2020 World Press Freedom Index, compiled by Reporters Without Borders, China is the world's biggest jailer of journalists, holding around 100 in detention. In February 2020, China arrested two of its citizens for taking it upon themselves to cover the COVID-19 pandemic.
Technical implementation
Current methods
The Great Firewall has used numerous methods to block content, including IP dropping, DNS spoofing, deep packet inspection for finding plaintext signatures within the handshake to throttle protocols, and more recently active probing.
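One classic way researchers observe the DNS-spoofing component from outside the firewall is to send a query for a filtered hostname toward an IP address that runs no DNS service at all; any well-formed answer must then have been injected in transit. The sketch below (Python, standard library only) builds and parses a minimal RFC 1035 "A" query; the hostname and the probe target 192.0.2.1 (a documentation address) are placeholders, not measurements:

import secrets
import socket
import struct

def query_a(name, server, timeout=3.0):
    tid = secrets.randbits(16)
    hdr = struct.pack(">HHHHHH", tid, 0x0100, 1, 0, 0, 0)   # RD=1, one question
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    msg = hdr + qname + struct.pack(">HH", 1, 1)            # QTYPE=A, QCLASS=IN
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(msg, (server, 53))
        data, _ = s.recvfrom(512)
    qd, an = struct.unpack(">HH", data[4:8])                # question/answer counts
    i = 12
    for _ in range(qd):                                     # skip question section
        while data[i]:
            i += data[i] + 1
        i += 5                                              # root label + type + class
    ips = []
    for _ in range(an):                                     # walk answer records
        while True:                                         # skip the owner name
            if data[i] & 0xC0 == 0xC0:                      # compression pointer
                i += 2
                break
            if data[i] == 0:
                i += 1
                break
            i += data[i] + 1
        rtype, rclass, _ttl, rdlen = struct.unpack(">HHIH", data[i:i + 10])
        i += 10
        if rtype == 1 and rclass == 1 and rdlen == 4:       # IN A record
            ips.append(".".join(map(str, data[i:i + 4])))
        i += rdlen
    return ips

try:
    print(query_a("example.com", "192.0.2.1"))   # placeholder: no resolver here
except OSError:
    print("no reply (nothing injected an answer on this path)")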
Future projects
The Golden Shield Project is owned by the Ministry of Public Security of the People's Republic of China (MPS). It started in 1998, began processing in November 2003, and the first part of the project passed the national inspection on 16 November 2006 in Beijing. According to MPS, its purpose is to construct a communication network and computer information system for police to improve their capability and efficiency. By 2002 the preliminary work of the Golden Shield Project had cost US$800 million (equivalent to RMB 5,000 million or €620 million). Greg Walton, a freelance researcher, said that the aim of the Golden Shield is to establish a "gigantic online database" that would include "speech and face recognition, closed-circuit television... [and] credit records" as well as traditional Internet use records.
A notice issued by the Ministry of Industry and Information Technology on 19 May stated that, as of 1 July 2009, manufacturers must ship machines to be sold in mainland China with the Green Dam Youth Escort software. On 14 August 2009, Li Yizhong, minister of industry and information technology, announced that computer manufacturers and retailers were no longer obliged to ship the software with new computers for home or business use, but that schools, Internet cafes and other public use computers would still be required to run the software.
A senior official of the Internet Affairs Bureau of the State Council Information Office said the software's only purpose was "to filter pornography on the Internet". The general manager of Jinhui, which developed Green Dam, said: "Our software is simply not capable of spying on Internet users, it is only a filter." Human rights advocates in China have criticized the software for being "a thinly concealed attempt by the government to expand censorship". Online polls conducted on Sina, Netease, Tencent, Sohu, and Southern Metropolis Daily revealed over 70% rejection of the software by netizens. However, Xinhua commented that "support [for Green Dam] largely stems from end users, opposing opinions primarily come from a minority of media outlets and businesses."
Targets of censorship
Targeted content
According to a Harvard study, at least 18,000 websites were blocked from within mainland China in 2002, including 12 out of the Top 100 Global Websites. The Chinese-sponsored news agency, Xinhua, stated that censorship targets only "superstitious, pornographic, violence-related, gambling, and other harmful information." This appears questionable, as the e-mail provider Gmail is blocked, and it cannot be said to fall into any of these categories. On the other hand, websites centered on the following political topics are often censored: Falun Gong, police brutality, 1989 Tiananmen Square protests, freedom of speech, democracy, Taiwan independence, the Tibetan independence movement, and the Tuidang movement. Foreign media websites are occasionally blocked. As of 2014 the New York Times, the BBC, and Bloomberg News are blocked indefinitely.
Testing performed by Freedom House in 2011 confirmed that material written by or about activist bloggers is removed from the Chinese Internet in a practice that has been termed "cyber-disappearance".
A 2012 study of social media sites by other Harvard researchers found that 13% of Internet posts were blocked. The blocking focused mainly on any form of collective action (anything from false rumors driving riots to protest organizers to large parties for fun), pornography, and criticism of the censors. However, significant criticisms of the government were not blocked when made separately from calls for collective action. Another study has shown comments on social media that criticize the state, its leaders, and their policies are usually published, but posts with collective action potential will be more likely to be censored whether they are against the state or not.
Many large Japanese websites were blocked from the afternoon of 15 June 2012 (UTC+08:00) to the morning of 17 June 2012 (UTC+08:00), including Google Japan, Yahoo! Japan, Amazon Japan, Excite, Yomiuri News, Sponichi News and Nikkei BP Japan.
Chinese censors have been relatively reluctant to block websites where there might be significant economic consequences. For example, a block of GitHub was reversed after widespread complaints from the Chinese software developer community. In November 2013 after the Chinese services of Reuters and the Wall Street Journal were blocked, greatfire.org mirrored the Reuters website to an Amazon.com domain in such a way that it could not be shut down without shutting off domestic access to all of Amazon's cloud storage service.
For one month beginning 17 November 2014, ProPublica tested whether the homepages of 18 international news organizations were accessible to browsers inside China, and found the most consistently blocked were Bloomberg, New York Times, South China Morning Post, Wall Street Journal, Facebook, and Twitter. Internet censorship and surveillance are tightly implemented in China, blocking social websites like Gmail, Google, YouTube, Facebook, Instagram, and others. The excessive censorship practices of the Great Firewall of China have now engulfed VPN service providers as well.
Search engines
One part of the block is to filter the search results of certain terms on Chinese search engines. These Chinese search engines include both international ones (for example, yahoo.com.cn, Bing, and Google China) and domestic ones (for example, Sogou, 360 Search and Baidu). Attempting to search for censored keywords in these Chinese search engines will yield few or no results. Previously, google.cn displayed the following at the bottom of the page: "According to the local laws, regulations and policies, part of the searching result is not shown." When Google did business in the country, it set up computer systems inside China that tried to access websites outside the country. If a site was inaccessible, then it was added to Google China's blocklist.
In addition, a connection containing intensive censored terms may also be closed by The Great Firewall, and cannot be re-established for several minutes. This affects all network connections including HTTP and POP, but the reset is more likely to occur during searching. Before the search engines censored themselves, many search engines had been blocked, namely Google and AltaVista. Technorati, a search engine for blogs, has been blocked. Different search engines implement the mandated censorship in different ways. For example, the search engine Bing is reported to censor search results from searches conducted in simplified Chinese characters (used in China), but not in traditional Chinese characters (used in Hong Kong, Taiwan and Macau).
Discussion forums
Several university Bulletin Board Systems have been closed down or have restricted public access since 2004, including the SMTH BBS and the YTHT BBS.
In September 2007, some data centers were shut down indiscriminately for providing interactive features such as blogs and forums. CBS reports an estimate that half the interactive sites hosted in China were blocked.
Coinciding with the twentieth anniversary of the government suppression of the pro-democracy protests in Tiananmen Square, the government ordered Internet portals, forums and discussion groups to shut down their servers for maintenance between 3 and 6 June 2009. The day before the mass shut-down, Chinese users of Twitter, Hotmail and Flickr, among others, reported a widespread inability to access these services.
Social media websites
The censorship of individual social media posts in China usually occurs in two circumstances:
1. Corporations and the government hire censors to read individual social media posts and manually take down those that violate policy. (Although the government and media often use the microblogging service Sina Weibo to spread ideas and monitor corruption, it is also supervised and self-censored by 700 Sina censors.)
2. Posts are primarily auto-blocked based on keyword filters; censors then decide which of them to publish later (a minimal sketch of this pipeline follows).
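A minimal Python sketch of that keyword-based pre-moderation feeding manual review; the blocklist entries are placeholders, not any platform's actual list:

BLOCKLIST = {"placeholder-banned-term", "another-banned-term"}

def triage(post):
    text = post.lower()
    if any(term in text for term in BLOCKLIST):
        return "held for manual review"     # auto-block, then human censors decide
    return "published"

print(triage("an ordinary post"))                        # published
print(triage("a post with placeholder-banned-term"))     # held for manual review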
In the second half of 2009, the social networking sites Facebook and Twitter were blocked, presumably because they contained social or political commentary (similar to LiveJournal in the above list). An example is the commentary on the July 2009 Ürümqi riots. Another reason suggested for the block is that activists can utilize them to organize themselves.
In 2010, Chinese human rights activist Liu Xiaobo became a forbidden topic in Chinese media due to his winning the 2010 Nobel Peace Prize. Keywords and images relating to the activist and his life were again blocked in July 2017, shortly after his death.
After the 2011 Wenzhou train collision, the government started emphasizing the danger in spreading 'false rumours' (yaoyan), making the permissive use of Weibo and social networks a subject of public debate.
In 2012, First Monday published an article on "political content censorship in social media, i.e., the active deletion of messages published by individuals." This academic study, which received extensive media coverage ("China's social networks hit by censorship, says study", BBC News, 9 March 2012), accumulated a dataset of 56 million messages sent on Sina Weibo from June through September 2011, and statistically analyzed them three months later, finding 212,583 deletions out of 1.3 million sampled, more than 16 percent. The study revealed that censors quickly deleted words with politically controversial meanings (e.g., qingci 请辞 "asking someone to resign", referring to calls for Railway Minister Sheng Guangzu to resign after the Wenzhou train collision on 23 July 2011), and also that the rate of message deletion was regionally anomalous (compare censorship rates of 53% in Tibet and 52% in Qinghai with 12% in Beijing and 11.4% in Shanghai). In another study, conducted by a research team led by political scientist Gary King, objectionable posts created by King's team on a social networking site were almost universally removed within 24 hours of their posting.
The comment areas of popular posts mentioning Vladimir Putin on Sina Weibo were closed during the 2017 G20 Hamburg summit in Germany. It is a rare example of a foreign leader being shielded from popular judgment on the Chinese Internet, a protection usually granted only to Chinese leaders.
We-media
Social media and messaging app WeChat had attracted many users from blocked networks. Though subject to state rules which saw individual posts removed, Tech in Asia reported in 2013 that certain "restricted words" had been blocked on WeChat globally. A crackdown in March 2014 deleted dozens of WeChat accounts, some of which were independent news channels with hundreds of thousands of followers. CNN reported that the blocks were related to laws banning the spread of political "rumors".
The state-run Xinhua News Agency reported in July 2020 that the CAC would conduct an intensive three-month investigation and cleanup of 13 media platforms, including WeChat.
SSL Protocols
In 2020, China abruptly began blocking websites that use the TLS (Transport Layer Security) 1.3 protocol with ESNI (Encrypted Server Name Indication), since ESNI makes it difficult, if not impossible, to identify the name of a website from the server name sent during the TLS handshake. Since May 2015, the Chinese-language Wikipedia has been blocked in mainland China. This was done after Wikipedia started to use HTTPS encryption, which made selective censorship more difficult.
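Why ESNI matters can be seen in how an ordinary TLS connection is opened. In the Python sketch below (example.com is a placeholder host), the server_hostname value is transmitted unencrypted in the ClientHello as the SNI field whenever ESNI/ECH is not in use, so an on-path censor can read the destination name even though everything after the handshake is encrypted:

import socket, ssl

hostname = "example.com"  # without ESNI/ECH this name crosses the wire in cleartext
ctx = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=hostname) as tls:
        print(tls.version())  # e.g. 'TLSv1.3'; the handshake still leaked the SNI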
VPN Protocols
Beginning in 2018, the Ministry of Industry and Information Technology (MIIT), in conjunction with the Cyberspace Administration of China (CAC), began a sweeping crackdown on VPN providers, ordering all major state-owned telecommunications providers, including China Telecom, China Mobile and China Unicom, to block VPN protocols; only authorised users who have obtained permits beforehand may access VPNs, and only those operated by state-owned telecommunications companies. In 2017, Apple also started removing VPN apps from its App Store in China at the behest of the Chinese government.
Specific examples of Internet censorship
1989 Tiananmen Square protests
The Chinese government censors Internet materials related to the 1989 Tiananmen Square protests and massacre. According to the government's 2010 white paper on the Internet in China, the government protects "the safe flow of internet information and actively guides people to manage websites under the law and use the internet in a wholesome and correct way". The government therefore prevents people on the Internet from "divulging state secrets, subverting state power and jeopardizing national unification; damaging state honor" and "disrupting social order and stability." Law-abiding Chinese websites such as Sina Weibo, one of the largest Chinese microblogging services, censor words related to the protests in their search engines. As of October 2012, Weibo's censored words included "Tank Man." The government also censors words that have a pronunciation or meaning similar to "4 June", the date on which the government's violent crackdown occurred. "陆肆", for example, is an alternative to "六四" (4 June). The government forbids remembrances of the protests. Sina Weibo's search engine, for example, censors Hong Kong lyricist Thomas Chow's song 自由花 ("The Flower of Freedom"), since attendees of the Vindicate 4 June and Relay the Torch rally at Hong Kong's Victoria Park sing it every year to commemorate the victims of the events.
The government's Internet censorship of such topics was especially strict during the 20th anniversary of the Tiananmen Square protests in 2009. According to a Reporters Without Borders article, searching for photos related to the protest, such as "4 June", on Baidu, the most popular Chinese search engine, would return blank results and a message stating that the "search does not comply with laws, regulations, and policies". Moreover, a large number of netizens in China claimed that they were unable to access numerous Western web services such as Twitter, Hotmail, and Flickr, and that many Chinese web services were temporarily blocked, in the days leading up to and during the anniversary. Netizens also reported that microblogging services including Fanfou and Xiaonei (now known as Renren) were down, with similar messages claiming that their services were "under maintenance" for a few days around the anniversary date. In 2019, censors once again doubled down during the 30th anniversary of the protests, by which time censorship had become "largely automated".
Reactions of netizens in China
In 2009, the Guardian wrote that Chinese netizens responded with subtle protests against the government's temporary blockages of large web services. For instance, Chinese websites made subtle grievances against the state's censorship by sarcastically calling 4 June the 中国互联网维护日 or "Chinese Internet Maintenance Day". The owner of the blog Wuqing.org stated, "I, too, am under maintenance". The dictionary website Wordku.com voluntarily took its site down, citing "Chinese Internet Maintenance Day" as the reason. In 2013, Chinese netizens used subtle and sarcastic Internet memes to criticize the government and to bypass censorship by creating and posting humorous pictures or drawings resembling the Tank Man photo on Weibo. One of these pictures, for example, showed Florentijn Hofman's rubber duck sculptures replacing tanks in the Tank Man photo. On Twitter, Hu Jia, a Beijing-based AIDS activist, asked netizens in mainland China to wear black T-shirts on 4 June to oppose censorship and to commemorate the date. Chinese web services such as Weibo eventually censored searches of both "black shirt" and "Big Yellow Duck" in 2013.
As a result, the government further promoted anti-Western sentiment. In 2014, Chinese Communist Party general secretary Xi Jinping praised blogger Zhou Xiaoping for his "positive energy" after the latter argued, in an essay titled "Nine Knockout Blows in America's Cold War Against China", that American culture was "eroding the moral foundation and self-confidence of the Chinese people."
Debates about the significance of Internet resistance to censorship
According to Chinese studies expert Johan Lagerkvist, scholars Pierre Bourdieu and Michel de Certeau argue that this culture of satire is a weapon of resistance against authority, because criticism of authority often results in satirical parodies that "presupposes and confirms emancipation" of the supposedly oppressed people. Academic writer Linda Hutcheon argues, however, that some people may view satirical language used to criticise the government as "complicity", which can "reinforce rather than subvert conservative attitudes". Chinese experts Perry Link and Xiao Qiang oppose this argument. They claim that when sarcastic terms develop into common vocabulary among netizens, the terms lose their sarcastic character and become normal terms that carry significant political meanings in opposition to the government. Xiao believes that the netizens' freedom to spread information on the Internet has forced the government to listen to their popular demands. For example, the Ministry of Industry and Information Technology's plan to preinstall the mandatory censoring software Green Dam Youth Escort on computers failed after popular online opposition in 2009, the year of the 20th anniversary of the protest.
Lagerkvist states that the Chinese government does not, however, see subtle criticisms on the Internet as real threats that carry significant political meanings and could topple the government. He argues that real threats occur only when "laugh mobs" become "organised smart mobs" that directly challenge the government's power. At a TED conference, Michael Anti gave a similar reason for the government's lack of enforcement against these Internet memes. Anti suggested that the government sometimes allows limited windows of freedom of speech, such as Internet memes, in order to guide and generate public opinion that favors the government and to criticize enemies of party officials.
Internet censorship of the protest in 2013
The Chinese government has become more efficient in its Internet regulations since the 20th anniversary of the Tiananmen protest. On 3 June 2013, Sina Weibo quietly suspended use of the candle icon from the comment input tool, which netizens used to mourn the dead on forums. Some searches related to the protest on Chinese web services no longer come up with blank results, but instead return results that the government has "carefully selected." These subtle methods of government censorship may cause netizens to believe that their searched materials were not censored. The government is, however, inconsistent in its enforcement of censorship laws. Netizens reported that searches of some censored terms on Chinese web services still resulted in blank pages with a message saying that "relevant laws, regulations, and policies" prevent the display of results related to the searches.
Usage of Internet kill switch
China completely shut down Internet service in the autonomous region of Xinjiang from July 2009 to May 2010 for up to 312 days after the July 2009 Ürümqi riots.
COVID-19 pandemic
Reporters Without Borders has charged that China's policies prevented an earlier warning about the COVID-19 pandemic. At least one doctor suspected as early as 25 December 2019 that an outbreak was occurring, but may have been deterred from informing the media by the harsh punishments meted out to whistleblowers.
During the pandemic, academic research concerning the origins of the virus was censored. An investigation by ProPublica and The New York Times found that the Cyberspace Administration of China placed censorship restrictions on Chinese media outlets and social media to avoid mentions of the COVID-19 outbreak, mentions of Li Wenliang, and "activated legions of fake online commenters to flood social sites with distracting chatter".
Winnie the Pooh
Since 2013, the Disney character Winnie the Pooh has been systematically removed from the Chinese Internet following the spread of an Internet meme in which photographs of Xi and other individuals were compared to the bear and other characters from the works of A. A. Milne as re-imagined by Disney. The first heavily censored viral meme can be traced back to Xi's official visit to the United States in 2013, during which he was photographed by a Reuters photographer walking with then-US President Barack Obama in Sunnylands, California. A blog post juxtaposing the photograph with the cartoon depiction went viral, but Chinese censors rapidly deleted it. A year later came a meme featuring Xi and Shinzo Abe. When Xi Jinping inspected troops through his limousine's sunroof, a popular meme was created with Winnie the Pooh in a toy car; the widely circulated image became the most censored picture of 2015. In addition to not wanting any kind of online euphemism for the Communist Party's general secretary, the Chinese government considers that the caricature undermines the authority of the presidential office as well as the president himself, and all works comparing Xi with Winnie the Pooh are purportedly banned in China.
Other examples
In February 2018 Xi Jinping appeared to set in motion a process to scrap term limits, allowing himself to become ruler for life. To suppress criticism, censors banned phrases such as "Disagree" (不同意), "Shameless" (不要脸), "Lifelong" (终身), "Animal Farm", and at one point briefly censored the letter 'N'. Li Datong, a former state newspaper editor, wrote a critical letter that was censored; some social media users evaded the censorship by posting an upside-down screenshot of the letter.
On 13 March 2018, China's CCTV inadvertently showed Yicai's Liang Xiangyi apparently rolling her eyes in disgust at a long-winded and canned media question during the widely watched National People's Congress. In the aftermath, Liang's name became the most-censored search term on Weibo. The government also blocked the search query "journalist in blue" and attempted to censor popular memes inspired by the eye-roll.
On 21 June 2018, British-born comedian John Oliver criticized China's paramount leader Xi Jinping on his U.S. show Last Week Tonight over Xi Jinping's apparent descent into authoritarianism (including his sidelining of dissent, mistreatment of the Uyghur people and the tightening of Chinese Internet censorship), as well as the Belt and Road Initiative. As a result, the English-language name of John Oliver (although not the Chinese version) was censored on Sina Weibo and other sites on the Chinese Internet.
The American television show South Park was banned in China in 2019, and any mention of it was removed from almost all sites on the Chinese Internet, after the show criticized China's government and censorship in the season 23 episode "Band in China". Series creators Trey Parker and Matt Stone later issued a mock apology.
International influence
Foreign content providers such as Yahoo!, AOL, and Skype must abide by Chinese government wishes, including having internal content monitors, to be able to operate within mainland China. Also, per mainland Chinese laws, Microsoft began to censor the content of its blog service Windows Live Spaces, arguing that continuing to provide Internet services is more beneficial to the Chinese. Chinese journalist Michael Anti's blog on Windows Live Spaces was censored by Microsoft. In an April 2006 e-mail panel discussion Rebecca MacKinnon, who reported from China for nine years as a Beijing bureau chief for CNN, said: "... many bloggers said he [Anti] was a necessary sacrifice so that the majority of Chinese can continue to have an online space to express themselves as they choose. So the point is, compromises are being made at every level of society because nobody expects political freedom anyway."
The Chinese version of Myspace, launched in April 2007, has many censorship-related differences from other international versions of the service. Discussion forums on topics such as religion and politics are absent and a filtering system that prevents the posting of content about politically sensitive topics has been added. Users are also given the ability to report the "misconduct" of other users for offenses including "endangering national security, leaking state secrets, subverting the government, undermining national unity, spreading rumors or disturbing the social order."
Some media have suggested that China's Internet censorship of foreign websites may also be a means of forcing mainland Chinese users to rely on China's e-commerce industry, thus insulating its economy from the dominance of international corporations. On 7 November 2005, an alliance of investors and researchers representing 26 companies in the U.S., Europe and Australia, with over US$21 billion in joint assets, announced that they were urging businesses to protect freedom of expression and pledged to monitor technology companies that do business in countries that violate human rights, such as China. On 21 December 2005, the UN, OSCE and OAS special mandates on freedom of expression called on Internet corporations to "work together ... to resist official attempts to control or restrict the use of the Internet." After being attacked by hackers rumored to have been hired by the Chinese government, Google responded by threatening to pull out of China.
In 2006, Reporters Without Borders wrote that it suspects that regimes such as Cuba, Zimbabwe, and Belarus have obtained surveillance technology from China.
Evasion
Using a VPN service
Internet censorship in China is circumvented by determined parties by using proxy servers outside the firewall. Users may circumvent all of the censorship and monitoring of the Great Firewall if they have a working VPN or SSH connection to a computer outside mainland China. However, disruptions of VPN services have been reported, and free or popular services in particular are increasingly being blocked. To avoid deep packet inspection and continue providing services in China, some VPN providers have implemented server obfuscation.
Changing IP addresses
Blogs hosted on services such as Blogger and WordPress.com are frequently blocked. In response, some China-focused services explicitly offer to change a blog's IP address within 30 minutes if it is blocked by the authorities.
Using a mirror website
In 2002, Chinese citizens used the Google mirror elgooG after China blocked Google.
Modifying the network stack
In July 2006, researchers at the University of Cambridge claimed to have defeated the firewall by ignoring the TCP reset packets it injects.
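The Cambridge technique exploited the fact that the firewall tore connections down by injecting forged TCP RST packets, which endpoints can be configured to discard. The Python sketch below, which assumes the third-party scapy library and root privileges, merely observes RST packets on the wire; actually ignoring them is a kernel or firewall setting on both endpoints and is not shown here:

from scapy.all import IP, TCP, sniff

def report_rst(pkt):
    # 0x04 is the RST bit in the TCP flags field
    if pkt.haslayer(TCP) and pkt[TCP].flags & 0x04:
        print("RST from %s:%d" % (pkt[IP].src, pkt[TCP].sport))

# Watch 100 TCP packets and flag the resets among them
sniff(filter="tcp", prn=report_rst, store=False, count=100)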
Using Tor and DPI-resistant tools
Although many users use VPNs to circumvent the Great Firewall of China, many Internet connections are now subject to deep packet inspection, in which data packets are looked at in detail. Many VPNs have been blocked using this method. Blogger Grey One suggests users trying to disguise VPN usage forward their VPN traffic through port 443 because this port is also heavily used by web browsers for HTTPS connections. However, Grey points out this method is futile against advanced inspection. Obfsproxy and other pluggable transports do allow users to evade deep-packet inspection.
The Tor anonymity network was and is subject to partial blocking by China's Great Firewall. The Tor website is blocked when accessed over HTTP but it is reachable over HTTPS so it is possible for users to download the Tor Browser Bundle. The Tor project also maintains a list of website mirrors in case the main Tor website is blocked.
The Tor network maintains a public list of approximately 3000 entry relays; almost all of them are blocked. In addition to the public relays, Tor maintains bridges which are non-public relays. Their purpose is to help censored users reach the Tor network. The Great Firewall scrapes nearly all the bridge IPs distributed through bridges.torproject.org and email. According to Winter's research paper published in April 2012, this blocking technique can be circumvented by using packet fragmentation or the Tor obfsproxy bundle in combination with private obfsproxy bridges. Tor Obfs4 bridges still work in China as long as the IPs are discovered through social networks or self-published bridges.
Tor now primarily functions in China using meek, a pluggable transport that routes traffic through front-end proxies hosted on content delivery networks (CDNs) to obfuscate the information coming to and from the source and destination. Examples are Microsoft's Azure and Cloudflare.
Unintended methods
It was common in the past to use Google's cache feature to view blocked websites. However, this feature of Google seems to be under some level of blocking, as access is now erratic and does not work for blocked websites. Currently, the block is mostly circumvented by using proxy servers outside the firewall, which is not difficult for those determined to do so.
The mobile Opera Mini browser uses a proxy-based approach employing encryption and compression to speed up downloads. This has the side effect of allowing it to circumvent several approaches to Internet censorship. In 2009 this led the government of China to ban all but a special Chinese version of the browser.
Using an analogy to bypass keyword filters
As the Great Firewall of China gets more sophisticated, users are getting increasingly creative in the ways they elude the censorship, such as by using analogies to discuss topics. Furthermore, users are becoming increasingly open in their mockery of the censors by actively using homophones to avoid censorship. Deleted sites are said to have "been harmonized", a reference to CCP general secretary Hu Jintao's framing of Internet censorship within the larger idea of creating a "Socialist Harmonious Society". For example, censors are referred to as "river crabs", because that phrase is a near-homophone of "harmony" in Chinese.
Using steganography
According to The Guardian editor Charles Arthur, Internet users in China have found more technical ways to get around the Great Firewall of China, including using steganography, a practice of "embedding useful data in what looks like something irrelevant. The text of a document can be broken into its constituent bytes, which are added to the pixels of an innocent picture. The effect is barely visible on the picture, but the recipient can extract it with the right software".
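A minimal Python sketch of the least-significant-bit variant Arthur describes, with a list of integers standing in for an image's raw pixel bytes (real image decoding and encoding are omitted):

def embed(pixels, message):
    # One message bit goes into the lowest bit of each pixel byte
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels, n_bytes):
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

pixels = list(range(256)) * 4            # a stand-in "image"
stego = embed(pixels, b"meet at noon")   # pixels change by at most 1 each
assert extract(stego, 12) == b"meet at noon"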
Voices
Rupert Murdoch famously proclaimed that advances in communications technology posed an "unambiguous threat to totalitarian regimes everywhere" and Ai Weiwei argued that the Chinese "leaders must understand it's not possible for them to control the Internet unless they shut it off".
However, Nathan Freitas, a fellow at the Berkman Center for Internet and Society at Harvard and technical adviser to the Tibet Action Institute, says, "There's a growing sense within China that widely used VPN services that were once considered untouchable are now being touched." In June 2015, Jaime Blasco, a security researcher at AlienVault in Silicon Valley, reported that hackers, possibly with the assistance of the Chinese government, had found ways to circumvent the most popular privacy tools on the Internet: virtual private networks (VPNs) and Tor. This was done with the aid of a particularly serious class of vulnerability in JSONP endpoints that 15 web services in China had never patched. As long as users are logged into one of China's top web services, such as Baidu, Taobao, QQ, Sina, Sohu, or Ctrip, the hackers can identify them and access their personal information, even if they are using Tor or a VPN. The vulnerability is not new; it was published in a Chinese security and web forum around 2013.
Specific examples of evasion as Internet activism
The rapid increase of access to the Internet in China has also created new opportunities for Internet activism. For example, in terms of journalism, Marina Svensson's article "Media and Civil Society in China: Community building and networking among investigative journalists and beyond" illustrates that although Chinese journalists are not able to create their own private companies, they are using informal connections online and offline to create a community that may allow them to work around state repression. Specifically, with the development of microblogging, the new communities being formed point to the possibility of "more open expressions of solidarity and ironic resistance". However, one shortcoming of Internet activism is digital inequality. In 2016, the number of Internet users reached 731 million, an Internet penetration rate of about 53%. According to the Information and Communications Technologies Development Index (IDI), China exhibits high inequality in terms of regional and wealth differences.
Economic impact
According to the BBC, local Chinese businesses such as Baidu, Tencent and Alibaba, some of the world's largest Internet enterprises, benefited from the way China has blocked international rivals from the market, encouraging domestic competition.
According to the Financial Times, China's crackdown on VPN portals has brought business to state-approved telecom companies. Reuters reported that a Chinese state newspaper has expanded its online censoring business: the company's net income rose 140 percent in 2018, and its Shanghai-listed stock price jumped 166 percent that year.
See also
List of websites blocked in mainland China
Censorship in China
Digital divide in China
Human rights in China
Media of China
Censorship of GitHub in China
References
External links
Keywords and URLs censored on the Chinese Internet
Cyberpolice.cn (网络违法犯罪举报网站) – Ministry of Public Security P.R. China Information & Network Security
A website that lists and detects websites blocked by the GFW.
A website to test whether a resource is blocked by the GFW.
Internet Enemies: China, Reporters Without Borders
Freedom on the Net 2011: China-Freedom House: Freedom on the Net Report
Human rights abuses in China
Internet in China
China
Articles containing video clips |
154471 | https://en.wikipedia.org/wiki/Beaufort | Beaufort | Beaufort may refer to:
People and titles
Beaufort (surname)
House of Beaufort, English nobility
Duke of Beaufort (England), a title in the peerage of England
Duke of Beaufort (France), a title in the French nobility
Places
Polar regions
Beaufort Sea in the Arctic Ocean
Beaufort Island, an island in Antarctica's Ross Sea
Australia
Beaufort, Queensland, a locality in the Barcaldine Region, Queensland
Beaufort, South Australia
Beaufort, Victoria
Beaufort Inlet, an inlet located in the Great Southern region of Western Australia
Canada
Beaufort Range, Vancouver Island, British Columbia
France
Beaufort, Haute-Garonne
Beaufort, Hérault
Beaufort, Isère
Beaufort, Jura
Beaufort, Nord
Beaufort, Savoie
Beaufort-Blavincourt, Pas-de-Calais
Beaufort-en-Argonne, Meuse
Beaufort-en-Santerre, Somme
Beaufort-en-Vallée, Maine-et-Loire
Beaufort-sur-Gervanne, Drôme
Montmorency-Beaufort, Aube
Ireland
Beaufort, County Kerry
Luxembourg
Beaufort, Luxembourg
Lebanon
Beaufort Castle, Lebanon
Malaysia
Beaufort, Malaysia
Beaufort (federal constituency)
South Africa
Beaufort West, largest town in the arid Great Karoo
Fort Beaufort, town in the Amatole District of Eastern Cape Province
Port Beaufort, settlement in Eden in the Western Cape province
United Kingdom
Beaufort, Blaenau Gwent, Wales
Beaufort Castle, Scotland
Beaufort's Dyke, between Scotland and Northern Ireland
United States
Beaufort, North Carolina
Beaufort County, North Carolina
Beaufort, South Carolina
Beaufort County, South Carolina
Military uses
Bristol Beaufort, a large British torpedo bomber
CSS Beaufort, a Confederate Navy gunboat
Beaufort, a transport which served as headquarters for the Governor of Nova Scotia, Edward Cornwallis, for some Nova Scotia Council meetings
Transportation
Beaufort (automobiles), a German manufacturer of automobiles solely for the British market from 1902 to 1910
Beaufort (dinghy), a sailing dinghy designed by Ian Proctor
Beaufort, one of the GWR 3031 Class locomotives that were built for and run on the Great Western Railway between 1891 and 1915, formerly named Bellerophon before 1895
Other uses
Beaufort (film), a 2007 Israeli Oscar-nominated film, referring to Beaufort Castle, Lebanon
Beaufort (novel), title of the 2007 English-language translation of the novel אם יש גן עדן (trsl. Im Yesh Gan Eden), basis for the film
Beaufort Castle (disambiguation)
Beaufort cheese, a French cheese
Beaufort cipher, an encryption technique using a substitution cipher
Beaufort County Schools (disambiguation)
Beaufort Group, subdivisions of the Karoo Supergroup
Beaufort War Hospital, Bristol, England
Beaufort scale, an empirical measure for describing wind intensity |
156282 | https://en.wikipedia.org/wiki/Jon%20Lech%20Johansen | Jon Lech Johansen | Jon Lech Johansen (born November 18, 1983 in Harstad, Norway), also known as DVD Jon, is a Norwegian programmer who has worked on reverse engineering data formats. He wrote the DeCSS software, which decodes the Content Scramble System used for DVD licensing enforcement. Johansen is a self-trained software engineer, who quit high school during his first year to spend more time with the DeCSS case. He moved to the United States and worked as a software engineer from October 2005 until November 2006. He then moved to Norway but moved back to the United States in June 2007.
Education
In a post on his blog, he said that in the 1990s he started with a book (Programming the 8086/8088), the web ("Fravia's site was a goldmine") and IRC ("Lurked in a x86 assembly IRC channel and picked up tips from wise wizards.")
DeCSS prosecution
After Johansen released DeCSS, he was taken to court in Norway for computer hacking in 2002. The prosecution was conducted by the Norwegian National Authority for the Investigation and Prosecution of Economic and Environmental Crime (Økokrim in Norwegian), after a complaint by the US DVD Copy Control Association (DVD-CCA) and the Motion Picture Association (MPA). Johansen has denied writing the decryption code in DeCSS, saying that this part of the project originated from someone in Germany. He only developed the GUI component of the software. His defense was assisted by the Electronic Frontier Foundation. The trial opened in the Oslo District Court on 9 December 2002 with Johansen pleading not guilty to charges that had a maximum penalty of two years in prison or large fines. The defense argued that no illegal access was obtained to anyone else's information, since Johansen owned the DVDs himself. They also argued that it is legal under Norwegian law to make copies of such data for personal use. The verdict was announced on 7 January 2003, acquitting Johansen of all charges.
Two further levels of appeals were available to the prosecutors, to the appeals court and then to the Supreme Court. Økokrim filed an appeal on 20 January 2003 and it was reported on 28 February that the Borgarting Court of Appeal had agreed to hear the case. Johansen's second DeCSS trial began in Oslo on 2 December 2003, and resulted in an acquittal on 22 December 2003. Økokrim announced on 5 January 2004 that it would not appeal the case to the Supreme Court.
Other projects
In the first decade of the 21st century, Johansen's career has included many other projects.
2001
In 2001, Johansen released OpenJaz, a reverse-engineered set of drivers for Linux, BeOS and Windows 2000 that allows operation of the JazPiper MP3 digital audio player without its proprietary drivers.
2003
In November 2003, Johansen released QTFairUse, an open source program which dumps the raw output of a QuickTime Advanced Audio Coding (AAC) stream to a file, which could bypass the Digital Rights Management (DRM) software used to encrypt content of music from media such as those distributed by the iTunes Music Store, Apple Computer's online music store. Although these resulting raw AAC files were unplayable by most media players at the time of release, they represent the first attempt at circumventing Apple's encryption.
2004
Johansen had by now become a VideoLAN developer, and had reverse engineered FairPlay and written VLC's FairPlay support. It has been available in VideoLAN CVS since January 2004, but the first release to include FairPlay support is VLC 0.7.1 (released March 2, 2004).
2005
On March 18, 2005, Travis Watkins and Cody Brocious, along with Johansen, wrote PyMusique, a Python-based program which allows the download of purchased files from the iTunes Music Store without Digital Rights Management (DRM) encryption. This was possible because Apple Computer's iTunes software adds the DRM to the music file after the music file is downloaded. On March 22, Apple released a patch for the iTunes Music Store blocking the use of his PyMusique program. The same day, an update to PyMusique was released, circumventing the new patch.
On June 26, 2005, Johansen created a modification of Google's new in-browser video player (which was based on the open source VLC media player) less than 24 hours after its release, to allow the user to play videos that are not hosted on Google's servers.
In late 2005, Håkon Wium Lie, the Norwegian CTO of Opera Software, co-creator of Cascading Style Sheets and long-time supporter of open source, named Johansen a "hero" in a net meeting arranged by one of Norway's biggest newspapers. On September 2, 2005, The Register published news that DVD Jon had defeated encryption in Microsoft's Windows Media Player by reverse engineering a proprietary algorithm that was ostensibly used to protect Windows Media Station NSC files from engineers sniffing for the files' source IP address, port or stream format. Johansen had also made a decoder available.
In September 2005, Johansen announced the release of SharpMusique 1.0, an alternative to the default iTunes program. The program allows Linux and Windows users to buy songs from the iTunes music store without copy protection. In 2005, Johansen worked for MP3tunes in San Diego as a software engineer. His first project was a new digital music product, code-named Oboe.
Sony BMG DRM rootkit
In November 2005, a Slashdot story claimed that Sony BMG's Extended Copy Protection (XCP) DRM software included code and comments (such as "copyright (c) Apple Computer, Inc. All Rights Reserved.") illegally copied from an iTunes DRM circumvention program by Johansen. A popular claim was that, using the criteria that the RIAA uses in its copyright lawsuits, Johansen could sue for billions of dollars in damages.
2006
On January 8, 2006, Johansen revealed his intent to defeat the encryption of next-generation DVD encryption, Advanced Access Content System (AACS). On June 7, 2006, he announced that he had moved to San Francisco and was joining DoubleTwist Ventures. In October 2006, Johansen and DoubleTwist Ventures announced they had reverse engineered Apple Computer's DRM for iTunes, called FairPlay. Rather than allow people to strip the DRM, DoubleTwist would license the ability to apply FairPlay to media companies who wanted their music and videos to play on the iPod, without having to sign a distribution contract with Apple.
2007
In July 2007, Johansen managed to allow the iPhone to work as an iPod with WiFi, without AT&T activation.
2008
On February 2, 2008, Johansen launched doubleTwist, which allows customers to route around digital rights management in music files and convert files between various formats. The software converts digital music of any bitrate encoded with any popular codec into a format that can be played on any device.
2009
In June, he managed to get an advertisement for his application doubleTwist on the wall of the Bay Area Rapid Transit exit outside the San Francisco Apple Store, just days before the 2009 WWDC event. On June 9, it was reported that the advertisement had been removed by BART for allegedly "being too opaque" (the background was bluish) and not allowing enough light into the adjoining transit station. The advertisement was later redesigned and redeployed with a transparent background.
Awards
January 2000 - Karoline award, given to high-school students with excellent grades and noteworthy achievements in sports, arts or culture
April 2002 - EFF Pioneer Award
References
External links
Jon Johansen's blog
Electronic Frontier Norway's link collection on the Jon Johansen case: Complete (Norwegian) version, English version (links only)
DVD Jon releases program to bypass iTunes DRM
Interview with DVD Jon, from slyck.com
Jon Lech Johansen talks to DVDfuture
Wired News: DVD Jon Lands Dream Job Stateside
Libbenga, Jan (January 2, 2004) – "DVD Jon wins again"
1983 births
Living people
Norwegian Internet celebrities
Modern cryptographers
Norwegian computer programmers
Norwegian expatriates in the United States
Norwegian people of Polish descent
People from Harstad |
157105 | https://en.wikipedia.org/wiki/VxWorks | VxWorks | VxWorks is a real-time operating system (RTOS) developed as proprietary software by Wind River Systems, a wholly owned subsidiary of Aptiv. First released in 1987, VxWorks is designed for use in embedded systems requiring real-time, deterministic performance and, in many cases, safety and security certification, for industries, such as aerospace and defense, medical devices, industrial equipment, robotics, energy, transportation, network infrastructure, automotive, and consumer electronics.
VxWorks supports AMD/Intel architecture, POWER architecture, ARM architectures and RISC-V. The RTOS can be used in multicore asymmetric multiprocessing (AMP), symmetric multiprocessing (SMP), and mixed modes and multi-OS (via Type 1 hypervisor) designs on 32- and 64-bit processors.
VxWorks comes with the kernel, middleware, board support packages, Wind River Workbench development suite and complementary third-party software and hardware technologies. In its latest release, VxWorks 7, the RTOS has been re-engineered for modularity and upgradeability so the OS kernel is separate from middleware, applications and other packages. Scalability, security, safety, connectivity, and graphics have been improved to address Internet of Things (IoT) needs.
History
VxWorks started in the late 1980s as a set of enhancements to a simple RTOS called VRTX sold by Ready Systems (becoming a Mentor Graphics product in 1995). Wind River acquired rights to distribute VRTX and significantly enhanced it by adding, among other things, a file system and an integrated development environment. In 1987, anticipating the termination of its reseller contract by Ready Systems, Wind River developed its own kernel to replace VRTX within VxWorks.
Published in 2003 with a Wind River copyright, "Real-Time Concepts for Embedded Systems" describes the development environment, runtime setting, and system call families of the RTOS. Written by Wind River employees, with a foreword by Jerry Fiddler, chairman and co-founder of Wind River, the textbook is an excellent tutorial on the RTOS. (It does not, however, replace Wind River documentation as might be needed by practicing engineers.)
VxWorks key milestones are:
1980s: VxWorks adds support for 32-bit processors.
1990s: VxWorks 5 becomes the first RTOS with a networking stack.
2000s: VxWorks 6 supports SMP and adds derivative industry-specific platforms.
2010s: VxWorks adds support for 64-bit processing and introduces VxWorks 7 for IoT in 2016.
2020s: VxWorks continues to be updated and to add support, including powering the Mars 2020 rover.
Platform overview
VxWorks supports Intel architecture, Power architecture, and ARM architectures. The RTOS can be used in multi-core asymmetric multiprocessing (AMP), symmetric multiprocessing (SMP), and mixed modes and multi-OS (via Type 1 hypervisor) designs on 32- and 64-bit processors.
VxWorks consists of a set of runtime components and development tools. The runtime components are an operating system (UP and SMP; 32- and 64-bit), software for applications support (file system, core network stack, USB stack and inter-process communications) and hardware support (architecture adapter, processor support library, device driver library and board support packages). VxWorks core development tools are compilers (such as Diab, GNU, and the Intel C++ Compiler (ICC)) and its build and configuration tools. The system also includes productivity tools such as its Workbench development suite and Intel tools, and development support tools for asset tracking and host support.
The platform is a modular, vendor-neutral, open system that supports a range of third-party software and hardware. The OS kernel is separate from middleware, applications and other packages, which enables easier bug fixes and testing of new features. An implementation of a layered source build system allows multiple versions of any stack to be installed at the same time so developers can select which version of any feature set should go into the VxWorks kernel libraries.
Optional advanced technology for VxWorks provides add-on technology-related capabilities, such as:
Advanced security features to safeguard devices and data residing in and traveling across the Internet of Things (IoT)
Advanced safety partitioning to enable reliable application consolidation
Real-time advanced visual edge analytics allowing autonomous responses on VxWorks-based devices in real time without latency
Optimized embedded Java runtime engine enabling the deployment of Java applications
Virtualization capability with a real-time embedded, Type 1 hypervisor
Features
A list of some of the features of the OS are:
Multitasking kernel with preemptive and round-robin scheduling and fast interrupt response
Native 64-bit operating system (only one 64-bit architecture supported: x86-64). Data model: LP64.
User-mode applications ("Real-Time Processes", or RTP) isolated from other user-mode applications as well as the kernel via memory protection mechanisms.
SMP, AMP and mixed mode multiprocessing support
Error handling framework
Bluetooth, USB, CAN protocols, Firewire IEEE 1394, BLE, L2CAP, Continua stack, health device profile
Binary, counting, and mutual exclusion semaphores with priority inheritance
Local and distributed message queues
POSIX PSE52 certified conformity in user-mode execution environment
File systems: High Reliability File System (HRFS), FAT-based file system (DOSFS), Network File System (NFS), and TFFS
Dual-mode IPv6 networking stack with IPv6 Ready Logo certification
Memory protection including real-time processes (RTPs), error detection and reporting, and IPC
Multi-OS messaging using TIPC and Wind River multi-OS IPC
Symbolic debugging
In March 2014, Wind River introduced VxWorks 7, which emphasizes scalability, security, safety, connectivity, graphics, and virtualization. The following lists some of the release 7 updates. More information can be found on the Wind Rivers VxWorks website.
Modular, componentized architecture using a layered build system with the ability to update each layer of code independently
VxWorks microkernel (a full RTOS that can be as small as 20 KB)
Security features such as digitally-signed modules (X.509), encryption, password management, ability to add/delete users at runtime
SHA-256 hashing algorithm as the default password hashing algorithm
Human machine interface with Vector Graphics, and Tilcon user interface (UI)
Graphical user interface (GUI): OpenVG stack, Open GL, Tilcon UI, Frame Buffer Driver, EV Dev Interface
Updated configuration interfaces for VxWorks Source Build VSB projects and VxWorks Image Projects
Single authentication control used for Telnet, SSH, FTP, and rlogin daemons
Connectivity with Bluetooth and SocketCAN protocol stacks
Inclusion of MIPC File System (MFS) and MIPC Network Device (MND)
Networking features with 64-bit support including Wind River MACsec, Wind River's implementation of IEEE 802.1A, Point-to-Point Protocol (PPP) over L2TP, PPP over virtual local area network (VLAN) and Diameter secure key storage
New Wind River Workbench 4 for VxWorks 7 integrated development environment with new system analysis tools
Wind River Diab Compiler 5.9.4; Wind River GNU Compiler 4.8; Intel C++ Compiler 14 and Intel Integrated Performance Primitives (IPP) 8
Hardware support
VxWorks has been ported to a number of platforms and now runs on practically any modern CPU that is used in the embedded market. This includes the Intel x86 family (including the Intel Quark SoC), MIPS, PowerPC (and BAE RAD), Freescale ColdFire, Intel i960, SPARC, Fujitsu FR-V, SH-4 and the closely related family of ARM, StrongARM and xScale CPUs. VxWorks provides a standard board support package (BSP) interface between all its supported hardware and the OS. Wind River's BSP developer kit provides a common application programming interface (API) and a stable environment for real-time operating system development. VxWorks is supported by popular SSL/TLS libraries such as wolfSSL.
Development environment
As is common in embedded system development, cross-compiling is used with VxWorks. Development is done on a "host" system where an integrated development environment (IDE), including the editor, compiler toolchain, debugger, and emulator can be used. Software is then compiled to run on the "target" system. This allows the developer to work with powerful development tools while targeting more limited hardware. VxWorks uses the following host environments and target hardware architectures:
Supported target architectures and processor families
VxWorks supports the following target architectures:
ARM
Intel architecture
Power architecture
RISC-V architecture
For the latest target architecture, processors and board support packages, refer to the VxWorks Marketplace: https://marketplace.windriver.com/index.php?bsp&on=locate&type=platform
The Eclipse-based Workbench IDE that comes with VxWorks is used to configure, analyze, optimize, and debug a VxWorks-based system under development. The Tornado IDE was used for VxWorks 5.x and was replaced by the Eclipse-based Workbench IDE for VxWorks 6.x. and later. Workbench is also the IDE for the Wind River Linux, On-Chip Debugging, and Wind River Diab Compiler product lines. VxWorks 7 uses Wind River Workbench 4 which updates to the Eclipse 4 base provide full third party plug-in support and usability improvements.
Wind River Simics is a standalone simulation tool compatible with VxWorks. It simulates the full target system (hardware and software) to create a shared platform for software development. Multiple developers can share a complete virtual system and its entire state, including execution history. Simics enables early and continuous system integration and faster prototyping by utilizing virtual prototypes instead of physical prototypes.
Notable uses
VxWorks is used by products across a wide range of market areas: aerospace and defense, automotive, industrial such as robots, consumer electronics, medical area and networking. Several notable products also use VxWorks as the onboard operating system.
Aerospace and defense
Spacecraft
The Mars 2020 rover launched in 2020
The Mars Reconnaissance Orbiter
The Mars Science Laboratory, also known as the Curiosity rover
NASA Mars rovers (Sojourner, Spirit, Opportunity)
The Deep Space Program Science Experiment (DSPSE), also known as Clementine, launched in 1994 running VxWorks 5.1 on a MIPS-based CPU responsible for the star tracker and image processing algorithms. The use of a commercial RTOS on board a spacecraft was considered experimental at the time.
Phoenix Mars lander
The Deep Impact space probe
The Mars Pathfinder mission
The SpaceX Dragon
NASA's Juno space probe sent to Jupiter
Aircraft
AgustaWestland Project Zero
Northrop Grumman X-47B Unmanned Combat Air System
Airbus A400M Airlifter
BAE Systems Tornado Advanced Radar Display Information System (TARDIS) used in the Tornado GR4 aircraft for the U.K. Royal Air Force
Lockheed Martin RQ-170 Sentinel UAV
Boeing 787
Space telescopes
Fermi Gamma-ray Space Telescope(FGST)
James Webb Space Telescope
Others
European Geostationary Navigation Overlay System (EGNOS)
TacNet Tracker, Sandia National Laboratory’s rugged handheld communication device
BAE Systems SCC500TM series of infrared camera cores
Barco CDMS-3000 next generation control display and management system
Automotive
Toshiba TMPV75 Series image recognition SoCs for advanced driver assistance systems (ADAS)
Bosch Motor Sports race car telemetry system
Hyundai Mobis IVI system
Magneti Marelli's telemetry logger and GENIVI-compliant infotainment system
BMW iDrive system after 2008
Siemens VDO automotive navigation systems
Most of Renault Trucks T, K and C trucks' electronic control units.
European Volkswagen RNS 510 navigation systems.
Consumer electronics
Apple Airport Extreme
AMX NetLinx Controllers (NI-xx00/x00)
Brother printers
Drobo data storage robot
Honda robot ASIMO
Linksys WRT54G wireless routers (versions 5.0 and later)
MacroSystem Casablanca-2 digital video editor (Avio, Kron, Prestige, Claro, Renommee, Solitaire)
Motorola's DCT2500 interactive digital set-top box
Mobile Technika MobbyTalk and MobbyTalk253 phones
ReplayTV home digital video recorder
Industrial
Industrial robots
ABB industrial robots
The C5G robotic project by Comau
KUKA industrial robots
Stäubli industrial robots
Yaskawa Electric Corporation's industrial robots
Comau Robotics SMART5 industrial robot
Test and Measurement
Teledyne LeCroy WaveRunner LT, WaveRunner2LT and WavePro 900 oscilloscope series
Hexagon Metrology GLOBAL Silver coordinate measuring machine (CMM)
Transportation
FITSCO Automatic Train Protection (ATP) system
Bombardier HMI410 Train Information System
Controllers
Bachmann M1 Controller System
Invensys Foxboro PAC System
National Instruments CompactRIO 901x, 902x 907x controllers
The Experimental Physics and Industrial Control System (EPICS)
Bosch Rexroth Industrial Tightening Control Systems
MCE iBox elevator controller
Schneider Electric Industrial Controller
B&R Automation Runtime
Storage systems
External RAID controllers designed by the LSI Corporation/Engenio prior to 2011, now designed by NetApp. And used in RDAC class arrays as NetApp E/EF Series and OEM arrays
Fujitsu ETERNUS DX S3 family of unified data storage arrays
Imaging
Toshiba eBridge based range of photocopiers
Others
GrandMA Full-Size and Light Console by MA Lighting
Medical
Varian Medical Systems Truebeam - a radiotherapy device for treating cancer
Olympus Corporation's surgical generator
BD Biosciences FACSCount HIV/AIDS Monitoring System
Fedegari Autoclavi S.p.A. Thema4 process controller
Sirona Dental Systems: CEREC extraoral X-ray CAD/CAM systems
General Electric Healthcare: CT and MRI scanners.
Carl Zeiss Meditec: Humphrey Field Analyzer HFA-II Series
Philips C-Arm Radiology Equipment
Networking and communication infrastructure
Arkoon Network Security appliances
Ubee Interactive's AirWalk EdgePoint
Kontron's ACTA processor boards
QQTechnologies's QQSG
A significant portion of Huawei's telecoms equipment uses VxWorks
BroadLight’s GPON/PON products
Shiron Satellite Communications’ InterSKY
Sky Pilot's SkyGateway, SkyExtender and SkyControl
EtherRaptor-1010 by Raptor Network Technology
CPG-3000 and CPX-5000 routers from Siemens
Nokia Solutions and Networks FlexiPacket series microwave engineering product
Acme Packet Net-Net series of Session Border Controllers
Alcatel-Lucent IP Touch 40x8 IP Deskphones
Avaya ERS 8600
Avaya IP400 Office
Cisco CSS platform
Cisco ONS platform
Ciena Common Photonic Layer
Dell PowerConnect switches that are 'powered by' Broadcom, except latest PCT8100 which runs on Linux platform
Ericsson SmartEdge routers (SEOS 11 run NetBSD 3.0 and VxWorks for Broadcom BCM1480 version 5.5.1 kernel version 2.6)
Hewlett Packard HP 9000 Superdome Guardian Service Processor
Hirschmann EAGLE20 Industrial Firewall
HughesNet/Direcway satellite internet modems
Mitel Networks' MiVoice Business (formerly Mitel Communications Director (MCD)), 3300 ICP Media Gateways and SX-200 and SX-200 ICP.
Motorola Solutions MCD5000 IP Deskset System
Motorola SB5100 cable modem
Motorola Cable Headend Equipment including SEM, NC, OM and other lines
Nortel CS1000 PBX (formerly Nortel Meridian 1 (Option 11C, Option 61C, Option 81C)
Nortel Passport
Radware OnDemand Switches
Samsung DCS and OfficeServ series PBX
SonicWALL firewalls
Thuraya SO-2510 satellite phone and ThurayaModule
Radvision 3G communications equipment
3com NBX phone systems
Zhone Technologies access systems
Oracle EAGLE STP system
TCP vulnerability and CVE patches
In July 2019, a paper published by Armis exposed 11 critical vulnerabilities, including remote code execution, denial of service, information leaks, and logical flaws, impacting more than two billion devices using the VxWorks RTOS. The findings are significant since this system is in use by quite a few mission-critical products. A demonstration video from Armis showed how an attacker can tunnel into an internal network using the vulnerability and hack into printers, laptops, and other connected devices. The attack can bypass firewalls as well.
Information and patches for all VxWorks versions affected by Urgent/11 vulnerability can be obtained from Wind River.
As of December 2021, some CVEs are still documented in the NIST database.
Stale Data Retention
The Wind River VxWorks operating system is used on the Boeing 787-8, 787-9 and 787-10 aircraft. As of April 2, 2020, the US Federal Aviation Administration requires the operating system to be power-cycled, or turned off and on, every fifty-one (51) days. The reason for requiring the periodic reboot of the common core system (CCS) is that its failure when continuously powered could lead to a loss of the common data network (CDN) message age validation, which filters out stale data from key flight control displays. From the FAA Air Directive: "The potential loss of the stale-data monitoring function of the CCS when continuously powered on for 51 days, if not addressed, could result in erroneous flight-critical data being routed and displayed as valid data, which could reduce the ability of the flight crew to maintain the safe flight and landing of the airplane."
References
External links
ARM operating systems
Embedded operating systems
Intel software
MIPS operating systems
PowerPC operating systems
Real-time operating systems
Robot operating systems
X86 operating systems |
157792 | https://en.wikipedia.org/wiki/Autokey%20cipher | Autokey cipher | An autokey cipher (also known as the autoclave cipher) is a cipher that incorporates the message (the plaintext) into the key. The key is generated from the message in some automated fashion, sometimes by selecting certain letters from the text or, more commonly, by adding a short primer key to the front of the message.
There are two forms of autokey cipher: key-autokey and text-autokey ciphers. A key-autokey cipher uses previous members of the keystream to determine the next element in the keystream. A text-autokey uses the previous message text to determine the next element in the keystream.
In modern cryptography, self-synchronising stream ciphers are autokey ciphers.
History
This cipher was invented in 1586 by Blaise de Vigenère with a reciprocal table of ten alphabets. Vigenère's version used an agreed-upon letter of the alphabet as a primer, making the key by writing down that letter and then the rest of the message.
More popular autokeys use a tabula recta, a square with 26 copies of the alphabet, the first line starting with 'A', the next line starting with 'B' etc. Instead of a single letter, a short agreed-upon keyword is used, and the key is generated by writing down the primer and then the rest of the message, as in Vigenère's version. To encrypt a plaintext, the row with the first letter of the message and the column with the first letter of the key are located. The letter in which the row and the column cross is the ciphertext letter.
Method
The autokey cipher, as used by members of the American Cryptogram Association, starts with a relatively-short keyword, the primer, and appends the message to it. If, for example, the keyword is QUEENLY and the message is attack at dawn, the key would be QUEENLYATTACKATDAWN.
Plaintext: attackatdawn...
Key: QUEENLYATTACKATDAWN....
Ciphertext: QNXEPVYTWTWP...
The ciphertext message would thus be "QNXEPVYTWTWP".
To decrypt the message, the recipient would start by writing down the agreed-upon keyword.
QNXEPVYTWTWP
QUEENLY
The first letter of the key, Q, would then be taken, and that row would be found in a tabula recta. That column for the first letter of the ciphertext would be looked across, also Q in this case, and the letter to the top would be retrieved, A. Now, that letter would be added to the end of the key:
QNXEPVYTWTWP
QUEENLYA
a
Then, since the next letter in the key is U and the next letter in the ciphertext is N, the U row is looked across to find the N to retrieve T:
QNXEPVYTWTWP
QUEENLYAT
at
That continues until the entire key is reconstructed, when the primer can be removed from the start.
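The whole scheme fits in a few lines of code. This Python sketch replaces the printed tabula recta with addition and subtraction modulo 26 and reproduces the QUEENLY example above:

def autokey_encrypt(plaintext, primer):
    pt = [c for c in plaintext.upper() if c.isalpha()]
    key = list(primer.upper()) + pt              # primer + message
    return "".join(chr((ord(p) + ord(k) - 2 * ord("A")) % 26 + ord("A"))
                   for p, k in zip(pt, key))

def autokey_decrypt(ciphertext, primer):
    key, out = list(primer.upper()), []
    for c in ciphertext.upper():
        p = chr((ord(c) - ord(key[len(out)])) % 26 + ord("A"))
        out.append(p)
        key.append(p)                            # recovered text extends the key
    return "".join(out)

assert autokey_encrypt("attack at dawn", "QUEENLY") == "QNXEPVYTWTWP"
assert autokey_decrypt("QNXEPVYTWTWP", "QUEENLY") == "ATTACKATDAWN"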
With Vigenère's autokey cipher, a single mistake in encryption renders the rest of the message unintelligible.
Cryptanalysis
Autokey ciphers are somewhat more secure than polyalphabetic ciphers that use fixed keys since the key does not repeat within a single message. Therefore, methods like the Kasiski examination or index of coincidence analysis will not work on the ciphertext, unlike for similar ciphers that use a single repeated key.
A crucial weakness of the system, however, is that the plaintext is part of the key. That means that the key will likely contain common words at various points. The key can be attacked by using a dictionary of common words, bigrams, trigrams etc. and by attempting the decryption of the message by moving that word through the key until potentially-readable text appears.
Consider an example message meet at the fountain encrypted with the primer keyword KILT: To start, the autokey would be constructed by placing the primer at the front of the message:
plaintext: meetatthefountain
primer: KILT
autokey: KILTMEETATTHEFOUN
The message is then encrypted by using the key and the substitution alphabets, here a tabula recta:
plaintext: meetatthefountain
key: KILTMEETATTHEFOUN
ciphertext: WMPMMXXAEYHBRYOCA
The attacker receives only the ciphertext and can attack the text by selecting a word that is likely to appear in the plaintext. In this example, the attacker selects the word the as a potential part of the original message and then attempts to decode it by placing THE at every possible location in the key:
ciphertext: WMP MMX XAE YHB RYO CA
key: THE THE THE THE THE ..
plaintext: dfl tft eta fax yrk ..
ciphertext: W MPM MXX AEY HBR YOC A
key: . THE THE THE THE THE .
plaintext: . tii tqt hxu oun fhy .
ciphertext: WM PMM XXA EYH BRY OCA
key: .. THE THE THE THE THE
plaintext: .. wfi eqw lrd iku vvw
In each case, the resulting plaintext appears almost random because the key is not aligned for most of the ciphertext. However, examining the results can suggest locations of the key being properly aligned. In those cases, the resulting decrypted text is potentially part of a word. In this example, it is highly unlikely that dfl is the start of the original plaintext and so it is highly unlikely either that the first three letters of the key are THE. Examining the results, a number of fragments that are possibly words can be seen and others can be eliminated. Then, the plaintext fragments can be sorted in their order of likelihood:
unlikely ←——————————————————→ promising
eqw dfl tqt ... ... ... ... eta oun fax
A correct plaintext fragment is also going to appear in the key, shifted right by the length of the keyword. Similarly, the guessed key fragment (THE) also appears in the plaintext shifted left. Thus, by guessing keyword lengths (probably between 3 and 12), more plaintext and key can be revealed.
Trying that with oun, possibly after wasting some time with the others, results in the following:
shift by 4:
ciphertext: WMPMMXXAEYHBRYOCA
key: ......ETA.THE.OUN
plaintext: ......the.oun.ain
shift by 5:
ciphertext: WMPMMXXAEYHBRYOCA
key: .....EQW..THE..OU
plaintext: .....the..oun..og
shift by 6:
ciphertext: WMPMMXXAEYHBRYOCA
key: ....TQT...THE...O
plaintext: ....the...oun...m
A shift of 4 can be seen to look good (both of the others have unlikely Qs) and so the revealed ETA can be shifted back by 4 into the plaintext:
ciphertext: WMPMMXXAEYHBRYOCA
key: ..LTM.ETA.THE.OUN
plaintext: ..eta.the.oun.ain
A lot can be worked with now. The keyword is probably 4 characters long (..LT), and some of the message is visible:
m.eta.the.oun.ain
Because the plaintext guesses have an effect on the key 4 characters to the left, feedback on correct and incorrect guesses is given. The gaps can quickly be filled in:
meetatthefountain
The ease of cryptanalysis is caused by the feedback from the relationship between plaintext and key. A three-character guess reveals six more characters (three on each side), which then reveal further characters, creating a cascade effect. That allows incorrect guesses to be ruled out quickly.
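This crib-dragging step is easy to mechanize. A rough Python sketch of the attack idea (our own illustration, not a standard tool): slide a probable word along the key positions and print the plaintext fragment each position would imply, for a human to scan:

A = ord('A')

def drag_crib(ciphertext, crib):
    # Try the crib as a key fragment at every offset and show the plaintext it implies.
    crib = crib.upper()
    for offset in range(len(ciphertext) - len(crib) + 1):
        fragment = ''.join(chr((ord(c) - ord(k)) % 26 + A).lower()
                           for c, k in zip(ciphertext[offset:], crib))
        print(offset, fragment)

drag_crib('WMPMMXXAEYHBRYOCA', 'THE')
# offset 0 prints dfl, offset 3 prints tft, and so on, matching the fragments shown above.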
See also
Chaocipher
Cipher Block Chaining
Notes
References
Bellaso, Giovan Battista, Il vero modo di scrivere in cifra con facilità, prestezza, et securezza di Misser Giovan Battista Bellaso, gentil’huomo bresciano, Iacobo Britannico, Bressa 1564.
Vigenère, Blaise de, Traicté des chiffres ou secrètes manières d’escrire, Abel l’Angelier, Paris 1586. ff. 46r-49v.
LABRONICUS (Buonafalce, A), Early Forms of the Porta Table, “The Cryptogram”, vol. LX n. 2, Wilbraham 1994.
Buonafalce, Augusto, Bellaso’s Reciprocal Ciphers, “Cryptologia” 30 (1):39-51, 2006.
LABRONICUS (Buonafalce, A), Vigenère and Autokey. An Update, “The Cryptogram”, vol. LXXIV n. 3, Plano 2008.
External links
Secret Code Breaker - AutoKey Cipher Decoder and Encoder
A Javascript implementation of the Autokey cipher
Classical ciphers
Stream ciphers

Tabula recta

In cryptography, the tabula recta (from Latin tabula rēcta) is a square table of alphabets, each row of which is made by shifting the previous one to the left. The term was invented by the German author and monk Johannes Trithemius in 1508, and used in his Trithemius cipher.
Trithemius cipher
The Trithemius cipher was published by Johannes Trithemius in his book Polygraphia, which is credited with being the first published printed work on cryptology.
Trithemius used the tabula recta to define a polyalphabetic cipher, which was equivalent to Leon Battista Alberti's cipher disk except that the order of the letters in the target alphabet is not mixed. The tabula recta is often referred to in discussing pre-computer ciphers, including the Vigenère cipher and Blaise de Vigenère's less well-known autokey cipher. All polyalphabetic ciphers based on the Caesar cipher can be described in terms of the tabula recta.
The tabula recta uses a letter square with the 26 letters of the alphabet followed by 26 rows of additional letters, each shifted once to the left from the one above it. This, in essence, creates 26 different Caesar ciphers.
The resulting ciphertext appears as a random string or block of data. Due to the variable shifting, natural letter frequencies are hidden. However, if a codebreaker is aware that this method has been used, it becomes easy to break. The cipher is vulnerable to attack because it lacks a key, thus violating Kerckhoffs's principle of cryptology.
Improvements
In 1553, an important extension to Trithemius's method was developed by Giovan Battista Bellaso, now called the Vigenère cipher. Bellaso added a key, which is used to dictate the switching of cipher alphabets with each letter. This method was misattributed to Blaise de Vigenère, who published a similar autokey cipher in 1586.
The classic Trithemius cipher (using a shift of one) is equivalent to a Vigenère cipher with ABCDEFGHIJKLMNOPQRSTUVWXYZ as the key. It is also equivalent to a Caesar cipher in which the shift is increased by 1 with each letter, starting at 0.
Usage
Within the body of the tabula recta, each alphabet is shifted one letter to the left from the one above it. This forms 26 rows of shifted alphabets, ending with an alphabet starting with Z (as shown in image). Separate from these 26 alphabets are a header row at the top and a header column on the left, each containing the letters of the alphabet in A-Z order.
The tabula recta can be used in several equivalent ways to encrypt and decrypt text. Most commonly, the left-side header column is used for the plaintext letters, both with encryption and decryption. That usage will be described herein. In order to decrypt a Trithemius cipher, one first locates in the tabula recta the letters to decrypt: first letter in the first interior column, second letter in the second column, etc.; the letter directly to the far left, in the header column, is the corresponding decrypted plaintext letter. Assuming a standard shift of 1 with no key used, the encrypted text HFNOS would be decrypted to HELLO (H->H, F->E, N->L, O->L, S->O). So, for example, to decrypt the second letter of this text, first find the F within the second interior column, then move directly to the left, all the way to the leftmost header column, to find the corresponding plaintext letter: E.
Data is encrypted in the opposite fashion, by first locating each plaintext letter of the message in the leftmost header column of the tabula recta, and mapping it to the appropriate corresponding letter in the interior columns. For example, the first letter of the message is found within the left header column, and then mapped to the letter directly across in the column headed by "A". The next letter is then mapped to the corresponding letter in the column headed by "B", and this continues until the entire message is encrypted. If the Trithemius cipher is thought of as having the key ABCDEFGHIJKLMNOPQRSTUVWXYZ, the encryption process can also be conceptualized as finding, for each letter, the intersection of the row containing the letter to be encrypted with the column corresponding to the current letter of the key. The letter where this row and column cross is the ciphertext letter.
Programmatically, the cipher is computable: assigning A = 0, B = 1, ..., Z = 25, the encryption of each letter is

C_i = (P_i + K_i) mod 26

Decryption follows the same process, exchanging ciphertext and plaintext:

P_i = (C_i − K_i) mod 26

K_i may be defined as the value of a letter from a companion ciphertext in a running key cipher, a constant for a Caesar cipher, or a zero-based counter with some period in Trithemius's usage.
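As a minimal sketch of that computation in Python (our own illustration), using the zero-based counter of Trithemius's usage as the key:

A = ord('A')

def trithemius_encrypt(plaintext, shift0=0):
    # Tabula recta with a zero-based counter as the key stream.
    return ''.join(chr((ord(p) - A + shift0 + i) % 26 + A)
                   for i, p in enumerate(plaintext.upper()))

def trithemius_decrypt(ciphertext, shift0=0):
    return ''.join(chr((ord(c) - A - shift0 - i) % 26 + A)
                   for i, c in enumerate(ciphertext.upper()))

print(trithemius_decrypt('HFNOS'))   # HELLO, as in the example above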
References
Citations
Sources
Classical ciphers

Index of coincidence

In cryptography, coincidence counting is the technique (invented by William F. Friedman) of putting two texts side-by-side and counting the number of times that identical letters appear in the same position in both texts. This count, either as a ratio of the total or normalized by dividing by the expected count for a random source model, is known as the index of coincidence, or IC for short.
Because letters in a natural language are not distributed evenly, the IC is higher for such texts than it would be for uniformly random text strings. What makes the IC especially useful is the fact that its value does not change if both texts are scrambled by the same single-alphabet substitution cipher, allowing a cryptanalyst to quickly detect that form of encryption.
Calculation
The index of coincidence provides a measure of how likely it is to draw two matching letters by randomly selecting two letters from a given text. The chance of drawing a given letter in the text is (number of times that letter appears / length of the text). The chance of drawing that same letter again (without replacement) is (appearances − 1 / text length − 1). The product of these two values gives the chance of drawing that letter twice in a row. One can find this product for each letter that appears in the text, then sum these products to get a chance of drawing two of a kind. This probability can then be normalized by multiplying it by some coefficient, typically 26 in English:

IC = c × [n_a(n_a − 1) + n_b(n_b − 1) + ... + n_z(n_z − 1)] / [N(N − 1)]

where c is the normalizing coefficient (26 for English), n_a is the number of times the letter "a" appears in the text, and N is the length of the text.
We can express the index of coincidence IC for a given letter-frequency distribution as a summation:

IC = [Σ_{i=1}^{c} n_i(n_i − 1)] / [N(N − 1)/c]

where N is the length of the text and n_1 through n_c are the frequencies (as integers) of the c letters of the alphabet (c = 26 for monocase English). The sum of the n_i is necessarily N.
The products n_i(n_i − 1) count the number of combinations of n_i elements taken two at a time. (Actually this counts each pair twice; the extra factors of 2 occur in both numerator and denominator of the formula and thus cancel out.) Each of the n_i occurrences of the i-th letter matches each of the remaining n_i − 1 occurrences of the same letter. There are a total of N(N − 1) letter pairs in the entire text, and 1/c is the probability of a match for each pair, assuming a uniform random distribution of the characters (the "null model"; see below). Thus, this formula gives the ratio of the total number of coincidences observed to the total number of coincidences that one would expect from the null model.
The expected average value for the IC can be computed from the relative letter frequencies f_i of the source language:

IC_expected = c × Σ_{i=1}^{c} f_i²

If all letters of an alphabet were equally probable, the expected index would be 1.0.
The actual monographic IC for telegraphic English text is around 1.73, reflecting the unevenness of natural-language letter distributions.
Sometimes values are reported without the normalizing denominator, for example 0.067 ≈ 1.73/26 for English; such values may be called κ_p ("kappa-plaintext") rather than IC, with κ_r ("kappa-random") used to denote the denominator (which is the expected coincidence rate for a uniform distribution of the same alphabet, 1/26 ≈ 0.0385 for English).
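As a small sketch in Python (our own illustration), the normalized IC of a piece of text can be computed directly from the definition above:

from collections import Counter

def index_of_coincidence(text, c=26):
    # Ratio of observed coincidences to those expected for uniformly random text.
    letters = [ch for ch in text.upper() if ch.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    coincidences = sum(k * (k - 1) for k in counts.values())
    return c * coincidences / (n * (n - 1))

English prose typically scores near 1.73, while uniformly random letters score near 1.0.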
Application
The index of coincidence is useful both in the analysis of natural-language plaintext and in the analysis of ciphertext (cryptanalysis). Even when only ciphertext is available for testing and plaintext letter identities are disguised, coincidences in ciphertext can be caused by coincidences in the underlying plaintext. This technique is used to cryptanalyze the Vigenère cipher, for example. For a repeating-key polyalphabetic cipher arranged into a matrix, the coincidence rate within each column will usually be highest when the width of the matrix is a multiple of the key length, and this fact can be used to determine the key length, which is the first step in cracking the system.
Coincidence counting can help determine when two texts are written in the same language using the same alphabet. (This technique has been used to examine the purported Bible code). The causal coincidence count for such texts will be distinctly higher than the accidental coincidence count for texts in different languages, or texts using different alphabets, or gibberish texts.
To see why, imagine an "alphabet" of only the two letters A and B. Suppose that in our "language", the letter A is used 75% of the time, and the letter B is used 25% of the time. If two texts in this language are laid side by side, then the following pairs can be expected:
Overall, the probability of a "coincidence" is 62.5% (56.25% for AA + 6.25% for BB).
Now consider the case when both messages are encrypted using the simple monoalphabetic substitution cipher which replaces A with B and vice versa:
The overall probability of a coincidence in this situation is 62.5% (6.25% for AA + 56.25% for BB), exactly the same as for the unencrypted "plaintext" case. In effect, the new alphabet produced by the substitution is just a uniform renaming of the original character identities, which does not affect whether they match.
Now suppose that only one message (say, the second) is encrypted using the same substitution cipher (A,B)→(B,A). The following pairs can now be expected:
Now the probability of a coincidence is only 37.5% (18.75% for AA + 18.75% for BB). This is noticeably lower than the probability when same-language, same-alphabet texts were used. Evidently, coincidences are more likely when the most frequent letters in each text are the same.
The same principle applies to real languages like English, because certain letters, like E, occur much more frequently than other letters—a fact which is used in frequency analysis of substitution ciphers. Coincidences involving the letter E, for example, are relatively likely. So when any two English texts are compared, the coincidence count will be higher than when an English text and a foreign-language text are used.
It can easily be imagined that this effect can be subtle. For example, similar languages will have a higher coincidence count than dissimilar languages. Also, it is not hard to generate random text with a frequency distribution similar to real text, artificially raising the coincidence count. Nevertheless, this technique can be used effectively to identify when two texts are likely to contain meaningful information in the same language using the same alphabet, to discover periods for repeating keys, and to uncover many other kinds of nonrandom phenomena within or among ciphertexts.
Expected values for various languages are:
Generalization
The above description is only an introduction to use of the index of coincidence, which is related to the general concept of correlation. Various forms of Index of Coincidence have been devised; the "delta" I.C. (given by the formula above) in effect measures the autocorrelation of a single distribution, whereas a "kappa" I.C. is used when matching two text strings. Although in some applications constant factors (such as the alphabet size c and the text length N) can be ignored, in more general situations there is considerable value in truly indexing each I.C. against the value to be expected for the null hypothesis (usually: no match and a uniform random symbol distribution), so that in every situation the expected value for no correlation is 1.0. Thus, any form of I.C. can be expressed as the ratio of the number of coincidences actually observed to the number of coincidences expected (according to the null model), using the particular test setup.
From the foregoing, it is easy to see that the formula for kappa I.C. is

κ = c × Σ_{j=1}^{N} [A_j = B_j] / N

where N is the common aligned length of the two texts A and B, and the bracketed term [A_j = B_j] is defined as 1 if the j-th letter of text A matches the j-th letter of text B, otherwise 0.
A related concept, the "bulge" of a distribution, measures the discrepancy between the observed I.C. and the null value of 1.0. The number of cipher alphabets used in a polyalphabetic cipher may be estimated by dividing the expected bulge of the delta I.C. for a single alphabet by the observed bulge for the message, although in many cases (such as when a repeating key was used) better techniques are available.
Example
As a practical illustration of the use of I.C., suppose that we have intercepted the following ciphertext message:
QPWKA LVRXC QZIKG RBPFA EOMFL JMSDZ VDHXC XJYEB IMTRQ WNMEA
IZRVK CVKVL XNEIC FZPZC ZZHKM LVZVZ IZRRQ WDKEC HOSNY XXLSP
MYKVQ XJTDC IOMEE XDQVS RXLRL KZHOV
(The grouping into five characters is just a telegraphic convention and has nothing to do with actual word lengths.)
Suspecting this to be an English plaintext encrypted using a Vigenère cipher with normal A–Z components and a short repeating keyword, we can consider the ciphertext "stacked" into some number of columns, for example seven:
QPWKALV
RXCQZIK
GRBPFAE
OMFLJMS
DZVDHXC
XJYEBIM
TRQWN…
If the key size happens to have been the same as the assumed number of columns, then all the letters within a single column will have been enciphered using the same key letter, in effect a simple Caesar cipher applied to a random selection of English plaintext characters. The corresponding set of ciphertext letters should have a roughness of frequency distribution similar to that of English, although the letter identities have been permuted (shifted by a constant amount corresponding to the key letter). Therefore, if we compute the aggregate delta I.C. for all columns ("delta bar"), it should be around 1.73. On the other hand, if we have incorrectly guessed the key size (number of columns), the aggregate delta I.C. should be around 1.00. So we compute the delta I.C. for assumed key sizes from one to ten:
We see that the key size is most likely five. If the actual size is five, we would expect a width of ten to also report a high I.C., since each of its columns also corresponds to a simple Caesar encipherment, and we confirm this.
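As a sketch of this step in Python (our own illustration, reusing the index_of_coincidence function from above), stack the text at each candidate width and average the per-column ICs:

def average_column_ic(ciphertext, width):
    # "Delta bar": mean IC of the columns when the text is stacked 'width' letters wide.
    columns = [ciphertext[i::width] for i in range(width)]
    return sum(index_of_coincidence(col) for col in columns) / width

ct = ('QPWKALVRXCQZIKGRBPFAEOMFLJMSDZVDHXCXJYEBIMTRQWNMEA'
      'IZRVKCVKVLXNEICFZPZCZZHKMLVZVZIZRRQWDKECHOSNYXXLSP'
      'MYKVQXJTDCIOMEEXDQVSRXLRLKZHOV')
for width in range(1, 11):
    print(width, round(average_column_ic(ct, width), 2))
# The average should peak near 1.7 at width 5 (and again at 10), and sit near 1.0 elsewhere.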
So we should stack the ciphertext into five columns:
QPWKA
LVRXC
QZIKG
RBPFA
EOMFL
JMSDZ
VDH…
We can now try to determine the most likely key letter for each column considered separately, by performing trial Caesar decryption of the entire column for each of the 26 possibilities A–Z for the key letter, and choosing the key letter that produces the highest correlation between the decrypted column letter frequencies and the relative letter frequencies for normal English text. That correlation, which we don't need to worry about normalizing, can be readily computed as

fitness = Σ_{i=A}^{Z} f_i × e_i

where f_i are the observed column letter frequencies and e_i are the relative letter frequencies for English.
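Continuing the Python sketch from above (with ct as before; the frequency table is approximate), this trial decryption looks like:

ENGLISH_FREQ = {'E': .127, 'T': .091, 'A': .082, 'O': .075, 'I': .070, 'N': .067,
                'S': .063, 'H': .061, 'R': .060, 'D': .043, 'L': .040, 'C': .028,
                'U': .028, 'M': .024, 'W': .024, 'F': .022, 'G': .020, 'Y': .020,
                'P': .019, 'B': .015, 'V': .010, 'K': .008, 'J': .002, 'X': .002,
                'Q': .001, 'Z': .001}
A = ord('A')

def best_key_letter(column):
    # Pick the Caesar shift whose trial decryption best matches English frequencies.
    score = lambda k: sum(ENGLISH_FREQ[chr((ord(c) - A - k) % 26 + A)] for c in column)
    return chr(max(range(26), key=score) + A)

print(''.join(best_key_letter(ct[i::5]) for i in range(5)))   # expected: EVERY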
When we try this, the best-fit key letters are reported to be "EVERY," which we recognize as an actual word, and using that for Vigenère decryption produces the plaintext:
MUSTC HANGE MEETI NGLOC ATION FROMB RIDGE TOUND ERPAS
SSINC EENEM YAGEN TSARE BELIE VEDTO HAVEB EENAS SIGNE
DTOWA TCHBR IDGES TOPME ETING TIMEU NCHAN GEDXX
from which one obtains:
MUST CHANGE MEETING LOCATION FROM BRIDGE TO UNDERPASS
SINCE ENEMY AGENTS ARE BELIEVED TO HAVE BEEN ASSIGNED
TO WATCH BRIDGE STOP MEETING TIME UNCHANGED XX
after word divisions have been restored at the obvious positions. "XX" are evidently "null" characters used to pad out the final group for transmission.
This entire procedure could easily be packaged into an automated algorithm for breaking such ciphers. Due to normal statistical fluctuation, such an algorithm will occasionally make wrong choices, especially when analyzing short ciphertext messages.
References
See also
Kasiski examination
Riverbank Publications
Topics in cryptography
Cryptography
Cryptographic attacks
Summary statistics for contingency tables

Frequency analysis

In cryptanalysis, frequency analysis (also known as counting letters) is the study of the frequency of letters or groups of letters in a ciphertext. The method is used as an aid to breaking classical ciphers.
Frequency analysis is based on the fact that, in any given stretch of written language, certain letters and combinations of letters occur with varying frequencies. Moreover, there is a characteristic distribution of letters that is roughly the same for almost all samples of that language. For instance, given a section of English language, E, T, A and O are the most common, while Z, Q, X and J are rare. Likewise, TH, ER, ON, and AN are the most common pairs of letters (termed bigrams or digraphs), and SS, EE, TT, and FF are the most common repeats. The nonsense phrase "ETAOIN SHRDLU" represents the 12 most frequent letters in typical English language text.
In some ciphers, such properties of the natural language plaintext are preserved in the ciphertext, and these patterns have the potential to be exploited in a ciphertext-only attack.
Frequency analysis for simple substitution ciphers
In a simple substitution cipher, each letter of the plaintext is replaced with another, and any particular letter in the plaintext will always be transformed into the same letter in the ciphertext. For instance, if all occurrences of the letter e turn into the letter X, a ciphertext message containing numerous instances of the letter X would suggest to a cryptanalyst that X represents e.
The basic use of frequency analysis is to first count the frequency of ciphertext letters and then associate guessed plaintext letters with them. More X's in the ciphertext than anything else suggests that X corresponds to e in the plaintext, but this is not certain; t and a are also very common in English, so X might be either of them. It is unlikely to be a plaintext z or q, which are less common. Thus the cryptanalyst may need to try several combinations of mappings between ciphertext and plaintext letters.
More complex use of statistics can be conceived, such as considering counts of pairs of letters (bigrams), triplets (trigrams), and so on. This is done to provide more information to the cryptanalyst; for instance, Q and U nearly always occur together in that order in English, even though Q itself is rare.
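A first pass at such counting is straightforward. As a small Python sketch (our own illustration):

from collections import Counter

def letter_and_bigram_counts(ciphertext):
    # Tally single letters and adjacent pairs, the raw material of frequency analysis.
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    unigrams = Counter(letters)
    bigrams = Counter(a + b for a, b in zip(letters, letters[1:]))
    return unigrams.most_common(5), bigrams.most_common(5)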
An example
Suppose Eve has intercepted the cryptogram below, and it is known to be encrypted using a simple substitution cipher as follows:
For this example, uppercase letters are used to denote ciphertext, lowercase letters are used to denote plaintext (or guesses at such), and X~e is used to express a guess that ciphertext letter X represents the plaintext letter e.
Eve could use frequency analysis to help solve the message along the following lines: counts of the letters in the cryptogram show that I is the most common single letter, XL the most common bigram, and XLI is the most common trigram. e is the most common letter in the English language, th is the most common bigram, and the is the most common trigram. This strongly suggests that X~t, L~h and I~e. The second most common letter in the cryptogram is E; since the first and second most frequent letters in the English language, e and t, are accounted for, Eve guesses that E~a, the third most frequent letter. Tentatively making these assumptions, the following partial decrypted message is obtained.
Using these initial guesses, Eve can spot patterns that confirm her choices, such as "that". Moreover, other patterns suggest further guesses. "Rtate" might be "state", which would mean R~s. Similarly "atthattMZe" could be guessed as "atthattime", yielding M~i and Z~m. Furthermore, "heVe" might be "here", giving V~r. Filling in these guesses, Eve gets:
In turn, these guesses suggest still others (for example, "remarA" could be "remark", implying A~k) and so on, and it is relatively straightforward to deduce the rest of the letters, eventually yielding the plaintext.
At this point, it would be a good idea for Eve to insert spaces and punctuation:
Hereupon Legrand arose, with a grave and stately air, and brought me the beetle
from a glass case in which it was enclosed. It was a beautiful scarabaeus, and, at
that time, unknown to naturalists—of course a great prize in a scientific point
of view. There were two round black spots near one extremity of the back, and a
long one near the other. The scales were exceedingly hard and glossy, with all the
appearance of burnished gold. The weight of the insect was very remarkable, and,
taking all things into consideration, I could hardly blame Jupiter for his opinion
respecting it.
In this example from The Gold-Bug, Eve's guesses were all correct. This would not always be the case, however; the variation in statistics for individual plaintexts can mean that initial guesses are incorrect. It may be necessary to backtrack incorrect guesses or to analyze the available statistics in much more depth than the somewhat simplified justifications given in the above example.
It is also possible that the plaintext does not exhibit the expected distribution of letter frequencies. Shorter messages are likely to show more variation. It is also possible to construct artificially skewed texts. For example, entire novels have been written that omit the letter "e" altogether — a form of literature known as a lipogram.
History and usage
The first known recorded explanation of frequency analysis (indeed, of any kind of cryptanalysis) was given in the 9th century by Al-Kindi, an Arab polymath, in A Manuscript on Deciphering Cryptographic Messages. It has been suggested that close textual study of the Qur'an first brought to light that Arabic has a characteristic letter frequency. Its use spread, and similar systems were widely used in European states by the time of the Renaissance. By 1474, Cicco Simonetta had written a manual on deciphering encryptions of Latin and Italian text.
Several schemes were invented by cryptographers to defeat this weakness in simple substitution encryptions. These included:
Homophonic substitution: Use of homophones — several alternatives to the most common letters in otherwise monoalphabetic substitution ciphers. For example, for English, both X and Y ciphertext might mean plaintext E.
Polyalphabetic substitution, that is, the use of several alphabets — chosen in assorted, more or less devious, ways (Leone Alberti seems to have been the first to propose this); and
Polygraphic substitution, schemes where pairs or triplets of plaintext letters are treated as units for substitution, rather than single letters, for example, the Playfair cipher invented by Charles Wheatstone in the mid-19th century.
A disadvantage of all these attempts to defeat frequency counting attacks is that they increase the complication of both enciphering and deciphering, leading to mistakes. Famously, a British Foreign Secretary is said to have rejected the Playfair cipher because, even if school boys could cope successfully as Wheatstone and Playfair had shown, "our attachés could never learn it!".
The rotor machines of the first half of the 20th century (for example, the Enigma machine) were essentially immune to straightforward frequency analysis.
However, other kinds of analysis ("attacks") successfully decoded messages from some of those machines.
Frequency analysis requires only a basic understanding of the statistics of the plaintext language and some problem solving skills, and, if performed by hand, tolerance for extensive letter bookkeeping. During World War II (WWII), both the British and the Americans recruited codebreakers by placing crossword puzzles in major newspapers and running contests for who could solve them the fastest. Several of the ciphers used by the Axis powers were breakable using frequency analysis, for example, some of the consular ciphers used by the Japanese. Mechanical methods of letter counting and statistical analysis (generally IBM card type machinery) were first used in World War II, possibly by the US Army's SIS. Today, the hard work of letter counting and analysis has been replaced by computer software, which can carry out such analysis in seconds. With modern computing power, classical ciphers are unlikely to provide any real protection for confidential data.
Frequency analysis in fiction
Frequency analysis has been described in fiction. Edgar Allan Poe's "The Gold-Bug", and Sir Arthur Conan Doyle's Sherlock Holmes tale "The Adventure of the Dancing Men" are examples of stories which describe the use of frequency analysis to attack simple substitution ciphers. The cipher in the Poe story is encrusted with several deception measures, but this is more a literary device than anything significant cryptographically.
See also
ETAOIN SHRDLU
Letter frequencies
Arabic Letter Frequency
Index of coincidence
Topics in cryptography
Zipf's law
A Void, a novel by Georges Perec. The original French text is written without the letter e, as is the English translation. The Spanish version contains no a.
Gadsby (novel), a novel by Ernest Vincent Wright. The novel is written as a lipogram, which does not include words that contain the letter E.
Further reading
Helen Fouché Gaines, "Cryptanalysis", 1939, Dover.
Abraham Sinkov, "Elementary Cryptanalysis: A Mathematical Approach", The Mathematical Association of America, 1966. .
References
External links
Free tools to analyse texts: Frequency Analysis Tool (with source code)
Tools to analyze Arabic text
Statistical Distributions of Arabic Text Letters
Statistical Distributions of English Text
Statistical Distributions of Czech Text
Character and Syllable frequencies of 33 languages and a portable tool to create frequency and syllable distributions
English Frequency Analysis based on a live data stream of posts from a forum.
Decrypting Text
Letter frequency in German
Cryptographic attacks
Frequency distribution
Arab inventions
Quantitative linguistics

Plaintext

In cryptography, plaintext usually means unencrypted information pending input into cryptographic algorithms, usually encryption algorithms. This usually refers to data that is transmitted or stored unencrypted.
Overview
With the advent of computing, the term plaintext expanded beyond human-readable documents to mean any data, including binary files, in a form that can be viewed or used without requiring a key or other decryption device. Information—a message, document, file, etc.—that is to be communicated or stored in encrypted form is referred to as plaintext.
Plaintext is used as input to an encryption algorithm; the output is usually termed ciphertext, particularly when the algorithm is a cipher. Codetext is less often used, and almost always only when the algorithm involved is actually a code. Some systems use multiple layers of encryption, with the output of one encryption algorithm becoming "plaintext" input for the next.
Secure handling
Insecure handling of plaintext can introduce weaknesses into a cryptosystem by letting an attacker bypass the cryptography altogether. Plaintext is vulnerable in use and in storage, whether in electronic or paper format. Physical security means the securing of information and its storage media from physical attack—for instance by someone entering a building to access papers, storage media, or computers. Discarded material, if not disposed of securely, may be a security risk. Even shredded documents and erased magnetic media might be reconstructed with sufficient effort.
If plaintext is stored in a computer file, the storage media, the computer and its components, and all backups must be secure. Sensitive data is sometimes processed on computers whose mass storage is removable, in which case physical security of the removed disk is vital. In the case of securing a computer, useful (as opposed to handwaving) security must be physical (e.g., against burglary, brazen removal under cover of supposed repair, installation of covert monitoring devices, etc.), as well as virtual (e.g., operating system modification, illicit network access, Trojan programs). Wide availability of keydrives, which can plug into most modern computers and store large quantities of data, poses another severe security headache. A spy (perhaps posing as a cleaning person) could easily conceal one, and even swallow it if necessary.
Discarded computers, disk drives and media are also a potential source of plaintexts. Most operating systems do not actually erase anything—they simply mark the disk space occupied by a deleted file as 'available for use', and remove its entry from the file system directory. The information in a file deleted in this way remains fully present until overwritten at some later time when the operating system reuses the disk space. With even low-end computers commonly sold with many gigabytes of disk space, and capacities rising monthly, this 'later time' may be months later, or never. Even overwriting the portion of a disk surface occupied by a deleted file is insufficient in many cases. Peter Gutmann of the University of Auckland wrote a celebrated 1996 paper on the recovery of overwritten information from magnetic disks; areal storage densities have gotten much higher since then, so this sort of recovery is likely to be more difficult than it was when Gutmann wrote.
Modern hard drives automatically remap failing sectors, moving data to good sectors. This process makes information on those failing, excluded sectors invisible to the file system and normal applications. Special software, however, can still extract information from them.
Some government agencies (e.g., US NSA) require that personnel physically pulverize discarded disk drives and, in some cases, treat them with chemical corrosives. This practice is not widespread outside government, however. Garfinkel and Shelat (2003) analyzed 158 second-hand hard drives they acquired at garage sales and the like, and found that less than 10% had been sufficiently sanitized. The others contained a wide variety of readable personal and confidential information. See data remanence.
Physical loss is a serious problem. The US State Department, Department of Defense, and the British Secret Service have all had laptops with secret information, including in plaintext, lost or stolen. Appropriate disk encryption techniques can safeguard data on misappropriated computers or media.
On occasion, even when data on host systems is encrypted, media that personnel use to transfer data between systems is plaintext because of poorly designed data policy. For example, in October 2007, HM Revenue and Customs lost CDs that contained the unencrypted records of 25 million child benefit recipients in the United Kingdom.
Modern cryptographic systems resist known plaintext or even chosen plaintext attacks, and so may not be entirely compromised when plaintext is lost or stolen. Older systems resisted the effects of plaintext data loss on security with less effective techniques—such as padding and Russian copulation to obscure information in plaintext that could be easily guessed.
See also
Ciphertext
Red/black concept
References
S. Garfinkel and A. Shelat, "Remembrance of Data Passed: A Study of Disk Sanitization Practices", IEEE Security and Privacy, January/February 2003.
UK HM Revenue and Customs loses 25m records of child benefit recipients BBC
Kissel, Richard (editor). (February 2011). NIST IR 7298 Revision 1, Glossary of Key Information Security Terms. National Institute of Standards and Technology.
Cryptography

ElcomSoft

ElcomSoft is a privately owned software company headquartered in Moscow, Russia. Since its establishment in 1990, the company has been working on computer security programs, with the main focus on password and system recovery software.
The DMCA case
On July 16, 2001, Dmitry Sklyarov, a Russian citizen employed by ElcomSoft who was at the time visiting the United States for DEF CON, was arrested and charged for violating the United States DMCA law by writing ElcomSoft's Advanced eBook Processor software. He was later released on bail and allowed to return to Russia, and the charges against him were dropped. The charges against ElcomSoft were not, and a court case ensued, attracting much public attention and protest. On December 17, 2002, ElcomSoft was found not guilty of all four charges under the DMCA.
Thunder Tables
Thunder Tables is the company's own technology developed to ensure guaranteed recovery of Microsoft Word and Microsoft Excel documents protected with 40-bit encryption. The technology first appeared in 2007 and employs the time–memory tradeoff method to build pre-computed hash tables, which open the corresponding files in a matter of seconds instead of days. The tables take up around 4 GB. So far, the technology is used in two password recovery programs: Advanced Office Password Breaker and Advanced PDF Password Recovery.
Cracking wi-fi password with GPUs
In 2009 ElcomSoft released a tool that takes WPA/WPA2 Hash Codes and uses brute-force methods to guess the password associated with a wireless network. The brute force attack is carried out by testing passwords with a known SSID of a network of which the WPA/WPA2 Hash Code has been captured. The passwords that are tested are generated from a dictionary using various mutation (genetic algorithm) methods, including case mutation (password, PASSWORD, PassWOrD, etc.), year mutation (password, password1992, password67, etc.), and many other mutations to try to guess the correct password.
The advantages of using such methods over the traditional ones, such as rainbow tables, are numerous. Rainbow tables, being very large in size because of the amount of SSID/Password combinations saved, take a long time to traverse, cannot have large numbers of passwords per SSID, and are reliant on the SSID being a common one which the rainbow table has already listed hash codes for (Common ones include linksys, belkin54g, etc.). EWSA, however, uses a relatively small dictionary file (a few megabytes versus dozens of gigabytes for common rainbow tables) and creates the passwords on the fly as needed. Rainbow tables are tested against a captured WPA/WPA2 Hash Code via a computer's processor with relatively low numbers of simultaneous processes possible. EWSA, however, can use a computer's processor(s), with up to 32 logical cores, up to 8 GPUs, all with many CUDA cores (NVIDIA) or Stream Processors (ATI).
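The product itself is proprietary, but the mutation idea is simple to illustrate. A toy Python sketch (entirely our own, with made-up parameters) of the case and year mutations described above:

import itertools

def case_mutations(word):
    # A few upper/lower-case variants of a dictionary word.
    yield word.lower()
    yield word.upper()
    yield word.capitalize()

def year_mutations(word, start=1950, end=2010):
    # Append full and two-digit years to the word.
    for year in range(start, end):
        yield word + str(year)
        yield word + str(year % 100)

candidates = itertools.chain(case_mutations('password'), year_mutations('password'))
for candidate in candidates:
    pass   # each candidate would be hashed and tested against the captured handshake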
Vulnerability in Canon authentication software
On November 30, 2010, Elcomsoft announced that the encryption system used by Canon cameras to ensure that pictures and Exif metadata have not been altered was flawed and cannot be fixed.
On that same day, Dmitry Sklyarov gave a presentation at the Confidence 2.0 conference in Prague demonstrating the flaws. Among others, he showed an image of an astronaut planting a flag of the Soviet Union on the moon; all the images pass Canon's authenticity verification.
Nude Celebrity Photo Leak
In 2014, an attacker used the Elcomsoft Phone Password Breaker to guess celebrity Jennifer Lawrence's password and obtain nude photos. Wired said about Apple's cloud services, "...cloud services might be about as secure as leaving your front door key under the mat."
References
Companies established in 1990
Computer law
Cryptography law
Software companies of Russia
Computer security software companies
Companies based in Moscow

Block cipher mode of operation

In cryptography, a block cipher mode of operation is an algorithm that uses a block cipher to provide information security such as confidentiality or authenticity. A block cipher by itself is only suitable for the secure cryptographic transformation (encryption or decryption) of one fixed-length group of bits called a block. A mode of operation describes how to repeatedly apply a cipher's single-block operation to securely transform amounts of data larger than a block.
Most modes require a unique binary sequence, often called an initialization vector (IV), for each encryption operation. The IV has to be non-repeating and, for some modes, random as well. The initialization vector is used to ensure distinct ciphertexts are produced even when the same plaintext is encrypted multiple times independently with the same key. Block ciphers may be capable of operating on more than one block size, but during transformation the block size is always fixed. Block cipher modes operate on whole blocks and require that the last part of the data be padded to a full block if it is smaller than the current block size. There are, however, modes that do not require padding because they effectively use a block cipher as a stream cipher.
Historically, encryption modes have been studied extensively in regard to their error propagation properties under various scenarios of data modification. Later development regarded integrity protection as an entirely separate cryptographic goal. Some modern modes of operation combine confidentiality and authenticity in an efficient way, and are known as authenticated encryption modes.
History and standardization
The earliest modes of operation, ECB, CBC, OFB, and CFB (see below for all), date back to 1981 and were specified in FIPS 81, DES Modes of Operation. In 2001, the US National Institute of Standards and Technology (NIST) revised its list of approved modes of operation by including AES as a block cipher and adding CTR mode in SP800-38A, Recommendation for Block Cipher Modes of Operation. Finally, in January 2010, NIST added XTS-AES in SP800-38E, Recommendation for Block Cipher Modes of Operation: The XTS-AES Mode for Confidentiality on Storage Devices. Other confidentiality modes exist which have not been approved by NIST. For example, CTS is a ciphertext stealing mode available in many popular cryptographic libraries.
The block cipher modes ECB, CBC, OFB, CFB, CTR, and XTS provide confidentiality, but they do not protect against accidental modification or malicious tampering. Modification or tampering can be detected with a separate message authentication code such as CBC-MAC, or a digital signature. The cryptographic community recognized the need for dedicated integrity assurances and NIST responded with HMAC, CMAC, and GMAC. HMAC was approved in 2002 as FIPS 198, The Keyed-Hash Message Authentication Code (HMAC), CMAC was released in 2005 under SP800-38B, Recommendation for Block Cipher Modes of Operation: The CMAC Mode for Authentication, and GMAC was formalized in 2007 under SP800-38D, Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC.
The cryptographic community observed that compositing (combining) a confidentiality mode with an authenticity mode could be difficult and error prone. They therefore began to supply modes which combined confidentiality and data integrity into a single cryptographic primitive (an encryption algorithm). These combined modes are referred to as authenticated encryption, AE or "authenc". Examples of AE modes are CCM (SP800-38C), GCM (SP800-38D), CWC, EAX, IAPM, and OCB.
Modes of operation are defined by a number of national and internationally recognized standards bodies. Notable standards organizations include NIST, ISO (with ISO/IEC 10116), the IEC, the IEEE, ANSI, and the IETF.
Initialization vector (IV)
An initialization vector (IV) or starting variable (SV) is a block of bits that is used by several modes to randomize the encryption and hence to produce distinct ciphertexts even if the same plaintext is encrypted multiple times, without the need for a slower re-keying process.
An initialization vector has different security requirements than a key, so the IV usually does not need to be secret. For most block cipher modes it is important that an initialization vector is never reused under the same key, i.e. it must be a cryptographic nonce. Many block cipher modes have stronger requirements, such as the IV must be random or pseudorandom. Some block ciphers have particular problems with certain initialization vectors, such as all zero IV generating no encryption (for some keys).
It is recommended to review relevant IV requirements for the particular block cipher mode in relevant specification, for example SP800-38A.
For CBC and CFB, reusing an IV leaks some information about the first block of plaintext, and about any common prefix shared by the two messages.
For OFB and CTR, reusing an IV causes key bitstream re-use, which breaks security. This can be seen because both modes effectively create a bitstream that is XORed with the plaintext, and this bitstream is dependent on the key and IV only.
In CBC mode, the IV must be unpredictable (random or pseudorandom) at encryption time; in particular, the (previously) common practice of re-using the last ciphertext block of a message as the IV for the next message is insecure (for example, this method was used by SSL 2.0). If an attacker knows the IV (or the previous block of ciphertext) before the next plaintext is specified, they can check their guess about plaintext of some block that was encrypted with the same key before (this is known as the TLS CBC IV attack).
For some keys, an all-zero initialization vector may cause some block cipher modes (CFB-8, OFB-8) to get stuck in an all-zero internal state. For CFB-8, an all-zero IV and an all-zero plaintext cause 1/256 of keys to generate no encryption: the plaintext is returned as ciphertext. For OFB-8, using an all-zero initialization vector will generate no encryption for 1/256 of keys; OFB-8 encryption returns the plaintext unencrypted for affected keys.
Some modes (such as AES-SIV and AES-GCM-SIV) are built to be more nonce-misuse resistant, i.e. resilient to scenarios in which the randomness generation is faulty or under the control of the attacker.
Synthetic initialization vector (SIV) mode synthesizes an internal IV by running a pseudo-random function (PRF) construction called S2V on the input (additional data and plaintext), preventing any external data from directly controlling the IV. An external nonce/IV may be fed into S2V as an additional data field.
AES-GCM-SIV synthesizes an internal IV by running the POLYVAL Galois mode of authentication on the input (additional data and plaintext), followed by an AES operation.
Padding
A block cipher works on units of a fixed size (known as a block size), but messages come in a variety of lengths. So some modes (namely ECB and CBC) require that the final block be padded before encryption. Several padding schemes exist. The simplest is to add null bytes to the plaintext to bring its length up to a multiple of the block size, but care must be taken that the original length of the plaintext can be recovered; this is trivial, for example, if the plaintext is a C style string which contains no null bytes except at the end. Slightly more complex is the original DES method, which is to add a single one bit, followed by enough zero bits to fill out the block; if the message ends on a block boundary, a whole padding block will be added. Most sophisticated are CBC-specific schemes such as ciphertext stealing or residual block termination, which do not cause any extra ciphertext, at the expense of some additional complexity. Schneier and Ferguson suggest two possibilities, both simple: append a byte with value 128 (hex 80), followed by as many zero bytes as needed to fill the last block, or pad the last block with n bytes all with value n.
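For instance, the pad-with-n-bytes-of-value-n scheme that Schneier and Ferguson suggest (essentially PKCS#7 padding) can be sketched in Python as:

def pad(data: bytes, block_size: int = 16) -> bytes:
    # Append n bytes, each of value n, so the length is a multiple of block_size;
    # a whole padding block is added if the data is already aligned.
    n = block_size - len(data) % block_size
    return data + bytes([n]) * n

def unpad(padded: bytes) -> bytes:
    n = padded[-1]
    return padded[:-n]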
CFB, OFB and CTR modes do not require any special measures to handle messages whose lengths are not multiples of the block size, since the modes work by XORing the plaintext with the output of the block cipher. The last partial block of plaintext is XORed with the first few bytes of the last keystream block, producing a final ciphertext block that is the same size as the final partial plaintext block. This characteristic of stream ciphers makes them suitable for applications that require the encrypted ciphertext data to be the same size as the original plaintext data, and for applications that transmit data in streaming form where it is inconvenient to add padding bytes.
Common modes
Authenticated encryption with additional data (AEAD) modes
A number of modes of operation have been designed to combine secrecy and authentication in a single cryptographic primitive. Examples of such modes are extended cipher block chaining (XCBC), integrity-aware cipher block chaining (IACBC), integrity-aware parallelizable mode (IAPM), OCB, EAX, CWC, CCM, and GCM. Authenticated encryption modes are classified as single-pass modes or double-pass modes. Some single-pass authenticated encryption algorithms, such as OCB mode, are encumbered by patents, while others were specifically designed and released in a way to avoid such encumberment.
In addition, some modes also allow for the authentication of unencrypted associated data, and these are called AEAD (authenticated encryption with associated data) schemes. For example, EAX mode is a double-pass AEAD scheme while OCB mode is single-pass.
Galois/counter (GCM)
Galois/counter mode (GCM) combines the well-known counter mode of encryption with the new Galois mode of authentication. The key feature is the ease of parallel computation of the Galois field multiplication used for authentication. This feature permits higher throughput than encryption algorithms, like CBC, which use chaining modes.
GCM is defined for block ciphers with a block size of 128 bits. Galois message authentication code (GMAC) is an authentication-only variant of the GCM which can form an incremental message authentication code. Both GCM and GMAC can accept initialization vectors of arbitrary length. GCM can take full advantage of parallel processing and implementing GCM can make efficient use of an instruction pipeline or a hardware pipeline. The CBC mode of operation incurs pipeline stalls that hamper its efficiency and performance.
Like in CTR, blocks are numbered sequentially, and then this block number is combined with an IV and encrypted with a block cipher E, usually AES. The result of this encryption is then XORed with the plaintext to produce the ciphertext. Like all counter modes, this is essentially a stream cipher, and so it is essential that a different IV is used for each stream that is encrypted.
The ciphertext blocks are considered coefficients of a polynomial which is then evaluated at a key-dependent point H, using finite field arithmetic. The result is then encrypted, producing an authentication tag that can be used to verify the integrity of the data. The encrypted text then contains the IV, ciphertext, and authentication tag.
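In practice, GCM is used through a vetted library rather than implemented by hand. A minimal usage sketch with the Python cryptography package (assuming a recent version of that package is installed):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                    # 96-bit IV; must never repeat under one key
aad = b"header"                           # authenticated but not encrypted
ciphertext = aesgcm.encrypt(nonce, b"secret message", aad)   # ciphertext plus tag
plaintext = aesgcm.decrypt(nonce, ciphertext, aad)           # raises InvalidTag on tampering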
Counter with cipher block chaining message authentication code (CCM)
Counter with cipher block chaining message authentication code (counter with CBC-MAC; CCM) is an authenticated encryption algorithm designed to provide both authentication and confidentiality. CCM mode is only defined for block ciphers with a block length of 128 bits.
Synthetic initialization vector (SIV)
Synthetic initialization vector (SIV) is a nonce-misuse resistant block cipher mode.
SIV synthesizes an internal IV using the pseudorandom function S2V. S2V is a keyed hash based on CMAC, and the input to the function is:
Additional authenticated data (zero, one or many AAD fields are supported)
Plaintext
Authentication key (K1).
SIV encrypts the S2V output and the plaintext using AES-CTR, keyed with the encryption key (K2).
SIV can support external nonce-based authenticated encryption, in which case one of the authenticated data fields is utilized for this purpose. RFC5297 specifies that for interoperability purposes the last authenticated data field should be used as the external nonce.
Owing to the use of two keys, the authentication key K1 and encryption key K2, naming schemes for SIV AEAD-variants may lead to some confusion; for example AEAD_AES_SIV_CMAC_256 refers to AES-SIV with two AES-128 keys and not AES-256.
AES-GCM-SIV
AES-GCM-SIV is a mode of operation for the Advanced Encryption Standard which provides similar performance to Galois/counter mode as well as misuse resistance in the event of the reuse of a cryptographic nonce. The construction is defined in RFC 8452.
AES-GCM-SIV synthesizes the internal IV. It derives a hash of the additional authenticated data and plaintext using the POLYVAL Galois hash function. The hash is then encrypted with an AES key and used as the authentication tag and the AES-CTR initialization vector.
AES-GCM-SIV is an improvement over the very similarly named algorithm GCM-SIV, with a few very small changes (e.g. how AES-CTR is initialized), but which yields practical benefits to its security: "This addition allows for encrypting up to 2^50 messages with the same key, compared to the significant limitation of only 2^32 messages that were allowed with GCM-SIV."
Confidentiality only modes
Many modes of operation have been defined. Some of these are described below. The purpose of cipher modes is to mask patterns which exist in encrypted data, as illustrated in the description of the weakness of ECB.
Different cipher modes mask patterns by cascading outputs from the cipher block or other globally deterministic variables into the subsequent cipher block. The inputs of the listed modes are summarized in the following table:
Note: g(i) is any deterministic function, often the identity function.
Electronic codebook (ECB)
The simplest of the encryption modes (and one that should no longer be used) is the electronic codebook (ECB) mode, named after conventional physical codebooks. The message is divided into blocks, and each block is encrypted separately.
The disadvantage of this method is a lack of diffusion. Because ECB encrypts identical plaintext blocks into identical ciphertext blocks, it does not hide data patterns well.
ECB is not recommended for use in cryptographic protocols.
A striking example of the degree to which ECB can leave plaintext data patterns in the ciphertext can be seen when ECB mode is used to encrypt a bitmap image which uses large areas of uniform color. While the color of each individual pixel is encrypted, the overall image may still be discerned, as the pattern of identically colored pixels in the original remains in the encrypted version.
ECB mode can also make protocols without integrity protection even more susceptible to replay attacks, since each block gets decrypted in exactly the same way.
Cipher block chaining (CBC)
Ehrsam, Meyer, Smith and Tuchman invented the cipher block chaining (CBC) mode of operation in 1976. In CBC mode, each block of plaintext is XORed with the previous ciphertext block before being encrypted. This way, each ciphertext block depends on all plaintext blocks processed up to that point. To make each message unique, an initialization vector must be used in the first block.
If the first block has index 1, the mathematical formula for CBC encryption is

C_i = E_K(P_i ⊕ C_{i−1}), with C_0 = IV,

while the mathematical formula for CBC decryption is

P_i = D_K(C_i) ⊕ C_{i−1}, with C_0 = IV.
CBC has been the most commonly used mode of operation. Its main drawbacks are that encryption is sequential (i.e., it cannot be parallelized), and that the message must be padded to a multiple of the cipher block size. One way to handle this last issue is through the method known as ciphertext stealing. Note that a one-bit change in a plaintext or initialization vector (IV) affects all following ciphertext blocks.
Decrypting with the incorrect IV causes the first block of plaintext to be corrupt but subsequent plaintext blocks will be correct. This is because each block is XORed with the ciphertext of the previous block, not the plaintext, so one does not need to decrypt the previous block before using it as the IV for the decryption of the current one. This means that a plaintext block can be recovered from two adjacent blocks of ciphertext. As a consequence, decryption can be parallelized. Note that a one-bit change to the ciphertext causes complete corruption of the corresponding block of plaintext, and inverts the corresponding bit in the following block of plaintext, but the rest of the blocks remain intact. This peculiarity is exploited in different padding oracle attacks, such as POODLE.
Explicit initialization vectors take advantage of this property by prepending a single random block to the plaintext. Encryption is done as normal, except the IV does not need to be communicated to the decryption routine. Whatever IV decryption uses, only the random block is "corrupted". It can be safely discarded, and the rest of the decryption is the original plaintext.
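The chaining equations are easy to express in code. The Python sketch below (our own illustration) implements CBC generically over any single-block encrypt/decrypt function and assumes the input is already padded to a whole number of blocks:

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(encrypt_block, iv: bytes, plaintext: bytes, bs: int = 16) -> bytes:
    prev, out = iv, b''
    for i in range(0, len(plaintext), bs):
        prev = encrypt_block(xor(plaintext[i:i+bs], prev))   # C_i = E_K(P_i xor C_{i-1})
        out += prev
    return out

def cbc_decrypt(decrypt_block, iv: bytes, ciphertext: bytes, bs: int = 16) -> bytes:
    prev, out = iv, b''
    for i in range(0, len(ciphertext), bs):
        block = ciphertext[i:i+bs]
        out += xor(decrypt_block(block), prev)               # P_i = D_K(C_i) xor C_{i-1}
        prev = block
    return out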
Propagating cipher block chaining (PCBC)
The propagating cipher block chaining or plaintext cipher-block chaining mode was designed to cause small changes in the ciphertext to propagate indefinitely when decrypting, as well as when encrypting. In PCBC mode, each block of plaintext is XORed with both the previous plaintext block and the previous ciphertext block before being encrypted. Like with CBC mode, an initialization vector is used in the first block.
Encryption and decryption algorithms are as follows:
C_i = E_K(P_i ⊕ P_{i−1} ⊕ C_{i−1}) and
P_i = D_K(C_i) ⊕ P_{i−1} ⊕ C_{i−1}, where P_0 ⊕ C_0 = IV.
PCBC is used in Kerberos v4 and WASTE, most notably, but otherwise is not common. On a message encrypted in PCBC mode, if two adjacent ciphertext blocks are exchanged, this does not affect the decryption of subsequent blocks. For this reason, PCBC is not used in Kerberos v5.
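A minimal sketch of PCBC under the same assumptions (pyca/cryptography supplying the raw AES primitive); the feedback variable carries P_{i−1} ⊕ C_{i−1} from block to block.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def block_encrypt(key, block):
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def block_decrypt(key, block):
    dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    return dec.update(block) + dec.finalize()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def pcbc_encrypt(key, iv, pt):
    # C_i = E_K(P_i XOR P_{i-1} XOR C_{i-1}), with P_0 XOR C_0 = IV.
    feedback, out = iv, []
    for i in range(0, len(pt), 16):
        p = pt[i:i + 16]
        c = block_encrypt(key, xor(p, feedback))
        out.append(c)
        feedback = xor(p, c)
    return b"".join(out)

def pcbc_decrypt(key, iv, ct):
    feedback, out = iv, []
    for i in range(0, len(ct), 16):
        c = ct[i:i + 16]
        p = xor(block_decrypt(key, c), feedback)
        out.append(p)
        feedback = xor(p, c)
    return b"".join(out)

key, iv = os.urandom(16), os.urandom(16)
msg = os.urandom(48)
assert pcbc_decrypt(key, iv, pcbc_encrypt(key, iv, msg)) == msg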
Cipher feedback (CFB)
Full-block CFB
The cipher feedback (CFB) mode, in its simplest form, uses the entire output of the block cipher. In this variation it is very similar to CBC, and it turns a block cipher into a self-synchronizing stream cipher. CFB decryption in this variation is almost identical to CBC encryption performed in reverse:
C_i = E_K(C_{i−1}) ⊕ P_i and P_i = E_K(C_{i−1}) ⊕ C_i, with C_0 = IV.
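A minimal sketch of full-block CFB under the same assumptions; note that only the encryption direction of the block cipher is used, even for decryption, and that a trailing partial block needs no padding.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def block_encrypt(key, block):
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cfb_encrypt(key, iv, pt):
    # C_i = E_K(C_{i-1}) XOR P_i, with C_0 = IV; a short final segment is
    # XORed with a truncated keystream block, so no padding is required.
    prev, out = iv, []
    for i in range(0, len(pt), 16):
        c = xor(pt[i:i + 16], block_encrypt(key, prev))
        out.append(c)
        prev = c
    return b"".join(out)

def cfb_decrypt(key, iv, ct):
    # P_i = E_K(C_{i-1}) XOR C_i -- the same keystream, derived from ciphertext.
    prev, out = iv, []
    for i in range(0, len(ct), 16):
        out.append(xor(ct[i:i + 16], block_encrypt(key, prev)))
        prev = ct[i:i + 16]
    return b"".join(out)

key, iv = os.urandom(16), os.urandom(16)
msg = b"a message that is not a multiple of the block size"
assert cfb_decrypt(key, iv, cfb_encrypt(key, iv, msg)) == msg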
CFB-1, CFB-8, CFB-64, CFB-128, etc.
NIST SP800-38A defines CFB with a bit-width. The CFB mode also requires an integer parameter, denoted s, such that 1 ≤ s ≤ b. In the specification of the CFB mode below, each plaintext segment (Pj) and ciphertext segment (Cj) consists of s bits. The value of s is sometimes incorporated into the name of the mode, e.g., the 1-bit CFB mode, the 8-bit CFB mode, the 64-bit CFB mode, or the 128-bit CFB mode.
These modes will truncate the output of the underlying block cipher.
CFB-1 is considered self-synchronizing and resilient to loss of ciphertext; "When the 1-bit CFB mode is used, then the synchronization is automatically restored b+1 positions after the inserted or deleted bit. For other values of s in the CFB mode, and for the other confidentiality modes in this recommendation, the synchronization must be restored externally." (NIST SP800-38A). That is, a one-bit loss in a 128-bit-wide block cipher like AES will render 129 bits invalid before valid bits are emitted again.
CFB may also self-synchronize in some special cases other than those specified. For example, a one-bit change in CFB-128 with an underlying 128-bit block cipher will re-synchronize after two blocks. (However, CFB-128 etc. will not handle bit loss gracefully; a one-bit loss will cause the decryptor to lose alignment with the encryptor.)
CFB compared to other modes
Like CBC mode, changes in the plaintext propagate forever in the ciphertext, and encryption cannot be parallelized. Also like CBC, decryption can be parallelized.
CFB, OFB and CTR share two advantages over CBC mode: the block cipher is only ever used in the encrypting direction, and the message does not need to be padded to a multiple of the cipher block size (though ciphertext stealing can also be used for CBC mode to make padding unnecessary).
Output feedback (OFB)
The output feedback (OFB) mode makes a block cipher into a synchronous stream cipher. It generates keystream blocks, which are then XORed with the plaintext blocks to get the ciphertext. Just as with other stream ciphers, flipping a bit in the ciphertext produces a flipped bit in the plaintext at the same location. This property allows many error-correcting codes to function normally even when applied before encryption.
Because of the symmetry of the XOR operation, encryption and decryption are exactly the same:
C_j = P_j ⊕ O_j and P_j = C_j ⊕ O_j, where O_j = E_K(O_{j−1}) and O_0 = IV.
Each output feedback block cipher operation depends on all previous ones, and so cannot be performed in parallel. However, because the plaintext or ciphertext is only used for the final XOR, the block cipher operations may be performed in advance, allowing the final step to be performed in parallel once the plaintext or ciphertext is available.
It is possible to obtain an OFB mode keystream by using CBC mode with a constant string of zeroes as input. This can be useful, because it allows the usage of fast hardware implementations of CBC mode for OFB mode encryption.
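A minimal sketch under the same assumptions, which also verifies the equivalence just described: the OFB keystream equals the CBC encryption of an all-zero message.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def block_encrypt(key, block):
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def ofb_keystream(key, iv, n_blocks):
    # O_j = E_K(O_{j-1}), with O_0 = IV; the keystream never depends on the
    # message, so it can be computed entirely in advance.
    o, out = iv, []
    for _ in range(n_blocks):
        o = block_encrypt(key, o)
        out.append(o)
    return b"".join(out)

key, iv = os.urandom(16), os.urandom(16)
ks = ofb_keystream(key, iv, 4)

# The same keystream obtained from CBC mode applied to an all-zero input:
cbc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
assert ks == cbc.update(b"\x00" * 64) + cbc.finalize()

# Encryption and decryption are the identical XOR with the keystream:
msg = b"OFB makes a block cipher into a synchronous stream cipher"
ct = bytes(m ^ k for m, k in zip(msg, ks))
assert bytes(c ^ k for c, k in zip(ct, ks)) == msg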
Using OFB mode with a partial block as feedback, as in CFB mode, reduces the average cycle length by a factor of 2^32 or more. A mathematical model proposed by Davies and Parkin, and substantiated by experimental results, showed that an average cycle length near the obtainable maximum can be achieved only with full feedback. For this reason, support for truncated feedback was removed from the specification of OFB.
Counter (CTR)
Note: CTR mode (CM) is also known as integer counter mode (ICM) and segmented integer counter (SIC) mode.
Like OFB, counter mode turns a block cipher into a stream cipher. It generates the next keystream block by encrypting successive values of a "counter". The counter can be any function which produces a sequence which is guaranteed not to repeat for a long time, although an actual increment-by-one counter is the simplest and most popular. The usage of a simple deterministic input function used to be controversial; critics argued that "deliberately exposing a cryptosystem to a known systematic input represents an unnecessary risk". However, today CTR mode is widely accepted, and any problems are considered a weakness of the underlying block cipher, which is expected to be secure regardless of systemic bias in its input. Along with CBC, CTR mode is one of two block cipher modes recommended by Niels Ferguson and Bruce Schneier.
CTR mode was introduced by Whitfield Diffie and Martin Hellman in 1979.
CTR mode has similar characteristics to OFB, but also allows a random-access property during decryption. CTR mode is well suited to operate on a multi-processor machine, where blocks can be encrypted in parallel. Furthermore, it does not suffer from the short-cycle problem that can affect OFB.
If the IV/nonce is random, then it can be combined with the counter using any invertible operation (concatenation, addition, or XOR) to produce the actual unique counter block for encryption. In the case of a non-random nonce (such as a packet counter), the nonce and counter should be concatenated (e.g., storing the nonce in the upper 64 bits and the counter in the lower 64 bits of a 128-bit counter block). Simply adding or XORing the nonce and counter into a single value would, in many cases, break the security under a chosen-plaintext attack, since the attacker may be able to manipulate the entire IV–counter pair to cause a collision. Once an attacker controls the IV–counter pair and plaintext, the XOR of the ciphertext with the known plaintext would yield a value that, when XORed with the ciphertext of the other block sharing the same IV–counter pair, would decrypt that block.
Note that the nonce in CTR mode plays the role of the initialization vector (IV) in the other modes. However, if the offset/location information is corrupt, it will be impossible to partially recover such data, owing to the dependence on byte offset.
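A minimal sketch under the same assumptions, using the concatenation layout described above (a 64-bit nonce in the upper half of the counter block, a 64-bit block counter in the lower half); because each counter block is computable directly, any ciphertext block can be decrypted in isolation.

import os
import struct
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def block_encrypt(key, block):
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def ctr_crypt(key, nonce, data):
    # Keystream block j = E_K(nonce || j); the same function both encrypts
    # and decrypts, since both are an XOR with the keystream.
    out = []
    for j in range((len(data) + 15) // 16):
        ks = block_encrypt(key, nonce + struct.pack(">Q", j))
        chunk = data[16 * j:16 * j + 16]
        out.append(bytes(d ^ k for d, k in zip(chunk, ks)))
    return b"".join(out)

key = os.urandom(16)
nonce = os.urandom(8)  # 64-bit nonce, upper half of the counter block
msg = os.urandom(48)
ct = ctr_crypt(key, nonce, msg)
assert ctr_crypt(key, nonce, ct) == msg

# Random access: block 2 is decrypted without touching blocks 0 and 1.
ks2 = block_encrypt(key, nonce + struct.pack(">Q", 2))
assert bytes(c ^ k for c, k in zip(ct[32:48], ks2)) == msg[32:48]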
Error propagation
"Error propagation" properties describe how a decryption behaves during bit errors, i.e. how error in one bit cascades to different decrypted bits.
Bit errors may occur randomly due to transmission errors.
Bit errors may occur intentionally in attacks.
Specific bit errors in stream cipher modes (OFB, CTR, etc.) are trivial to introduce: a flipped ciphertext bit affects only the corresponding plaintext bit, so an attacker can change exactly the bit intended.
Specific bit errors in more complex modes (e.g., CBC) can be leveraged more powerfully: an adaptive chosen-ciphertext attack may intelligently combine many different specific bit errors to break the cipher mode. In a padding oracle attack, CBC ciphertext can be decrypted by guessing encryption secrets based on the receiver's error responses. The padding oracle attack variant "CBC-R" (CBC Reverse) lets the attacker construct any valid message.
For modern authenticated encryption (AEAD), or protocols with message authentication codes chained in MAC-then-encrypt order, any bit error should completely abort decryption and must not reveal any specific bit errors to the decryptor; that is, if decryption succeeds, there should be no bit errors at all. As such, error propagation is a less important property of modern cipher modes than it was for traditional confidentiality-only modes.
(Source: SP800-38A Table D.2: Summary of Effect of Bit Errors on Decryption)
It might be observed, for example, that a one-block error in the transmitted ciphertext would result in a one-block error in the reconstructed plaintext for ECB mode encryption, while in CBC mode such an error would affect two blocks. Some felt that such resilience was desirable in the face of random errors (e.g., line noise), while others argued that error correcting increased the scope for attackers to maliciously tamper with a message.
However, when proper integrity protection is used, such an error will result (with high probability) in the entire message being rejected. If resistance to random error is desirable, error-correcting codes should be applied to the ciphertext before transmission.
Other modes and other cryptographic primitives
Many more modes of operation for block ciphers have been suggested. Some have been accepted, fully described (even standardized), and are in use. Others have been found insecure and should never be used. Still others do not fit the categories of confidentiality, authenticity, or authenticated encryption – for example, key feedback mode and Davies–Meyer hashing.
NIST maintains a list of proposed modes for block ciphers at Modes Development.
Disk encryption often uses special purpose modes specifically designed for the application. Tweakable narrow-block encryption modes (LRW, XEX, and XTS) and wide-block encryption modes (CMC and EME) are designed to securely encrypt sectors of a disk (see disk encryption theory).
Many modes use an initialization vector (IV) which, depending on the mode, may have requirements such as being only used once (a nonce) or being unpredictable ahead of its publication, etc. Reusing an IV with the same key in CTR, GCM or OFB mode results in XORing the same keystream with two or more plaintexts, a clear misuse of a stream cipher, with a catastrophic loss of security. Deterministic authenticated encryption modes such as the NIST Key Wrap algorithm and the SIV (RFC 5297) AEAD mode do not require an IV as an input, and return the same ciphertext and authentication tag every time for a given plaintext and key. Other IV misuse-resistant modes such as AES-GCM-SIV benefit from an IV input, for example in the maximum amount of data that can be safely encrypted with one key, while not failing catastrophically if the same IV is used multiple times.
Block ciphers can also be used in other cryptographic protocols. They are generally used in modes of operation similar to the block modes described here. As with all protocols, to be cryptographically secure, care must be taken to design these modes of operation correctly.
There are several schemes which use a block cipher to build a cryptographic hash function. See one-way compression function for descriptions of several such methods.
Cryptographically secure pseudorandom number generators (CSPRNGs) can also be built using block ciphers.
Message authentication codes (MACs) are often built from block ciphers. CBC-MAC, OMAC and PMAC are examples.
See also
Disk encryption
Message authentication code
Authenticated encryption
One-way compression function
References
Cryptographic algorithms |
160216 | https://en.wikipedia.org/wiki/RC6 | RC6 | In cryptography, RC6 (Rivest cipher 6) is a symmetric key block cipher derived from RC5. It was designed by Ron Rivest, Matt Robshaw, Ray Sidney, and Yiqun Lisa Yin to meet the requirements of the Advanced Encryption Standard (AES) competition. The algorithm was one of the five finalists, and also was submitted to the NESSIE and CRYPTREC projects. It was a proprietary algorithm, patented by RSA Security.
RC6 proper has a block size of 128 bits and supports key sizes of 128, 192, and 256 bits (up to 2040 bits), but, like RC5, it may be parameterised to support a wide variety of word lengths, key sizes, and numbers of rounds. RC6 is very similar to RC5 in structure, using data-dependent rotations, modular addition, and XOR operations; in fact, RC6 could be viewed as interweaving two parallel RC5 encryption processes, although RC6 does use an extra multiplication operation not present in RC5 in order to make the rotation dependent on every bit in a word, and not just the least significant few bits.
Encryption/decryption
Note that the key expansion algorithm is practically identical to that of RC5. The only difference is that for RC6, more words are derived from the user-supplied key.
// Encryption/Decryption with RC6-w/r/b
//
// Input: Plaintext stored in four w-bit input registers A, B, C & D
// r is the number of rounds
// w-bit round keys S[0, ... , 2r + 3]
//
// Output: Ciphertext stored in A, B, C, D
//
// '''Encryption Procedure:'''
B = B + S[0]
D = D + S[1]
for i = 1 to r do
{
t = (B * (2B + 1)) <<< lg w
u = (D * (2D + 1)) <<< lg w
A = ((A ^ t) <<< u) + S[2i]
C = ((C ^ u) <<< t) + S[2i + 1]
(A, B, C, D) = (B, C, D, A)
}
A = A + S[2r + 2]
C = C + S[2r + 3]
// '''Decryption Procedure:'''
C = C - S[2r + 3]
A = A - S[2r + 2]
for i = r downto 1 do
{
(A, B, C, D) = (D, A, B, C)
u = (D * (2D + 1)) <<< lg w
t = (B * (2B + 1)) <<< lg w
C = ((C - S[2i + 1]) >>> t) ^ u
A = ((A - S[2i]) >>> u) ^ t
}
D = D - S[1]
B = B - S[0]
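The encryption procedure above translates almost line for line into Python. The following minimal sketch fixes w = 32 (so lg w = 5) and assumes the round keys S[0..2r+3] have already been derived from the user key by the RC5-style key schedule, which is omitted here; the round keys used in the demo call are placeholders, not a real key schedule.

W = 32                   # word size in bits (RC6-32)
MASK = (1 << W) - 1
LG_W = 5                 # lg(w) for w = 32

def rol(x, n):
    # Rotate left by n; only the low lg(w) bits of the count matter.
    n %= W
    return ((x << n) | (x >> (W - n))) & MASK

def rc6_encrypt(A, B, C, D, S, r):
    # Direct transcription of the encryption procedure, with all
    # arithmetic performed modulo 2**w.
    B = (B + S[0]) & MASK
    D = (D + S[1]) & MASK
    for i in range(1, r + 1):
        t = rol((B * (2 * B + 1)) & MASK, LG_W)
        u = rol((D * (2 * D + 1)) & MASK, LG_W)
        A = (rol(A ^ t, u) + S[2 * i]) & MASK
        C = (rol(C ^ u, t) + S[2 * i + 1]) & MASK
        A, B, C, D = B, C, D, A
    A = (A + S[2 * r + 2]) & MASK
    C = (C + S[2 * r + 3]) & MASK
    return A, B, C, D

# Exercise with placeholder round keys (a real cipher derives these from the key):
S = [(0x9E3779B9 * i) & MASK for i in range(2 * 20 + 4)]
print(rc6_encrypt(1, 2, 3, 4, S, r=20))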
Possible use in NSA "implants"
In August 2016, code reputed to be Equation Group or NSA "implants" for various network security devices was disclosed. The accompanying instructions revealed that some of these programs use RC6 for confidentiality of network communications.
Licensing
As RC6 was not selected for the AES, it was not guaranteed that RC6 would be royalty-free. A web page on the official web site of the designers of RC6, RSA Laboratories, stated the following:
"We emphasize that if RC6 is selected for the AES, RSA Security will not require any licensing or royalty payments for products using the algorithm".
The emphasis on the word "if" suggests that RSA Security Inc. might have required licensing and royalty payments for any products using the RC6 algorithm. RC6 was a patented encryption algorithm; however, the patents expired between 2015 and 2017.
Notes
References
External links
Block ciphers |
160494 | https://en.wikipedia.org/wiki/AN/PRC-77%20Portable%20Transceiver | AN/PRC-77 Portable Transceiver | AN/PRC 77 Radio Set is a manpack, portable VHF FM combat-net radio transceiver manufactured by Associated Industries and used to provide short-range, two-way radiotelephone voice communication. In the Joint Electronics Type Designation System (JETDS), AN/PRC translates to "Army/Navy, Portable, Radio, Communication."
History
The AN/PRC-77 entered service in 1968 during the Vietnam War as an upgrade to the earlier AN/PRC-25. It differs from its predecessor mainly in that the PRC-77's final power amplifier stage is made with a transistor, eliminating the only vacuum tube in the PRC-25 and the DC-DC voltage converter used to create the high plate voltage for the tube from the 15 V battery. These were not the only changes. The PRC-77 transmitter audio bandwidth was widened to give it the ability to use voice encryption devices, while the PRC-25 could not. These include the TSEC/KY-38 NESTOR equipment used in Vietnam and the later KY-57 VINSON family. Problems were encountered in Vietnam with the combination as described in the NESTOR article. The transmitter spurious emissions were cleaned up to create less interference to nearby receivers. The receiver's performance was also hardened in the PRC-77 to enable it to better reject interference suffered from nearby transmitters, a common operating set up that reduced the effectiveness of the PRC-25. The receiver audio bandwidth was also increased to operate with the encryption equipment.
There were no changes to the external controls or appearance, so the two radios looked the same and their operating controls were identical; the equipment tag glued to the edge of the front panel was the only (external) way to tell the difference. The original batteries had a 3 V tap (series diode-reduced to 2.4 V) for the PRC-25's tube filament. This remained unchanged so that a battery could operate either radio it was placed in, but the PRC-77 did not use the 3 V tap at all. With the more efficient all-transistorized circuitry, and without the DC-DC step-up voltage converter for the tube, the common battery lasted longer in the PRC-77 under the same conditions. "OF THE TWENTY-FIVE (25) ELECTRONIC MODULES ORIGINALLY USED IN BOTH THE TRANSMITTER AND RECEIVER PORTIONS OF THE AN/PRC-25, ONLY EIGHT (8) OF THE MODULES USED IN THE AN/PRC-77 ARE INTERCHANGEABLE WITH THE AN/PRC-25."
Today the AN/PRC-77 has largely been replaced by SINCGARS radios, but it is still capable of inter-operating with most VHF FM radios used by U.S. and allied ground forces. It was commonly nicknamed the "prick-77" by U.S. military forces.
Technical details
The AN/PRC 77 consists of the RT-841 transceiver and minor components. It can provide secure voice (X-mode) transmission with the TSEC/KY-57 VINSON voice encryption device, but is not compatible with the SINCGARS frequency hopping mode. During the Vietnam War, the PRC-77 used the earlier TSEC/KY-38 NESTOR voice encryption system.
Major components:
Transmitter/Receiver unit
Battery
Minor components - CES (Complete Equipment Schedule):
3 ft antenna - 'bush/battle whip'
10 ft antenna
3 ft antenna base - 'gooseneck'
10 ft antenna base
Handset
Harness
Users
: The Austrian Army still uses the AN/PRC-77, though in a limited capacity such as training cadets in radio communications. For border patrol the Austrian Army now uses a new device called "TFF-41" (Pentacom RT-405), which is capable of frequency hopping and digital encryption. The Austrian Army also uses the AN/PRC-1177; Austrian AN/PRC-77 sets, for example, have a special switch for a 25 kHz mode, which reduces the bandwidth of the selected channel by 25 kHz and therefore doubles the number of available channels.
: The Bangladesh Army uses the AN/PRC-77 as section-level communication equipment. In the Chittagong Hill Tracts area it is still used for operations. Some modified/improvised local antenna concepts often increase the communication range to 15–20 km. It is now being phased out by the far superior Q-MAC VHF-90M.
: In Brazil it is used by the Brazilian Army, where it was designated EB-11 RY-20/ERC-110 and manufactured both by Associated Industries (U.S.A.) and, from 1970, by AEG Telefunken do Brasil S/A, São Paulo. The radio is still in use today but is being replaced; PRC-77 sets remain stored in military units and are also used for training communications technicians and sergeants.
: The Telecomm Regiments of the Chilean Army are still using the PRC-77 (in the process of modernization).
: Salvadoran military and security forces used both American and Israeli-manufactured versions during the civil war.
: The Finnish army uses this radio as a "battalion radio", using it as a common training device. The radio is designated LV 217 'Ventti-seiska' ('ventti' is Finnish slang for '21', from the Finnish variant of blackjack).
: The Israel Defense Forces used this radio extensively from the early 1970s to the late 1990s, when it was gradually replaced by modern digital devices. However, it can still be found in some units, mostly in stationary temporary posts.
: The New Zealand Defence Force used the '77 set' as its VHF combat arms communications equipment, both manpack and vehicle-mounted Land Rover 'fitted for radio' (FFR) variants, from the late 1960s until the 1990s. It came into New Zealand service with a lot of other US equipment during New Zealand's contribution to the Vietnam War, replacing the New Zealand-built ZC-1 and British equipment dating back to the Second World War.
: The AN/PRC-77 has been replaced as a main source of radio communication for regular forces of the Norwegian Army by indigenously developed radio sets called MRR (Multi Role Radio) and LFR (Lett Flerbruks Radio) (Norwegian for Light Multi Role Radio), and other modern radios. However the Norwegian Army did not throw these radio sets away. Instead many of them were handed over to the Home Guard which still uses it as their backup radio as there is a limited supply of MRR sets for the force totalling 40 000 soldiers.
: The Pakistani Army has used the set for the past 25+ years. Purchased from different sources including the US, Brazil and Spain, it is scheduled to be replaced in the next 5 years.
: The Philippine Army made extensive use of the AN/PRC-77 for several decades until they were phased out of service with the introduction of newer manpack radios such as the Harris Falcon II during the 2000s.
: The Spanish Army, Spanish Navy (Armada Española), Spanish Marines and Spanish Air Force formerly used the AN/PRC-77. It has been replaced by the French PR4G since 2002.
: In the Swedish Army the radio system goes under the name Radio 145 and Radio 146 (Ra145/146); predominantly the Home Guard (National Guard) is issued the Ra145/146.
: The Swiss Army used the radio as SE-227.
: The Republic of China (Taiwan) Army nicknamed the radio "77", and had used it for over 40 years when the AN/PRC-77, along with the AN/VRC-12, was replaced by indigenous radio systems in the 2010s.
Photo gallery
See also
List of military electronics of the United States
References
External links
AN/PRC-25 and AN/PRC-77 at Olive-drab.com
PRC-77 Back Pack Squad Radio
Military radio systems of the United States
Military electronics of the United States
Military equipment of the Vietnam War
Military equipment introduced in the 1960s |
160506 | https://en.wikipedia.org/wiki/Hardware%20random%20number%20generator | Hardware random number generator | In computing, a hardware random number generator (HRNG) or true random number generator (TRNG) is a device that generates random numbers from a physical process, rather than by means of an algorithm. Such devices are often based on microscopic phenomena that generate low-level, statistically random "noise" signals, such as thermal noise, the photoelectric effect, involving a beam splitter, and other quantum phenomena. These stochastic processes are, in theory, completely unpredictable for as long as an equation governing such phenomena is unknown or uncomputable, and the theory's assertions of unpredictability are subject to experimental test. This is in contrast to the paradigm of pseudo-random number generation commonly implemented in computer programs.
A hardware random number generator typically consists of a transducer to convert some aspect of the physical phenomena to an electrical signal, an amplifier and other electronic circuitry to increase the amplitude of the random fluctuations to a measurable level, and some type of analog-to-digital converter to convert the output into a digital number, often a simple binary digit 0 or 1. By repeatedly sampling the randomly varying signal, a series of random numbers is obtained.
The main application for electronic hardware random number generators is in cryptography, where they are used to generate random cryptographic keys to transmit data securely. They are widely used in Internet encryption protocols such as Transport Layer Security (TLS).
Random number generators can also be built from "random" macroscopic processes, using devices such as coin flipping, dice, roulette wheels and lottery machines. The presence of unpredictability in these phenomena can be justified by the theory of unstable dynamical systems and chaos theory. Even though macroscopic processes are deterministic under Newtonian mechanics, the output of a well-designed device like a roulette wheel cannot be predicted in practice, because it depends on the sensitive, micro-details of the initial conditions of each use.
Although dice have been mostly used in gambling, and as "randomizing" elements in games (e.g. role playing games), the Victorian scientist Francis Galton described a way to use dice to explicitly generate random numbers for scientific purposes in 1890.
Hardware random number generators generally produce only a limited number of random bits per second. In order to increase the available output data rate, they are often used to generate the "seed" for a faster cryptographically secure pseudorandom number generator, which then generates a pseudorandom output sequence at a much higher data rate.
Uses
Unpredictable random numbers were first investigated in the context of gambling, and many randomizing devices such as dice, shuffling playing cards, and roulette wheels, were first developed for such use. Fairly produced random numbers are vital to electronic gambling and ways of creating them are sometimes regulated by governmental gaming commissions.
Random numbers are also used for non-gambling purposes, both where their use is mathematically important, such as sampling for opinion polls, and in situations where fairness is approximated by randomization, such as military draft lotteries and selecting jurors.
Cryptography
The major use for hardware random number generators is in the field of data encryption, for example to create random cryptographic keys and nonces needed to encrypt and sign data. They are a more secure alternative to pseudorandom number generators (PRNGs), software programs commonly used in computers to generate "random" numbers. PRNGs use a deterministic algorithm to produce numerical sequences. Although these pseudorandom sequences pass statistical pattern tests for randomness, by knowing the algorithm and the conditions used to initialize it, called the "seed", the output can be predicted. Because the sequence of numbers produced by a PRNG is in principle predictable, data encrypted with pseudorandom numbers is potentially vulnerable to cryptanalysis. Hardware random number generators produce sequences of numbers that are assumed not to be predictable, and therefore provide the greatest security when used to encrypt data.
Early work
One early way of producing random numbers was by a variation of the same machines used to play keno or select lottery numbers. These mixed numbered ping-pong balls with blown air, perhaps combined with mechanical agitation, and used some method to withdraw balls from the mixing chamber. This method gives reasonable results in some senses, but the random numbers generated by this means are expensive. The method is inherently slow, and is unusable for most computing applications.
On 29 April 1947, RAND Corporation began generating random digits with an "electronic roulette wheel", consisting of a random frequency pulse source of about 100,000 pulses per second gated once per second with a constant frequency pulse and fed into a five-bit binary counter. Douglas Aircraft built the equipment, implementing Cecil Hasting's suggestion (RAND P-113) for a noise source (most likely the well known behavior of the 6D4 miniature gas thyratron tube, when placed in a magnetic field). Twenty of the 32 possible counter values were mapped onto the 10 decimal digits and the other 12 counter values were discarded.
The results of a long run from the RAND machine, filtered and tested, were converted into a table, which was published in 1955 in the book A Million Random Digits with 100,000 Normal Deviates. The RAND table was a significant breakthrough in delivering random numbers because such a large and carefully prepared table had never before been available. It has been a useful source for simulations, modeling, and for deriving the arbitrary constants in cryptographic algorithms to demonstrate that the constants had not been selected maliciously. The block ciphers Khufu and Khafre are among the applications which use the RAND table. See: Nothing up my sleeve numbers.
Physical phenomena with random properties
Quantum random properties
There are two fundamental sources of practical quantum mechanical physical randomness: quantum mechanics at the atomic or sub-atomic level and thermal noise (some of which is quantum mechanical in origin). Quantum mechanics predicts that certain physical phenomena, such as the nuclear decay of atoms, are fundamentally random and cannot, in principle, be predicted (for a discussion of empirical verification of quantum unpredictability, see Bell test experiments). And, because the world exists at a temperature above absolute zero, every system has some random variation in its state; for instance, molecules of gases composing air are constantly bouncing off each other in a random way (see statistical mechanics.) This randomness is a quantum phenomenon as well (see phonon).
Because the outcome of quantum-mechanical events cannot be predicted even in principle, they are the ‘gold standard’ for random number generation. Some quantum phenomena used for random number generation include:
Shot noise, a quantum mechanical noise source in electronic circuits. A simple example is a lamp shining on a photodiode. Due to the uncertainty principle, arriving photons create noise in the circuit. Collecting the noise for use poses some problems, but this is an especially simple random noise source. However, shot noise energy is not always well distributed throughout the bandwidth of interest. Gas diode and thyratron electron tubes in a crosswise magnetic field can generate substantial noise energy (10 volts or more into high impedance loads) but have a very peaked energy distribution and require careful filtering to achieve flatness across a broad spectrum.
A nuclear decay radiation source, detected by a Geiger counter attached to a PC.
Photons travelling through a semi-transparent mirror. The mutually exclusive events (reflection/transmission) are detected and associated to ‘0’ or ‘1’ bit values respectively.
Amplification of the signal produced on the base of a reverse-biased transistor. The emitter is saturated with electrons and occasionally they will tunnel through the band gap and exit via the base. This signal is then amplified through a few more transistors and the result fed into a Schmitt trigger.
Spontaneous parametric down-conversion leading to binary phase state selection in a degenerate optical parametric oscillator.
Fluctuations in vacuum energy measured through homodyne detection.
Classical random properties
Thermal phenomena are easier to detect. They are somewhat vulnerable to attack by lowering the temperature of the system, though most systems will stop operating at temperatures low enough to reduce noise by a factor of two (e.g., ~150 K). Some of the thermal phenomena used include:
Thermal noise from a resistor, amplified to provide a random voltage source.
Avalanche noise generated from an avalanche diode, or Zener breakdown noise from a reverse-biased Zener diode.
Atmospheric noise, detected by a radio receiver attached to a PC (though much of it, such as lightning noise, is not properly thermal noise, but most likely a chaotic phenomenon).
In the absence of quantum effects or thermal noise, other phenomena that tend to be random, although in ways not easily characterized by laws of physics, can be used. When several such sources are combined carefully (as in, for example, the Yarrow algorithm or Fortuna CSPRNGs), enough entropy can be collected for the creation of cryptographic keys and nonces, though generally at restricted rates. The advantage is that this approach needs, in principle, no special hardware. The disadvantage is that a sufficiently knowledgeable attacker can surreptitiously modify the software or its inputs, thus reducing the randomness of the output, perhaps substantially. The primary source of randomness typically used in such approaches is the precise timing of the interrupts caused by mechanical input/output devices, such as keyboards and disk drives, various system information counters, etc.
This last approach must be implemented carefully and may be subject to attack if it is not. For instance, the forward security of the generator in the Linux 2.6.10 kernel could be broken with 2^64 or 2^96 time complexity.
Clock drift
Another variable physical phenomenon that is easy to measure is clock drift.
There are several ways to measure and use clock drift as a source of randomness.
The Intel 82802 Firmware Hub (FWH) chip included a hardware RNG using two free running oscillators, one fast and one slow. A thermal noise source (non-commonmode noise from two diodes) is used to modulate the frequency of the slow oscillator, which then triggers a measurement of the fast oscillator. That output is then debiased using a von Neumann type decorrelation step (see below). The output rate of this device is somewhat less than 100,000 bit/s. This chip was an optional component of the 840 chipset family that supported an earlier Intel bus. It is not included in modern PCs.
All VIA C3 microprocessors have included a hardware RNG on the processor chip since 2003. Instead of using thermal noise, raw bits are generated by using four free-running oscillators which are designed to run at different rates. The outputs of two are XORed to control the bias on a third oscillator, whose output clocks the output of the fourth oscillator to produce the raw bit. Minor variations in temperature, silicon characteristics, and local electrical conditions cause continuing oscillator speed variations and thus produce the entropy of the raw bits. To further ensure randomness, there are actually two such RNGs on each chip, each positioned in different environments and rotated on the silicon. The final output is a mix of these two generators. The raw output rate is tens to hundreds of megabits per second, and the whitened rate is a few megabits per second. User software can access the generated random bit stream using new non-privileged machine language instructions.
A software implementation of a related idea on ordinary hardware is included in CryptoLib, a cryptographic routine library. The algorithm is called truerand. Most modern computers have two crystal oscillators, one for the real-time clock and one for the primary CPU clock; truerand exploits this fact. It uses an operating system service that sets an alarm, running off the real-time clock. One subroutine sets that alarm to go off in one clock tick (usually 1/60th of a second). Another then enters a while loop waiting for the alarm to trigger. Since the alarm will not always trigger in exactly one tick, the least significant bits of a count of loop iterations, between setting the alarm and its trigger, will vary randomly, possibly enough for some uses. Truerand doesn't require additional hardware, but in a multi-tasking system great care must be taken to avoid non-randomizing interference from other processes (e.g., in the suspension of the counting loop process as the operating system scheduler starts and stops assorted processes).
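A rough sketch of the truerand idea, using only the Python standard library: count busy-loop iterations across roughly one 1/60-second interval and keep the least significant bit of the count. This illustrates the principle only; bits harvested this way contain little entropy and would need conditioning and careful analysis before any cryptographic use.

import time

def truerand_bit(tick=1 / 60):
    # Busy-wait for about one clock tick; scheduling jitter and clock
    # granularity make the low bits of the iteration count vary per call.
    deadline = time.monotonic() + tick
    count = 0
    while time.monotonic() < deadline:
        count += 1
    return count & 1

print([truerand_bit() for _ in range(8)])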
The RDRAND opcode will return values from an onboard hardware random number generator. It is present in Intel Ivy Bridge processors and AMD64 processors since 2015.
Dealing with bias
The bit-stream from such systems is prone to be biased, with either 1s or 0s predominating. There are two approaches to dealing with bias and other artifacts. The first is to design the RNG to minimize bias inherent in the operation of the generator. One method to correct this feeds back the generated bit stream, filtered by a low-pass filter, to adjust the bias of the generator. By the central limit theorem, the feedback loop will tend to be well-adjusted 'almost all the time'. Ultra-high speed random number generators often use this method. Even then, the numbers generated are usually somewhat biased.
Software whitening
A second approach to coping with bias is to reduce it after generation (in software or hardware). There are several techniques for reducing bias and correlation, often called "whitening" algorithms, by analogy with the related problem of producing white noise from a correlated signal.
John von Neumann invented a simple algorithm to fix simple bias and reduce correlation. It considers two bits at a time (non-overlapping), taking one of three actions: when two successive bits are equal, they are discarded; a sequence of 1,0 becomes a 1; and a sequence of 0,1 becomes a zero. It thus represents a falling edge with a 1, and a rising edge with a 0. This eliminates simple bias, and is easy to implement as a computer program or in digital logic. This technique works no matter how the bits have been generated. It cannot assure randomness in its output, however. What it can do (with significant numbers of discarded bits) is transform a biased random bit stream into an unbiased one.
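The algorithm is short enough to state directly. A minimal Python sketch, with an empirical check on a heavily biased but independent bit stream:

import random

def von_neumann(bits):
    # Examine non-overlapping pairs: discard 0,0 and 1,1; map 1,0 -> 1 and 0,1 -> 0.
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(1 if a == 1 else 0)
    return out

biased = [1 if random.random() < 0.9 else 0 for _ in range(100_000)]
unbiased = von_neumann(biased)
# Far fewer bits survive, but their mean is close to 0.5:
print(len(unbiased), sum(unbiased) / len(unbiased))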
Another technique for improving a near random bit stream is to exclusive-or the bit stream with the output of a high-quality cryptographically secure pseudorandom number generator such as Blum Blum Shub or a strong stream cipher. This can improve decorrelation and digit bias at low cost; it can be done by hardware, such as an FPGA, which is faster than doing it by software.
A related method which reduces bias in a near-random bit stream is to take two or more uncorrelated near-random bit streams and exclusive-or them together. Let the probability of a bit stream producing a 0 be 1/2 + e, where −1/2 ≤ e ≤ 1/2. Then e is the bias of the bit stream. If two uncorrelated bit streams with bias e are exclusive-or-ed together, then the bias of the result will be 2e^2. This may be repeated with more bit streams (see also the piling-up lemma).
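A quick numerical check of that claim, simulating two independent streams with bias e = 0.2:

import random

def biased_stream(e, n):
    # Each bit is 0 with probability 1/2 + e.
    return [0 if random.random() < 0.5 + e else 1 for _ in range(n)]

e, n = 0.2, 1_000_000
combined = [a ^ b for a, b in zip(biased_stream(e, n), biased_stream(e, n))]
print(combined.count(0) / n - 0.5)  # close to 2 * e**2 = 0.08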
Some designs apply cryptographic hash functions such as MD5, SHA-1, or RIPEMD-160 or even a CRC function to all or part of the bit stream, and then use the output as the random bit stream. This is attractive, partly because it is relatively fast.
Many physical phenomena can be used to generate bits that are highly biased, but each bit is independent from the others.
A Geiger counter (with a sample time longer than the tube recovery time) or a semi-transparent mirror photon detector both generate bit streams that are mostly "0" (silent or transmission) with the occasional "1" (click or reflection).
If each bit is independent from the others, the Von Neumann strategy generates one random, unbiased output bit for each of the rare "1" bits in such a highly biased bit stream.
Whitening techniques such as the Advanced Multi-Level Strategy (AMLS) can extract more output bits – output bits that are just as random and unbiased – from such a highly biased bit stream.
PRNG with periodically refreshed random key
Other designs use what are believed to be true random bits as the key for a high quality block cipher algorithm, taking the encrypted output as the random bit stream. Care must be taken in these cases to select an appropriate block mode, however. In some implementations, the PRNG is run for a limited number of digits, while the hardware generating device produces a new seed.
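A minimal sketch of this design, assuming the pyca/cryptography package, with os.urandom standing in for the true-random source: AES in CTR mode (an appropriate choice of block mode for this purpose) produces the output stream, and the key is replaced after a fixed output budget. This is an illustration, not a vetted DRBG.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class ReseededCtrRng:
    def __init__(self, reseed_after=1 << 20):
        self.reseed_after = reseed_after
        self._reseed()

    def _reseed(self):
        # Fresh key and initial counter block from the (stand-in) hardware source.
        key, ctr0 = os.urandom(32), os.urandom(16)
        self._enc = Cipher(algorithms.AES(key), modes.CTR(ctr0)).encryptor()
        self._budget = self.reseed_after

    def random_bytes(self, n):
        if n > self._budget:
            self._reseed()
        self._budget -= n
        # Encrypting zeros in CTR mode yields the raw keystream.
        return self._enc.update(b"\x00" * n)

rng = ReseededCtrRng()
print(rng.random_bytes(16).hex())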
Using observed events
Software engineers without true random number generators often try to develop them by measuring physical events available to the software. An example is measuring the time between user keystrokes, and then taking the least significant bit (or two or three) of the count as a random digit. A similar approach measures task-scheduling, network hits, disk-head seek times and other internal events. One Microsoft design includes a very long list of such internal values, a form of cryptographically secure pseudorandom number generator. Lava lamps have also been used as the physical devices to be monitored, as in the Lavarand system.
The method is risky when it uses computer-controlled events because a clever, malicious attacker might be able to predict a cryptographic key by controlling the external events. It is also risky because the supposed user-generated event (e.g., keystrokes) can be spoofed by a sufficiently ingenious attacker, allowing control of the "random values" used by the cryptography.
However, with sufficient care, a system can be designed that produces cryptographically secure random numbers from the sources of randomness available in a modern computer. The basic design is to maintain an "entropy pool" of random bits that are assumed to be unknown to an attacker. New randomness is added whenever available (for example, when the user hits a key) and an estimate of the number of bits in the pool that cannot be known to an attacker is kept. Some of the strategies in use include:
When random bits are requested, return that many bits derived from the entropy pool (by a cryptographic hash function, say) and decrement the estimate of the number of random bits remaining in the pool. If not enough unknown bits are available, wait until enough are available. This is the top-level design of the "/dev/random" device in Linux, written by Theodore Ts'o and used in many other Unix-like operating systems. It provides high-quality random numbers so long as the estimates of the input randomness are sufficiently cautious. The Linux "/dev/urandom" device is a simple modification which disregards estimates of input randomness, and is therefore rather less likely to have high entropy.
Maintain a stream cipher with a key and initialization vector (IV) obtained from an entropy pool. When enough bits of entropy have been collected, replace both key and IV with new random values and decrease the estimated entropy remaining in the pool. This is the approach taken by the yarrow library. It provides resistance against some attacks and conserves hard-to-obtain entropy.
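A toy sketch of the first strategy, using only the Python standard library: a hash-based pool with a conservative entropy credit that refuses to emit more bits than have been credited. Real implementations such as Linux's /dev/random are considerably more elaborate; all names here are illustrative.

import hashlib
import time

class EntropyPool:
    def __init__(self):
        self._pool = hashlib.sha256()
        self._credit_bits = 0

    def add_event(self, data: bytes, entropy_bits: int):
        # Mix the event and a timestamp into the pool, crediting a
        # conservative caller-supplied entropy estimate.
        self._pool.update(data + time.monotonic_ns().to_bytes(8, "big"))
        self._credit_bits += entropy_bits

    def get_bytes(self, n: int) -> bytes:
        if self._credit_bits < 8 * n:
            raise BlockingIOError("insufficient entropy; /dev/random would block here")
        self._credit_bits -= 8 * n
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(self._pool.digest() + counter.to_bytes(4, "big")).digest()
            counter += 1
        return out[:n]

pool = EntropyPool()
pool.add_event(b"keystroke event", entropy_bits=2)  # hypothetical event
try:
    pool.get_bytes(16)
except BlockingIOError:
    pass  # not enough credited entropy yet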
(De)centralized systems
A true random number generator can be a (de)central service. One example of a centralized system where a random number can be acquired is the randomness beacon service from the National Institute of Standards and Technology; another example is Random.org, a service that uses atmospheric noise to generate random binary digits (bits).
As an example of a decentralized system, the Cardano platform uses the participants of their decentralized proof-of-stake protocol to generate random numbers.
Problems
It is very easy to misconstruct hardware or software devices which attempt to generate random numbers. Also, most 'break' silently, often producing decreasingly random numbers as they degrade. A physical example might be the rapidly decreasing radioactivity of a smoke detector's radioisotope source, if that source were used directly. Failure modes in such devices are plentiful and are complicated, slow, and hard to detect. Methods that combine multiple sources of entropy are more robust.
Because many entropy sources are often quite fragile, and fail silently, statistical tests on their output should be performed continuously. Many, but not all, such devices include some such tests into the software that reads the device.
Attacks
Just as with other components of a cryptography system, a software random number generator should be designed to resist certain attacks. Defending against these attacks is difficult without a hardware entropy source.
Estimating entropy
There are mathematical techniques for estimating the entropy of a sequence of symbols. None are so reliable that their estimates can be fully relied upon; there are always assumptions which may be very difficult to confirm. These are useful for determining if there is enough entropy in a seed pool, for example, but they cannot, in general, distinguish between a true random source and a pseudorandom generator. This problem is avoided by the conservative use of hardware entropy sources.
Performance test
Hardware random number generators should be constantly monitored for proper operation. RFC 4086, FIPS Pub 140-2 and NIST Special Publication 800-90b include tests which can be used for this. Also see the documentation for the New Zealand cryptographic software library cryptlib.
Since many practical designs rely on a hardware source as an input, it will be useful to at least check that the source is still operating. Statistical tests can often detect failure of a noise source, such as a radio station transmitting on a channel thought to be empty, for example. Noise generator output should be sampled for testing before being passed through a "whitener." Some whitener designs can pass statistical tests with no random input. While detecting a large deviation from perfection would be a sign that a true random noise source has become degraded, small deviations are normal and can be an indication of proper operation. Correlation of bias in the inputs to a generator design with other parameters (e.g., internal temperature, bus voltage) might be additionally useful as a further check. Unfortunately, with currently available (and foreseen) tests, passing such tests is not enough to be sure the output sequences are random. A carefully chosen design, verification that the manufactured device implements that design and continuous physical security to insure against tampering may all be needed in addition to testing for high value uses.
See also
AN/CYZ-9
Bell test experiments
/dev/random
ERNIE
List of random number generators
Lottery machine
Randomness extractor
RDRAND
Trusted Platform Module
References
General references
.
.
.
.
.
.
External links
.
.
ProtegoST SG100, ProtegoST, "Hardware Random Number Generator based on quantum physics random number source from a Zener diode".
Cryptography
Random number generation
Computer peripherals
160524 | https://en.wikipedia.org/wiki/SINCGARS | SINCGARS | Single Channel Ground and Airborne Radio System (SINCGARS) is a Combat-net radio (CNR) used by U.S. and allied military forces. The CNR network is designed around three systems: SINCGARS, the high frequency (HF) radio, and the SC tactical satellite (TACSAT). Each system has different capabilities and transmission characteristics. SINCGARS is a family of user-owned and operated, very high frequency-frequency modulation (VHF-FM) CNRs. In the CNR network, the SINCGARS’ primary role is voice transmission for command and control (C2) between surface and airborne C2 assets. SINCGARS can transmit and receive secure data and facsimile transmissions through simple connections with various data terminal equipment.
SINCGARS features provide communications interoperability for the Army, Marine, Navy, and Air Force, thus contributing to successful combat operations. SINCGARS is consistent with North Atlantic Treaty Organization interoperability requirements. The radios, which handle voice and data communications, are designed to be reliable, secure, and easily maintained. Vehicle-mount, backpack, airborne, and handheld form factors are available.
Joint and combined operations require exchanging information, both voice and data, with other participating forces. The Single-Channel Ground and Airborne Radio System (SINCGARS) tactical radio has provided secure, low probability of intercept/electronic attack voice communications in the frequency hopping (FH) mode. Later enhancements provide for the exchange of secure data through the evolving Army and Marine Corps tactical Internets, enabling increased situational awareness and more expedient engagement of the enemy while reducing the probability of fratricide. In addition, the Enhanced Position Location Reporting System (EPLRS) is used by military forces to provide C2 data distribution, battlefield situation awareness, and position location services.
The SINCGARS family has mostly replaced the Vietnam War-era synthesized single frequency radios (AN/PRC-77 and AN/VRC-12), although it can work with them. The airborne AN/ARC-201 radio is phasing out the older tactical air-to-ground radios (AN/ARC-114 and AN/ARC-131).
The SINCGARS is designed on a modular basis to achieve maximum commonality among various ground, maritime, and airborne configurations. A common receiver transmitter (RT) is used in the ground configurations. The modular design also reduces the burden on the logistics system to provide repair parts.
The SINCGARS can operate in either the SC or frequency hop (FH) mode, and stores both SC frequencies and FH loadsets. The system is compatible with all current U.S. and allied VHF-FM radios in the SC, non-secure mode. The SINCGARS operates on any of 2320 channels between 30 and 88 megahertz (MHz) with a channel separation of 25 kilohertz (kHz). It accepts either digital or analog inputs and superimposes the signal onto a radio frequency (RF) carrier wave. In FH mode, the input changes frequency about 100 times per second over portions of the tactical VHF-FM range. These continual changes in frequency hinder threat intercept and jamming units from locating or disrupting friendly communications. The SINCGARS provides data rates up to 16,000 bits per second. Enhanced data modes provide packet and RS-232 data. The enhanced data modes available with the System Improvement Program (SIP) and Advanced System Improvement Program (ASIP) radios also enable forward error correction (FEC), and increased speed, range, and accuracy of data transmissions.
Most ground SINCGARS radios have the ability to control output power; however, most airborne SINCGARS radio sets are fixed power. Those RTs with power settings can vary transmission range from approximately 200 meters (660 feet) to 10 kilometers (km) (6.2 miles). Adding a power amplifier increases the line of sight (LOS) range to approximately 40 km (25 miles). (These ranges are for planning purposes only; terrain, weather, and antennae height have an effect on transmission range.) The variable output power level allows users to operate on the minimum power necessary to maintain reliable communications, thus lessening the electromagnetic signature given off by their radio sets. This ability is of particular importance at major command posts, which operate in multiple networks.
SC CNR users outside the FH network can use a hailing method to request access to the network. When hailing a network, a user outside the network contacts the network control station (NCS) on the cue frequency. In the active FH mode, the SINCGARS radio gives audible and visual signals to the operator that an external subscriber wants to communicate with the FH network. The SINCGARS operator must change to the cue frequency to communicate with the outside radio system. The network can be set to a manual frequency for initial network activation. The manual frequency provides a common frequency for all members of the network to verify that the equipment is operational. During initial net activation, all operators in the net tune to the manual frequency. After communications are established, the net switches to the FH mode and the NCS transfers the hopping variables to the out stations.
Over 570,000 radios have been purchased. There have been several system improvement programs, including the Integrated Communications Security (ICOM) models, which have provided integrated voice and data encryption, the Special Improvement Program (SIP) models, which add additional data modes, and the advanced SIP (ASIP) models, which are less than half the size and weight of ICOM and SIP models and provided enhanced FEC (forward error correction) data modes, RS-232 asynchronous data, packet data formats, and direct interfacing to Precision Lightweight GPS Receiver (PLGR) devices providing radio level situational awareness capability.
In 1992, the U.S. Air Force awarded a contract to replace the AN/ARC-188 for communications between Air Force aircraft and Army units.
Timeline
November 1983: ITT Corporation (ITT) wins the contract for the first type of radio, for ground troops.
May 1985: ITT wins the contract for the airborne SINCGARS.
May-June 1988: 4th Bn, 31st Infantry begins initial field tests of the SINCGARS radio at Fort Sill
July 1988: General Dynamics wins a second-source contract for the ground radio.
February - April 1989: 2nd Infantry Division field tests SINCGARS in improvised man-pack configuration in the Korean DMZ.
April 1989: ITT reaches "Milestone IIIB": full-rate production.
December 1990: 1st Division is equipped.
December 1991: General Dynamics wins the "Option 1 Award" for the ground radio.
March 1992: ITT wins a "Ground and Airborne" award.
July 1992: Magnavox Electronics Systems Company develops the airborne SINCGARS AN/ARC-222 for the Air Force
August 1993: General Dynamics achieves full rate production.
April 1994: ITT and General Dynamics compete for the ground radio.
May 1994: ITT wins a sole-source contract for the airborne radio.
1997: ITT became the sole source supplier of the new half-size RT-1523E radio to the US Army.
2006: The RT-1523F/SideHat configuration provides a 2-channel capability.
July 2009: ITT wins RT-1523G platform development, $363 million contract. Partnered with Thales Communications Inc.
2012: Capability Set 14 to provide Universal Network Situational Awareness to help prevent air-to-ground friendly fire incidents.
May 2016: Harris Corp. is awarded a $405 million contract by the Moroccan Army for SINCGARS system equipment, including ancillary items, spare parts, installation kits, training and fielding support services. One bid was solicited, with one received, and an estimated completion date of April 21, 2021.
June 2016: Harris Corporation (NYSE:HRS) received a $15 million order to provide tactical radios, management systems, training and field support services to a nation in the Middle East as part of an ongoing modernization program. The contract was awarded during the fourth quarter of Harris's 2016 fiscal year.
January 2017: Harris Corp. is awarded a maximum $403 million contract from the US Defense Logistics Agency for spare parts supporting various tactical radio systems, including SINCGARS. This is a five-year contract with no option periods and a performance completion date of 5 January 2022. The customers are the Army and the Defense Logistics Agency of the US Department of Defense. Types of appropriation are fiscal 2017 through fiscal 2022 Army working capital and defense working capital funds, funded in the year of delivery-order issuance. The contracting activity is the Defense Logistics Agency Land and Maritime, Aberdeen Proving Ground, Maryland (SPRBL1-17-D-0002).
Models
RT-1523 VHF radio configurations
Ancillary items
SideHat - The 'SideHat' is a simple radio solution that attaches to existing SINCGARS radio installations, offering rapid, affordable and interoperable wideband network communications for Early Infantry Brigade Combat Team (E-IBCT) deployments and other Soldier radio waveform (SRW) applications.
SINCGARS Airborne - The AN/ARC-201 System Improvement Program (SIP) airborne radio is a reliable, field-proven voice and data battlespace communications system with networking capabilities.
Embedded GPS Receiver - The Selective Availability Antispoofing Module (SAASM) technology Embedded GPS Receiver (EGR) installed in the RT-1523(E)-(F) providing a navigation/communication system in support of critical Warfighter capabilities that includes Situational Awareness, Combat ID, Navigation and Timing and Surveying Capabilities.
GPS FanOut System - Provides six GPS formats from a single GPS source (RT-1523 with integrated SAASM GPS or PLGR/DAGR (Defense Advanced GPS Receiver–AN/PSN-13)).
VRCU (Vehicle Remote Control Unit) - Designed to be placed anywhere on a vehicle, VRCU is important in large vehicles and those with tight quarters. VRCU allows full control of both single and dual RT-1523 (models E, F, and G) and RT-1702 (models E and F) radios from any location within a vehicle.
Single ASIP Radio Mount (SARM) is the latest vehicle installation mount developed specifically for RT-1523 or RT-1702 radios. SARM solves space and weight claim issues associated with traditional vehicle installation mounts. SARM operates on 12 or 24 volt allowing installation into any military or civilian vehicle.
See also
Joint Tactical Radio System (JTRS) - a plan for a replacement radio system
Network Simulator for simulation SINCGARS
Further reading
Soldier's Manual of Common Tasks Warrior Skill Level 1 (STP 21-1-SMCT), Headquarters Department of the Army, Washington D.C. 11 September 2012. (p. 3-99, task #113-587-2070)
Tactical Single-Channel Radio Communications Techniques (FM 24-18), Headquarters Department of the Army, Washington D.C. 30 September 1987.
Radio Operator's Handbook (FM 24-19), Headquarters Department of the Army, Washington D.C. 24 May 1991.
References
External links
Harris.com (pdf)
Harris.com (pdf)
Single Channel Ground and Airborne Radio System (SINCGARS) fas.org
Information on RT-1439 radio prc68.com
https://usacac.army.mil/sites/default/files/misc/doctrine/CDG/cdg_resources/manuals/fm/fm6_02x72.pdf
Military radio systems of the United States
National Security Agency encryption devices
Military electronics of the United States
Military equipment introduced in the 1980s |
160562 | https://en.wikipedia.org/wiki/KY-57 | KY-57 | The Speech Security Equipment (VINSON), TSEC/KY-57, is a portable, tactical cryptographic device in the VINSON family, designed to provide voice encryption for a range of military communication devices such as radio or telephone.
The KY-57 was in use by NATO and its allies towards the end of the Cold War. The device itself was classified as a CCI (controlled cryptographic item) when it was unkeyed; the classification of the device was temporarily raised to the classification of the key when the device was keyed. It was authorized for TOP SECRET information with the appropriate key. It is no longer authorized for handling classified information, and it has been de facto, but not officially, declassified. The details of its technical operation are still classified. The first unit, serial number 001, is still in operation at the NSA.
The KY-57 can accept signal fades of up to 12 seconds without losing synchronization with the transmitting station. There are storage positions for six keys: keys 1 to 5 are traffic encryption keys (TEKs), and key 6 is a key encryption key (KEK) used for over-the-air rekeying (OTAR) of the other five keys. Key 6 must be loaded manually using a fill device such as the AN/CYZ-10.
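The published key layout can be modelled conceptually. Below is a minimal sketch in Python of the six storage positions under the assumptions stated above (slots 1 to 5 hold TEKs, slot 6 the KEK used for OTAR); the device's actual cryptography is classified, so the unwrap step is a labelled placeholder and all names here are illustrative, not the real design.

```python
# Conceptual sketch of the KY-57's six key positions; not the classified design.
from dataclasses import dataclass, field
from typing import Dict, Optional

def unwrap(wrapped: bytes, kek: bytes) -> bytes:
    """Placeholder key-unwrap (XOR); the real algorithm is classified."""
    return bytes(a ^ b for a, b in zip(wrapped, kek))

@dataclass
class FillPanel:
    # Slots 1-5 hold traffic encryption keys (TEKs); slot 6 holds the KEK.
    slots: Dict[int, Optional[bytes]] = field(
        default_factory=lambda: {n: None for n in range(1, 7)}
    )

    def load_from_fill_device(self, slot: int, key: bytes) -> None:
        """Manual load, e.g. from an AN/CYZ-10; the only way to fill slot 6."""
        self.slots[slot] = key

    def otar_rekey(self, slot: int, wrapped_tek: bytes) -> None:
        """Over-the-air rekeying: a new TEK arrives encrypted under the KEK."""
        kek = self.slots[6]
        if kek is None or not 1 <= slot <= 5:
            raise ValueError("OTAR requires a loaded KEK and a TEK slot (1-5)")
        self.slots[slot] = unwrap(wrapped_tek, kek)
```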
See also
NSA encryption systems
FNBDT
ANDVT
SINCGARS
AN/PRC-77
References
National Security Agency encryption devices |
161041 | https://en.wikipedia.org/wiki/William%20Hague | William Hague | William Jefferson Hague, Baron Hague of Richmond, (born 26 March 1961) is a British Conservative politician and life peer who served as Leader of the Conservative Party and Leader of the Opposition from 1997 to 2001. He was the Member of Parliament (MP) for Richmond (Yorks) in North Yorkshire from 1989 to 2015. He served in the Cameron government as First Secretary of State from 2010 to 2015, Secretary of State for Foreign and Commonwealth Affairs from 2010 to 2014, and Leader of the House of Commons from 2014 to 2015.
Hague was educated at Wath Comprehensive School, the University of Oxford and INSEAD, subsequently being returned to the House of Commons at a by-election in 1989. Hague quickly rose through the ranks of the government of John Major and was appointed to Cabinet in 1995 as Secretary of State for Wales. Following the Conservatives' defeat at the 1997 general election by the Labour Party, he was elected Leader of the Conservative Party at the age of 36.
Hague resigned as Conservative leader after the 2001 general election following his party's second defeat, at which the Conservatives made a net gain of just one seat. He returned to the backbenches, pursuing a career as an author, writing biographies of William Pitt the Younger and William Wilberforce. He also held several directorships, and worked as a consultant and public speaker.
After David Cameron was elected Leader of the Conservative Party in 2005, Hague was reappointed to the Shadow Cabinet as Shadow Foreign Secretary. He also assumed the role of Senior Member of the Shadow Cabinet, serving as Cameron's deputy. After the formation of the coalition government in 2010, Hague was appointed First Secretary of State and Foreign Secretary. Cameron described him as his "de facto political deputy". On 14 July 2014, Hague stood down as Foreign Secretary and became Leader of the House of Commons. He did not stand for re-election at the 2015 general election. He was awarded a life peerage in the 2015 Dissolution Honours List on 9 October 2015.
Early life
Hague was born on 26 March 1961 in Rotherham, Yorkshire, England. He initially boarded at Ripon Grammar School and then attended Wath Comprehensive School, a state secondary school near Rotherham. His parents, Nigel and Stella Hague, ran a soft drinks manufacturing business where he worked during school holidays.
He first made the national news at the age of 16 by addressing the Conservatives at their 1977 Annual National Conference. In his speech he told the delegates that "half of you won't be here in 30 or 40 years' time", but that others would have to live with the consequences of a Labour Government if it stayed in power. Writing in his diary at the time, Kenneth Rose noted that Peter Carrington told him that "he and several other frontbench Tories were nauseated by the much-heralded speech of a sixteen-year-old schoolboy called William Hague. Peter said to Norman St John Stevas: 'If he is as priggish and self-assured as that at sixteen, what will he be like in thirty years' time?' Norman replied: 'Like Michael Heseltine'".
Hague read Philosophy, Politics and Economics at Magdalen College, Oxford, graduating with first-class honours. He was President of the Oxford University Conservative Association (OUCA), but was "convicted of electoral malpractice" in the election process of his successor. OUCA's official historian, David Blair, notes that Hague was actually elected on a platform pledging to clean up OUCA, but that this was "tarnished by accusations that he misused his position as Returning Officer to help the Magdalen candidate for the presidency, Peter Havey. Hague was playing the classic game of using his powers as President to keep his faction in power, and Havey was duly elected.... There were accusations of blatant ballot box stuffing".
He also served as President of the Oxford Union, an established route into politics. After Oxford, Hague went on to study for a Master of Business Administration (MBA) degree at INSEAD. He then worked as a management consultant at McKinsey & Company, where Archie Norman was his mentor.
Public life
Early political career
Hague contested Wentworth unsuccessfully in 1987, before being elected to Parliament at a by-election in 1989 as Member for the safe Conservative seat of Richmond, North Yorkshire, where he succeeded former Home Secretary Leon Brittan. Following his election he became the youngest Conservative MP at the time, and despite having only recently entered the Commons, Hague was invited to join the Government in 1990, serving as Parliamentary Private Secretary to the Chancellor of the Exchequer, Norman Lamont. After Lamont was sacked in 1993, Hague moved to the Department of Social Security (DSS), where he was Parliamentary Under-Secretary of State. The following year he was promoted to Minister of State in the DSS, with responsibility for Social Security and Disabled People. His fast rise through the Government's ranks was attributed to his intelligence and debating skills.
Hague was appointed a Cabinet Minister in 1995 as Secretary of State for Wales, succeeding John Redwood, who had been castigated after being seen on television apparently miming the Welsh national anthem at a conference. Hague accordingly asked a Welsh Office civil servant, Ffion Jenkins, to teach him the words; they later married. He continued serving in Cabinet until the Conservatives were replaced by Labour at the 1997 general election.
Leadership of the Conservative Party
Following the 1997 general election defeat, Hague was elected Leader of the Conservative Party in succession to John Major, defeating more experienced figures such as Kenneth Clarke and Michael Howard.
At the age of 36, Hague was tasked with rebuilding the Conservative Party (fresh from their worst general election result of the 20th century) by attempting to build a more modern image. £250,000 was spent on the "Listening to Britain" campaign to try to put the Conservatives back in touch with the public after losing power; he welcomed ideas about "compassionate conservatism" including from the then-Governor of Texas, later President George W. Bush.
When he visited a theme park with his Chief of Staff and former local MP, Sebastian (now Lord) Coe, Hague took a ride on a log flume wearing a baseball cap emblazoned 'HAGUE'; Cecil Parkinson described the exercise as "juvenile".
Hague steered the Conservatives to a successful result at the European parliamentary elections in June 1999, where the Conservatives gained 36 MEPs ahead of Labour's 29. Hague considered his opposition to the single European currency (the euro) to have been vindicated later, when Labour Prime Minister Gordon Brown adopted the same policy of staying out of the currency.
Hague's authority was challenged by the appointment of Michael Portillo as Shadow Chancellor in 2000. Portillo had been widely tipped to become the next Conservative Party Leader before dramatically losing his seat at the 1997 general election; he was elected as MP for Kensington and Chelsea at a by-election two years later. Soon after Portillo's return to Parliament, Conservative policy on two of Labour's flagship policies was reversed: the minimum wage and the independence of the Bank of England. From then until the 2001 general election, Hague's supporters waged an increasingly bitter battle with Portillo's faction; this internecine war contributed significantly to the Conservatives' two subsequent election defeats.
Hague was widely ridiculed for claiming he used to drink "14 pints of beer a day" as a teenager. His reputation suffered further damage when a 2001 poll for The Daily Telegraph found that 66% of voters considered him to be "a bit of a wally", and 70% of voters believed he would "say almost anything to win votes".
"Foreign Land" speech
In a party conference speech in March 2001, Hague warned that a second term of Labour government would leave Britain "a foreign land".
Former Conservative Deputy Prime Minister Michael Heseltine, a prominent One-Nation Conservative, was critical of Hague's Eurosceptic view that Britain was becoming a "foreign land", betraying in newspaper interviews that he was uncertain as to whether he could support a Hague-led Conservative Party.
Skill in debate
Hague's critics assiduously monitored his performance at Prime Minister's Questions each Wednesday in Parliament, but had difficulty finding fault. During one particular exchange, responding to the Queen's Speech of 2000, Hague attacked the Prime Minister's record; Blair responded by criticising what he saw as Hague's "bandwagon politics".
Resignation
On the morning of Labour's second consecutive landslide victory at the 2001 general election, Hague stated: "we have not been able to persuade a majority, or anything approaching a majority, that we are yet the alternative government that they need." At that election the Conservative Party gained just one parliamentary seat more than at the 1997 general election; following this defeat, Hague resigned as party leader. Hague thus became the second Conservative party leader not to become Prime Minister (after Austen Chamberlain) and the first ever to spend his entire tenure in Opposition.
Backbenches
On the backbenches he occasionally spoke in the House of Commons on issues of the day. Between 1997 and 2002, he was the Chairman of the International Democrat Union. Hague's profile and personal popularity rose thereafter among both Conservative Party members and the wider public. He wrote a biography of the 18th-century Prime Minister Pitt the Younger (published in 2004), taught himself to play the piano, and hosted the 25th-anniversary programme for Radio 4 on the political television satire Yes Minister in 2005. In June 2007 he published his second book, a biography of the anti-slave-trade campaigner William Wilberforce, which was shortlisted for the 2008 Orwell Prize for political writing.
Hague's annual income was the highest in Parliament, with earnings of about £400,000 a year from directorships, consultancy, speeches and his parliamentary salary. His income was previously estimated at £1 million annually, but he dropped several commitments and in effect took a salary cut of some £600,000 on becoming Shadow Foreign Secretary in 2005.
Together with former Prime Minister John Major, former Chancellor Kenneth Clarke, and Hague's successor Iain Duncan Smith, Hague served for a time on the Conservative Leadership Council, which was set up by Michael Howard upon his election unopposed as Leader of the Conservative Party in 2003.
At the 2005 Conservative leadership election he supported the eventual winner David Cameron.
He is a member of Conservative Friends of Israel, a group which he joined when he was 15.
Return to the Shadow Cabinet
After the 2005 general election, the Conservative Party Leader Michael Howard apparently offered Hague the post of Shadow Chancellor of the Exchequer, which he turned down, citing business commitments that would make it difficult for him to take on such a high-profile job.
On 6 December 2005, David Cameron was elected Leader of the Conservative Party. Hague was offered and accepted the role of Shadow Foreign Secretary and Senior Member of the Shadow Cabinet, effectively serving as Cameron's deputy (though not formally, unlike previous Deputy Conservative Leaders Willie Whitelaw, Peter Lilley and Michael Ancram). He had been widely tipped to return to the frontbench under either Cameron or leadership contest runner-up David Davis.
On 30 January 2006, on Cameron's instructions, Hague travelled to Brussels for talks on pulling Conservative Party MEPs out of the European People's Party–European Democrats Group (EPP-ED) in the European Parliament (Daily Telegraph, 30 January 2006). Further, on 15 February 2006, Hague deputised at Prime Minister's Questions (PMQs) during David Cameron's paternity leave. This appearance gave rise to jokes at the expense of Blair that all three parties that day were being led by 'stand-ins', with the Liberal Democrats represented by Acting Leader Sir Menzies Campbell, the Labour Party by the departing Blair, and the Conservatives by Hague. Hague again deputised for Cameron for several sessions in 2006.
Foreign Secretary
Prime Minister Cameron's first appointment was Hague as Secretary of State for Foreign and Commonwealth Affairs. He was also accorded the honorary title of First Secretary of State. On his first overseas visit as British Foreign Secretary, Hague met US Secretary of State Hillary Clinton in Washington.
In August 2010, Hague set out a values-based foreign policy, stating that: "We cannot have a foreign policy without a conscience. Foreign policy is domestic policy written large. The values we live by at home do not stop at our shores. Human rights are not the only issue that informs the making of foreign policy, but they are indivisible from it, not least because the consequences of foreign policy failure are human".
Hague further said that: "There will be no downgrading of human rights under this Government and no resiling from our commitments to aid and development". He continued saying that "Indeed I intend to improve and strengthen our human rights work. It is not in our character as a nation to have a foreign policy without a conscience, and neither is it in our interests". However, in March 2011, Hague was criticised by Cardinal Keith O'Brien for increasing financial aid to Pakistan despite persecution of its Christian minority: "To increase aid to the Pakistan Government when religious freedom is not upheld and those who speak up for religious freedom are gunned down is tantamount to an anti-Christian foreign policy".
In September 2011, Hague told Cyber Spies, a BBC Radio 4 File on 4 investigation into the legality of domestic cyber surveillance and the export of such technology from the UK to countries with questionable human rights records, that the UK had a strong export licence system. The programme also obtained confirmation from the UK's Department for Business, Innovation and Skills that cyber-surveillance products that break, as opposed to create, encryption do not require export licences.
In June 2012, he continued to stand in for David Cameron at PMQs when both the Prime Minister and Deputy Prime Minister Nick Clegg were out of the country.
In January 2013, Hague visited New Zealand in his capacity as Foreign Secretary, holding talks with New Zealand government ministers, Murray McCully and David Shearer. In March 2013, Hague established the International Leaders Programme, designed to identify and develop partnerships among future global leaders.
Media reaction to FCO appointment
In early September 2010, newspapers including The Daily Telegraph, The Independent and Daily Mail released stories about allegations surrounding Hague's friendship with 25-year-old Christopher Myers, a history graduate from Durham University, whom he employed as a parliamentary special adviser. A spokesperson stated that "Any suggestion that the Foreign Secretary's relationship with Chris Myers is anything other than a purely professional one is wholly inaccurate and unfounded."
On 1 September 2010, Myers resigned from his appointment in light of that press speculation, which prompted Hague to issue a public statement in which he confirmed that he had "occasionally" shared a hotel room with Myers, explaining this as frugality instilled by his upbringing, but refuted the "utterly false" suggestions that he had ever been involved in a relationship with the man. A spokesperson for Prime Minister David Cameron reported that Cameron gave Hague his "full support" over the media rumours. Figures from both within and outside the Conservative Party criticised Hague for his personal response to the stories: former Conservative leadership candidate John Redwood commented that Hague had shown "poor judgement", and the Speaker's wife, the Labour-supporting Sally Bercow, speculated that Hague had been given "duff PR advice", whilst a parliamentary and ministerial colleague, the Conservative MP Alan Duncan, described the media coverage as "contemptible".
Israel–Palestinian conflict
Hague was criticised by Israeli leaders after meeting with Palestinians who demonstrated against Israel's barrier in the West Bank. He expressed solidarity with the idea of non-violence and listened to the accounts of left-wing and Palestinian activists. Israeli Opposition Leader Tzipi Livni condemned the statements and said:
The security barrier has saved lives, and its construction was necessary. The barrier has separated Israel from Palestinian cities and completely changed the reality in Israel, where citizens were exposed to terror every day.
2011 Middle East protests
In February 2011 security forces in Bahrain dispersed thousands of anti-government protesters at Pearl Square in the centre of the capital, Manama. Hague informed the House of Commons that he had stressed the need for peaceful action in dealing with the protesters: "At least three people died in the operation, with hundreds more injured. We are greatly concerned about the deaths that have occurred. I have this morning spoken to the Foreign Minister of Bahrain and HM Ambassador spoke last night to the Bahraini Minister of the Interior. In both cases we stressed the need for peaceful action to address the concerns of protesters, the importance of respect for the right to peaceful protest and for freedom of expression".
Hague told Sky News that the use of force by the Libyan authorities during the 2011 Libyan Civil War was "dreadful and horrifying" and called on leader Muammar Gaddafi to respect people's human rights. A vicious crackdown led by special forces, foreign mercenaries and Gaddafi loyalists had been launched in the country's second city, Benghazi, which had been the focus of anti-regime protests. Hague stated to Dermot Murnaghan on Sky: "I think we have to increase the international pressure and condemnation. The United Kingdom condemns what the Libyan Government has been doing and how they have responded to these protests, and we look to other countries to do the same".
Following delays in extracting British citizens from Libya, a disastrous helicopter mission to contact the protesters that ended with eight British diplomats and SAS personnel arrested, and the lack of aircraft carriers or Harriers to enforce a no-fly zone, he was accused by the Labour Opposition in March 2011 of "losing his mojo".
In March 2011, Hague said in a speech to business leaders that the examples being set in north Africa and the Middle East will ultimately transform the relationship between governments and their populations in the region. However following the row over whether Libyan leader Muammar Gaddafi was being targeted by coalition forces, the Foreign Secretary stated that the Libyan people must be free to determine their own future. Hague said: "It is not for us to choose the government of Libya—that is for the Libyan people themselves. But they have a far greater chance of making that choice now than they did on Saturday, when the opposition forces were on the verge of defeat."
Hague warned that autocratic leaders, including Robert Mugabe, President of Zimbabwe, could be shaken and even toppled by a wave of popular uprisings rippling out from north Africa. He said that recent revolts against authoritarian leaders in countries including Libya and Egypt would have a greater historic significance than the 9/11 attacks on the US or the recent financial crisis. He stopped short of threatening military intervention against other dictators, but warned that they would inevitably face "judgement" for oppressing their people and suppressing democracy. Repressive African régimes would also face challenges from their populations and from the international community, the Foreign Secretary said, adding that demands for freedom would spread and that undemocratic governments elsewhere should take heed. He added: "Governments that use violence to stop democratic development will not earn themselves respite forever. They will pay an increasingly high price for actions which they can no longer hide from the world with ease, and will find themselves on the wrong side of history."
Hague, on his way to a summit in Qatar in April 2011, called for intensified sanctions on the Libyan régime and for a clear statement that Gaddafi must go: "we have sent more ground strike aircraft in order to protect civilians. We do look to other countries to do the same, if necessary, over time". "We would like a continued increase in our (NATO's) capability to protect civilians in Libya", he added. Whether NATO ratcheted up operations depended on what happened on the ground, Hague said. "These air strikes are a response to movements of, or attacks from, régime forces so what happens will be dependent on that", he said. Whether the Americans could again be asked to step up their role would also "depend on the circumstances", he added.
Hague, speaking on the protests in Syria, said: "Political reforms should be brought forward and implemented without delay". As many as 60 people were thought to have been killed by security forces in the country that day (22 April 2011), making it the worst day for deaths since protests against President Bashar al-Assad had begun over a month earlier, BBC News reported.
Syria
Speaking on the Syrian Civil War in August 2011, Hague said of military intervention: "It's not a remote possibility. Even if we were in favour [of UN-backed military action], which we are not because there's no call from the Arab League for intervention as in the case of Libya, there is no prospect of a legal, morally sanctioned military intervention." Hague added that it was a "frustrating situation" and that the "levers" at the international community's disposal were severely limited, but said countries had to concentrate on other ways of influencing the Assad government. "We want to see stronger international pressure all round. Of course, to be effective that just can't be pressure from Western nations, that includes from Arab nations... and it includes from Turkey who has been very active in trying to persuade President Assad to reform instead of embarking on these appalling actions", he said. "I would also like to see a United Nations Security Council Resolution to condemn this violence, to call for the release of political prisoners, to call for legitimate grievances to be responded to", he added.
During 2012 the UK started training Syrian opposition activists in Istanbul on media, civil society and local government matters, and supplying non-lethal equipment such as satellite communications and computers.
On 24 February 2012, Hague recognised the Syrian National Council as a "legitimate representative" of the country. Hague also said Bashar al-Assad's government had "forfeited the right to lead" by "miring itself in the blood of innocent people". Hague said: "Today we must show that we will not abandon the Syrian people in their darkest hour". He added: "Those responsible for the murder of entire families, the shelling of homes, the execution of detainees, the cleansing of political opponents and the torture and rape of women and children must be held to account".
In March 2012, Hague ordered the evacuation of all British diplomats from Syria and closed the UK embassy in Damascus because of mounting security threats. Hague told Parliament: "We have maintained an embassy in Damascus despite the violence to help us communicate with all parties in Syria and to provide insight into the situation on the ground". He added: "We now judge that the deterioration of the security situation in Damascus puts our embassy staff and premises at risk." Hague said that his decision "in no way reduces the UK's commitment to active diplomacy to maintain pressure on the Assad régime to end the violence". He went on to say that: "We will continue to work closely with other nations to co-ordinate diplomatic and economic pressure on the Syrian régime."
On 1 April 2012, Hague met representatives of 74 other nations at a Friends of Syria Group conference in Istanbul, Turkey. Hague said the issue could return to the United Nations Security Council if current efforts to resolve the crisis failed. The government of President Assad had said it accepted a peace plan by the UN-Arab League envoy Kofi Annan, but there was little evidence that it was prepared to end its crackdown on the opposition. Hague accused Assad of "stalling for time" and warned that if the issue did return to the Security Council, Assad might no longer be able to rely on the backing of Russia and China, who had blocked a previous resolution calling for him to stand down. "There isn't an unlimited period of time for this, for the Kofi Annan process to work before many of the nations here want us to go back to the UN Security Council—some of them will call for arming the opposition if there isn't progress made," Hague told the BBC. He added: "What is now being put to them is a plan from Kofi Annan supported by the whole United Nations Security Council, and this is an important point, it's supported by Russia and by China as well as by the more obvious countries—the United States, the United Kingdom, France, the Arab League and so on".
On 20 November 2012, Hague recognised the National Coalition for Syrian Revolutionary and Opposition Forces as the "sole legitimate representative" of the Syrian people, and a credible alternative to the current Syrian Government.
On 29 August 2013, the British Parliament refused to ratify the British Government's plan to participate in military strikes against the Syrian Government in the wake of a chemical-weapons attack at Ghouta. Hague denied suggestions that he had threatened to resign over Prime Minister David Cameron's decision to go straight to a parliamentary vote. After the vote, Hague continued to urge other governments to take action against the Syrian Government, saying "If it is decided in the various parliaments of the world that no-one will stand up to the use of chemical weapons and take any action about that, that would be a very alarming moment in the affairs of the world". Ultimately a negotiated agreement was reached to eliminate Syria's chemical weapons.
Proposal of elected EU presidency
In June 2011, Hague dismissed Tony Blair's vision for an elected head of the European Union by insisting that member states had more pressing priorities than further "constitutional tinkering". Hague made his view clear after Blair argued that a directly elected President of Europe, representing almost 400m people from 27 countries, would give the EU clear leadership and enormous authority. In an interview with The Times, Blair set out the agenda that he thought a directly elected EU President should pursue, although he conceded there was "no chance" of such a post being created "at the present time". Asked about the former Prime Minister's call for further European integration and the creation of an elected President, Hague suggested that Blair may have been thinking of the role for himself. "I can't think who he had in mind", Hague joked, adding on a serious note: "Elected presidents are for countries. The EU is not a country and it's not going to become a country, in my view, now or ever in the future. It is a group of countries working together".
Taliban talks
In June 2011, Hague said that Britain helped initiate "distasteful" peace talks with the Taliban in Afghanistan. Hague made the comments while on a three-day tour of the country to meet President Hamid Karzai and visit British troops. He told The Sun newspaper that Britain had led the way in persuading US President Barack Obama's administration that negotiation was the best potential solution to the conflict. Hague admitted that any deal might mean accepting "distasteful things" and could anger military veterans and relatives of the 374 British troops killed in Afghanistan. However, he said he believed that Britain as a whole was "realistic and practical" enough to accept that ending the fighting and starting talks was the best way to safeguard national security. He told the newspaper: "An eventual settlement of these issues is the ultimate and most desirable way of safeguarding that national security." He added: "But reconciliation with people who have been in a military conflict can be very distasteful. In all these types of situations, you do have to face up to some distasteful things." The previous night US President Barack Obama had told Americans that "the tide of war is receding" as he announced plans to withdraw 33,000 US troops from Afghanistan by September 2012.
Comments on the Euro
In September 2011, Hague said that the Euro is "a burning building with no exits" for some of the countries which adopted the currency. Hague first used the expression when he was Conservative Leader in 1998—and said in an interview with The Spectator he had been proved right: "It was folly to create this system. It will be written about for centuries as a kind of historical monument to collective folly. But it's there and we have to deal with it," he said. "I described the Euro as a burning building with no exits and so it has proved for some of the countries in it," he further said, adding "I might take the analogy too far but the Euro wasn't built with exits so it is very difficult to leave it".
Iran
In February 2012, Hague warned in a BBC interview about Iran's "increasing willingness to contemplate" terrorism around the world. He cited the 2011 Iran assassination plot, an attempt to assassinate Adel al-Jubeir, the Saudi Ambassador to the United States, as well as alleged involvement in recent attacks in New Delhi, Georgia, and Bangkok. He said it showed "the danger Iran is currently presenting to the peace of the world".
Hague spoke in the Commons on 20 February about Iran's nuclear programme and said that if the Tehran régime managed to construct a viable weapon, its neighbours would be forced to build their own nuclear warheads too. He accused Iranian President Mahmoud Ahmadinejad of pursuing "confrontational policies" and described the country's enrichment of uranium in defiance of United Nations Security Council resolutions as "a crisis coming steadily down the track". "Our policy is that whilst we remain unswervingly committed to diplomacy, it is important to emphasise to Iran that all options are on the table", Hague told MPs.
In March he condemned the way Iran's parliamentary elections were staged, claiming they were not "free and fair". He said the poll had been held against a backdrop of fear that meant the result would not reflect the will of the people: "It has been clear for some time that these elections would not be free and fair. The régime has presented the vote as a test of loyalty, rather than an opportunity for people freely to choose their own representatives. The climate of fear, created by the régime's crushing of opposition voices since 2009, persists."
Falkland Islands
The 30th anniversary of the beginning of the 1982 Falklands War was on 2 April 2012. On 29 March, before the Lord Mayor of London's banquet guests, namely the entire foreign diplomatic corps of more than 100 ambassadors, including Alicia Castro (Argentinian Ambassador), Hague said the UK was keen to deepen its relationship with Latin America—and reiterated Britain's commitment to the Falklands. He said: "We are reversing Britain's decline in Latin America, where we are opening a new Embassy in El Salvador. This determination to deepen our relations with Latin America is coupled with our steadfast commitment to the right of self-determination of the people of the Falkland Islands".
Tensions over the Falklands had risen in the weeks prior to the anniversary. In February, Hague said the deployments of a British warship, HMS Dauntless, and of the Duke of Cambridge to the Falklands were "entirely routine". Hague said that Britain affirmed the Falklanders' self-determination and would seek to prevent Argentina from "raising the diplomatic temperature" over the issue. He further said: "(the events) are not so much celebrations as commemorations. I think Argentina will also be holding commemorations of those who died in the conflict. Since both countries will be doing that I don't think there is anything provocative about that."
Turks and Caicos Islands
On 12 June 2012, Hague set out Her Majesty's Government's plans for the reintroduction of self-government in the Turks and Caicos Islands, where the Governor had exercised direct rule since the islands were found to have suffered corruption and maladministration under the previous autonomous administration.
Julian Assange and right of asylum
In August 2012, Hague declared that Julian Assange, the founder of the WikiLeaks organisation, would not be granted political asylum by the United Kingdom, and affirmed the UK's willingness to extradite Assange to the Swedish authorities, who had requested his extradition. Swedish prosecutors, unwilling to break diplomatic protocol, refrained from interrogating Assange at the Embassy of Ecuador, London.
Hague confirmed the British Government's position – that it is lawfully obliged to extradite Julian Assange. "We're disappointed by the statement by Ecuador's Foreign Minister today that Ecuador has offered political asylum to Julian Assange. Under our Laws, with Mr. Assange having exhausted all options of appeal, the British authorities are under a binding obligation to extradite him to Sweden. We must carry out that obligation and of course we fully intend to do so," Hague confirmed.
Following an outcry, led by The Guardian, over a Foreign Office note sanctioned by Hague and sent to the Ecuadorian Embassy, in which the UK raised the possibility of revoking the embassy's diplomatic status under the Diplomatic and Consular Premises Act 1987, the Foreign Secretary reaffirmed that the UK remained "committed to a diplomatic solution" and played down any suggestion of a police raid on the embassy, stating "there is no threat here to storm an embassy".
The former ambassador to Uzbekistan, Craig Murray, warned that using the 1987 Act to raid the Ecuadorian Embassy would be in "breach of the Vienna Convention of 1961". Russia warned Britain against violating fundamental diplomatic principles (Vienna Convention on Diplomatic Relations, and in particular the Article 22 spelling out the inviolability of diplomatic premises), which the Government of Ecuador invoked.
Hague is the subject of a portrait in oil commissioned by Parliament.
Leader of the House of Commons and retirement
Once Hague had formally declared his intention not to seek re-election as MP for Richmond at the forthcoming 2015 general election, he told David Cameron he would be standing down as Foreign Secretary. Cameron instigated a Cabinet reshuffle in which Hague became Leader of the House of Commons. Hague remained Cameron's "de facto political deputy", retained his membership of the National Security Council and played a lead role in reaching out to voters in the North of England in the run-up to the general election.
In a surprise motion on his last day in the House of Commons, Hague moved to make the election for Speaker in the next parliament a secret ballot, in what was seen as an effort to oust the incumbent John Bercow for lacking the neutrality expected of a Speaker of the House. Charles Walker, Conservative MP for Broxbourne, Chairman of the Procedure Committee and responsible for Speaker elections, stated that he had written a report about such an idea "years ago" and despite speaking with Hague and Michael Gove earlier that week, neither had told him of any such move. A visibly emotional Walker told the House, "I have been played as a fool. When I go home tonight, I will look in the mirror and see an honourable fool looking back at me. I would much rather be an honourable fool, in this and any other matter, than a clever man." Walker received a standing ovation, mainly from the Labour benches, whilst the Government lost its parliamentary motion by 228 to 202 votes. During the debate the future Father of the House, Gerald Kaufman, denounced Hague, saying: "Is the right hon. Gentleman aware that this grubby decision is what he personally will be remembered for? After a distinguished career in the House of Commons, both as a leader of a party and as a senior Cabinet Minister, he has now descended to squalor in the final days of the Parliament."
He was succeeded as MP for Richmond (Yorks) by future Chancellor of the Exchequer Rishi Sunak.
In retirement
On 9 October 2015, Hague was created Baron Hague of Richmond, of Richmond in the County of North Yorkshire.
In August 2020, Hague endorsed Joe Biden for U.S. president over incumbent Donald Trump, arguing that a Biden victory was in the UK's interest.
Fighting the illegal wildlife trade
While he was Foreign Secretary, Hague and the Duke of Cambridge identified the illegal wildlife trade (IWT) as among the most profitable criminal enterprises in the world, and in 2014 they formed the Transport Task Force (TTF) to combat it. The TTF seeks to identify and stop wildlife trafficking, and its work was continuing in 2020. A Financial Task Force was created in 2018 to help further the goal.
Royal Foundation
In September 2020, Hague was appointed as chairman of the Royal Foundation, a charitable organisation operating under the auspices of the Duke and Duchess of Cambridge, in succession to Sir Keith Mills who retired.
Publications
Hague is an author of political biographies and, since his retirement from public life, has maintained a weekly column, first in the Daily Telegraph and subsequently in The Times. He also writes occasional book reviews and appears on television shows and in radio presentations.
As author
As columnist
On coronavirus
Hague has written pointedly on the coronavirus, arguing as early as 10 February 2020 that "Coronavirus is a calamity for China. It cannot continue its dangerous wildlife practices any longer." Hague wrote on 2 March that "The rise of coronavirus is a clear indication that the degrading of nature will come back to hit humans very hard." He returned to the subject on 13 April, when he said that the "world must act now on wildlife markets or run the risk of worse pandemics in future".
Personal life
Hague married Ffion Jenkins at the Chapel of St Mary Undercroft on 19 December 1997. Ffion Hague is now styled The Lady Hague of Richmond.
Hague serves as a Vice-President of the Friends of the British Library, which provides funding support for the British Library to make new acquisitions. He is a Patron of the European Youth Parliament UK, an educational charity that runs debating competitions and discussion forums across the UK, and is President of the Britain-Australia Society. Hague practises judo and has a keen interest in music, having learned to play the piano shortly after the 2001 general election. He is an enthusiast for the natural history and countryside of his native Yorkshire.
In 2015 Hague purchased a £2.5 million country house, Cyfronydd Hall, in Powys, Wales.
Honours and awards
1998: The Spectator's "Parliamentarian of the Year Award"
2005: History Book of the Year at the British Book Awards, for William Pitt the Younger
2007: The Spectator's "Speech of the Year Award"
2008: The Trustees' Award at the Longman/History Today Awards
2009: Fellow of the Royal Society of Literature (FRSL)
2014: Britain-Australia Society Award for contribution to the relationship between Britain and Australia
2015: Freeman of the City of London
2015: Liveryman of the Worshipful Company of Stationers and Newspaper Makers
2015: Life peerage
2017: Grand Cordon of the Order of the Rising Sun (Japan)
Arms
Hague was granted arms on 7 April 2016.
In popular culture
Hague was portrayed by Alex Avery in the 2015 Channel 4 television film Coalition.
See also
Tory Boy
References
External links
Debrett's People of Today
Profile at the Foreign & Commonwealth Office
Rt Hon William Hague MP official Conservative Party profile
William Hague collected news and commentary at The Telegraph
1961 births
Alumni of Magdalen College, Oxford
English male judoka
British management consultants
British Secretaries of State for Foreign and Commonwealth Affairs
Conservative Party (UK) life peers
Life peers created by Elizabeth II
Conservative Party (UK) MPs for English constituencies
English Anglicans
English biographers
English historians
Fellows of the Royal Society of Literature
First Secretaries of State of the United Kingdom
INSEAD alumni
Leaders of the Conservative Party (UK)
Leaders of the Opposition (United Kingdom)
Living people
McKinsey & Company people
Members of the Privy Council of the United Kingdom
People educated at Ripon Grammar School
People educated at Wath Academy
People from Rotherham
Presidents of the Oxford Union
Presidents of the Oxford University Conservative Association
Secretaries of State for Wales
UK MPs 1987–1992
UK MPs 1992–1997
UK MPs 1997–2001
UK MPs 2001–2005
UK MPs 2005–2010
UK MPs 2010–2015 |
162600 | https://en.wikipedia.org/wiki/Hacktivism | Hacktivism | In Internet activism, hacktivism, or hactivism (a portmanteau of hack and activism), is the use of computer-based techniques such as hacking as a form of civil disobedience to promote a political agenda or social change. With roots in hacker culture and hacker ethics, its ends are often related to free speech, human rights, or freedom of information movements.
Hacktivist activities span many political ideals and issues. Freenet, a peer-to-peer platform for censorship-resistant communication, is a prime example of translating political thought and freedom of speech into code. Hacking as a form of activism can be carried out through a network of activists, such as Anonymous and WikiLeaks, or through a singular activist, working in collaboration toward common goals without an overarching authority figure.
"Hacktivism" is a controversial term with several meanings. The word was coined to characterize electronic direct action as working toward social change by combining programming skills with critical thinking. But just as hack can sometimes mean cyber crime, hacktivism can be used to mean activism that is malicious, destructive, and undermining the security of the Internet as a technical, economic, and political platform.
Origins and definitions
Writer Jason Sack first used the term hacktivism in a 1995 article in conceptualizing New Media artist Shu Lea Cheang's film Fresh Kill. However, the term is frequently attributed to the Cult of the Dead Cow (cDc) member "Omega," who used it in a 1996 e-mail to the group. Due to the variety of meanings of its root words, the definition of hacktivism is nebulous and there exists significant disagreement over the kinds of activities and purposes it encompasses. Some definitions include acts of cyberterrorism while others simply reaffirm the use of technological hacking to effect social change.
Forms and methods
Self-proclaimed "hacktivists" often work anonymously, sometimes operating in groups and at other times as a lone wolf with several cyber-personas, all corresponding to one activist within the cyberactivism umbrella that has been gaining public interest and power in pop culture. Hacktivists generally operate under apolitical ideals, expressing uninhibited ideas or abuse without being scrutinized by society while representing or defending those ideas publicly under an anonymous identity, which gives them a sense of power in the cyberactivism community.
In order to carry out their operations, hacktivists might create new tools; or integrate or use a variety of software tools readily available on the Internet. One class of hacktivist activities includes increasing the accessibility of others to take politically motivated action online.
Repertoire of contention of hacktivism includes among others:
Code: Software and websites can achieve political goals. For example, the encryption software PGP can be used to secure communications; PGP's author, Phil Zimmermann, said he distributed it first to the peace movement. Jim Warren suggests PGP's wide dissemination was in response to Senate Bill 266, authored by Senators Biden and DeConcini, which demanded that "...communications systems permit the government to obtain the plain text contents of voice, data, and other communications...". WikiLeaks is an example of a politically motivated website: it seeks to "keep governments open". (A minimal encryption sketch appears after this list.)
Mirroring: Website mirroring is used as a circumvention tool to bypass censorship blocks on websites. The technique copies the contents of a censored website and disseminates it on other domains and sub-domains that are not censored. Document mirroring, similar to website mirroring, is a technique that focuses on backing up various documents and other works. RECAP is software that was written with the purpose to "liberate US case law" and make it openly available online; the project takes the form of distributed document collection and archival. Major mirroring projects include initiatives such as the Internet Archive and Wikisource. (A single-page mirroring sketch also appears after this list.)
Anonymity: a method of speaking out to a wide audience about human rights issues, government oppression, etc. that utilizes various web tools such as free and/or disposable email accounts, IP masking, and blogging software to preserve a high level of anonymity.
Doxing: The practice in which private and/or confidential documents and records are hacked into and made public. Hacktivists view this as a form of assured transparency, experts claim it is harassment.
Denial-of-service attacks: These attacks, commonly referred to as DoS attacks, use large arrays of personal and public computers that hackers take control of via malware executable files usually transmitted through email attachments or website links. After taking control, these computers act like a herd of zombies, redirecting their network traffic to one website, with the intention of overloading servers and taking a website offline.
Virtual sit-ins: Similar to DoS attacks but executed by individuals rather than software, a large number of protesters visit a targeted website and rapidly load pages to overwhelm the site with network traffic to slow the site or take it offline.
Website defacements: Hackers infiltrate a web server to replace a specific web page with one of their own, usually to convey a specific message.
Website redirects: This method involves changing the address of a website within the server so would-be visitors of the site are redirected to a site created by the perpetrator, typically to denounce the original site.
Geo-bombing: a technique in which netizens add a geo-tag while editing YouTube videos so that the location of the video can be seen in Google Earth.
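To make the "code" entry above concrete, here is a minimal sketch of encrypting a message to a correspondent's PGP public key, assuming a local GnuPG installation and the third-party python-gnupg package; the key file name and message text are hypothetical, and a real deployment would verify key fingerprints out of band.

```python
# Minimal PGP encryption sketch using the python-gnupg wrapper.
# Assumes GnuPG is installed; "activist_pubkey.asc" is a hypothetical
# ASCII-armored public key file obtained from the correspondent.
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory

with open("activist_pubkey.asc") as f:
    import_result = gpg.import_keys(f.read())

# Encrypt so that only the holder of the matching private key can read it.
encrypted = gpg.encrypt("meeting moved to 19:00", import_result.fingerprints)
assert encrypted.ok, encrypted.status
print(str(encrypted))  # ASCII-armored ciphertext, safe to send over email
```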
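And to illustrate the mirroring entry, the following toy sketch fetches a single public page and saves a local copy that could be re-hosted on an uncensored domain. It assumes the third-party requests package; real mirroring tools additionally crawl links and rewrite asset references.

```python
# Toy single-page mirror: fetch one page and store a local copy.
import pathlib
import requests

def mirror_page(url: str, out_dir: str = "mirror") -> pathlib.Path:
    """Download one page and save it for re-hosting elsewhere."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    # Derive a simple filename from the last URL path segment.
    name = url.rstrip("/").rsplit("/", 1)[-1] or "index"
    path = pathlib.Path(out_dir) / (name + ".html")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(response.text, encoding=response.encoding or "utf-8")
    return path

if __name__ == "__main__":
    print(mirror_page("https://example.org/"))
```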
Controversy
Depending on who is using the term, hacktivism can be a politically motivated technology hack, a constructive form of anarchic civil disobedience, or an undefined anti-systemic gesture. It can signal anticapitalist or political protest; it can denote anti-spam activists, security experts, or open source advocates.
Some people describing themselves as hacktivists have taken to defacing websites for political reasons, such as attacking and defacing websites of governments and those who oppose their ideology. Others, such as Oxblood Ruffin (the "foreign affairs minister" of Cult of the Dead Cow and Hacktivismo), have argued forcefully against definitions of hacktivism that include web defacements or denial-of-service attacks.
Hacktivism is often seen as shadowy due to its anonymity, commonly attributed to the work of fringe groups and outlying members of society. The lack of responsible parties to be held accountable for the social-media attacks performed by hactivists has created implications in corporate and federal security measures both on and offline.
While some self-described hacktivists have engaged in DoS attacks, critics suggest that DoS attacks are an attack on free speech and that they have unintended consequences. DoS attacks waste resources and they can lead to a "DoS war" that nobody will win. In 2006, Blue Security attempted to automate a DoS attack against spammers; this led to a massive DoS attack against Blue Security which knocked them, their old ISP and their DNS provider off the Internet, destroying their business.
Following denial-of-service attacks by Anonymous on multiple sites, in reprisal for the apparent suppression of WikiLeaks, John Perry Barlow, a founding member of the EFF, said "I support freedom of expression, no matter whose, so I oppose DDoS attacks regardless of their target... they're the poison gas of cyberspace...". On the other hand, Jay Leiderman, an attorney for many hacktivists, argues that DDoS can be a legitimate form of protest speech in situations that are reasonably limited in time, place and manner.
Notable hacktivist events
In 1990, the Hong Kong Blondes helped Chinese citizens get access to blocked websites by targeting the Chinese computer networks. The group identified holes in the Chinese internet system, particularly in the area of satellite communications. The leader of the group, Blondie Wong, also described plans to attack American businesses that were partnering with China.
In 1996, the title of the United States Department of Justice's homepage was changed to "Department of Injustice". Pornographic images were also added to the homepage to protest the Communications Decency Act.
In 1998, members of the Electronic Disturbance Theater created FloodNet, a web tool that allowed users to participate in DDoS attacks (or what they called electronic civil disobedience) in support of Zapatista rebels in Chiapas.
In December 1998, a hacktivist group from the US called Legions of the Underground emerged. They declared a cyberwar against Iraq and China and planned on disabling internet access in retaliation for the countries' human rights abuses. Opposing hackers criticized this move by Legions of the Underground, saying that by shutting down internet systems, the hacktivist group would have no impact on providing free access to information.
In July 2001, Hacktivismo, a sect of the Cult of the Dead Cow, issued the "Hacktivismo Declaration". This served as a code of conduct for those participating in hacktivism, and declared the hacker community's goals of stopping "state-sponsored censorship of the Internet" as well as affirming the rights of those therein to "freedom of opinion and expression".
During the 2009 Iranian election protests, Anonymous played a role in disseminating information to and from Iran by setting up the website Anonymous Iran; they also released a video manifesto to the Iranian government.
Google worked with engineers from SayNow and Twitter to provide communications for the Egyptian people in response to the government sanctioned Internet blackout during the 2011 protests. The result, Speak To Tweet, was a service in which voicemail left by phone was then tweeted via Twitter with a link to the voice message on Google's SayNow.
On Saturday 29 May 2010 a hacker calling himself ‘Kaka Argentine’ hacked into the Ugandan State House website and posted a conspicuous picture of Adolf Hitler with the swastika, a Nazi Party symbol.
During the Egyptian Internet black out, January 28 – February 2, 2011, Telecomix provided dial up services, and technical support for the Egyptian people. Telecomix released a video stating their support of the Egyptian people, describing their efforts to provide dial-up connections, and offering methods to avoid internet filters and government surveillance. The hacktivist group also announced that they were closely tracking radio frequencies in the event that someone was sending out important messages.
Project Chanology, also known as "Operation Chanology", was a hacktivist protest against the Church of Scientology to punish the church for participating in Internet censorship relating to the removal of material from a 2008 interview with Church of Scientology member Tom Cruise. Hacker group Anonymous attempted to "expel the church from the Internet" via DDoS attacks. In February 2008 the movement shifted toward legal methods of nonviolent protesting. Several protests were held as part of Project Chanology, beginning in 2008 and ending in 2009.
On June 3, 2011, LulzSec took down a website of the FBI. This was the first time they had targeted a website that was not part of the private sector. That week, the FBI was able to track the leader of LulzSec, Hector Xavier Monsegur.
On June 20, 2011 LulzSec targeted the Serious Organised Crime Agency of the United Kingdom, causing UK authorities to take down the website.
In August 2011 a member of Anonymous working under the name "Oliver Tucket" took control of the Syrian Defense Ministry website and added an Israeli government web portal in addition to changing the mail server for the website to one belonging to the Chinese navy.
Anonymous and New World Hackers claimed responsibility for the 2016 Dyn cyberattack in retaliation for Ecuador's rescinding of Internet access to WikiLeaks founder Julian Assange at its embassy in London. WikiLeaks alluded to the attack. Subsequently, FlashPoint stated that the attack was most likely done by script kiddies.
In 2013, as an online component to the Million Mask March, Anonymous in the Philippines crashed 30 government websites and posted a YouTube video to congregate people in front of the parliament house on November 5 to demonstrate their disdain toward the Filipino government.
In 2014, Sony Pictures Entertainment was hacked by a group by the name of Guardians of Peace (GOP), which obtained over 100 terabytes of data, including unreleased films, employee salaries, social security data, passwords, and account information. GOP hacked various social media accounts and hijacked them by changing their passwords to diespe123 (die pictures entertainment) and posting threats on the pages.
British hacker Kane Gamble, who was sentenced to 2 years in youth detention, posed as John Brennan, the then director of the CIA, and Mark F. Giuliano, a former deputy director of the FBI, to access highly sensitive information. The judge said Gamble engaged in "politically motivated cyber-terrorism."
In 2021, Anonymous hacked and leaked the databases of American web hosting company Epik.
Notable hacktivist peoples/groups
WikiLeaks
WikiLeaks was founded in 2006 by Julian Assange as a "multi-national media organization and associated library." WikiLeaks operated under the principle of "principled leaking," in order to fight societal corruption. The not-for-profit functions as a whistleblowing organization that serves as an archive of classified documents. Originally, WikiLeaks was operated with the principles of a wiki site, meaning that users could post documents, edit others' documents, and help decide which materials were posted.
The first notable release of documents by WikiLeaks was the release of Afghanistan War logs. In July 2010, WikiLeaks published over 90,000 documents regarding the war in Afghanistan. Prior to the leak, WikiLeaks gave access to the documents to three newspapers. Though WikiLeaks did not identify a source for the documents, it was speculated that the leak came from Chelsea Manning, a U.S. Army intelligence analyst arrested in May 2010 and accused of leaking classified information. The war logs revealed 144 incidents of formerly unreported civilian casualties by the U.S. military. The leak of the Afghanistan war logs was the greatest military leak in United States history.
WikiLeaks is also notable for its leak of over 20,000 confidential emails and 8,000 file attachments from the Democratic National Committee (DNC), on July 22, 2016. The emails are specifically from the inboxes of seven prominent staffers of the DNC, and they were leaked as a searchable database. The emails leaked showed instances of key DNC staffers working to undermine Senator Bernie Sanders' presidential campaign prior to primary elections, which was directly against the DNC's stated neutrality in primary elections. Examples of targeting Senator Bernie Sanders included targeting his religion, hoping for his dropping out of the race, constructing negative narratives about his campaign and more. Other emails revealed criticism of President Barack Obama for not helping more in fundraising. Following the leak, DNC chairwoman Debbie Wasserman Schultz announced she would be stepping down from her position in the DNC. On July 25, 2016, the Democratic National Convention opened without Wasserman Schultz. The DNC issued an apology to Sanders the same day the Democratic National Convention opened.
Anonymous
Perhaps the most prolific and well-known hacktivist group, Anonymous has been prominent and prevalent in many major online hacks over the past decade. Anonymous originated on the forums of 4chan in 2003 but did not rise to prominence until 2008, when it directly attacked the Church of Scientology in a massive DoS attack. Since then, Anonymous has participated in a great number of online projects such as Operation Payback and Operation Safe Winter. While many of its projects have been for charitable causes, the group has gained notoriety in the media because its work mostly consists of illegal hacking.
Following the Paris terror attacks in 2015, Anonymous posted a video declaring war on ISIS, the terror group that claimed responsibility for the attacks. Anonymous has since identified several Twitter accounts associated with the movement in an effort to stop the distribution of ISIS propaganda. However, Anonymous came under heavy criticism when Twitter issued a statement calling the lists Anonymous had compiled "wildly inaccurate," as they contained accounts of journalists and academics rather than members of ISIS.
Anonymous has also been involved with the Black Lives Matter movement. Early in July 2016, a rumor circulated that Anonymous was calling for "Day of Rage" protests in retaliation for the shootings of Alton Sterling and Philando Castile, which would entail violent protests and riots. This rumor was based on a video that was not posted with the official Anonymous YouTube account. None of the Twitter accounts associated with Anonymous had tweeted anything about a Day of Rage, and the rumors were identical to ones that had circulated in 2014 following the death of Michael Brown. Instead, on July 15, a Twitter account associated with Anonymous posted a series of tweets calling for a day of solidarity with the Black Lives Matter movement. The account used the hashtag "#FridayofSolidarity" to coordinate protests across the nation and emphasized that the Friday of Solidarity was intended for peaceful protests. The account also stated that the group was unaware of any Day of Rage plans.
In February 2017, the group took down more than 10,000 dark web sites related to child pornography.
DkD[||
DkD[||, a French cyberhacktivist, defaced more than 2,000 websites, many of them government and US military sites, including navy.mil (the website of the US Navy) and defensivethinking.com (the company of the well-known hacker Kevin Mitnick).
He was arrested in March 2003 by the OCLCTIC (office central de lutte contre la criminalité liée aux technologies de l'information et de la communication, the French central office for combating information and communication technology crime).
Eric Voulleminot of the Regional Service of Judicial Police in Lille described the young hacker as "the most wanted hacktivist in France".
DkD[|| was well known in the defacement underground for his political views, and he carried out all of his defacements for political reasons. When news of his arrest spread through the underground, a crew called The Ghost Boys defaced many sites using the slogan "Free DkD[||!!", recalling the "Free Kevin" campaign that followed Mitnick's arrest.
LulzSec
In May 2011, five members of Anonymous formed the hacktivist group Lulz Security, otherwise known as LulzSec. LulzSec's name originated from the conjunction of the internet slang term "lulz", meaning laughs, and "sec", meaning security. The group members used specific handles to identify themselves on Internet Relay Chat channels, the most notable being "Sabu," "Kayla," "T-Flow," "Topiary," "AVUnit," and "Pwnsauce." Though the members of LulzSec would spend up to 20 hours a day in communication, they did not know one another personally, nor did they share personal information. For example, once the members' identities were revealed, "T-Flow" turned out to be 15 years old; other members, on the basis of his advanced coding ability, had thought he was around 30.
One of the first notable targets LulzSec pursued was HBGary, attacked in response to a claim by the technology security company that it had identified members of Anonymous. Following this, the members of LulzSec targeted an array of companies and entities, including but not limited to Fox Television, the Tribune Company, PBS, Sony, Nintendo, and the Senate.gov website. These attacks typically involved gaining access to and downloading confidential user information, or defacing the website in question. Though LulzSec was not as strongly political as WikiLeaks or Anonymous, the group shared similar sentiments about the freedom of information. One of its distinctly politically driven attacks targeted the Arizona State Police in response to new immigration laws.
The group's first attack to garner significant government attention came in 2011, when it took down a website affiliated with the FBI. Following the incident, the leader of LulzSec, "Sabu," was identified by the FBI as Hector Xavier Monsegur, and he was the first of the group to be arrested. Immediately following his arrest, Monsegur admitted to criminal activity. He then began cooperating with the US government, helping FBI authorities to arrest eight of his co-conspirators, prevent 300 potential cyberattacks, and identify vulnerabilities in existing computer systems. In August 2011, Monsegur pleaded guilty to "computer hacking conspiracy, computer hacking, computer hacking in furtherance of fraud, conspiracy to commit access device fraud, conspiracy to commit bank fraud, and aggravated identity theft pursuant to a cooperation agreement with the government." He served a total of one year and seven months and was ordered to pay a $1,200 fine.
Related practices
Culture jamming
Hacking has sometimes been described as a form of culture jamming. The term refers to the practice of subverting and criticizing political messages and media culture with the aim of challenging the status quo. It is often targeted at subliminal thought processes in viewers, with the goal of raising awareness and causing a paradigm shift. Culture jamming takes many forms, including billboard hacking, broadcast signal intrusion, ad hoc art performances, simulated legal transgressions, memes, and artivism.
The term "culture jamming" was first coined in 1984 by American musician Donald Joyce of the band Negativland. However, some speculation remains as to when the practice of culture jamming first began. Social researcher Vince Carducci believes culture jamming can be traced back to the 1950s with European social activist group Situationist International. Author and cultural critic Mark Dery believes medieval carnival is the earliest form of culture jamming as a way to subvert the social hierarchy at the time.
Culture jamming is sometimes confused with acts of vandalism. However, unlike culture jamming, the main goal of vandalism is to cause destruction, with any political themes being of lesser importance. Artivism is the most contested form of culture jamming because it usually involves the defacement of property.
Media hacking
Media hacking refers to the use of various electronic media in an innovative or otherwise abnormal fashion to convey a message to as large an audience as possible, primarily via the World Wide Web. A popular and effective means of media hacking is posting on a blog, as a blog is usually controlled by one or more independent individuals, uninfluenced by outside parties. The concept of social bookmarking, as well as Web-based Internet forums, may cause such a message to be seen by users of other sites as well, increasing its total reach.
Media hacking is commonly employed for political purposes, by both political parties and political dissidents. A good example is the 2008 US election, in which both the Democratic and Republican parties used a wide variety of media to convey relevant messages to an increasingly Internet-oriented audience. At the same time, political dissidents used blogs and other social media like Twitter to reply to the presidential candidates on an individual basis. In particular, sites like Twitter proved to be important means of gauging popular support for the candidates, though such sites are often used for dissident purposes rather than shows of positive support.
Mobile technology has also become subject to media hacking for political purposes. SMS has been widely used by political dissidents as a means of quickly and effectively organising smart mobs for political action. This has been most effective in the Philippines, where SMS media hacking has twice had a significant impact on whether the country's presidents were elected or removed from office.
Reality hacking
Reality hacking is any phenomenon that emerges from the nonviolent use of illegal or legally ambiguous digital tools in pursuit of politically, socially, or culturally subversive ends. These tools include website defacements, URL redirections, denial-of-service attacks, information theft, web-site parodies, virtual sit-ins, and virtual sabotage.
Art movements such as Fluxus and Happenings in the 1970s created a climate of receptivity to loose-knit organizations and group activities characterized by spontaneity, a return to primitivist behavior, and an ethic in which socially engaged art practices became tantamount to aesthetic concerns.
The conflation of these two histories in the mid-to-late 1990s resulted in cross-overs between virtual sit-ins, electronic civil disobedience, denial-of-service attacks, as well as mass protests in relation to groups like the International Monetary Fund and the World Bank. The rise of collectives, net.art groups, and those concerned with the fluid interchange of technology and real life (often from an environmental concern) gave birth to the practice of "reality hacking".
Reality hacking relies on tweaking the everyday communications most easily available to individuals with the purpose of awakening the political and community conscience of the larger population. The term first came into use among New York and San Francisco artists, but has since been adopted by a school of political activists centered around culture jamming.
In fiction
The 1999 science fiction-action film The Matrix, among others, popularized the simulation hypothesis — the suggestion that reality is in fact a simulation of which those affected by the simulants are generally unaware. In this context, "reality hacking" is reading and understanding the code which represents the activity of the simulated reality environment (such as Matrix digital rain) and also modifying it in order to bend the laws of physics or otherwise modify the simulated reality.
Reality hacking as a mystical practice is explored in the Gothic-Punk aesthetics-inspired White Wolf urban fantasy role-playing game Mage: The Ascension. In this game, the Reality Coders (also known as Reality Hackers or Reality Crackers) are a faction within the Virtual Adepts, a secret society of mages whose magick revolves around digital technology. They are dedicated to bringing the benefits of cyberspace to real space. To do this, they had to identify, for lack of a better term, the "source code" that allows our Universe to function. And that is what they have been doing ever since. Coders infiltrated a number of levels of society in order to gather the greatest compilation of knowledge ever seen. One of the Coders' more overt agendas is to acclimate the masses to the world that is to come. They spread Virtual Adept ideas through video games and a whole spate of "reality shows" that mimic virtual reality far more than "real" reality. The Reality Coders consider themselves the future of the Virtual Adepts, creating a world in the image of visionaries like Grant Morrison or Terence McKenna.
In a location-based game (also known as a pervasive game), reality hacking refers to tapping into phenomena that exist in the real world, and tying them into the game story universe.
Academic interpretations
There have been various academic approaches to deal with hacktivism and urban hacking. In 2010, Günther Friesinger, Johannes Grenzfurthner and Thomas Ballhausen published an entire reader dedicated to the subject. They state: "Urban spaces became battlefields, signifiers have been invaded, new structures have been established: Netculture replaced counterculture in most parts and also focused on the everchanging environments of the modern city. Important questions have been brought up to date and reasked, taking current positions and discourses into account. The major question still remains, namely how to create culturally based resistance under the influence of capitalistic pressure and conservative politics."
See also
Crypto-anarchism
Cyberterrorism
E-democracy
Open-source governance
Patriotic hacking
Tactical media
1984 Network Liberty Alliance
Chaos Computer Club
Cicada 3301
Decocidio
Jester
Internet vigilantism
The Internet's Own Boy – a documentary film
milw0rm
2600: The Hacker Quarterly
Citizen Lab
HackThisSite
Cypherpunk
Jeremy Hammond
Mr. Robot – a television series
References
Further reading
Olson, Parmy (2013-05-14). We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency.
Coleman, Gabriella (2014-11-04). Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Verso Books.
Shantz, Jeff; Tomblin, Jordon (2014-11-28). Cyber Disobedience: Re://Presenting Online Anarchy. John Hunt Publishing.
Deseriis, Marco (2017). Hacktivism: On the Use of Botnets in Cyberattacks. Theory, Culture & Society 34(4): 131–152.
External links
Hacktivism and Politically Motivated Computer Crime – history, types of activity and case studies
Activism by type
Hacking (computer security)
Politics and technology
Internet terminology
2000s neologisms
Culture jamming techniques
Hacker culture
Articles containing video clips
Geek Code
The Geek Code, developed in 1993, is a series of letters and symbols used by self-described "geeks" to inform fellow geeks about their personality, appearance, interests, skills, and opinions. The idea is that everything that makes a geek individual can be encoded in a compact format which only other geeks can read. This is deemed to be efficient in some sufficiently geeky manner.
It was once common practice to use a geek code as one's email or Usenet signature, but the last official version of the code was produced in 1996, and it has now largely fallen out of use.
History
The Geek Code was invented by Robert A. Hayden in 1993 and was defined at geekcode.com. It was inspired by a similar code for the bear subculture, which in turn was inspired by the Yerkes spectral classification system for describing stars.
After a number of updates, the last revision of the code was v3.12, in 1996.
Some alternative encodings have also been proposed. For example, the 1997 Acorn Code was a version specific to users of Acorn's RISC OS computers.
Format
Geek codes can be written in two formats, either as a simple string:
GED/J d-- s:++>: a-- C++(++++) ULU++ P+ L++ E---- W+(-) N+++ o+ K+++ w--- O- M+ V-- PS++>$ PE++>$ Y++ PGP++ t- 5+++ X++ R+++>$ tv+ b+ DI+++ D+++ G+++++ e++ h r-- y++**
...or as a "Geek Code Block", a parody of the output produced by the encryption program PGP:
-----BEGIN GEEK CODE BLOCK-----
Version: 3.1
GED/J d-- s:++>: a-- C++(++++) ULU++ P+ L++ E---- W+(-) N+++ o+ K+++ w---
O- M+ V-- PS++>$ PE++>$ Y++ PGP++ t- 5+++ X++ R+++>$ tv+ b+ DI+++ D+++
G+++++ e++ h r-- y++**
------END GEEK CODE BLOCK------
Note that this latter format has a line specifying the version of Geek Code being used.
(Both these examples use Hayden's own geek code.)
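For illustration, the block format can be generated mechanically from the string format. Below is a minimal Python sketch (an illustration only, not part of the specification; the function name and the 72-character wrapping width are assumptions, and the BEGIN/END lines are copied verbatim from the example above):

import textwrap

def geek_code_block(code: str, version: str = "3.1") -> str:
    # Wrap a Geek Code string in the PGP-parody "Geek Code Block" layout,
    # reflowing the code across lines of at most 72 characters.
    body = "\n".join(textwrap.wrap(code, width=72))
    return ("-----BEGIN GEEK CODE BLOCK-----\n"
            f"Version: {version}\n"
            f"{body}\n"
            "------END GEEK CODE BLOCK------")

print(geek_code_block("GED/J d-- s:++>: a-- C++(++++) ULU++ P+ L++ E----"))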
Encoding
Occupation
The code starts with the letter G (for Geek), followed by the geek's occupation(s): GMU for a geek of music, GCS for a geek of computer science, etc. There are 28 occupations that can be represented; GAT is for geeks that can do anything and everything, and "usually precludes the use of other vocational descriptors".
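As a minimal illustration of this prefix convention, consider the following Python sketch; it covers only three of the 28 codes, with expansions taken from the description above ("Geek of All Trades" for GAT is an assumption consistent with that description):

# A sketch of the occupation prefix described above; the dictionary is
# deliberately incomplete, and unknown codes are passed through unchanged.
OCCUPATIONS = {
    "GMU": "Geek of Music",
    "GCS": "Geek of Computer Science",
    "GAT": "Geek of All Trades",  # can do anything and everything
}

def occupation(code: str) -> str:
    # The occupation field is the first whitespace-delimited token, e.g.
    # "GCS" in "GCS d-- s:++ ...". Multiple occupations may be joined
    # with "/", as in the "GED/J" example earlier.
    token = code.split()[0]
    return "/".join(OCCUPATIONS.get(part, part) for part in token.split("/"))

print(occupation("GCS d-- s:++ a-- C++"))  # -> Geek of Computer Science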
Categories
The Geek Code website contains the complete list of categories, along with all of the special syntax options.
Decoding
There have been several "decoders" produced to transform a specific geek code into English, including the following (a minimal illustrative sketch appears after this list):
Bradley M. Kuhn, in late 1998, made Williams' program available as a web service.
Joe Reiss made a similar page available in October 1999.
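A decoder along these lines can be sketched in a few lines of Python. The sketch below is not any of the decoders listed above: the category meanings and rating glosses are a small assumed subset for illustration, and modifiers such as parentheses, ">", "$", and "@" are ignored:

import re

# Assumed, deliberately incomplete glosses for a few single-letter categories.
CATEGORIES = {
    "d": "dress", "s": "shape", "a": "age", "C": "computers",
    "P": "Perl", "L": "Linux", "E": "Emacs", "t": "Star Trek",
}
RATINGS = {
    "++++": "extremely pro", "+++": "very pro", "++": "pro",
    "+": "mildly pro", "": "neutral on", "-": "mildly anti",
    "--": "anti", "---": "very anti", "----": "extremely anti",
}

def decode(code: str) -> None:
    # Skip the leading occupation token (e.g. "GCS"), then split each
    # remaining token into a category label and a run of +/- symbols,
    # allowing an optional ":" between them as in "s:++".
    for token in code.split()[1:]:
        m = re.match(r"([A-Za-z]+):?([+-]*)", token)
        if not m:
            continue
        label, rating = m.groups()
        topic = CATEGORIES.get(label)
        if topic:
            print(f"{token:8s} -> {RATINGS.get(rating, rating)} ({topic})")

decode("GCS d-- s:++ a-- C++ P+ L++ E---- t-")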
See also
Leet speak
Newspeak
The Natural Bears Classification System
Signature block
References
External links
Robert Hayden's official Geek Code web site (presenting v3.12)
Internet self-classification codes
Internet culture
Lifestyle websites
Nerd culture
Strategic lawsuit against public participation
A strategic lawsuit against public participation (SLAPP), SLAPP suit, or intimidation lawsuit is intended to censor, intimidate, and silence critics by burdening them with the cost of a legal defense until they abandon their criticism or opposition.
In the typical SLAPP, the plaintiff does not normally expect to win the lawsuit. The plaintiff's goals are accomplished if the defendant succumbs to fear, intimidation, mounting legal costs, or simple exhaustion and abandons the criticism. In some cases, repeated frivolous litigation against a defendant may raise the cost of directors and officers liability insurance for that party, interfering with an organization's ability to operate. A SLAPP may also intimidate others from participating in the debate. A SLAPP is often preceded by a legal threat. SLAPPs bring about freedom of speech concerns due to their chilling effect and are often difficult to filter out and penalize because the plaintiffs attempt to obfuscate their intent to censor, intimidate, or silence their critics.
To protect freedom of speech, some jurisdictions have passed anti-SLAPP laws (often called SLAPP-back laws). These laws often function by allowing a defendant to file a motion to strike and/or dismiss on the grounds that the case involves protected speech on a matter of public concern. The plaintiff then bears the burden of showing a probability that they will prevail. If the plaintiffs fail to meet their burden, their claim is dismissed, and the plaintiffs may be required to pay a penalty for bringing the case.
Anti-SLAPP laws occasionally come under criticism from those who believe that there should not be barriers to the right to petition for those who sincerely believe they have been wronged, regardless of ulterior motives. Hence, the difficulty in drafting SLAPP legislation, and in applying it, is to craft an approach which affords an early termination to invalid, abusive suits, without denying a legitimate day in court to valid good faith claims. Anti-SLAPP laws are generally considered to have a favorable effect, and many lawyers have fought to enact stronger laws protecting against SLAPPs.
Characteristics
SLAPP is a form of strategic litigation or impact litigation. SLAPPs take various forms. The most common used to be a civil suit for defamation, which in the English common law tradition was a tort. The common law of libel dates to the early 17th century and, unlike most English law, is reverse onus, meaning that once someone alleges a statement is libelous, the burden is on the defendant to prove that it is not. In England and Wales, the Defamation Act 2013 removed most of the uses of defamation as a SLAPP by requiring the proof of special damage. Various abuses of this law including political libel (criticism of the political actions or views of others) have ceased to exist in most places, but persist in some jurisdictions (notably British Columbia and Ontario) where political views can be held as defamatory.
A common feature of SLAPPs is forum shopping, wherein plaintiffs find courts that are more favourable towards the claims to be brought than the court in which the defendant (or sometimes plaintiffs) live.
Other widely mentioned elements of a SLAPP are the actual effectiveness at silencing critics, the timing of the suit, inclusion of extra or spurious defendants (such as relatives or hosts of legitimate defendants), inclusion of plaintiffs with no real claim (such as corporations that are affiliated with legitimate plaintiffs), making claims that are very difficult to disprove or rely on no written record, ambiguous or deliberately mangled wording that lets plaintiffs make spurious allegations without fear of perjury, refusal to consider any settlement (or none other than cash), characterization of all offers to settle as insincere, extensive and unnecessary demands for discovery, attempts to identify anonymous or pseudonymous critics, appeals on minor points of law, demands for broad rulings when appeal is accepted on such minor points of law, and attempts to run up defendants' costs even if this clearly costs more to the plaintiffs.
Several jurisdictions have passed anti-SLAPP laws, designed to remove such cases from court quickly. In many cases, the plaintiff is also required to pay a penalty for bringing the case, known as a SLAPP-back.
History
The acronym was coined in the 1980s by University of Denver professors Penelope Canan and George W. Pring. The term was originally defined as "a lawsuit involving communications made to influence a governmental action or outcome, which resulted in a civil complaint or counterclaim filed against nongovernment individuals or organizations on a substantive issue of some public interest or social significance." The concept's originators later dropped the notion that government contact had to be about a public issue to be protected by the right to petition the government, as provided in the First Amendment. It has since been defined less broadly by some states, and more broadly in one state (California) where it includes suits about speech on any public issue.
The original conceptualization proffered by Canan and Pring emphasized the right to petition as protected in the United States under the US Constitution's specific protection in the First Amendment's fifth clause. It is still definitional: SLAPPs are civil lawsuits filed against those who have communicated to government officialdom (in its entire constitutional apparatus). The right to petition, granted by Edgar the Peaceful, King of England in the 10th century, antedates Magna Carta in terms of its significance in the development of democratic institutions. As currently conceived, the right claims that democracy cannot properly function in the presence of barriers between the governed and the governing.
New York Supreme Court Judge J. Nicholas Colabella said in reference to SLAPPs: "Short of a gun to the head, a greater threat to First Amendment expression can scarcely be imagined." Gordon v. Morrone, 590 N.Y.S.2d 649, 656 (N.Y. Sup. Ct. 1992). A number of jurisdictions have made such suits illegal, provided that the appropriate standards of journalistic responsibility have been met by the critic.
Jurisdictional variations
Australia
In the Australian Capital Territory, the Protection of Public Participation Act 2008 protects conduct intended to influence public opinion or promote or further action in relation to an issue of public interest. A party starting or maintaining a proceeding against a defendant for an improper purpose may be ordered to pay a financial penalty to the Territory.
Canada
Some political libel and forum shopping incidents, both common in Canada, have been called SLAPPs, because such suits load defendants with the costs of responding in unfamiliar jurisdictions or at times (typically elections) when they are extremely busy and short of funds. Both types of suit are somewhat particular to Canada, so there has been little academic examination of whether political subject matter or a remote forum is a clear indicator of a SLAPP.
Canada's three most populous provinces (Quebec, British Columbia, and Ontario) have enacted anti-SLAPP legislation.
British Columbia
One of the first cases in Canada to be explicitly ruled a SLAPP was Fraser v. Saanich (see [1999] B.C.J. No. 3100 (B.C. S.C.)) (QL), where the British Columbia Supreme Court struck out the claim of a hospital director against the District of Saanich, holding that it was a meritless action designed to silence or intimidate the residents who were opposed to the plaintiff's plan to redevelop the hospital facilities.
Following the decision in Fraser v. Saanich, the Protection of Public Participation Act (PPPA) went into effect in British Columbia in April 2001; it was repealed in August 2001. There was extensive debate on its merits, on the necessity of hard criteria for judges, and on whether the Act tended to reduce or increase abuse of process. The debate was largely shaped by the first case to discuss and apply the PPPA, Home Equity Development v. Crow, in which the defendants' application to dismiss the action against them was dismissed because they failed to meet the burden of proof required by the PPPA, namely that the plaintiffs had no reasonable prospect of success. While it was not the subject of the case, some felt that the plaintiffs had not brought their action for an improper purpose, that the suit did not inhibit the defendants in their public criticism of the particular project, and that the Act was therefore ineffective in this case.
Since the repeal, BC activists, especially the BCCLA, have argued repeatedly for a broad understanding of SLAPPs and a broad interpretation of judicial powers, especially in intervener applications in BC and other common law jurisdictions, and when arguing for new legislation to prevent SLAPPs. The activist literature contains extensive research on particular cases and criteria. The West Coast Environmental Law organization agrees and generally considers BC to lag behind other jurisdictions.
In March 2019, the legislature voted unanimously to pass another anti-SLAPP bill, the Protection of Public Participation Act.
Nova Scotia
A private member's bill introduced in 2001 by Graham Steele (NDP, Halifax Fairview) proposed a "Protection of Public Participation Act" to dismiss proceedings or claims brought or maintained for an improper purpose, awarding punitive or exemplary damages (effectively, a "SLAPP back") and protection from liability for communication or conduct which constitutes public participation. The bill did not progress beyond first reading.
Ontario
In Ontario, the decision in Daishowa v. Friends of the Lubicon, [1996] O.J. No. 3855 (Ont. Ct. Gen. Div.) (QL), was instructive on SLAPPs. A motion brought by the corporate plaintiff Daishowa to impose conditions on the defendant Friends of the Lubicon Indian Band, requiring that they not represent Daishowa's action as a SLAPP, was dismissed.
By 2010, the Ontario Attorney General had issued a major report which identified SLAPPs as a significant problem, but initially little was done.
In June 2013, the Attorney General introduced legislation to implement the recommendations of the report. The bill proposed a mechanism for an order to dismiss strategic lawsuits which attack free expression on matters of public interest, with full costs (but not punitive damages) and on a relatively short timeframe, if the underlying claims had no reasonable prospect of success.
The bill enjoyed support from a wide range of groups including municipalities, the Canadian Environmental Law Association, EcoJustice, Environmental Defence, Ontario Clean Air Alliance, Ontario Nature, Canadian Civil Liberties Association, Canadian Journalists for Free Expression, Citizens Environment Alliance of Southwestern Ontario, The Council of Canadians, CPAWS Wildlands League, Sierra Club Ontario, Registered Nurses' Association of Ontario and Greenpeace Canada. The Ontario Civil Liberties Association called upon the Attorney General to go even further, claiming Bill 83 did not correct fundamental flaws with Ontario's defamation law which impose a one-sided burden of proof to force defendants to disprove falsity, malice, and damage within a very limited framework where "truth", "privilege", "fair comment", and "responsible reporting" are their only recognised defences.
The legislation was re-introduced following the 2014 Ontario election as Bill 52, and on 3 November 2015, Ontario enacted it as the Protection of Public Participation Act, 2015.
Quebec
Québec's then Justice Minister, Jacques Dupuis, proposed an anti-SLAPP bill on 13 June 2008.
The bill was adopted by the National Assembly of Quebec on 3 June 2009. Quebec's amended Code of Civil Procedure was the first anti-SLAPP mechanism in force in Canada.
Prior to Ontario enacting its own anti-SLAPP law, the Quebec provisions were invoked in an Ontario case (later Supreme Court of Canada docket 33819), Les Éditions Écosociété Inc., Alain Deneault, Delphine Abadie and William Sacher v. Banro Inc., in which the publisher Écosociété pleaded (supported by the BCCLA) that it should not face Ontario liability for a publication in Quebec, as the suit was a SLAPP and the Quebec law explicitly provided for dismissing such suits. The court denied the request, ruling that it had jurisdiction. A separate 2011 decision of the Quebec Superior Court had ruled that Barrick Gold had to pay $143,000 to the book's three authors and its publisher, Les Éditions Écosociété Inc., to prepare their defence in a "seemingly abusive" strategic lawsuit against public participation. Despite the Québec ruling, Noir Canada, a book documenting the relationship between Canadian mining corporations, armed conflict and political actors in Africa, was, as part of a settlement, never published; according to the authors, the settlement was made for the sole purpose of resolving the three-and-a-half-year legal battle.
The Quebec law is substantially different in structure than that of California or other jurisdictions, however, as Quebec's Constitution generally subordinates itself to international law, and as such the International Covenant on Civil and Political Rights applies. That treaty only permits liability for arbitrary and unlawful speech. The ICCPR has also been cited, in the BC case Crookes v. Newton, as the standard for balancing free speech versus reputation rights. The Supreme Court of Canada in October 2011, ruling in that case, neither reiterated nor rescinded that standard.
European Union
On 25 November 2020, the European Parliament passed a resolution expressing "its continued deep concern about the state of media freedom within the EU in the context of the abuses and attacks still being perpetrated against journalists and media workers in some Member States because of their work" and calling on the European Commission to "establish minimum standards against SLAPP practices across the EU". As of 2021, the European Union was considering adopting an anti-SLAPP directive to protect the freedom of speech of European citizens.
United States
Thirty-one states, the District of Columbia, and Guam have enacted statutory protections against SLAPPs.
These states are Arizona, Arkansas, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Illinois, Indiana, Kansas, Louisiana, Maine, Maryland, Massachusetts, Minnesota, Missouri, Nebraska, Nevada, New Mexico, New York, Oklahoma, Oregon, Pennsylvania, Rhode Island, Tennessee, Texas, Utah, Vermont, Virginia, and Washington. In Colorado and West Virginia, the courts have adopted protections against SLAPPs. These laws vary dramatically in scope and level of protection, and the remaining states lack specific protections.
There is no federal anti-SLAPP law, but legislation for one has been previously introduced, such as the SPEAK FREE Act of 2015. The extent to which state laws apply in federal courts is unclear, and the circuits are split on the question. The First, Fifth and Ninth circuits have allowed litigants from Maine, Louisiana and California, respectively, to use their state's special motion in federal district courts in diversity actions. The D.C. Circuit has held the reverse for D.C. litigants.
It has been argued that the lack of uniform protection against SLAPPs has encouraged forum shopping; proponents of federal legislation have argued that the uncertainty about one's level of protection has likely magnified the chilling effect of SLAPPs.
In December 2009, Rep. Steve Cohen (D–Tennessee) introduced the Citizen Participation Act in the U.S. House. This marked the first time Congress had considered federal anti-SLAPP legislation, though Congress enacted the SPEECH Act on the closely related issue of libel tourism. Like many state anti-SLAPP laws, H.R. 4364 would allow the defendant of a SLAPP to have the suit quickly dismissed and to recover fees and costs.
California
California has a unique variant of anti-SLAPP legislation. In 1992 California enacted Code of Civil Procedure § 425.16, a statute intended to frustrate SLAPPs by providing a quick and inexpensive defense. It provides for a special motion that a defendant can file at the outset of a lawsuit to strike a complaint when it arises from conduct that falls within the rights of petition or free speech. The statute expressly applies to any writing or speech made in connection with an issue under consideration or review by a legislative, executive, or judicial proceeding, or any other official proceeding authorized by law, but there is no requirement that the writing or speech be promulgated directly to the official body. It also applies to speech in a public forum about an issue of public interest and to any other petition or speech conduct about an issue of public interest.
Washington State
In May 2015, the Washington Supreme Court struck down the state's 2010 anti-SLAPP statute.
Balancing the right of access to the courts
The SLAPP penalty stands as a barrier to access to the courts by providing an early penalty to claimants who seek judicial redress. In recent years, the courts in some states have recognized that enforcement of SLAPP legislation must recognize and balance the constitutional rights of both litigants. It has been said:
Since Magna Carta, the world has recognized the importance of justice in a free society. "To no one will we sell, to no one will we refuse or delay, right or justice." (Magna Carta, 1215.) This nation's founding fathers knew people would never consent to be governed and surrender their right to decide disputes by force, unless government offered a just forum for resolving those disputes.
The right to bring grievances to the courts, in good faith, is protected by state and federal constitutions in a variety of ways. In most states, the right to trial by jury in civil cases is recognized. The right to cross-examine witnesses is considered fundamental to the American judicial system. Moreover, the first amendment protects the right to petition the government for a redress of grievances. The "right to petition extends to all departments of the Government. The right of access to the courts is indeed but one aspect of the right of petition."
Because "the right to petition is 'among the most precious of the liberties safeguarded by the Bill of Rights', ... the right of access to the courts shares this 'preferred place' in [the United States'] hierarchy of constitutional freedoms and values."
This balancing question is resolved differently in different states, often with substantial difficulty.
In Palazzo v. Alves, the Supreme Court of Rhode Island stated:
By the nature of their subject matter, anti-SLAPP statutes require meticulous drafting. On the one hand, it is desirable to seek to shield citizens from improper intimidation when exercising their constitutional right to be heard with respect to issues of public concern. On the other hand, it is important that such statutes be limited in scope lest the constitutional right of access to the courts (whether by private figures, public figures, or public officials) be improperly thwarted. There is a genuine double-edged challenge to those who legislate in this area.
The most challenging balancing problem arises in application to SLAPP claims which do not sound (give rise to a claim) in tort. The common law and constitutional law have developed in the United States to create a high substantive burden for tort and tort-like claims which seek redress for public speech, especially public speech addressing matters of public concern. The common law in many states requires the pleader to state accurately the content of libelous words. Constitutional law has provided substantive protection which bars recovery against a first amendment defense except upon clear and convincing evidence of deliberate or reckless falsehood. For this reason, ferreting out the bad-faith SLAPP claim at an early stage of litigation should be accomplished with relative ease. Extension of the SLAPP penalties to factually complex cases, where the substantive standard of proof at common law is lower, presents special challenges.
A Minnesota Supreme Court case, Middle-Snake-Tamarac Rivers Watershed Dist. v. Stengrim, 784 N.W.2d 834 (Minn. 2010) establishes a two-step process to determine whether SLAPP procedure should be applied. The decision arises in the context of an effort to enforce a settlement agreement between a local government and an opponent of a flood control project. The landowner had accepted a significant monetary settlement in settlement of his opposition to land acquisition. The landowner agreed as part of the settlement to address no further challenges to the project. When the local government sued the landowner for breach of settlement, the landowner contended that enforcement of the settlement was a strategic lawsuit against public participation. The Supreme Court rejected that claim and affirmed the District Court's denial of SLAPP relief, holding "The District Court properly denied a motion to dismiss where the underlying claim involved an alleged breach of a settlement agreement that potentially limited the moving party's rights to public participation." The Supreme Court explained:
Preexisting legal relationships, such as those based on a settlement agreement where a party waives certain rights, may legitimately limit a party's public participation. It would be illogical to read sections 554.01-.05 as providing presumptive immunity to actions that a moving party may have contractually agreed to forgo or limit.
Under the Minnesota approach, as a preliminary matter, the moving party must meet the burden of showing that the circumstances which bring the case within the purview of SLAPP protection exists. Until that has been accomplished, no clear and convincing burden has been shifted to the responding party.
Notable SLAPPs
Australia
"Gunns 20": In the 2005 Gunns Limited v Marr & Ors case, Gunns filed a writ in the Supreme Court of Victoria against 20 individuals and organisations, including Senator Bob Brown, for over A$7.8 million. The defendants have become collectively known as the "Gunns 20". Gunns claimed that the defendants sullied its reputation and caused it to lose jobs and profits. The defendants claimed that they are protecting the environment. Opponents and critics of the case have suggested that the writ was filed with the intent to discourage public criticism of the company. Gunns has maintained the position that they were merely trying to prevent parties enjoined to the writ from undertaking unlawful activities that disrupt their business. The statement of claim alleged incidents of assault against forestry workers and vandalism. At a hearing before the Supreme Court of Victoria, an amended statement of claim lodged by the company and served on defendants on 1 July 2005, was dismissed. However, the judge in the case granted the company leave to lodge a third version of their statement of claim with the court no later than 15 August 2005. The application continued before the court, before being brought to a close on 20 October 2006. In his ruling, the Honourable Justice Bongiorno made an award of costs in favour of the respondents only as far as it covered those costs incurred with striking out the third version of the statement of claim, and costs incurred associated with their application for costs. In November 2006, Gunns dropped the case against Helen Gee, Peter Pullinger and Doctors for Forests. In December 2006, it abandoned the claim against Greens MPs Bob Brown and Peg Putt. The other matters were all settled in favour of Gunns following the payment of more than $150,000 in damages or, in some cases, undertakings to the court not to protest at certain locations.
Brazil
ThyssenKrupp Atlantic Steel Company (TKCSA), one of the largest private enterprises in Latin America, sued Brazilian researchers from public institutions such as UERJ (Rio de Janeiro State University) and Fiocruz (Oswaldo Cruz Foundation) for moral damages. First, TKCSA sued the research pulmonologist Hermano Albuquerque de Castro of the Sergio Arouca National School of Public Health (ENSP – Fiocruz). It then sued Alexandre Pessoa Dias, a research professor at the Joaquim Venâncio Polytechnic School of Health (EPSJV – Fiocruz), and Monica Cristina Lima, a biologist at Pedro Ernesto University Hospital and board member of the Public University Workers Union of Rio de Janeiro State (Sintuperj). The latter two lawsuits followed the disclosure of the technical report "Evaluation of social, environmental and health impacts caused by the setup and operation of TKCSA in Santa Cruz".
Canada
Daishowa Inc. v. Friends of the Lubicon: From 1995 to 1998 a series of judgements (OJ 1536 1995, OJ 1429 1998 (ONGD)) established that defendants, who had accused a global company of engaging in "genocide", were entitled to recover court costs due to the public interest in the criticism, even if it was rhetorically unjustifiable. This was the first case to establish clearly the SLAPP criteria.
Fraser v. Saanich (District), 1995 [BCJ 3100 BCSC], was held explicitly to be a SLAPP, the first known case to be so described. Justice Singh found the plaintiff's conduct to be "reprehensible and deserving of censure", ordering him to pay "special costs" in compensation (page 48, Strategic Lawsuits Against Public Participation: The British Columbia Experience, RECIEL 19(1) 2010).
Canadian Prime Minister Stephen Harper filed a suit against the Liberal Party of Canada, the Official Opposition, after the latter paid for trucks to drive through the streets playing a journalist's tape of Harper admitting he knew of "financial considerations" offered to the dying MP Chuck Cadman before a critical House of Commons of Canada vote in 2005. This, the Liberals and most commentators and authorities agreed, would have been a serious crime if proven. Harper alleged the tape had been altered, but a court found no evidence of this. The suit was dropped by Michael Ignatieff after he replaced Stéphane Dion as Leader of the Opposition, and so was never heard in court, but it was transparently a (successful) effort to get the trucks off the streets.
Crookes v. Openpolitics.ca, filed May 2006 [S063287, Supreme Court of BC], and a series of related suits leading to a unanimous October 2011 ruling by the Supreme Court of Canada in Crookes v. Newton, upheld the rights of online debaters to link freely to third parties without fear of liability for contents at the other end of the link. A number of related rulings had previously established that transient comments on the Internet could not be, in themselves, simply printed and used to prove that "publication" had occurred for purposes of libel and defamation law in Canada. Other elements of the ruling clarified how responsible journalism (and therefore the right to protect anonymous sources), qualified privilege and innocent dissemination defenses applied to persons accused of online defamation.
In May 2010, Youthdale Treatment Centres of Toronto, Ontario, filed a defamation suit against various former patients, parents of former patients, and other persons, claiming C$5 million in damages. The lawsuit, filed on 5 May 2010 on behalf of Youthdale by Harvin Pitch and Jennifer Lake of Teplitsky, Colson LLP, claimed that these persons were involved in a conspiracy to, among other things, have Youthdale's licence to operate revoked. Youthdale also claimed its reputation was damaged by various actions of the named defendants, which Youthdale alleged included the creation of websites and blogs containing complaints against it, including accusations of the unlawful administration of psychotropic medications. In a notable turn, Youthdale itself became the subject of a Toronto Star investigation in July 2010, which found that it had been admitting children who did not have mental disorders to its Secure Treatment Unit. The case has since been dismissed.
In 2011, in Robin Scory v. Glen Valley Watersheds Society, a BC court ruled that "an order for special costs acts as a deterrent to litigants whose purpose is to interfere with the democratic process", and that "Public participation and dissent is an important part of our democratic system." However, such awards remained rare.
Morris v. Johnson et al., 22 October 2012, ONSC 5824 (CanLII): During the final weeks of the 2010 municipal election in Aurora, Ontario, a group of town councillors and the incumbent mayor, Phyllis Morris, agreed to use town funds to launch what was later characterized as a private lawsuit fronted by the mayor, seeking $6M from both named and anonymous residents who were critical of the local government. After the mayor and a number of councillors lost the election, the new town council cut public funding for the private lawsuit and issued a formal apology to the defendants. Almost one year after the town cut funding, and after Morris lost a Norwich motion, Morris discontinued her case. The discontinuance cost decision delivered by Master Hawkins reads, at para. 32 (Ontario Superior Court of Justice court file no. 10-CV-412021): "Because I regard this action as SLAPP litigation designed to stifle debate about Mayor Morris' fitness for office, commenced during her re-election campaign, I award Johnson and Hogg special enhanced costs as was done in Scory v. Krannitz 2011 BCSC 1344 per Bruce J. at para. 31 (B.C.S.C)." Morris subsequently sued the town for $250,000 in the spring of 2013 to recover her legal costs for the period after the town cut funding of her case. Almost a year and a half after the final ruling in the Morris defamation case (the second Master Hawkins cost ruling, delivered in January 2013), and approximately one year after suing the town, Morris amended her statement of claim to note that her legal costs were actually $27,821.46 and not the $250,000 noted in the initial statement of claim. Morris then attempted to move the case to small claims court after the town had already spent over $150,000 preparing its defence. As of the summer of 2015, the case was ongoing.
In 2012, Sino-Forest sued Muddy Waters Research for $4 billion for defamation in the Ontario Superior Court of Justice. Muddy Waters had accused Sino-Forest of fraudulently inflating its assets and earnings, and had claimed the company's shares were essentially worthless. However, on 10 January 2012, Sino-Forest announced that its historic financial statements and related audit reports should not be relied upon. Sino-forest also filed for bankruptcy protection. In response to the lawsuit, Muddy Waters stated that Sino's bankruptcy protection filing vindicated its accusations since the company would not require bankruptcy protection if it was really generating close to $2 billion in cash flow. Sino-Forest was represented by Bennett Jones LLP.
Businesspeople Garth Drabinsky and Conrad Black filed numerous suits against critics of their business activities. These received much publicity but were usually settled quickly.
In September 2014, Brampton, Ontario mayor Susan Fennell used threats of legal action against fellow councillors, the Toronto Star, the city's integrity commissioner, and auditor Deloitte to delay a city council meeting which was to discuss a major spending scandal. As the parties involved needed an opportunity to seek legal advice, regardless of the merit (or spuriousness) of the claims, this tactic served to defer a key debate which otherwise would have, and should have, taken place before the city's 27 October municipal election.
Estonia
In 2016, the real-estate investment company Pro Kapital Ltd sued the urbanist Teele Pehk, who had expressed her opinion about the company's development plans for the Kalasadam area of Tallinn, Estonia. The accusations were based on an interview given for the article "The battle for the Estonian coastline", published by the monthly newspaper The Baltic Times. Initially, instead of clarifying the questionable quotes with The Baltic Times editors, Pro Kapital sent a legal demand to Pehk, demanding that she publish a pre-written explanation and pay €500 to cover the company's legal-advice expenses. Pehk provided proof to the lawyer that she had not lied to the journalist, and the newspaper published an online clarification stating that Pehk's words had been misinterpreted. A few months later, Pro Kapital sued Pehk for damaging its reputation by spreading lies about the detailed plan of the Kalasadam area. Pehk had been involved with the detailed plan since 2011, as a member of the neighbourhood association Telliskivi selts and caretaker of the Kalarand beach, situated on the edge of the Kalasadam area.
Half a year into the court case, Pro Kapital began negotiations and settled with a compromise before the court hearing. Pro Kapital paid for Pehk's legal costs and both parties agreed not to disparage each other in the future. Teele Pehk is still active in Tallinn urban development and continues to spread the word about SLAPP suits.
The case came at the end of a 12-year process of planning the Kalasadam area, which over the years had attracted exceptionally high public interest in the planned residential development and, most importantly, in the public use of the seaside and the beach. The planning system in Estonia allows anyone to express an opinion or present suggestions or objections to any detailed plan. Many Estonian civic organisations raised concerns about the case, and the Chancellor of Justice of Estonia condemned the practice many times in public appearances.
France
In 2010 and 2011, a French blogger was twice summoned to court by the communication company Cometik (NOVA-SEO) for exposing its hard-sell method (the so-called "one shot" method) and for suggesting financial compensation after his first trial. The company's case was dismissed twice, but it appealed both times. On 31 March 2011, the company won:
the removal of any reference to its name on Mathias Poujol-Rost's weblog,
€2,000 in damages,
an obligation to publish the judicial decision for three months,
€2,000 as a procedural allowance,
all legal fees for both the first-instance and appeal proceedings.
Germany
In September 2017, Colleen Huber, a naturopath in Arizona, filed a defamation lawsuit, preceded by two cease-and-desist letters, against Britt Marie Hermes, a naturopathy whistleblower. The lawsuit was filed over Hermes' blog post criticizing Huber for using naturopathic remedies to treat cancer and speculating that Hermes' name was being used without her permission in several registered domain names owned by Huber. The lawsuit was filed in Kiel, Germany, where Hermes was residing while pursuing her PhD in evolutionary genomics. Jann Bellamy of Science-Based Medicine speculated that this was "due to good old forum shopping for a more plaintiff-friendly jurisdiction", as there are no protections against SLAPP lawsuits in Germany. Hermes is a notable scientific skeptic, and the organization Australian Skeptics set up a fund to help with the legal costs of the case. In an interview at CSICon 2019, Hermes told Susan Gerbic that she had won her case on 24 May 2019. According to Hermes, "the court ruled that my post is protected speech under Article 5 (1) of the German constitution".
India
In 2020, Karan Bajaj, the founder of WhiteHat Jr., now owned by Byju's, filed a $2.6 million lawsuit against Pradeep Poonia, an engineer who had publicly accused the company of having a toxic work environment and unethical business practices. The Delhi High Court issued an interim order requiring Poonia to remove certain tweets from his account. In 2021, Bajaj withdrew the lawsuit.
Israel
During 2016, Amir Bramly, who at the time was under investigation and subsequently indicted for an alleged Ponzi scheme, filed a private libel suit for ₪1 million in damages against Tomer Ganon, a Calcalist reporter, over a news item linking Bramly to Bar Refaeli. In addition, Bramly sued Channel 2 News and its reporters and managers for ₪5 million in damages over an alleged libel in an in-depth TV news item and interview with the court-appointed liquidator of his companies, and threatened to sue additional bodies. The sued individuals and organizations claimed that these were SLAPP actions.
Japan
In 2006, Oricon Inc., Japan's music chart provider, sued the freelance journalist Hiro Ugaya over his suggestion, in an article for a business and culture magazine, that the company was fiddling its statistics to benefit certain management companies and labels, specifically Johnny and Associates. The company sought ¥50 million and an apology from him. Ugaya found allies in the magazine's editor-in-chief Tadashi Ibi, the lawyer Kentaro Shirosaki, and Reporters Sans Frontières (RSF).
He was found liable in 2008 by the Tokyo District Court and ordered to pay one million yen, but he appealed and won, and Oricon did not appeal further. His 33-month struggle against Oricon and his self-funded research on SLAPPs in the United States were featured on the TBS program JNN Reportage, in an episode titled "Legal Intimidation Against Free Speech: What is SLAPP?"
RSF expressed its support for the journalist and welcomed the abandonment of the suit.
Norway
In 2018, Lovdata, a foundation that publishes judicial information, sued two of the volunteers behind the rettspraksis.no project. Up until 2008, Lovdata was considered a government agency and had unlimited access to the supreme court's servers; on the basis of this access, it established a de facto monopoly on Norwegian supreme court rulings. When rettspraksis.no published supreme court decisions, Lovdata sued Håkon Wium Lie and Fredrik Ljone, two of the volunteers. Although court decisions are not protected by copyright in Norway, Lovdata claimed that rettspraksis.no had used advanced crawlers to copy its database. In less than 24 hours, Lovdata was able to have the rettspraksis.no site closed, and the judge also ordered the volunteers to pay Lovdata's legal fees; rettspraksis.no was not allowed to appear in court to explain that its source for the decisions was a CD deposited in the National Library by Lovdata itself. In the court of appeals, Lovdata admitted that it is legal to copy court decisions from an old CD-ROM, but it continued to press its claims.
Serbia
In the late 1990s, many SLAPP cases against independent and pro-opposition media followed the adoption of the infamous media law proposed by the then minister of information, Aleksandar Vučić. The main characteristics of these cases were quick trials and extremely high fines, most of which were unaffordable for the journalists and their media houses.
SLAPP cases became relatively rare after the overthrow of Slobodan Milošević, but they gradually reappeared in the late 2010s, and especially in the early 2020s, during SNS-led cabinets. Notably, Aleksandar Vučić is the current president of Serbia and the most influential figure of the regime, and he is often accused of suppressing media freedoms.
United States
From 1981 to 1986, the Pacific Legal Foundation and San Luis Obispo County, California, pursued a suit attempting to obtain the mailing list of the Abalone Alliance in order to make the group pay the police costs of the largest anti-nuclear civil disobedience action in U.S. history, at the Diablo Canyon Power Plant. The Pacific Legal Foundation lost at every court level and withdrew the suit the day before it was due to be heard by the U.S. Supreme Court.
Kim Shewalter and other neighborhood activists, as defendants, won a 1998 anti-SLAPP motion against apartment building owners, who had filed a SLAPP because of the defendants' protest activities.
Karen Winner, the author of Divorced From Justice, was recognized as "[the] catalyst for the changes that we adopted" by Leo Milonas, a retired justice of the Appellate Division of the New York state courts who chaired a special commission that recommended the changes adopted by Chief Judge Judith Kaye. In 1999, however, Winner, a psychologist-whistleblower, and several citizens were SLAPPed for criticizing the guardian ad litem system and a former judge in South Carolina. Winner's report, "Findings on Judicial Practices & Court-appointed Personnel in the Family Courts in Dorchester, Charleston & Berkeley Counties, South Carolina", and citizen demonstrations led to the first laws in South Carolina establishing minimum standards and licensing requirements for guardians ad litem, who represent the interests of children in court cases. The retaliatory SLAPPs dragged on for nearly 10 years, with judgments totaling more than $11 million against the co-defendants collectively. Reflecting the retaliatory nature of these suits, at least one of the co-defendants is still waiting to learn from the judges which particular statements, if any, he made were false.
Barbra Streisand, as plaintiff, lost a 2003 anti-SLAPP motion after she sued an aerial photographer involved in the California Coastal Records Project (Streisand v. Adelman, California Superior Court case SC077257); see Streisand effect.
Barry King and another Internet poster, as defendants, won an anti-SLAPP motion against corporate plaintiffs based on critical posts on an Internet financial message board.
Kathi Mills won an anti-SLAPP motion against the Atlanta Humane Society (Atlanta Humane Society v. Mills, Gwinnett County, Georgia, Superior Court, case 01-A-13269-1). She had been sued over comments she made in an internet forum after a news program critical of the AHS had aired. In part, the judge ruled that private citizens do not need to investigate news coverage before commenting on it, and that governmental entities may not sue for defamation.
In 2004, RadioShack Corporation sued Bradley D. Jones, the webmaster of RadioShackSucks.com and a former RadioShack dealer for 17 years, in an attempt to suppress online discussion of a class action lawsuit in which more than 3,300 current or former RadioShack managers were alleging the company required them to work long hours without overtime pay.
Nationally syndicated talk radio host Tom Martino prevailed in an anti-SLAPP motion in 2009 after he was sued for libel by a watercraft retailer. The case received national attention for its suggestion that no one reasonably expects objective facts from a typical talk show host, who is often a comedian telling jokes.
In March 2009, MagicJack (a company that promotes a USB VoIP device) filed a defamation suit against Boing Boing for exposing their unfair and deceptive business tactics regarding their EULA, visitor counter, and 30-day trial period. This was dismissed as a SLAPP by a California judge in late 2009. In the resulting ruling, MagicJack was made responsible for most of Boing Boing's legal costs.
In the 2009 case Comins v. VanVoorhis, a Florida man named Christopher Comins filed a defamation suit against a University of Florida graduate student after the student blogged about a video of Comins repeatedly shooting someone's pet dogs. This was cited as an example of a SLAPP by the radio show On the Media.
In November 2010, filmmaker Fredrik Gertten, as defendant, won an anti-SLAPP motion after he was sued for defamation by Dole Fruit Company. The case concerned Gertten's documentary film about farm workers. The lengthy lawsuit was documented in Gertten's film Big Boys Gone Bananas!*.
In an effort to prevent four women from filing any public records requests without first getting permission from a judge, or from filing future lawsuits, the Congress Elementary School District filed the lawsuit Congress Elementary School District v. Warren et al. on 28 January 2010. The Goldwater Institute, a think tank based in Phoenix, Arizona, represented the four defendants. The school district said that it had been harassed so often by Warren that it was unable to educate its students effectively, and Toni Wayas, the school district's superintendent, claimed that the district had "time and time again" complied with the requests. The Goldwater Institute argued that the school district had previously violated state laws mandating government transparency: investigations in 2002 and 2007 by the state Ombudsman and Attorney General had uncovered violations of the state's open meeting law by the district. According to Carrie Ann Sitren of the Goldwater Institute, the suit was "a clear attempt to silence people in the community who have been critical of the board's actions, and have made good-faith attempts to ensure the district is spending taxpayer money wisely". According to the assistant state Ombudsman, none of the records requested were private or confidential, and they therefore should have been readily available to the public.
In December 2010, prominent foreclosure defense attorney Matthew Weidner was sued by Nationwide Title, a foreclosure processing firm.
In January 2011 Sony Computer Entertainment America sued George Hotz and other individuals for jailbreaking the PlayStation 3 and publishing encryption and signing keys for various layers of the system's architecture. The defendants and the Electronic Frontier Foundation consider the case an egregious abuse of the Digital Millennium Copyright Act. Hotz settled with Sony before trial.
In December 2015, James McGibney was ordered to pay a $1 million anti-SLAPP court sanction and $300,000 in attorney's fees to Neal Rauhauser for filing a series of baseless lawsuits against him. The ruling was temporarily reversed when the presiding judge granted McGibney's request for a new trial in February 2016, but it was reinstated in Rauhauser's favor on 14 April 2016, with the SLAPP sanction against McGibney reduced from $1 million to $150,000. The judge ruled that McGibney had filed the suits to willfully and maliciously injure Rauhauser and to deter him from exercising his constitutional right to criticize McGibney.
"Scientology versus the Internet" refers to a number of disputes relating to the Church of Scientology's efforts to suppress material critical of Scientology on the Internet through the use of lawsuits and legal threats.
The Agora Six – The Cynwyd Group, LLC v. Stefany (2009)
Saltsman v. Goddard (the Steubenville High School rape case): In an effort to stop blogger Alexandria Goddard's website from allowing allegedly defamatory posts about their son, two parents of a teenaged boy from Steubenville, Ohio, sued Goddard and a dozen anonymous posters in October 2012. The lawsuit asked for an injunction against the blogger, a public apology, an acknowledgment that their son was not involved in the rape, and $25,000 in damages.
In August 2015, the State Fair of Texas was sanctioned more than $75,000 for filing a SLAPP suit against a lawyer who had requested financial documents from the State Fair.
On 27 August 2012, Robert E. Murray and Murray Energy filed a lawsuit against environmental reporter Ken Ward Jr. and the Charleston Gazette-Mail of Charleston, West Virginia, alleging that Ward had posted libelous statements on his blog. Murray claimed that the blog post, entitled "Mitt Romney, Murray Energy and Coal Criminals", had damaged his business and reputation and jeopardized the jobs Murray Energy provides in Belmont County, Ohio. In June 2017, Murray Energy issued a cease-and-desist letter to the HBO television show Last Week Tonight with John Oliver after the show sought comment about the coal industry. The show went ahead with the episode (18 June), in which host John Oliver discussed the Crandall Canyon Mine collapse in Utah in 2007 and expressed the opinion that Murray had not done enough to protect his miners' safety. Three days later, Murray and his companies brought suit against Oliver, the show's writers, HBO, and Time Warner, alleging that Oliver had "incited viewers to do harm to Mr. Murray and his companies". The ACLU filed an amicus brief in support of HBO in the case; the brief has been described as "hilarious" and the "snarkiest legal brief ever". It also included a comparison of Murray with the fictional character Dr. Evil that had been used in the Oliver show, with the explanation that "it should be remembered that truth is an absolute defense to a claim of defamation". On 11 August 2017, a federal district court judge ruled that Murray Energy's suits against The New York Times and HBO could each proceed in a lower state court. The suit against HBO was dismissed with prejudice on 21 February 2018. In November 2019, after Murray dropped the suit, John Oliver discussed its implications (and those of SLAPP suits in general) on his show.
In March 2019, U.S. Rep. Devin Nunes (R-California) filed a defamation lawsuit against Twitter, Elizabeth "Liz" Mair, Mair Strategies LLC, and the people behind the parody Twitter accounts "Devin Nunes' Cow" (@DevinCow) and "Devin Nunes' Mom" (@DevinNunesMom), seeking $250 million in damages. The lawsuit has been described by legal experts as a SLAPP. Notably, the suit was filed in Virginia, a state known to have weak anti-SLAPP laws, rather than in California, which has strong anti-SLAPP protections and where Nunes resides and Twitter is headquartered. In April 2019, Nunes filed a defamation lawsuit against The Fresno Bee, his hometown newspaper, and its owner, McClatchy, after it published a story detailing how investors in his winery partied on a yacht with cocaine and prostitutes; like the prior lawsuit, it was filed in Virginia. Nunes has since filed additional defamation lawsuits against CNN, Ryan Lizza, Hearst Magazines, Campaign for Accountability, Fusion GPS, and others. In February 2020 (following the 2019 elections in which Democrats took control of both chambers for the first time since 1994), the Virginia General Assembly passed bills intended to discourage future SLAPPs in the state by strengthening defendant protections.
See also
Barratry (common law)
Cease and desist
Chilling effect
Franchise fraud
Lawfare
Legal abuse
Legal threat
Media transparency
Public participation
Reputation management
Spamigation
Vexatious litigation
Frivolous litigation
Case studies
McDonald's Restaurants v Morris & Steel
Scientology and the legal system
Varian v. Delfino
Horizon Group v. Bonnen
Santa Barbara News-Press controversy#Susan Paterno
Steven Donziger
References
Further reading
External links
Abuse of the legal system
Ethically disputed judicial practices
Lawsuits
Legal terminology
Right to petition
Strategic lawsuits against public participation
Tort law
1980s neologisms |
165154 | https://en.wikipedia.org/wiki/Application%20server | Application server | An application server is a server that hosts applications.
Application server frameworks are software frameworks for building application servers. An application server framework provides both facilities to create web applications and a server environment to run them.
An application server framework contains a comprehensive service layer model. It includes a set of components accessible to the software developer through a standard API defined for the platform itself. For Web applications, these components usually run in the same environment as their web server(s), and their main job is to support the construction of dynamic pages. However, many application servers do more than generate web pages: they implement services such as clustering, fail-over, and load-balancing, so developers can focus on implementing the business logic.
In the case of Java application servers, the Jakarta EE server behaves like an extended virtual machine for running applications, transparently handling connections to the database on one side, and, often, connections to the web client on the other.
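As an illustration of this "extended virtual machine" role, the following minimal sketch shows a servlet obtaining a database connection from a pool that the application server itself configures and manages. The resource name `jdbc/MainDB` and the `customers` table are hypothetical, and deployment details such as pool size and credentials are assumed to live in the server's configuration rather than in the code:

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import javax.sql.DataSource;

import jakarta.annotation.Resource;
import jakarta.servlet.ServletException;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

// Sketch: the application server injects a pooled DataSource it manages;
// the servlet never configures drivers, hosts, or credentials itself.
@WebServlet("/customers")
public class CustomerCountServlet extends HttpServlet {

    // "jdbc/MainDB" is a hypothetical resource name bound in the server's JNDI tree.
    @Resource(name = "jdbc/MainDB")
    private DataSource dataSource;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Borrow a connection from the container's pool; closing returns it.
        try (Connection conn = dataSource.getConnection();
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM customers")) {
            rs.next();
            resp.setContentType("text/plain");
            resp.getWriter().println("Customers: " + rs.getLong(1));
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}
```

Because the pool belongs to the server rather than to the application, the same code can be moved between environments by changing only the server's resource configuration.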
Other uses of the term may refer to the services that a server makes available or the computer hardware on which the services run.
History
The term was originally used in discussions of early client–server systems, to differentiate servers that contain application logic and middleware from other types of servers, such as SQL database servers.
Although web browsers have become ubiquitous and are typically the client for end users in many application deployment strategies, browser-based web apps represent only a subset of application-server technologies.
Definition
Application servers are system software upon which web applications or desktop applications run.
Application servers consist of:
web server connectors,
computer programming languages,
runtime libraries,
database connectors, and
the administration code needed to deploy, configure, manage, and connect these components on a web host.
An application server runs behind a web server (e.g., Apache or Microsoft Internet Information Services (IIS)) and (almost always) in front of an SQL database (e.g., PostgreSQL, MySQL, or Oracle). Web applications are computer code which runs atop application servers; they are written in the language(s) the application server supports and call the runtime libraries and components the application server offers.
Many application servers exist. The choice impacts the cost, performance, reliability, scalability, and maintainability of a web application.
Proprietary application servers provide system services in a well-defined but proprietary manner. The application developers develop programs according to the specification of the application server. Dependence on a particular vendor is the drawback of this approach.
An opposite but analogous case is the Jakarta EE platform. Jakarta EE application servers provide system services in a well-defined, open, industry-standard manner. Application developers write programs according to the Jakarta EE specifications rather than to a particular application server, so an application developed to the Jakarta EE standards can be deployed in any Jakarta EE application server, making it vendor-independent.
Java application servers
Jakarta EE (formerly Java EE or J2EE) defines the core set of API and features of Java application servers.
The Jakarta EE infrastructure is partitioned into logical containers.
EJB container: Enterprise Beans are used to manage transactions. According to the Java BluePrints, the business logic of an application resides in Enterprise Beans: modular server components providing many features, including declarative transaction management, and improving application scalability. A minimal sketch follows this list.
Web container: the web modules include Jakarta Servlets and Jakarta Server Pages (JSP).
JCA container (Jakarta Connectors)
JMS provider (Jakarta Messaging)
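As a sketch of the declarative transaction management mentioned in the EJB container item above, the following hypothetical stateless session bean marks a business method as transactional through an annotation alone, leaving transaction demarcation to the container. The `OrderService` and `OrderRepository` names and methods are illustrative assumptions, not part of any cited system:

```java
import jakarta.ejb.Stateless;
import jakarta.ejb.TransactionAttribute;
import jakarta.ejb.TransactionAttributeType;
import jakarta.inject.Inject;

// Hypothetical collaborator, declared only to make the sketch self-contained.
interface OrderRepository {
    void reserveStock(String productId);
    void createOrder(String customerId, String productId);
}

// A stateless session bean: the EJB container pools instances and wraps
// each business method in a container-managed transaction.
@Stateless
public class OrderService {

    @Inject
    private OrderRepository orders; // injected by the container

    // REQUIRED (the default) joins the caller's transaction or starts a new
    // one; a system exception causes the container to roll everything back.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void placeOrder(String customerId, String productId) {
        orders.reserveStock(productId);             // both calls commit
        orders.createOrder(customerId, productId);  // or roll back together
    }
}
```

Because the container supplies the transaction, the same bean can be redeployed on any Jakarta EE server without code changes, which is the vendor independence described earlier.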
Some Java application servers omit many Jakarta EE features such as EJB and Jakarta Messaging (JMS), focusing instead on Jakarta Servlets and Jakarta Server Pages.
There are many open source Java application servers that support Jakarta EE.
Commercial Java application servers have been dominated by WebLogic Application Server by Oracle, WebSphere Application Server from IBM and the open source JBoss Enterprise Application Platform (JBoss EAP) by Red Hat.
A Jakarta Server Page (JSP) executes in a web container. JSPs provide a way to create HTML pages by embedding references to the server logic within the page. HTML coders and Java programmers can work side by side by referencing each other's code from within their own.
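A minimal, hypothetical page in this style might look like the following; the container translates the file into a servlet, so HTML markup and embedded Java coexist in one source file:

```jsp
<%@ page contentType="text/html" %>
<%-- Hypothetical page: static HTML with embedded references to server logic.
     The container translates this file into a servlet before executing it. --%>
<html>
  <body>
    <h1>Order status</h1>
    <%-- A JSP expression, evaluated on the server for each request: --%>
    <p>Generated at: <%= new java.util.Date() %></p>
    <%-- A scriptlet: plain Java run while the page renders. Real pages
         should escape request parameters before echoing them. --%>
    <% String user = request.getParameter("user");
       if (user != null) { %>
      <p>Welcome back, <%= user %>.</p>
    <% } %>
  </body>
</html>
```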
The application servers mentioned above mainly serve web applications, and services via RMI, EJB, JMS and SOAP. Some application servers target networks other than web-based ones: Session Initiation Protocol servers, for instance, target telephony networks.
.NET
Microsoft
Microsoft positions its middle-tier applications and services infrastructure in the Windows Server operating system and the .NET Framework technologies in the role of an application server. The Windows Application Server role includes Internet Information Services (IIS) to provide web server support, the .NET Framework to provide application support, ASP.NET to provide server-side scripting, COM+ for application component communication, Message Queuing for asynchronous message-based communication, and the Windows Communication Foundation (WCF) for application communication.
Third-party
Mono (a cross platform open-source implementation of .NET supporting nearly all its features, with the exception of Windows OS-specific features), sponsored by Microsoft and released under the MIT License
PHP application servers
PHP application servers are used for running and managing PHP applications.
Zend Server, built by Zend, provides application server functionality for the PHP-based applications.
appserver.io, built by TechDivision GmbH, is a multithreaded application server for PHP, written in PHP.
RoadRunner, built by Spiral Scout, is a high-performance PHP application server, load balancer, and process manager written in Go.
Mobile application servers
A mobile app server is mobile middleware that makes back-end systems accessible to mobile apps to support mobile app development. Much like a web server that stores, processes, and delivers web pages to clients, a mobile app server bridges the gap from existing infrastructure to mobile devices.
Purpose
Although most standards-based infrastructure (including SOAs) is designed to connect to anything independently of any vendor, product, or technology, most enterprises have trouble connecting back-end systems to mobile applications, because mobile devices add the following technological challenges:
Limited resources – mobile devices have limited power and bandwidth
Intermittent connectivity – cellular service and Wi-Fi coverage are often not continuous
Difficult to secure – mobility and BYOD make it hard to secure mobile devices
The purpose of a mobile application server is to build on existing infrastructure to accommodate mobile devices.
Common features
Core capabilities of mobile application services include:
Data routing – data is packaged in smaller (REST) objects with some business logic to minimize demands on bandwidth and battery (see the sketch after this list)
Orchestration – transactions and data integration across multiple sources
Authentication service – secure connectivity to back-end systems is managed by the mobile middleware
Off-line support – allows users to access and use data even when the device is not connected
Security – data encryption, device control, SSL, call logging
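As a sketch of the data-routing idea referenced in the first list item, a mobile back end typically exposes small REST resources rather than full web pages. The following hypothetical JAX-RS endpoint returns a compact JSON object; its path and fields are illustrative assumptions:

```java
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// A compact REST resource for a mobile client: only the fields the app
// actually renders are sent, to conserve bandwidth and battery.
@Path("/accounts")
public class AccountResource {

    // Minimal payload object; public fields are serialized to JSON by the
    // server's JSON-B provider. Field names are hypothetical.
    public static class AccountSummary {
        public String id;
        public String displayName;
        public long balanceCents;

        public AccountSummary(String id, String displayName, long balanceCents) {
            this.id = id;
            this.displayName = displayName;
            this.balanceCents = balanceCents;
        }
    }

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public AccountSummary summary(@PathParam("id") String id) {
        // A real implementation would consult back-end systems; stubbed here.
        return new AccountSummary(id, "Sample User", 12345L);
    }
}
```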
Mobile application servers vs. application servers vs. web servers
Mobile application servers, application servers, and web servers serve similar purposes: they are pieces of middleware that connect back-end systems to the users that need to access them, but the technology in each of the three differs.
Application servers
Application servers, which were developed before the ubiquity of web-based applications, expose back-end business logic through various protocols, sometimes including HTTP, and manage security, transaction processing, resource pooling, and messaging. When web-based applications grew in popularity, application servers did not meet the needs of developers, and the web server was created to fill the gap.
Web servers
Web servers provide the caching and scaling functionality demanded by web access and not provided by application servers. They convert requests to static content and serve only HTTP content.
Mobile application servers
Mobile application servers are on a similar path. The emergence of mobile devices created a need for functionality not anticipated by the developers of traditional application servers, and mobile application servers fill this gap. They take care of the security, data management, and off-line requirements not met by existing infrastructure, and present content exclusively as REST.
Over time, these three categories may fully merge and be available in a single product, but the root functions differ.
Deployment models
An application server can be deployed:
On premises
Cloud
Private cloud
Platform as a service (PaaS)
See also
Application service provider (ASP)
List of application servers
References
Servers (computing)
Software architecture |
166378 | https://en.wikipedia.org/wiki/Merkava | Merkava | The Merkava ("chariot") is a series of main battle tanks used by the Israel Defense Forces and is the backbone of the IDF's armored corps. The tank began development in 1970, and its first generation, the Merkava Mark 1, entered official service in 1979. Four main variants have been deployed, with the Merkava Mark 4 being the latest version. The Merkava was first used extensively in the 1982 Lebanon War. The name "Merkava" was derived from the IDF's initial development program name.
The tank was developed in the Merkava and Armored Combat Vehicles Division of the Israeli Ministry of Defense, and most of its parts are manufactured in Israel. The Merkava's design concept is to provide maximum protection for its crew, and therefore its front armor was fortified and the engine placed in the front part of the tank, unlike most other tanks.
Design criteria include rapid repair of battle damage, survivability, cost-effectiveness and off-road performance. Following the model of contemporary self-propelled howitzers, the turret assembly is located closer to the rear than in most main battle tanks. With the engine in front, this layout is intended to grant additional protection against a frontal attack, so as to absorb some of the force of incoming shells, especially for the personnel in the main hull, such as the driver. It also creates more space in the rear of the tank that allows increased storage capacity and a rear entrance to the main crew compartment allowing easy access under enemy fire. This allows the tank to be used as a platform for medical disembarkation, a forward command and control station, and an infantry fighting vehicle. The rear entrance's clamshell-style doors provide overhead protection when off- and on-loading cargo and personnel.
Development
During the late 1960s, the Israeli Army began collaborating on design notes for the Chieftain tank, which had recently been introduced to British Army service, with a view to Israel purchasing and domestically producing the vehicle. Two prototypes were delivered as part of a four-year trial. However, it was eventually decided not to sell the marque to the Israelis (since at that time, in the late 1960s, the UK was friendlier towards the Arab states than to Israel), which prompted Israel to pursue its own development programme.
Israel Tal, who was serving as a brigade commander after the Suez Crisis, restarted plans to produce an Israeli-made tank, drawing on lessons from the 1973 Yom Kippur War, in which Israeli forces were outnumbered by those of the Middle East's Arab nations.
By 1974, initial designs were completed and prototypes were built. After a brief set of trials, work began to retool the Tel HaShomer ordnance depot for full-time development and construction. After the new facilities were completed, the Merkava was announced to the public in the International Defense Review periodical. The first official images of the tank were then released to the American periodical Armed Forces Journal on May 4, 1977. The IDF officially adopted the tank in December 1979.
Primary contractors
The lead organization for system integration of the Merkava's main components is Israel Military Industries (IMI). The Israeli Ordnance Corps are responsible for final Merkava assembly. More than 90% of the Merkava 4 tank's components are produced locally in Israel by Israeli defense industries. Contributors to the vehicle include:
IMI manufactures the 105 mm and 120 mm main guns and their ammunition;
TGL SP Industries Ltd. develops and produces the road wheels;
Urdan Industries assembles and constructs the hull, drive- and powertrains, and turret assemblies;
Soltam manufactures the 60 mm internal mortar;
Elta designs and manufactures the electronic sensors and infrared optics;
Elbit delivers the ballistics computer, fire-control system (FCS) and electric turret and gun control system;
Tadiran provides cabin air conditioning, crew cabin intercom and radio equipment;
El-Op, Elisra and Astronautics implement the optics and laser warning systems;
Rafael Advanced Defense Systems builds and installs the Rafael Overhead Weapon Station and Trophy active protection system;
L-3 Communication Combat Propulsion Systems produces licensed copies of Germany's MTU MT883 1500 hp diesel engine powerplant and RENK RK325 transmissions;
Motorola supplies Tadiran communication encryption systems;
DuPont supplies the Nomex, ballistic, and fire-retardant materials used by Hagor;
Russia Military Industries helped to design the KMT-4 & -5 anti-mine rollers and the ABK-3 dozer blade, now built by Urdan;
FN Herstal supplies 7.62 mm (MAG 58) and 12.7 mm (M2) coaxial and pintle-mounted machine guns;
Caterpillar assisted with an Israeli-designed track system.
Bental Industries, a TAT Technologies subsidiary, produced the brushless motors used in the Mark IV's turret and gun control system.
General characteristics
Firepower
The Merkava Mark I and II were armed with a 105 mm M64 gun, a license built variant of the M68. The Mark III, Mark III Dor Dalet BAZ kassag, and the Mark IV are armed with an IMI 120 mm smoothbore gun which can fire all versions of Western 120 mm smooth bore tank ammunition.
Each model of the Merkava has two roof mounted 7.62 mm machine guns for use by the commander and loader and another mounted co-axially with the main gun. A 60 mm mortar is also fitted for firing smoke rounds or suppressing dug-in infantry anti-tank teams.
All Merkava tanks are fitted with a remote-controlled M2 Browning .50 heavy machine gun, aligned with the main gun and controlled from within the turret. The .50 machine gun has proven to be useful and effective in asymmetric warfare.
Mobility
The tank's 1,500 horsepower turbocharged diesel engine was designed by MTU and is manufactured under license by L-3 Communication Combat Propulsion Systems (formerly General Dynamics). The Mark IV's top road speed is 64 km/h.
Variants
Merkava Mark I
The Mark I, operational since 1978, is the original design created as a result of Israel Tal's decision, and was fabricated and designed for mass production.
The Mark I weighed 63 tonnes and had a diesel engine, with a power-to-weight ratio of 14 hp/ton. It was armed with the 105 millimeter M64 L71A main gun (a licensed copy of the British Royal Ordnance L7), two 7.62 mm machine guns for anti-infantry defense, and a 60 mm mortar mounted externally, with the mortar operator not completely protected by the tank's hull.
The general design borrows the tracks and road wheels from the British Centurion tank, which had seen extensive use during the Yom Kippur war and performed well in the rocky terrain of the Golan.
The Merkava was first used in combat during the 1982 Lebanon War, where Israel deployed 180 units. Although they were a success, the M113 APCs that accompanied them were found to have several defects and were withdrawn. Merkavas were converted into makeshift APCs or armored ambulances by taking out the palleted ammunition racks in storage. Ten soldiers or walking wounded could enter and exit through the rear door.
After the war, many adjustments and additions were noted and designed, the most important being that the 60 mm mortar needed to be installed within the hull and engineered for remote firing—a valuable feature that the Israelis had initially encountered on their Centurion Mk3s with their 2" Mk.III mortar. A shot trap was found beneath the rear of the turret bustle, where a well-placed shot could jam the turret completely. The installation of chain netting to disperse and destroy rocket propelled grenades and anti-tank rockets before impacting the primary armor increased survivability.
Merkava Mark II
The Mark II was first introduced into general service in April 1983. While fundamentally the same as the Merkava Mark I, it incorporated numerous small adjustments as a result of the previous year's incursion into Lebanon. The new tank was optimized for urban warfare and low intensity conflicts, with a weight and engine no greater than the Mark I.
The Mark II used the same 105 mm main gun and 7.62 mm machine guns as the Mark I, but the 60 mm mortar was redesigned during construction to be located within the hull and configured for remote firing to remove the need to expose the operator to enemy small-arms fire. An Israeli-designed automatic transmission and increased fuel storage for increased range was installed on all further Mark IIs. Anti-rocket netting was fitted for increased survivability against infantry equipped with anti-tank rockets. Many minor improvements were made to the fire-control system. Updated meteorological sensors, crosswind analyzers, and thermographic optics and image intensifiers gave greater visibility and battlefield awareness.
Newer versions of the original Mark II were designated:
Mark IIB, with thermal optics and unspecified updates to the fire control system.
Mark IIC, with more armor on the top of the turret to improve protection against attack from the air.
Mark IID, with modular composite armor on the chassis and turret, allowing rapid replacement of damaged armor.
In 2015 the IDF had begun a plan to take the old models out of storage and repurpose them as heavy armored personnel carriers. Cannons, turrets, and spaces used to store tank shells inside the hull were removed to create a personnel carrier that outperforms the lighter M113 APC. Converting hundreds of Mark II chassis provides a low-cost way to upgrade support units' capabilities to perform medical, logistical, and rescue missions. By late 2016, after 33 years of service, the last conscripted brigade to operate Merkava IIs was scheduled to transition to Merkava III and Merkava IV tanks for battlefield missions, relegating the vehicles to reserve forces for border patrols during conflicts and conversion to personnel carriers.
Merkava Mark III
The Merkava Mark III was introduced in December 1989 and was in production until 2003. As of 2016, the Merkava III is by far the most numerous tank in frontline IDF service. Compared to the Merkava II, it has upgrades to the drivetrain, powertrain, armament, and electronic systems. The most prominent addition was the incorporation of the locally developed IMI 120 mm gun. This gun and a larger diesel engine increased the total weight of the tank, but the larger engine also increased the maximum cruising speed.
The turret was re-engineered for movement independent of the tank chassis, allowing it to track a target regardless of the tank's movement. Many other changes were made, including:
External two-way telephone for secure communications between the tank crew and dismounted infantry,
Upgraded ammunition storage containers to minimize ammunition cook-off,
Addition of laser designators,
Incorporation of the Kasag modular armor system, designed for rapid replacement and repair in the battlefield and for quick upgrading as new designs and sophisticated materials become available,
BAZ System
The 1995 Mark III BAZ (Hebrew acronym for ברק זוהר, Barak Zoher, signifying Shining Lightning) had a number of updates and additional systems including:
NBC protection systems,
Locally developed central air-conditioning system,
Added improvements in ballistic protection,
The Mark IIID has removable modular composite armor on the chassis and turret.
Dor-Dalet
The last generation of the Mark III class was the Mark IIID Dor-Dalet (Hebrew: Fourth Generation), which included several components as prototypes to be introduced in the Mark IV.
Upgraded and strengthened tracks (built by Caterpillar, designed in Israel),
Installation of the R-OWS.
Independent, fully stabilised, panoramic commander's sights allowing "hunter-killer" capability
Advanced thermal imagers for both gunner and commander.
Merkava Mark IV
The Mark IV is the most recent variant of the Merkava tank, which has been in development since 1999 and production since 2004. The upgrade's development was announced in an October 1999 edition of the military publication Bamachaneh ("At the Camp"). However, the Merkava Mark III remained in production until 2003. The first Merkava IVs were in production in limited numbers by the end of 2004.
Removable modular armor, from the Merkava Mark IIID, is used on all sides, including the top and a V-shaped belly armor pack for the underside. This modular system is designed to allow damaged tanks to be rapidly repaired and returned to the field. Because rear armor is thinner, chains with iron balls are attached to detonate projectiles before they hit the main armored hull. It is the first contemporary tank without a loader's hatch in the turret roof, because any aperture in the turret roof increases risk of penetration by ATGMs. Tank rounds are stored in individual fire-proof canisters, which reduce the chance of cookoffs in a fire inside the tank. The turret is electrically-powered (hydraulic turrets use flammable liquid that ignites if the turret is penetrated) and "dry": no active rounds are stored in it.
Some features, such as hull shaping, exterior non-reflective paints (radar cross-section reduction), and shielding for engine heat plumes mixing with air particles (reduced infrared signature) to confuse enemy thermal imagers, were carried over from the IAI Lavi program of the Israeli Air Force to make the tank harder to spot by heat sensors and radar.
The Mark IV includes the larger 120 mm main gun of the previous versions, but can fire a wider variety of ammunition, including HEAT and sabot rounds like the armour-piercing fin-stabilised discarding sabot (APFSDS) kinetic energy penetrator, using an electrical semi-automatic revolving magazine for 10 rounds. It also includes a much larger 12.7 mm machine gun for anti-vehicle operations (most commonly used against technicals).
The Mark IV has the Israeli-designed "TSAWS (Tracks, Springs, and Wheels System)" caterpillar track system, called "Mazkom" by troops. This system is designed to reduce track-shedding under the harsh basalt rock conditions of Lebanon and the Golan Heights.
The model has a new fire-control system, the El-Op Knight Mark 4. An Amcoram LWS-2 laser warning receiver notifies the crew of threats such as laser-guided anti-tank missiles, and the crew can respond by firing smoke grenades from the launchers to obscure the tank from the laser beam. Electromagnetic warning against radar illumination is also installed.
The tank carries the Israeli Elbit Systems BMS (Battle Management System; Hebrew: צי"ד), a centralised system that takes data from tracked units and UAVs in theater, displays it on color screens, and distributes it in encrypted form to all other units equipped with BMS in a given theater.
The Merkava IV has been designed for rapid repair and fast replacement of damaged armour, with modular armour that can be easily removed and replaced. It is also designed to be cost-effective in production and maintenance; its cost is lower than that of a number of other tanks used by Western armies.
The tank has a high performance air conditioning system and can even be fitted with a toilet for long duration missions.
Mark IVm (Mk 4M) Windbreaker
The Merkava Mark IVm (Mk 4M) Windbreaker is a Merkava Mark IV equipped with the Trophy active protection system (APS), designated "Meil Ruach" (; "Windbreaker" or "Wind Coat"). The serial production of Mark IVm tanks started in 2009 and the first whole brigade of Mark IVms was declared operational in 2011. The Trophy APS successfully intercepted rocket-propelled grenades and anti-tank missiles, including 9M133 Kornets, fired by Hamas before and during Operation Protective Edge in 2014.
Iron Vision helmet-mounted display system
The IDF was to begin trials of Elbit's Iron Vision, the world's first helmet-mounted display for tanks, in mid-2017. Israel's Elbit, which developed the helmet-mounted display system for the F-35, designed Iron Vision to use a number of externally mounted cameras to project a 360° view of a tank's surroundings onto the helmet-mounted displays of its crew members. This allows crew members to see outside the tank without having to open the hatches.
Specifications of models
Combat history
The Merkava has participated in the following actions.
1982 Lebanon War
The Merkava was used widely during the 1982 Lebanon War. The tank outperformed contemporary Syrian tanks (mostly T-62s) and proved largely immune to the anti-tank weapons of the time (the AT-3 Sagger and RPG-7) that were used against it. It was judged to be a significant improvement over Israel's previously most effective main battle tank, the Centurion. Israel lost dozens of tanks during the conflict, including a number of Merkavas.
Second Intifada
In February 2002, a Merkava III was destroyed by a roadside bomb near Netzarim in the Gaza Strip. The tank had been lured into intervening in an attack on a settler convoy, and drove over a heavy mine (estimated at 100 kg of TNT), which detonated and totally destroyed it. Four soldiers were killed in the blast. This was the first main battle tank to be destroyed during the Second Intifada. A second Israeli tank, a Merkava II or Merkava III, was destroyed a month later in the same area, and a further three soldiers were killed. A third Merkava II or III tank was destroyed near the Kissufim Crossing, where one soldier was killed and two were wounded.
2006 Lebanon War
During the 2006 Lebanon War, five Merkava tanks were destroyed. Most of the tanks engaged were Merkava IIIs and earlier versions; only a few of the tanks used during the war were Merkava Mark IVs, since by 2006 they had entered service only in limited numbers. Hezbollah fired over 1,000 anti-tank missiles during the conflict against both tanks and dismounted infantry. Some 45 percent of all tanks and armoured vehicles hit with anti-tank missiles during the conflict suffered some form of armour penetration, and 15 tank crewmen were killed by these ATGM penetrations, which were caused by tandem-warhead missiles. Hezbollah weaponry was believed to include advanced Russian RPG-29 'Vampir', AT-5 'Konkurs', AT-13 'Metis-M', and laser-guided AT-14 'Kornet' HEAT missiles. The IDF reported finding state-of-the-art Kornet ATGMs at Hezbollah positions in the village of Ghandouriyeh, and several months after the ceasefire, reports provided detailed photographic evidence that Kornet ATGMs were indeed both possessed and used by Hezbollah in this area. Another Merkava IV tank crewman was killed when a tank ran over an improvised explosive device (IED); this tank had additional V-shaped underside armor, limiting casualties to just one of the seven personnel (four crewmen and three infantrymen) on board. In total, five Merkava tanks (two Merkava IIs, one Merkava III, and two Merkava IVs) were destroyed; of the two Merkava Mark IVs, one was damaged by a powerful IED and the other was destroyed by a Russian AT-14 'Kornet' missile. The Israeli military said that it was satisfied with the Merkava Mark IV's performance and attributed problems to insufficient training before the war. In total, 50 Merkava tanks (predominantly Merkava IIs and IIIs) were hit, eight of which remained serviceable on the battlefield; 21 tanks suffered armour penetrations (15 from missiles, and 6 from IEDs and anti-tank mines).
After the 2006 war, as the IDF became increasingly involved in unconventional and guerrilla warfare, some analysts said the Merkava was too vulnerable to advanced anti-tank missiles, which in their man-portable forms can be fielded by guerrilla opponents. Other post-war analysts, including David Eshel, disagreed, arguing that reports of Merkava losses were overstated and that, "summing up the performance of Merkava tanks, especially the latest version Merkava Mark IV, most tank crews agree that, in spite of the losses sustained and some major flaws in tactical conduct, the tank proved its mettle in its first high-saturation combat." A comparison by the armor corps newsletter showed that the average number of crewmen killed per tank penetrated by a missile or rocket fell from 2 during the Yom Kippur War to 1.5 during the 1982 Lebanon War and to 1 during the 2006 Lebanon War, indicating that, even in the face of improved anti-tank weaponry, the Merkava series provides increasingly better protection to its crews. The IDF wanted to increase orders of new Merkava Mark IV tanks, planned to add the Trophy active defense system to them, and intended to increase joint training between tank crews and Israeli anti-tank soldiers.
Operation Cast Lead
The Merkava IV was used more extensively during the Gaza War, as it had been received by the IDF in increasing numbers since 2006, replacing more of the Merkava II and III versions of the tank in service. One brigade of Merkava IVs managed to bisect the Gaza Strip in five hours without Israeli casualties. The commander of the brigade stated that battlefield tactics had been greatly revised since 2006, and had also been modified to focus on asymmetric or guerrilla threats in addition to the conventional war scenarios that the Merkava had primarily been designed for.
The IDF also deployed the Merkava II and III during the war.
Gaza Border areas
By October 2010, the IDF had begun to equip the first Merkava IVs with the Trophy active protection system, to improve the tanks' protection against advanced anti-tank missiles which use tandem-charge HEAT warheads. Added protection systems included an Elbit laser-warning system and IMI in-built smoke-screen grenades.
In December 2010, Hamas fired an AT-14 Kornet anti-tank missile at a Merkava Mark III tank stationed on the Israel-Gaza border near Al-Bureij. It had hitherto not been suspected that Hamas possessed such an advanced missile. The missile penetrated the tank's armour, but caused no injuries among its crew. As a result of the attack, Israel decided to deploy its first Merkava Mark IV battalion equipped with the Trophy system along the Gaza border.
On March 1, 2011, a Merkava MK IV stationed near the Gaza border, equipped with the Trophy active protection system, successfully foiled a missile attack against it, marking the system's first operational success.
Operation Protective Edge 2014
No tanks were damaged during Operation Protective Edge. The Merkava Mk. IVm (Merkava Mk 4M) tanks, fitted with the Trophy active protection system, intercepted anti-tank missiles and RPGs on dozens of occasions during the ground operation. During the operation, the system intercepted anti-tank weapons, primarily Kornet, as well as Metis-M and RPG-29, proving itself effective against man-portable anti-tank weapons. By identifying the source of fire, Trophy also allowed tanks to kill a Hamas anti-tank team on one occasion.
Giora Katz, head of Rafael's land division, stated that it was a "breakthrough because it is the first time in military history where an active defense system has proven itself in intense fighting."
The 401st Brigade (equipped with Merkava Mk. IVm tanks) alone killed between 120 and 130 Hamas militants during the ground fighting phase of Operation Protective Edge, according to the IDF.
Export
In May 2012, Israel offered procurement of Merkava IV tanks to the Colombian Army. The sale would include 25–40 tanks at an approximate cost of $4.5 million each, as well as a number of Namer APCs. With the threat of the expanding Venezuelan military, it would strengthen Colombian armored forces against Venezuelan T-72 tanks.
In 2014, Israel reported that exports of the Mk. 4 had started; the purchasing country's name was not disclosed for security reasons.
Derivatives
Following the Second Intifada the Israel Defense Forces modified some of their Merkavas to satisfy the needs of urban warfare.
Merkava LIC
These are Merkava Mark III BAZ or Mark IV tanks, converted for urban warfare. The LIC designation stands for "Low intensity conflict", underlining its emphasis on counter-insurgency, street-to-street inner-city asymmetrical type warfare of the 21st century.
The Merkava LIC is equipped with a turret-mounted 12.7 mm coaxial machine gun, which enables the crew to lay down fairly heavy cover fire without using the main gun (which is relatively ineffective against individual enemy combatants). Like the new remote-operated weapon station, the coaxial machine gun is fired from inside the tank, without exposing the crew to small-arms fire and snipers.
The most sensitive areas of a tank, its optics, exhaust ports and ventilators, are all protected by a newly developed high-strength metal mesh to prevent explosive charges being planted there.
Rubber whip pole-markers with LED tips and a driver's rear-facing camera have been installed to improve navigation and maneuverability in an urban environment by day or by night.
Merkava Tankbulance
Some Merkava tanks are fitted with full medical and ambulance capabilities while retaining their armament (but carrying less ammunition than the standard tank). The cabin area is converted for carrying injured personnel and includes two stretchers and life support medical station systems supplemented by a full medical team complement to operate under combat conditions with a Merkava battalion. The vehicle has a rear door to facilitate evacuation under fire, and can provide cover-fire/fire-support to infantry.
The "tankbulance" is not an unarmed ambulance and consequently is not protected by the Geneva Conventions provisions regarding ambulances, but it is far less vulnerable to accidental or deliberate fire than an ambulance or armored personnel carrier.
Merkava IFV Namer
Namer (Hebrew: leopard, which is also an abbreviation of "Nagmash (APC) Merkava") is an infantry fighting vehicle based on the Merkava Mark IV chassis. In service since 2008, the vehicle was initially called Nemmera (Hebrew: leopardess), but was later renamed Namer.
Namer is equipped with a Samson Remote Controlled Weapon Station (RCWS) armed with either a .50 M2 Browning Heavy Machinegun or a Mk 19 Automatic Grenade Launcher. It also has a 7.62 mm MAG machine gun, 60 mm mortar and smoke grenades. Like the Merkava Mark IV, it is optimized for high level of crew survival on the battlefield. The Namer has a three-man crew (commander, driver, and RCWS gunner) and may carry up to nine infantrymen and a stretcher. An ambulance variant can carry two casualties on stretchers and medical equipment.
The Golani Brigade used two Namer IFVs during Operation Cast Lead. During Operation Protective Edge, more than 20 vehicles were operated with great success, and post-operation analysis recommended procuring more of them.
Merkava ARV Nemmera
The Merkava armored recovery vehicle, initially called Namer (Hebrew: leopard) but subsequently renamed Nemmera (Hebrew: leopardess), is based on a Merkava Mark III or IV chassis. It can tow disabled tanks and carries a complete Merkava back-up power pack that can be changed in the field in under 90 minutes.
There are two versions of Nemmera: a heavier version equipped with a 42 ton-meter crane and a 35 ton-meter winch, and a lighter version equipped with a smaller crane.
Merkava Howitzer Sholef
Two prototypes of the Sholef ("Slammer", Hebrew slang for "Gunslinger") 155 mm self-propelled howitzer with an automatic loading system were built by Soltam in 1984–1986. The 45-ton vehicle had a long 155 mm gun barrel giving a range of over 45 km. Using GPS, inertial navigation, and an internal fire control computer, it was also capable of direct fire while on the move. It never entered production.
The Slammer is a heavily armored artillery gun mounted on a modified Merkava Mk 1 chassis; many of these vehicles are Merkava Mk 1s that were retired after the Merkava Mk 2 and Mk 3 came into service. Its long 52-calibre gun barrel extends range by about 10%, and an automatic loader allows a temporarily increased rate of fire. Ammunition racks are large. The Slammer is ready for autonomous operation (without an FDC) within 15 seconds of a halt if the target's location is known, using GPS, inertial navigation, and an internal fire control computer.
The Slammer 155 mm self-propelled howitzer is based on a modified Merkava MBT chassis fitted with a new welded steel turret, designed by Soltam Systems.
Development commenced in the 1970s. The project was considered of high national priority and incorporated the newest technological developments. Ultimately, however, the Israel Defense Forces selected an upgraded version of the American M109 howitzer instead.
The Sholef's chassis, aside from a few minor modifications, is identical to that of the Merkava Mk.III. The glacis plate is unchanged, except for the addition of a support bracket for the gun turret, which is folded down when not in use. As such, the Sholef and Merkava series share a large percentage of common components. The front-left side of the chassis has a prominent exhaust louver, along with a much smaller port just in front of it; the exact function of this port is uncertain, though the soot seen around it in photos of the Sholef suggests it may be a new or additional exhaust port, or perhaps an outlet for a smoke generator.
The Sholef can be ready to fire only 15 seconds after coming to a complete stop, and fire three projectiles in only 15 seconds. It is compatible with standard NATO 155 mm ammunition, and a total of 75 projectiles can be stowed in one Sholef, 60 of which are ready for combat.
The Sholef's 155 mm/52 gun is an original design created by Soltam, though it bears a resemblance to South Africa's G5 howitzer. It has a fume extractor and muzzle brake, and is held stationary by a travel lock while the vehicle is on the move. The gun has a maximum rate of fire of 9 rounds/min and a range in excess of 40,000 m when firing an ERFB-BB round. Though loaded automatically, the gun may be cycled and fired manually if the need arises. While the gun is normally carried in a travel lock while the Sholef is on the move, as with most other self-propelled howitzers, the weapon is stabilized and can actually be used for direct fire while the vehicle is moving, giving it much greater self-defense capability than most other vehicles of its type.
A crew of four is required to fully operate the Sholef. Air conditioning and heating for the crew are provided, as is a ration heater.
The hull has the same ballistic protection as the Merkava Mk.III. The armor on the turret is sufficient to defeat small arms fire, shell splinters, blast overpressure, and most heavy machine gun rounds. The armor is augmented by spall liners, and the same overpressure NBC system as the Merkava Mk.III is fitted. There is also a back-up collective NBC system.
The running gear consists of six unevenly spaced rubber-tired roadwheels on each side, and five return rollers, the second from the rear of which is noticeably larger than the others. The drive sprocket is forward, and the conspicuously spoked idler is rear. These may be partially obscured by track skirts, of which the Merkava Mk.III has ten panels, with a wavering underside, and little coverage of the sprocket or idler.
The ordnance is fitted with a fume extractor and a double-baffle muzzle brake. When travelling, the ordnance is held in position by a travel lock that is mounted on the forward part of the glacis plate and this is remotely operated from the crew compartment.
Firing an ERFB-BB projectile, the 155 mm 52 calibre ordnance has a maximum range of 40,000+ m.
The 155 mm 52-calibre ordnance and recoil system is of the company's well-proven type already used in its towed weapons. The breech block assembly is of the semi-automatic wedge type and contains an automatic primer feeding system that enables manual reloading of the primer without opening the breech. Turret traverse and weapon elevation are hydraulic, with manual controls for emergency use.
A maximum rate of fire of 9 rds/min can be achieved due to the automatic computerised loading system, and a burst rate of fire of three rounds in 15 seconds.
The high rate of fire can be achieved using the onboard ammunition supply or from ground-piled ammunition.
The loading cycle is operated by two turret crewmen only, with the commander operating the computer and charge loader.
The automatic loader has five main subsystems: projectile storage system; projectile transfer system; loading tray with flick rammer; charge loading tray and elevator for external charge supply; and projectile elevator for reloading the external storage or directly loading the gun.
The internal projectile storage contains 60 projectiles ready for automatic loading with the remaining 15 stored in other locations. The system enables the handling of all kinds of projectiles in use without any adaptation.
Charge loading is accomplished manually using a loading tray with the ignition primer being inserted automatically.
All systems have a manual back-up so that, in the case of failure, the loading system may be operated partly or completely manually by only three crewmen, so allowing a continuous firing rate of 4 rds/min.
The computer also controls the functioning of the gun. The Loader Control System (LCS) consists of five main units (a conceptual sketch of the control flow follows this list):
The commander's panel provides the means for the commander to control the automatic loader and has a dedicated keyboard and supporting electronic circuits
The Central Control Unit (CCU) is based on the Intel 80286 CPU and produces all of the system's logic equations. The unit transfers commands through serial communications (RS-422) to the computerised units and controls the display on the commander's panel
The Terminal Units (TUs) are based on the 8031 controller for purposes of independent control of the drive elements according to a functionally determined division. With the assistance of the terminal unit, a local mode can also be used in working with selected elements
For guiding operators and for round identification and fusing, the Operator's Panel (OP) includes an LCD with fixed instructions and one dot-matrix line.
The Loader Keyboard Panel (LKP) includes the breech block closing switch, the fire switch, and local activation of the trays.
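The following conceptual sketch, referenced above, models that control flow in ordinary Java purely for illustration: a central planner (standing in for the CCU) turns an operator request into an ordered sequence of command frames dispatched to terminal-unit stand-ins, much as the RS-422 serial link would. All names and commands are invented; nothing here reflects the actual LCS software, which ran on dedicated 1980s embedded hardware:

```java
import java.util.List;
import java.util.Map;

// Conceptual sketch only: models the command flow described above
// (commander's panel -> CCU -> terminal units over a serial link).
public class LoaderControlSketch {

    /** A command frame as it might travel on the serial bus. */
    record Frame(int terminalUnitId, String command) {}

    /** Stand-in for one 8031-based terminal unit driving a drive element. */
    static class TerminalUnit {
        final int id;
        TerminalUnit(int id) { this.id = id; }
        void execute(String command) {
            System.out.printf("TU-%d executing %s%n", id, command);
        }
    }

    /** Stand-in for the CCU: evaluates the "logic equations" that turn an
     *  operator request into an ordered sequence of frames. */
    static List<Frame> plan(String operatorRequest) {
        if ("LOAD_FROM_INTERNAL_STORAGE".equals(operatorRequest)) {
            return List.of(
                new Frame(1, "SELECT_PROJECTILE"),
                new Frame(2, "TRANSFER_TO_TRAY"),
                new Frame(3, "RAM_PROJECTILE"));
        }
        return List.of(); // unknown request: issue no commands
    }

    public static void main(String[] args) {
        Map<Integer, TerminalUnit> units = Map.of(
            1, new TerminalUnit(1),
            2, new TerminalUnit(2),
            3, new TerminalUnit(3));
        // Dispatch each frame in order, as the serial link would.
        for (Frame f : plan("LOAD_FROM_INTERNAL_STORAGE")) {
            units.get(f.terminalUnitId()).execute(f.command());
        }
    }
}
```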
The main operational roles are: firing from internal storage; firing from the elevator (ground-piled ammunition); loading from the elevator (external pile); synthesising fire programs; unloading; manual firing; identification and fusing; and checks.
Standard equipment includes an NBC system of the overpressure type and an inertial navigation and aiming system designed for autonomous operations.
According to Soltam Systems, the 155 mm/52 calibre ordnance and automatic loader, or parts of the system, could be installed in other self-propelled artillery systems and used to upgrade other self-propelled systems such as the US-designed and built 155 mm M109 and M44.
FMCV
On July 14, 2011, The Jerusalem Post reported that the IDF had begun developing a successor to the Merkava series of tanks. The development effort was prompted in part by the arrival of the Trophy active protection system: with the system's ability to intercept threats at a stand-off distance, the need for vehicles like the Merkava to carry thick, heavy layers of armor came under review. The Merkava Tank Planning Directorate set up a team to study principles for a future tank and present ideas for an armored fighting vehicle that would provide mobile firepower on a future battlefield. The team reviewed basic design principles including lessening its weight, armor thickness versus an APS to intercept anti-tank threats, reducing the crew size, and the type of main gun. Horsepower capabilities and heavy and light track systems, compared with a wheeled chassis, were also considered. With future battlefield developments affecting design features, the vehicle may not be considered a "tank" in the traditional sense. By July 2012, details began to emerge of the technologies under consideration for the new design. One possibility is the replacement of the traditional main gun with a laser cannon or an electromagnetic cannon. Other improvements could include a hybrid-electric engine and a reduced crew of two. The goals of the new tank are to make it faster, better protected, more interoperable, and more lethal than the current Merkava.
The 65-ton Merkava is not regarded as useful for missions other than conventional warfare. The Israeli Army Armored Corps wanted a lighter and highly mobile vehicle for rapid-response and urban warfare situations that could fill multiple roles. In 2012, the Defense Ministry drafted a program for the development of a new family of light armored vehicles called Rakiya (Horizon), a Hebrew acronym for "future manned combat vehicle" (FMCV). The FMCV is planned to weigh 35 tons and have sufficient armor and weapons for both urban and conventional military operations. Instead of one multi-mission chassis, separate vehicles in distinct variants will perform different roles, with all vehicles using common components. The vehicles are likely to be wheeled, to maneuver in urban environments and move troops and equipment around in built-up areas. While the FMCV will be a fifth-generation vehicle as a follow-on to the Merkava IV, it will not be a replacement for the tank: the Merkava and Namer heavy tracked vehicles will remain in service for decades, while FMCV vehicles are to address entirely different operational requirements. Although the program seems similar to the American Future Combat Systems effort, which failed to produce a family of rapidly deployable lightweight ground vehicles, program officials say they learned from the American experience and that the FMCV is more focused and driven by simpler and more reasonable requirements based on cost considerations. Officials expected requirements for a range of FMCV configurations to be approved in 2014 and solicited from Israeli and American companies, with the IDF hoping for the FMCV family of vehicles to be operational by 2020.
Smartphone
A smartphone is a portable device that combines mobile telephone and computing functions into one unit. Smartphones are distinguished from feature phones by their stronger hardware capabilities and extensive mobile operating systems, which facilitate wider software, internet (including web browsing over mobile broadband), and multimedia functionality (including music, video, cameras, and gaming), alongside core phone functions such as voice calls and text messaging. Smartphones typically contain a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, include various sensors that can be leveraged by pre-included and third-party software (such as a magnetometer, proximity sensors, a barometer, a gyroscope, an accelerometer, and more), and support wireless communication protocols (such as Bluetooth, Wi-Fi, or satellite navigation).
Early smartphones were marketed primarily towards the enterprise market, attempting to bridge the functionality of standalone personal digital assistant (PDA) devices with support for cellular telephony, but were limited by their bulky form, short battery life, slow analog cellular networks, and the immaturity of wireless data services. These issues were eventually resolved with the exponential scaling and miniaturization of MOS transistors down to sub-micron levels (Moore's law), the improved lithium-ion battery, faster digital mobile data networks (Edholm's law), and more mature software platforms that allowed mobile device ecosystems to develop independently of data providers.
In the 2000s, NTT DoCoMo's i-mode platform, BlackBerry, Nokia's Symbian platform, and Windows Mobile began to gain market traction, with models often featuring QWERTY keyboards or resistive touchscreen input, and emphasizing access to push email and wireless internet. Following the rising popularity of the iPhone in the late 2000s, the majority of smartphones have featured thin, slate-like form factors, with large, capacitive screens with support for multi-touch gestures rather than physical keyboards, and offer the ability for users to download or purchase additional applications from a centralized store, and use cloud storage and synchronization, virtual assistants, as well as mobile payment services. Smartphones have largely replaced PDAs, handheld/palm-sized PCs and portable media players (PMP).
Improved hardware and faster wireless communication (due to standards such as LTE) have bolstered the growth of the smartphone industry. In the third quarter of 2012, one billion smartphones were in use worldwide. Global smartphone sales surpassed the sales figures for feature phones in early 2013.
History
The development of the smartphone was enabled by several key technological advances. The exponential scaling and miniaturization of MOSFETs (MOS transistors) down to sub-micron levels during the 1990s–2000s (as predicted by Moore's law) made it possible to build portable smart devices such as smartphones, as well as enabling the transition from analog to faster digital wireless mobile networks (leading to Edholm's law). Other important enabling factors include the lithium-ion battery, an indispensable energy source enabling long battery life, invented in the 1980s and commercialized in 1991, and the development of more mature software platforms that allowed mobile device ecosystems to develop independently of data providers.
Forerunner
In the early 1990s, IBM engineer Frank Canova realised that chip-and-wireless technology was becoming small enough to use in handheld devices. The first commercially available device that could be properly referred to as a "smartphone" began as a prototype called "Angler" developed by Canova in 1992 while at IBM and demonstrated in November of that year at the COMDEX computer industry trade show. A refined version was marketed to consumers in 1994 by BellSouth under the name Simon Personal Communicator. In addition to placing and receiving cellular calls, the touchscreen-equipped Simon could send and receive faxes and emails. It included an address book, calendar, appointment scheduler, calculator, world time clock, and notepad, as well as other visionary mobile applications such as maps, stock reports and news.
The IBM Simon was manufactured by Mitsubishi Electric, which integrated features from its own wireless personal digital assistant (PDA) and cellular radio technologies. It featured a liquid-crystal display (LCD) and PC Card support. The Simon was commercially unsuccessful, particularly due to its bulky form factor and limited battery life, using NiCad batteries rather than the nickel–metal hydride batteries commonly used in mobile phones in the 1990s, or lithium-ion batteries used in modern smartphones.
The term "smart phone" was not coined until a year after the introduction of the Simon, appearing in print as early as 1995, describing AT&T's PhoneWriter Communicator. The term "smartphone" was first used by Ericsson in 1997 to describe a new device concept, the GS88.
PDA/phone hybrids
Beginning in the mid-to-late 1990s, many people who had mobile phones carried a separate dedicated PDA device, running early versions of operating systems such as Palm OS, Newton OS, Symbian or Windows CE/Pocket PC. These operating systems would later evolve into early mobile operating systems. Most of the "smartphones" in this era were hybrid devices that combined these existing, familiar PDA OSes with basic phone hardware. The results were devices that were bulkier than either dedicated mobile phones or PDAs, but allowed a limited amount of cellular Internet access. PDA and mobile phone manufacturers competed in reducing the size of devices. The bulk of these smartphones, combined with their high cost and expensive data plans, plus other drawbacks such as expansion limitations and decreased battery life compared to separate standalone devices, generally limited their popularity to "early adopters" and business users who needed portable connectivity.
In March 1996, Hewlett-Packard released the OmniGo 700LX, a modified HP 200LX palmtop PC with a Nokia 2110 mobile phone piggybacked onto it and ROM-based software to support it. It had a 640×200 resolution CGA compatible four-shade gray-scale LCD screen and could be used to place and receive calls, and to create and receive text messages, emails and faxes. It was also 100% DOS 5.0 compatible, allowing it to run thousands of existing software titles, including early versions of Windows.
In August 1996, Nokia released the Nokia 9000 Communicator, a digital cellular PDA based on the Nokia 2110 with an integrated system based on the PEN/GEOS 3.0 operating system from Geoworks. The two components were attached by a hinge in what became known as a clamshell design, with the display above and a physical QWERTY keyboard below. The PDA provided e-mail; calendar, address book, calculator and notebook applications; text-based Web browsing; and could send and receive faxes. When closed, the device could be used as a digital cellular telephone.
In June 1999 Qualcomm released the "pdQ Smartphone", a CDMA digital PCS smartphone with an integrated Palm PDA and Internet connectivity.
Subsequent landmark devices included:
The Ericsson R380 (December 2000) by Ericsson Mobile Communications, the first phone running the operating system later named Symbian (it ran EPOC Release 5, which was renamed Symbian OS at Release 6). It had PDA functionality and limited Web browsing on a resistive touchscreen utilizing a stylus. While it was marketed as a "smartphone", users could not install their own software on the device.
The Kyocera 6035 (February 2001), a dual-nature device with a separate Palm OS PDA operating system and CDMA mobile phone firmware. It supported limited Web browsing with the PDA software treating the phone hardware as an attached modem.
The Nokia 9210 Communicator (June 2001), the first phone running Symbian (Release 6) with Nokia's Series 80 platform (v1.0). This was the first Symbian phone platform allowing the installation of additional applications. Like the Nokia 9000 Communicator, it is a large clamshell device with a full physical QWERTY keyboard inside.
Handspring's Treo 180 (2002), the first smartphone that fully integrated the Palm OS on a GSM mobile phone, with telephony, SMS messaging and Internet access built into the OS. The 180 model had a thumb-type keyboard; the 180g version had a Graffiti handwriting recognition area instead.
Japanese cell phones
In 1999, Japanese wireless provider NTT DoCoMo launched i-mode, a new mobile internet platform which provided data transmission speeds up to 9.6 kilobits per second, and access to web services available through the platform such as online shopping. NTT DoCoMo's i-mode used cHTML, a language which restricted some aspects of traditional HTML in favor of increasing data speed for the devices. Limited functionality, small screens and limited bandwidth allowed for phones to use the slower data speeds available. The rise of i-mode helped NTT DoCoMo accumulate an estimated 40 million subscribers by the end of 2001, by which time it ranked first in market capitalization in Japan and second globally. Japanese cell phones increasingly diverged from global standards and trends to offer other forms of advanced services and smartphone-like functionality that were specifically tailored to the Japanese market. These included mobile payments and shopping; near-field communication (NFC), allowing mobile wallet functionality to replace smart cards for transit fares, loyalty cards, identity cards, event tickets, coupons and money transfer; downloadable content like musical ringtones, games, and comics; and 1seg mobile television. Phones built by Japanese manufacturers used custom firmware, however, and didn't yet feature standardized mobile operating systems designed to cater to third-party application development, so their software and ecosystems were akin to very advanced feature phones. As with other feature phones, additional software and services required partnerships and deals with providers.
The degree of integration between phones and carriers, unique phone features, non-standardized platforms, and tailoring to Japanese culture made it difficult for Japanese manufacturers to export their phones, especially when demand was so high in Japan that the companies didn't feel the need to look elsewhere for additional profits.
The rise of 3G technology in other markets, and of non-Japanese phones with powerful standardized smartphone operating systems, app stores, and advanced wireless network capabilities, allowed non-Japanese phone manufacturers to finally break into the Japanese market, gradually adopting Japanese phone features like emojis, mobile payments, and NFC, and spreading them to the rest of the world.
Early smartphones
Phones that made effective use of any significant data connectivity were still rare outside Japan until the introduction of the Danger Hiptop in 2002, which saw moderate success among U.S. consumers as the T-Mobile Sidekick. Later, in the mid-2000s, business users in the U.S. started to adopt devices based on Microsoft's Windows Mobile, and then BlackBerry smartphones from Research In Motion. American users popularized the term "CrackBerry" in 2006 due to the BlackBerry's addictive nature. In the U.S., the high cost of data plans and relative rarity of devices with Wi-Fi capabilities that could avoid cellular data network usage kept adoption of smartphones mainly to business professionals and "early adopters."
Outside the U.S. and Japan, Nokia was seeing success with its smartphones based on Symbian, originally developed by Psion for their personal organisers, and it was the most popular smartphone OS in Europe during the middle to late 2000s. Initially, Nokia's Symbian smartphones were focused on business with the Eseries, similar to Windows Mobile and BlackBerry devices at the time. From 2006 onwards, Nokia started producing consumer-focused smartphones, popularized by the entertainment-focused Nseries. Until 2010, Symbian was the world's most widely used smartphone operating system.
The touchscreen personal digital assistant (PDA)-derived nature of adapted operating systems like Palm OS, the "Pocket PC" versions of what was later Windows Mobile, and the UIQ interface that was originally designed for pen-based PDAs on Symbian OS devices resulted in some early smartphones having stylus-based interfaces. These allowed for virtual keyboards and/or handwriting input, thus also allowing easy entry of Asian characters.
By the mid-2000s, the majority of smartphones had a physical QWERTY keyboard. Most used a "keyboard bar" form factor, like the BlackBerry line, Windows Mobile smartphones, Palm Treos, and some of the Nokia Eseries. A few hid their full physical QWERTY keyboard in a sliding form factor, like the Danger Hiptop line. Some even had only a numeric keypad using T9 text input, like the Nokia Nseries and other models in the Nokia Eseries. Resistive touchscreens with stylus-based interfaces could still be found on a few smartphones, like the Palm Treos, which had dropped their handwriting input after a few early models that were available in versions with Graffiti instead of a keyboard.
Form factor and operating system shifts
The late 2000s and early 2010s saw a shift in smartphone interfaces away from devices with physical keyboards and keypads to ones with large finger-operated capacitive touchscreens. The first phone of any kind with a large capacitive touchscreen was the LG Prada, announced by LG in December 2006. This was a fashionable feature phone created in collaboration with Italian luxury designer Prada with a 3" 240x400 pixel screen, a 2-Megapixel digital camera with 144p video recording ability, an LED flash, and a miniature mirror for self portraits.
In January 2007, Apple Computer introduced the iPhone. It had a 3.5" capacitive touchscreen with twice the common resolution of most smartphone screens at the time, and introduced multi-touch to phones, which allowed gestures such as "pinching" to zoom in or out on photos, maps, and web pages. The iPhone was notable as being the first device of its kind targeted at the mass market to abandon the use of a stylus, keyboard, or keypad typical of contemporary smartphones, instead using a large touchscreen for direct finger input as its main means of interaction.
The iPhone's operating system was also a shift away from previous ones adapted from PDAs and feature phones, to one powerful enough to avoid the limited, stripped-down web browsers of previous phones, which required pages specially formatted using technologies such as WML, cHTML, or XHTML; it instead ran a version of Apple's Safari browser that could easily render full websites not specifically designed for phones.
Later, Apple shipped a software update that gave the iPhone a built-in on-device App Store, allowing direct wireless downloads of third-party software. This kind of centralized App Store and free developer tools quickly became the new main paradigm for all smartphone platforms for software development, distribution, discovery, installation, and payment, in place of expensive developer tools that required official approval to use and a dependence on third-party sources providing applications for multiple platforms.
The advantages of a design with software powerful enough to support advanced applications and a large capacitive touchscreen affected the development of another smartphone OS platform, Android, with a more BlackBerry-like prototype device scrapped in favor of a touchscreen device with a slide-out physical keyboard, as Google's engineers thought at the time that a touchscreen could not completely replace a physical keyboard and buttons. Android is based around a modified Linux kernel, again providing more power than mobile operating systems adapted from PDAs and feature phones. The first Android device, the horizontal-sliding HTC Dream, was released in September 2008.
In 2012, Asus started experimenting with a convertible docking system named PadFone, where the standalone handset can, when necessary, be inserted into a tablet-sized screen unit with an integrated supporting battery and used as a tablet.
In 2013 and 2014, Samsung experimented with the hybrid combination of compact camera and smartphone, releasing the Galaxy S4 Zoom and K Zoom, each equipped with an integrated 10× optical zoom lens and manual parameter settings (including manual exposure and focus), years before these were widely adopted among smartphones. The S4 Zoom additionally has a rotary knob ring around the lens and a tripod mount.
While screen sizes have increased, manufacturers have attempted to make smartphones thinner at the expense of utility and sturdiness, since a thinner frame is more vulnerable to bending and has less space for components, namely battery capacity.
Operating system competition
The iPhone and later touchscreen-only Android devices together popularized the slate form factor, based on a large capacitive touchscreen as the sole means of interaction, and led to the decline of earlier, keyboard- and keypad-focused platforms. Later, navigation keys such as the home, back, menu, task and search buttons were also increasingly replaced by non-physical touch keys, and then by virtual, simulated on-screen navigation keys, commonly with access combinations such as a long press of the task key simulating a short menu key press, or a long press of the home button triggering search. More recent "bezel-less" types have their screen surface space extended to the unit's front bottom to compensate for the display area lost to simulating the navigation keys. While virtual keys offer more potential customizability, their location may be inconsistent among systems and may depend on screen rotation and the software used.
Multiple vendors attempted to update or replace their existing smartphone platforms and devices to better compete with Android and the iPhone; Palm unveiled a new platform known as webOS for its Palm Pre in late 2009 to replace Palm OS, which featured a focus on a task-based "card" metaphor and seamless synchronization and integration between various online services (as opposed to the then-conventional concept of a smartphone needing a PC to serve as a "canonical, authoritative repository" for user data). HP acquired Palm in 2010 and released several other webOS devices, including the Pre 3 and HP TouchPad tablet. As part of a proposed divestment of its consumer business to focus on enterprise software, HP abruptly ended development of future webOS devices in August 2011, and sold the rights to webOS to LG Electronics in 2013, for use as a smart TV platform.
Research in Motion introduced the vertical-sliding BlackBerry Torch and BlackBerry OS 6 in 2010, which featured a redesigned user interface, support for gestures such as pinch-to-zoom, and a new web browser based on the same WebKit rendering engine used by the iPhone. The following year, RIM released BlackBerry OS 7 and new models in the Bold and Torch ranges, which included a new Bold with a touchscreen alongside its keyboard, and the Torch 9860—the first BlackBerry phone to not include a physical keyboard. In 2013, it replaced the legacy BlackBerry OS with a revamped, QNX-based platform known as BlackBerry 10, with the all-touch BlackBerry Z10 and keyboard-equipped Q10 as launch devices.
In 2010, Microsoft unveiled a replacement for Windows Mobile known as Windows Phone, featuring a new touchscreen-centric user interface built around flat design and typography, a home screen with "live tiles" containing feeds of updates from apps, as well as integrated Microsoft Office apps. In February 2011, Nokia announced that it had entered into a major partnership with Microsoft, under which it would exclusively use Windows Phone on all of its future smartphones, and integrate Microsoft's Bing search engine and Bing Maps (which, as part of the partnership, would also license Nokia Maps data) into all future devices. The announcement led to the abandonment of both Symbian, as well as MeeGo—a Linux-based mobile platform it was co-developing with Intel. Nokia's low-end Lumia 520 saw strong demand and helped Windows Phone gain niche popularity in some markets, overtaking BlackBerry in global market share in 2013.
In mid-June 2012, Meizu released its mobile operating system, Flyme OS.
Many of these attempts to compete with Android and iPhone were short-lived. Over the course of the decade, the two platforms became a clear duopoly in smartphone sales and market share, with BlackBerry, Windows Phone, and "other" operating systems eventually stagnating to little or no measurable market share. In 2015, BlackBerry began to pivot away from its in-house mobile platforms in favor of producing Android devices, focusing on a security-enhanced distribution of the software. The following year, the company announced that it would also exit the hardware market to focus more on software and its enterprise middleware, and began to license the BlackBerry brand and its Android distribution to third-party OEMs such as TCL for future devices.
In September 2013, Microsoft announced its intent to acquire Nokia's mobile device business for $7.1 billion, as part of a strategy under CEO Steve Ballmer for Microsoft to be a "devices and services" company. Despite the growth of Windows Phone and the Lumia range (which accounted for nearly 90% of all Windows Phone devices sold), the platform never had significant market share in the key U.S. market, and Microsoft was unable to maintain Windows Phone's momentum in the years that followed, resulting in dwindling interest from users and app developers. After Ballmer was succeeded as CEO by Satya Nadella, who placed a larger focus on software and cloud computing, Microsoft took a $7.6 billion write-off on the Nokia assets in July 2015, and laid off nearly the entire Microsoft Mobile unit in May 2016.
Prior to the completion of the sale to Microsoft, Nokia released a series of Android-derived smartphones for emerging markets known as Nokia X, which combined an Android-based platform with elements of Windows Phone and Nokia's feature phone platform Asha, using Microsoft and Nokia services rather than Google.
Camera advancements
The first commercial camera phone was the Kyocera Visual Phone VP-210, released in Japan in May 1999. It was called a "mobile videophone" at the time, and had a 110,000-pixel front-facing camera. It could send up to two images per second over Japan's Personal Handy-phone System (PHS) cellular network, and store up to 20 JPEG digital images, which could be sent over e-mail. The first mass-market camera phone was the J-SH04, a Sharp J-Phone model sold in Japan in November 2000. It could instantly transmit pictures via cell phone telecommunication.
By the mid-2000s, higher-end cell phones commonly had integrated digital cameras. In 2003 camera phones outsold stand-alone digital cameras, and in 2006 they outsold film and digital stand-alone cameras. Five billion camera phones were sold in five years, and by 2007 more than half of the installed base of all mobile phones were camera phones. Sales of separate cameras peaked in 2008.
Many early smartphones didn't have cameras at all, and those that did offered low performance and image and video quality that could not compete with budget pocket cameras or fulfill users' needs. By the beginning of the 2010s almost all smartphones had an integrated digital camera. The decline in sales of stand-alone cameras accelerated due to the increasing use of smartphones with rapidly improving camera technology for casual photography, easier image manipulation, and the ability to directly share photos through apps and web-based services. By 2011, cell phones with integrated cameras were selling hundreds of millions per year. In 2015, digital camera sales totaled 35.395 million units, less than a third of digital camera sales at their peak and also slightly fewer than film camera sales at their peak.
Also contributing to the rise of smartphones over dedicated cameras is that smaller pocket cameras have difficulty producing bokeh in images, whereas some smartphones now have dual-lens cameras that reproduce the bokeh effect easily, and can even rearrange the level of bokeh after shooting. This works by capturing multiple images with different focus settings, then combining the background of the main image with a macro focus shot.
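A minimal sketch of that compositing step, assuming a per-pixel foreground mask has already been estimated from the two differently focused shots; the file names, the grayscale mask format, and the simple box blur are illustrative stand-ins, not any vendor's actual pipeline:

```kotlin
import java.awt.image.BufferedImage
import java.io.File
import javax.imageio.ImageIO

// Simple box blur; real camera pipelines use depth-aware, lens-shaped kernels.
fun boxBlur(src: BufferedImage, radius: Int): BufferedImage {
    val out = BufferedImage(src.width, src.height, BufferedImage.TYPE_INT_RGB)
    for (y in 0 until src.height) for (x in 0 until src.width) {
        var r = 0; var g = 0; var b = 0; var n = 0
        for (dy in -radius..radius) for (dx in -radius..radius) {
            val rgb = src.getRGB((x + dx).coerceIn(0, src.width - 1),
                                 (y + dy).coerceIn(0, src.height - 1))
            r += (rgb shr 16) and 0xFF; g += (rgb shr 8) and 0xFF; b += rgb and 0xFF; n++
        }
        out.setRGB(x, y, ((r / n) shl 16) or ((g / n) shl 8) or (b / n))
    }
    return out
}

fun main() {
    // Hypothetical inputs: the main shot and a mask (white = foreground kept sharp).
    val sharp = ImageIO.read(File("main_shot.png"))
    val mask = ImageIO.read(File("foreground_mask.png"))
    val blurred = boxBlur(sharp, radius = 8)

    val out = BufferedImage(sharp.width, sharp.height, BufferedImage.TYPE_INT_RGB)
    for (y in 0 until sharp.height) for (x in 0 until sharp.width) {
        val a = (mask.getRGB(x, y) and 0xFF) / 255.0   // 1 = keep sharp, 0 = blur
        val s = sharp.getRGB(x, y)
        val bl = blurred.getRGB(x, y)
        fun mix(shift: Int) =
            (a * ((s shr shift) and 0xFF) + (1 - a) * ((bl shr shift) and 0xFF)).toInt()
        out.setRGB(x, y, (mix(16) shl 16) or (mix(8) shl 8) or mix(0))
    }
    ImageIO.write(out, "png", File("bokeh_out.png"))
}
```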
In 2007 the Nokia N95 was notable as a smartphone that had a 5.0 Megapixel (MP) camera, when most others had cameras with around 3 MP or less than 2 MP. Some specialized feature phones like the LG Viewty, Samsung SGH-G800, and Sony Ericsson K850i, all released later that year, also had 5.0 MP cameras. By 2010 5.0 MP cameras were common; a few smartphones had 8.0 MP cameras and the Nokia N8, Sony Ericsson Satio, and Samsung M8910 Pixon12 feature phone had 12 MP. The main camera of the 2009 Nokia N86 uniquely features a three-level aperture lens.
The Altek Leo, a 14-megapixel smartphone with a 3× optical zoom lens and 720p HD video camera, was released in late 2010.
In 2011, the same year the Nintendo 3DS was released, HTC unveiled the Evo 3D, a 3D phone with a dual five-megapixel rear camera setup for spatial imaging, among the earliest mobile phones with more than one rear camera.
The 2012 Samsung Galaxy S3 introduced the ability to capture photos using voice commands.
In 2012 Nokia announced and released the Nokia 808 PureView, featuring a 41-megapixel 1/1.2-inch sensor and a high-resolution f/2.4 Zeiss all-aspherical one-group lens. The high resolution enables four-times lossless digital zoom at 1080p, and six-times at 720p, using image sensor cropping. The 2013 Nokia Lumia 1020 has a similar high-resolution camera setup, with the addition of optical image stabilization and manual camera settings, years before these were common among high-end mobile phones, although it lacked expandable storage, which could have been useful for the correspondingly large file sizes.
In the same year, Nokia introduced mobile optical image stabilization with the Lumia 920, enabling prolonged exposure times for low-light photography and smoothing out handheld video shake, whose appearance would otherwise be magnified on a larger display such as a monitor or television set, to the detriment of the viewing experience.
Since 2012, smartphones have increasingly been able to capture photos while filming, at resolutions that vary by vendor. Samsung uses the highest image sensor resolution at the video's aspect ratio, which at 16:9 is 6 megapixels (3264×1836) on the Galaxy S3 and 9.6 megapixels (4128×2322) on the Galaxy S4. The earliest iPhones with such functionality, the iPhone 5 and 5s, captured simultaneous photos at 0.9 megapixels (1280×720) while filming.
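A quick check of the megapixel arithmetic behind these figures, using the resolutions cited above:

```kotlin
// Megapixels are simply width × height in millions of pixels.
fun megapixels(width: Int, height: Int) = width * height / 1_000_000.0

fun main() {
    println("Galaxy S3: %.1f MP".format(megapixels(3264, 1836))) // ≈ 6.0 MP
    println("Galaxy S4: %.1f MP".format(megapixels(4128, 2322))) // ≈ 9.6 MP
    println("iPhone 5:  %.1f MP".format(megapixels(1280, 720)))  // ≈ 0.9 MP
}
```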
Starting in 2013 on the Xperia Z1, Sony experimented with real-time augmented reality camera effects, such as floating text, virtual plants, a volcano, and a dinosaur walking in the scenery. Apple later did similarly in 2017 with the iPhone X.
In the same year, iOS 7 introduced a viewfinder behavior that was later widely implemented: the exposure value can be adjusted by swiping vertically after focus and exposure have been set by tapping, and even while both are locked after holding down for a brief moment. On some devices, this behavior may be restricted by software in video/slow-motion modes and for the front camera.
In 2013, Samsung unveiled the Galaxy S4 Zoom smartphone, with the grip shape of a compact camera and a 10× optical zoom lens, as well as a rotary knob ring around the lens, as used on higher-end compact cameras, and an ISO 1222 tripod mount. It is equipped with manual parameter settings, including for focus and exposure. The successor, the 2014 Samsung Galaxy K Zoom, brought resolution and performance enhancements, but lacks the rotary knob and tripod mount, allowing for a more smartphone-like shape with a less protruding lens.
The 2014 Panasonic Lumix DMC-CM1 was another attempt at mixing mobile phone with compact camera, so much so that it inherited the Lumix brand. While lacking optical zoom, its image sensor has a format of 1", as used in high-end compact cameras such as the Lumix DMC-LX100 and Sony CyberShot DSC-RX100 series, with multiple times the surface size of a typical mobile camera image sensor, as well as support for light sensitivities of up to ISO 25600, well beyond the typical mobile camera light sensitivity range. As of 2021, no successor has been released.
In 2013 and 2014, HTC experimentally traded pixel count for pixel surface size on their One M7 and M8, both with only four megapixels, marketed as UltraPixel, citing improved brightness and less noise in low light, though the more recent One M8 lacks optical image stabilization.
The One M8 additionally was one of the earliest smartphones to be equipped with a dual camera setup. Its software allows generating visual spatial effects such as 3D panning, weather effects, and focus adjustment ("UFocus"), simulating the post-photographic selective focusing capability of images produced by a light-field camera. HTC returned to a high-megapixel single-camera setup on the 2015 One M9.
Meanwhile, in 2014, LG Mobile started experimenting with time-of-flight camera functionality, where a rear laser beam that measures distance accelerates autofocus.
Phase-detection autofocus was increasingly adopted throughout the mid-2010s, allowing for quicker and more accurate focusing than contrast detection.
In 2016 Apple introduced the iPhone 7 Plus, one of the phones to popularize a dual camera setup. The iPhone 7 Plus included a main 12 MP camera along with a 12 MP telephoto camera. In early 2018 Huawei released a new flagship phone, the Huawei P20 Pro, one of the first triple camera lens setups with Leica optics. In late 2018, Samsung released a new mid-range smartphone, the Galaxy A9 (2018) with the world's first quad camera setup. The Nokia 9 PureView was released in 2019 featuring a penta-lens camera system.
2019 saw the commercialization of high resolution sensors, which use pixel binning to capture more light. 48 MP and 64 MP sensors developed by Sony and Samsung are commonly used by several manufacturers. 108 MP sensors were first implemented in late 2019 and early 2020.
Video resolution
As chipsets became powerful enough to handle the computing workload demanded by higher pixel rates, mobile video resolution and framerate caught up with dedicated consumer-grade cameras over the years.
In 2009 the Samsung Omnia HD became the first mobile phone with 720p HD video recording. In the same year, Apple brought video recording initially to the iPhone 3GS, at 480p, whereas the 2007 original iPhone and 2008 iPhone 3G lacked video recording entirely.
720p was more widely adopted in 2010, on smartphones such as the original Samsung Galaxy S, Sony Ericsson Xperia X10, iPhone 4, and HTC Desire HD.
The early 2010s brought a steep increase in mobile video resolution. 1080p mobile video recording was achieved in 2011 on the Samsung Galaxy S2, HTC Sensation, and iPhone 4s.
In 2012 and 2013, select devices capable of 720p filming at 60 frames per second were released, such as the Asus PadFone 2 and HTC One M7, unlike the flagships of Samsung, Sony, and Apple; the 2013 Samsung Galaxy S4 Zoom, however, does support it.
In 2013, the Samsung Galaxy Note 3 introduced 2160p (4K) video recording at 30 frames per second, as well as 1080p doubled to 60 frames per second for smoothness.
Other vendors adopted 2160p recording in 2014, including the optically stabilized LG G3. Apple first implemented it in late 2015 on the iPhone 6s and 6s Plus.
The framerate at 2160p was widely doubled to 60 in 2017 and 2018, starting with the iPhone 8, Galaxy S9, LG G7, and OnePlus 6.
Sufficient chipset computing performance, together with increased image sensor resolution and readout speed, enabled mobile 4320p (8K) filming in 2020, introduced with the Samsung Galaxy S20 and Redmi K30 Pro, though some intermediate resolution levels were skipped throughout development, including 1440p (2.5K), 2880p (5K), and 3240p (6K), except for 1440p on Samsung Galaxy front cameras.
Mid-class
Among mid-range smartphone series, the introduction of higher video resolutions was initially delayed by two to three years compared to flagship counterparts. 720p was widely adopted in 2012, including on the Samsung Galaxy S3 Mini and Sony Xperia go, and 1080p in 2013 on the Samsung Galaxy S4 Mini and HTC One mini.
The proliferation of video resolutions beyond 1080p was postponed by several years in this class. The mid-class Sony Xperia M5 supported 2160p filming in 2016, whereas Samsung's mid-class series such as the Galaxy J and A series were strictly limited to 1080p resolution, and to 30 frames per second at any resolution, for six years until around 2019; it is unclear whether, and to what extent, this was for technical reasons.
Setting
A lower video resolution setting may be desirable to extend recording time by reducing storage space and power consumption.
The camera software of some sophisticated devices, such as the LG V10, is equipped with separate controls for resolution, frame rate, and bit rate, within the technically supported pixel rate range.
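A sketch of how such controls interact, assuming a hypothetical device whose capability is expressed as a maximum sustained pixel rate; the bit rate figures are illustrative, and the minutes-per-gigabyte estimate follows directly from the bit rate:

```kotlin
data class VideoSetting(val width: Int, val height: Int, val fps: Int, val bitrateMbps: Double)

// Hypothetical cap: a device that can sustain 2160p at 30 frames per second.
const val MAX_PIXEL_RATE = 3840L * 2160L * 30L   // pixels per second

fun pixelRate(s: VideoSetting) = s.width.toLong() * s.height * s.fps

// 1 GB (decimal) holds 8000 megabits; duration scales inversely with bit rate.
fun minutesPerGigabyte(s: VideoSetting) = 8000.0 / s.bitrateMbps / 60.0

fun main() {
    val settings = listOf(
        VideoSetting(1920, 1080, 30, 17.0),
        VideoSetting(1920, 1080, 60, 28.0),
        VideoSetting(3840, 2160, 30, 48.0),
        VideoSetting(3840, 2160, 60, 72.0),  // exceeds this device's pixel rate cap
    )
    for (s in settings) {
        val supported = pixelRate(s) <= MAX_PIXEL_RATE
        println("${s.width}x${s.height}@${s.fps}: supported=$supported, " +
                "~%.0f min/GB".format(minutesPerGigabyte(s)))
    }
}
```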
Slow motion video
A distinction between different camera software is the method used to store high-frame-rate video footage: more recent phones retain both the image sensor's original output frame rate and the audio, while earlier phones record no audio and stretch the video so it can be played back slowly at default speed.
While the stretched encoding method used on earlier phones enables slow-motion playback on video player software that lacks manual playback speed control, typically found on older devices, the real-time method used by more recent phones offers greater versatility for video editing: slowed-down portions of the footage can be freely selected by the user and exported into a separate video. A rudimentary video editor for this purpose is usually pre-installed, and the video can optionally be played back at normal (real-time) speed, acting as a usual video.
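The difference between the two methods comes down to the frame rate written into the video container; a small sketch of the resulting playback timing, with illustrative capture numbers:

```kotlin
// Illustrative capture: a 10-second, 120 fps burst from the image sensor.
const val CAPTURE_FPS = 120
const val CLIP_SECONDS = 10
const val FRAMES = CAPTURE_FPS * CLIP_SECONDS

fun main() {
    // Stretched method (earlier phones): frames are written at a normal playback
    // rate, so the clip plays slowly by default, and audio cannot stay in sync.
    val containerFps = 30
    println("Stretched: ${FRAMES / containerFps} s long, " +
            "${CAPTURE_FPS / containerFps}x slower, no audio")

    // Real-time method (more recent phones): sensor frame rate and audio are
    // retained; an editor later retimes only the portions the user selects.
    println("Real-time: $CLIP_SECONDS s long at $CAPTURE_FPS fps with audio; " +
            "selected portions can be slowed to $containerFps fps (4x) in editing")
}
```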
Development
The earliest smartphone known to feature a slow motion mode is the 2009 Samsung i8000 Omnia II, which can record at QVGA (320×240) at 120 fps (frames per second). Slow motion was not available on the 2010 Galaxy S1, 2011 Galaxy S2, 2011 Galaxy Note 1, and 2012 Galaxy S3 flagships.
In early 2012, the HTC One X allowed 768×432 pixel slow motion filming at an undocumented frame rate. The output footage has been measured as a third of real-time speed.
In late 2012, the Galaxy Note 2 brought back slow motion, with D1 (720×480) at 120 fps. In early 2013, the Galaxy S4 and HTC One M7 recorded at that frame rate at 800×450, followed by the Note 3 and iPhone 5s with 720p (1280×720) in late 2013, the latter of which retains audio and the original sensor frame rate, as with all later iPhones. In early 2014, the Sony Xperia Z2 and HTC One M8 adopted this resolution as well. In late 2014, the iPhone 6 doubled the frame rate to 240 fps, and in late 2015, the iPhone 6s added support for 1080p (1920×1080) at 120 frames per second. In early 2015, the Galaxy S6 became the first Samsung mobile phone to retain the sensor frame rate and audio, and in early 2016, the Galaxy S7 became the first Samsung mobile phone with 240 fps recording, also at 720p.
In early 2015, the MT6795 chipset by MediaTek promised 1080p at 480 fps video recording; the feature's status remains unclear.
Since early 2017, starting with the Sony Xperia XZ, smartphones have been released with a slow-motion mode that unsustainably records at framerates several times as high, by temporarily storing frames on the image sensor's internal burst memory. Such a recording lasts at most a few real-time seconds.
In late 2017, the iPhone 8 brought 1080p at 240fps, as well as 2160p at 60fps, followed by the Galaxy S9 in early 2018. In mid-2018, the OnePlus 6 brought 720p at 480fps, sustainable for one minute.
In early 2021, the OnePlus 9 Pro became the first phone with 2160p at 120fps.
HDR video
The first smartphones to record HDR video were the early 2013 Sony Xperia Z and mid-2013 Xperia Z Ultra, followed by the early 2014 Galaxy S5, all at 1080p.
Audio recording
Mobile phones with multiple microphones usually allow video recording with stereo audio for spatiality, with Samsung, Sony, and HTC initially implementing it in 2012 on the Samsung Galaxy S3, Sony Xperia S, and HTC One X. Apple implemented stereo audio starting with the 2018 iPhone XS family and iPhone XR.
Front cameras
Photo
Emphasis has been put on the front camera since the mid-2010s, with front cameras reaching resolutions as high as those of typical rear cameras, such as on the 2015 LG G4 (8 megapixels), Sony Xperia C5 Ultra (13 megapixels), and 2016 Sony Xperia XA Ultra (16 megapixels, optically stabilized). The 2015 LG V10 brought a dual front camera system, where the second camera has a wider angle for group photography. Samsung has implemented a front-camera sweep panorama (panorama selfie) feature since the Galaxy Note 4 to extend the field of view.
Video
In 2012, the Galaxy S3 and iPhone 5 brought 720p HD front video recording (at 30 fps). In early 2013, the Samsung Galaxy S4, HTC One M7 and Sony Xperia Z brought 1080p Full HD at that framerate, and in late 2014, the Galaxy Note 4 introduced 1440p video recording on the front camera. Apple adopted 1080p front camera video with the late 2016 iPhone 7.
In 2019, smartphones started adopting 2160p 4K video recording on the front camera, six years after rear camera 2160p commenced with the Galaxy Note 3.
Display advancements
In the early 2010s, larger smartphones with unusually large screen diagonals, dubbed "phablets", began to achieve popularity, with the 2011 Samsung Galaxy Note series gaining notably wide adoption. In 2013, Huawei launched the Huawei Mate series, sporting an HD (1280×720) IPS+ LCD display, which was considered quite large at the time.
Some companies began to release smartphones in 2013 incorporating flexible displays to create curved form factors, such as the Samsung Galaxy Round and LG G Flex.
By 2014, 1440p displays began to appear on high-end smartphones. In 2015, Sony released the Xperia Z5 Premium, featuring a 4K resolution display, although only images and videos could actually be rendered at that resolution (all other software was shown at 1080p).
New trends for smartphone displays began to emerge in 2017, with both LG and Samsung releasing flagship smartphones (LG G6 and Galaxy S8), utilizing displays with taller aspect ratios than the common 16:9 ratio, and a high screen-to-body ratio, also known as a "bezel-less design". These designs allow the display to have a larger diagonal measurement, but with a slimmer width than 16:9 displays with an equivalent screen size.
Another trend popularized in 2017 was displays containing tab-like cut-outs at the top-centre—colloquially known as a "notch"—to contain the front-facing camera, and sometimes other sensors typically located along the top bezel of a device. These designs allow for "edge-to-edge" displays that take up nearly the entire height of the device, with little to no bezel along the top, and sometimes a minimal bottom bezel as well. This design characteristic appeared almost simultaneously on the Sharp Aquos S2 and the Essential Phone, which featured small circular tabs for their cameras, followed just a month later by the iPhone X, which used a wider tab to contain a camera and a facial scanning system known as Face ID. The 2016 LG V10 had a precursor to the concept, with a portion of the screen wrapped around the camera area in the top-left corner, and the resulting area marketed as a "second" display that could be used for various supplemental features.
Other variations of the practice later emerged, such as the "hole-punch" camera (as on the Honor View 20 and Samsung's Galaxy A8s and Galaxy S10), eschewing the tabbed "notch" for a circular or rounded-rectangular cut-out within the screen instead. Meanwhile, Oppo released the first "all-screen" phones with no notches at all, including one with a mechanical front camera that pops up from the top of the device (Find X), and a 2019 prototype with a front-facing camera embedded and hidden below the display, using a special partially translucent screen structure that allows light to reach the image sensor below the panel. The first commercial implementation of the latter was the ZTE Axon 20 5G, with a 32 MP sensor manufactured by Visionox.
Displays supporting refresh rates higher than 60 Hz (such as 90 Hz or 120 Hz) also began to appear on smartphones in 2017; initially confined to "gaming" smartphones such as the Razer Phone (2017) and Asus ROG Phone (2018), they later became more common on flagship phones such as the Pixel 4 (2019) and Samsung Galaxy S21 series (2021). Higher refresh rates allow for smoother motion and lower input latency, but often at the cost of battery life. As such, the device may offer a means to disable high refresh rates, or be configured to automatically reduce the refresh rate when there is low on-screen motion.
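On Android, for instance, an app can enumerate the panel's supported modes and hint at the frame rate it actually needs, letting the system drop to a lower rate to save power; a minimal sketch (the helper function and the 60 Hz choice are illustrative; Surface.setFrameRate requires API level 30, and the system remains free to choose another rate):

```kotlin
import android.view.Display
import android.view.Surface
import android.view.SurfaceView

// Sketch: list supported display modes, then hint that this surface needs only 60 Hz.
fun configureRefreshRate(display: Display, surfaceView: SurfaceView) {
    for (mode in display.supportedModes) {
        println("Mode ${mode.modeId}: ${mode.physicalWidth}x${mode.physicalHeight} " +
                "@ ${mode.refreshRate} Hz")
    }
    // A mostly static surface does not benefit from 90/120 Hz refresh.
    surfaceView.holder.surface.setFrameRate(
        60f, Surface.FRAME_RATE_COMPATIBILITY_DEFAULT)
}
```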
Multi-tasking
Early implementations of multiple simultaneous tasks on a smartphone display are the picture-in-picture video playback mode ("pop-up play") and the "live video list" with playing video thumbnails of the 2012 Samsung Galaxy S3, the former of which was later delivered to the 2011 Samsung Galaxy Note through a software update. Later that year, a split-screen mode was implemented on the Galaxy Note 2, and later retrofitted to the Galaxy S3 through the "premium suite upgrade".
The earliest implementation of desktop and laptop-like windowing was on the 2013 Samsung Galaxy Note 3.
Foldable smartphones
Smartphones utilizing flexible displays were theorized as possible once manufacturing costs and production processes were feasible. In November 2018, the startup company Royole unveiled the first commercially available foldable smartphone, the Royole FlexPai. Also that month, Samsung presented a prototype phone featuring an "Infinity Flex Display" at its developers conference, with a smaller, outer display on its "cover", and a larger, tablet-sized display when opened. Samsung stated that it also had to develop a new polymer material to coat the display as opposed to glass. Samsung officially announced the Galaxy Fold, based on the previously-demonstrated prototype, in February 2019 for an originally-scheduled release in late-April. Due to various durability issues with the display and hinge systems encountered by early reviewers, the release of the Galaxy Fold was delayed to September to allow for design changes.
In November 2019, Motorola unveiled a variation of the concept with its re-imagining of the Razr, using a horizontally-folding display to create a clamshell form factor inspired by its previous feature phone range of the same name. Samsung would unveil a similar device known as the Galaxy Z Flip the following February.
Other developments in the 2010s
The first smartphone with a fingerprint reader was the Motorola Atrix 4G in 2011. In September 2013, the iPhone 5S was unveiled as the first smartphone on a major U.S. carrier since the Atrix to feature this technology, and once again, the iPhone popularized the concept. One of the barriers to fingerprint reading amongst consumers was security concerns; Apple addressed these by storing the fingerprint data, encrypted, on the A7 processor inside the phone, making sure this information could not be accessed by third-party applications and was not stored in iCloud or on Apple servers.
In 2012, Samsung introduced the Galaxy S3 (GT-i9300) with retrofittable wireless charging, pop-up video playback, and a quad-core processor; a 4G LTE variant (GT-i9305) was also offered.
In 2013, Fairphone launched its first "socially ethical" smartphone at the London Design Festival to address concerns regarding the sourcing of materials in the manufacturing followed by Shiftphone in 2015. In late 2013, QSAlpha commenced production of a smartphone designed entirely around security, encryption and identity protection.
In October 2013, Motorola Mobility announced Project Ara, a concept for a modular smartphone platform that would allow users to customize and upgrade their phones with add-on modules that attached magnetically to a frame. Ara was retained by Google following its sale of Motorola Mobility to Lenovo, but was shelved in 2016. That year, LG and Motorola both unveiled smartphones featuring a limited form of modularity for accessories; the LG G5 allowed accessories to be installed via the removal of its battery compartment, while the Moto Z utilizes accessories attached magnetically to the rear of the device.
Microsoft, expanding upon the concept of Motorola's short-lived "Webtop", unveiled functionality for its Windows 10 operating system for phones that allows supported devices to be docked for use with a PC-styled desktop environment.
Samsung and LG used to be the last remaining manufacturers to offer flagship devices with user-replaceable batteries. But in 2015, Samsung succumbed to the minimalism trend set by Apple, introducing the Galaxy S6 without a user-replaceable battery. In addition, Samsung was criticised for pruning long-standing features such as MHL, MicroUSB 3.0, water resistance and MicroSD card support, the latter two of which returned in 2016 with the Galaxy S7 and S7 Edge.
As of 2015, the global median for smartphone ownership was 43%. Statista forecast that 2.87 billion people would own smartphones in 2020.
Major technologies that began to trend in 2016 included a focus on virtual reality and augmented reality experiences catered towards smartphones, the newly introduced USB-C connector, and improving LTE technologies.
In 2016, adjustable screen resolution, familiar from desktop operating systems, was introduced to smartphones for power saving; variable screen refresh rates were popularized in 2020.
In 2018, the first smartphones featuring fingerprint readers embedded within OLED displays were announced, followed in 2019 by an implementation using an ultrasonic sensor on the Samsung Galaxy S10.
In 2019, the majority of smartphones released had more than one camera, were waterproof with IP67 and IP68 ratings, and unlocked using facial recognition or fingerprint scanners.
Other developments in the 2020s
In 2020, the first smartphones featuring high-speed 5G network capability were announced.
Since 2020, smartphones have increasingly been shipped without rudimentary accessories like a power adapter and headphones, which historically were almost invariably included. The trend was initiated with Apple's iPhone 12, followed by Samsung and Xiaomi on the Galaxy S21 and Mi 11 respectively, months after having mocked the same move in advertisements. The reason cited is a reduced environmental footprint, though reaching the higher charging rates supported by newer models demands a new charger, shipped in separate packaging with its own environmental footprint.
With the development of the PinePhone and Librem 5 in the 2020s, there are intensified efforts to make open source GNU/Linux for smartphones a major alternative to iOS and Android. Moreover, associated software enabled convergence (beyond convergent and hybrid apps) by allowing the smartphones to be used like a desktop computer when connected to a keyboard, mouse and monitor.
Hardware
A typical smartphone contains a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, which in turn contain billions of tiny MOS field-effect transistors (MOSFETs). A typical smartphone contains the following MOS IC chips:
Application processor (CMOS system-on-a-chip)
Flash memory (floating-gate MOS memory)
Cellular modem (baseband RF CMOS)
RF transceiver (RF CMOS)
Phone camera image sensor (CMOS image sensor)
Power management integrated circuit (power MOSFETs)
Display driver (LCD or LED driver)
Wireless communication chips (Wi-Fi, Bluetooth, GPS receiver)
Sound chip (audio codec and power amplifier)
Gyroscope
Capacitive touchscreen controller (ASIC and DSP)
RF power amplifier (LDMOS)
Some are also equipped with an FM radio receiver, a hardware notification LED, and an infrared transmitter for use as a remote control. A few have additional sensors such as a thermometer for measuring ambient temperature, a hygrometer for humidity, and a sensor for ultraviolet ray measurement.
A few exotic smartphones designed around specific purposes are equipped with uncommon hardware, such as a projector (Samsung Beam i8520 and Samsung Galaxy Beam i8530), optical zoom lenses (Samsung Galaxy S4 Zoom and Samsung Galaxy K Zoom), a thermal camera, and even a PMR446 (walkie-talkie radio) transceiver.
Central processing unit
Smartphones have central processing units (CPUs), similar to those in computers, but optimised to operate in low power environments. In smartphones, the CPU is typically integrated in a CMOS (complementary metal–oxide–semiconductor) system-on-a-chip (SoC) application processor.
The performance of a mobile CPU depends not only on the clock rate (generally given in multiples of hertz) but also on the memory hierarchy. Because of this, the performance of mobile phone CPUs is often more appropriately given by scores derived from various standardized tests measuring real effective performance in commonly used applications.
Buttons
Smartphones are typically equipped with a power button and volume buttons; some volume button pairs are unified in a single rocker. Some devices have a dedicated camera shutter button. Units for outdoor use may be equipped with an "SOS" emergency call button and a "PTT" (push-to-talk) button. The presence of physical front-side buttons, such as the home and navigation buttons, decreased throughout the 2010s, increasingly replaced by capacitive touch sensors and simulated (on-screen) buttons.
As with classic mobile phones, early smartphones such as the Samsung Omnia II were equipped with buttons for accepting and declining phone calls. As functionality beyond phone calls advanced, these were increasingly replaced by navigation buttons such as "menu" (also known as "options"), "back", and "tasks". Some early-2010s smartphones such as the HTC Desire were additionally equipped with a "Search" button (🔍) for quick access to a web search engine or an app's internal search feature.
Starting in 2013, smartphones' home buttons began integrating fingerprint scanners, first on the iPhone 5s and Samsung Galaxy S5.
Functions may be assigned to button combinations. For example, screenshots can usually be taken using the home and power buttons, with a short press on iOS and a one-second hold on Android, the two most popular mobile operating systems. On smartphones with no physical home button, the volume-down button is usually pressed together with the power button instead. Some smartphones have a screenshot, and possibly a screencast, shortcut in the navigation button bar or the power button menu.
Display
One of the main characteristics of smartphones is the screen. Depending on the device's design, the screen fills most or nearly all of the space on a device's front surface. Many smartphone displays have an aspect ratio of 16:9, but taller aspect ratios became more common in 2017, as well as the aim to eliminate bezels by extending the display surface to as close to the edges as possible.
Screen sizes
Screen sizes are measured in diagonal inches. Phones with screens larger than 5.2 inches are often called "phablets". Smartphones with screens over 4.5 inches in size are commonly difficult to use with only a single hand, since most thumbs cannot reach the entire screen surface; they may need to be shifted around in the hand, held in one hand and manipulated by the other, or used in place with both hands. Due to design advances, some modern smartphones with large screen sizes and "edge-to-edge" designs have compact builds that improve their ergonomics, while the shift to taller aspect ratios has resulted in phones that have larger screen sizes whilst maintaining the ergonomics associated with smaller 16:9 displays.
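The ergonomic point follows from basic geometry: for a diagonal d and aspect ratio a:b, the short side is d·b/√(a²+b²), so a taller ratio yields a narrower device at the same diagonal. A quick sketch with illustrative sizes:

```kotlin
import kotlin.math.sqrt

// Long and short physical sides (inches) of a panel, from diagonal and aspect ratio.
fun sides(diagonal: Double, ratioLong: Double, ratioShort: Double): Pair<Double, Double> {
    val scale = diagonal / sqrt(ratioLong * ratioLong + ratioShort * ratioShort)
    return ratioLong * scale to ratioShort * scale
}

fun main() {
    val (h1, w1) = sides(6.0, 16.0, 9.0)   // conventional 16:9 panel
    val (h2, w2) = sides(6.0, 18.5, 9.0)   // taller "bezel-less" era ratio
    // Same 6.0" diagonal, but the taller panel is narrower and easier to grip.
    println("16:9   -> %.2f x %.2f in".format(h1, w1))  // ≈ 5.23 x 2.94
    println("18.5:9 -> %.2f x %.2f in".format(h2, w2))  // ≈ 5.40 x 2.62
}
```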
Panel types
Liquid-crystal displays (LCDs) and organic light-emitting diode (OLED) displays are the most common. Some displays are integrated with pressure-sensitive digitizers, such as those developed by Wacom and Samsung, and Apple's Force Touch system. A few phones, such as the YotaPhone prototype, are equipped with a low-power electronic paper rear display, as used in e-book readers.
Alternative input methods
Some devices are equipped with additional input methods such as a stylus for higher-precision input and hovering detection, and/or a self-capacitive touchscreen layer for floating finger detection. The latter has been implemented on a few phones, such as the Samsung Galaxy S4, Note 3, S5, and Alpha, and the Sony Xperia Sola, making the Galaxy Note 3 the only smartphone with both so far.
Hovering can enable preview tooltips, such as on a video player's seek bar, in text messages, and for quick contacts on the dial pad, as well as lock screen animations and the simulation of a hovering mouse cursor on web sites.
Some styluses support hovering as well and are equipped with a button for quick access to relevant tools such as digital post-it notes and highlighting of text and elements when dragging while pressed, resembling drag selection using a computer mouse. Some series such as the Samsung Galaxy Note series and LG G Stylus series have an integrated tray to store the stylus in.
A few devices, such as the iPhone 6s through iPhone XS and the Huawei Mate S, are equipped with a pressure-sensitive touchscreen, where the pressure may be used to simulate a gas pedal in video games, access preview windows and shortcut menus, control the typing cursor, and act as a weight scale, the last of which Apple rejected from the App Store.
Some early 2010s HTC smartphones such as the HTC Desire (Bravo) and HTC Legend are equipped with an optical track pad for scrolling and selection.
Notification light
Many smartphones, with the exception of Apple iPhones, are equipped with low-power light-emitting diodes beside the screen that can notify the user about incoming messages, missed calls, and low battery levels, and facilitate locating the mobile phone in darkness, all with marginal power consumption.
To distinguish between the sources of notifications, the colour combination and blinking pattern can vary. Usually, three diodes in red, green, and blue (RGB) are able to create a multitude of colour combinations.
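On Android versions that exposed the LED to apps directly, the colour and blink pattern could be chosen per notification; a minimal sketch using the now-deprecated setLights call (the helper function is illustrative; on Android 8+ the LED is governed by notification channels instead):

```kotlin
import android.app.Notification
import android.content.Context
import android.graphics.Color
import androidx.core.app.NotificationCompat

// Sketch: blink the notification LED blue, 500 ms on / 2000 ms off, for a message.
fun buildMessageNotification(context: Context, channelId: String): Notification =
    NotificationCompat.Builder(context, channelId)
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle("New message")
        .setLights(Color.BLUE, 500, 2000)   // colour and blink pattern of the LED
        .build()
```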
Sensors
Smartphones are equipped with a multitude of sensors to enable system features and third-party applications.
Common sensors
Accelerometers and gyroscopes enable automatic control of screen rotation; uses by third-party software include bubble level simulation. An ambient light sensor allows for automatic screen brightness and contrast adjustment, and an RGB sensor enables the adaptation of screen colour.
Many mobile phones are also equipped with a barometer sensor to measure air pressure, such as Samsung since 2012 with the Galaxy S3, and Apple since 2014 with the iPhone 6. It allows estimating and detecting changes in altitude.
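On Android, for example, the barometer reading can be converted into an altitude estimate against standard sea-level pressure; a minimal listener sketch (the class name is illustrative; registration against Sensor.TYPE_PRESSURE is shown in the trailing comment):

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Sketch: estimate altitude from air pressure using the standard atmosphere model.
class AltitudeListener : SensorEventListener {
    override fun onSensorChanged(event: SensorEvent) {
        val pressureHpa = event.values[0]
        val altitudeM = SensorManager.getAltitude(
            SensorManager.PRESSURE_STANDARD_ATMOSPHERE, pressureHpa)
        println("Pressure $pressureHpa hPa -> ~${altitudeM.toInt()} m above sea level")
    }
    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}

// Registration, e.g. in an Activity:
// sensorManager.registerListener(AltitudeListener(),
//     sensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE),
//     SensorManager.SENSOR_DELAY_NORMAL)
```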
A magnetometer can act as a digital compass by measuring Earth's magnetic field.
Rare sensors
Since the 2014 Galaxy S5 and Galaxy Note 4, Samsung has equipped its flagship smartphones with a heart rate sensor, which assists in fitness-related uses and can act as a shutter key for the front-facing camera.
So far, only the 2013 Samsung Galaxy S4 and Note 3 are equipped with an ambient temperature sensor and a humidity sensor, and only the Note 4 with an ultraviolet radiation sensor, which could warn the user about excessive exposure.
A rear infrared laser beam for distance measurement can enable time-of-flight camera functionality with accelerated autofocus, as implemented on select LG mobile phones starting with LG G3 and LG V10.
Due to their currently rare occurrence among smartphones, not much software to utilize these sensors has been developed yet.
Storage
While eMMC (embedded multi media card) flash storage was most commonly used in mobile phones, its successor, UFS (Universal Flash Storage) with higher transfer rates emerged throughout the 2010s for upper-class devices.
Capacity
While the internal storage capacity of mobile phones was near-stagnant during the first half of the 2010s, it increased more steeply during the second half, with Samsung, for example, increasing the available internal storage options of its flagship-class units from 32 GB to 512 GB within only two years, between 2016 and 2018.
Memory cards
The data storage of some mobile phones can be expanded using microSD memory cards, whose capacity multiplied throughout the 2010s. Benefits over USB On-The-Go storage and cloud storage include offline availability and privacy, not reserving or protruding from the charging port, no connection instability or latency, no dependence on voluminous data plans, and preservation of the limited rewrite cycles of the device's permanent internal storage.
In case of technical defects that make the device unusable or unbootable, such as liquid damage, fall damage, screen damage, bending damage, malware, or a botched system update, data stored on the memory card can likely be rescued externally, while data on the inaccessible internal storage would be lost. A memory card can usually be re-used immediately in a different memory-card-enabled device, with no need for prior file transfers.
Some dual-SIM mobile phones are equipped with a hybrid slot, where one of the two slots can be occupied by either a SIM card or a memory card. Some models, typically of higher end, are equipped with three slots including one dedicated memory card slot, for simultaneous dual-SIM and memory card usage.
Physical location
The location of the SIM and memory card slots varies among devices; they may be located accessibly behind the back cover, or else behind the battery, the latter of which prevents hot swapping.
Mobile phones with non-removable rear cover typically house SIM and memory cards in a small tray on the handset's frame, ejected by inserting a needle tool into a pinhole.
Some earlier mid-range phones, such as the 2011 Samsung Galaxy Fit and Galaxy Ace, have a sideways memory card slot on the frame, covered by a cap that can be opened without a tool.
File transfer
Originally, computers were commonly given mass storage access to mobile phones through USB. Over time, mass storage access was removed, leaving the Media Transfer Protocol (MTP) as the protocol for USB file transfer. Its advantages are non-exclusive access, meaning that the computer can access the storage without it being locked away from the mobile phone's software for the duration of the connection, and no need for common file system support, as communication is done through an abstraction layer.
However, unlike mass storage, the Media Transfer Protocol lacks parallelism: only a single transfer can run at a time, and other transfer requests must wait for it to finish. In addition, MTP does not support direct access to files; any file is downloaded from the device in full before it is opened.
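The single-transfer limitation can be pictured as a queue. In the minimal Python sketch below, mtp_transfer is a hypothetical stand-in for a real MTP stack, and the single worker thread models MTP's one-transfer-at-a-time behaviour: queued requests simply wait for the current transfer to finish.

    import queue
    import threading

    transfer_queue = queue.Queue()

    def mtp_transfer(path):
        # Placeholder for a real MTP object transfer. MTP moves whole
        # objects, so a file is fetched completely before it can be opened.
        print(f"transferring {path} ...")

    def mtp_worker():
        # One worker = one transfer at a time, as under MTP.
        while True:
            path = transfer_queue.get()
            mtp_transfer(path)
            transfer_queue.task_done()

    threading.Thread(target=mtp_worker, daemon=True).start()
    for p in ["DCIM/img1.jpg", "DCIM/img2.jpg", "Music/song.mp3"]:
        transfer_queue.put(p)
    transfer_queue.join()  # all three transfers complete strictly in sequence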
Sound
Some audio quality enhancing features, such as Voice over LTE and HD Voice, have appeared and are often available on newer smartphones. Sound quality can remain a problem due to the design of the phone, the quality of the cellular network and compression algorithms used in long-distance calls. Audio quality can be improved using a VoIP application over Wi-Fi. Cellphones have small speakers so that the user can use a speakerphone feature and talk to a person on the phone without holding it to their ear. The small speakers can also be used to listen to digital audio files of music or speech or watch videos with an audio component, without holding the phone close to the ear.
Some mobile phones such as the HTC One M8 and the Sony Xperia Z2 are equipped with stereophonic speakers to create spatial sound when in horizontal orientation.
Audio connector
The 3.5 mm headphone receptacle (colloquially, "headphone jack") allows the immediate operation of passive headphones, as well as connection to other external auxiliary audio appliances. Among devices equipped with the connector, it is more commonly located at the bottom (charging port side) than on the top of the device.
The connector's availability on newly released mobile phones began to decline across all major vendors in 2016, starting with its omission from the Apple iPhone 7. An adapter occupying the charging port can retrofit the socket.
Battery-powered, wireless Bluetooth headphones are an alternative. However, these tend to be costlier because they need internal hardware such as a Bluetooth transceiver, and pairing is required before each use.
Battery
A smartphone typically uses a lithium-ion battery due to its high energy density.
Batteries chemically wear down as a result of repeated charging and discharging throughout ordinary usage, losing both energy capacity and output power; the latter can cause reduced processing speeds and eventually system outages. Battery capacity may fall to 80% after a few hundred recharge cycles, and the drop in performance accelerates with time.
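As a rough back-of-the-envelope illustration (assuming, purely for simplicity, a constant capacity loss per cycle rather than the accelerating decline described above), the cycle count at which a battery reaches 80% of its original capacity can be estimated in a few lines of Python:

    def cycles_until(threshold, fade_per_cycle=0.0005):
        # Cycles until capacity falls to `threshold` of the original,
        # assuming a constant fractional loss per full charge cycle.
        capacity, cycles = 1.0, 0
        while capacity > threshold:
            capacity -= fade_per_cycle
            cycles += 1
        return cycles

    # With an assumed 0.05% loss per cycle, 80% capacity is reached after
    # about 400 cycles, consistent with "a few hundred recharge cycles".
    print(cycles_until(0.80))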
Some mobile phones are designed with batteries that the end user can replace upon expiration, usually by opening the back cover. While such a design had initially been used in most mobile phones, including touch screen models other than Apple iPhones, it has largely been supplanted throughout the 2010s by permanently built-in, non-replaceable batteries, a design practice criticized as planned obsolescence.
Due to the limited current that the copper wires of existing USB cables can safely carry, charging protocols that use elevated voltages, such as Qualcomm Quick Charge and MediaTek Pump Express, have been developed to increase the power throughput for faster charging; the smartphone's integrated charge controller (IC) requests the elevated voltage from a supported charger. "VOOC" by Oppo, also marketed as "Dash Charge", took the opposite approach and increased the current, cutting out some of the heat produced by internally regulating the arriving voltage down to the battery's charging terminal voltage in the end device; it is, however, incompatible with existing USB cables, as it requires the thicker copper wires of high-current USB cables. Later, USB Power Delivery (USB-PD) was developed with the aim of standardizing the negotiation of charging parameters across devices, at up to 100 watts, but it is only supported on cables with USB-C at both ends, due to the connector's dedicated PD channels.
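The trade-off between the two approaches follows from Ohm's law: at a given power, raising the voltage lowers the current, and resistive loss in the cable scales with the square of the current (P = I²R). A small illustrative Python calculation, using an assumed rather than measured round-trip cable resistance:

    CABLE_RESISTANCE = 0.3  # ohms; assumed round-trip wire resistance

    def cable_loss_watts(power_w, voltage_v, r=CABLE_RESISTANCE):
        current = power_w / voltage_v  # I = P / V
        return current ** 2 * r        # P_loss = I^2 * R

    for volts in (5, 9, 20):
        loss = cable_loss_watts(18.0, volts)
        print(f"18 W at {volts:>2} V -> {loss:.2f} W lost in the cable")

At 5 V, delivering 18 W wastes almost 4 W in the cable; at 9 V the loss drops to 1.2 W, and at 20 V to about 0.24 W, which is why high-current schemes such as VOOC instead require thicker, lower-resistance wires.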
While charging rates have been increasing (15 watts in 2014, 20 watts in 2016, and 45 watts in 2018), the power throughput may be throttled down significantly while the device is in operation.
Wireless charging has been widely adopted, allowing intermittent recharging without wearing down the charging port through frequent reconnection; Qi is the most common standard, followed by Powermat. Due to the lower efficiency of wireless power transmission, charging rates are below those of wired charging, and more heat is produced at similar charging rates.
By the end of 2017, smartphone battery life has become generally adequate; however, earlier smartphone battery life was poor due to the weak batteries that could not handle the significant power requirements of the smartphones' computer systems and color screens.
Smartphone users purchase additional chargers for use outside the home, at work, and in cars, as well as portable external "battery packs". External battery packs include generic models which are connected to the smartphone with a cable, and custom-made models that "piggyback" onto a smartphone's case. In 2016, Samsung had to recall millions of Galaxy Note 7 smartphones due to an explosive battery issue. For consumer convenience, wireless charging stations have been introduced in some hotels, bars, and other public spaces.
Cameras
Cameras have become standard features of smartphones. As of 2019, phone cameras are a highly competitive area of differentiation between models, with advertising campaigns commonly based on the quality or capabilities of a device's main cameras.
Images are usually saved in the JPEG file format; some high-end phones since the mid-2010s also have RAW imaging capability.
Space constraints
Typically smartphones have at least one main rear-facing camera and a lower-resolution front-facing camera for "selfies" and video chat. Owing to the limited depth available in smartphones for image sensors and optics, rear-facing cameras are often housed in a "bump" that is thicker than the rest of the phone. Since increasingly thin mobile phones have more abundant horizontal space than the depth that is necessary and used in dedicated cameras for better lenses, there is additionally a trend for phone manufacturers to include multiple cameras, with each optimized for a different purpose (telephoto, wide angle, etc.).
Viewed from the back, rear cameras are commonly located in the top center or the top left corner. A cornered location has the benefit of not requiring other hardware to be packed around the camera module, while improving ergonomics, as the lens is less likely to be covered when the phone is held horizontally.
Modern advanced smartphones have cameras with optical image stabilisation (OIS), larger sensors, bright lenses, and even optical zoom plus RAW images. HDR, "bokeh mode" with multiple lenses, and multi-shot night modes are now also common. Many new smartphone camera features are being enabled via computational photography image processing and multiple specialized lenses rather than larger sensors and lenses, due to the constrained space available inside phones that are being made as slim as possible.
Dedicated camera button
Some mobile phones such as the Samsung i8000 Omnia 2, some Nokia Lumias and some Sony Xperias are equipped with a physical camera shutter button.
Buttons with two pressure levels resemble the point-and-shoot operation of dedicated compact cameras. The camera button may also be used as a shortcut to launch the camera software quickly and ergonomically, as it is more accessible inside a pocket than the power button.
Back cover materials
Back covers of smartphones are typically made of polycarbonate, aluminium, or glass. Polycarbonate back covers may be glossy or matte, and possibly textured, like dotted on the Galaxy S5 or leathered on the Galaxy Note 3 and Note 4.
While polycarbonate back covers may be perceived as less "premium" by fashion- and trend-oriented users, the material's utilitarian strengths and technical benefits include durability and shock absorption; greater elasticity, resisting the permanent bending that affects metal; inability to shatter like glass, which facilitates a removable design; better manufacturing cost efficiency; and, unlike metal, no blockage of radio signals or wireless power.
Accessories
A wide range of accessories are sold for smartphones, including cases, memory cards, screen protectors, chargers, wireless power stations, USB On-The-Go adapters (for connecting USB drives or, in some cases, an HDMI cable to an external monitor), MHL adapters, add-on batteries, power banks, headphones, combined headphone-microphones (which, for example, allow a person to privately conduct calls on the device without holding it to the ear), and Bluetooth-enabled powered speakers that enable users to listen to media from their smartphones wirelessly.
Cases range from relatively inexpensive rubber or soft plastic cases which provide moderate protection from bumps and good protection from scratches to more expensive, heavy-duty cases that combine a rubber padding with a hard outer shell. Some cases have a "book"-like form, with a cover that the user opens to use the device; when the cover is closed, it protects the screen. Some "book"-like cases have additional pockets for credit cards, thus enabling people to use them as wallets.
Accessories include products sold by the manufacturer of the smartphone and compatible products made by other manufacturers.
However, some companies, such as Apple, stopped including chargers with their smartphones, citing a reduced carbon footprint, which causes many customers to pay extra for charging adapters.
Software
Mobile operating systems
A mobile operating system (or mobile OS) is an operating system for phones, tablets, smartwatches, or other mobile devices.
Mobile operating systems combine features of a personal computer operating system with other features useful for mobile or handheld use, usually including a touchscreen, cellular connectivity, Bluetooth, Wi-Fi and Wi-Fi Protected Access, Global Positioning System (GPS) navigation, video and single-frame cameras, speech recognition, a voice recorder, a music player, near-field communication, and an infrared blaster; most of these are considered essential in modern mobile systems. By Q1 2018, over 383 million smartphones were sold, with 85.9 percent running Android, 14.1 percent running iOS, and a negligible number of smartphones running other OSes. Android alone is more popular than the popular desktop operating system Windows, and in general smartphone use (even without tablets) exceeds desktop use. Other well-known mobile operating systems are Flyme OS and Harmony OS.
Mobile devices with mobile communications abilities (e.g., smartphones) contain two mobile operating systems: the main user-facing software platform is supplemented by a second low-level proprietary real-time operating system which operates the radio and other hardware. Research has shown that these low-level systems may contain a range of security vulnerabilities permitting malicious base stations to gain high levels of control over the mobile device.
Mobile app
A mobile app is a computer program designed to run on a mobile device, such as a smartphone. The term "app" is a short-form of the term "software application".
Application stores
The introduction of Apple's App Store for the iPhone and iPod Touch in July 2008 popularized manufacturer-hosted online distribution for third-party applications (software and computer programs) focused on a single platform. There are a huge variety of apps, including video games, music products and business tools. Up until that point, smartphone application distribution depended on third-party sources providing applications for multiple platforms, such as GetJar, Handango, Handmark, and PocketGear. Following the success of the App Store, other smartphone manufacturers launched application stores, such as Google's Android Market (later renamed to the Google Play Store) and RIM's BlackBerry App World, Android-related app stores like Aptoide, Cafe Bazaar, F-Droid, GetJar, and Opera Mobile Store. In February 2014, 93% of mobile developers were targeting smartphones first for mobile app development.
Sales
Since 1996, smartphone shipments have had positive growth. In November 2011, 27% of all photographs created were taken with camera-equipped smartphones. In September 2012, a study concluded that 4 out of 5 smartphone owners use the device to shop online. Global smartphone sales surpassed the sales figures for feature phones in early 2013. Worldwide shipments of smartphones topped 1 billion units in 2013, up 38% from 2012's 725 million, while comprising a 55% share of the mobile phone market in 2013, up from 42% in 2012. Shipments declined for the first time in Q1 2016, dropping 3 percent year on year, a situation caused by the maturing China market. A report by NPD shows that fewer than 10% of US citizens have bought $1,000+ smartphones, as they are too expensive for most people and do not introduce particularly innovative features, and amid Huawei, Oppo and Xiaomi introducing products with similar feature sets for lower prices. In 2019, smartphone sales declined by 3.2%, the largest drop in smartphone history, while China and India were credited with driving most smartphone sales worldwide. It is predicted that widespread adoption of 5G will help drive new smartphone sales.
By manufacturer
In 2011, Samsung had the highest shipment market share worldwide, followed by Apple. In 2013, Samsung had 31.3% market share, a slight increase from 30.3% in 2012, while Apple was at 15.3%, a decrease from 18.7% in 2012. Huawei, LG and Lenovo were at about 5% each, significantly better than their 2012 figures, while others had about 40%, the same as the previous year's figure. Only Apple lost market share, although their shipment volume still increased by 12.9%; the rest had significant increases in shipment volumes of 36–92%. In Q1 2014, Samsung had a 31% share and Apple had 16%. In Q4 2014, Apple had a 20.4% share and Samsung had 19.9%. In Q2 2016, Samsung had a 22.3% share and Apple had 12.9%. In Q1 2017, IDC reported that Samsung was first placed, with 80 million units, followed by Apple with 50.8 million, Huawei with 34.6 million, Oppo with 25.5 million and Vivo with 22.7 million.
Samsung's mobile business is half the size of Apple's, by revenue; Apple's business increased very rapidly in the years 2013 to 2017. Realme, a brand owned by Oppo, has been the fastest-growing phone brand worldwide since Q2 2019. In China, Huawei and Honor, a brand owned by Huawei, had 46% of market share combined and posted 66% annual growth as of 2019, amid growing Chinese nationalism. In 2019, Samsung had a 74% market share in 5G smartphones, while 5G smartphones had 1% of market share in China.
Research has shown that iPhones are commonly associated with wealth, and that the average iPhone user has 40% more annual income than the average Android user. Women are more likely than men to own an iPhone. TrendForce predicts that foldable phones will start to become popular in 2021.
Use
Contemporary use and convergence
The rise in popularity of touchscreen smartphones and mobile apps distributed via app stores, along with rapidly advancing network, mobile processor, and storage technologies, led to a convergence where separate mobile phones, organizers, and portable media players were replaced by a smartphone as the single device most people carried. Advances in digital camera sensors and on-device image processing software more gradually led to smartphones replacing simpler cameras for photographs and video recording. The built-in GPS capabilities and mapping apps on smartphones largely replaced stand-alone satellite navigation devices, and paper maps became less common. Mobile gaming on smartphones greatly grew in popularity, allowing many people to use them in place of handheld game consoles, and some companies tried creating game console/phone hybrids based on phone hardware and software. People frequently have chosen not to get fixed-line telephone service in favor of smartphones. Music streaming apps and services have grown rapidly in popularity, serving the same use as listening to music stations on a terrestrial or satellite radio. Streaming video services are easily accessed via smartphone apps and can be used in place of watching television. People have often stopped wearing wristwatches in favor of checking the time on their smartphones, and many use the clock features on their phones in place of alarm clocks. Mobile phones can also serve as digital note-taking, text-editing, and memorandum devices, whose computerization facilitates searching of entries.
Additionally, in many less technologically developed regions smartphones are people's first and only means of Internet access due to their portability, with personal computers being relatively uncommon outside of business use. The cameras on smartphones can be used to photograph documents and send them via email or messaging in place of using fax (facsimile) machines. Payment apps and services on smartphones allow people to make less use of wallets, purses, credit and debit cards, and cash. Mobile banking apps can allow people to deposit checks simply by photographing them, eliminating the need to take the physical check to an ATM or teller. Guide book apps can take the place of paper travel and restaurant/business guides, museum brochures, and dedicated audio guide equipment.
Mobile banking and payment
In many countries, mobile phones are used to provide mobile banking services, which may include the ability to transfer cash payments by secure SMS text message. Kenya's M-PESA mobile banking service, for example, allows customers of the mobile phone operator Safaricom to hold cash balances which are recorded on their SIM cards. Cash can be deposited or withdrawn from M-PESA accounts at Safaricom retail outlets located throughout the country and can be transferred electronically from person to person and used to pay bills to companies.
Branchless banking has been successful in South Africa and the Philippines. A pilot project in Bali was launched in 2011 by the International Finance Corporation and an Indonesian bank, Bank Mandiri.
Another application of mobile banking technology is Zidisha, a US-based nonprofit micro-lending platform that allows residents of developing countries to raise small business loans from Web users worldwide. Zidisha uses mobile banking for loan disbursements and repayments, transferring funds from lenders in the United States to borrowers in rural Africa who have mobile phones and can use the Internet.
Mobile payments were first trialled in Finland in 1998 when two Coca-Cola vending machines in Espoo were enabled to work with SMS payments. Eventually, the idea spread and in 1999, the Philippines launched the country's first commercial mobile payments systems with mobile operators Globe and Smart.
Some mobile phones can make mobile payments via direct mobile billing schemes, or through contactless payments if the phone and the point of sale support near field communication (NFC). Enabling contactless payments through NFC-equipped mobile phones requires the co-operation of manufacturers, network operators, and retail merchants.
Facsimile
Some apps allow sending and receiving faxes (facsimile) over a smartphone, including facsimile data (composed of raster bi-level graphics) generated directly and digitally from document and image file formats.
Criticism and issues
Social impacts
In 2012, a University of Southern California study found that unprotected adolescent sexual activity was more common among owners of smartphones. A study conducted by the Rensselaer Polytechnic Institute's (RPI) Lighting Research Center (LRC) concluded that smartphones, or any backlit devices, can seriously affect sleep cycles. Some persons might become psychologically attached to smartphones, resulting in anxiety when separated from the devices. A "smombie" (a combination of "smartphone" and "zombie") is a walking person using a smartphone and not paying attention as they walk, possibly risking an accident in the process; it is an increasing social phenomenon. The issue of slow-moving smartphone users led to the temporary creation of a "mobile lane" for walking in Chongqing, China. The issue of distracted smartphone users led the city of Augsburg, Germany to embed pedestrian traffic lights in the pavement.
While driving
Mobile phone use while driving, including calling, text messaging, playing media, web browsing, gaming, using mapping apps, or operating other phone features, is common but controversial, since it is widely considered dangerous due to what is known as distracted driving. Being distracted while operating a motor vehicle has been shown to increase the risk of accidents. In September 2010, the US National Highway Traffic Safety Administration (NHTSA) reported that 995 people were killed by drivers distracted by phones. In March 2011 a US insurance company, State Farm Insurance, announced the results of a study which showed 19% of drivers surveyed accessed the Internet on a smartphone while driving. Many jurisdictions prohibit the use of mobile phones while driving. In Egypt, Israel, Japan, Portugal and Singapore, both handheld and hands-free calling on a mobile phone (even using a speakerphone) is banned. In other countries, including the UK and France, and in many US states, calling is only banned on handheld phones, while hands-free calling is permitted.
A 2011 study reported that over 90% of college students surveyed text (initiate, reply or read) while driving.
The scientific literature on the danger of driving while sending a text message from a mobile phone, or texting while driving, is limited. A simulation study at the University of Utah found a sixfold increase in distraction-related accidents when texting. As smartphones grew more complex, it also became more difficult for law enforcement officials to distinguish one usage from another in drivers using their devices. This is more apparent in countries which ban both handheld and hands-free usage, rather than those which ban handheld use only, as officials cannot easily tell which function of the phone is being used simply by looking at the driver. This can lead to drivers being stopped for using their device illegally for a call when, in fact, they were using the device legally, for example, when using the phone's incorporated controls for the car stereo, GPS or satnav.
A 2010 study reviewed the incidence of phone use while cycling and its effects on behavior and safety. In 2013 a national survey in the US reported the number of drivers who reported using their phones to access the Internet while driving had risen to nearly one of four. A study conducted by the University of Vienna examined approaches for reducing inappropriate and problematic use of mobile phones, such as using phones while driving.
Accidents involving a driver being distracted by being in a call on a phone have begun to be prosecuted as negligence, similar to speeding. In the United Kingdom, from 27 February 2007, motorists who are caught using a handheld phone while driving have three penalty points added to their license in addition to a fine of £60. This increase was introduced to try to stem the increase in drivers ignoring the law. Japan prohibits all use of phones while driving, including use of hands-free devices. New Zealand has banned handheld phone use since 1 November 2009. Many states in the United States have banned text messaging on phones while driving. Illinois became the 17th American state to enforce this law. As of July 2010, 30 states had banned texting while driving, with Kentucky becoming the most recent addition on July 15.
Public Health Law Research maintains a list of distracted driving laws in the United States. This database of laws provides a comprehensive view of the provisions of laws that restrict the use of mobile devices while driving for all 50 states and the District of Columbia between 1992, when the first law was passed, and December 1, 2010. The dataset contains information on 22 dichotomous, continuous or categorical variables including, for example, activities regulated (e.g., texting versus talking, hands-free versus handheld calls, web browsing, gaming), targeted populations, and exemptions.
Legal
A "patent war" between Samsung and Apple started when the latter claimed that the original Galaxy S Android phone copied the interfaceand possibly the hardwareof Apple's iOS for the iPhone 3GS. There was also smartphone patents licensing and litigation involving Sony Mobile, Google, Apple Inc., Samsung, Microsoft, Nokia, Motorola, HTC, Huawei and ZTE, among others. The conflict is part of the wider "patent wars" between multinational technology and software corporations. To secure and increase market share, companies granted a patent can sue to prevent competitors from using the methods the patent covers. Since the 2010s the number of lawsuits, counter-suits, and trade complaints based on patents and designs in the market for smartphones, and devices based on smartphone OSes such as Android and iOS, has increased significantly. Initial suits, countersuits, rulings, license agreements, and other major events began in 2009 as the smartphone market stated to grow more rapidly by 2012.
Medical
With the rise in the number of mobile medical apps in the marketplace, government regulatory agencies raised concerns about the safety of the use of such applications. These concerns were transformed into regulation initiatives worldwide with the aim of safeguarding users from untrusted medical advice. According to the findings of medical experts in recent years, excessive smartphone use in society may lead to headaches, sleep disorders and insufficient sleep, while severe smartphone addiction may lead to physical health problems, such as a hunched posture, weakened muscles, and unbalanced nutrition.
Impacts on cognition and mental health
There is a debate about beneficial and detrimental impacts of smartphones or smartphone-uses on cognition and mental health.
Security
Smartphone malware can be easily distributed through insecure app stores. Often, malware is hidden in pirated versions of legitimate apps, which are then distributed through third-party app stores. Malware risk also comes from what is known as an "update attack", where a legitimate application is later changed to include a malware component, which users then install when they are notified that the app has been updated. As well, one out of three robberies in 2012 in the United States involved the theft of a mobile phone. An online petition has urged smartphone makers to install kill switches in their devices. As of 2014, Apple's "Find My iPhone" and Google's "Android Device Manager" could locate, disable, and wipe the data from phones that had been lost or stolen. With BlackBerry Protect in OS version 10.3.2, devices can be rendered unrecoverable even to BlackBerry's own operating system recovery tools if incorrectly authenticated or dissociated from their account.
Leaked documents published by WikiLeaks, codenamed Vault 7 and dated from 2013 to 2016, detail the capabilities of the United States Central Intelligence Agency (CIA) to perform electronic surveillance and cyber warfare, including the ability to compromise the operating systems of most smartphones (including iOS and Android). In 2021, journalists and researchers reported the discovery of spyware called Pegasus, developed and distributed by a private company, which can and has been used to infect iOS and Android smartphones, often (partly via use of 0-day exploits) without the need for any user interaction or significant clues to the user, and can then be used to exfiltrate data, track user locations, capture video through the camera, and activate the microphone at any time. Analysis of data traffic by popular smartphones running variants of Android found substantial by-default data collection and sharing, with no opt-out, by this pre-installed software.
Guidelines for mobile device security were issued by NIST and many other organizations. For conducting a private, in-person meeting, at least one site recommends that the user switch the smartphone off and disconnect the battery.
Sleep
Using smartphones late at night can disturb sleep, due to the blue light and brightly lit screen, which affect melatonin levels and sleep cycles. In an effort to alleviate these issues, "night mode" functionality, which changes the color temperature of the screen to a warmer hue based on the time of day to reduce the amount of blue light generated, became available through several apps for Android and the f.lux software for jailbroken iPhones. iOS 9.3 integrated a similar, system-level feature known as "Night Shift". Several Android device manufacturers bypassed Google's initial reluctance to make night mode a standard feature in Android and included software for it on their hardware under varying names, before Android Oreo added it to the OS for compatible devices.
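Conceptually, such a night mode only attenuates the blue (and, to a lesser extent, green) channel as evening progresses. The crude Python sketch below blends pixels toward an assumed warm white point; it is a linear approximation for illustration, not the tone mapping any particular operating system uses.

    NEUTRAL = (255, 255, 255)  # ~6500 K daylight white point
    WARM    = (255, 180, 107)  # assumed warm (candle-light-like) white point

    def night_shift(rgb, warmth):
        # Blend a pixel toward the warm white point; warmth in [0, 1].
        return tuple(
            round(c * ((1 - warmth) + warmth * w / n))
            for c, n, w in zip(rgb, NEUTRAL, WARM)
        )

    print(night_shift((200, 200, 200), 0.0))  # unchanged: (200, 200, 200)
    print(night_shift((200, 200, 200), 1.0))  # fully warm: (200, 141, 84)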
It has also been theorized that for some users, addiction to use of their phones, especially before they go to bed, can result in "ego depletion." Many people also use their phones as alarm clocks, which can also lead to loss of sleep.
Lifespan
In mobile phones released since the second half of the 2010s, operational life span is commonly limited by built-in batteries that are not designed to be interchangeable. The life expectancy of a battery depends on the usage intensity of the powered device: longer usage and more energy-demanding tasks wear the battery out earlier.
Lithium-ion and lithium-polymer batteries, those commonly powering portable electronics, additionally wear down more from fuller charge and deeper discharge cycles, and when left unused for an extended amount of time while depleted, where self-discharge may lead to a harmful depth of discharge.
The functional life span of mobile phones may also be limited by a lack of software update support, such as the deprecation of TLS cipher suites by certificate authorities with no official patches provided for earlier devices.
See also
Comparison of smartphones
E-reader
Lists of mobile computers
List of mobile software distribution platforms
Media Transfer Protocol
Mobile Internet device
Portable media player
Second screen
Smartphone kill switch
Smartphone zombie
Data haven

A data haven, like a corporate haven or tax haven, is a refuge for uninterrupted or unregulated data. Data havens are locations with legal environments that are friendly to the concept of a computer network freely holding data and even protecting its content and associated information. They tend to fit into three categories: a physical locality with weak information-system enforcement and extradition laws, a physical locality with intentionally strong protections of data, and virtual domains designed to secure data via technical means (such as encryption) regardless of any legal environment.
Tor's onion space (hidden service), HavenCo (centralized), and Freenet (decentralized) are three models of modern-day virtual data havens.
Purposes of data havens
Reasons for establishing data havens include access to free political speech for users in countries where censorship of the Internet is practiced.
Other reasons can include:
Whistleblowing
Distributing software, data or speech that violates laws such as the DMCA
Copyright infringement
Circumventing data protection laws
Online gambling
Pornography
Cybercrime
History of the term
The 1978 report of the British government's Data Protection Committee expressed concern that different privacy standards in different countries would lead to the transfer of personal data to countries with weaker protections; it feared that Britain might become a "data haven". Also in 1978, Adrian Norman published a mock consulting study on the feasibility of setting up a company providing a wide range of data haven services, called "Project Goldfish".
Science fiction novelist William Gibson used the term in his novels Count Zero and Mona Lisa Overdrive, as did Bruce Sterling in Islands in the Net. The 1990s segments of Neal Stephenson's 1999 novel Cryptonomicon concern a small group of entrepreneurs attempting to create a data haven.
See also
Anonymity
Anonymous P2P
Pseudonymity
Corporate haven
Crypto-anarchism
Sealand located in international waters in the North Sea
CyberBunker
PRQ, an ISP in Sweden
IPREDator located in Sweden
International Modern Media Institute
WikiLeaks
Radio-frequency identification

Radio-frequency identification (RFID) uses electromagnetic fields to automatically identify and track tags attached to objects. An RFID system consists of a tiny radio transponder (the tag) and a radio receiver and transmitter (the reader). When triggered by an electromagnetic interrogation pulse from a nearby RFID reader device, the tag transmits digital data, usually an identifying inventory number, back to the reader. This number can be used to track inventory goods.
Passive tags are powered by energy from the RFID reader's interrogating radio waves. Active tags are powered by a battery and thus can be read at a greater range from the RFID reader, up to hundreds of meters.
Unlike a barcode, the tag does not need to be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method of automatic identification and data capture (AIDC).
RFID tags are used in many industries. For example, an RFID tag attached to an automobile during production can be used to track its progress through the assembly line, RFID-tagged pharmaceuticals can be tracked through warehouses, and implanting RFID microchips in livestock and pets enables positive identification of animals. Tags can also be used in shops to expedite checkout, and to prevent theft by customers and employees.
Since RFID tags can be attached to physical money, clothing, and possessions, or implanted in animals and people, the possibility of reading personally-linked information without consent has raised serious privacy concerns. These concerns resulted in standard specifications development addressing privacy and security issues.
In 2014, the world RFID market was worth US$8.89 billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This figure includes tags, readers, and software/services for RFID cards, labels, fobs, and all other form factors. The market value is expected to rise from US$12.08 billion in 2020 to US$16.23 billion by 2029.
History
In 1945, Léon Theremin invented the "Thing", a listening device for the Soviet Union which retransmitted incident radio waves with the added audio information. Sound waves vibrated a diaphragm which slightly altered the shape of the resonator, which modulated the reflected radio frequency. Even though this device was a covert listening device, rather than an identification tag, it is considered to be a predecessor of RFID because it was passive, being energized and activated by waves from an outside source.
Similar technology, such as the Identification friend or foe transponder, was routinely used by the Allies and Germany in World War II to identify aircraft as friendly or hostile. Transponders are still used by most powered aircraft. An early work exploring RFID is the landmark 1948 paper by Harry Stockman, who predicted that "Considerable research and development work has to be done before the remaining basic problems in reflected-power communication are solved, and before the field of useful applications is explored."
Mario Cardullo's device, patented on January 23, 1973, was the first true ancestor of modern RFID, as it was a passive radio transponder with memory. The initial device was passive, powered by the interrogating signal, and was demonstrated in 1971 to the New York Port Authority and other potential users. It consisted of a transponder with 16 bit memory for use as a toll device. The basic Cardullo patent covers the use of RF, sound and light as transmission carriers. The original business plan presented to investors in 1969 showed uses in transportation (automotive vehicle identification, automatic toll system, electronic license plate, electronic manifest, vehicle routing, vehicle performance monitoring), banking (electronic checkbook, electronic credit card), security (personnel identification, automatic gates, surveillance) and medical (identification, patient history).
In 1973, an early demonstration of reflected power (modulated backscatter) RFID tags, both passive and semi-passive, was performed by Steven Depp, Alfred Koelle and Robert Frayman at the Los Alamos National Laboratory. The portable system operated at 915 MHz and used 12-bit tags. This technique is used by the majority of today's UHFID and microwave RFID tags.
In 1983, the first patent to be associated with the abbreviation RFID was granted to Charles Walton.
Design
A radio-frequency identification system uses tags, or labels attached to the objects to be identified. Two-way radio transmitter-receivers called interrogators or readers send a signal to the tag and read its response.
Tags
RFID tags are made of three pieces: a microchip (an integrated circuit which stores and processes information and modulates and demodulates radio-frequency (RF) signals), an antenna for receiving and transmitting the signal, and a substrate.
The tag information is stored in non-volatile memory. The RFID tag includes either fixed or programmable logic for processing transmission and sensor data.
RFID tags can be either passive, active or battery-assisted passive. An active tag has an on-board battery and periodically transmits its ID signal. A battery-assisted passive tag has a small battery on board and is activated when in the presence of an RFID reader. A passive tag is cheaper and smaller because it has no battery; instead, the tag uses the radio energy transmitted by the reader. However, to operate a passive tag, it must be illuminated with a power level roughly a thousand times stronger than what an active tag requires for signal transmission. This makes a difference in interference and in exposure to radiation.
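The power asymmetry follows from free-space path loss: the power arriving at a passive tag falls off with the square of the distance (the Friis transmission equation), and the backscattered reply is attenuated by the square of the distance again on the way back. The Python sketch below estimates the forward link budget for a UHF tag; every parameter value is an assumed, typical figure, not a measurement of any particular product.

    import math

    def friis_received_w(p_tx_w, gain_tx, gain_rx, freq_hz, distance_m):
        # Friis free-space transmission equation.
        wavelength = 3e8 / freq_hz
        return (p_tx_w * gain_tx * gain_rx
                * (wavelength / (4 * math.pi * distance_m)) ** 2)

    # Assumed: 1 W reader, ~4x (6 dBi) reader antenna gain, ~1.64x tag
    # dipole gain, 915 MHz, 5 m distance. Typical passive tag chips wake
    # up at roughly ten microwatts.
    p_tag = friis_received_w(1.0, 4.0, 1.64, 915e6, 5.0)
    print(f"{p_tag * 1e6:.0f} microwatts arriving at the tag")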
Tags may either be read-only, having a factory-assigned serial number that is used as a key into a database, or may be read/write, where object-specific data can be written into the tag by the system user. Field programmable tags may be write-once, read-multiple; "blank" tags may be written with an electronic product code by the user.
The RFID tag receives the message and then responds with its identification and other information. This may be only a unique tag serial number, or may be product-related information such as a stock number, lot or batch number, production date, or other specific information. Since tags have individual serial numbers, the RFID system design can discriminate among several tags that might be within the range of the RFID reader and read them simultaneously.
Readers
RFID systems can be classified by the type of tag and reader. There are 3 types:
A Passive Reader Active Tag (PRAT) system has a passive reader which only receives radio signals from active tags (battery operated, transmit only). The reception range of a PRAT system reader can be adjusted, allowing flexibility in applications such as asset protection and supervision.
An Active Reader Passive Tag (ARPT) system has an active reader, which transmits interrogator signals and also receives authentication replies from passive tags.
An Active Reader Active Tag (ARAT) system uses active tags activated with an interrogator signal from the active reader. A variation of this system could also use a Battery-Assisted Passive (BAP) tag which acts like a passive tag but has a small battery to power the tag's return reporting signal.
Fixed readers are set up to create a specific interrogation zone which can be tightly controlled. This allows a highly defined reading area for when tags go in and out of the interrogation zone. Mobile readers may be handheld or mounted on carts or vehicles.
Signaling
Signaling between the reader and the tag is done in several different incompatible ways, depending on the frequency band used by the tag. Tags operating on LF and HF bands are, in terms of radio wavelength, very close to the reader antenna because they are only a small percentage of a wavelength away. In this near field region, the tag is closely coupled electrically with the transmitter in the reader. The tag can modulate the field produced by the reader by changing the electrical loading the tag represents. By switching between lower and higher relative loads, the tag produces a change that the reader can detect. At UHF and higher frequencies, the tag is more than one radio wavelength away from the reader, requiring a different approach. The tag can backscatter a signal. Active tags may contain functionally separated transmitters and receivers, and the tag need not respond on a frequency related to the reader's interrogation signal.
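The two regimes can be made concrete by computing the wavelength and the conventional near-field boundary (roughly the wavelength divided by 2π) for the common RFID bands, as in this short Python sketch:

    import math

    BANDS = {
        "LF  (125 kHz)":   125e3,
        "HF  (13.56 MHz)": 13.56e6,
        "UHF (915 MHz)":   915e6,
    }

    for name, freq in BANDS.items():
        wavelength = 3e8 / freq                  # metres
        near_field = wavelength / (2 * math.pi)  # conventional boundary
        print(f"{name}: wavelength {wavelength:.2f} m, "
              f"near field out to ~{near_field:.2f} m")

LF and HF tags at typical read distances therefore sit deep inside the near field, where inductive coupling applies, whereas a UHF tag more than a few centimetres away is already in the far field and must rely on backscatter.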
An Electronic Product Code (EPC) is one common type of data stored in a tag. When written into the tag by an RFID printer, the tag contains a 96-bit string of data. The first eight bits are a header which identifies the version of the protocol. The next 28 bits identify the organization that manages the data for this tag; the organization number is assigned by the EPCGlobal consortium. The next 24 bits are an object class, identifying the kind of product. The last 36 bits are a unique serial number for a particular tag. These last two fields are set by the organization that issued the tag. Rather like a URL, the total electronic product code number can be used as a key into a global database to uniquely identify a particular product.
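Unpacking the described 8/28/24/36-bit layout requires only plain bit arithmetic, as in this minimal Python sketch (the example tag value is fabricated for illustration):

    def parse_epc_96(epc):
        # Split a 96-bit EPC integer into the four fields described above.
        serial = epc & (2**36 - 1)       # low 36 bits: serial number
        epc >>= 36
        o_class = epc & (2**24 - 1)      # next 24 bits: object class
        epc >>= 24
        manager = epc & (2**28 - 1)      # next 28 bits: managing organization
        epc >>= 28
        header = epc & 0xFF              # top 8 bits: protocol version header
        return header, manager, o_class, serial

    tag = (0x30 << 88) | (614141 << 60) | (812345 << 36) | 6789
    print(parse_epc_96(tag))  # (48, 614141, 812345, 6789)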
Often more than one tag will respond to a tag reader, for example, many individual products with tags may be shipped in a common box or on a common pallet. Collision detection is important to allow reading of data. Two different types of protocols are used to "singulate" a particular tag, allowing its data to be read in the midst of many similar tags. In a slotted Aloha system, the reader broadcasts an initialization command and a parameter that the tags individually use to pseudo-randomly delay their responses. When using an "adaptive binary tree" protocol, the reader sends an initialization symbol and then transmits one bit of ID data at a time; only tags with matching bits respond, and eventually only one tag matches the complete ID string.
Both methods have drawbacks when used with many tags or with multiple overlapping readers.
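The slotted Aloha variant is simple enough to simulate. In the toy Python model below (not a full EPC Gen2 implementation; frame size and random seed are arbitrary), each unread tag picks a random slot per frame, a slot with exactly one reply singulates that tag, and collided tags retry in the next frame. The output also illustrates why total reading time grows at least linearly with the tag count.

    import random

    def slotted_aloha(num_tags, frame_slots=16, seed=1):
        # Return how many frames it takes until every tag has been read.
        rng = random.Random(seed)
        unread, frames = set(range(num_tags)), 0
        while unread:
            frames += 1
            slots = {}
            for tag in unread:
                slots.setdefault(rng.randrange(frame_slots), []).append(tag)
            for occupants in slots.values():
                if len(occupants) == 1:       # exactly one reply: clean read
                    unread.discard(occupants[0])
                # two or more replies collide; those tags retry next frame
        return frames

    for n in (8, 16, 32, 64):
        print(f"{n:3d} tags -> {slotted_aloha(n)} frames of 16 slots")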
Bulk reading
"Bulk reading" is a strategy for interrogating multiple tags at the same time, but lacks sufficient precision for inventory control. A group of objects, all of them RFID tagged, are read completely from one single reader position at one time. However, as tags respond strictly sequentially, the time needed for bulk reading grows linearly with the number of labels to be read. This means it takes at least twice as long to read twice as many labels. Due to collision effects, the time required is greater.
A group of tags has to be illuminated by the interrogating signal just like a single tag. This is not a challenge concerning energy, but with respect to visibility; if any of the tags are shielded by other tags, they might not be sufficiently illuminated to return a sufficient response. The response conditions for inductively coupled HF RFID tags and coil antennas in magnetic fields appear better than for UHF or SHF dipole fields, but then distance limits apply and may prevent success.
Under operational conditions, bulk reading is not reliable. Bulk reading can be a rough guide for logistics decisions, but due to a high proportion of reading failures, it is not (yet) suitable for inventory management. However, when a single RFID tag might be seen as not guaranteeing a proper read, multiple RFID tags, where at least one will respond, may be a safer approach for detecting a known grouping of objects. In this respect, bulk reading is a fuzzy method for process support. From the perspective of cost and effect, bulk reading is not reported as an economical approach to secure process control in logistics.
Miniaturization
RFID tags are easy to conceal or incorporate in other items. For example, in 2009 researchers at Bristol University successfully glued RFID micro-transponders to live ants in order to study their behavior. This trend towards increasingly miniaturized RFIDs is likely to continue as technology advances.
Hitachi holds the record for the smallest RFID chip, at 0.05 mm × 0.05 mm. This is 1/64th the size of the previous record holder, the mu-chip. Manufacture is enabled by using the silicon-on-insulator (SOI) process. These dust-sized chips can store 38-digit numbers using 128-bit Read Only Memory (ROM). A major challenge is the attachment of antennas, thus limiting read range to only millimeters.
TFID
In early 2020, MIT researchers demonstrated a terahertz frequency identification (TFID) tag that is barely 1 square millimeter in size. The devices are essentially pieces of silicon that are inexpensive and small, and function like larger RFID tags. Because of the small size, manufacturers could tag any product and track logistics information for minimal cost.
Uses
An RFID tag can be affixed to an object and used to track tools, equipment, inventory, assets, people, or other objects.
RFID offers advantages over manual systems or the use of barcodes. The tag can be read if passed near a reader, even if it is covered by the object or not visible. The tag can be read inside a case, carton, box or other container, and unlike barcodes, RFID tags can be read hundreds at a time; barcodes can only be read one at a time using current devices. Some RFID tags, such as battery-assisted passive tags, are also able to monitor temperature and humidity.
In 2011, the cost of passive tags started at US$0.09 each; special tags, meant to be mounted on metal or withstand gamma sterilization, could cost up to US$5. Active tags for tracking containers, medical assets, or monitoring environmental conditions in data centers started at US$50 and could be over US$100 each. Battery-Assisted Passive (BAP) tags were in the US$3–10 range.
RFID can be used in a variety of applications, such as:
Access management
Tracking of goods
Tracking of persons and animals
Toll collection and contactless payment
Machine readable travel documents
Smartdust (for massively distributed sensor networks)
Locating lost airport baggage
Timing sporting events
Tracking and billing processes
Monitoring the physical state of perishable goods
In 2010, three factors drove a significant increase in RFID usage: decreased cost of equipment and tags, increased performance to a reliability of 99.9%, and a stable international standard around HF and UHF passive RFID. The adoption of these standards was driven by EPCglobal, a joint venture between GS1 and GS1 US, which were responsible for driving global adoption of the barcode in the 1970s and 1980s. The EPCglobal Network was developed by the Auto-ID Center.
Commerce
RFID provides a way for organizations to identify and manage stock, tools and equipment (asset tracking), etc. without manual data entry. Manufactured products such as automobiles or garments can be tracked through the factory and through shipping to the customer. Automatic identification with RFID can be used for inventory systems. Many organisations require that their vendors place RFID tags on all shipments to improve supply chain management. Warehouse management systems incorporate this technology to speed up the receiving and delivery of products and reduce the labor costs in warehouses.
Retail
RFID is used for item level tagging in retail stores. In addition to inventory control, this provides both protection against theft by customers (shoplifting) and employees ("shrinkage") by using electronic article surveillance (EAS), and a self checkout process for customers. Tags of different types can be physically removed with a special tool or deactivated electronically once items have been paid for. On leaving the shop, customers have to pass near an RFID detector; if they have items with active RFID tags, an alarm sounds, both indicating an unpaid-for item, and identifying what it is.
Casinos can use RFID to authenticate poker chips, and can selectively invalidate any chips known to be stolen.
Access control
RFID tags are widely used in identification badges, replacing earlier magnetic stripe cards. These badges need only be held within a certain distance of the reader to authenticate the holder. Tags can also be placed on vehicles, which can be read at a distance, to allow entrance to controlled areas without having to stop the vehicle and present a card or enter an access code.
Advertising
In 2010 Vail Resorts began using UHF Passive RFID tags in ski passes.
Facebook is using RFID cards at most of their live events to allow guests to automatically capture and post photos.
Automotive brands have adopted RFID for social media product placement more quickly than other industries. Mercedes was an early adopter in 2011 at the PGA Golf Championships, and by the 2013 Geneva Motor Show many of the larger brands were using RFID for social media marketing.
Promotion tracking
To prevent retailers diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can track exactly which product has sold through the supply chain at fully discounted prices.
Transportation and logistics
Yard management, shipping and freight and distribution centers use RFID tracking. In the railroad industry, RFID tags mounted on locomotives and rolling stock identify the owner, identification number and type of equipment and its characteristics. This can be used with a database to identify the type, origin, destination, etc. of the commodities being carried.
In commercial aviation, RFID is used to support maintenance on commercial aircraft. RFID tags are used to identify baggage and cargo at several airports and airlines.
Some countries are using RFID for vehicle registration and enforcement. RFID can help detect and retrieve stolen cars.
RFID is used in intelligent transportation systems. In New York City, RFID readers are deployed at intersections to track E-ZPass tags as a means for monitoring the traffic flow. The data is fed through the broadband wireless infrastructure to the traffic management center to be used in adaptive traffic control of the traffic lights.
Where ship, rail, or highway tanks are being loaded, a fixed RFID antenna contained in a transfer hose can read an RFID tag affixed to the tank, positively identifying it.
Infrastructure management and protection
At least one company has introduced RFID to identify and locate underground infrastructure assets such as gas pipelines, sewer lines, electrical cables, communication cables, etc.
Passports
The first RFID passports ("E-passport") were issued by Malaysia in 1998. In addition to information also contained on the visual data page of the passport, Malaysian e-passports record the travel history (time, date, and place) of entry into and exit out of the country.
Other countries that insert RFID in passports include Norway (2005), Japan (March 1, 2006), most EU countries (around 2006), Australia, Hong Kong, the United States (2007), the United Kingdom and Northern Ireland (2006), India (June 2008), Serbia (July 2008), Republic of Korea (August 2008), Taiwan (December 2008), Albania (January 2009), The Philippines (August 2009), Republic of Macedonia (2010), Argentina (2012), Canada (2013), Uruguay (2015) and Israel (2017).
Standards for RFID passports are determined by the International Civil Aviation Organization (ICAO), and are contained in ICAO Document 9303, Part 1, Volumes 1 and 2 (6th edition, 2006). ICAO refers to the ISO/IEC 14443 RFID chips in e-passports as "contactless integrated circuits". ICAO standards provide for e-passports to be identifiable by a standard e-passport logo on the front cover.
Since 2006, RFID tags included in new United States passports store the same information that is printed within the passport, and include a digital picture of the owner. The United States Department of State initially stated the chips could only be read from a short distance, but after widespread criticism and a clear demonstration that special equipment can read the test passports from a much greater distance, the passports were designed to incorporate a thin metal lining to make it more difficult for unauthorized readers to skim information when the passport is closed. The department also implemented Basic Access Control (BAC), which functions as a personal identification number (PIN) in the form of characters printed on the passport data page. Before a passport's tag can be read, this PIN must be entered into an RFID reader. The BAC also enables the encryption of any communication between the chip and interrogator.
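ICAO 9303 derives the BAC keys from exactly those printed characters. The simplified Python sketch below shows the key-derivation step as commonly described (the MRZ string is an illustrative example, and the parity adjustment of the 3DES key bytes is omitted for brevity):

    import hashlib

    def bac_keys(mrz_information):
        # Key seed: first 16 bytes of SHA-1 over the MRZ information
        # (document number, date of birth and date of expiry, each with
        # its check digit). Separate counters yield encryption/MAC keys.
        seed = hashlib.sha1(mrz_information.encode("ascii")).digest()[:16]
        k_enc = hashlib.sha1(seed + b"\x00\x00\x00\x01").digest()[:16]
        k_mac = hashlib.sha1(seed + b"\x00\x00\x00\x02").digest()[:16]
        return k_enc, k_mac

    k_enc, k_mac = bac_keys("L898902C<369080619406236")  # example MRZ data
    print(k_enc.hex())
    print(k_mac.hex())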
Transportation payments
In many countries, RFID tags can be used to pay for mass transit fares on bus, trains, or subways, or to collect tolls on highways.
Some bike lockers are operated with RFID cards assigned to individual users. A prepaid card is required to open or enter a facility or locker and is used to track and charge based on how long the bike is parked.
The Zipcar car-sharing service uses RFID cards for locking and unlocking cars and for member identification.
In Singapore, RFID replaces paper Season Parking Ticket (SPT).
Animal identification
RFID tags for animals represent one of the oldest uses of RFID. Originally meant for large ranches and rough terrain, RFID has become crucial in animal identification management since the outbreak of mad-cow disease. An implantable RFID tag or transponder can also be used for animal identification. The transponders are better known as PIT (Passive Integrated Transponder) tags, passive RFID, or "chips" on animals. The Canadian Cattle Identification Agency began using RFID tags as a replacement for barcode tags. Currently CCIA tags are used in Wisconsin and by United States farmers on a voluntary basis. The USDA is currently developing its own program.
RFID tags are required for all cattle sold in Australia and in some states, sheep and goats as well.
Human implantation
Biocompatible microchip implants that use RFID technology are being routinely implanted in humans. The first-ever human to receive an RFID microchip implant was American artist Eduardo Kac in 1997. Kac implanted the microchip live on television (and also live on the Internet) in the context of his artwork Time Capsule.
A year later, British professor of cybernetics Kevin Warwick had an RFID chip implanted in his arm by his general practitioner, George Boulos. In 2004 the 'Baja Beach Clubs' operated by Conrad Chase in Barcelona and Rotterdam offered implanted chips to identify their VIP customers, who could in turn use it to pay for service. In 2009 British scientist Mark Gasson had an advanced glass capsule RFID device surgically implanted into his left hand and subsequently demonstrated how a computer virus could wirelessly infect his implant and then be transmitted on to other systems.
The Food and Drug Administration in the United States approved the use of RFID chips in humans in 2004.
There is controversy regarding human applications of implantable RFID technology including concerns that individuals could potentially be tracked by carrying an identifier unique to them. Privacy advocates have protested against implantable RFID chips, warning of potential abuse. Some are concerned this could lead to abuse by an authoritarian government, to removal of freedoms, and to the emergence of an "ultimate panopticon", a society where all citizens behave in a socially accepted manner because others might be watching.
On July 22, 2006, Reuters reported that two hackers, Newitz and Westhues, at a conference in New York City demonstrated that they could clone the RFID signal from a human implanted RFID chip, indicating that the device was not as secure as was previously claimed.
Institutions
Hospitals and healthcare
Adoption of RFID in the medical industry has been widespread and very effective. Hospitals are among the first users to combine both active and passive RFID. Active tags track high-value, or frequently moved items, and passive tags track smaller, lower cost items that only need room-level identification. Medical facility rooms can collect data from transmissions of RFID badges worn by patients and employees, as well as from tags assigned to items such as mobile medical devices. The U.S. Department of Veterans Affairs (VA) recently announced plans to deploy RFID in hospitals across America to improve care and reduce costs.
Since 2004 a number of U.S. hospitals have begun implanting patients with RFID tags and using RFID systems, usually for workflow and inventory management.
The use of RFID to prevent mix-ups between sperm and ova in IVF clinics is also being considered.
In October 2004, the FDA approved the first RFID chips in the US that can be implanted in humans. The 134 kHz RFID chips, from VeriChip Corp., can incorporate personal medical information and, according to the company, could save lives and limit injuries from errors in medical treatment. Anti-RFID activists Katherine Albrecht and Liz McIntyre discovered an FDA Warning Letter that spelled out health risks. According to the FDA, these include "adverse tissue reaction", "migration of the implanted transponder", "failure of implanted transponder", "electrical hazards" and "magnetic resonance imaging [MRI] incompatibility."
Libraries
Libraries have used RFID to replace the barcodes on library items. The tag can contain identifying information or may just be a key into a database. An RFID system may replace or supplement bar codes and may offer another method of inventory management and self-service checkout by patrons. It can also act as a security device, taking the place of the more traditional electromagnetic security strip.
It is estimated that over 30 million library items worldwide now contain RFID tags, including some in the Vatican Library in Rome.
Since RFID tags can be read through an item, there is no need to open a book cover or DVD case to scan an item, and a stack of books can be read simultaneously. Book tags can be read while books are in motion on a conveyor belt, which reduces staff time. This can all be done by the borrowers themselves, reducing the need for library staff assistance. With portable readers, inventories could be done on a whole shelf of materials within seconds.
However, as of 2008 this technology remained too costly for many smaller libraries, and the conversion period has been estimated at 11 months for an average-size library. A 2004 Dutch estimate was that a library which lends 100,000 books per year should plan on a cost of €50,000 (borrow- and return-stations: €12,500 each; detection gates: €10,000 each; tags: €0.36 each). By taking a large burden off staff, RFID could also mean that fewer staff are needed, with some laid off as a result, but that has so far not happened in North America, where recent surveys have not found a single library that cut staff because of adding RFID. In fact, library budgets are being reduced for personnel and increased for infrastructure, making it necessary for libraries to add automation to compensate for reduced staff sizes. Also, the tasks that RFID takes over are largely not the primary tasks of librarians. A finding in the Netherlands is that borrowers are pleased that staff are now more available to answer questions.
Privacy concerns have been raised surrounding library use of RFID. Because some RFID tags can be read from up to 100 metres (330 ft) away, there is some concern over whether sensitive information could be collected from an unwilling source. However, library RFID tags do not contain any patron information, and the tags used in the majority of libraries use a frequency that is only readable from approximately 10 feet (3 m). Another concern is that a non-library agency could potentially record the RFID tags of every person leaving the library without the library administrator's knowledge or consent. One simple option is to let the book transmit a code that has meaning only in conjunction with the library's database. Another possible enhancement would be to give each book a new code every time it is returned. In the future, should readers become ubiquitous (and possibly networked), stolen books could be traced even outside the library. Tag removal could be made difficult if the tags are so small that they fit invisibly inside a (random) page, possibly put there by the publisher.
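The database-indirection idea can be sketched in a few lines. This is a hypothetical design, not any particular vendor's system: the tag stores only an opaque token, the mapping to the actual item lives in the library's database, and the token is re-issued every time the book is returned.

```python
import secrets

class LibraryTagDirectory:
    """Tags carry only random tokens; their meaning lives in the library DB."""

    def __init__(self):
        self._token_to_item = {}

    def issue_token(self, item_id: str) -> str:
        token = secrets.token_hex(8)          # random code written to the tag
        self._token_to_item[token] = item_id
        return token

    def rotate_on_return(self, old_token: str) -> str:
        """Give the book a fresh code each time it is returned."""
        item_id = self._token_to_item.pop(old_token)
        return self.issue_token(item_id)
```

An outside reader that records tokens at the door learns nothing durable: the codes are meaningless without the database and change with every loan cycle.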
Museums
RFID technologies are now also implemented in end-user applications in museums. An example was the custom-designed temporary research application, "eXspot," at the Exploratorium, a science museum in San Francisco, California. A visitor entering the museum received an RF tag that could be carried as a card. The eXspot system enabled the visitor to receive information about specific exhibits. Aside from the exhibit information, the visitor could take photographs of themselves at the exhibit. It was also intended to allow the visitor to take data for later analysis. The collected information could be retrieved at home from a "personalized" website keyed to the RFID tag.
Schools and universities
In 2004, school authorities in the Japanese city of Osaka made a decision to start chipping children's clothing, backpacks, and student IDs in a primary school. Later, in 2007, a school in Doncaster, England, piloted a monitoring system designed to keep tabs on pupils by tracking radio chips in their uniforms. St Charles Sixth Form College in west London, England, has since 2008 used an RFID card system to check pupils in and out of the main gate, both to track attendance and to prevent unauthorized entrance. Similarly, Whitcliffe Mount School in Cleckheaton, England, uses RFID to track pupils and staff in and out of the building via a specially designed card. In the Philippines, as of 2012, some schools already use RFID in IDs for borrowing books. Gates in those particular schools also have RFID scanners for buying items at school shops and canteens. RFID is also used in school libraries, and to sign in and out for student and teacher attendance.
Sports
RFID for timing races began in the early 1990s with pigeon racing, introduced by the company Deister Electronics in Germany. RFID can provide race start and end timings for individuals in large races where it is impossible to get accurate stopwatch readings for every entrant.
In races using RFID, racers wear tags that are read by antennas placed alongside the track or on mats across the track. UHF tags provide accurate readings with specially designed antennas. Rush errors, lap-count errors, and accidents at the race start are avoided, since anyone can start and finish at any time without being processed in a batch.
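A toy illustration of how raw mat reads become race results; the tuple format and the keep-first-read rule are assumptions made for this sketch.

```python
def net_times(reads):
    """Compute each runner's net time from raw (tag, mat, timestamp) reads."""
    start, finish = {}, {}
    for tag, mat, t in reads:
        table = start if mat == "start" else finish
        table.setdefault(tag, t)   # mats report duplicate reads; keep the first
    return {tag: finish[tag] - start[tag] for tag in finish if tag in start}

reads = [("4711", "start", 12.8), ("4711", "start", 12.9),
         ("4711", "finish", 2491.3)]
print(net_times(reads))   # {'4711': 2478.5} (net time, not gun time)
```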
The design of the chip and of the antenna controls the range from which it can be read. Short-range compact chips are twist-tied to the shoe or strapped to the ankle with hook-and-loop straps. These chips must be within about 400 mm of the mat, giving very good temporal resolution. Alternatively, a chip plus a very large (125 mm square) antenna can be incorporated into the bib number worn on the athlete's chest, at a height of about 1.25 m (4.1 ft).
Passive and active RFID systems are used in off-road events such as orienteering, enduro, and hare and hounds racing. Riders have a transponder on their person, normally on their arm. When they complete a lap they swipe or touch the receiver, which is connected to a computer, to log their lap time.
RFID is being adopted by many recruitment agencies which have a PET (physical endurance test) as their qualifying procedure, especially in cases where the candidate volumes may run into millions (Indian Railway recruitment cells, police and power sector).
A number of ski resorts have adopted RFID tags to provide skiers hands-free access to ski lifts. Skiers do not have to take their passes out of their pockets. Ski jackets have a left pocket into which the chip+card fits. This nearly contacts the sensor unit on the left of the turnstile as the skier pushes through to the lift. These systems were based on high frequency (HF) at 13.56 megahertz. The bulk of ski areas in Europe, from Verbier to Chamonix, use these systems.
The NFL in the United States equips players with RFID chips that measure speed, distance, and direction traveled by each player in real time. Currently, cameras stay focused on the quarterback; however, numerous plays are happening simultaneously on the field. The RFID chip provides new insight into these simultaneous plays. The chip triangulates the player's position to within six inches and is used to digitally broadcast replays. The RFID chip makes individual player information accessible to the public, with data available via the NFL 2015 app. The RFID chips are manufactured by Zebra Technologies, which tested them in 18 stadiums during the previous season to track vector data.
Complement to barcode
RFID tags are often a complement, but not a substitute, for UPC or EAN barcodes. They may never completely replace barcodes, due in part to their higher cost and the advantage of multiple data sources on the same object. Also, unlike RFID labels, barcodes can be generated and distributed electronically by e-mail or mobile phone, for printing or display by the recipient. An example is airline boarding passes. The new EPC, along with several other schemes, is widely available at reasonable cost.
The storage of data associated with tracking items will require many terabytes. Filtering and categorizing RFID data is needed to create useful information. It is likely that goods will be tracked by the pallet using RFID tags, and at the package level with Universal Product Code (UPC) or EAN barcodes.
A unique identity is a mandatory requirement for RFID tags, regardless of the numbering scheme chosen. RFID tag data capacity is large enough that each individual tag can carry a unique code, while current barcodes are limited to a single type code for a particular product. This uniqueness means that a product may be tracked as it moves from location to location while being delivered to a person, which may help to combat theft and other forms of product loss. The tracing of products is an important feature that is well supported with RFID tags, which contain a unique identity of the tag and the serial number of the object. This may help companies cope with quality deficiencies and resulting recall campaigns, but it also contributes to concern about the tracking and profiling of persons after the sale.
Waste management
Since around 2007 there has been increasing development in the use of RFID in the waste management industry. RFID tags are installed on waste collection carts, linking carts to the owner's account for easy billing and service verification. The tag is embedded into a garbage or recycling container, and the RFID reader is affixed to the garbage and recycling trucks. RFID also measures a customer's set-out rate and provides insight into the number of carts serviced by each waste collection vehicle. This RFID process replaces traditional "pay as you throw" (PAYT) municipal solid waste usage-pricing models.
Telemetry
Active RFID tags have the potential to function as low-cost remote sensors that broadcast telemetry back to a base station. Applications of tagometry data could include sensing of road conditions by implanted beacons, weather reports, and noise level monitoring.
Passive RFID tags can also report sensor data. For example, the Wireless Identification and Sensing Platform is a passive tag that reports temperature, acceleration and capacitance to commercial Gen2 RFID readers.
It is possible that active or battery-assisted passive (BAP) RFID tags could broadcast a signal to an in-store receiver to determine whether the RFID tag – and by extension, the product it is attached to – is in the store.
Regulation and standardization
To avoid injuries to humans and animals, RF transmission needs to be controlled.
A number of organizations have set standards for RFID, including the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), ASTM International, the DASH7 Alliance and EPCglobal.
Several specific industries have also set guidelines, including the Financial Services Technology Consortium (FSTC) for tracking IT Assets with RFID, the Computer Technology Industry Association CompTIA for certifying RFID engineers, and the International Airlines Transport Association IATA for luggage in airports.
Every country can set its own rules for frequency allocation for RFID tags, and not all radio bands are available in all countries. RFID commonly operates in the license-free ISM bands (Industrial, Scientific and Medical bands). Even so, the return signal of the tag may still cause interference for other radio users.
Low-frequency (LF: 125–134.2 kHz and 140–148.5 kHz) (LowFID) tags and high-frequency (HF: 13.56 MHz) (HighFID) tags can be used globally without a license.
Ultra-high-frequency (UHF: 865–928 MHz) (Ultra-HighFID or UHFID) tags cannot be used globally as there is no single global standard, and regulations differ from country to country.
In North America, UHF can be used unlicensed for 902–928 MHz (±13 MHz from the 915 MHz center frequency), but restrictions exist for transmission power. In Europe, RFID and other low-power radio applications are regulated by ETSI recommendations EN 300 220 and EN 302 208, and ERO recommendation 70 03, allowing RFID operation with somewhat complex band restrictions from 865–868 MHz. Readers are required to monitor a channel before transmitting ("Listen Before Talk"); this requirement has led to some restrictions on performance, the resolution of which is a subject of current research. The North American UHF standard is not accepted in France as it interferes with its military bands. On July 25, 2012, Japan changed its UHF band to 920 MHz, more closely matching the United States' 915 MHz band, establishing an international standard environment for RFID.
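The fragmentation is easy to capture as data. The band edges below for North America and Europe come from the figures in this section; the Japan entry is an assumption based on the 920 MHz re-allocation noted above, since exact channel-plan edges vary.

```python
# Illustrative UHF RFID band limits in MHz.
UHF_BANDS = {
    "North America": (902.0, 928.0),
    "Europe": (865.0, 868.0),
    "Japan": (916.0, 921.0),   # assumption: see note above
}

def allowed(region: str, freq_mhz: float) -> bool:
    lo, hi = UHF_BANDS[region]
    return lo <= freq_mhz <= hi

print(allowed("Europe", 866.5))   # True
print(allowed("Europe", 915.0))   # False: out of the European band plan
```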
In some countries, a site license is needed; this must be applied for from the local authorities and can be revoked.
As of 31 October 2014, regulations were in place in 78 countries representing approximately 96.5% of the world's GDP, and work on regulations was in progress in three more countries representing approximately 1% of the world's GDP.
Standards that have been made regarding RFID include:
ISO 11784/11785 – Animal identification. Uses 134.2 kHz.
ISO 14223 – Radiofrequency identification of animals – Advanced transponders
ISO/IEC 14443: This standard is a popular HF (13.56 MHz) standard for HighFIDs which is being used as the basis of RFID-enabled passports under ICAO 9303. The Near Field Communication standard that lets mobile devices act as RFID readers/transponders is also based on ISO/IEC 14443.
ISO/IEC 15693: This is also a popular HF (13.56 MHz) standard for HighFIDs widely used for non-contact smart payment and credit cards.
ISO/IEC 18000: Information technology—Radio frequency identification for item management:
ISO/IEC 18092 Information technology—Telecommunications and information exchange between systems—Near Field Communication—Interface and Protocol (NFCIP-1)
ISO 18185: This is the industry standard for electronic seals or "e-seals" for tracking cargo containers using the 433 MHz and 2.4 GHz frequencies.
ISO/IEC 21481 Information technology—Telecommunications and information exchange between systems—Near Field Communication Interface and Protocol −2 (NFCIP-2)
ASTM D7434, Standard Test Method for Determining the Performance of Passive Radio Frequency Identification (RFID) Transponders on Palletized or Unitized Loads
ASTM D7435, Standard Test Method for Determining the Performance of Passive Radio Frequency Identification (RFID) Transponders on Loaded Containers
ASTM D7580, Standard Test Method for Rotary Stretch Wrapper Method for Determining the Readability of Passive RFID Transponders on Homogenous Palletized or Unitized Loads
ISO 28560-2 – specifies encoding standards and the data model to be used within libraries.
In order to ensure global interoperability of products, several organizations have set up additional standards for RFID testing. These standards include conformance, performance and interoperability tests.
EPC Gen2
EPC Gen2 is short for EPCglobal UHF Class 1 Generation 2.
EPCglobal, a joint venture between GS1 and GS1 US, is working on international standards for the use of mostly passive RFID and the Electronic Product Code (EPC) in the identification of many items in the supply chain for companies worldwide.
One of the missions of EPCglobal was to simplify the Babel of protocols prevalent in the RFID world in the 1990s. Two tag air interfaces (the protocol for exchanging information between a tag and a reader) were defined (but not ratified) by EPCglobal prior to 2003. These protocols, commonly known as Class 0 and Class 1, saw significant commercial implementation in 2002–2005.
In 2004, the Hardware Action Group created a new protocol, the Class 1 Generation 2 interface, which addressed a number of problems that had been experienced with Class 0 and Class 1 tags. The EPC Gen2 standard was approved in December 2004. This was approved after a contention from Intermec that the standard may infringe a number of their RFID-related patents. It was decided that the standard itself does not infringe their patents, making the standard royalty free. The EPC Gen2 standard was adopted with minor modifications as ISO 18000-6C in 2006.
In 2007, the lowest cost of Gen2 EPC inlay was offered by the now-defunct company SmartCode, at a price of $0.05 apiece in volumes of 100 million or more.
Problems and concerns
Data flooding
Not every successful reading of a tag (an observation) is useful for business purposes. A large amount of data may be generated that is not useful for managing inventory or other applications. For example, a customer moving a product from one shelf to another, or a pallet load of articles that passes several readers while being moved in a warehouse, are events that do not produce data that are meaningful to an inventory control system.
Event filtering is required to reduce this data inflow to a meaningful depiction of moving goods passing a threshold. Various concepts have been designed, mainly offered as middleware performing the filtering from noisy and redundant raw data to significant processed data.
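As a rough sketch of what such middleware does, the function below collapses bursts of repeated sightings of the same tag into single events; the five-second window is an arbitrary illustrative choice.

```python
def smooth_reads(reads, window=5.0):
    """Collapse raw (tag, timestamp) observations into discrete events.

    Repeated sightings of a tag within `window` seconds count as one event,
    so a pallet sitting in front of a reader does not flood the inventory
    system with duplicates."""
    last_seen = {}
    events = []
    for tag, t in sorted(reads, key=lambda r: r[1]):
        if tag not in last_seen or t - last_seen[tag] > window:
            events.append((tag, t))
        last_seen[tag] = t
    return events
```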
Global standardization
The frequencies used for UHF RFID in the USA are as of 2007 incompatible with those of Europe or Japan. Furthermore, no emerging standard has yet become as universal as the barcode. To address international trade concerns, it is necessary to use a tag that is operational within all of the international frequency domains.
Security concerns
A primary RFID security concern is the illicit tracking of RFID tags. Tags, which are world-readable, pose a risk to both personal location privacy and corporate/military security. Such concerns have been raised with respect to the United States Department of Defense's recent adoption of RFID tags for supply chain management. More generally, privacy organizations have expressed concerns in the context of ongoing efforts to embed electronic product code (EPC) RFID tags in general-use products. This is mostly as a result of the fact that RFID tags can be read, and legitimate transactions with readers can be eavesdropped on, from non-trivial distances. RFID used in access control, payment and eID (e-passport) systems operate at a shorter range than EPC RFID systems but are also vulnerable to skimming and eavesdropping, albeit at shorter distances.
One method of prevention is the use of cryptography. Rolling codes and challenge–response authentication (CRA) are commonly used to foil monitoring and replay of the messages between the tag and reader, since any messages that have been recorded would prove unsuccessful on repeat transmission. Rolling codes rely upon the tag's ID being changed after each interrogation, while CRA uses software to ask for a cryptographically coded response from the tag. The protocols used during CRA can be symmetric, or may use public key cryptography.
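A compact sketch of the challenge–response idea. Real low-cost tags rarely have the power budget for HMAC-SHA256; the primitive here merely illustrates the principle that a recorded response is useless against a fresh challenge.

```python
import hashlib
import hmac
import os

SHARED_KEY = b"per-tag secret"   # hypothetical key provisioned at manufacture

def reader_challenge() -> bytes:
    return os.urandom(8)          # fresh nonce for every interrogation

def tag_response(challenge: bytes) -> bytes:
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def reader_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

c = reader_challenge()
assert reader_verify(c, tag_response(c))
# Replaying this response against a *new* challenge would fail.
```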
While a variety of secure protocols have been suggested for RFID tags, in order to support long read range at low cost, many RFID tags have barely enough power available to support anything beyond very low-power and therefore simple security protocols, such as cover-coding.
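Cover-coding, as used in EPC Gen2, XORs the data the reader sends with a random number (RN16) supplied by the tag, whose weak backscatter reply is much harder to eavesdrop than the reader's strong transmission. A simplified sketch:

```python
import os

def cover_code(data: bytes, rn16: bytes) -> bytes:
    """XOR a payload with a tag-supplied random number, Gen2-style."""
    keystream = rn16 * (len(data) // len(rn16) + 1)
    return bytes(d ^ k for d, k in zip(data, keystream))

rn16 = os.urandom(2)                        # 16-bit random number from the tag
ciphertext = cover_code(b"\x12\x34", rn16)  # what goes over the air
assert cover_code(ciphertext, rn16) == b"\x12\x34"   # XOR is its own inverse
```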
Unauthorized reading of RFID tags presents a risk to privacy and to business secrecy. Unauthorized readers can potentially use RFID information to identify or track packages, persons, carriers, or the contents of a package. Several prototype systems are being developed to combat unauthorized reading, including RFID signal interruption, as well as the possibility of legislation, and 700 scientific papers have been published on this matter since 2002. There are also concerns that the database structure of Object Naming Service may be susceptible to infiltration, similar to denial-of-service attacks, after the EPCglobal Network ONS root servers were shown to be vulnerable.
Health
Microchip-induced tumours have been noted during animal trials.
Shielding
In an effort to prevent the passive "skimming" of RFID-enabled cards or passports, the U.S. General Services Administration (GSA) issued a set of test procedures for evaluating electromagnetically opaque sleeves. For shielding products to be in compliance with FIPS-201 guidelines, they must meet or exceed this published standard; compliant products are listed on the website of the U.S. CIO's FIPS-201 Evaluation Program. The United States government requires that when new ID cards are issued, they must be delivered with an approved shielding sleeve or holder. Although many wallets and passport holders are advertised to protect personal information, there is little evidence that RFID skimming is a serious threat; data encryption and use of EMV chips rather than RFID makes this sort of theft rare.
There are contradictory opinions as to whether aluminum can prevent reading of RFID chips. Some people claim that aluminum shielding, essentially creating a Faraday cage, does work. Others claim that simply wrapping an RFID card in aluminum foil only makes transmission more difficult and is not completely effective at preventing it.
Shielding effectiveness depends on the frequency being used. Low-frequency LowFID tags, like those used in implantable devices for humans and pets, are relatively resistant to shielding, although thick metal foil will prevent most reads. High frequency HighFID tags (13.56 MHz—smart cards and access badges) are sensitive to shielding and are difficult to read when within a few centimetres of a metal surface. UHF Ultra-HighFID tags (pallets and cartons) are difficult to read when placed within a few millimetres of a metal surface, although their read range is actually increased when they are spaced 2–4 cm from a metal surface due to positive reinforcement of the reflected wave and the incident wave at the tag.
Controversies
Privacy
The use of RFID has engendered considerable controversy and some consumer privacy advocates have initiated product boycotts. Consumer privacy experts Katherine Albrecht and Liz McIntyre are two prominent critics of the "spychip" technology. The two main privacy concerns regarding RFID are as follows:
As the owner of an item may not necessarily be aware of the presence of an RFID tag and the tag can be read at a distance without the knowledge of the individual, sensitive data may be acquired without consent.
If a tagged item is paid for by credit card or in conjunction with use of a loyalty card, then it would be possible to indirectly deduce the identity of the purchaser by reading the globally unique ID of that item contained in the RFID tag. This is a possibility if the person watching also had access to the loyalty card and credit card data, and the person with the equipment knows where the purchaser is going to be.
Most concerns revolve around the fact that RFID tags affixed to products remain functional even after the products have been purchased and taken home and thus can be used for surveillance and other purposes unrelated to their supply chain inventory functions.
The RFID Network responded to these fears in the first episode of its syndicated cable TV series, saying that they are unfounded, and had RF engineers demonstrate how RFID works. They provided images of RF engineers driving an RFID-enabled van around a building and trying to take an inventory of items inside, and also discussed satellite tracking of a passive RFID tag.
The concerns raised may be addressed in part by use of the Clipped Tag. The Clipped Tag is an RFID tag designed to increase privacy for the purchaser of an item. The Clipped Tag has been suggested by IBM researchers Paul Moskowitz and Guenter Karjoth. After the point of sale, a person may tear off a portion of the tag. This allows the transformation of a long-range tag into a proximity tag that still may be read, but only at short range – less than a few inches or centimeters. The modification of the tag may be confirmed visually. The tag may still be used later for returns, recalls, or recycling.
However, read range is a function of both the reader and the tag itself. Improvements in technology may increase read ranges for tags. Tags may be read at longer ranges than they are designed for by increasing reader power. The limit on read distance then becomes the signal-to-noise ratio of the signal reflected from the tag back to the reader. Researchers at two security conferences have demonstrated that passive Ultra-HighFID tags normally read at ranges of up to 30 feet can be read at ranges of 50 to 69 feet using suitable equipment.
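The physics behind this limit can be made concrete. In a simplified free-space model, the power reflected from a passive tag back to the reader falls off as 1/r^4, so read range grows only with the fourth root of reader power; the baseline range below is the 30-foot figure mentioned above.

```python
def extended_range(base_range_ft: float, power_ratio: float) -> float:
    """Backscatter SNR scales as 1/r^4, so range grows as power**(1/4)."""
    return base_range_ft * power_ratio ** 0.25

# Quadrupling reader power stretches a 30 ft nominal range to about 42 ft;
# bigger jumps (like the 50 to 69 ft demonstrations) also rely on better
# antennas and more sensitive receivers.
print(round(extended_range(30, 4), 1))   # 42.4
```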
In January 2004 privacy advocates from CASPIAN and the German privacy group FoeBuD were invited to the METRO Future Store in Germany, where an RFID pilot project was implemented. It was uncovered by accident that METRO "Payback" customer loyalty cards contained RFID tags with customer IDs, a fact that was disclosed neither to customers receiving the cards, nor to this group of privacy advocates. This happened despite assurances by METRO that no customer identification data was tracked and all RFID usage was clearly disclosed.
During the UN World Summit on the Information Society (WSIS), held 16–18 November 2005, the founder of the free software movement, Richard Stallman, protested the use of RFID security cards by covering his card with aluminum foil.
In 2004–2005 the Federal Trade Commission staff conducted a workshop and review of RFID privacy concerns and issued a report recommending best practices.
RFID was one of the main topics of the 2006 Chaos Communication Congress (organized by the Chaos Computer Club in Berlin) and triggered a large press debate. Topics included electronic passports, Mifare cryptography and the tickets for the FIFA World Cup 2006. Talks showed how the first real-world mass application of RFID at the 2006 FIFA Football World Cup worked. The group monochrom staged a special 'Hack RFID' song.
Government control
Some individuals have grown to fear the loss of rights due to RFID human implantation.
By early 2007, Chris Paget of San Francisco, California, showed that RFID information could be pulled from a US passport card by using only $250 worth of equipment. This suggests that with the information captured, it would be possible to clone such cards.
According to ZDNet, critics believe that RFID will lead to tracking individuals' every movement and will be an invasion of privacy. In the book SpyChips: How Major Corporations and Government Plan to Track Your Every Move by Katherine Albrecht and Liz McIntyre, one is encouraged to "imagine a world of no privacy. Where your every purchase is monitored and recorded in a database and your every belonging is numbered. Where someone many states away or perhaps in another country has a record of everything you have ever bought. What's more, they can be tracked and monitored remotely".
Deliberate destruction in clothing and other items
According to an RSA laboratories FAQ, RFID tags can be destroyed by a standard microwave oven; however some types of RFID tags, particularly those constructed to radiate using large metallic antennas (in particular RF tags and EPC tags), may catch fire if subjected to this process for too long (as would any metallic item inside a microwave oven). This simple method cannot safely be used to deactivate RFID features in electronic devices, or those implanted in living tissue, because of the risk of damage to the "host". However the time required is extremely short (a second or two of radiation) and the method works in many other non-electronic and inanimate items, long before heat or fire become of concern.
Some RFID tags implement a "kill command" mechanism to permanently and irreversibly disable them. This mechanism can be applied if the chip itself is trusted or the mechanism is known by the person that wants to "kill" the tag.
UHF RFID tags that comply with the EPC Class 1 Generation 2 standard usually support this mechanism, while protecting the chip from being killed without a password. Guessing or cracking the required 32-bit kill password would not be difficult for a determined attacker.
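Some rough arithmetic on the 32-bit keyspace shows why; the attempts-per-second figure is a hypothetical illustration, and real over-the-air rates depend on the protocol and reader.

```python
def brute_force_hours(guesses_per_second: float) -> float:
    """Worst-case time to sweep the full 32-bit kill-password space."""
    return 2 ** 32 / guesses_per_second / 3600

# At a hypothetical 1,000 over-the-air attempts per second:
print(round(brute_force_hours(1_000)))   # ~1193 hours, roughly 50 days
```

An attacker with physical access or a faster side channel does far better, which is why a 32-bit password is considered only a weak deterrent.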
See also
AS5678
Balise
Bin bug
Chipless RFID
Internet of Things
Mass surveillance
Microchip implant (human)
Near Field Communication (NFC)
PositiveID
Privacy by design
Proximity card
Resonant inductive coupling
RFdump
RFID in schools
RFID Journal
RFID on metal
RSA blocker tag
Smart label
Speedpass
TecTile
Tracking system
References
External links
An open source RFID library used as door opener
UHF regulations overview by GS1
What is RFID? Educational video by The RFID Network
Privacy concerns and proposed privacy legislation
What is RFID? – animated explanation
IEEE Council on RFID
Proximity cards
Automatic identification and data capture
Privacy
Ubiquitous computing
Wireless
Radio frequency interfaces |
Cryptomathic

Cryptomathic is a software company specializing in cryptography for e-commerce security systems. The company develops secure software for the financial and governmental sectors, focusing especially on back-end solutions that use hardware security modules.
Cryptomathic has its headquarters in Aarhus, Denmark. The company was founded in 1986 by three professors from the University of Aarhus, among them Peter Landrock and Ivan Damgård. It now operates worldwide, with offices in Cambridge, UK; Munich, Germany; San Jose, California, US; and Sophia Antipolis, France.
Cryptomathic has collaborated in research projects with the Isaac Newton Institute for Mathematical Sciences to develop Cryptomathic's systems for securing messaging between hardware security modules (HSMs). With Bristol University, Cryptomathic conducted research on authenticated encryption between HSMs.
Awards and recognition
In 2002, Cryptomathic's chief cryptographer Vincent Rijmen was named one of the top 100 innovators in the world under the age of 35 by the MIT Technology Review TR100.
In 2003, Cryptomathic was recognized by the World Economic Forum as a Technology Pioneer, based on its innovative product for mobile electronic signatures. The term "What You See Is What You Sign" (WYSIWYS) was coined in 1998 by Peter Landrock and Torben Pedersen of Cryptomathic during their work on delivering secure and legally binding digital signatures for pan-European projects. In 2004, Cryptomathic received the Visa Smart Star Award for its contributions to the field of EMV and Chip and PIN, based on its data preparation offering. In 2010, Cryptomathic's founder, Peter Landrock, was named a finalist for European Inventor 2010 in the "Lifetime Achievement" category by the European Patent Office.
Whitfield Diffie, a member of Cryptomathic's advisory board, is co-author of the Diffie–Hellman key exchange, a method of securely exchanging cryptographic keys. Diffie and Hellman received the 2015 Turing Award for "fundamental contributions to modern cryptography", including public-key cryptography and digital signatures.
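For context, the exchange lets two parties agree on a shared secret over a public channel. Below is a toy sketch with a deliberately small prime; real deployments use large standardized groups (e.g. 2048-bit MODP) or elliptic curves, and the generator choice here is purely illustrative.

```python
import secrets

p, g = 0xFFFFFFFB, 5           # small illustrative prime modulus and base

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)               # public values exchanged in the clear
B = pow(g, b, p)

# Both sides derive the same shared secret without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)
```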
References
Cryptography companies
Companies based in Cambridge
Software companies established in 1986
1986 establishments in Denmark |
InterBase

InterBase is a relational database management system (RDBMS) developed and marketed by Embarcadero Technologies. InterBase is distinguished from other RDBMSs by its small footprint, close-to-zero administration requirements, and multi-generational architecture. InterBase runs on the Microsoft Windows, macOS, Linux, and Solaris operating systems, as well as iOS and Android.
Technology
InterBase is a SQL-92-compliant relational database and supports standard interfaces such as JDBC, ODBC, and ADO.NET.
Small footprint
A full InterBase server installation requires around 40 MB on disk. A minimum InterBase client install requires about 400 KB of disk space.
Embedded or server
InterBase can be run as an embedded database or regular server.
Data controller friendly inbuilt encryption
Since InterBase XE, InterBase has included 256-bit AES encryption that offers full database, table, or column data encryption. This helps data controllers comply with data-protection laws concerning at-rest data by separating encryption from database access and ensuring the database file is encrypted wherever it resides. The separation of the encryption also lets developers concentrate on the application rather than on which data is visible from a specific user login.
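To illustrate the general shape of at-rest encryption with a key held apart from the data file (this is not InterBase's own implementation), here is a minimal sketch using the Python cryptography package's AES-GCM interface:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # held by the data controller,
aes = AESGCM(key)                           # never stored beside the data

nonce = os.urandom(12)
column_value = b"salary=55000"              # a sensitive value to protect
stored = nonce + aes.encrypt(nonce, column_value, None)

# Whoever copies the database file gets only ciphertext; decryption needs
# the separately held key.
assert aes.decrypt(stored[:12], stored[12:], None) == column_value
```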
Multi-generational architecture
Concurrency control
To avoid blocking during updates, InterBase uses multiversion concurrency control instead of locks. Each transaction creates its own version of a record it changes; a conflicting update fails immediately rather than blocking.
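A toy model of this versioning behavior, with transaction-ID semantics assumed for the sketch (real engines track commit state rather than bare IDs):

```python
class MVCCRecord:
    """Writers never block readers: each write appends a new version."""

    def __init__(self, value, txn_id=0):
        self.versions = [(txn_id, value)]   # (committing txn, value) history

    def read(self, snapshot_txn):
        """Return the newest version visible to the given snapshot."""
        visible = [v for v in self.versions if v[0] <= snapshot_txn]
        return max(visible, key=lambda v: v[0])[1]

    def write(self, txn_id, base_txn, value):
        """Fail fast if the record changed after our snapshot was taken."""
        if self.versions[-1][0] > base_txn:
            raise RuntimeError("update conflict: record changed since snapshot")
        self.versions.append((txn_id, value))

rec = MVCCRecord("v1")
rec.write(txn_id=2, base_txn=0, value="v2")
print(rec.read(snapshot_txn=1))   # 'v1': an older snapshot still sees v1
```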
Rollbacks and recovery
InterBase also uses multi-generational records to implement rollbacks rather than transaction logs.
Drawbacks
Certain operations are more difficult to implement in a multi-generational architecture, and hence perform slowly relative to a more traditional implementation. One example is the SQL COUNT verb. Even when an index is available on the column or columns included in the COUNT, all records must be visited in order to see if they are visible under the current transaction isolation.
History
Early years
Jim Starkey was working at DEC on their DATATRIEVE fourth-generation language (4GL) product when he came up with an idea for a system to manage concurrent changes by many users. The idea dramatically simplified the existing problems of locking, which were proving to be a serious problem for the new relational database systems being developed at the time. Starkey, however, had the idea after he had spun off his original relational database project to another group, and a turf war ensued. Starkey left the company after shipping the first version of the Rdb/ELN product.
Although InterBase's implementation is much more similar to the system described by Reed in his MIT dissertation than any other database that existed at the time and Starkey knew Bernstein from his previous position at the Computer Corporation of America and later at DEC, Starkey has stated that he arrived at the idea of multiversion concurrency control independently. In the same comment, Starkey says:
The inspiration for multi-generational concurrency control was a database system done by Prime that supported page level snapshots. The intention of the feature was to give a reader a consistent view of the database without blocking writers. The idea intrigued me as a very useful characteristic of a database system.
He had heard that the local workstation vendor Apollo Computer was looking for a database offering on their Unix machines, and they agreed to fund development. With their encouragement he formed Groton Database Systems (named after the town, Groton, Massachusetts, where they were located) on Labor Day 1984 and started work on what would eventually be released as InterBase. In 1986 Apollo suffered a corporate shakeup and decided to exit the software business, but by this time the product was making money.
The road to Borland
Between 1986 and 1991 the product was gradually sold to Ashton-Tate, makers of the famous dBASE who were at the time purchasing various database companies in order to fill out their portfolio. The company was soon in trouble, and Borland purchased Ashton-Tate in 1991, acquiring InterBase as part of the deal.
Open source
In early 2000, Borland announced that InterBase would be released under open-source, and began negotiations to spin off a separate company to manage the product. When the people who were to run the new company and Borland could not agree on the terms of the separation, InterBase remained a Borland product, and the source code for InterBase version 6 was released under a variant of the Mozilla Public License in mid-2000.
With the InterBase division at Borland under new management, the company released a proprietary version of InterBase version 6 and then 6.5. Borland released several updates to the open source code before announcing that it would no longer actively develop the open source project. Firebird, an open source fork of the InterBase 6 code, however, remains in active development.
In 2001, a backdoor was discovered (and fixed) in the software that had been present in all versions since 1994.
CodeGear
On February 8, 2006, Borland announced the intention to sell their line of development tool products, including InterBase, Delphi, JBuilder, and other tools, but instead of selling the divisions, Borland spun them out as a subsidiary on 14 November 2006. InterBase, along with IDE tools such as Delphi and JBuilder, was included in the new company's product lineup. Then on 7 May 2008, Borland and Embarcadero Technologies announced that Embarcadero had "signed a definitive asset purchase agreement to purchase CodeGear." The acquisition, for approximately $24.5 million, closed on 30 June 2008.
Recent releases
At the end of 2002, Borland released InterBase version 7, featuring support for SMP, enhanced support for monitoring and control of the server by administrators, and more. Borland released InterBase 7.1 in June 2003, 7.5 in December 2004, and 7.5.1 on June 1, 2005.
In September 2006, Borland announced the availability of InterBase 2007. Its new features include point in time recovery via journaling (which also allows recoverability without the performance penalty of synchronous writes), incremental backup, batch statement operations, new Unicode character encodings, and a new ODBC driver.
In September 2008, Embarcadero announced the availability of InterBase 2009. Its new features include full database encryption, selective column-level data encryption and over-the-wire encryption offering secure TCP/IP communication via Secure Sockets Layer (SSL).
In September 2010, Embarcadero announced the availability of InterBase XE. Its new features include a 64-bit client and server, improved security, improved scalability, support for dynamic SQL in stored procedures, and optimized performance of large objects with stream methods.
In 2013/2014, Embarcadero added iOS and then Android to the supported platforms in InterBase XE3. Additionally, InterBase IBLite was released – a royalty-free runtime edition of InterBase covering Windows, macOS, iOS, and Android.
In December 2014, Embarcadero released InterBase XE7, offering a brand new, patent-pending change-tracking technology called "Change Views". The release added Ubuntu to the certified Linux platforms along with 64-bit Linux support, introduced 64-bit transaction IDs, and added new distinguished data dumps enabling rapid updates of read-only copies of the master database.
In March 2017, Embarcadero released InterBase 2017. InterBase 2017 includes InterBase ToGo for Linux, Server wide monitoring support for InterBase Server, a number of language enhancements (including derived tables and common table expressions, truncate table for faster data removal), enhancements to Change Views for expanding a subscription with a table wide scope, new transaction isolation levels and transaction wait time management.
In November 2019, Embarcadero released InterBase 2020, followed by an Update 1 release in May 2020. The InterBase 2020 release adds a number of new features, including tablespaces support, allowing for better performance on servers with multiple data storage options. See further at https://www.embarcadero.com/products/interbase/version-history
See also
Comparison of relational database management systems
List of relational database management systems
References
External links
InterBase product page
How to connect with Interbase database with Ole Db
Client-server database management systems
CodeGear software
Cross-platform software
MacOS database-related software
Proprietary database management systems
RDBMS software for Linux
Windows database-related software |
Frank A. Stevenson

Frank A. Stevenson (born 1970) is a Norwegian software developer and part-time cryptanalyst. He is primarily known for his exposition of weaknesses in the DVD Forum's Content Scramble System (CSS). Although his cryptanalysis was done independently, he is known for his relation to DeCSS, and appeared as a witness in the Jon Johansen court trial. He also gave a deposition for the DVD CCA v. McLaughlin, Bunner, et al. case.
Stevenson worked for Funcom as a game developer for many years, after which he moved to Kvaleberg to work on mobile phone software. In July 2010, Stevenson published information about vulnerabilities in the A5/1 encryption algorithm used by most 2G GSM networks, and also presented the Kraken software, which demonstrates that the cipher can indeed be broken with modest hardware.
Games credited
Stevenson has been credited with the following video games:
Anarchy Online: Alien Invasion (2004), Funcom Oslo A/S
Anarchy Online (2001), Funcom Oslo A/S
Anarchy Online: The Notum Wars (2001), KOCH Media Deutschland GmbH
The Longest Journey (1999), IQ Media Nordic
Dragonheart: Fire & Steel (1996), Acclaim Entertainment, Inc.
Winter Gold (1996), Nintendo Co., Ltd.
See also
Cryptography
Cryptology
DeCSS
References
1970 births
Living people
Modern cryptographers
Norwegian people of English descent |
Public good (economics)

In economics, a public good (also referred to as a social good or collective good) is a good that is both non-excludable and non-rivalrous. For such goods, users cannot be barred from accessing or using them for failing to pay for them. Also, use by one person neither prevents access of other people nor does it reduce availability to others. Therefore, the good can be used simultaneously by more than one person. This is in contrast to a common good, such as wild fish stocks in the ocean, which is non-excludable but rivalrous to a certain degree. If too many fish were harvested, the stocks would deplete, limiting the access of fish for others. A public good must be valuable to more than one user, otherwise the fact that it can be used simultaneously by more than one person would be economically irrelevant.
Capital goods may be used to produce public goods or services that are "...typically provided on a large scale to many consumers." Unlike other types of economic goods, public goods are described as "non-rivalrous" or "non-exclusive," and use by one person neither prevents access of other people nor does it reduce availability to others. Similarly, using capital goods to produce public goods may result in the creation of new capital goods. In some cases, public goods or services are considered "...insufficiently profitable to be provided by the private sector.... (and), in the absence of government provision, these goods or services would be produced in relatively small quantities or, perhaps, not at all."
Public goods include knowledge, official statistics, national security, and common languages. Additionally, flood control systems, lighthouses, and street lighting are also common social goods. Collective goods that are spread all over the face of the earth may be referred to as global public goods. For instance, knowledge is well shared globally. Information about men, women and youth health awareness, environmental issues, and maintaining biodiversity is common knowledge that every individual in the society can get without necessarily preventing others access. Also, sharing and interpreting contemporary history with a cultural lexicon, particularly about protected cultural heritage sites and monuments are other sources of knowledge that the people can freely access.
Public goods problems are often closely related to the "free-rider" problem, in which people not paying for the good may continue to access it. Thus, the good may be under-produced, overused or degraded. Public goods may also become subject to restrictions on access and may then be considered to be club goods; exclusion mechanisms include toll roads, congestion pricing, and pay television with an encoded signal that can be decrypted only by paid subscribers.
There is a good deal of debate and literature on how to measure the significance of public goods problems in an economy, and to identify the best remedies.
Academic literature on public goods
Paul A. Samuelson is usually credited as the economist who articulated the modern theory of public goods in a mathematical formalism, building on earlier work of Wicksell and Lindahl. In his classic 1954 paper The Pure Theory of Public Expenditure, he defined a public good, or as he called it in the paper a "collective consumption good", as follows:

[goods] which all enjoy in common in the sense that each individual's consumption of such a good leads to no subtractions from any other individual's consumption of that good...

A Lindahl tax is a type of taxation brought forward in 1919 by Erik Lindahl, an economist from Sweden. His idea was to tax individuals for the provision of a public good according to the marginal benefit they receive. Public goods are costly and eventually someone needs to pay the cost, yet it is difficult to determine how much each person should pay, so Lindahl developed a theory of how the expense of public utilities should be settled. His argument was that people would pay for public goods according to the way they benefit from them: the more a person benefits from these goods, the higher the amount they pay, since people are more willing to pay for goods that they value. Taxes are needed to fund public goods, and the theory dwells on people's willingness to bear that burden. Because public goods are paid for through taxation under the Lindahl idea, the basic duty of providing these services and products falls to the government. In most cases, these services and utilities are among the many governmental activities undertaken purely for the satisfaction of the public rather than the generation of profit.

In the introductory section of his book Public Good Theories of the Nonprofit Sector, Bruce R. Kingma stated that:

In the Weisbrod model nonprofit organizations satisfy a demand for public goods, which is left unfilled by government provision. The government satisfies the demand of the median voters and therefore provides a level of the public good less than some citizens' - with a level of demand greater than the median voter's - desire. This unfilled demand for the public good is satisfied by nonprofit organizations. These nonprofit organizations are financed by the donations of citizens who want to increase the output of the public good.
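In modern notation, the Lindahl outcome is usually stated as personalized cost shares that support the efficient quantity of the public good. The symbols below are the standard textbook ones, not Lindahl's original notation: each person i pays a share t_i of the marginal cost, and efficiency requires the Samuelson condition.

```latex
% Each person i faces a personalized price t_i per unit of the public good G
% and demands G such that MRS_i(G) = t_i \cdot MC(G).
% Summing over all n people gives the Samuelson efficiency condition:
\sum_{i=1}^{n} MRS_i(G^*) = MC(G^*), \qquad \sum_{i=1}^{n} t_i = 1
```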
Terminology and types of goods
Non-rivalrous: accessible by all while one's usage of the product does not affect the availability for subsequent use.
Non-excludable: it is impossible to exclude any individuals from consuming the good.
Pure public: when a good exhibits the two traits, non-rivalry and non-excludability, it is referred to as the pure public good.
Impure public goods: the goods that satisfy the two public good conditions (non-rivalry and non-excludability) only to a certain extent or only some of the time.
Private good: The opposite of a public good which does not possess these properties. A loaf of bread, for example, is a private good; its owner can exclude others from using it, and once it has been consumed, it cannot be used by others.
Common-pool resource: A good that is rivalrous but non-excludable. Such goods raise similar issues to public goods: the mirror to the public goods problem for this case is the 'tragedy of the commons'. For example, it is so difficult to enforce restrictions on deep-sea fishing that the world's fish stocks can be seen as a non-excludable resource, but one which is finite and diminishing.
Club goods: goods that are excludable but non-rivalrous, such as private parks.
Mixed good: final goods that are intrinsically private but that are produced by the individual consumer by means of private and public good inputs. The benefits enjoyed from such a good for any one individual may depend on the consumption of others, as in the cases of a crowded road or a congested national park.
Definition matrix
Elinor Ostrom proposed additional modifications to the classification of goods to identify fundamental differences that affect the incentives facing individuals:
Replacing the term "rivalry of consumption" with "subtractability of use".
Conceptualizing subtractability of use and excludability to vary from low to high rather than characterizing them as either present or absent.
Overtly adding a very important fourth type of good—common-pool resources—that shares the attribute of subtractability with private goods and difficulty of exclusion with public goods. Forests, water systems, fisheries, and the global atmosphere are all common-pool resources of immense importance for the survival of humans on this earth.
Changing the name of a "club" good to a "toll" good since goods that share these characteristics are provided by small scale public as well as private associations.
Challenges in identifying public goods
The definition of non-excludability states that it is impossible to exclude individuals from consumption. Technology now allows radio or TV broadcasts to be encrypted such that persons without a special decoder are excluded from the broadcast. Many forms of information goods have characteristics of public goods. For example, a poem can be read by many people without reducing the consumption of that good by others; in this sense, it is non-rivalrous. Similarly, the information in most patents can be used by any party without reducing consumption of that good by others. Official statistics provide a clear example of information goods that are public goods, since they are created to be non-excludable. Creative works may be excludable in some circumstances, however: the individual who wrote the poem may decline to share it with others by not publishing it. Copyrights and patents both encourage the creation of such non-rival goods by providing temporary monopolies, or, in the terminology of public goods, providing a legal mechanism to enforce excludability for a limited period of time. For public goods, the "lost revenue" of the producer of the good is not part of the definition: a public good is a good whose consumption does not reduce any other's consumption of that good. Public goods can also incorporate private goods, which makes it challenging to classify a good as purely private or public. For instance, a community soccer field may look like a public good, yet players must bring their own cleats and ball, and a rental fee may be charged to occupy the space; it is a mixed case of public and private goods.
Debate has been generated among economists as to whether such a category of "public goods" exists. Steven Shavell has suggested the following:
when professional economists talk about public goods they do not mean that there are a general category of goods that share the same economic characteristics, manifest the same dysfunctions, and that may thus benefit from pretty similar corrective solutions...there is merely an infinite series of particular problems (some of overproduction, some of underproduction, and so on), each with a particular solution that cannot be deduced from the theory, but that instead would depend on local empirical factors.
There is a common misconception that public goods are goods provided by the public sector. Although it is often the case that government is involved in producing public goods, this is not always true. Public goods may be naturally available, or they may be produced by private individuals, by firms, or by non-state groups, called collective action.
The theoretical concept of public goods does not distinguish geographic region in regards to how a good may be produced or consumed. However, some theorists, such as Inge Kaul, use the term "global public good" for a public good which is non-rivalrous and non-excludable throughout the whole world, as opposed to a public good which exists in just one national area. Knowledge has been argued as an example of a global public good, but also as a commons, the knowledge commons.
Graphically, non-rivalry means that if each of several individuals has a demand curve for a public good, then the individual demand curves are summed vertically to get the aggregate demand curve for the public good. This is in contrast to the procedure for deriving the aggregate demand for a private good, where individual demands are summed horizontally.
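A small numerical sketch of vertical summation, using two hypothetical linear inverse-demand curves (the names and numbers are invented for illustration):

```python
def aggregate_wtp(individual_wtp, quantity):
    """Vertical summation: at each quantity, add everyone's willingness to pay."""
    return sum(f(quantity) for f in individual_wtp)

# Inverse-demand curves for, say, units of street lighting:
ann = lambda q: max(0, 10 - q)       # Ann pays up to 10 - q per unit
bob = lambda q: max(0, 6 - 2 * q)    # Bob pays up to 6 - 2q per unit

print(aggregate_wtp([ann, bob], 2))  # 8 + 2 = 10, the height of the
                                     # aggregate demand curve at q = 2
```

For a private good the same curves would instead be summed horizontally, adding quantities at each price.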
Some writers have used the term "public good" to refer only to non-excludable "pure public goods" and refer to excludable public goods as "club goods".
Digital Public Goods
Digital public goods include software, data sets, AI models, standards and content that are open source.
Use of the term "digital public good" appears as early as April 2017, when Nicholas Gruen wrote Building the Public Goods of the Twenty-First Century, and has gained popularity with the growing recognition of the potential for new technologies to be implemented at scale to effectively serve people. Digital technologies have also been identified by countries, NGOs and private sector entities as a means to achieve the Sustainable Development Goals (SDGs).
A digital public good is defined by the UN Secretary-General's Roadmap for Digital Cooperation, as: “open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the SDGs.”
Examples
Common examples of public goods include:
public fireworks
clean air and other environmental goods
information goods, such as official statistics
open-source software
authorship
public television
radio
invention
herd immunity
Wikipedia
Shedding light on some misclassified public goods
Some goods, like orphan drugs, require special governmental incentives to be produced, but cannot be classified as public goods since they do not fulfill the above requirements (non-excludable and non-rivalrous).
Law enforcement, streets, libraries, museums, and education are commonly misclassified as public goods; in economic terms they are quasi-public goods, because excludability is possible, even though they still fit some of the characteristics of public goods.
The provision of a lighthouse is a standard example of a public good, since it is difficult to exclude ships from using its services. No ship's use detracts from that of others, but since most of the benefit of a lighthouse accrues to ships using particular ports, lighthouse maintenance can be profitably bundled with port fees (Ronald Coase, The Lighthouse in Economics 1974). This has been sufficient to fund actual lighthouses.
Technological progress can create new public goods. The simplest examples are street lights, which are relatively recent inventions (by historical standards). One person's enjoyment of them does not detract from other persons' enjoyment, and it currently would be prohibitively expensive to charge individuals separately for the amount of light they presumably use.
Official statistics are another example. The government's ability to collect, process and provide high-quality information to guide decision-making at all levels has been strongly advanced by technological progress. On the other hand, a public good's status may change over time. Technological progress can significantly impact excludability of traditional public goods: encryption allows broadcasters to sell individual access to their programming. The costs for electronic road pricing have fallen dramatically, paving the way for detailed billing based on actual use.
Public goods are not restricted to human beings; their provision is one aspect of the study of cooperation in biology.
Free rider problem
The free rider problem is a primary issue in collective decision-making. An example is that some firms in a particular industry will choose not to participate in a lobby whose purpose is to affect government policies that could benefit the industry, under the assumption that there are enough participants to result in a favourable outcome without them. The free rider problem is also a form of market failure, in which market-like behavior of individual gain-seeking does not produce economically efficient results. The production of public goods results in positive externalities which are not remunerated. If private organizations do not reap all the benefits of a public good which they have produced, their incentives to produce it voluntarily might be insufficient. Consumers can take advantage of public goods without contributing sufficiently to their creation. This is called the free rider problem, or occasionally, the "easy rider problem". If too many consumers decide to "free-ride", private costs exceed private benefits and the incentive to provide the good or service through the market disappears. The market thus fails to provide a good or service for which there is a need.
The free rider problem depends on a conception of the human being as homo economicus: purely rational and also purely selfish—extremely individualistic, considering only those benefits and costs that directly affect him or her. Public goods give such a person an incentive to be a free rider.
For example, consider national defense, a standard example of a pure public good. Suppose homo economicus thinks about exerting some extra effort to defend the nation. The benefits to the individual of this effort would be very low, since the benefits would be distributed among all of the millions of other people in the country. There is also a very real possibility that he or she could get injured or killed during the course of his or her military service. On the other hand, the free rider knows that he or she cannot be excluded from the benefits of national defense, regardless of whether he or she contributes to it. There is also no way that these benefits can be split up and distributed as individual parcels to people. The free rider would not voluntarily exert any extra effort, unless there is some inherent pleasure or material reward for doing so (for example, money paid by the government, as with an all-volunteer army or mercenaries).
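This incentive structure can be sketched numerically with the standard linear public goods game; the endowment, group size, and multiplier below are illustrative assumptions, not figures from the text. Because each contributed unit returns only multiplier/n < 1 to the contributor, free riding maximizes the individual payoff:

```python
# Linear public goods game: n players, each with endowment e, chooses a
# contribution c_i; contributions are multiplied by m (< n) and shared equally.
# Payoff_i = e - c_i + m * sum(c) / n, so each contributed unit returns only
# m/n < 1 to the contributor: free riding maximizes individual payoff.

def payoff(contributions, i, endowment=10.0, multiplier=1.6):
    n = len(contributions)
    public_share = multiplier * sum(contributions) / n
    return endowment - contributions[i] + public_share

others = [10.0, 10.0, 10.0]            # three cooperators contributing fully
for my_c in (0.0, 5.0, 10.0):          # candidate contributions for player 0
    group = [my_c] + others
    print(f"contribute {my_c:>4}: payoff = {payoff(group, 0):.2f}")
# contribute  0.0: payoff = 22.00   <- free riding pays best
# contribute  5.0: payoff = 19.00
# contribute 10.0: payoff = 16.00
```

The dilemma is that universal full contribution would give everyone a payoff of 16, which is higher than the 10 each receives under universal free riding, yet free riding remains each individual's best reply.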
The free-riding problem is more complicated than was thought until recently. Any time non-excludability results in failure to pay the true marginal value (often called the "demand revelation problem"), it will also result in failure to generate proper income levels, since households will not give up valuable leisure if they cannot individually increment a good. This implies that, for public goods without strong special-interest support, under-provision is likely, since cost-benefit analysis is being conducted at the wrong income levels, and all of the un-generated income would have been spent on the public good, apart from general equilibrium considerations.
In the case of information goods, an inventor of a new product may benefit all of society, but hardly anyone is willing to pay for the invention if they can benefit from it for free. In the case of an information good, however, because of its characteristics of non-excludability and also because of almost zero reproduction costs, commoditization is difficult and not always efficient even from a neoclassical economic point of view.
Efficient production levels of public goods
The Pareto optimal provision of a public good in a society occurs when the sum of the marginal valuations of the public good (taken across all individuals) is equal to the marginal cost of providing that public good. These marginal valuations are, formally, marginal rates of substitution relative to some reference private good, and the marginal cost is a marginal rate of transformation that describes how much of that private good it costs to produce an incremental unit of the public good. This contrasts with the Pareto optimality condition for private goods, which equates each consumer's valuation of the private good to its marginal cost of production.
For example, consider a community of just two consumers, where the government is considering whether or not to build a public park. One person is prepared to pay up to $200 for its use, while the other is willing to pay up to $100. The total value to the two individuals of having the park is $300. If it can be produced for $225, there is a $75 surplus to maintaining the park, since it provides services that the community values at $300 at a cost of only $225.
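In a brief formal sketch, this is the Samuelson condition: provision of the public good is efficient when the sum of individual marginal rates of substitution equals the marginal rate of transformation (marginal cost). Applied to the park example above:

```latex
% Samuelson condition for efficient provision of a public good
\sum_{i=1}^{n} MRS_i = MRT
% Park example: summed valuations exceed cost, so building the park is efficient
\$200 + \$100 = \$300 > \$225, \qquad \text{surplus} = \$300 - \$225 = \$75
```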
The classical theory of public goods defines efficiency under idealized conditions of complete information, a limitation already acknowledged in Wicksell (1896). Samuelson emphasized that this poses problems for the efficient provision of public goods in practice and for the assessment of an efficient Lindahl tax to finance public goods, because individuals have incentives to underreport how much they value public goods. Subsequent work, especially in mechanism design and the theory of public finance, developed ways in which valuations and costs could actually be elicited in practical conditions of incomplete information, using devices such as the Vickrey–Clarke–Groves mechanism. Thus, deeper analysis of problems of public goods motivated much work that is at the heart of modern economic theory.
Local public goods
The basic theory of public goods as discussed above begins with situations where the level of a public good (e.g., quality of the air) is equally experienced by everyone. However, in many important situations of interest, the incidence of benefits and costs is not so simple. For example, when people keep an office clean or monitor a neighborhood for signs of trouble, the benefits of that effort accrue to some people (those in their neighborhoods) more than to others. The overlapping structure of these neighborhoods is often modeled as a network. (When neighborhoods are totally separate, i.e., non-overlapping, the standard model is the Tiebout model.)
A bus service is an example of a local public good that can benefit even people from outside the neighborhood: a college student visiting a friend in another city benefits from the service just like the residents and students of that city. The benefits and costs are also intertwined. Riders benefit from not having to walk to their destinations, while others may prefer to walk precisely to avoid contributing to the pollution emitted by motor vehicles.
Recently, economists have developed the theory of local public goods with overlapping neighborhoods, or public goods in networks: both their efficient provision, and how much can be provided voluntarily in a non-cooperative equilibrium. When it comes to socially efficient provision, networks that are more dense or close-knit in terms of how much people can benefit each other have more scope for improving on an inefficient status quo. On the other hand, voluntary provision is typically below the efficient level, and equilibrium outcomes tend to involve strong specialization, with a few individuals contributing heavily and their neighbors free-riding on those contributions.
Ownership
Even when a good is non-rival and non-excludable, ownership rights play an important role. Suppose that physical assets (e.g., buildings or machines) are necessary to produce a public good. Who should own the physical assets?
Economic theorists such as Oliver Hart (1995) have emphasized that ownership matters for investment incentives when contracts are incomplete. The incomplete contracting paradigm has been applied to public goods by Besley and Ghatak (2001). They consider the government and a non-governmental organization (NGO) who can both make investments to provide a public good. Besley and Ghatak argue that the party who has a larger valuation of the public good should be the owner, regardless of whether the government or the NGO has a better investment technology. This result contrasts with the case of private goods studied by Hart (1995), where the party with the better investment technology should be the owner. However, it has been shown that the investment technology may matter also in the public-good case when a party is indispensable or when there are bargaining frictions between the government and the NGO. Halonen-Akatwijuka and Pafilis (2020) have demonstrated that Besley and Ghatak's results are not robust when there is a long-term relationship, such that the parties interact repeatedly. Moreover, Schmitz (2021) has shown that when the parties have private information about their valuations of the public good, then the investment technology can be an important determinant of the optimal ownership structure.
See also
Anti-rival good
Excludability
Lindahl tax, a method proposed by Erik Lindahl for financing public goods
Private-collective model of innovation, which explains the creation of public goods by private investors
Public bad
Public trust doctrine
Public goods game, a standard of experimental economics
Public works, government-financed constructions
Tragedy of the commons
Tragedy of the anticommons
Rivalry (economics)
Quadratic funding, a mechanism to allocate funding for the production of public goods based on democratic principles
References
Bibliography
Further reading
Acoella, Nicola (2006), ‘Distributive issues in the provision and use of global public goods’, in: ‘Studi economici’, 88(1): 23–42.
Zittrain, Jonathan, The Future of the Internet: And How to Stop It. 2008
Lessig, Lawrence, Code 2.0, Chapter 7, What Things Regulate
External links
Public Goods: A Brief Introduction, by The Linux Information Project (LINFO)
Global Public Goods – analysis from Global Policy Forum
The Nature of Public Goods
Hardin, Russell, "The Free Rider Problem", The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.)
Community building
Goods (economics)
Market failure
Public economics
Good |
173586 | https://en.wikipedia.org/wiki/Playfair%20cipher | Playfair cipher | The Playfair cipher or Playfair square or Wheatstone–Playfair cipher is a manual symmetric encryption technique and was the first literal digram substitution cipher. The scheme was invented in 1854 by Charles Wheatstone, but bears the name of Lord Playfair for promoting its use.
The technique encrypts pairs of letters (bigrams or digrams), instead of single letters as in the simple substitution cipher and rather more complex Vigenère cipher systems then in use. The Playfair is thus significantly harder to break since the frequency analysis used for simple substitution ciphers does not work with it. The frequency analysis of bigrams is possible, but considerably more difficult. With 600 possible bigrams rather than the 26 possible monograms (single symbols, usually letters in this context), a considerably larger cipher text is required in order to be useful.
History
The Playfair cipher was the first cipher to encrypt pairs of letters in cryptologic history. Wheatstone invented the cipher for secrecy in telegraphy, but it carries the name of his friend Lord Playfair, first Baron Playfair of St. Andrews, who promoted its use. The first recorded description of the Playfair cipher was in a document signed by Wheatstone on 26 March 1854.
It was initially rejected by the British Foreign Office when it was developed because of its perceived complexity. Wheatstone offered to demonstrate that three out of four boys in a nearby school could learn to use it in 15 minutes, but the Under Secretary of the Foreign Office responded, "That is very possible, but you could never teach it to attachés."
It was however later used for tactical purposes by British forces in the Second Boer War and in World War I and for the same purpose by the British and Australians during World War II. This was because Playfair is reasonably fast to use and requires no special equipment - just a pencil and some paper. A typical scenario for Playfair use was to protect important but non-critical secrets during actual combat e.g. the fact that an artillery barrage of smoke shells would commence within 30 minutes to cover soldiers' advance towards the next objective. By the time enemy cryptanalysts could decode such messages hours later, such information would be useless to them because it was no longer relevant.
During World War II, the Government of New Zealand used it for communication among New Zealand, the Chatham Islands, and the coastwatchers in the Pacific Islands. Coastwatchers established by Royal Australian Navy Intelligence also used this cipher.
Superseded
Playfair is no longer used by military forces because of the advent of digital encryption devices. This cipher is now regarded as insecure for any purpose, because modern computers could easily break it within microseconds.
The first published solution of the Playfair cipher was described in a 19-page pamphlet by Lieutenant Joseph O. Mauborgne, published in 1914.
Description
The Playfair cipher uses a 5 by 5 table containing a key word or phrase. Memorization of the keyword and 4 simple rules was all that was required to create the 5 by 5 table and use the cipher.
To generate the key table, one would first fill in the spaces in the table (a modified Polybius square) with the letters of the keyword (dropping any duplicate letters), then fill the remaining spaces with the rest of the letters of the alphabet in order (usually omitting "J" or "Q" to reduce the alphabet to fit; other versions put both "I" and "J" in the same space). The key can be written in the top rows of the table, from left to right, or in some other pattern, such as a spiral beginning in the upper-left-hand corner and ending in the center. The keyword together with the conventions for filling in the 5 by 5 table constitute the cipher key.
To encrypt a message, one would break the message into digrams (groups of 2 letters) such that, for example, "HelloWorld" becomes "HE LL OW OR LD". These digrams will be substituted using the key table. Since encryption requires pairs of letters, messages with an odd number of characters usually append an uncommon letter, such as "X", to complete the final digram. The two letters of the digram are considered opposite corners of a rectangle in the key table. To perform the substitution, apply the following 4 rules, in order, to each pair of letters in the plaintext:
If both letters are the same (or only one letter is left), add an "X" after the first letter. Encrypt the new pair and continue. Some variants of Playfair use "Q" instead of "X", but any letter, itself uncommon as a repeated pair, will do.
If the letters appear on the same row of your table, replace them with the letters to their immediate right respectively (wrapping around to the left side of the row if a letter in the original pair was on the right side of the row).
If the letters appear on the same column of your table, replace them with the letters immediately below respectively (wrapping around to the top side of the column if a letter in the original pair was on the bottom side of the column).
If the letters are not on the same row or column, replace them with the letters on the same row respectively but at the other pair of corners of the rectangle defined by the original pair. The order is important – the first letter of the encrypted pair is the one that lies on the same row as the first letter of the plaintext pair.
To decrypt, use the inverse (opposite) of the last 3 rules, and the first as-is (dropping any extra "X"s or "Q"s that do not make sense in the final message when finished).
There are several minor variations of the original Playfair cipher.
Example
Using "playfair example" as the key (assuming that I and J are interchangeable), the table becomes (omitted letters in red):
The first step of encrypting the message "hide the gold in the tree stump" is to convert it to the pairs of letters "HI DE TH EG OL DI NT HE TR EX ES TU MP" (with the null "X" used to separate the repeated "E"s). Then, applying the rules pair by pair:

HI → BM, DE → OD, TH → ZB, EG → XD, OL → NA, DI → BE, NT → KU, HE → DM, TR → UI, EX → XM, ES → MO, TU → UV, MP → IF
Thus the message "hide the gold in the tree stump" becomes "BM OD ZB XD NA BE KU DM UI XM MO UV IF", which may be restructured as "BMODZ BXDNA BEKUD MUIXM MOUVI F" for ease of reading the cipher text.
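The table construction and the four encryption rules above are mechanical enough to express compactly in code. The following minimal sketch (assuming the common convention of folding J into I and padding with "X") reproduces the article's example:

```python
def build_table(key):
    # 25-letter key square: keyword letters first (duplicates dropped,
    # J folded into I), then the rest of the alphabet in order.
    seen = []
    for ch in key.upper().replace("J", "I") + "ABCDEFGHIKLMNOPQRSTUVWXYZ":
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return [seen[i:i + 5] for i in range(0, 25, 5)]

def digrams(text):
    letters = [c for c in text.upper().replace("J", "I") if c.isalpha()]
    pairs, i = [], 0
    while i < len(letters):
        a = letters[i]
        b = letters[i + 1] if i + 1 < len(letters) else "X"
        if a == b:                 # rule 1: split doubled letters with "X"
            pairs.append(a + "X")
            i += 1
        else:
            pairs.append(a + b)
            i += 2
    return pairs

def encrypt(plain, key):
    table = build_table(key)
    pos = {ch: (r, c) for r, row in enumerate(table) for c, ch in enumerate(row)}
    out = []
    for a, b in digrams(plain):
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:               # rule 2: same row, take letters to the right
            out.append(table[ra][(ca + 1) % 5] + table[rb][(cb + 1) % 5])
        elif ca == cb:             # rule 3: same column, take letters below
            out.append(table[(ra + 1) % 5][ca] + table[(rb + 1) % 5][cb])
        else:                      # rule 4: rectangle, swap the columns
            out.append(table[ra][cb] + table[rb][ca])
    return " ".join(out)

print(encrypt("hide the gold in the tree stump", "playfair example"))
# BM OD ZB XD NA BE KU DM UI XM MO UV IF
```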
Clarification with picture
Assume one wants to encrypt the digram OR. There are five general cases, corresponding to the rules above: the two letters form a rectangle (each is replaced by the letter in its own row at the other letter's column); they share a row (each is replaced by the letter to its right); they share a row at its edge (the replacement wraps around to the start of the row); they share a column (each is replaced by the letter below it); or they share a column at its bottom (the replacement wraps around to the top of the column).
Cryptanalysis
Like most classical ciphers, the Playfair cipher can be easily cracked if there is enough text. Obtaining the key is relatively straightforward if both plaintext and ciphertext are known. When only the ciphertext is known, brute force cryptanalysis of the cipher involves searching through the key space for matches between the frequency of occurrence of digrams (pairs of letters) and the known frequency of occurrence of digrams in the assumed language of the original message.
Cryptanalysis of Playfair is similar to that of four-square and two-square ciphers, though the relative simplicity of the Playfair system makes identifying candidate plaintext strings easier. Most notably, a Playfair digraph and its reverse (e.g. AB and BA) will decrypt to the same letter pattern in the plaintext (e.g. RE and ER). In English, there are many words which contain these reversed digraphs such as REceivER and DEpartED. Identifying nearby reversed digraphs in the ciphertext and matching the pattern to a list of known plaintext words containing the pattern is an easy way to generate possible plaintext strings with which to begin constructing the key.
A different approach to tackling a Playfair cipher is the shotgun hill climbing method. This starts with a random square of letters. Then minor changes are introduced (i.e. switching letters, rows, or reflecting the entire square) to see if the candidate plaintext is more like standard plaintext than before the change (perhaps by comparing the digrams to a known frequency chart). If the new square is deemed to be an improvement, then it is adopted and then further mutated to find an even better candidate. Eventually, the plaintext or something very close is found to achieve a maximal score by whatever grading method is chosen. This is obviously beyond the range of typical human patience, but computers can adopt this algorithm to crack Playfair ciphers with a relatively small amount of text.
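A rough sketch of that hill-climbing loop follows. Here `decrypt` and `score` are hypothetical helpers standing in for a Playfair decryption routine and any plaintext-likeness measure (such as summed log-frequencies of English digrams); neither is part of a specific library:

```python
import random

def hill_climb(ciphertext, decrypt, score, iterations=100_000):
    """Shotgun hill climbing: keep any random mutation of the key square
    that makes the candidate plaintext score more like English."""
    square = list("ABCDEFGHIKLMNOPQRSTUVWXYZ")   # 25 letters, J folded into I
    random.shuffle(square)                        # start from a random square
    best = score(decrypt(ciphertext, square))
    for _ in range(iterations):
        candidate = square[:]
        i, j = random.sample(range(25), 2)        # minor change: swap two letters
        candidate[i], candidate[j] = candidate[j], candidate[i]
        s = score(decrypt(ciphertext, candidate))
        if s > best:                              # adopt improvements, else discard
            square, best = candidate, s
    return square, best
```

In practice the loop is restarted from many random squares (hence "shotgun"), since a single climb can stall at a local optimum.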
Another aspect of Playfair that separates it from four-square and two-square ciphers is the fact that it will never contain a double-letter digram, e.g. EE. If there are no double letter digrams in the ciphertext and the length of the message is long enough to make this statistically significant, it is very likely that the method of encryption is Playfair.
A good tutorial on reconstructing the key for a Playfair cipher can be found in chapter 7, "Solution to Polygraphic Substitution Systems," of Field Manual 34-40-2, produced by the United States Army. Another cryptanalysis of a Playfair cipher can be found in Chapter XXI of Helen Fouché Gaines, Cryptanalysis / a study of ciphers and their solutions.
A detailed cryptanalysis of Playfair is undertaken in chapter 28 of Dorothy L. Sayers' mystery novel Have His Carcase. In this story, a Playfair message is demonstrated to be cryptographically weak, as the detective is able to solve for the entire key making only a few guesses as to the formatting of the message (in this case, that the message starts with the name of a city and then a date). Sayers' book includes a detailed description of the mechanics of Playfair encryption, as well as a step-by-step account of manual cryptanalysis.
The German Army, Air Force and Police used the Double Playfair cipher as a medium-grade cipher in WWII, based on the British Playfair cipher they had broken early in WWI. They adapted it by introducing a second square from which the second letter of each bigram was selected, and dispensed with the keyword, placing the letters in random order. But with the German fondness for pro forma messages, they were broken at Bletchley Park. Messages were preceded by a sequential number, and numbers were spelled out. As the German numbers one (eins) to twelve (zwölf) contain all but eight of the letters in the Double Playfair squares, pro forma traffic was relatively easy to break (Smith, pages 74–75).
Use in modern crosswords
Advanced thematic cryptic crosswords like The Listener Crossword (published in the Saturday edition of the British newspaper The Times) occasionally incorporate Playfair ciphers. Normally between four and six answers have to be entered into the grid in code, and the Playfair keyphrase is thematically significant to the final solution.
The cipher lends itself well to crossword puzzles, because the plaintext is found by solving one set of clues, while the ciphertext is found by solving others. Solvers can then construct the key table by pairing the digrams (it is sometimes possible to guess the keyword, but never necessary).
Use of the Playfair cipher is generally explained as part of the preamble to the crossword. This levels the playing field for those solvers who have not come across the cipher previously. But the way the cipher is used is always the same. The 25-letter alphabet used always contains Q and has I and J coinciding. The key table is always filled row by row.
In popular culture
The novel Have His Carcase by Dorothy L. Sayers gives a blow-by-blow account of the cracking of a Playfair cipher.
The World War 2 thriller The Trojan Horse by Hammond Innes conceals the formula for a new high-strength metal alloy using the Playfair cipher.
In the film National Treasure: Book of Secrets, a treasure hunt clue is encoded as a Playfair cipher.
In the audio book Rogue Angel: God of Thunder, a Playfair cipher clue is used to send Anja Creed to Venice.
In the novel York: The Map of Stars (part three of a trilogy for children) by Laura Ruby, a clue to solving the Morningstarr cipher is encrypted using the Playfair cipher.
The Playfair cipher serves as a plot device in a season 2 episode of the 2019 TV series Batwoman.
In the novel The Sinclair Betrayal by M J Lee, a character who is preparing to drop into France during WW2 is taught the three types of ciphers: double transposition, Playfair, and Ironside.
See also
Topics in cryptography
Notes
References
Smith, Michael Station X: The Codebreakers of Bletchley Park (1998, Channel 4 Books/Macmillan, London)
External links
Online encrypting and decrypting Playfair with JavaScript
Extract from some lecture notes on ciphers – Digraphic Ciphers: Playfair
Playfair Cypher
Cross platform implementation of Playfair cipher
Javascript implementation of the Playfair cipher
Python and streamlit implementation of Playfair cipher
Classical ciphers
English inventions |
173620 | https://en.wikipedia.org/wiki/Venona%20project | Venona project | The Venona project was a United States counterintelligence program initiated during World War II by the United States Army's Signal Intelligence Service (later absorbed by the National Security Agency), which ran from February 1, 1943, until October 1, 1980. It was intended to decrypt messages transmitted by the intelligence agencies of the Soviet Union (e.g. the NKVD, the KGB, and the GRU). Initiated when the Soviet Union was an ally of the US, the program continued during the Cold War, when the Soviet Union was considered an enemy.
During the 37-year duration of the Venona project, the Signal Intelligence Service decrypted and translated approximately 3,000 messages. The signals intelligence yield included discovery of the Cambridge Five espionage ring in the United Kingdom and Soviet espionage of the Manhattan Project in the U.S. (known as project Enormous). Some of the espionage was undertaken to support the Soviet atomic bomb project. The Venona project remained secret for more than 15 years after it concluded. Some of the decoded Soviet messages were not declassified and published by the United States until 1995.
Background
During World War II and the early years of the Cold War, the Venona project was a source of information on Soviet intelligence-gathering directed at the Western military powers. Although unknown to the public, and even to Presidents Franklin D. Roosevelt and Harry S. Truman, these programs were of importance concerning crucial events of the early Cold War. These included the Julius and Ethel Rosenberg spying case (which was based on events during World War II) and the defections of Donald Maclean and Guy Burgess to the Soviet Union.
Most decipherable messages were transmitted and intercepted between 1942 and 1945, during World War II, when the Soviet Union was an ally of the US. Sometime in 1945, the existence of the Venona program was revealed to the Soviet Union by the cryptologist-analyst Bill Weisband, an NKVD agent working in U.S. Army signals intelligence. These messages were slowly and gradually decrypted beginning in 1946. This effort continued (often at a low level of effort in the latter years) through 1980, when the Venona program was terminated and the analysts assigned to it were moved to more important projects.
To what extent the various individuals referred to in the messages were involved with Soviet intelligence is a topic of historical dispute. While a number of academics and historians assert that most of the individuals mentioned in the Venona decrypts were most likely either clandestine assets and/or contacts of Soviet intelligence agents, others argue that many of those people probably had no malicious intentions and committed no crimes.
Commencement
The Venona Project was initiated on February 1, 1943, by Gene Grabeel, an American mathematician and cryptanalyst, under orders from Colonel Carter W. Clarke, Chief of Special Branch of the Military Intelligence Service at that time. Clarke distrusted Joseph Stalin, and feared that the Soviet Union would sign a separate peace with Nazi Germany, allowing Germany to focus its military forces against the United Kingdom and the United States. Cryptanalysts of the U.S. Army's Signal Intelligence Service at Arlington Hall analyzed encrypted high-level Soviet diplomatic intelligence messages intercepted in large volumes during and immediately after World War II by American, British, and Australian listening posts.
Decryption
This message traffic, which was encrypted with a one-time pad system, was stored and analyzed in relative secrecy by hundreds of cryptanalysts over a 40-year period starting in the early 1940s. When used correctly, the one-time pad encryption system, which the Soviets had used for their most secret military and diplomatic communications since the 1930s, is unbreakable. However, due to a serious blunder on the part of the Soviets, some of this traffic was vulnerable to cryptanalysis. The Soviet company that manufactured the one-time pads produced around 35,000 pages of duplicate key numbers, as a result of pressures brought about by the German advance on Moscow during World War II. The duplication—which undermines the security of a one-time system—was discovered, and attempts to lessen its impact were made by sending the duplicates to widely separated users. Despite this, the reuse was detected by cryptanalysts in the US.
Breakthrough
The Soviet systems in general used a code to convert words and letters into numbers, to which additive keys (from one-time pads) were added, encrypting the content. When used correctly so that the plaintext is of a length equal to or less than that of a random key, one-time pad encryption is unbreakable. However, cryptanalysis by American code-breakers revealed that some of the one-time pad material had incorrectly been reused by the Soviets (specifically, entire pages, although not complete books), which allowed decryption (sometimes only partial) of a small part of the traffic.
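The Venona traffic used additive numeric key groups rather than bitwise operations, but the consequence of reusing pad material can be sketched with the XOR analogue: two ciphertexts made with the same pad leak the XOR of the two plaintexts, from which a cryptanalyst who can guess a word in one message recovers the corresponding part of the other. The messages and pad below are invented for illustration:

```python
import os

def otp(data: bytes, pad: bytes) -> bytes:
    # One-time pad as XOR; secure only if the pad is random and never reused.
    return bytes(d ^ k for d, k in zip(data, pad))

p1 = b"ATTACK AT DAWN"
p2 = b"RETREAT TO VLG"            # same length as p1 for simplicity
pad = os.urandom(len(p1))         # the blunder: one pad used for both messages

c1, c2 = otp(p1, pad), otp(p2, pad)

# XOR of the ciphertexts equals XOR of the plaintexts; the pad cancels out.
assert otp(c1, c2) == otp(p1, p2)

# A guessed "crib" in one message exposes the other at the same positions.
crib = b"ATTACK"
print(otp(otp(c1, c2)[:len(crib)], crib))   # b'RETREA' -- start of message 2
```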
Generating the one-time pads was a slow and labor-intensive process, and the outbreak of war with Germany in June 1941 caused a sudden increase in the need for coded messages. It is probable that the Soviet code generators started duplicating cipher pages in order to keep up with demand.
It was Arlington Hall's Lieutenant Richard Hallock, working on Soviet "Trade" traffic (so called because these messages dealt with Soviet trade issues), who first discovered that the Soviets were reusing pages. Hallock and his colleagues, amongst whom were Genevieve Feinstein, Cecil Phillips, Frank Lewis, Frank Wanat, and Lucille Campbell, went on to break into a significant amount of Trade traffic, recovering many one-time pad additive key tables in the process.
A young Meredith Gardner then used this material to break into what turned out to be NKVD (and later GRU) traffic by reconstructing the code used to convert text to numbers. Gardner credits Marie Meyer, a linguist with the Signal Intelligence Service with making some of the initial recoveries of the Venona codebook. Samuel Chew and Cecil Phillips also made valuable contributions. On 20 December 1946, Gardner made the first break into the code, revealing the existence of Soviet espionage in the Manhattan Project. Venona messages also indicated that Soviet spies worked in Washington in the State Department, Treasury, Office of Strategic Services, and even the White House. Very slowly, using assorted techniques ranging from traffic analysis to defector information, more of the messages were decrypted.
Claims have been made that information from the physical recovery of code books (a partially burned one was obtained by the Finns) to bugging embassy rooms in which text was entered into encrypting devices (analyzing the keystrokes by listening to them being punched in) contributed to recovering much of the plaintext. These latter claims are less than fully supported in the open literature.
One significant aid (mentioned by the NSA) in the early stages may have been work done in cooperation between the Japanese and Finnish cryptanalysis organizations; when the Americans broke into Japanese codes during World War II, they gained access to this information. There are also reports that copies of signals purloined from Soviet offices by the Federal Bureau of Investigation (FBI) were helpful in the cryptanalysis. The Finnish radio intelligence sold much of its material concerning Soviet codes to OSS in 1944 during Operation Stella Polaris, including the partially burned code book.
Results
The NSA reported that (according to the serial numbers of the Venona cables) thousands of cables were sent, but only a fraction were available to the cryptanalysts. Approximately 2,200 messages were decrypted and translated; about half of the 1943 GRU-Naval Washington to Moscow messages were broken, but none for any other year, although several thousand were sent between 1941 and 1945. The decryption rate of the NKVD cables was as follows:
1942: 1.8%
1943: 15.0%
1944: 49.0%
1945: 1.5%
Out of some hundreds of thousands of intercepted encrypted texts, it is claimed that fewer than 3,000 have been partially or wholly decrypted. All the duplicate one-time pad pages were produced in 1942, and almost all of them had been used by the end of 1945, with a few being used as late as 1948. After this, Soviet message traffic reverted to being completely unreadable.
The existence of Venona decryption became known to the Soviets within a few years of the first breaks. It is not clear whether the Soviets knew how much of the message traffic or which messages had been successfully decrypted. At least one Soviet penetration agent, British Secret Intelligence Service representative to the U.S. Kim Philby, was told about the project in 1949, as part of his job as liaison between British and U.S. intelligence. Since all of the duplicate one-time pad pages had been used by this time, the Soviets apparently did not make any changes to their cryptographic procedures after they learned of Venona. However, this information allowed them to alert those of their agents who might be at risk of exposure due to the decryption.
Significance
The decrypted messages gave important insights into Soviet behavior in the period during which duplicate one-time pads were used. With the first break into the code, Venona revealed the existence of Soviet espionage at Los Alamos National Laboratories. Identities soon emerged of American, Canadian, Australian, and British spies in service to the Soviet government, including Klaus Fuchs, Alan Nunn May, and Donald Maclean. Others worked in Washington in the State Department, the Treasury, Office of Strategic Services, and even the White House.
The messages show that the U.S. and other nations were targeted in major espionage campaigns by the Soviet Union as early as 1942. Among those identified are Julius and Ethel Rosenberg; Alger Hiss; Harry Dexter White, the second-highest official in the Treasury Department; Lauchlin Currie, a personal aide to Franklin Roosevelt; and Maurice Halperin, a section head in the Office of Strategic Services.
The identification of individuals mentioned in Venona transcripts is sometimes problematic, since people with a "covert relationship" with Soviet intelligence are referenced by cryptonyms. Further complicating matters is the fact the same person sometimes had different cryptonyms at different times, and the same cryptonym was sometimes reused for different individuals. In some cases, notably Hiss, the matching of a Venona cryptonym to an individual is disputed. In many other cases, a Venona cryptonym has not yet been linked to any person. According to authors John Earl Haynes and Harvey Klehr, the Venona transcripts identify approximately 349 Americans who they claim had a covert relationship with Soviet intelligence, though fewer than half of these have been matched to real-name identities. However, not every agent may have been communicating directly with Soviet intelligence. Each of those 349 persons may have had many others working for, and reporting only to, them.
The Office of Strategic Services, the predecessor to the CIA, housed at one time or another between fifteen and twenty Soviet spies. Duncan Lee, Donald Wheeler, Jane Foster Zlatowski, and Maurice Halperin passed information to Moscow. The War Production Board, the Board of Economic Warfare, the Office of the Coordinator of Inter-American Affairs and the Office of War Information, included at least half a dozen Soviet sources each among their employees.
Bearing of Venona on particular cases
Venona has added information—some unequivocal, some ambiguous—to several espionage cases. Some known spies, including Theodore Hall, were neither prosecuted nor publicly implicated, because the Venona evidence against them was withheld.
19
The identity of Soviet source cryptonymed "19" remains unclear. According to British writer Nigel West, "19" was president of Czechoslovak government-in-exile Edvard Beneš. Military historian Eduard Mark and American authors Herbert Romerstein and Eric Breindel concluded it was Roosevelt's aide Harry Hopkins. According to American authors John Earl Haynes and Harvey Klehr, "19" could be someone from the British delegation to the Washington Conference in May 1943. Moreover, they argue no evidence of Hopkins as an agent has been found in other archives, and the partial message relating to "19" does not indicate if this source was a spy.
However, Vasili Mitrokhin was a KGB archivist who defected to the United Kingdom in 1992 with copies of large numbers of KGB files. He claimed Harry Hopkins was a secret Russian agent. Moreover, Oleg Gordievsky, a high-level KGB officer who also defected from the Soviet Union, reported that Iskhak Akhmerov, the KGB officer who controlled the clandestine Soviet agents in the U.S. during the war, had said Hopkins was "the most important of all Soviet wartime agents in the United States".
Alexander Vassiliev's notes identified source code-named "19" as Laurence Duggan.
Julius and Ethel Rosenberg
Venona has added significant information to the case of Julius and Ethel Rosenberg, making it clear Julius was guilty of espionage, and also showing that Ethel, while not acting as a principal, still acted as an accessory, who took part in Julius's espionage activity and played a role in the recruitment of her brother for atomic espionage.
Venona and other recent information has shown, while the content of Julius' atomic espionage was not as vital to the Soviets as alleged at the time of his espionage activities, in other fields it was extensive. The information Rosenberg passed to the Soviets concerned the proximity fuze, design and production information on the Lockheed P-80 jet fighter, and thousands of classified reports from Emerson Radio.
The Venona evidence indicates unidentified sources code-named "Quantum" and "Pers" who facilitated transfer of nuclear weapons technology to the Soviet Union from positions within the Manhattan Project. According to Alexander Vassiliev's notes from KGB archive, "Quantum" was Boris Podolsky and "Pers" was Russell W. McNutt, an engineer from the uranium processing plant in Oak Ridge.
Klaus Fuchs
The Venona decryptions were also important in the exposure of the atomic spy Klaus Fuchs. Some of the earliest messages decrypted concerned information from a scientist at the Manhattan Project, who was referred to by the code names of CHARLES and REST. One such message from Moscow to New York, dated April 10, 1945, called information provided by CHARLES "of great value." Noting that the information included "data on the atomic mass of the nuclear explosive" and "details on the explosive method of actuating" the atomic bomb, the message requested further technical details from CHARLES. Investigations based on the Venona decryptions eventually identified CHARLES and REST as Fuchs in 1949.
Alger Hiss and Harry Dexter White
According to the Moynihan Commission on Government Secrecy, the complicity of both Alger Hiss and Harry Dexter White is conclusively proven by Venona, stating "The complicity of Alger Hiss of the State Department seems settled. As does that of Harry Dexter White of the Treasury Department." In his 1998 book, United States Senator Daniel Patrick Moynihan expressed certainty about Hiss's identification by Venona as a Soviet spy, writing "Hiss was indeed a Soviet agent and appears to have been regarded by Moscow as its most important."
Several current authors, researchers, and archivists consider the Venona evidence on Hiss to be inconclusive.
Donald Maclean and Guy Burgess
Kim Philby had access to CIA and FBI files, and more damaging, access to Venona Project briefings. When Philby learned of Venona in 1949, he obtained advance warning that his fellow Soviet spy Donald Maclean was in danger of being exposed. The FBI told Philby about an agent cryptonymed "Homer", whose 1945 message to Moscow had been decoded. As it had been sent from New York and had its origins in the British Embassy in Washington, Philby, who would not have known Maclean's cryptonym, deduced the sender's identity. By early 1951, Philby knew U.S. intelligence would soon also conclude Maclean was the sender, and advised Moscow to extract Maclean. This led to Maclean and Guy Burgess' flight in May 1951 to Moscow, where they lived the remainder of their lives.
Soviet espionage in Australia
In addition to British and American operatives, Australians collected Venona intercepts at a remote base in the Outback. The Soviets remained unaware of this base as late as 1950.
The founding of the Australian Security Intelligence Organisation (ASIO) by Labor Prime Minister Ben Chifley in 1949 was considered highly controversial within Chifley's own party. Until then, the left-leaning Australian Labor Party had been hostile to domestic intelligence agencies on civil-liberties grounds, and a Labor government founding one seemed a surprising about-face. But the presentation of Venona material to Chifley, revealing evidence of Soviet agents operating in Australia, brought this about. As well as Australian diplomat suspects abroad, Venona had revealed Walter Seddon Clayton (cryptonym "KLOD"), a leading official within the Communist Party of Australia (CPA), as the chief organiser of Soviet intelligence gathering in Australia. Investigation revealed that Clayton formed an underground network within the CPA so that the party could continue to operate if it were banned. In 1950, George Ronald Richards was appointed ASIO's deputy-director of operations for Venona, based in Sydney, charged with investigating the intelligence uncovered about the eleven Australians identified in the decoded cables. He continued Venona-related work in London with MI5 from November 1952 and went on to lead Operation Cabin 12, the high-profile 1953–1954 defection to Australia of Soviet spy Vladimir Petrov.
Public disclosure
For much of its history, knowledge of Venona was restricted even from the highest levels of government. Senior army officers, in consultation with the FBI and CIA, made the decision to restrict knowledge of Venona within the government (even the CIA was not made an active partner until 1952). Army Chief of Staff Omar Bradley, concerned about the White House's history of leaking sensitive information, decided to deny President Truman direct knowledge of the project. The president received the substance of the material only through FBI, Justice Department, and CIA reports on counterintelligence and intelligence matters. He was not told the material came from decoded Soviet ciphers. To some degree this secrecy was counter-productive; Truman was distrustful of FBI head J. Edgar Hoover and suspected the reports were exaggerated for political purposes.
Some of the earliest detailed public knowledge that Soviet code messages from World War II had been broken came with the release of Chapman Pincher's book, Too Secret Too Long, in 1984. Robert Lamphere's book, The FBI-KGB War, was released in 1986. Lamphere had been the FBI liaison to the code-breaking activity, had considerable knowledge of Venona and the counter-intelligence work that resulted from it. However, the first detailed account of the Venona project, identifying it by name and making clear its long-term implications in post-war espionage, was contained in MI5 assistant director Peter Wright's 1987 memoir, Spycatcher.
Many inside the NSA had argued internally that the time had come to publicly release the details of the Venona project, but it was not until 1995 that the bipartisan Commission on Government Secrecy, with Senator Moynihan as chairman, released Venona project materials. Moynihan wrote:
"[The] secrecy system has systematically denied American historians access to the records of American history. Of late we find ourselves relying on archives of the former Soviet Union in Moscow to resolve questions of what was going on in Washington at mid-century. ... the Venona intercepts contained overwhelming proof of the activities of Soviet spy networks in America, complete with names, dates, places, and deeds."
One of the considerations in releasing Venona translations was the privacy interests of the individuals mentioned, referenced, or identified in the translations. Some names were not released because to do so would constitute an invasion of privacy. However, in at least one case, independent researchers identified one of the subjects whose name had been obscured by the NSA.
The dearth of reliable information available to the public—or even to the President and Congress—may have helped to polarize debates of the 1950s over the extent and danger of Soviet espionage in the United States. Anti-Communists suspected many spies remained at large, perhaps including some known to the government. Those who criticized the governmental and non-governmental efforts to root out and expose communists felt these efforts were an overreaction (in addition to other reservations about McCarthyism). Public access—or broader governmental access—to the Venona evidence would certainly have affected this debate, as it is affecting the retrospective debate among historians and others now. As the Moynihan Commission wrote in its final report:
"A balanced history of this period is now beginning to appear; the Venona messages will surely supply a great cache of facts to bring the matter to some closure. But at the time, the American Government, much less the American public, was confronted with possibilities and charges, at once baffling and terrifying."
The National Cryptologic Museum features an exhibit on the Venona project in its "Cold War/Information Age" gallery.
Texas textbook controversy
Controversy arose in 2009 over the Texas State Board of Education's revision of their high school history class curricula to suggest Venona shows Senator Joseph McCarthy to have been justified in his zeal in exposing those whom he believed to be Soviet spies or communist sympathizers. Critics such as Emory University history professor Harvey Klehr assert most people and organizations identified by McCarthy, such as those brought forward in the Army-McCarthy hearings or rival politicians in the Democratic party, were not mentioned in the Venona content and that his accusations remain largely unsupported by evidence.
Critical views
The majority of historians are convinced of the historical value of the Venona material. Intelligence historian Nigel West believes that "Venona remain[s] an irrefutable resource, far more reliable than the mercurial recollections of KGB defectors and the dubious conclusions drawn by paranoid analysts mesmerized by Machiavellian plots." However, a number of writers and scholars have taken a critical view of the translations. They question the accuracy of the translations and the identifications of covernames that the NSA translations give. Writers Walter and Miriam Schneir, in a lengthy 1999 review of one of the first book-length studies of the messages, object to what they see as the book's overconfidence in the translations' accuracy, noting that the undecrypted gaps in the texts can make interpretation difficult, and emphasizing the problem of identifying the individuals mentioned under covernames. To support their critique, they cite a declassified memorandum, written in 1956 by A. H. Belmont, who was assistant to FBI director J. Edgar Hoover at the time. In the memo, Belmont discusses the possibility of using the Venona translations in court to prosecute Soviet agents, and comes out strongly opposed to their use. His reasons include legal uncertainties about the admissibility of the translations as evidence, and the difficulties that prosecution would face in supporting the validity of the translations. Belmont highlights the uncertainties in the translation process, noting that the cryptographers have indicated that "almost anything included in a translation of one of these deciphered messages may in the future be radically revised." He also notes the complexities of identifying people with covernames, describing how the personal details mentioned for covername "Antenna" fit more than one person, and the investigative process required to finally connect "Antenna" to Julius Rosenberg. The Schneirs conclude that "A reader faced with Venona's incomplete, disjointed messages can easily arrive at a badly skewed impression."
Many of the critiques of the Venona translations have been based on specific cases. The Schneirs' critique of the Venona documents was based on their decades of work on the case of Ethel and Julius Rosenberg. Another critique of the Venona translations came from the late Rutgers University law professor John Lowenthal, who as a law student worked as a volunteer for Alger Hiss's defense team, and later wrote extensively on the Hiss case. Lowenthal's critique focused on one message (Venona 1822 KGB Washington-Moscow 30 March 1945), in which the comments identified the covername 'Ales' as "probably Alger Hiss." Lowenthal raised a number of objections to this identification, rejecting it as "a conclusion psychologically motivated and politically correct but factually wrong." Lowenthal's article led to an extended debate on the 'Ales' message, and even prompted the NSA to declassify the original Russian text. Currently, Venona 1822 is the only message for which the complete decrypted Russian text has been published.
Victor Navasky, editor and publisher of The Nation, has also written several editorials highly critical of John Earl Haynes' and Harvey Klehr's interpretation of recent work on the subject of Soviet espionage. Navasky claims the Venona material is being used to "distort ... our understanding of the cold war" and that the files are potential "time bombs of misinformation." Commenting on the list of 349 Americans identified by Venona, published in an appendix to Venona: Decoding Soviet Espionage in America, Navasky wrote, "The reader is left with the implication—unfair and unproven—that every name on the list was involved in espionage, and as a result, otherwise careful historians and mainstream journalists now routinely refer to Venona as proof that many hundreds of Americans were part of the red spy network." Navasky goes further in his defense of the listed people and has claimed a great deal of the so-called espionage that went on was nothing more than "exchanges of information among people of good will" and that "most of these exchanges were innocent and were within the law."
According to historian Ellen Schrecker, "Because they offer insights into the world of the secret police on both sides of the Iron Curtain, it is tempting to treat the FBI and Venona materials less critically than documents from more accessible sources. But there are too many gaps in the record to use these materials with complete confidence." Schrecker believes the documents established the guilt of many prominent figures, but is still critical of the views of scholars such as John Earl Haynes, arguing, "complexity, nuance, and a willingness to see the world in other than black and white seem alien to Haynes' view of history."
See also
Elizabeth Bentley
History of Soviet and Russian espionage in the United States
List of Americans in the Venona papers
List of Soviet agents in the United States
Russian State Archive of Socio-Political History
Notes
References and further reading
Books
Online sources
Venona PDFs, arranged by date (NSA)
"Secrets, Lies, and Atomic Spies", PBS Transcript, Airdate: February 5, 2002
External links
Venona Documents – National Security Agency
Cold War espionage
Cold War intelligence operations
Espionage projects
History of cryptography
National Security Agency
Soviet Union–United Kingdom relations
Soviet Union–United States relations
Spy rings |
174094 | https://en.wikipedia.org/wiki/Microsoft%20Messenger%20service | Microsoft Messenger service | Messenger (formerly MSN Messenger Service, .NET Messenger Service and Windows Live Messenger Service) was an instant messaging and presence system developed by Microsoft in 1999 for use with its MSN Messenger software. It was used by instant messaging clients including Windows 8, Windows Live Messenger, Microsoft Messenger for Mac, Outlook.com and Xbox Live. Third-party clients also connected to the service. It communicated using the Microsoft Notification Protocol, a proprietary instant messaging protocol. The service allowed anyone with a Microsoft account to sign in and communicate in real time with other people who were signed in as well.
On 11 January 2013, Microsoft announced that it was retiring the existing Messenger service globally (except for mainland China, where Messenger would continue to be available) and replacing it with Skype.
In April 2013, Microsoft merged the service into the Skype network; existing users were able to sign into Skype with their existing accounts and access their contact lists. As part of the merger, Skype instant messaging functionality now runs on the backbone of the former Messenger service.
Background
Despite multiple name changes to the service and its client software over the years, the Messenger service is often referred to colloquially as "MSN", due to the history of MSN Messenger. The service itself was known as MSN Messenger Service from 1999 to 2001, at which time, Microsoft changed its name to .NET Messenger Service and began offering clients that no longer carried the "MSN" name, such as the Windows Messenger client included with Windows XP, which was originally intended to be a streamlined version of MSN Messenger, free of advertisements and integrated into Windows.
Nevertheless, the company continued to offer more upgrades to MSN Messenger until the end of 2005, when all previous versions of MSN Messenger and Windows Messenger were superseded by a new program, Windows Live Messenger, as part of Microsoft's launch of its Windows Live online services.
For several years, the official name for the service remained .NET Messenger Service, as indicated on its official network status web page, though Microsoft rarely used the name to promote the service. Because the main client used to access the service became known as Windows Live Messenger, Microsoft started referring to the entire service as the Windows Live Messenger Service in its support documentation in the mid-2000s.
The service can integrate with the Windows operating system, automatically and simultaneously signing into the network as the user logs into their Windows account. Organizations can also integrate their Microsoft Office Communications Server and Active Directory with the service. In December 2011, Microsoft released an XMPP interface to the Messenger service.
As part of a larger effort to rebrand many of its Windows Live services, Microsoft began referring to the service as simply Messenger in 2012.
Software
Official clients
Microsoft offered the following instant messaging clients that connected to the Messenger service:
Windows Live Messenger, for users of Windows 7 and previous versions
MSN Messenger was the former name of the client from 1999 to 2006
Windows Messenger is a scaled-down client that was included with Windows XP in 2001
Microsoft Messenger for Mac, for users of Mac OS X
Outlook.com includes web browser-based functionality for instant messaging
Hotmail, the predecessor to Outlook.com, includes similar functionality for Messenger
Windows Live Web Messenger was a web-based program for use through Internet Explorer
MSN Web Messenger was the former name of the web-based client
Windows 8 includes a built-in Messaging client
Xbox Live includes access to the Messenger service from within the Xbox Dashboard
MSN TV (formerly WebTV) had a built-in messaging client available on the original WebTV/MSN TV and MSN TV 2 devices, which was originally introduced via a Summer 2000 software update
Messenger on Windows Phone includes access to the Messenger service from within a phone running Windows Phone
Windows Live Messenger for iPhone and iPod Touch includes access to the Messenger service from within an iPhone, iPod Touch or iPad
Windows Live Messenger for Nokia includes access to the Messenger service from within a Nokia phone
Messenger Play! includes access to the Messenger service from within an Android phone or tablet
Windows Live Messenger for BlackBerry includes access to the Messenger service from within a BlackBerry
Third-party clients
Additionally, these third-party clients and others were able to access the Messenger service:
Adium (Mac OS X, GPL)
aMSN (multi-platform, GPL)
Ayttm (multi-platform, GPL)
BitlBee (Windows and Unix-like, GPL)
CenterIM (cross-platform, GPL)
emesene (multi-platform, GPL)
Empathy (Linux GNOME, GPL)
eBuddy (Web-based and mobile)
Fire (Mac OS X, GPL)
XMPP (any client supporting XMPP protocol can use transports to connect to the Messenger service)
Kopete (Linux KDE, GPL)
Meebo (Web-based)
Meetro (multi-platform, proprietary)
Miranda IM (Windows, GPL)
Pidgin (formerly Gaim) (multi-platform, GPL)
tmsnc (multi-platform, text based)
Trillian (multi-platform, Web, proprietary)
Yahoo! Messenger (multi-platform, proprietary)
Criticism
Microsoft Messenger was criticized for its use of the Microsoft Notification Protocol, which does not provide any encryption. This made it possible to wiretap personal conversations in Messenger by intercepting the communication, which is easy on unencrypted public Wi-Fi networks.
See also
Microsoft Notification Protocol
Comparison of instant messaging protocols
Comparison of instant messaging clients
References
External links
MSN Messenger protocol documentation
MSNPiki (protocol wiki)
Skype replaces Microsoft Messenger for online calls
.NET
Instant messaging protocols
Windows communication and services |
174754 | https://en.wikipedia.org/wiki/Teiji%20Takagi | Teiji Takagi | Teiji Takagi (高木 貞治 Takagi Teiji, April 21, 1875 – February 28, 1960) was a Japanese mathematician, best known for proving the Takagi existence theorem in class field theory. The Blancmange curve, the graph of a nowhere-differentiable but uniformly continuous function, is also called the Takagi curve after his work on it.
Biography
He was born in a rural area of Gifu Prefecture, Japan. He began learning mathematics in middle school, reading texts in English since none were available in Japanese. After attending a high school for gifted students, he went on to the Imperial University (later Tokyo Imperial University), at that time the only university in Japan; it remained so until the imperial university system was expanded on June 18, 1897. There he learned mathematics from such European classic texts as Salmon's Algebra and Weber's Lehrbuch der Algebra. Aided by Hilbert, he then studied at Göttingen. Aside from his work in algebraic number theory, he wrote a great number of Japanese textbooks on mathematics and geometry.
During World War I, he was isolated from European mathematicians and developed his existence theorem in class field theory, building on the work of Heinrich Weber. As an invited speaker, he presented a synopsis of this research in the talk "Sur quelques théorèmes généraux de la théorie des nombres algébriques" at the International Congress of Mathematicians in Strasbourg in 1920. There he found little recognition of the value of his research, since algebraic number theory was then studied mainly in Germany and German mathematicians were excluded from the Congress. Takagi published his theory in the same year in the journal of the University of Tokyo. The significance of Takagi's work was first recognized by Emil Artin in 1922; it was pointed out again by Carl Ludwig Siegel and, at the same time, by Helmut Hasse, who lectured in Kiel in 1923 on class field theory, presented Takagi's work in a lecture at the meeting of the DMV in 1925 in Danzig, and covered it in his Klassenkörperbericht (class field report) in the 1926 annual report of the DMV. Takagi was then internationally recognized as one of the world's leading number theorists. In 1932 he was vice-president of the International Congress of Mathematicians in Zurich, and in 1936 he was a member of the selection committee for the first Fields Medal.
He was also instrumental during World War II in the development of Japanese encryption systems; see Purple.
The Autonne-Takagi factorization of complex symmetric matrices is named in his honour.
Family
Sigekatu Kuroda - son-in-law. Mathematician.
S.-Y. Kuroda - grandson (son of Sigekatu Kuroda). Mathematician and Chomskyan linguist.
Bibliography
References
External links
Takagi Lectures by the Mathematical Society of Japan
Teiji Takagi: Collected Papers (2nd edition), edited by S. Iyanaga, K. Iwasawa, K. Kodaira and K. Yosida. Springer, 1990. 376 pp.
People from Gifu Prefecture
People of the Empire of Japan
1875 births
1960 deaths
19th-century Japanese mathematicians
20th-century Japanese mathematicians
Number theorists
University of Tokyo faculty
University of Göttingen alumni
University of Tokyo alumni
Recipients of the Order of Culture |
175115 | https://en.wikipedia.org/wiki/William%20F.%20Friedman | William F. Friedman | William Frederick Friedman (September 24, 1891 – November 12, 1969) was a US Army cryptographer who ran the research division of the Army's Signal Intelligence Service (SIS) in the 1930s, and parts of its follow-on services into the 1950s. In 1940, subordinates of his led by Frank Rowlett broke Japan's PURPLE cipher, thus disclosing Japanese diplomatic secrets before America's entrance into World War II.
Early life
Friedman was born Wolf Friedman in Chişinău, Bessarabia, the son of Frederick Friedman, a Jew from Bucharest who worked as a translator and linguist for the Russian Postal Service, and the daughter of a well-to-do wine merchant. Friedman's family fled Russia in 1892 to escape the virulent anti-Semitism there, ending up in Pittsburgh, Pennsylvania. Three years later, his first name was changed to William.
As a child, Friedman was introduced to cryptography in the short story "The Gold-Bug" by Edgar Allan Poe. He studied at the Michigan Agricultural College (known today as Michigan State University) in East Lansing and received a scholarship to work on genetics at Cornell University. Meanwhile, George Fabyan, who ran a private research laboratory to study any project that interested him, decided to set up his own genetics project and was referred to Friedman. Friedman joined Fabyan's Riverbank Laboratories outside Chicago in September 1915. As head of the Department of Genetics, he ran, among other projects, a study of the effects of moonlight on crop growth, for which he experimented with planting wheat during various phases of the moon.
Initial work in cryptology
Another of Fabyan's pet projects was research into secret messages which Sir Francis Bacon had allegedly hidden in various texts during the reigns of Elizabeth I and James I. The research was carried out by Elizabeth Wells Gallup. She believed that she had discovered many such messages in the works of William Shakespeare, and convinced herself that Bacon had written many, if not all, of Shakespeare's works. Friedman had become something of an expert photographer while working on his other projects, and was asked to travel to England on several occasions to help Gallup photograph historical manuscripts during her research. He became fascinated with the work as he courted Elizebeth Smith, Gallup's assistant and an accomplished cryptographer. They married, and he soon became director of Riverbank's Department of Codes and Ciphers as well as its Department of Genetics. During this time, Friedman wrote a series of eight papers on cryptography, collectively known as the "Riverbank Publications", including the first description of the index of coincidence, an important mathematical tool in cryptanalysis.
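The index of coincidence admits a very short computation; the following Python sketch illustrates the statistic (it is an illustration only, not code from the Riverbank Publications), scoring a text by the probability that two randomly drawn letters match:

    from collections import Counter

    def index_of_coincidence(text: str) -> float:
        """Probability that two letters drawn at random from the
        text (without replacement) are identical."""
        letters = [c for c in text.upper() if c.isalpha()]
        n = len(letters)
        if n < 2:
            return 0.0
        freqs = Counter(letters)
        # Sum over each letter of f*(f-1), normalized by n*(n-1).
        return sum(f * (f - 1) for f in freqs.values()) / (n * (n - 1))

    # English plaintext scores near 0.066; uniformly random letters
    # score near 1/26, about 0.038.
    print(round(index_of_coincidence("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"), 3))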
With the entry of the United States into World War I, Fabyan offered the services of his Department of Codes and Ciphers to the government. No Federal department existed for this kind of work (although both the Army and Navy had had embryonic departments at various times), and soon Riverbank became the unofficial cryptographic center for the US Government. During this period, the Friedmans broke a code used by German-funded Indian radicals in the US who planned to ship arms to India to gain independence from Britain. Analyzing the format of the messages, Riverbank realized that the code was based on a dictionary of some sort, a cryptographic technique common at the time. The Friedmans soon managed to decrypt most of the messages, but only long after the case had come to trial did the book itself come to light: a German-English dictionary published in 1880.
Signals Intelligence Service
The United States government decided to set up its own cryptological service, and sent Army officers to Riverbank to train under Friedman. To support the program, Friedman wrote a series of technical monographs, completing seven by early 1918. He then enlisted in the Army and went to France to serve as the personal cryptographer for General John J. Pershing. He returned to the US in 1920 and published an eighth monograph, "The Index of Coincidence and its Applications in Cryptography", considered by some to be the most important publication in modern cryptography to that time. His texts for Army cryptographic training were well thought of and remained classified for several decades.
In 1921 he became chief cryptanalyst for the War Department and later led the Signals Intelligence Service (SIS)—a position he kept for a quarter century. In 1929, after the American Black Chamber in New York City was disbanded, its files were entrusted to SIS, and the cryptographic and intelligence services were reorganized to suit their new position at the War Department.
Friedman coined several terms, including "cryptanalysis", and wrote many monographs on cryptography. One of these (written mostly in his spare time) was the first draft of his Elements of Cryptanalysis, which was later expanded to four volumes and became the U.S. Army's main cryptographic textbook and reference. Realizing that mathematical and language skills were essential to SIS's work, Friedman managed to get authority to hire three men with both mathematical training and language knowledge: Solomon Kullback, Frank Rowlett and Abraham Sinkov, each of whom went on to decades of distinguished service. He was also finally able to hire a man fluent in Japanese, John Hurt.
During this period Elizebeth Friedman continued her own work in cryptology, and became famous in a number of trials involving rum-runners and the Coast Guard and FBI during Prohibition.
Solution of cipher machines
During the 1920s, several new cipher machines were developed, generally based on typewriter mechanics and basic electrical circuitry. An early example was the Hebern Rotor Machine, designed in the US in 1915 by Edward Hebern. This system offered such security and simplicity of use that Hebern promoted it heavily to investors.
Friedman realized that the new rotor machines would be important, and devoted some time to analyzing Hebern's design. Over a period of years, he developed principles of analysis and discovered several problems common to most rotor-machine designs. Examples of dangerous features which allowed cracking of the generated code included having rotors step one position with each keypress, and putting the fastest rotor (the one that turns with every keypress) at either end of the rotor series. In this case, by collecting enough ciphertext and applying a standard statistical method known as the kappa test, he showed that he could, albeit with great difficulty, crack any cipher generated by such a machine.
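The kappa test mentioned above can be sketched in a few lines of Python. The toy Vigenère encipherment and sample text here are hypothetical stand-ins for intercepted rotor-machine traffic, and a sample this short is statistically noisy; the sketch only illustrates the idea of counting coincidences between a text and a shifted copy of itself:

    def vigenere(plain: str, key: str) -> str:
        """Toy periodic encipherment, standing in for machine output."""
        A = ord("A")
        return "".join(chr((ord(p) + ord(k) - 2 * A) % 26 + A)
                       for p, k in zip(plain, key * len(plain)))

    def coincidence_rate(text: str, offset: int) -> float:
        """Friedman's kappa: fraction of positions at which the text and
        a copy of itself shifted by `offset` carry the same letter."""
        pairs = list(zip(text, text[offset:]))
        return sum(a == b for a, b in pairs) / len(pairs)

    sample = vigenere("THEQUICKBROWNFOXJUMPSOVERTHELAZYDOGTHEQUICKBROWNFOX", "KEY")
    for offset in range(1, 10):
        print(offset, round(coincidence_rate(sample, offset), 3))
    # Offsets that are multiples of the key period (3 here) align letters
    # enciphered alike, so they tend toward the plaintext coincidence
    # rate (~0.066 for English); other offsets fall toward 1/26.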
Friedman used his understanding of rotor machines to develop several that were immune to his own attacks. The best of the lot was the SIGABA—destined to become the US's highest-security cipher machine in World War II after improvements by Frank Rowlett and Laurance Safford. Just over 10,000 were built. A patent on SIGABA was filed at the end of 1944 but kept secret until 2001, after Friedman had died, when it was finally issued.
In 1939, the Japanese introduced a new cipher machine for their most sensitive diplomatic traffic, replacing an earlier system that SIS referred to as "RED." The new cipher, which SIS called "PURPLE", was different and much more difficult. The Navy's cryptological unit (OP-20-G) and the SIS thought it might be related to earlier Japanese cipher machines, and agreed that SIS would handle the attack on the system. After several months trying to discover underlying patterns in PURPLE ciphertexts, an SIS team led by Friedman and Rowlett, in an extraordinary achievement, figured it out. PURPLE, unlike the German Enigma or the Hebern design, did not use rotors but stepper switches like those in automated telephone exchanges. Leo Rosen of SIS built a machine using — as was later discovered — the identical model of switch that the Japanese designer had chosen.
Thus, by the end of 1940, SIS had constructed an exact analog of the PURPLE machine without ever having seen one. With the duplicate machines and an understanding of PURPLE, SIS could decrypt increasing amounts of Japanese traffic. One such intercept was the message to the Japanese Embassy in Washington, D.C., ordering an end (on December 7, 1941) to negotiations with the US. The message gave a clear indication of impending war, and was to have been delivered to the US State Department only hours prior to the attack on Pearl Harbor. The controversy over whether the US had foreknowledge of the Pearl Harbor attack has roiled well into the 21st century.
In 1941, Friedman was hospitalized with a "nervous breakdown", widely attributed to the mental strain of his work on PURPLE. While he remained in hospital, a four-man team — Abraham Sinkov and Leo Rosen from SIS, and Lt. Prescott Currier and Lt. Robert Weeks from the U.S. Navy's OP-20-G — visited the British establishment at the "Government Code and Cypher School" at Bletchley Park. They gave the British a PURPLE machine, in exchange for details on the design of the Enigma machine and on how the British decrypted the Enigma cipher. Friedman himself visited Bletchley Park in April 1943 and played a key role in drawing up the 1943 BRUSA Agreement.
National Security Agency
Following World War II, Friedman remained in government signals intelligence. In 1949 he became head of the cryptographic division of the newly formed Armed Forces Security Agency (AFSA) and in 1952 became chief cryptologist for the National Security Agency (NSA) when it was formed to take over from AFSA. Friedman produced a classic series of textbooks, "Military Cryptanalysis", which was used to train NSA students. (These were revised and extended, under the title "Military Cryptanalytics", by Friedman's assistant and successor Lambros D. Callimahos, and used to train many additional cryptanalysts.) During his early years at NSA, he encouraged it to develop what were probably the first supercomputers, although he was never convinced a machine could have the "insight" of a human mind.
Friedman spent much of his free time trying to decipher the famous Voynich Manuscript, said to have been written sometime between 1403 and 1437. However, after four decades of study he finally had to admit defeat, contributing no more than an educated guess as to its origins and meaning.
In 1955, Friedman initiated, on behalf of the NSA, a secret agreement with Crypto AG, a Swiss manufacturer of encryption machines. The agreement resulted in many of the company's machines being compromised, so that the messages produced by them became crackable by the NSA.
Friedman retired in 1956 and, with his wife, turned his attention to the problem that had originally brought them together: examining Bacon's supposed codes. Together they wrote a book entitled The Cryptologist Looks at Shakespeare, which won a prize from the Folger Library and was published under the title The Shakespearean Ciphers Examined. The book demonstrated flaws in Gallup's work and in that of others who sought hidden ciphers in Shakespeare's work.
At NSA's request Friedman prepared Six Lectures Concerning Cryptography and Cryptanalysis, which he delivered at NSA. But later the Agency, concerned about security, confiscated the reference materials from Friedman's home.
Death and legacy
Friedman's health began to fail in the late 1960s, and he died in 1969. Friedman and his wife Elizebeth are buried in Arlington National Cemetery.
Friedman and his wife donated their archives to the library of the George C. Marshall Foundation, which also has had material reclassified and removed by the NSA.
Friedman has been inducted into the Military Intelligence Hall of Fame and there is a building named after William and Elizebeth at the NSA complex at Fort George G. Meade in Maryland. He was also presented the Medal for Merit by President Harry Truman, and the National Security Medal by Dwight Eisenhower.
Friedman has the distinction of having one of the longest known suppressed patent applications: a patent for a "cryptographic system", filed on July 25, 1933, and finally issued on August 1, 2000.
Children
Friedman had two children with his wife, Elizebeth. Barbara Friedman (later Atchison) (born 1923), and John Ramsay Friedman (1926–2010).
In popular culture
Commander Schoen, a character appearing in Neal Stephenson's novel Cryptonomicon, is to a large extent inspired by Friedman. Schoen shares a significant background and personality traits with Friedman, including being one of the top cryptanalysts of the U.S. Army, breaking Japanese codes prior to Japan's involvement in World War II, and the psychological problems that he suffered from as a result. In his acknowledgements, Stephenson writes "Among all these great wartime hackers, some kind of special recognition must go to William Friedman, who sacrificed his health to break the Japanese machine cipher called Purple before the war even began."
Awards and honors
1944: Commendation for Exceptional Civilian Service
1946: Medal for Merit
1955: National Security Medal.
See also
Cryptography
Elizebeth Smith Friedman
Magic (cryptography)
Riverbank Publications
References
Bibliography
Clark, Ronald W. (1977). The Man Who Broke Purple: The Life of Colonel William F. Friedman, Who Deciphered the Japanese Code in World War II. Boston: Little, Brown & Co.
Friedman, William F. Six Lectures on Cryptology, U.S. National Security Agency, 1965; declassified 1977, 1984.
Gannon, James (2001). Stealing Secrets, Telling Lies: How Spies and Codebreakers Helped Shape the Twentieth Century. Washington, D.C.: Brassey's.
Kahn, David (1966). The Codebreakers: The Story of Secret Writing. London: Weidenfeld and Nicolson.
Rowlett, Frank B. (1999). The Story of Magic: Memoirs of an American Cryptologic Pioneer. Laguna Hills, California: Aegean Park Press.
Jensen, Cora J. (apparently William Friedman). "Saying It" in Cipher, The Florists' Review, Vol. XLVI, No. 1196, p. 17 (28 Oct 1920), available from the Internet Archive as digitized microfiche 5205536_46_4
with Elizebeth S. Friedman, Riverbank Publication Number 21, Methods for the Reconstruction of Primary Alphabets, 1918, in Methods for the Solution of Ciphers, Publications 15-22, Rufus A. Long Digital Library of Cryptography, George C. Marshall Library, 1917-1922,
For references to other material, see The Friedman Collection: An Analytical Guide
External links
William F. Friedman Papers at George C. Marshall Foundation
Friedman Cryptologic Collection at George C. Marshall Foundation
NSA Hall of Honor page on Friedman
NSA William F. Friedman Collection of Official Papers
Colonel William F. Friedman (the Godfather of Cryptology) by Robert A. Reeves
Reprints of Friedman's work were available through Aegean Park Press
1891 births
1969 deaths
People from Kishinyovsky Uyezd
Moldovan Jews
Bessarabian Jews
Emigrants from the Russian Empire to the United States
American cryptographers
American people of Moldovan-Jewish descent
Riverbank Laboratories
Baconian theory of Shakespeare authorship
Mathematicians from Illinois
People from Kane County, Illinois
People from Washington, D.C.
Cornell University alumni
Michigan State University alumni
American people of World War I
American people of World War II
Military personnel from Pittsburgh
National Security Agency cryptographers
Signals Intelligence Service cryptographers
Medal for Merit recipients
Burials at Arlington National Cemetery |
175560 | https://en.wikipedia.org/wiki/Ciphertext | Ciphertext | In cryptography, ciphertext or cyphertext is the result of encryption performed on plaintext using an algorithm, called a cipher. Ciphertext is also known as encrypted or encoded information because it contains a form of the original plaintext that is unreadable by a human or computer without the proper cipher to decrypt it. This process prevents the loss of sensitive information via hacking. Decryption, the inverse of encryption, is the process of turning ciphertext into readable plaintext. Ciphertext is not to be confused with codetext because the latter is a result of a code, not a cipher.
Conceptual underpinnings
Let m be the plaintext message that Alice wants to secretly transmit to Bob, and let E_k be the encryption cipher, where k is a cryptographic key. Alice must first transform the plaintext into ciphertext, c, in order to securely send the message to Bob, as follows:
c = E_k(m)
In a symmetric-key system, Bob knows Alice's encryption key. Once the message is encrypted, Alice can safely transmit it to Bob (assuming no one else knows the key). In order to read Alice's message, Bob must decrypt the ciphertext using E_k^(-1), which is known as the decryption cipher, D_k:
m = D_k(c) = E_k^(-1)(c)
Alternatively, in a non-symmetric key system, everyone, not just Alice and Bob, knows the encryption key; but the decryption key cannot be inferred from the encryption key. Only Bob knows the decryption key d, and decryption proceeds as
m = D_d(c)
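As a concrete illustration of c = E_k(m) and m = D_k(c) in a symmetric-key system, here is a minimal Python sketch. It assumes the third-party `cryptography` package (an assumption of this example; any authenticated symmetric cipher would serve equally well):

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()       # the shared secret k
    cipher = Fernet(key)

    m = b"Meet me at noon."           # plaintext message
    c = cipher.encrypt(m)             # c = E_k(m): unreadable without the key
    assert cipher.decrypt(c) == m     # m = D_k(c): Bob recovers the plaintext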
Types of ciphers
The history of cryptography began thousands of years ago. Cryptography uses a variety of different types of encryption. Earlier algorithms were performed by hand and are substantially different from modern algorithms, which are generally executed by a machine.
Historical ciphers
Historical pen and paper ciphers used in the past are sometimes known as classical ciphers. They include:
Substitution cipher: the units of plaintext are replaced with ciphertext (e.g., Caesar cipher and one-time pad)
Polyalphabetic substitution cipher: a substitution cipher using multiple substitution alphabets (e.g., Vigenère cipher and Enigma machine)
Polygraphic substitution cipher: the unit of substitution is a sequence of two or more letters rather than just one (e.g., Playfair cipher)
Transposition cipher: the ciphertext is a permutation of the plaintext (e.g., rail fence cipher)
Historical ciphers are not generally used as a standalone encryption technique because they are quite easy to crack. Many of the classical ciphers, with the exception of the one-time pad, can be cracked using brute force.
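The Caesar cipher makes the point concrete: with only 25 usable shifts, a brute-force attack simply tries them all and lets the attacker pick out the readable candidate. A minimal Python sketch:

    import string

    ALPHABET = string.ascii_uppercase

    def caesar(text: str, shift: int) -> str:
        """Shift every letter by `shift` places, leaving other characters alone."""
        return "".join(
            ALPHABET[(ALPHABET.index(ch) + shift) % 26] if ch in ALPHABET else ch
            for ch in text.upper()
        )

    ciphertext = caesar("ATTACK AT DAWN", 3)   # "DWWDFN DW GDZQ"
    # Brute force: enumerate every possible shift and inspect the output.
    for shift in range(1, 26):
        print(shift, caesar(ciphertext, -shift))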
Modern ciphers
Modern ciphers are more secure than classical ciphers and are designed to withstand a wide range of attacks. An attacker should not be able to find the key used in a modern cipher, even if they know any amount of plaintext and corresponding ciphertext. Modern encryption methods can be divided into the following categories:
Private-key cryptography (symmetric key algorithm): the same key is used for encryption and decryption
Public-key cryptography (asymmetric key algorithm): two different keys are used for encryption and decryption
In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. In an asymmetric key algorithm (e.g., RSA), there are two separate keys: a public key is published and enables any sender to perform encryption, while a private key is kept secret by the receiver and enables only him to perform correct decryption.
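A toy numeric walk-through of the asymmetric idea, using the common textbook RSA parameters p = 61 and q = 53 (far too small to be secure; real keys run to thousands of bits):

    # Textbook RSA with tiny primes, for illustration only.
    p, q = 61, 53
    n = p * q                  # 3233; part of both the public and private keys
    e = 17                     # public exponent (may be published)
    d = 2753                   # private exponent: e*d = 1 mod (p-1)*(q-1)

    m = 65                     # a message encoded as an integer smaller than n
    c = pow(m, e, n)           # anyone may encrypt with the public key (n, e)
    assert pow(c, d, n) == m   # only the holder of d recovers the plaintext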
Symmetric key ciphers can be divided into block ciphers and stream ciphers. Block ciphers operate on fixed-length groups of bits, called blocks, with an unvarying transformation. Stream ciphers encrypt plaintext digits one at a time on a continuous stream of data and the transformation of successive digits varies during the encryption process.
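The stream-cipher idea can be sketched as a byte-at-a-time XOR against a keystream; the PRNG below is a toy stand-in for a proper keystream generator and offers no real security:

    import random

    def stream_xor(data: bytes, seed: int) -> bytes:
        """XOR each byte with the next keystream byte; XOR is self-inverse,
        so the same call both encrypts and decrypts."""
        keystream = random.Random(seed)   # toy generator, NOT cryptographically secure
        return bytes(b ^ keystream.randrange(256) for b in data)

    c = stream_xor(b"one digit at a time", seed=42)
    assert stream_xor(c, seed=42) == b"one digit at a time"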
Cryptanalysis
Cryptanalysis is the study of methods for obtaining the meaning of encrypted information without access to the secret information that is normally required to do so. Typically, this involves knowing how the system works and finding a secret key. Cryptanalysis is also referred to as codebreaking or cracking the code. Ciphertext is generally the easiest part of a cryptosystem to obtain and therefore is an important part of cryptanalysis. Depending on what information is available and what type of cipher is being analyzed, cryptanalysts can follow one or more attack models to crack a cipher.
Attack models
Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or code texts
Known-plaintext: the attacker has a set of ciphertexts to which he knows the corresponding plaintext
Chosen-plaintext attack: the attacker can obtain the ciphertexts corresponding to an arbitrary set of plaintexts of his own choosing
Batch chosen-plaintext attack: where the cryptanalyst chooses all plaintexts before any of them are encrypted. This is often the meaning of an unqualified use of "chosen-plaintext attack".
Adaptive chosen-plaintext attack: where the cryptanalyst makes a series of interactive queries, choosing subsequent plaintexts based on the information from the previous encryptions.
Chosen-ciphertext attack: the attacker can obtain the plaintexts corresponding to an arbitrary set of ciphertexts of his own choosing
Adaptive chosen-ciphertext attack
Indifferent chosen-ciphertext attack
Related-key attack: like a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys. The keys are unknown, but the relationship between them is known; for example, two keys that differ in one bit.
The ciphertext-only attack model is the weakest because it implies that the cryptanalyst has nothing but ciphertext. Modern ciphers rarely fail under this attack.
Famous ciphertexts
The Babington Plot ciphers
The Shugborough inscription
The Zimmermann Telegram
The Magic Words are Squeamish Ossifrage
The cryptogram in "The Gold-Bug"
Beale ciphers
Kryptos
Zodiac Killer ciphers
See also
Books on cryptography
Cryptographic hash function
Frequency analysis
RED/BLACK concept
:Category:Undeciphered historical codes and ciphers
References
Further reading
Helen Fouché Gaines, Cryptanalysis, 1939, Dover.
David Kahn, The Codebreakers – The Story of Secret Writing (1967)
Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, 1968.
Cryptography |
177842 | https://en.wikipedia.org/wiki/OASIS%20%28organization%29 | OASIS (organization) | The Organization for the Advancement of Structured Information Standards (OASIS; ) is a nonprofit consortium that works on the development, convergence, and adoption of open standards for cybersecurity, blockchain, Internet of things (IoT), emergency management, cloud computing, legal data exchange, energy, content technologies, and other areas.
History
OASIS was founded under the name "SGML Open" in 1993. It began as a trade association of Standard Generalized Markup Language (SGML) tool vendors to cooperatively promote the adoption of SGML through mainly educational activities, though some amount of technical activity was also pursued including an update of the CALS Table Model specification and specifications for fragment interchange and entity management.
In 1998, with the movement of the industry to XML, SGML Open changed its emphasis from SGML to XML, and changed its name to OASIS Open to be inclusive of XML and reflect an expanded scope of technical work and standards. The focus of the consortium's activities also moved from promoting adoption (as XML was getting much attention on its own) to developing technical specifications. In July 2000 a new technical committee process was approved. With the adoption of the process the manner in which technical committees were created, operated, and progressed their work was regularized. At the adoption of the process there were five technical committees; by 2004 there were nearly 70.
During 1999, OASIS was approached by UN/CEFACT, the committee of the United Nations dealing with standards for business, to jointly develop a new set of specifications for electronic business. The joint initiative, called "ebXML" and which first met in November 1999, was chartered for a three-year period. At the final meeting under the original charter, in Vienna, UN/CEFACT and OASIS agreed to divide the remaining work between the two organizations and to coordinate the completion of the work through a coordinating committee. In 2004 OASIS submitted its completed ebXML specifications to ISO TC154 where they were approved as ISO 15000.
The consortium has its headquarters in Burlington, Massachusetts, shared with other companies. On September 4, 2014, the consortium moved from 25 Corporate Drive Suite 103 to 35 Corporate Dr Suite 150, still on the same loop route.
Standards development
The following standards are under development or maintained by OASIS technical committees:
AMQP — Advanced Message Queuing Protocol, an application layer protocol for message-oriented middleware.
BCM — Business Centric-Methodology, a comprehensive approach and set of proven techniques that enable a service-oriented architecture (SOA) and support enterprise agility and interoperability.
CAM — Content Assembly Mechanism, is a generalized assembly mechanism for using templates of XML business transaction content and the associated rules. CAM templates augment schema syntax and provide implementers with the means to specify interoperable interchange patterns.
CAMP — Cloud Application Management for Platforms, is an API for managing public and private cloud applications.
CAP — Common Alerting Protocol, is an XML-based data format for exchanging public warnings and emergencies between alerting technologies.
CDP — Customer Data Platform, is a specification that aims to standardize the exchange of customer data across systems and silos by defining a web-based API using GraphQL.
CMIS — Content Management Interoperability Services, is a domain model and Web services standard for working with Enterprise content management repositories and systems.
CIQ — Customer Information Quality, a set of XML specifications for defining, representing, interoperating with and managing party information (e.g. name, address).
DocBook — DocBook, a markup language for technical documentation. It was originally intended for authoring technical documents related to computer hardware and software but it can be used for any other sort of documentation.
DITA — Darwin Information Typing Architecture, a modular and extensible XML-based language for topic-based information, such as for online help, documentation, and training.
EML — Election Markup Language, End to End information standards and processes for conducting democratic elections using XML-based information recording.
EDXL — Emergency Data Exchange Language, Suite of XML-based messaging standards that facilitate emergency information sharing between government entities and the full range of emergency-related organizations
GeoXACML — Geospatial eXtensible Access Control Markup Language, a geo-specific extension to XACML Version 2.0, mainly the geometric data-type urn:ogc:def:dataType:geoxacml:1.0:geometry and several geographic functions such as topological, bag, set, geometric and conversion functions.
KMIP — The Key Management Interoperability Protocol tries to establish a single, comprehensive protocol for the communication between enterprise key management systems and encryption systems.
Legal XML LegalDocumentML (Akoma Ntoso), LegalRuleML, Electronic Court Filing, and eNotarization standards.
MQTT — Message Queuing Telemetry Transport, a client-server, publish/subscribe messaging transport protocol. It is lightweight, open, simple, and designed to be easy to implement. These characteristics make it ideal for use in many situations, including constrained environments such as communication in machine-to-machine (M2M) and Internet of Things (IoT) contexts where a small code footprint is required and/or network bandwidth is at a premium (a minimal client sketch follows this list).
oBIX — open Building Information Exchange, an extensible XML specification for enterprise interaction with building-based (or other) control systems, including HVAC, Access Control, Intrusion Detection, and many others.
OData — Open Data Protocol (OData), Simplifying data sharing across disparate applications in enterprise, Cloud, and mobile devices.
OpenDocument — OASIS Open Document Format for Office Applications, an open document file format for saving office documents such as spreadsheets, memos, charts, and presentations.
OSLC — Open Services for Lifecycle Collaboration, (OSLC) develops standards that make it easy and practical for software lifecycle tools to share data with one another. See the OSLC community web site (http://open-services.net) for more details.
PKCS #11 - PKCS #11 standard defines a platform-independent API to cryptographic tokens, such as hardware security modules (HSM) and smart cards, and names the API itself "Cryptoki" (from "cryptographic token interface" and pronounced as "crypto-key" - but "PKCS #11" is often used to refer to the API as well as the standard that defines it).
SAML — Security Assertion Markup Language, a standard XML-based framework for the secure exchange of authentication and authorization information.
SARIF - Static Analysis Results Interchange Format, a standard JSON-based format for the output of static analysis tools.
SDD — Solution Deployment Descriptor, a standard XML-based schema defining a standardized way to express software installation characteristics required for lifecycle management in a multi-platform environment.
SPML — Service Provisioning Markup Language, a standard XML-based protocol for the integration and interoperation of service provisioning requests.
TOSCA — Topology and Orchestration Specification for Cloud Applications, a Standard to describe cloud services, the relationships between parts of the service, and the operational behavior of the services.
UBL — Universal Business Language, the international effort to define a royalty-free library of standard electronic business documents (purchase order, invoice, waybill, etc.) in XML. UBL 2.1 was approved as ISO/IEC 19845:2015. UBL serves as the basis for numerous electronic commerce networks and implementations worldwide.
UDDI — Universal Description Discovery and Integration, a platform-independent, XML-based registry for companies and individuals to list Web Services.
VirtIO — Virtual I/O, a standard for paravirtualized devices.
WebCGM — Web Computer Graphics Metafile, a profile of Computer Graphics Metafile (CGM), which adds Web linking and is optimized for Web applications in technical illustration, electronic documentation, geophysical data visualization, and similar fields.
WS-BPEL — Web Services Business Process Execution Language
WSDM — Web Services Distributed Management
XACML — eXtensible Access Control Markup Language, a standard XML-based protocol for access control policies.
XDI — XRI Data Interchange, a standard for sharing, linking, and synchronizing data ("dataweb") across multiple domains and applications using XML documents, eXtensible Resource Identifiers (XRIs), and a new method of distributed data control called a link contract.
XLIFF — XML Localization Interchange File Format, a XML-based format created to standardize localization.
XRI — eXtensible Resource Identifier, a URI-compatible scheme and resolution protocol for abstract identifiers used to identify and share resources across domains and applications.
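As an illustration of the publish/subscribe model behind the MQTT standard listed above, here is a minimal Python sketch. It assumes the third-party Eclipse `paho-mqtt` package (written against its long-standing 1.x callback API) and the public test broker `test.mosquitto.org`; neither is part of the OASIS specification itself:

    import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

    def on_connect(client, userdata, flags, rc):
        # Subscribe once the broker acknowledges the connection.
        client.subscribe("demo/sensors/temperature")

    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("test.mosquitto.org", 1883, keepalive=60)
    client.publish("demo/sensors/temperature", "21.5")
    client.loop_forever()  # blocks, dispatching traffic to the callbacks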
Members
Membership in the consortium requires fees, which must be renewed annually and which depend on the membership category applicants want to access. Members include representatives of the American Bar Association, Collabora, Dell, EclecticIQ, General Motors, IBM, ISO/IEC, KDE e.V., Microsoft, Novell, Oracle, Red Hat, The Document Foundation, universities, government agencies, individuals, and employees of other less well-known companies.
Member sections
Member sections are special interest groups within the consortium that focus on specific topics. These sections keep their own distinguishable identity and have full autonomy to define their work programme and agenda. The integration of the member section in the standardization process is organized via the technical committees.
Active member sections are for example:
Legal XML
IDTrust
Member sections may be wound down when they have achieved their objectives. The standards that they promoted are then maintained directly by the relevant OASIS technical committees. For example:
AMQP
WS-I
Patent disclosure controversy
Like many bodies producing open standards, e.g. Ecma, OASIS added a reasonable and non-discriminatory (RAND) licensing clause to its policy in February 2005. That amendment required participants to disclose any intent to apply for software patents on technologies under consideration in a standard. Unlike the W3C, which requires participants to offer royalty-free licenses to anyone using the resulting standard, OASIS offers a comparable Royalty Free on Limited Terms mode, along with a Royalty Free on RAND Terms mode and a RAND (reasonable and non-discriminatory) mode for its committees. Compared to the W3C, OASIS is thus less restrictive regarding companies' obligation to grant royalty-free licenses to the patents they own.
Controversy arose quickly, because this licensing was added silently and allows publication of standards that could require licensing-fee payments to patent holders. This situation could effectively eliminate the possibility of free/open-source implementations of these standards. Further, contributors could initially offer royalty-free use of their patents, only to impose per-unit fees later, after the standard has been accepted.
On April 11, 2005, The New York Times reported that IBM had committed all of its patents to the OASIS group free of charge. Larry Rosen, a software-law expert and the leader of the reaction that arose when OASIS quietly included the RAND clause in its policy, welcomed the initiative and supposed that OASIS would not continue using that policy, as the other companies involved would follow IBM's lead. History proved him wrong: the RAND policy has still not been removed, and other commercial companies have not published such free-of-charge statements towards OASIS.
Patrick Gannon, president and CEO of OASIS from 2001 to 2008, minimized the risk that a company could take advantage of a standard to request royalties when it has been established: "If it's an option nobody uses, then what's the harm?".
Sam Hiser, former marketing lead of the now-defunct OpenOffice.org, explained that such patents on an open standard are counterproductive and inappropriate. He also argued that IBM and Microsoft were shifting their standardization efforts from the W3C to OASIS, probably in order to leverage their patent portfolios in the future. Hiser also attributed the RAND change in the OASIS policy to Microsoft.
The RAND terms could indeed theoretically allow any company involved to leverage its patents in the future. But the amendment was probably added to attract more companies to the consortium and to encourage contributions from potential participants. Big actors like Microsoft could well have applied pressure and made such a clause a sine qua non condition of joining the consortium, possibly jeopardizing or boycotting the standard had it not been present.
Criticism
Doug Mahugh — while working for Microsoft (a promoter of Office Open XML, a Microsoft document format competing with OASIS's ISO/IEC 26300, i.e. ODF v1.0) — claimed that "many countries have expressed frustration about the pace of OASIS's responses to defect reports that have been submitted on ISO/IEC 26300 and the inability for SC 34 members to participate in the maintenance of ODF." However, Rob Weir, co-chair of the OASIS ODF Technical Committee, noted that at the time "the ODF TC had received zero defect reports from any ISO/IEC national body other than Japan". He added that the submitter of the original Japanese defect report, Murata Makoto, was satisfied with the preparation of the errata. Weir also self-published a blog post accusing Microsoft of enlisting people to modify the ODF and OpenXML Wikipedia articles in ways that tried to make ODF sound risky to adopt.
See also
UIMA
References
External links
OASIS specifications
A Call to Action in OASIS
OASIS making it easier to use standards without fee worries
Standards organizations in the United States
Web services
XML organizations
Internet of things |
180558 | https://en.wikipedia.org/wiki/Domestic%20Security%20Enhancement%20Act%20of%202003 | Domestic Security Enhancement Act of 2003 | The Domestic Security Enhancement Act of 2003 was draft legislation written by United States Department of Justice during the George W. Bush administration, under the tenure of United States Attorney General John Ashcroft. The Center for Public Integrity obtained a copy of the draft marked "confidential" on February 7, 2003, and posted it on its Web site along with commentary. It was sometimes called Patriot II, after the USA PATRIOT Act, which was enacted in 2001. It was never introduced to the United States Congress.
The draft version of the bill would have expanded the powers of the United States federal government while simultaneously curtailing judicial review of these powers. Members of the United States Congress said that they had not seen the drafts, though the documents obtained by the CPI indicated that Speaker of the United States House of Representatives Dennis Hastert and Vice President Dick Cheney had received copies.
Provisions of the draft version included:
Removal of court-ordered prohibitions against police agencies spying on domestic groups.
The Federal Bureau of Investigation would be granted powers to conduct searches and surveillance based on intelligence gathered in foreign countries without first obtaining a court order.
Creation of a DNA database of suspected terrorists.
Prohibition of any public disclosure of the names of alleged terrorists, including those who have been arrested.
Exemptions from civil liability for people and businesses who voluntarily turn private information over to the government.
Criminalization of the use of encryption to conceal incriminating communications.
Automatic denial of bail for persons accused of terrorism-related crimes, reversing the ordinary common-law burden-of-proof principle: persons charged with terrorist acts would be required to demonstrate why they should be released on bail, rather than the government being required to demonstrate why they should be held.
Expansion of the list of crimes eligible for the death penalty.
The Environmental Protection Agency would be prevented from releasing "worst-case scenario" information to the public about chemical plants.
United States citizens whom the government finds to be either members of, or providing material support to, terrorist groups could have their citizenship revoked and be deported to foreign countries.
Some provisions of this act have been tacked onto other bills such as the Senate Spending bill and subsequently passed.
The American Civil Liberties Union and the Bill of Rights Defense Committee have both been vocal opponents of the PATRIOT Act of 2001, the proposed (as of 2003) PATRIOT 2 Act, and other associated legislation made in response to the threat of domestic terrorism, which they believe violates either the letter or the spirit of the U.S. Bill of Rights.
On January 31, 2006, the Center for Public Integrity published a story on its website that claimed that this proposed legislation undercut the Bush administration's legal rationale of its NSA wiretapping program.
See also
ADVISE
National security
United States Bill of Rights
COINTELPRO
Bill C-51 (41st Canadian Parliament, 2nd Session)
External links
Original 2003 report from the Center for Public Integrity including draft copies of the legislation.
March 17, 2003 letter in opposition to DSEA from a coalition of organizations from the Center for Democracy and Technology
Analysis of "Patriot II" from the Electronic Frontier Foundation
Privacy law in the United States
Patriot Act
United States proposed federal legislation |
180609 | https://en.wikipedia.org/wiki/Identity%20theft | Identity theft | Identity theft occurs when someone uses another person's personal identifying information, like their name, identifying number, or credit card number, without their permission, to commit fraud or other crimes. The term identity theft was coined in 1964. Since that time, the definition of identity theft has been statutorily defined throughout both the U.K. and the United States as the theft of personally identifiable information. Identity theft deliberately uses someone else's identity as a method to gain financial advantages or obtain credit and other benefits, and perhaps to cause other person's disadvantages or loss. The person whose identity has been stolen may suffer adverse consequences, especially if they are falsely held responsible for the perpetrator's actions. Personally identifiable information generally includes a person's name, date of birth, social security number, driver's license number, bank account or credit card numbers, PINs, electronic signatures, fingerprints, passwords, or any other information that can be used to access a person's financial resources.
Determining the link between data breaches and identity theft is challenging, primarily because identity theft victims often do not know how their personal information was obtained. According to a report done for the FTC, identity theft is not always detectable by the individual victims. Identity fraud is often, but not necessarily, the consequence of identity theft: someone can steal or misappropriate personal information without then committing identity theft with it, such as when a major data breach occurs. A US Government Accountability Office study determined that "most breaches have not resulted in detected incidents of identity theft", though the report also warned that "the full extent is unknown". A later unpublished study by Carnegie Mellon University noted that "Most often, the causes of identity theft is not known", but reported another study's conclusion that "the probability of becoming a victim to identity theft as a result of a data breach is ... around only 2%". For example, one of the largest data breaches, which affected over four million records, resulted in only about 1,800 instances of identity theft, according to the company whose systems were breached.
An October 2010 article entitled "Cyber Crime Made Easy" explained the level to which hackers are using malicious software. As Gunter Ollmann, Chief Technology Officer of security at Microsoft, said, "Interested in credit card theft? There's an app for that." This statement summed up the ease with which these hackers are accessing all kinds of information online. The new program for infecting users' computers was called Zeus, and the program is so hacker-friendly that even an inexperienced hacker can operate it. Although the hacking program is easy to use, that fact does not diminish the devastating effects that Zeus (or other software like Zeus) can have on a computer and its user. For example, programs like Zeus can steal credit card information, important documents, and even documents necessary for homeland security. If a hacker were to gain this information, it would mean identity theft or even a possible terrorist attack. The ITAC says that about 15 million Americans had their identity stolen in 2012.
Types
Sources such as the Non-profit Identity Theft Resource Center sub-divide identity theft into five categories:
Criminal identity theft (posing as another person when apprehended for a crime)
Financial identity theft (using another's identity to obtain credit, goods, and services)
Identity cloning (using another's information to assume his or her identity in daily life)
Medical identity theft (using another's identity to obtain medical care or drugs)
Child identity theft.
Identity theft may be used to facilitate or fund other crimes, including illegal immigration, terrorism, phishing, and espionage. There are cases of identity cloning to attack payment systems, including online credit card processing and medical insurance.
Identity cloning and concealment
In this situation, the identity thief impersonates someone else to conceal their own true identity. Examples are illegal immigrants hiding their illegal status, people hiding from creditors or other individuals and those who simply want to become "anonymous" for personal reasons. Another example is posers, a label given to people who use someone else's photos and information on social networking sites. Posers mostly create believable stories involving friends of the real person they are imitating. Unlike identity theft used to obtain credit which usually comes to light when the debts mount, concealment may continue indefinitely without being detected, particularly if the identity thief can obtain false credentials to pass various authentication tests in everyday life.
Criminal identity theft
When a criminal fraudulently identifies themselves to police as another individual at the point of arrest, it is sometimes referred to as "Criminal Identity Theft." In some cases, criminals have previously obtained state-issued identity documents using credentials stolen from others, or have simply presented a fake ID. Provided the subterfuge works, charges may be placed under the victim's name, letting the criminal off the hook. Victims might only learn of such incidents by chance, for example by receiving a court summons, discovering their driver's licenses are suspended when stopped for minor traffic violations, or through background checks performed for employment purposes.
It can be difficult for the victim of criminal identity theft to clear their record. The steps required to clear the victim's incorrect criminal record depend on which jurisdiction the crime occurred and whether the true identity of the criminal can be determined. The victim might need to locate the original arresting officers and prove their own identity by some reliable means such as fingerprinting or DNA testing and may need to go to a court hearing to be cleared of the charges. Obtaining an expungement of court records may also be required. Authorities might permanently maintain the victim's name as an alias for the criminal's true identity in their criminal records databases. One problem that victims of criminal identity theft may encounter is that various data aggregators might still have incorrect criminal records in their databases even after court and police records are corrected. Thus a future background check may return the incorrect criminal records. This is just one example of the kinds of impact that may continue to affect the victims of identity theft for some months or even years after the crime, aside from the psychological trauma that being 'cloned' typically engenders.
Synthetic identity theft
A variation of identity theft that has recently become more common is synthetic identity theft, in which identities are completely or partially fabricated. The most common technique involves combining a real social security number with a name and birthdate other than those associated with the number. Synthetic identity theft is more difficult to track as it doesn't show on either person's credit report directly but may appear as an entirely new file in the credit bureau or as a subfile on one of the victim's credit reports. Synthetic identity theft primarily harms the creditors who unwittingly grant the fraudsters credit. Individual victims can be affected if their names become confused with the synthetic identities, or if negative information in their subfiles impacts their credit ratings.
Medical identity theft
Privacy researcher Pam Dixon, the founder of the World Privacy Forum, coined the term medical identity theft and released the first major report about this issue in 2006. In the report, she defined the crime for the first time and made the plight of victims public. The report's definition of the crime is that medical identity theft occurs when someone seeks medical care under the identity of another person. Insurance theft is also very common: if a thief has a victim's insurance information and/or insurance card, they can seek medical attention posing as the victim. In addition to risks of financial harm common to all forms of identity theft, the thief's medical history may be added to the victim's medical records. Inaccurate information in the victim's records is difficult to correct and may affect future insurability or cause doctors to rely on misinformation to deliver inappropriate care. After the publication of the report, which contained a recommendation that consumers receive notifications of medical data breach incidents, California passed a law requiring this, and then finally HIPAA was expanded to also require medical breach notification when breaches affect 500 or more people. Data collected and stored by hospitals and other organizations such as medical aid schemes is up to 10 times more valuable to cybercriminals than credit card information.
Child identity theft
Child identity theft occurs when a minor's identity is used by another person for the impostor's personal gain. The impostor can be a family member, a friend, or even a stranger who targets children. The Social Security numbers of children are valued because they do not have any information associated with them. Thieves can establish lines of credit, obtain driver's licenses, or even buy a house using a child's identity. This fraud can go undetected for years, as most children do not discover the problem until years later. Child identity theft is fairly common, and studies have shown that the problem is growing. The largest study on child identity theft, as reported by Richard Power of the Carnegie Mellon Cylab with data supplied by AllClear ID, found that of 40,000 children, 10.2% were victims of identity theft.
The Federal Trade Commission (FTC) estimates that about nine million people become victims of identity theft in the United States each year. It was also estimated that in 2008, 630,000 people under the age of 19 were victims of identity theft, leaving them with debts of about $12,799 that were not theirs.
Not only are children in general big targets of identity theft, but children in foster care are even bigger targets. This is because they are most likely moved around quite frequently and their SSNs are shared with multiple people and agencies. Foster children are even more often victims of identity theft within their own families and among other relatives. Young people in foster care who are victims of this crime are usually left alone to struggle and figure out how to fix their newly formed bad credit.
Financial identity theft
The most common type of identity theft is related to finance. Financial identity theft includes obtaining credit, loans, goods, and services while claiming to be someone else.
Tax identity theft
One of the major identity theft categories is tax identity theft. The most common method is to use a person's authentic name, address, and Social Security number to file a tax return with false information, and have the resulting refund direct-deposited into a bank account controlled by the thief. A thief may also use the stolen identity to obtain employment; the employer then reports income under the real taxpayer's name, which can get the taxpayer in trouble with the IRS.
IRS Form 14039 (Identity Theft Affidavit) helps taxpayers fight tax-related identity theft. Filing the form puts the IRS on alert, and a person confirmed to be a victim of tax-related identity theft is issued an Identity Protection Personal Identification Number (IP PIN), a six-digit code used to verify the taxpayer's identity when filing tax returns.
Techniques for obtaining and exploiting personal information
Identity thieves typically obtain and exploit personally identifiable information about individuals, or various credentials they use to authenticate themselves, to impersonate them. Examples include:
Rummaging through rubbish for personal information (dumpster diving)
Retrieving personal data from redundant IT equipment and storage media including PCs, servers, PDAs, mobile phones, USB memory sticks, and hard drives that have been disposed of carelessly at public dump sites, given away, or sold on without having been properly sanitized
Using public records about individual citizens, published in official registers such as electoral rolls
Stealing bank or credit cards, identification cards, passports, authentication tokens ... typically by pickpocketing, housebreaking or mail theft
Common-knowledge questioning schemes that offer account verification, such as "What's your mother's maiden name?", "What was your first car model?", or "What was your first pet's name?"
Skimming information from bank or credit cards using compromised or hand-held card readers, and creating clone cards
Using "contactless" credit card readers to acquire data wirelessly from RFID-enabled passports
Shoulder surfing, in which an individual discreetly watches or listens as others provide valuable personal information. This is particularly easy in crowded places, where it is relatively simple to observe someone filling out forms, entering PINs at ATMs, or even typing passwords on smartphones.
Stealing personal information from computers using breaches in browser security or malware such as Trojan horse keystroke logging programs or other forms of spyware
Hacking computer networks, systems, and databases to obtain personal data, often in large quantities
Exploiting breaches that result in the publication or more limited disclosure of personal information such as names, addresses, Social Security numbers, or credit card numbers
Advertising bogus job offers to accumulate resumes and applications typically disclosing applicants' names, home and email addresses, telephone numbers, and sometimes their banking details
Exploiting insider access and abusing the rights of privileged IT users to access personal data on their employers' systems
Infiltrating organizations that store and process large amounts or particularly valuable personal information
Impersonating trusted organizations in emails, SMS text messages, phone calls, or other forms of communication to dupe victims into disclosing their personal information or login credentials, typically on a fake corporate website or data collection form (phishing)
Brute-force attacking weak passwords and using inspired guesswork to compromise weak password reset questions
Obtaining castings of fingers for falsifying fingerprint identification.
Browsing social networking websites for personal details published by users, often using this information to appear more credible in subsequent social engineering activities
Diverting victims' email or post to obtain personal information and credentials such as credit cards, billing, and bank/credit card statements, or to delay the discovery of new accounts and credit agreements opened by the identity thieves in the victims' names
Using false pretenses to trick individuals, customer service representatives, and help desk workers to disclose personal information and login details or changing user passwords/access rights (pretexting)
Stealing cheques (checks) to acquire banking information, including account numbers and bank codes
Guessing Social Security numbers by using information found on Internet social networks such as Facebook and MySpace
Exploiting weak security and privacy protection on photos that can easily be clicked through and downloaded from social networking sites
Befriending strangers on social networks and taking advantage of their trust until private information is given (social engineering)
Indicators
The majority of identity theft victims do not realize that they are a victim until it has negatively impacted their lives. Many people do not find out that their identities have been stolen until they are contacted by financial institutions or discover suspicious activity on their accounts. According to an article by Herb Weisbaum, everyone in the US should assume that their personal information has been compromised at some point. It is therefore of great importance to watch for warning signs that your identity has been compromised. The following are eleven indicators that someone else might be using your identity.
Credit or debit card charges for goods or services you are not aware of, including unauthorized withdrawals from your account
Receiving calls from a credit or debit card company's fraud control department warning of possible suspicious activity on your credit card account
Receiving credit cards that you did not apply for
Receiving information that a credit scoring investigation was done; these are often done when a loan or phone subscription is applied for
Checks bouncing for lack of sufficient money in your account to cover the amount; this might be a result of unauthorized withdrawals from your account
Identity theft criminals may commit crimes with your personal information; you may not realize this until the police arrive at your door to arrest you for crimes that you did not commit
Sudden changes to your credit score may indicate that someone else is using your credit cards
Bills for services like gas, water, or electricity not arriving on time; this can be an indication that your mail was stolen or redirected
Not being approved for loans because your credit report indicates that you are not creditworthy
Receiving notification from your post office informing you that your mail is being forwarded to another, unknown address
Your yearly tax return indicating that you have earned more than you actually have; this might indicate that someone is using your national identification number (e.g., an SSN) to report their earnings to the tax authorities
Individual identity protection
The acquisition of personal identifiers is made possible through serious breaches of privacy. For consumers, this is usually a result of them naively providing their personal information or login credentials to identity thieves (e.g., in a phishing attack), but identity-related documents such as credit cards, bank statements, utility bills, checkbooks, etc. may also be physically stolen from vehicles, homes, offices, and not least letterboxes, or directly from victims by pickpockets and bag snatchers. Guardianship of personal identifiers by consumers is the most common intervention strategy recommended by the US Federal Trade Commission, Canadian Phone Busters, and most sites that address identity theft. Such organizations offer recommendations on how individuals can prevent their information from falling into the wrong hands.
Identity theft can be partially mitigated by not identifying oneself unnecessarily (a form of information security control known as risk avoidance). This implies that organizations, IT systems, and procedures should not demand excessive amounts of personal information or credentials for identification and authentication. Requiring, storing, and processing personal identifiers (such as Social Security number, national identification number, driver's license number, credit card number, etc.) increases the risks of identity theft unless this valuable personal information is adequately secured at all times. Committing personal identifiers to memory is a sound practice that can reduce the risk of a would-be identity thief obtaining these records. To help in remembering numbers such as social security numbers and credit card numbers, it can be helpful to use mnemonic techniques or memory aids such as the mnemonic Major System.
Identity thieves sometimes impersonate dead people, using personal information obtained from death notices, gravestones, and other sources to exploit delays between the death and the closure of the person's accounts, the inattentiveness of grieving families, and weaknesses in the processes for credit-checking. Such crimes may continue for some time until the deceased's families or the authorities notice and react to anomalies.
In recent years, commercial identity theft protection/insurance services have become available in many countries. These services purport to help protect the individual from identity theft, or to help detect that identity theft has occurred, in exchange for a monthly or annual membership fee or premium. The services typically work either by setting fraud alerts on the individual's credit files with the three major credit bureaus or by setting up credit report monitoring with the credit bureaus. While identity theft protection/insurance services have been heavily marketed, their value has been called into question.
Potential outcomes
Identity theft is a serious problem in the United States. In a 2018 study, it was reported that 60 million Americans' identities had been wrongfully acquired. In response, under advisement from the Identity Theft Resource Center, new bills have been implemented to improve security, such as requiring electronic signatures and Social Security verification.
Several types of identity theft are used to gather information; one of the most common types occurs when consumers make online purchases. A study was conducted with 190 people to determine the relationship between the constructs of fear of financial losses and reputational damage. The study concluded that identity theft was positively correlated with reputational damage, while the relationship between perceived risk and online purchase intention was negative. The significance of this study is that online companies are more aware of the potential harm that can be done to their consumers, and are therefore searching for ways to reduce consumers' perceived risk without losing business.
Victims of identity theft may face years of effort proving to the legal system that they are the true person, leading to emotional strain and financial losses. Most identity theft is perpetrated by a family member of the victim, and some victims may be unable to obtain new credit cards or open new bank accounts or loans.
Identity protection by organizations
In their May 1998 testimony before the United States Senate, the Federal Trade Commission (FTC) discussed the sale of Social Security numbers and other personal identifiers by credit-raters and data miners. The FTC agreed to the industry's self-regulating principles restricting access to information on credit reports. According to the industry, the restrictions vary according to the category of customer. Credit reporting agencies gather and disclose personal and credit information to a wide business client base.
Poor stewardship of personal data by organizations, resulting in unauthorized access to sensitive data, can expose individuals to the risk of identity theft. The Privacy Rights Clearinghouse has documented over 900 individual data breaches by US companies and government agencies since January 2005, which together have involved over 200 million total records containing sensitive personal information, many containing social security numbers. Poor corporate diligence standards which can result in data breaches include:
failure to shred confidential information before throwing it into dumpsters
failure to ensure adequate network security
credit card numbers stolen by call center agents and people with access to call recordings
the theft of laptop computers or portable media being carried off-site containing vast amounts of personal information. The use of strong encryption on these devices can reduce the chance of data being misused should a criminal obtain them.
the brokerage of personal information to other businesses without ensuring that the purchaser maintains adequate security controls
Failure of governments, when registering sole proprietorships, partnerships, and corporations, to determine if the officers listed in the Articles of Incorporation are who they say they are. This potentially allows criminals access to personal information through credit rating and data mining services.
The failure of corporate or government organizations to protect consumer privacy, client confidentiality and political privacy has been criticized for facilitating the acquisition of personal identifiers by criminals.
Using various types of biometric information, such as fingerprints, for identification and authentication has been cited as a way to thwart identity thieves; however, there are technological limitations and privacy concerns associated with these methods as well.
Market
There is an active market for buying and selling stolen personal information, which occurs mostly in darknet markets but also in other black markets. Criminals increase the value of the stolen data by aggregating it with publicly available data and sell it again for a profit, increasing the damage that can be done to the people whose data was stolen.
Legal responses
International
In March 2014, after it was learned that two passengers with stolen passports were on board Malaysia Airlines Flight 370, which went missing on 8 March 2014, it came to light that Interpol maintains a database of 40 million lost and stolen travel documents from 157 countries, which Interpol makes available to governments and the public, including airlines and hotels. The Stolen and Lost Travel Documents (SLTD) database, however, is rarely used. Big News Network (which is based in the UAE) reported that Interpol Secretary-General Ronald K. Noble had told a forum in Abu Dhabi the previous month: "The bad news is that, despite being incredibly cost-effective and deployable to virtually anywhere in the world, only a handful of countries are systematically using SLTD to screen travelers. The result is a major gap in our global security apparatus that is left vulnerable to exploitation by criminals and terrorists."
Australia
In Australia, each state has enacted laws that deal with different aspects of identity theft or fraud. Some states have amended relevant criminal laws to reflect crimes of identity theft, such as the Criminal Law Consolidation Act 1935 (SA), the Crimes Amendment (Fraud, Identity and Forgery Offences) Act 2009, and, in Queensland, the Criminal Code 1899 (QLD). Other states and territories are still developing regulatory frameworks for identity theft, such as Western Australia with the Criminal Code Amendment (Identity Crime) Bill 2009.
At the Commonwealth level, identity crime offences are covered by the Criminal Code Amendment (Theft, Fraud, Bribery and Related Offences) Act 2000, which amended certain provisions within the Criminal Code Act 1995.
Between 2014 and 2015 in Australia, there were 133,921 fraud and deception offences, an increase of 6% from the previous year, according to figures reported by the Attorney-General's Department. There are also high indirect costs associated with each incident: for example, the total indirect cost of police-recorded fraud is $5,774,081.
Likewise, each state has enacted its own privacy laws to prevent the misuse of personal information and data. The Commonwealth Privacy Act applies only to Commonwealth and territory agencies and to certain private-sector bodies (where, for example, they deal with sensitive records, such as medical records, or have more than $3 million in annual turnover).
Canada
Under section 402.2 of the Criminal Code, identity theft is an offence.
Under section 403 of the Criminal Code, identity fraud (personation) is an offence.
In Canada, the Privacy Act (federal legislation) covers only the federal government, its agencies, and Crown corporations. Each province and territory has its own privacy law and privacy commissioners to limit the storage and use of personal data.
For the private sector, the purpose of the Personal Information Protection and Electronic Documents Act (2000, c. 5) (known as PIPEDA) is to establish rules to govern the collection, use, and disclosure of personal information; except for the provinces of Quebec, Ontario, Alberta and British Columbia where provincial laws have been deemed substantially similar.
France
In France, a person convicted of identity theft can be sentenced up to five years in prison and fined up to €75,000.
Hong Kong
Under Hong Kong law, fraud is an offence under section 16A of the Theft Ordinance (Cap. 210).
The Personal Data (Privacy) Ordinance established the post of Privacy Commissioner for Personal Data and regulates how much personal information one can collect, retain, and destroy. This legislation also provides citizens the right to request information held by businesses and the government, to the extent provided by this law.
India
Under the Information Technology Act, 2000, Chapter IX, section 66C prescribes punishment for identity theft.
Philippines
Social networking sites are among the best-known channels for impostors ("posers") in the online community, giving users the freedom to post any information they want without any verification that the account is being used by the real person.
The Philippines, which ranks eighth in the numbers of users of Facebook and other social networking sites (such as Twitter, Multiply and Tumblr), has been known as a source of various identity theft problems. Identities of people who carelessly put personal information on their profiles can easily be stolen just by simple browsing. Some people meet online, get to know each other through Facebook chat, and exchange messages that share private information. Others get romantically involved with online friends and end up sharing too much information (such as their social security number, bank account, home address, and company address).
This phenomenon led to the creation of the Cybercrime Prevention Act of 2012 (Republic Act No. 10175). Section 2 of this act states that it recognizes the importance of communication and multimedia for the development, exploitation, and dissemination of information, but violators will be punished by the law with imprisonment, a fine of at least ₱200,000 but not exceeding ₱1,000,000, or (depending on the damage caused) both.
Sweden
Sweden has had relatively few problems with identity theft because only Swedish identity documents were accepted for identity verification. Stolen documents are traceable by banks and certain other institutions. Banks are required to check the identity of anyone withdrawing money or getting loans. If a bank gives money to someone using an identity document that has been reported as stolen, the bank must take this loss. Since 2008, any EU passport is valid in Sweden for identity verification, and Swedish passports are valid all over the EU. This makes it harder to detect stolen documents, but banks in Sweden still must ensure that stolen documents are not accepted.
Other types of identity theft have become more common in Sweden. One common example is ordering a credit card in the name of someone who has an unlocked letterbox and is not home during the daytime. The thief steals the letter with the credit card and the letter with the code, which typically arrives a few days later. Usage of a stolen credit card is difficult in Sweden since an identity document or a PIN code is normally demanded. If a shop does not demand either, it must take the loss from accepting a stolen credit card. The practice of observing someone using their credit card's PIN code, stealing the credit card or skimming it, and then using the credit card has become more common.
Legally, Sweden is an open society. The Principle of Public Access states that all information (e.g. addresses, incomes, taxes) kept by public authorities must be available for anyone, except in certain cases (for example, the addresses of people who need to hide are restricted). This makes fraud easier.
Until 2016, there were no laws that specifically prohibited using someone else's identity; there were only laws regarding any indirect damages caused. Impersonating someone else for financial gain is a type of fraud under the Criminal Code. Impersonating someone else to discredit them, for instance by hacking into their social media accounts and posting provocative material, is considered libel, but it is difficult to convict someone of this crime. In late 2016, a new law was introduced which partially banned unauthorized identity usage.
United Kingdom
In the United Kingdom, personal data is protected by the Data Protection Act 1998. The Act covers all personal data which an organization may hold, including names, birthday and anniversary dates, addresses, and telephone numbers.
Under English law (which extends to Wales but not to Northern Ireland or Scotland), the deception offences under the Theft Act 1968 increasingly contend with identity theft situations. In R v Seward (2005) EWCA Crim 1941, the defendant acted as the "frontman" in the use of stolen credit cards and other documents to obtain goods, acquiring goods to the value of £10,000 for others who were unlikely ever to be identified. The Court of Appeal considered the sentencing policy for deception offences involving "identity theft" and concluded that a prison sentence was required. Henriques J. said at para 14: "Identity fraud is a particularly pernicious and prevalent form of dishonesty calling for, in our judgment, deterrent sentences."
Statistics released by CIFAS (UK's Fraud Prevention Service) show that there were 89,000 victims of identity theft in the UK in 2010 and 85,000 victims in 2009. Men in their 30s and 40s are the most common victims. Identity fraud now accounts for nearly half of all frauds recorded.
United States
The increase in crimes of identity theft led to the drafting of the Identity Theft and Assumption Deterrence Act. In 1998, the Federal Trade Commission appeared before the United States Senate. The FTC discussed crimes which exploit consumer credit to commit loan fraud, mortgage fraud, lines-of-credit fraud, credit card fraud, and commodities and services frauds. The Identity Theft Deterrence Act (2003) [ITADA] amended U.S. Code Title 18, § 1028 ("Fraud related to activity in connection with identification documents, authentication features, and information"). The statute now makes it a federal crime to "knowingly transfer, possess, or use, without lawful authority" any "means of identification", alongside unlawful possession of identification documents. However, for federal jurisdiction to prosecute, the crime must include an "identification document" that either: (a) is purportedly issued by the United States, (b) is used or intended to defraud the United States, (c) is sent through the mail, or (d) is used in a manner that affects interstate or foreign commerce. See 18 U.S.C. § 1028(c). Punishment can be up to 5, 15, 20, or 30 years in federal prison, plus fines, depending on the underlying crime, per 18 U.S.C. § 1028(b). In addition, punishments for the unlawful use of a "means of identification" were strengthened in § 1028A ("Aggravated Identity Theft"), allowing for a consecutive sentence under specific enumerated felony violations as defined in § 1028A(c)(1) through (11).
The Act also provides the Federal Trade Commission with authority to track the number of incidents and the dollar value of losses. Their figures relate mainly to consumer financial crimes and not the broader range of all identification-based crimes.
If charges are brought by state or local law enforcement agencies, different penalties apply depending on the state.
Six Federal agencies conducted a joint task force to increase the ability to detect identity theft. Their joint recommendation on "red flag" guidelines is a set of requirements on financial institutions and other entities which furnish credit data to credit reporting services to develop written plans for detecting identity theft. The FTC has determined that most medical practices are considered creditors and are subject to requirements to develop a plan to prevent and respond to patient identity theft. These plans must be adopted by each organization's board of directors and monitored by senior executives.
Identity theft complaints as a percentage of all fraud complaints decreased from 2004 to 2006. The Federal Trade Commission reported that fraud complaints in general were growing faster than ID theft complaints. The findings were similar in two other FTC studies done in 2003 and 2005. In 2003, 4.6 percent of the US population said they were a victim of ID theft. In 2005, that number had dropped to 3.7 percent of the population. The commission's 2003 estimate was that identity theft accounted for some $52.6 billion of losses in the preceding year alone and affected more than 9.91 million Americans; the figure comprises $47.6 billion lost by businesses and $5 billion lost by consumers.
According to the U.S. Bureau of Justice Statistics, in 2010, 7% of US households experienced identity theft - up from 5.5% in 2005 when the figures were first assembled, but broadly flat since 2007. In 2012, approximately 16.6 million persons, or 7% of all U.S. residents age 16 or older, reported being victims of one or more incidents of identity theft.
At least two states, California and Wisconsin, have created an Office of Privacy Protection to assist their citizens in avoiding and recovering from identity theft.
In 2009, Indiana created an Identity Theft Unit within their Office of Attorney General to educate and assist consumers in avoiding and recovering from identity theft as well as assist law enforcement in investigating and prosecuting identity theft crimes.
In Massachusetts in 2009–2010, Governor Deval Patrick committed to balancing consumer protection with the needs of small business owners. His Office of Consumer Affairs and Business Regulation announced certain adjustments to Massachusetts' identity theft regulations that maintain protections while also allowing flexibility in compliance. These updated regulations went into effect on 1 March 2010. The regulations make clear that their approach to data security is risk-based, which is important to small businesses that might not handle a lot of personal information about customers.
The IRS has created the IRS Identity Protection Specialized Unit to help taxpayers who are victims of federal tax-related identity theft. Generally, the identity thief will use a stolen SSN to file a forged tax return and attempt to get a fraudulent refund early in the filing season. A taxpayer will need to fill out Form 14039, Identity Theft Affidavit.
As for the future of medical care and Medicaid, people are mostly concerned about cloud computing. Adopting cloud storage within the United States Medicare system would make health information easily accessible for individuals, but it would also make identity theft easier. Currently, new technology is being developed to help encrypt and protect files, which would allow a smoother transition to cloud technology in the healthcare system.
Notification
Many states followed California's lead and enacted mandatory data breach notification laws. As a result, companies that report a data breach typically report it to all their customers.
Spread and impact
Surveys in the US from 2003 to 2006 showed a decrease in the total number of victims and a decrease in the total value of identity fraud, from US$47.6 billion in 2003 to $15.6 billion in 2006. The average fraud per person decreased from $4,789 in 2003 to $1,882 in 2006. A Microsoft report argues that this drop is due to statistical problems with the methodology, and that such survey-based estimates are "hopelessly flawed" and exaggerate the true losses by orders of magnitude.
The 2003 survey from the Identity Theft Resource Center found that:
Only 15% of victims find out about the theft through proactive action taken by a business
The average time spent by victims resolving the problem is about 330 hours
73% of respondents indicated the crime involved the thief acquiring a credit card
In a widely publicized account, Michelle Brown, a victim of identity fraud, testified before a U.S. Senate Committee Hearing on Identity Theft. Ms. Brown testified that: "over a year and a half from January 1998 through July 1999, one individual impersonated me to procure over $50,000 in goods and services. Not only did she damage my credit, but she escalated her crimes to a level that I never truly expected: she engaged in drug trafficking. The crime resulted in my erroneous arrest record, a warrant out for my arrest, and eventually, a prison record when she was booked under my name as an inmate in the Chicago Federal Prison."
In Australia, identity theft was estimated to be worth between A$1 billion and A$4 billion per annum in 2001.
In the United Kingdom, the Home Office reported that identity fraud costs the UK economy £1.2 billion annually (experts believe that the real figure could be much higher) although privacy groups object to the validity of these numbers, arguing that they are being used by the government to push for introduction of national ID cards. Confusion over exactly what constitutes identity theft has led to claims that statistics may be exaggerated.
An extensively reported study from Microsoft Research in 2011 finds that estimates of identity theft losses contain enormous exaggerations, writing that surveys "are so compromised and biased that no faith whatever can be placed in their findings."
See also
Types of fraud and theft
Organizations
U.S.
Laws
Notable identity thieves and cases
References
External links
Identity theft – United States Federal Trade Commission
Identity Theft Recovery Plan FTC steps for identity theft victims.
The President's Task Force on Identity Theft – a government task force established by US President George W. Bush to fight identity theft.
Identity Theft – Carnegie Mellon University
Identity Theft: A Research Review, National Institute of Justice 2007
Identity Theft and Fraud – United States Department of Justice
Dateline NBC investigation 'To Catch an ID Thief'
Scam on the Run - Fugitive Identity Thief Led Global Criminal Enterprise FBI
1964 neologisms
Fraud
Identity documents
Organized crime activity
Security breaches |
181334 | https://en.wikipedia.org/wiki/Discrete%20logarithm | Discrete logarithm | In mathematics, for given real numbers a and b, the logarithm logb a is a number x such that bx = a. Analogously, in any group G, powers bk can be defined for all integers k, and the discrete logarithm logb a is an integer k such that bk = a. In number theory, the more commonly used term is index: we can write x = indr a (mod m) (read "the index of a to the base r modulo m") for rx ≡ a (mod m) if r is a primitive root of m and gcd(a, m) = 1.
Discrete logarithms are quickly computable in a few special cases. However, no efficient method is known for computing them in general. Several important algorithms in public-key cryptography, such as ElGamal, base their security on the assumption that the discrete logarithm problem over carefully chosen groups has no efficient solution.
Definition
Let G be any group. Denote its group operation by multiplication and its identity element by 1. Let b be any element of G. For any positive integer k, the expression bk denotes the product of b with itself k times: bk = b · b ⋯ b (with k factors).
Similarly, let b−k denote the product of b−1 with itself k times. For k = 0, the kth power is the identity: b0 = 1.
Let a also be an element of G. An integer k that solves the equation bk = a is termed a discrete logarithm (or simply logarithm, in this context) of a to the base b. One writes k = logb a.
Examples
Powers of 10
The powers of 10 form an infinite subset G = {…, 0.001, 0.01, 0.1, 1, 10, 100, 1000, …} of the rational numbers. This set G is a cyclic group under multiplication, and 10 is a generator. For any element a of the group, one can compute log10 a. For example, log10 10000 = 4, and log10 0.001 = −3. These are instances of the discrete logarithm problem.
Other base-10 logarithms in the real numbers are not instances of the discrete logarithm problem, because they involve non-integer exponents. For example, the equation log10 53 = 1.724276… means that 101.724276… = 53. While integer exponents can be defined in any group using products and inverses, arbitrary exponents in the real numbers require other concepts such as the exponential function.
Powers of a fixed real number
A similar example holds for any non-zero real number b. The powers form a multiplicative subgroup G = {…, b−3, b−2, b−1, 1, b1, b2, b3, …} of the non-zero real numbers. For any element a of G, one can compute logb a.
Modular arithmetic
One of the simplest settings for discrete logarithms is the group (Zp)×. This is the group of multiplication modulo the prime p. Its elements are congruence classes modulo p, and the group product of two elements may be obtained by ordinary integer multiplication of the elements followed by reduction modulo p.
The kth power of one of the numbers in this group may be computed by finding its kth power as an integer and then finding the remainder after division by p. When the numbers involved are large, it is more efficient to reduce modulo p multiple times during the computation. Regardless of the specific algorithm used, this operation is called modular exponentiation. For example, consider (Z17)×. To compute 34 in this group, compute 34 = 81, and then divide 81 by 17, obtaining a remainder of 13. Thus 34 = 13 in the group (Z17)×.
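To make the reduce-as-you-go procedure concrete, here is a minimal sketch of modular exponentiation by repeated squaring; the function name mod_exp is ours for illustration, and Python's built-in pow(base, exponent, modulus) performs the same computation:

```python
def mod_exp(base, exponent, modulus):
    """Square-and-multiply modular exponentiation.

    Reduces modulo `modulus` at every step, so intermediate
    values never grow large, as described in the text above.
    """
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                     # low bit set: multiply it in
            result = (result * base) % modulus
        base = (base * base) % modulus       # square for the next bit
        exponent >>= 1
    return result

# The worked example from the text: 3^4 = 13 in (Z17)x.
assert mod_exp(3, 4, 17) == 13
assert mod_exp(3, 4, 17) == pow(3, 4, 17)    # built-in equivalent
```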
The discrete logarithm is just the inverse operation. For example, consider the equation 3k ≡ 13 (mod 17) for k. From the example above, one solution is k = 4, but it is not the only solution. Since 316 ≡ 1 (mod 17)—as follows from Fermat's little theorem—it also follows that if n is an integer then 34+16n ≡ 34 × (316)n ≡ 13 × 1n ≡ 13 (mod 17). Hence the equation has infinitely many solutions of the form 4 + 16n. Moreover, because 16 is the smallest positive integer m satisfying 3m ≡ 1 (mod 17), these are the only solutions. Equivalently, the set of all possible solutions can be expressed by the constraint that k ≡ 4 (mod 16).
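These claims are easy to check by machine; a small sanity check in plain Python, using the built-in three-argument pow:

```python
# Verify that 3**k ≡ 13 (mod 17) holds exactly when k ≡ 4 (mod 16).
solutions = [k for k in range(100) if pow(3, k, 17) == 13]
assert solutions == [4, 20, 36, 52, 68, 84]
assert all(k % 16 == 4 for k in solutions)
```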
Powers of the identity
In the special case where b is the identity element 1 of the group G, the discrete logarithm logb a is undefined for a other than 1, and every integer k is a discrete logarithm for a = 1.
Properties
Powers obey the usual algebraic identity bk + l = bk bl. In other words, the function f : Z → G defined by f(k) = bk is a group homomorphism from the integers Z under addition onto the subgroup H of G generated by b. For all a in H, logb a exists. Conversely, logb a does not exist for a that are not in H.
If H is infinite, then logb a is also unique, and the discrete logarithm amounts to a group isomorphism logb : H → Z.
On the other hand, if H is finite of order n, then logb a is unique only up to congruence modulo n, and the discrete logarithm amounts to a group isomorphism logb : H → Zn, where Zn denotes the additive group of integers modulo n.
The familiar base change formula for ordinary logarithms remains valid: if c is another generator of H, then logc a = logc b · logb a.
Algorithms
The discrete logarithm problem is considered to be computationally intractable. That is, no efficient classical algorithm is known for computing discrete logarithms in general.
A general algorithm for computing logb a in finite groups G is to raise b to larger and larger powers k until the desired a is found. This algorithm is sometimes called trial multiplication. It requires running time linear in the size of the group G and thus exponential in the number of digits in the size of the group. Therefore, it is an exponential-time algorithm, practical only for small groups G.
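A minimal sketch of trial multiplication in the multiplicative group (Zp)×; the function name is ours for illustration:

```python
def trial_multiplication(b, a, p):
    """Naive discrete log in (Z_p)^x: try k = 0, 1, 2, ... until b^k == a.

    Runs in time linear in the group order, i.e. exponential in the
    number of digits of p, so it is practical only for small groups.
    """
    power = 1
    for k in range(p - 1):           # the group order divides p - 1
        if power == a:
            return k
        power = (power * b) % p
    return None                       # a is not a power of b

assert trial_multiplication(3, 13, 17) == 4   # the example from above
```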
More sophisticated algorithms exist, usually inspired by similar algorithms for integer factorization. These algorithms run faster than the naïve algorithm, some of them proportional to the square root of the size of the group, and thus exponential in half the number of digits in the size of the group. However, none of them runs in polynomial time (in the number of digits in the size of the group).
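One such square-root method is the baby-step giant-step algorithm, first in the list below. A minimal sketch for (Zp)×, assuming Python 3.8+ so that pow(b, -m, p) computes a modular inverse:

```python
import math

def baby_step_giant_step(b, a, p):
    """Solve b^k = a (mod p) using O(sqrt(p)) time and memory.

    Writes k = i*m + j with m about sqrt(p): precompute the "baby
    steps" b^j, then take "giant steps" a * (b^-m)^i until one matches.
    """
    m = math.isqrt(p - 1) + 1
    baby = {pow(b, j, p): j for j in range(m)}   # map b^j -> j
    giant_step = pow(b, -m, p)                   # b^(-m) mod p
    gamma = a % p
    for i in range(m):
        if gamma in baby:                        # a * b^(-i*m) == b^j
            return i * m + baby[gamma]
        gamma = (gamma * giant_step) % p
    return None                                  # no solution exists

assert baby_step_giant_step(3, 13, 17) == 4
```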
Baby-step giant-step
Function field sieve
Index calculus algorithm
Number field sieve
Pohlig–Hellman algorithm
Pollard's rho algorithm for logarithms
Pollard's kangaroo algorithm (aka Pollard's lambda algorithm)
There is an efficient quantum algorithm due to Peter Shor.
Efficient classical algorithms also exist in certain special cases. For example, in the group of the integers modulo p under addition, the kth power of b becomes the product k · b, and equality means congruence modulo p in the integers. The extended Euclidean algorithm finds k quickly.
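A minimal sketch of this special case, assuming Python 3.8+, whose pow(b, -1, p) applies the extended Euclidean algorithm internally:

```python
def additive_discrete_log(b, a, p):
    """Solve k*b = a (mod p): one modular inversion, polynomial time."""
    return (a * pow(b, -1, p)) % p

p = 17
k = additive_discrete_log(5, 3, p)   # solve 5k = 3 (mod 17)
assert (k * 5) % p == 3              # k == 4 here
```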
With Diffie–Hellman, a cyclic group modulo a prime p is used, allowing an efficient computation of the discrete logarithm with Pohlig–Hellman if the order of the group (being p − 1) is sufficiently smooth, i.e., has no large prime factors.
Comparison with integer factorization
While computing discrete logarithms and factoring integers are distinct problems, they share some properties:
both are special cases of the hidden subgroup problem for finite abelian groups,
both problems seem to be difficult (no efficient algorithms are known for non-quantum computers),
for both problems efficient algorithms on quantum computers are known,
algorithms from one problem are often adapted to the other, and
the difficulty of both problems has been used to construct various cryptographic systems.
Cryptography
There exist groups for which computing discrete logarithms is apparently difficult. In some cases (e.g. large prime order subgroups of groups (Zp)×) there is not only no efficient algorithm known for the worst case, but the average-case complexity can be shown to be about as hard as the worst case using random self-reducibility.
At the same time, the inverse problem of discrete exponentiation is not difficult (it can be computed efficiently using exponentiation by squaring, for example). This asymmetry is analogous to the one between integer factorization and integer multiplication. Both asymmetries (and other possibly one-way functions) have been exploited in the construction of cryptographic systems.
Popular choices for the group G in discrete logarithm cryptography (DLC) are the cyclic groups (Zp)× (e.g. ElGamal encryption, Diffie–Hellman key exchange, and the Digital Signature Algorithm) and cyclic subgroups of elliptic curves over finite fields (see Elliptic curve cryptography).
While there is no publicly known algorithm for solving the discrete logarithm problem in general, the first three steps of the number field sieve algorithm depend only on the group G, not on the specific elements of G whose discrete log is desired. By precomputing these three steps for a specific group, one need only carry out the last step, which is much less computationally expensive than the first three, to obtain a specific logarithm in that group.
It turns out that much Internet traffic uses one of a handful of groups that are of order 1024 bits or less, e.g. cyclic groups with the order of the Oakley primes specified in RFC 2409. The Logjam attack used this vulnerability to compromise a variety of Internet services that allowed the use of groups whose order was a 512-bit prime number, so-called export grade.
The authors of the Logjam attack estimate that the much more difficult precomputation needed to solve the discrete log problem for a 1024-bit prime would be within the budget of a large national intelligence agency such as the U.S. National Security Agency (NSA). The Logjam authors speculate that precomputation against widely reused 1024-bit DH primes is behind claims in leaked NSA documents that the NSA is able to break much of current cryptography.
References
Further reading
Richard Crandall; Carl Pomerance. Chapter 5, Prime Numbers: A computational perspective, 2nd ed., Springer.
See also
A. W. Faber Model 366
Percy Ludgate and Irish logarithm
Modular arithmetic
Group theory
Cryptography
Logarithms
Finite fields
Computational hardness assumptions
Unsolved problems in computer science |
181382 | https://en.wikipedia.org/wiki/Anonymity | Anonymity | Anonymity describes situations where the acting person's identity is unknown. Some writers have argued that namelessness, though technically correct, does not capture what is more centrally at stake in contexts of anonymity. The important idea here is that a person be non-identifiable, unreachable, or untrackable. Anonymity is seen as a technique, or a way of realizing, certain other values, such as privacy or liberty. Over the past few years, anonymity tools used on the dark web by criminals and malicious users have drastically altered the ability of law enforcement to use conventional surveillance techniques.
An important example of anonymity being not only protected but enforced by law is voting in free elections. In many other situations (like conversation between strangers, or buying a product or service in a shop), anonymity is traditionally accepted as natural. There are also various situations in which a person might choose to withhold their identity. Acts of charity have been performed anonymously when benefactors do not wish to be acknowledged. A person who feels threatened might attempt to mitigate that threat through anonymity. A witness to a crime might seek to avoid retribution, for example, by anonymously calling a crime tipline. Criminals might proceed anonymously to conceal their participation in a crime. Anonymity may also be created unintentionally, through the loss of identifying information due to the passage of time or a destructive event.
In certain situations, however, it may be illegal to remain anonymous. In the United States, 24 states have "stop and identify" statutes that require persons detained to self-identify when requested by a law enforcement officer.
The term "anonymous message" typically refers to a message that does not reveal its sender. In many countries, anonymous letters are protected by law and must be delivered as regular letters.
In mathematics, in reference to an arbitrary element (e.g., a human, an object, a computer), within a well-defined set (called the "anonymity set"), "anonymity" of that element refers to the property of that element of not being identifiable within this set. If it is not identifiable, then the element is said to be "anonymous."
Pseudonymity
Sometimes a person may desire a long-term relationship (such as a reputation) with another party without necessarily disclosing personally identifying information to that party. In this case, it may be useful for the person to establish a unique identifier, called a pseudonym. Examples of pseudonyms are pen names, nicknames, credit card numbers, student numbers, bank account numbers, etc. A pseudonym enables the other party to link different messages from the same person and, thereby, to establish a long-term relationship. Pseudonyms are widely used in social networks and other virtual communication, although recently some important service providers like Google have tried to discourage pseudonymity.
Someone using a pseudonym would strictly be considered to be using "pseudonymity", not "anonymity", but sometimes the latter term is used to refer to both (in general, any situation where the legal identity of the person is disguised).
Psychological effects
Anonymity may reduce the accountability one perceives oneself to have for one's actions, and it removes the impact those actions might otherwise have on one's reputation. This can have dramatic effects, both useful and harmful, for the various parties involved. Thus, it may be used as a psychological tactic by any party seeking to support or discredit any sort of activity or belief.
In conversational settings, anonymity may allow people to reveal personal history and feelings without fear of later embarrassment. Electronic conversational media can provide physical isolation, in addition to anonymity. This prevents physical retaliation for remarks, and prevents negative or taboo behavior or discussion from tarnishing the reputation of the speaker. This can be beneficial when discussing very private matters, or taboo subjects or expressing views or revealing facts that may put someone in physical, financial, or legal danger (such as illegal activity, or unpopular, or outlawed political views).
In work settings, the three most common forms of anonymous communication are traditional suggestion boxes, written feedback, and Caller ID blocking. Additionally, the appropriateness of anonymous organizational communication varies depending on the use, with organizational surveys or assessments typically perceived as highly appropriate and firing perceived as highly inappropriate. Anonymity use and appropriateness have also been found to be significantly related to the quality of relationships with key others at work.
With few perceived negative consequences, anonymous or semi-anonymous forums often provide a soapbox for disruptive conversational behavior. The term "troll" is sometimes used to refer to those who engage in such disruptive behavior.
Relative anonymity is often enjoyed in large crowds. Different people have different psychological and philosophical reactions to this development, especially as a modern phenomenon. This anonymity is an important factor in crowd psychology, and behavior in situations such as a riot. This perceived anonymity can be compromised by technologies such as photography. Groupthink behavior and conformity are also considered to be an established effect of internet anonymity.
Anonymity also permits highly trained professionals such as judges to freely express themselves regarding the strategies they employ to perform their jobs objectively.
Anonymity, commerce, and crime
Anonymous commercial transactions can protect the privacy of consumers. Some consumers prefer to use cash when buying everyday goods (like groceries or tools), to prevent sellers from aggregating information or soliciting them in the future. Credit cards are linked to a person's name, and can be used to discover other information, such as postal address, phone number, etc. The ecash system was developed to allow secure anonymous transactions. Another example would be Enymity, which actually makes a purchase on a customer's behalf. When purchasing taboo goods and services, anonymity makes many potential consumers more comfortable with or more willing to engage in the transaction. Many loyalty programs use cards that personally identify the consumer engaging in each transaction (possibly for later solicitation, or for redemption or security purposes), or that act as a numerical pseudonym, for use in data mining.
Anonymity can also be used as a protection against legal prosecution. For example, when committing unlawful actions, many criminals attempt to avoid identification by the means of obscuring/covering their faces with scarves or masks, and wear gloves or other hand coverings in order to not leave any fingerprints. In organized crime, groups of criminals may collaborate on a certain project without revealing to each other their names or other personally identifiable information. The movie The Thomas Crown Affair depicted a fictional collaboration by people who had never previously met and did not know who had recruited them. The anonymous purchase of a gun or knife to be used in a crime helps prevent linking an abandoned weapon to the identity of the perpetrator.
Anonymity in charity
There are two aspects: first, giving to a large charitable organization obscures the beneficiary of a donation from the benefactor; second, giving anonymously obscures the benefactor both from the beneficiary and from everyone else.
Anonymous charity has long been a widespread and durable moral precept of many ethical and religious systems, as well as being in practice a widespread human activity. A benefactor may not wish to establish any relationship with the beneficiary, particularly if the beneficiary is perceived as being unsavory. Benefactors may not wish to identify themselves as capable of giving. A benefactor may wish to improve the world, as long as no one knows who did it, out of modesty, wishing to avoid publicity. Another reason for anonymous charity is a benefactor who does not want a charitable organization to pursue them for more donations, sometimes aggressively.
Issues facing the anonymous
Attempts at anonymity are not always met with support from society.
Anonymity sometimes clashes with the policies and procedures of governments or private organizations. In the United States, disclosure of identity is required to be able to vote, though the secret ballot prevents disclosure of individual voting patterns. In airports in most countries, passengers are not allowed to board flights unless they have identified themselves to airline or transportation security personnel, typically in the form of the presentation of an identification card.
On the other hand, some policies and procedures require anonymity.
Referring to the anonymous
When it is necessary to refer to someone who is anonymous, it is typically necessary to create a type of pseudo-identification for that person. In literature, the most common way to state that the identity of an author is unknown is to refer to them as simply "Anonymous". This is usually the case with older texts in which the author is long dead and unable to claim authorship of a work. When the work claims to be that of some famous author the pseudonymous author is identified as "Pseudo-", as in Pseudo-Dionysius the Areopagite, an author claiming—and long believed—to be Dionysius the Areopagite, an early Christian convert.
Anonymus, in its Latin spelling, generally with a specific city designation, is traditionally used by scholars in the humanities to refer to an ancient writer whose name is not known, or to a manuscript of their work. Many such writers have left valuable historical or literary records: an incomplete list of such Anonymi is at Anonymus.
In the history of art, many painting workshops can be identified by their characteristic style, discussed, and the workshop's output set in chronological order. Sometimes archival research later identifies the name, as when the "Master of Flémalle" (defined by three paintings in the Städelsches Kunstinstitut in Frankfurt) was identified as Robert Campin. The 20th-century art historian Bernard Berenson methodically identified numerous early Renaissance Florentine and Sienese workshops under such sobriquets as "Amico di Sandro" for an anonymous painter in the immediate circle of Sandro Botticelli.
In legal cases, a popularly accepted name to use when it is determined that an individual needs to maintain anonymity is "John Doe". This name is often modified to "Jane Doe" when the anonymity-seeker is female. The same names are also commonly used when the identification of a dead person is not known. The semi-acronym Unsub is used as law enforcement slang for "Unknown Subject of an Investigation".
The military often feels a need to honor the remains of soldiers for whom identification is impossible. In many countries, such a memorial is named the Tomb of the Unknown Soldier.
Anonymity and the press
Most modern newspapers and magazines attribute their articles to individual editors or to news agencies. An exception is the British weekly The Economist. All British newspapers run their leaders, or editorials, anonymously. The Economist fully adopts this policy, saying "Many hands write The Economist, but it speaks with a collective voice". The Guardian considers that "people will often speak more honestly if they are allowed to speak anonymously". According to Ross Eaman, in his book The A to Z of Journalism, until the mid-19th century, most writers in Great Britain, especially the less well known, did not sign their names to their work in newspapers, magazines, and reviews.
Anonymity on the Internet
Most commentary on the Internet is essentially done anonymously, using unidentifiable pseudonyms. However, this has been widely discredited by a study from the University of Birmingham, which found that the number of people who use the internet anonymously is statistically the same as the number of people who use the internet to interact with friends or known contacts. While these usernames can take on an identity of their own, they are sometimes separated and anonymous from the actual author. According to the University of Stockholm, this is creating more freedom of expression and less accountability. Wikipedia is collaboratively written mostly by authors using either unidentifiable pseudonyms or IP address identifiers, although a few have used identified pseudonyms or their real names.
However, the Internet was not designed for anonymity: IP addresses serve as virtual mailing addresses, which means that any time any resource on the Internet is accessed, it is accessed from a particular IP address, and the data traffic patterns to and from IP addresses can be intercepted, monitored, and analysed, even if the content of that traffic is encrypted. This address can be mapped to a particular Internet Service Provider (ISP), and this ISP can then provide information about what customer that IP address was leased to. This does not necessarily implicate a specific individual (because other people could be using that customer's connection, especially if the customer is a public resource, such as a library), but it provides regional information and serves as powerful circumstantial evidence.
Anonymizing services such as I2P and Tor address the issue of IP tracking. In short, they work by encrypting packets within multiple layers of encryption. The packet follows a predetermined route through the anonymizing network. Each router sees the immediate previous router as the origin and the immediate next router as the destination. Thus, no router ever knows both the true origin and destination of the packet. This makes these services more secure than centralized anonymizing services (where a central point of knowledge exists).
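The layering idea can be illustrated with a toy sketch. This is not Tor's or I2P's actual wire protocol; it assumes the third-party Python package cryptography, and the three locally generated symmetric keys stand in for per-hop session keys:

```python
# Toy illustration of layered ("onion") encryption, as described above.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# One symmetric key per router on the predetermined route.
route_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys) -> bytes:
    """Sender: encrypt for the last router first, so that each hop
    can peel exactly one layer."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(message: bytes, key: bytes) -> bytes:
    """One router: remove a single layer; it learns only the previous
    and next hop, never both the true origin and destination."""
    return Fernet(key).decrypt(message)

onion = wrap(b"hello", route_keys)
for key in route_keys:          # each router in route order
    onion = peel(onion, key)
assert onion == b"hello"        # only the final hop sees the plaintext
```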
Sites such as Chatroulette, Omegle, and Tinder (which pair up random users for a conversation) capitalized on a fascination with anonymity. Apps like Yik Yak, Secret, and Whisper let people share things anonymously or quasi-anonymously, whereas Random lets users explore the web anonymously. Other sites, however, including Facebook and Google+, ask users to sign in with their legal names. In the case of Google+, this requirement led to a controversy known as the nymwars.
The prevalence of cyberbullying is often attributed to relative Internet anonymity, due to the fact that potential offenders are able to mask their identities and prevent themselves from being caught. A principal in a high school stated that comments made on these anonymous sites are "especially vicious and hurtful since there is no way to trace their source and it can be disseminated widely." Cyberbullying, as opposed to general bullying, is still a widely debated area of Internet freedom in several states.
Though Internet anonymity can provide a harmful environment through which people can hurt others, anonymity can allow for a much safer and relaxed internet experience. In a study conducted at Carnegie Mellon University, 15 out of 44 participants stated that they choose to be anonymous online because of a prior negative experience during which they did not maintain an anonymous presence. Such experiences include stalking, releasing private information by an opposing school political group, or tricking an individual into traveling to another country for a job that did not exist. Participants in this study stated that they were able to avoid their previous problems by using false identification online.
David Chaum has been called the godfather of anonymity, and he has a claim to be one of the great visionaries of contemporary science. In the early 1980s, while a computer scientist at Berkeley, Chaum predicted a world in which computer networks would make mass surveillance a possibility. As Dr. Joss Wright explains: "David Chaum was very ahead of his time. He predicted in the early 1980s concerns that would arise on the internet 15 or 20 years later." Some people, though, consider anonymity on the internet a danger to our society as a whole. David Davenport, an assistant professor in the Computer Engineering Department of Bilkent University in Ankara, Turkey, considers that by allowing anonymous Net communication, the fabric of our society is put at risk. "Accountability requires those responsible for any misconduct be identified and brought to justice. However, if people remain anonymous, by definition, they cannot be identified, making it impossible to hold them accountable," he says.
Arguments for and against anonymity
As A. Michael Froomkin says: "The regulation of anonymous and pseudonymous communications promises to be one of the most important and contentious Internet-related issues of the next decade". Anonymity and pseudonymity can be used for good and bad purposes, and in many cases anonymity may be desirable for one person and undesirable for another. A company may, for example, not like an employee to divulge information about improper practices within the company, but society as a whole may find it important that such improper practices are publicly exposed.
Good purposes of anonymity and pseudonymity:
People dependent on an organization, or afraid of revenge, may divulge serious misuse, which should be revealed. Anonymous tips can be used as an information source by newspapers, as well as by police departments soliciting tips aimed at catching criminals. Not everyone will regard such anonymous communication as good. For example, message boards established outside companies, but for employees of such companies to vent their opinions on their employer, have sometimes been used in ways that at least the companies themselves were not happy about [Abelson 2001]. Police use of anonymity is a complex issue, since the police will often want to know the identity of the tipper in order to get more information, evaluate reliability, or call the tipper as a witness. Is it ethical for the police to identify a tipper after having opened an anonymous tipping hotline?
People in a country with a repressive political regime may use anonymity (for example Internet-based anonymity servers in other countries) to avoid persecution for their political opinions. Note that even in democratic countries, some people claim, rightly or wrongly, that certain political opinions are persecuted. [Wallace 1999] gives an overview of uses of anonymity to protect political speech. Every country has a limit on which political opinions are allowed, and there are always people who want to express forbidden opinions, like racial agitation in most democratic countries.
People may openly discuss personal matters which would be embarrassing to tell many people about, such as sexual problems. Research shows that anonymous participants disclose significantly more information about themselves [Joinson 2001]. People might also feel more open to sharing their personal work anonymously if they feel that their friends and family would harass them or disapprove of it. Examples of such work include fan fiction and vocal performances.
People may get a more objective evaluation of their messages by not revealing their real names.
People are more equal in anonymous discussions; factors like status and gender will not influence the evaluation of what they say.
Pseudonymity can be used to experiment with role playing, for example a man posing as a woman in order to understand the feelings of people of different gender.
Pseudonymity can be a tool for timid people to dare establish contacts which can be of value for them and others, e.g. through contact advertisements.
People can contribute to online social discussion with reduced risk of harm by online predators. Online predators include "criminals, hackers, scammers, stalkers, and malicious online vendors."
People can avoid becoming famous by publishing their work anonymously.
There has always, however, also been a negative side of anonymity:
Anonymity can be used to protect a criminal performing many different crimes, for example slander, distribution of child pornography, illegal threats, racial agitation, fraud, intentional damage such as distribution of computer viruses, etc. The exact set of illegal acts varies from country to country, but most countries have many laws forbidding certain "informational" acts, everything from high treason to instigation of rebellion, etc., to swindling.
Anonymous online payments can be used by criminals to pay others to perform illegal acts or to make illegal purchases.
Anonymity can be used to seek contacts for performing illegal acts, like a child groomer searching for children to abuse or a swindler searching for people to defraud.
Even when the act is not illegal, anonymity can be used for offensive or disruptive communication. For example, some people use anonymity in order to say harmful things about other people, known as cyberbullying.
Internet trolls use anonymity to harm discussions in online social platforms.
The border between illegal and legal but offensive use is not very sharp, and varies depending on the law in each country.
Anonymous (group)
Anonymous (used as a mass noun) is a loosely associated international network of activist and hacktivist entities. A website nominally associated with the group describes it as "an internet gathering" with "a very loose and decentralized command structure that operates on ideas rather than directives". The group became known for a series of well-publicized publicity stunts and distributed denial-of-service (DDoS) attacks on government, religious, and corporate websites. An image commonly associated with Anonymous is the "man without a head", which represents leaderless organization and anonymity.
Legal protection of anonymity
Anonymity is perceived as a right by many, especially anonymity in Internet communications. A partial right to anonymity is legally protected to varying degrees in different jurisdictions.
United States
The tradition of anonymous speech is older than the United States. Founders Alexander Hamilton, James Madison, and John Jay wrote The Federalist Papers under the pseudonym "Publius" and "the Federal Farmer" spoke up in rebuttal. The US Supreme Court has repeatedly recognized rights to speak anonymously derived from the First Amendment.
The right to anonymous political campaigning was established in the U.S. Supreme Court decision in McIntyre v. Ohio Elections Commission (1995): "Anonymity is a shield from the tyranny of the majority...It thus exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular: to protect unpopular individuals from retaliation—and their ideas from suppression—at the hand of an intolerant society". The Supreme Court explained that anonymous political speech receives the highest protection; however, this priority takes on new dimensions in the digital age.
The right of individuals to "anonymous communication" was established by the decision in Columbia Insurance Company v. Seescandy.com, et al. (1999) of the United States District Court for the Northern District of California: "People are permitted to interact pseudonymously and anonymously with each other so long as those acts are not in violation of the law".
The right of individuals to "anonymous reading" was established in the U.S. Supreme Court decision in United States v. Rumely (1953): "Once the government can demand of a publisher the names of the purchasers of his publications, the free press as we know it disappears. Then the spectre of a government agent will look over the shoulder of everyone who reads".
Pressure on anonymous communication has grown substantially since the 2001 terrorist attacks on the World Trade Center and the subsequent new political climate. Although it is still difficult to assess their exact implications, measures such as the US Patriot Act, the European Cybercrime Convention and the European Union rules on data retention are only a few signs that the exercise of the right to the anonymous exchange of information is under substantial pressure.
The above-mentioned 1995 Supreme Court ruling in McIntyre v. Ohio Elections Commission reads: "(...) protections for anonymous speech are vital to democratic discourse. Allowing dissenters to shield their identities frees them to express critical minority views . . . Anonymity is a shield from the tyranny of the majority. . . . It thus exemplifies the purpose behind the Bill of Rights and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society."
However, anonymous online speech is not without limits. This was clearly demonstrated in a 2008 case in which an anonymous poster on a law-school discussion board stated that two women should be raped; such comments may extend beyond free-speech protections. In that case, a Connecticut federal court had to apply a standard to decide whether the poster's identity should be revealed. There are several tests that a court could apply when considering this issue.
European Union
The right to Internet anonymity is also covered by European legislation that recognizes the fundamental rights to data protection and freedom of expression. The European Union Charter of Fundamental Rights recognizes, in Article 8 (Title II: "Freedoms"), the right of everyone to the protection of personal data concerning them. The right to privacy is now essentially the individual's right to have and to maintain control over information about themselves.
International legislation
One of the most controversial international legal acts regarding this subject is the Anti-Counterfeiting Trade Agreement (ACTA). As of February 2015, the treaty had been signed (though not in all cases ratified) by 31 states as well as the European Union. On 4 October 2012, Japan became the first to ratify the treaty. It creates an international regime for imposing civil and criminal penalties on Internet counterfeiting and copyright infringement. Although ACTA is intentionally vague, leaving signatories to draw precise rules themselves, critics say it could mean innocent travellers having their laptops searched for unlicensed music, or being jailed for carrying a generic drug. Infringers could be liable for the total loss of potential sales (implying that everyone who buys a counterfeit product would have bought the real thing). It applies to unintentional use of copyright material. It puts the onus on website owners to ensure they comply with laws across several territories. It was negotiated secretively and outside established international trade bodies, despite EU criticisms.
Anonymity and politics
The history of anonymous expression in political dissent is both long and consequential, as in the Letters of Junius or Voltaire's Candide, or scurrilous, as in pasquinades. In the tradition of anonymous British political criticism, The Federalist Papers were anonymously authored by three of America's Founding Fathers. Without the public discourse on the controversial contents of the U.S. Constitution, ratification would likely have taken much longer as individuals worked through the issues. The United States Declaration of Independence, however, was not anonymous. If it had been unsigned, it might well have been less effective. John Perry Barlow, Joichi Ito, and other U.S. bloggers express very strong support for anonymous editing as one of the basic requirements of open politics as conducted on the Internet.
Anonymity and pseudonymity in art
Anonymity is directly related to the concept of obscurantism or pseudonymity, where an artist or group attempts to remain anonymous for various reasons: to add an element of mystique to themselves or their work; to avoid what is known as the "cult of personality" or hero worship (in which the charisma, good looks, wealth or other unrelated or mildly related aspects of the person become the main reason for interest in the work, rather than the work itself); or to break into a field or area of interest normally dominated by men (as with the famous science-fiction author James Tiptree, Jr., who was actually a woman named Alice Bradley Sheldon, and likely JT LeRoy). Some seem to want to avoid the "limelight" of popularity and to live private lives, such as Thomas Pynchon, J. D. Salinger, De Onbekende Beeldhouwer (an anonymous sculptor whose exhibited work in Amsterdam attracted strong attention in the 1980s and 1990s), and the DJ duo Daft Punk (1993-2021). For street artist Banksy, "anonymity is vital to him because graffiti is illegal".
Anonymity has been used in music by avant-garde ensemble The Residents, Jandek (until 2004), costumed comedy rock band The Radioactive Chicken Heads, and DJs Deadmau5 (1998-present) and Marshmello (2015-present).
Hidden identities are frequently assumed in fiction, as with The Lone Ranger, Superman, and Batman.
Mathematics of anonymity
Suppose that only Alice, Bob, and Carol have keys to a bank safe and that, one day, the contents of the safe go missing (without the lock being violated). Without additional information, we cannot know for sure whether it was Alice, Bob or Carol who emptied the safe; notably, each element in {Alice, Bob, Carol} could be the perpetrator with a probability of 1/3. As long as none of them is convicted with 100% certainty, we must hold that the perpetrator remains anonymous, and the attribution of a probability of 1 to any one of the players has to remain undecided.
If Carol has a definite alibi at the time of perpetration, then we may deduce that it must have been either Alice or Bob who emptied the safe. In this particular case, the perpetrator is no longer completely anonymous, as both Alice and Bob now know "who did it" with a probability of 1.
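A minimal numerical sketch of this example: an outside observer spreads suspicion uniformly over the anonymity set, and each piece of evidence, such as Carol's alibi, shrinks that set:

```python
# A minimal sketch of how an alibi shrinks the anonymity set.
suspects = {"Alice", "Bob", "Carol"}
print({s: 1 / len(suspects) for s in suspects})  # each suspect: 1/3

suspects.discard("Carol")                        # Carol has an alibi
print({s: 1 / len(suspects) for s in suspects})  # each remaining: 1/2
```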
See also
List of anonymously published works
List of anonymous masters
Notname
Data anonymization
Friend-to-friend
Internet privacy
Online disinhibition effect
Personally identifiable information
Anonymity (social choice)
Notes
References |
182249 | https://en.wikipedia.org/wiki/Cryptographically-secure%20pseudorandom%20number%20generator | Cryptographically-secure pseudorandom number generator | A cryptographically secure pseudorandom number generator (CSPRNG) or cryptographic pseudorandom number generator (CPRNG) is a pseudorandom number generator (PRNG) with properties that make it suitable for use in cryptography. It is also loosely known as a cryptographic random number generator (CRNG) (see Random number generation § "True" vs. pseudo-random numbers).
Most cryptographic applications require random numbers, for example:
key generation
nonces
salts in certain signature schemes, including ECDSA, RSASSA-PSS
The "quality" of the randomness required for these applications varies.
For example, creating a nonce in some protocols needs only uniqueness.
On the other hand, the generation of a master key requires a higher quality, such as more entropy. And in the case of one-time pads, the information-theoretic guarantee of perfect secrecy only holds if the key material comes from a true random source with high entropy, and thus any kind of pseudorandom number generator is insufficient.
Ideally, the generation of random numbers in CSPRNGs uses entropy obtained from a high-quality source, generally the operating system's randomness API. However, unexpected correlations have been found in several such ostensibly independent processes. From an information-theoretic point of view, the amount of randomness, the entropy that can be generated, is equal to the entropy provided by the system. But sometimes, in practical situations, more random numbers are needed than there is entropy available. Also, the processes to extract randomness from a running system are slow in actual practice. In such instances, a CSPRNG can sometimes be used. A CSPRNG can "stretch" the available entropy over more bits.
Requirements
The requirements of an ordinary PRNG are also satisfied by a cryptographically secure PRNG, but the reverse is not true. CSPRNG requirements fall into two groups: first, that they pass statistical randomness tests; and secondly, that they hold up well under serious attack, even when part of their initial or running state becomes available to an attacker.
Every CSPRNG should satisfy the next-bit test. That is, given the first k bits of a random sequence, there is no polynomial-time algorithm that can predict the (k+1)th bit with probability of success non-negligibly better than 50%. Andrew Yao proved in 1982 that a generator passing the next-bit test will pass all other polynomial-time statistical tests for randomness.
Every CSPRNG should withstand "state compromise extensions". In the event that part or all of its state has been revealed (or guessed correctly), it should be impossible to reconstruct the stream of random numbers prior to the revelation. Additionally, if there is an entropy input while running, it should be infeasible to use knowledge of the input's state to predict future conditions of the CSPRNG state.
Example: If the CSPRNG under consideration produces output by computing bits of π in sequence, starting from some unknown point in the binary expansion, it may well satisfy the next-bit test and thus be statistically random, as π appears to be a random sequence. (This would be guaranteed if π is a normal number, for example.) However, this algorithm is not cryptographically secure; an attacker who determines which bit of pi (i.e. the state of the algorithm) is currently in use will be able to calculate all preceding bits as well.
Most PRNGs are not suitable for use as CSPRNGs and will fail on both counts. First, while most PRNGs' output appears random to assorted statistical tests, they do not resist determined reverse engineering; specialized statistical tests, tuned to a particular PRNG, may show its output not to be truly random. Second, for most PRNGs, once their state has been revealed, all past random numbers can be retrodicted, allowing an attacker to read all past messages, as well as future ones.
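To make the second failure concrete, here is a minimal sketch (an illustration, not drawn from the references) using the MINSTD linear congruential generator: because its multiplier is invertible modulo the modulus, an attacker who learns the current state can step the generator backwards and recover every earlier output.

```python
# A minimal sketch of retrodiction against a linear congruential generator.
# MINSTD parameters: modulus 2^31 - 1 (prime) and multiplier 48271, so the
# multiplier has a modular inverse and the generator can be run in reverse.
M = 2**31 - 1
A = 48271
A_INV = pow(A, -1, M)  # modular inverse (Python 3.8+)

def forward(state):   # one PRNG step
    return (A * state) % M

def backward(state):  # one step in reverse, using the inverse multiplier
    return (A_INV * state) % M

states = [123456789]
for _ in range(5):
    states.append(forward(states[-1]))

# Knowing only the final state, recover the whole earlier sequence.
recovered = [states[-1]]
for _ in range(5):
    recovered.append(backward(recovered[-1]))
assert list(reversed(recovered)) == states
```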
CSPRNGs are designed explicitly to resist this type of cryptanalysis.
Definitions
In the asymptotic setting, a family of deterministic polynomial time computable functions $G_k : \{0,1\}^k \to \{0,1\}^{p(k)}$ for some polynomial $p$ is a pseudorandom number generator (PRNG, or PRG in some references) if it stretches the length of its input ($p(k) > k$ for any $k$), and if its output is computationally indistinguishable from true randomness, i.e. for any probabilistic polynomial time algorithm $A$, which outputs 1 or 0 as a distinguisher,

$$\left| \Pr_{x \gets \{0,1\}^k}[A(G_k(x)) = 1] - \Pr_{r \gets \{0,1\}^{p(k)}}[A(r) = 1] \right| < \mu(k)$$

for some negligible function $\mu$. (The notation $x \gets X$ means that $x$ is chosen uniformly at random from the set $X$.)

There is an equivalent characterization: for any function family $G_k : \{0,1\}^k \to \{0,1\}^{p(k)}$, $G$ is a PRNG if and only if the next output bit of $G$ cannot be predicted by a polynomial time algorithm.

A forward-secure PRNG with block length $t(k)$ is a PRNG $G_k : \{0,1\}^k \to \{0,1\}^k \times \{0,1\}^{t(k)}$, where the input string $s_i$ with length $k$ is the current state at period $i$, and the output $(s_{i+1}, y_i)$ consists of the next state $s_{i+1}$ and the pseudorandom output block $y_i$ of period $i$, that withstands state compromise extensions in the following sense: if the initial state $s_1$ is chosen uniformly at random from $\{0,1\}^k$, then for any $i$, the sequence $(y_1, y_2, \dots, y_i, s_{i+1})$ must be computationally indistinguishable from $(r_1, r_2, \dots, r_i, s_{i+1})$, in which the $r_j$ are chosen uniformly at random from $\{0,1\}^{t(k)}$.

Any PRNG $G : \{0,1\}^k \to \{0,1\}^{p(k)}$ can be turned into a forward-secure PRNG with block length $p(k) - k$ by splitting its output into the next state and the actual output. This is done by setting $G(s) = G_0(s) \,\|\, G_1(s)$, in which $|G_0(s)| = |s| = k$ and $|G_1(s)| = p(k) - k$; then $G$ is a forward-secure PRNG with $G_0$ as the next state and $G_1$ as the pseudorandom output block of the current period.
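As an illustration, a minimal sketch of the split construction above, instantiating $G$ with SHA-512 (an assumption made for brevity, not a standardized design): the first half of each digest serves as the next state $G_0(s)$ and the second half as the output block $G_1(s)$.

```python
# A minimal sketch of the forward-secure split construction, with G
# instantiated (by assumption) as SHA-512: 64 digest bytes are split into a
# 32-byte next state G0(s) and a 32-byte pseudorandom output block G1(s).
import hashlib
import os

def forward_secure_step(state: bytes):
    digest = hashlib.sha512(state).digest()
    return digest[:32], digest[32:]  # (next state, output block)

state = os.urandom(32)  # initial state s1, chosen uniformly at random
for period in range(3):
    state, block = forward_secure_step(state)
    print(period, block.hex())
```

Because each new state is a one-way function of the old one, revealing the state at period $i$ does not help an attacker reconstruct the output blocks of earlier periods.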
Entropy extraction
Santha and Vazirani proved that several bit streams with weak randomness can be combined to produce a higher-quality quasi-random bit stream.
Even earlier, John von Neumann proved that a simple algorithm can remove a considerable amount of the bias in any bit stream, which should be applied to each bit stream before using any variation of the Santha–Vazirani design.
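A minimal sketch of the von Neumann extractor: input bits are read in non-overlapping pairs; a (0,1) pair emits 0, a (1,0) pair emits 1, and equal pairs are discarded. The output is unbiased whenever the input bits are independent and share a constant bias.

```python
# A minimal sketch of von Neumann's debiasing algorithm.
def von_neumann_extract(bits):
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):  # non-overlapping pairs
        if a != b:           # keep only discordant pairs ...
            out.append(a)    # ... and emit the first bit of the pair
    return out

biased = [1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1]
print(von_neumann_extract(biased))  # -> [0, 1, 1]
```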
Designs
In the discussion below, CSPRNG designs are divided into three classes:
those based on cryptographic primitives such as ciphers and cryptographic hashes,
those based upon mathematical problems thought to be hard, and
special-purpose designs.
The last often introduces additional entropy when available and, strictly speaking, are not "pure" pseudorandom number generators, as their output is not completely determined by their initial state. This addition can prevent attacks even if the initial state is compromised.
Designs based on cryptographic primitives
A secure block cipher can be converted into a CSPRNG by running it in counter mode: choose a random key and encrypt a 0, then encrypt a 1, then a 2, and so on. The counter can also be started at an arbitrary number other than zero. Assuming an n-bit block cipher, the output can be distinguished from random data after around 2^(n/2) blocks since, following the birthday problem, colliding blocks should become likely at that point, whereas a block cipher in CTR mode will never output identical blocks. For 64-bit block ciphers this limits the safe output size to a few gigabytes; with 128-bit blocks the limitation is large enough not to impact typical applications. However, when used alone it does not meet all of the criteria of a CSPRNG (as stated above), since it is not strong against "state compromise extensions": with knowledge of the state (in this case a counter and a key) you can predict all past output.
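A minimal sketch of this counter-mode construction, assuming the third-party Python "cryptography" package (an illustration, not a complete DRBG with reseeding):

```python
# A minimal sketch of a CSPRNG built from AES in counter mode: the AES-CTR
# keystream under a fresh random key is the pseudorandom output. The initial
# counter block is fixed at zero here, which is acceptable only because the
# key is random and never reused.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class AesCtrPrng:
    def __init__(self, key: bytes):
        cipher = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16))
        self._enc = cipher.encryptor()

    def random_bytes(self, n: int) -> bytes:
        # Encrypting zero bytes returns the raw keystream.
        return self._enc.update(b"\x00" * n)

prng = AesCtrPrng(os.urandom(32))  # AES-256 key
print(prng.random_bytes(16).hex())
```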
A cryptographically secure hash of a counter might also act as a good CSPRNG in some cases. In this case, it is also necessary that the initial value of this counter is random and secret. However, there has been little study of these algorithms for use in this manner, and at least some authors warn against this use.
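A minimal sketch of the hash-of-a-counter idea, with SHA-256 standing in for the hash (per the text, the starting counter must be both random and secret):

```python
# A minimal sketch: hash an incrementing counter whose start value is secret.
import hashlib
import os

counter = int.from_bytes(os.urandom(32), "big")  # random, secret start value
for _ in range(3):
    print(hashlib.sha256(counter.to_bytes(40, "big")).hexdigest())
    counter += 1
```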
Most stream ciphers work by generating a pseudorandom stream of bits that are combined (almost always XORed) with the plaintext; running the cipher on a counter will return a new pseudorandom stream, possibly with a longer period. The cipher can only be secure if the original stream is a good CSPRNG, although this is not necessarily the case (see the RC4 cipher). Again, the initial state must be kept secret.
Number-theoretic designs
The Blum Blum Shub algorithm has a security proof based on the difficulty of the quadratic residuosity problem. Since the only known way to solve that problem is to factor the modulus, it is generally regarded that the difficulty of integer factorization provides a conditional security proof for the Blum Blum Shub algorithm. However the algorithm is very inefficient and therefore impractical unless extreme security is needed.
The Blum–Micali algorithm has a security proof based on the difficulty of the discrete logarithm problem but is also very inefficient.
Daniel Brown of Certicom wrote a 2006 security proof for Dual_EC_DRBG, based on the assumed hardness of the decisional Diffie–Hellman problem, the x-logarithm problem, and the truncated point problem. The 2006 proof explicitly assumes a lower outlen than in the Dual_EC_DRBG standard, and that the P and Q in the Dual_EC_DRBG standard (which were revealed in 2013 to be probably backdoored by the NSA) are replaced with non-backdoored values.
Special designs
There are a number of practical PRNGs that have been designed to be cryptographically secure, including
the Yarrow algorithm, which attempts to evaluate the entropic quality of its inputs. Yarrow was used in macOS and other Apple OSes until about December 2019, when Apple switched to Fortuna (see /dev/random).
the ChaCha20 algorithm replaced RC4 in OpenBSD (version 5.4), NetBSD (version 7.0), and FreeBSD (version 12.0).
ChaCha20 also replaced SHA-1 in Linux in version 4.8.
the Fortuna algorithm, the successor to Yarrow, which does not attempt to evaluate the entropic quality of its inputs. Fortuna is used in FreeBSD, and Apple changed to Fortuna for most or all Apple OSes beginning around December 2019.
the function CryptGenRandom provided in Microsoft's Cryptographic Application Programming Interface
ISAAC based on a variant of the RC4 cipher
Linear-feedback shift register tuned with evolutionary algorithm based on the NIST Statistical Test Suite.
arc4random
AES-CTR DRBG is often used as a random number generator in systems that use AES encryption.
ANSI X9.17 standard (Financial Institution Key Management (wholesale)), which has been adopted as a FIPS standard as well. It takes as input a TDEA (keying option 2) key bundle k and (the initial value of) a 64-bit random seed s. Each time a random number is required it:
Obtains the current date/time D to the maximum resolution possible.
Computes a temporary value $t = \mathrm{TDEA}_k(D)$.
Computes the random value $x = \mathrm{TDEA}_k(s \oplus t)$, where $\oplus$ denotes bitwise exclusive or.
Updates the seed $s = \mathrm{TDEA}_k(x \oplus t)$.
Obviously, the technique is easily generalized to any block cipher; AES has been suggested.
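A minimal sketch of the generator described above, generalized to AES as the text suggests; the Python "cryptography" package and the use of the nanosecond clock for D are assumptions of this illustration:

```python
# A minimal sketch of an X9.17-style generator with AES as the block cipher.
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def make_x917_generator(key: bytes, seed: bytes):
    def encrypt(block: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    state = {"s": seed}

    def next_block() -> bytes:
        d = time.time_ns().to_bytes(16, "big")  # date/time D, max resolution
        t = encrypt(d)                          # t = E_k(D)
        x = encrypt(xor(state["s"], t))         # x = E_k(s XOR t)
        state["s"] = encrypt(xor(x, t))         # s = E_k(x XOR t)
        return x

    return next_block

gen = make_x917_generator(os.urandom(16), os.urandom(16))
print(gen().hex())
```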
Standards
Several CSPRNGs have been standardized. For example,
FIPS 186-4
NIST SP 800-90A:
This withdrawn standard has four PRNGs. Two of them are uncontroversial and proven: CSPRNGs named Hash_DRBG and HMAC_DRBG.
The third PRNG in this standard, CTR_DRBG, is based on a block cipher running in counter mode. It has an uncontroversial design, but has been proven to be weaker, in terms of distinguishing attacks, than the security level of the underlying block cipher when the number of bits output from the PRNG is greater than two to the power of the underlying block cipher's block size in bits.
When the maximum number of bits output from this PRNG is equal to 2^blocksize, the resulting output delivers the mathematically expected security level that the key size would be expected to generate, but the output is shown to be distinguishable from a true random number generator. When the maximum number of bits output is less than that, the expected security level is delivered and the output appears to be indistinguishable from a true random number generator.
It is noted in the next revision that claimed security strength for CTR_DRBG depends on limiting the total number of generate requests and the bits provided per generate request.
The fourth and final PRNG in this standard is named Dual_EC_DRBG. It has been shown to not be cryptographically secure and is believed to have a kleptographic NSA backdoor.
NIST SP 800-90A Rev.1: This is essentially NIST SP 800-90A with Dual_EC_DRBG removed, and is the withdrawn standard's replacement.
ANSI X9.17-1985 Appendix C
ANSI X9.31-1998 Appendix A.2.4
ANSI X9.62-1998 Annex A.4, obsoleted by ANSI X9.62-2005, Annex D (HMAC_DRBG)
A good reference is maintained by NIST.
There are also standards for statistical testing of new CSPRNG designs:
A Statistical Test Suite for Random and Pseudorandom Number Generators, NIST Special Publication 800-22.
NSA kleptographic backdoor in the Dual_EC_DRBG PRNG
The Guardian and The New York Times have reported in 2013 that the National Security Agency (NSA) inserted a backdoor into a pseudorandom number generator (PRNG) of NIST SP 800-90A which allows the NSA to readily decrypt material that was encrypted with the aid of Dual_EC_DRBG. Both papers report that, as independent security experts long suspected, the NSA has been introducing weaknesses into CSPRNG standard 800-90; this being confirmed for the first time by one of the top secret documents leaked to the Guardian by Edward Snowden. The NSA worked covertly to get its own version of the NIST draft security standard approved for worldwide use in 2006. The leaked document states that "eventually, NSA became the sole editor." In spite of the known potential for a kleptographic backdoor and other known significant deficiencies with Dual_EC_DRBG, several companies such as RSA Security continued using Dual_EC_DRBG until the backdoor was confirmed in 2013. RSA Security received a $10 million payment from the NSA to do so.
Security flaws
DUHK attack
On October 23, 2017, Shaanan Cohney, Matthew Green, and Nadia Heninger, cryptographers at The University of Pennsylvania and Johns Hopkins University released details of the DUHK (Don't Use Hard-coded Keys) attack on WPA2 where hardware vendors use a hardcoded seed key for the ANSI X9.31 RNG algorithm, stating "an attacker can brute-force encrypted data to discover the rest of the encryption parameters and deduce the master encryption key used to encrypt web sessions or virtual private network (VPN) connections."
Japanese PURPLE cipher machine
During World War II, Japan used a cipher machine for diplomatic communications; the United States was able to crack it and read its messages, mostly because the "key values" used were insufficiently random.
References
External links
, Randomness Requirements for Security
Java "entropy pool" for cryptographically secure unpredictable random numbers.
Java standard class providing a cryptographically strong pseudo-random number generator (PRNG).
Cryptographically Secure Random number on Windows without using CryptoAPI
Conjectured Security of the ANSI-NIST Elliptic Curve RNG, Daniel R. L. Brown, IACR ePrint 2006/117.
A Security Analysis of the NIST SP 800-90 Elliptic Curve Random Number Generator, Daniel R. L. Brown and Kristian Gjosteen, IACR ePrint 2007/048. To appear in CRYPTO 2007.
Cryptanalysis of the Dual Elliptic Curve Pseudorandom Generator, Berry Schoenmakers and Andrey Sidorenko, IACR ePrint 2006/190.
Efficient Pseudorandom Generators Based on the DDH Assumption, Reza Rezaeian Farashahi and Berry Schoenmakers and Andrey Sidorenko, IACR ePrint 2006/321.
Analysis of the Linux Random Number Generator, Zvi Gutterman and Benny Pinkas and Tzachy Reinman.
NIST Statistical Test Suite documentation and software download.
Cryptographic algorithms
Cryptographically secure pseudorandom number generators
Cryptographic primitives |
182369 | https://en.wikipedia.org/wiki/Whitelisting | Whitelisting | A whitelist (or, less commonly, a passlist or allowlist) is a mechanism which explicitly allows some identified entities to access a particular privilege, service, mobility, or recognition i.e. it is a list of things allowed when everything is denied by default. It is the opposite of a blacklist which is list of things denied when everything is allowed by default.
Email whitelists
Spam filters often include the ability to "whitelist" certain sender IP addresses, email addresses or domain names to protect their email from being rejected or sent to a junk mail folder. These can be manually maintained by the user or system administrator, but can also refer to externally maintained whitelist services.
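As a minimal sketch (all names illustrative), a spam filter's whitelist check might let listed senders or domains bypass filtering:

```python
# A minimal sketch of sender whitelisting in a spam filter.
WHITELISTED_ADDRESSES = {"alice@example.com"}
WHITELISTED_DOMAINS = {"example.org"}

def is_whitelisted(sender: str) -> bool:
    sender = sender.lower()
    domain = sender.rsplit("@", 1)[-1]
    return sender in WHITELISTED_ADDRESSES or domain in WHITELISTED_DOMAINS

print(is_whitelisted("Bob@example.org"))  # True: domain is whitelisted
print(is_whitelisted("eve@example.net"))  # False: goes through spam checks
```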
Non-commercial whitelists
Non-commercial whitelists are operated by various non-profit organisations, ISPs, and others interested in blocking spam. Rather than paying fees, the sender must pass a series of tests; for example, their email server must not be an open relay and have a static IP address. The operator of the whitelist may remove a server from the list if complaints are received.
Commercial whitelists
Commercial whitelists are a system by which an Internet service provider allows someone to bypass spam filters when sending email messages to its subscribers, in return for a pre-paid fee, either an annual or a per-message fee. A sender can then be more confident that their messages have reached recipients without being blocked, or having links or images stripped out of them, by spam filters. The purpose of commercial whitelists is to allow companies to reliably reach their customers by email.
Advertising whitelists
Many websites rely on ads as a source of revenue, but the use of ad blockers is increasingly common. Websites that detect an adblocker in use often ask for it to be disabled, or for their site to be "added to the whitelist", a standard feature of most adblockers.
Network whitelists
Network Whitelisting can occur at different layers of the OSI model.
LAN whitelists
LAN whitelists are enforced at layer 2 of the OSI model. Another use for whitelists is local area network (LAN) security. Many network administrators set up MAC address whitelists, or a MAC address filter, to control who is allowed on their networks. This is used when encryption is not a practical solution, or in tandem with encryption. However, it is sometimes ineffective because a MAC address can be faked.
Firewall whitelists
Some firewalls can be configured to only allow data traffic from or to certain IP addresses or ranges. A firewall generally works at layers 3 and 4 of the OSI model: layer 3 is the network layer, where IP operates, and layer 4 is the transport layer, where TCP and UDP function.
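A minimal sketch of such a layer-3 whitelist check, using only Python's standard library (the addresses shown are documentation-reserved examples):

```python
# A minimal sketch of IP whitelisting: allow a packet only if its source
# address falls inside one of the whitelisted networks; deny by default.
import ipaddress

WHITELIST = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("203.0.113.7/32"),
]

def allow_packet(src_ip: str) -> bool:
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in WHITELIST)

print(allow_packet("192.0.2.42"))    # True: inside 192.0.2.0/24
print(allow_packet("198.51.100.9"))  # False: default deny
```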
Application whitelists
The application layer is layer 7 in the Open Systems Interconnection (OSI) seven-layer model and in the TCP/IP protocol suite. Whitelisting is commonly enforced by applications at this level.
One approach in combating viruses and malware is to whitelist software which is considered safe to run, blocking all others. This is particularly attractive in a corporate environment, where there are typically already restrictions on what software is approved.
Leading providers of application whitelisting technology include Bit9, Velox, McAfee, Lumension, Airlock Digital and SMAC
On Microsoft Windows, recent versions include AppLocker, which allows administrators to control which executable files are denied or allowed to execute. With AppLocker, administrators are able to create rules based on file names, publishers or file location that will allow certain files to execute. Rules can apply to individuals or groups. Policies are used to group users into different enforcement levels. For example, some users can be added to a report-only policy that will allow administrators to understand the impact before moving that user to a higher enforcement level.
Linux systems typically have the AppArmor and SELinux features available, which can be used to effectively block all applications which are not explicitly whitelisted; commercial products are also available.
HP-UX introduced a feature called "HP-UX Whitelisting" in version 11iv3.
Controversy
In 2018, a journal commentary on a report on predatory publishing claimed that 'white' and 'black' are racially charged terms that should be avoided in usages such as 'whitelist' and 'blacklist'. The commentary gained mainstream attention in the summer of 2020 following the George Floyd protests in America, in which a black man was murdered by a police officer, sparking protests against police brutality.
The premise of the commentary is that 'black' and 'white' carry negative and positive connotations respectively. It states that, since the first recorded usage of 'blacklist' was during "the time of mass enslavement and forced deportation of Africans to work in European-held colonies in the Americas," the word is therefore related to race. There is no mention of 'whitelist' and its origin or relation to race.
This issue is most widely disputed in computing industries where 'whitelist' and 'blacklist' are prevalent (e.g. IP whitelisting). Despite the commentary nature of the journal article, some companies and individuals have taken to replacing 'whitelist' and 'blacklist' with new alternatives such as 'allow list' and 'deny list'.
Those who oppose these changes question the attribution to race, citing the same etymology quote that the 2018 commentary uses. The quote suggests that the term 'blacklist' arose from 'black book' almost 100 years earlier. 'Black book' does not appear to have any etymology or sources that support ties to race, instead coming from the 1400s, referring "to a list of people who had committed crimes or fallen out of favor with leaders", and popularized by King Henry VIII's literal usage of a book bound in black. Others also note the prevalence of positive and negative connotations to 'white' and 'black' in the Bible, predating attributions to skin tone and slavery. It was not until the 1960s Black Power movement that "Black" became a widespread word to refer to one's race as a person of color in America (as an alternative to African-American), lending itself to the argument that the negative connotations behind 'black' and 'blacklist' both predate attribution to race.
See also
Blacklisting
Blacklist (computing)
DNSWL, whitelisting based on DNS
Walled garden (technology), a whitelist that a device's owner cannot control
References
Spamming
Antivirus software
Malware
Social privilege
Social status
Databases
Blacklisting |
182410 | https://en.wikipedia.org/wiki/Television%20in%20the%20United%20Kingdom | Television in the United Kingdom | Regular television broadcasts in the United Kingdom started in 1936 as a public service which was free of advertising, while the introduction of television and the first tests commencing in 1927. Currently, the United Kingdom has a collection of free-to-air, free-to-view and subscription services over a variety of distribution media, through which there are over 480 channels for consumers as well as on-demand content. There are six main channel owners who are responsible for most material viewed.
There are 27,000 hours of domestic content produced a year at a cost of £2.6 billion. Since 24 October 2012, all television broadcasts in the United Kingdom have been in a digital format, following the end of analogue transmissions in Northern Ireland. Digital content is delivered via terrestrial, satellite and cable, as well as over IP. As of 2003, 53.2% of households watch through terrestrial, 31.3% through satellite, and 15.6% through cable.
The Royal Television Society (RTS) is a British-based educational charity for the discussion, and analysis of television in all its forms, past, present, and future. It is the oldest television society in the world.
Broadcast television providers
Free-to-air, free-to-view and subscription providers operate, with differences in the number of channels, capabilities such as the programme guide (EPG), video on demand (VOD), high-definition (HD), interactive television via the red button, and coverage across the UK. All providers make available the UK's five most-watched channels: BBC One, BBC Two, ITV, Channel 4 and Channel 5.
Broadcast television is distributed as radio waves via terrestrial or satellite transmissions, or as electrical or light signals through ground-based cables. In the UK, these use the Digital Video Broadcasting standard. Most TVs sold in the UK (as well as much of the rest of Europe) come with a DVB-T (terrestrial) tuner. Following the financial failure of digital terrestrial pay TV service ITV Digital in 2002, UK digital terrestrial TV services were rebranded as Freeview and do not require a subscription. Set-top boxes are generally used to receive channels from other providers. Most services have integrated their broadcast TV services with additional video streams distributed via the Internet, or through their own Internet Protocol network.
The Broadcasters' Audience Research Board publish quarterly statistics of the number of UK households per broadcast TV platform. Aggregating the statistics for Q1 2020 show that 56% subscribe to one or more broadcast TV services, vs 44% who receive free TV.
Digital terrestrial television
The primary digital terrestrial TV service is Freeview, which launched in 2002 and is free of charge to view. It replaced the subscription service named ONdigital (later ITV Digital), which ran from 1998 to 2002. Digital terrestrial television was itself the replacement for analogue terrestrial TV, which ran from 1936 to 2012.
As of March 2021, Freeview carries around 70 TV and radio channels, which are received via an aerial. It is operated by DTV Services Ltd., a joint venture between the BBC, ITV, Channel 4 and Sky. The transmitter network is predominantly operated by Arqiva.
The TV channels are transmitted in bundles, called multiplexes, and the available channels are dependent on how many multiplexes are transmitted in each area. The 7 national multiplexes are available to 76% of households from 30 transmitters; 6 multiplexes are available to 14% of households from 60 transmitters; and 3 multiplexes are available to 9% of households from 1,069 transmitters. The seventh national multiplex is due to close by 30 June 2022. In Northern Ireland, a multiplex carrying channels from the Republic of Ireland can reach 71% of Northern Irish households from 3 transmitters. Local TV and radio are available to 54% of households from an additional multiplex at 44 transmitters, and an extra multiplex is available to 54% of households in Greater Manchester.
Multiple vendors sell hybrid set-top-boxes or smart TVs which combine terrestrial channels with streamed (Internet TV) content. Internet-based TV services such as BBC iPlayer, ITV Hub and All 4 are available via the broadband connection of Freeview Play and YouView receivers. These also support optional subscription services such as Netflix and Prime Video. BT TV and TalkTalk TV offer additional subscription channels for their respective broadband customers using YouView devices. Netgem TV offers set-top-boxes combining Freeview Play with subscription channels, and is available directly or from several broadband suppliers.
Saorview, the terrestrial TV service in Ireland which launched in 2011, can be received in parts of Northern Ireland via overspill transmissions.
Cable television
Many regional companies developed cable-television services in the late 1980s and 1990s, as licences for cable television were awarded on a city-by-city basis. The mid-1990s saw the companies start to merge, and by the turn of the century only three big companies remained. In 2006 Telewest and NTL merged, and the combined company rebranded as Virgin Media in 2007; its network is available to 55% of households. Cable TV is a subscription service normally bundled with a phone line and broadband.
Satellite television
There are two distinctly-marketed direct-broadcast satellite (DBS) services (also known as direct-to-home (DTH), to be distinguished from satellite signals intended for non-consumer reception).
Sky TV is a subscription service operated by Sky Ltd, owned by Comcast, which launched in 1998 as SkyDigital. Compared to the previous analogue service which had launched in 1989, it provided more channels, widescreen, interactive TV and a near video-on-demand service using staggered start times for pay-per-view content. Innovations since have included high definition, 3D TV, a digital video recorder, the ability to view recordings on other devices, remote operation via the Internet to add recordings, and on-demand content via the satellite-receiver's broadband connection of both Sky and third-party TV. The Sky subscription also includes access to Sky Go, which allows mobile devices and computers to access subscription content via the Internet.
Freesat is a free satellite service developed jointly by the BBC and ITV. In contrast to Freesat from Sky, it does not need a viewing card. Like Sky, it provides high-definition content, digital recording and video-on-demand via the broadband connection. The on-screen programme guide lists only the freely available channels, rather than encrypted channels which need a subscription to view.
Freesat and Sky TV transmit from SES Astra satellites at 28.2° east (Astra 2E/2F/2G). As the satellites are in geostationary orbit, they are positioned above the Earth's equator, approximately 35,786 km above sea level; this places them above the Democratic Republic of the Congo.
Internet video services
TV via the Internet can be streamed or downloaded, and consist of amateur or professionally produced content. In the UK, most broadcasters provide catch-up TV services which allow viewing of TV for a window after it was broadcast. Online video can be viewed via mobile devices, computers, TVs equipped with a built in Internet connection, or TVs connected to an external set-top-box, streaming stick or games console. Most of the broadcast TV providers have integrated their set-top-boxes with Internet video to provide a hybrid broadcast and online service.
Catch-up services
Since 2006, UK channel owners and content producers have been creating Internet services to access their programmes. Often, these are available for a window after the broadcast schedule. These services generally block users outside of the UK.
Online video services for professionally produced content
There are numerous online services targeting the UK, offering a combination of subscription, rental and purchase options for viewing online TV. Most are available via any Internet connection, however some require a specific broadband connection. Some services sell 3rd party services, such as Amazon's Prime Video.
BARB tracks the number of households subscribing to Netflix, Prime Video and Now, referred to as SVOD households. Their statistics for Q1 2020 show that 53% of households subscribe to at least one of these, and 24% to at least two. Netflix has 13.01 million subscribers, Prime Video (Amazon) has 7.86 million, and Now has 1.62 million, according to BARB's figures for Q1 2020. BARB's equivalent figures for broadcast TV show that 56% of households subscribe.
The table following summarises some of the available Internet TV services in the UK. For brevity, it does not include catch-up-only or amateur-only services, individual channels, distributors of illegal or adult content, services which solely redistribute free broadcast channels, portals, or services which don't target the UK. 'Free' refers to free at the point of consumption, not including fees for Internet connectivity or a TV licence.
Other international streaming services with pricing in GBP include: Acorn TV, Arrow, BKTV, Crunchyroll, Dekkoo, Demand Africa, Docsville, Funimation Now, GuideDoc, Hayu, Hoichoi, Hotstar, iQiyi, iWantTFC, Mubi, NewsPlayer+, Revry, Shudder, Starz, True Story, WOW Presents Plus and ZEE5.
Channels and channel owners
Viewing statistics
Most viewed channels
The Broadcasters' Audience Research Board (BARB) measures television ratings in the UK. As of 2 January 2022, the average daily viewing time per home was 3 hours 8 minutes (of BARB-reported channels, includes broadcast and Internet viewings). 15 channels have a 4-week share of ≥ 1.0%.
Most viewed broadcaster groups
There are 10 broadcaster groups with a four-week share of ≥ 1.0% (although BARB reports sub-groups of BBC and Paramount individually, and it is unclear what the 'ITV' group refers to).
BBC and UKTV
The British Broadcasting Corporation (BBC) is the world's oldest and largest broadcaster, and is the country's principal public service broadcaster of radio and television. BBC Television is funded primarily by a television licence and by sales of its programming to overseas markets. It does not carry advertising. The licence fee is levied on all households that watch or record TV as it is being broadcast, and the fee is determined by periodic negotiation between the government and the BBC.
Its first analogue terrestrial channel was launched by the BBC Television Service in 1936. It rebranded to BBC 1 in 1964, the same year that BBC 2 launched as the UK's third analogue terrestrial channel, after ITV. BBC News 24 launched as an analogue cable channel in 1997, later rebranding to BBC News. BBC Parliament, originally an analogue cable channel known as The Parliamentary Channel, was acquired by the BBC in 1998. From 1998 onwards, the BBC started digital TV transmissions, launching new channels and broadcasting via satellite in addition to terrestrial and cable.
The BBC's Internet-based service iPlayer contains content from the BBC's TV channels, the Welsh-language public-service broadcaster S4C, as well as videos created from BBC radio programmes, with Radio 1 in particular appearing as a channel alongside the normal TV channels.
UKTV is a commercial broadcaster owned by BBC Studios, one of the BBC's commercial units. Originating in 1992 with UK Gold, UKTV expanded its channels from 1997 onwards, with the BBC taking full ownership in June 2019. Unlike the BBC's public service channels, the UKTV channels contain advertising.
ITV
ITV is the network of fourteen regional and one national commercial television franchises, founded in 1955 to provide competition to the BBC. ITV was the country's first commercial television provider funded by advertisements, and has been the most popular commercial channel through most of its existence. Each region was originally independent and used its own on-air identity. Through a series of mergers, takeovers and relaxation of regulation, thirteen of the franchises are now held by ITV plc, and the remaining two by STV Group. STV Group uses the on-air brand name of STV for its two franchises in Scotland. ITV plc uses the on-air brand name of UTV in Northern Ireland, and ITV for the remaining regions, although UTV has used ITV branding and presentation since April 2020 due to the impact of coronavirus on staffing. The national breakfast-time franchise is held by ITV plc and appears as an indistinguishable programming block across the network. Legally, the network has been referred to as Channel 3 since 1990, which is the name Ofcom uses.
Since 1998, ITV plc has operated additional free or subscription channels, starting with ITV2.
Channel 4
Launched in 1982, Channel 4 is a state-owned national broadcaster which is funded by its commercial activities (including advertising). Channel 4 has expanded greatly since gaining greater independence from the IBA, especially in the multi-channel digital world, launching E4, Film4, More4, 4Music, 4seven and various timeshift services. Since 2005, it has been a member of the Freeview consortium, and operates one of the six digital terrestrial multiplexes with ITV as Digital 3&4. Since the advent of digital television, Channel 4 is now also broadcast in Wales across all digital platforms. Channel 4 was the first British channel not to carry regional variations in programming; however, it does have six set advertising regions.
With Bauer Media Group, Channel 4 jointly owns a range of music channels under the Box Plus Network banner.
Sky
Sky is a European broadcaster owned by global American media conglomerate Comcast. Sky Television launched in 1989, with a 4-channel service received via satellite. The channels at launch were Sky Channel, Sky News, Sky Movies and Eurosport. They were initially free to receive, and Sky Movies was the first to move to a subscription early in 1990. Sky News was the UK's first dedicated news channel. The new service was the UK's first consumer satellite TV service, beating rival BSB, with which Sky would later merge to become BSkyB. Sky's satellite service grew to become a subscription platform through which Sky offer their own channels, pay-per-view services and channels from other broadcasters. Sky's digital platform launched in 1998, with the original analogue service closing in 2001. Sky was acquired by Comcast in 2018.
Since 2012, Sky operate Now, an Internet TV streaming service offering subscriptions without a fixed-term contract.
Sky's channel portfolio has grown greatly since the launch of digital TV. Sky make their channels available via rival cable and Internet services as well as their own satellite service and Now.
Paramount Global
Channel 5 was the fifth analogue terrestrial channel to launch, in March 1997. Due to constraints with the available UHF frequencies at the time, many households had to retune their video recorders, which shared the frequency on their RF output with the frequency used by Channel 5's new broadcasts. Channel 5 was the first terrestrial channel to also broadcast via satellite. From 2006 onwards, Channel 5 launched new digital channels and an Internet on-demand service. After changing ownership several times, in May 2014 Channel 5 and its sister channels were acquired by Viacom, an American media conglomerate known as Paramount since 2022.
By the time it acquired Channel 5, Paramount already operated a large number of subscription channels in the UK, including the MTV, Nickelodeon and Comedy Central channels, which are available via Sky TV, Virgin Media and Now. In terms of viewing share, the combined viewing across Paramount's channels make the group the UK's 5th largest broadcaster, according to BARB's viewing figures for 1 March 2020.
Local and regional television
Local television
Since 2012, additional local TV channels are available via Freeview channel 7 or 8. The channels are licensed by Ofcom, with 34 local TV channels licensed as of 2 July 2020. 19 of the licenses are held by That's TV, and 8 are held by Made Television. The remainder are held independently. Each license contains the amount of local TV programming required. As an example, the license for Scarborough, which is held by That's TV, requires 7 hours of local programming per week (1 hour/day on average). 13 additional licenses were originally intended, but Ofcom decided not to advertise these in June 2018.
The way Ofcom structured local TV (being dependent on terrestrial transmission) was criticised in a Guardian article in 2015 as being 'years behind in its thinking', as it doesn't account for the Internet. In the article, Ofcom responded that the licensing scheme was inherited from the Department for Digital, Culture, Media and Sport. In April 2018, BBC News reported that 'many of the stations have been ridiculed for the poor quality of their output or have been reported to Ofcom for breaching broadcasting rules'. The local TV companies receive a subsidy from the BBC of £147.50 per local news story, funded by the licence fee, paid whether the BBC uses the content or not. A June 2018 article on BuzzFeed claims that That's TV was created primarily to extract money from the BBC whilst delivering little content of useful value.
Regional television
BBC One, BBC Two and the ITV network (comprising ITV and STV) are split into regions in which regional news and other programming is broadcast. ITV/STV is split into 14 geographic licensees, with several of these split into 2 or 3 sub-regions, resulting in a greater total number of regional news programmes. Ofcom sets a quota for the BBC and ITV on the amount of regional programming required.
Advertising on ITV/STV and Channel 4 is regional. Channel 4 is split into 6 advertising regions, but has no regional programming.
Country-specific channels
BBC Scotland and the Gaelic-language channel BBC Alba target Scotland, and the Welsh-language channel S4C targets Wales. In Northern Ireland, channels originating in the Republic of Ireland are available, including RTÉ One, RTÉ2 and the Irish-language TG4.
Programming
British television differs from that of other countries, such as the United States, inasmuch as programmes produced in the United Kingdom generally do not have a long season run of around 20 weeks. Instead, they are produced in a series: a set of episodes varying in length, usually aired over a period of a few months. See List of British television series.
100 Greatest British Television Programmes
100 Greatest British Television Programmes was a list compiled in 2000 by the British Film Institute (BFI), chosen by a poll of industry professionals, to determine what were the greatest British television programmes of any genre ever to have been screened. Although not including any programmes made in 2000 or later, the list is useful as an indication of what were generally regarded as the most successful British programmes of the 20th century. The top 10 programmes are:
100 Greatest TV Moments
100 Greatest TV Moments was a list compiled by Channel 4 in 1999. The top 10 entries are:
List of most watched television broadcasts
The majority of special events attracting large audiences are often carried on more than one channel. The most-watched programme of all time on a single channel is the 1973 wedding ceremony of The Princess Anne, shown only on BBC1. The figures in these tables represent the average viewership achieved by each broadcast during its run-time and do not include peak viewership.
Post-1981 figures verified by the Broadcasters' Audience Research Board (BARB)
Pre-1981 figures supplied by the British Film Institute (BFI)
Notes:
The Wedding of Princess Margaret and Lord Snowdon (6 May 1960) was watched by an estimated 25 million viewers in Britain.
At least two Muhammad Ali boxing matches were reported to have been watched by at least 26 million viewers in the United Kingdom: the Fight of the Century (Ali vs. Frazier) was reported to have been watched by 27.5 million British viewers in 1971, and The Rumble in the Jungle (Ali vs. Foreman) was reported to have been watched by 26 million viewers on BBC1 in 1974.
Live Aid is reported to have reached approximately 24.5 million British viewers in July 1985.
The Wedding of Prince William and Catherine Middleton (29 April 2011) received a total audience peak of 26 million viewers, but this is a combined figure aggregated from the ten different channels that broadcast the ceremony. The highest figures of these were 13.59 million on BBC1, with an extra 4.02 million watching on ITV.
Genre lists
100 Greatest Kids' TV shows
The 100 Greatest Kids' TV shows was a poll conducted by the British television channel Channel 4 in 2001. The top 5 UK-produced programmes are:
British Academy Television Award for Best Drama Series
The British Academy Television Award for Best Drama Series is one of the major categories of the British Academy Television Awards. The last 5 winners are:
2020: The End of the F***ing World – Clerkenwell Films / Channel 4
2019: Killing Eve – Sid Gentle Films / BBC One
2018: Peaky Blinders – Tiger Aspect Productions / BBC Two
2017: Happy Valley – Red Production Company / BBC One
2016: Wolf Hall – Company Pictures / BBC Two
Terrestrial channel programming
Weekday
Weekday programming on terrestrial channels begins at 6 am with breakfast national news programmes (along with regional news updates) on BBC Breakfast on BBC One and Good Morning Britain on ITV, while Channel 5 shows children's programmes under the Milkshake! brand. Channel 4 predominantly broadcasts comedy programmes, such as Everybody Loves Raymond, in its morning slot. The weekday breakfast news programme ends at 9:15 am on BBC One and 9 am on ITV.
Following this on BBC One, lifestyle programming is generally shown, including property, auction, and home and gardening shows. BBC One continues this genre until after the lunchtime news, after which the afternoon features the soap Doctors followed by dramas. BBC Two airs BBC News updates and political programming between 9 am and 1 pm. Channel 4 often shows home-project and archaeology lifestyle programming in the early afternoon after a Channel 4 News summary. Channel 5 broadcasts chat shows in the morning, including Jeremy Vine, with regular news bulletins. In the afternoon, it shows dramas, followed by an hour of Australian soaps such as Home and Away and Neighbours, and then films.
News bulletins are broadcast between 6 pm and 7 pm on both BBC One and ITV, with BBC One beginning with the national BBC News at Six and ITV with the flagship regional news programme. At around 6:30 pm, BBC One broadcasts the regional news programmes whilst ITV broadcasts the ITV Evening News. Channel 4 News starts at 7 pm and 5 News broadcasts for an hour at 5 pm.
Primetime programming is usually dominated by further soaps, including EastEnders on BBC One, Coronation Street and Emmerdale on ITV, and Hollyoaks on Channel 4. The scheduling of these soap operas - or 'continuing dramas', as they are now called - can vary throughout the year, though weekly dramas, such as Holby City, have fixed slots. BBC Two broadcasts factual programming, including lifestyle shows and documentaries. BBC Four begins programming at 7 pm and shows a wide variety of programmes, including arts, documentaries, music, international film, comedy, original programmes, drama and current affairs. It is required by its licence to air at least 100 hours of new arts and music programmes and 110 hours of new factual programmes, and to premiere 20 foreign films each year. BBC One, BBC Two, ITV, Channel 4 and Channel 5 broadcast dramas and documentaries in the evenings. At 10 pm the flagship national news airs on BBC One with BBC News at Ten (followed by Newsnight on BBC Two) and on ITV with ITV News at Ten, followed by the regional late-night news. Because schedules vary, UK viewers often rely on TV guides, whether in newspapers, online, via information services on the television such as the BBC Red Button service, or through built-in electronic programme guides.
Weekend
Weekend daytime programming traditionally consists of more lifestyle programming, plus films and live and recorded coverage of sporting events on most weekend afternoons. There are further battles for viewers in the weekend primetime slot, often featuring documentaries and game shows in the evening. Lunchtime, early evening and late evening news programmes continue on BBC One and ITV, although the bulletins are shorter than during the week. Sunday night schedules usually consist of dramas, light entertainment, documentaries, films, music concerts, festivals or sporting events.
Cultural impact
Christian morality
In 1963 Mary Whitehouse, incensed by the liberalising policies followed by Sir Hugh Greene, then director general of the BBC, began her letter-writing campaign. She subsequently launched the Clean Up TV Campaign, and founded the National Viewers' and Listeners' Association in 1965. In 2008, Toby Young wrote in an article for The Independent: "On the wider question of whether sex and violence on TV has led to a general moral collapse in society at large, the jury is still out. No one doubts that Western civilization is teetering on the brink ... but it is unfair to lay the blame entirely at the feet of BBC2 and Channel 4."
In 2005, the BBC's broadcast of Jerry Springer: The Opera elicited 55,000 complaints, and provoked protests from Christian organisation Christian Voice, and a private prosecution against the BBC by the Christian Institute. A summons was not issued.
Awards
The British Academy Television Awards are the most prestigious awards given in the British television industry, analogous to the Emmy Awards in the United States. They have been awarded annually since 1954, and are only open to British programmes. After all the entries have been received, they are voted for online by all eligible members of the Academy. The winner is chosen from the four nominees by a special jury of nine academy members for each award, the members of each jury selected by the Academy's Television Committee.
The National Television Awards is a British television awards ceremony, sponsored by ITV and initiated in 1995. Although not widely held to be as prestigious as the BAFTAs, the National Television Awards are probably the most prominent ceremony for which the results are voted on by the general public. Unlike the BAFTAs, the National Television Awards allow foreign programmes to be nominated, providing they have been screened on a British channel during the eligible time period.
Regulation
Ofcom is the independent regulator and competition authority for the communication industries in the United Kingdom, including television. As the regulatory body for media broadcasts, Ofcom's duties include:
Specification of the Broadcast Code, which took effect on 25 July 2005, with the latest version published in October 2008. The Code itself is published on Ofcom's website, and provides a mandatory set of rules with which broadcast programmes must comply. The 10 main sections cover protection of under-eighteens, harm and offence, crime, religion, impartiality and accuracy, elections, fairness, privacy, sponsorship and commercial references. As stipulated in the Communications Act 2003, Ofcom enforces adherence to the Code. Failure by a broadcaster to comply with the Code can result in warnings, fines, and potentially revocation of a broadcasting licence.
Rules on the amount and distribution of advertising, which also took effect July 2005
Examining specific complaints by viewers or other bodies about programmes and sponsorship. Ofcom issues Broadcast Bulletins on a fortnightly basis which are accessible via its web site. As an example, a bulletin from February 2009 has a complaint from the National Heart Forum over sponsorship of The Simpsons by Domino's Pizza on Sky One. Ofcom concluded this was in breach of the Broadcast Code, since it contravened an advertising restriction of food high in fat, salt or sugar. (Restrictions in food and drink advertising to children were introduced in November 2006.)
The management, regulation and assignment of the electromagnetic spectrum in the UK, and licensing of portions of the spectrum for television broadcasting
Public consultations on matters relating to TV broadcasting. The results of the consultations are published by Ofcom, and inform the policies that Ofcom creates and enforces.
In 2008, Ofcom issued fines totalling £7.7m. This included £5.67m in fines to ITV companies - among them a £3m fine to LWT over voting irregularities on Saturday Night Takeaway - and fines totalling £495,000 to the BBC. Ofcom said phone-in scandals had contributed significantly to the fine totals.
The Committee of Advertising Practice (CAP, or BCAP for broadcast) is the body contracted by Ofcom to create and maintain the codes of practice governing television advertising. The Broadcast Advertising Codes (or the TV codes) are accessible on CAP's web site. The Codes cover advertising standards (the TV Code), guidance notes, scheduling rules, text services (the Teletext Code) and interactive television guidance. The main sections of the TV Code concern compliance, programmes and advertising, unacceptable products, political and controversial issues, misleading advertising, harm and offence, children, medicines, treatments, health claims and nutrition, finance and investments, and religion.
The Advertising Standards Authority is an independent body responsible for resolving complaints relating to the advertising industry within the UK. It is not government funded, but funded by a levy on the advertising industry. It ensures compliance with the Codes created by CAP. The ASA covers all forms of advertising, not just television advertisements. The ASA can refer problematic adverts to Ofcom, since the channels carrying the adverts are ultimately responsible for the advertising content, and are answerable to Ofcom. Ofcom can issue fines or revoke broadcast licences if necessary.
Licensing
In the United Kingdom and the Crown dependencies, a television licence is required to receive any publicly broadcast television service, or for using BBC iPlayer. This includes the commercial channels, cable and satellite transmissions, and Internet-streamed channels, and applies regardless of the technology used to view. The money from the licence fee is used to provide radio, television and Internet content for the BBC, Welsh-language television programmes for S4C, monitoring of global mass media, nine orchestras and performing groups, technical research, and contributions to broadband roll-out. The fee is classified as a hypothecated tax rather than a subscription.
Production
As of 2002, 27,000 hours of original programming were produced per year in the UK television industry, excluding news, at a cost of £2.6bn. Ofcom has determined that 56% (£1.5bn) of production is in-house by the channel owners, with the remainder made by independent production companies. Ofcom enforces a 25% independent production quota for the channel operators, as stipulated in the Broadcasting Act 1990.
In-house production
ITV plc, the company which owns 12 of the 15 regional ITV franchises, has set its production arm ITV Studios a target of producing 75% of the ITV schedule, the maximum allowed by Ofcom. This would be a rise from 54% at present, part of a content-led strategy intended to double production revenues to £1.2bn by 2012. ITV Studios currently produces programmes such as Coronation Street, Emmerdale and Heartbeat.
In contrast, the BBC has implemented a Window of Creative Competition (WOCC), a 25% proportion over and above the 25% Ofcom quota in which the BBC's in-house production and independent producers can compete. The BBC produces shows such as All Creatures Great and Small and F***off I'm a Hairy Woman.
Channel 4 commissions all programmes from independent producers.
Independent production
As a consequence of the launch of Channel 4 in 1982, and the 25% independent quota from the Broadcasting Act 1990, an independent production sector has grown in the UK. Notable companies include Talkback Thames, Endemol UK, Hat Trick Productions, and Tiger Aspect Productions. A full list can be seen here: :Category:Television production companies of the United Kingdom
History
Timeline
Closed and aborted television providers
The following services were aborted before launch:
Sky Picnic, a proposed subscription digital terrestrial service from Sky in 2007
'Project Kangaroo', an Internet TV service announced by the BBC, ITV and Channel 4 in 2007. Some of the technology was reused in SeeSaw. A similar concept later launched as BritBox.
Analogue terrestrial television
Analogue TV was transmitted via VHF (1936) and later UHF (1964) radio waves, with analogue broadcasts ending in 2012.
VHF transmissions started in 1936 and closed in 1985 (with a gap 1939–1946), carrying two channels. The launch channel was the BBC Television Service, known as BBC 1 since 1964. This was joined by Independent Television, a network of regional franchises launching between 1955 and 1962. The channels transmitted in monochrome using the 405-line television system at 25 frames per second, initially with an aspect ratio of 5:4, switching to 4:3 in 1950.
UHF transmissions started in 1964 and closed in 2012. The launch channel was BBC 2. This would be joined by BBC 1, the ITV network, Channel 4 or S4C in Wales, Channel 5 as well as a network of local TV channels. Transmissions started using the System I standard, a 625-line monochrome picture at 25 frames/second (576i) and a 4:3 aspect ratio. Technical advancements included colour (1967), teletext (1974), and stereo sound (1991). The drive to switch viewers from analogue to digital transmissions was a process called the digital switchover.
Whilst there are no longer any analogue broadcasts in the UK, a PAL signal may be present in closed RF distribution systems, e.g. a video feed from an intercom in a block of flats, or a security system.
Defunct channels
There are nearly 200 defunct British channels. For a list, see List of former TV channels in the UK or :Category:Defunct British television channels.
Commentary
The rise of television in the UK
The British Broadcasting Corporation (BBC) was established in 1927 to develop radio broadcasting, and inevitably became involved in TV in 1936. The BBC is funded by income from a "Broadcast Receiving Licence" purchased by UK residents. The cost of this is set by agreement with the UK Government.
Television caught on in the United Kingdom in 1947, but its expansion was slow: by 1951, with only two transmitters, near London and Birmingham, just 9% of British homes owned a television set. The United Kingdom was the first country in the world to have a regular daily television schedule broadcast direct to homes, and the first to develop technical professions dedicated to television.
Up until 1972, television broadcasting hours were tightly regulated by the British government, under the control of the Postmaster General. Before the launch of the commercial channel ITV in 1955, the BBC was restricted by law to a maximum of five hours of television a day. This was increased at the launch of ITV to a 7-hour broadcasting day for both channels. Gradually the number of hours was increased. Typically, during the late 1960s, the law prescribed a 50-hour broadcasting week for all television channels in the UK. This meant that BBC1, BBC2 and ITV could only broadcast normal programming for 7 hours a day from Mondays to Fridays, and 7.5 hours a day on Saturdays and Sundays.
Until 1957, television in the United Kingdom could not air between 6.00 pm and 7.00 pm. This was called the "Toddlers' Truce", the idea being that parents could put their children to bed before primetime television commenced; the restriction was lifted in 1957. On Sundays, however, television remained off the air between 6.00 pm and 7.00 pm, in response to religious leaders' fears that television would interfere with people attending church services. In 1958, a compromise was reached under which only religious programming could be aired during this time slot. The restriction was lifted in January 1972.
The Postmaster General allowed exemptions to the regulations. All schools programming, adult education, religious programming, state occasions, political broadcasts and Welsh language programming were totally exempt from the restrictions. Sport and outside broadcasting events were given a separate quota of broadcasting hours which could be used in a year, starting off at 200 hours a year in the mid 1950s, rising to a quota of 350 hours a year by the late 1960s. Broadcasting on Christmas Eve, Christmas Day, Boxing Day, New Year's Eve and New Year's Day was also exempt from the tightly controlled restrictions.
The election of a Conservative government in June 1970 brought in changes to the control of broadcasting hours. At first, the typical broadcasting day was extended to 8 hours a day, with an increase in exemptions over Christmas, and an increase in the sport/outside broadcasting quota. On 19 January 1972, the then Minister for Posts and Telecommunications, Christopher Chataway, announced to the British House of Commons that all restrictions on broadcasting hours on television would be lifted from that day, with the broadcasters allowed to set their own broadcasting hours from then on. By November 1972, a full daytime schedule had been launched on ITV from 9.30am each day, with the BBC also expanding their schedules to include more daytime programming.
The UK Government previously appointed people to the BBC's Board of Governors, a body responsible for the general direction of the organisation, and appointment of senior executives, but not its day-to-day management. From 2007, the BBC Trust replaced the Board of Governors. It is operationally independent of BBC management and external bodies, and aims to act in the best interests of licence fee payers.
Commercial television was first introduced in the United Kingdom in 1955. Unlike the US, there was a distinct split between advertisements and programming. Advertisers purely purchased spots within pre-defined breaks within programming, and had no connection to the programme content. The content and nature of adverts was strictly controlled by the ITA, the body controlling commercial television.
History of satellite television
The first commercial direct-broadcast satellite (DBS, also known as direct-to-home) service in the United Kingdom, Sky Television, was launched in 1989 and used the newly launched Astra satellite at 19.2° east, providing four analogue TV channels. The channels and subsequent VideoCrypt video encryption system used the existing PAL broadcast standard, unlike the winner of the UK state DBS licence, British Satellite Broadcasting (BSB).
In 1990, BSB launched, broadcasting five channels (Now, Galaxy, The Movie Channel, The Power Station and The Sports Channel) in D-MAC format and using the EuroCypher video encryption system which was derived from the General Instruments VideoCipher system used in the USA. One of the main selling points of the BSB offering was the Squarial, a flat plate antenna and low-noise block converter (LNB). Sky's system used conventional and cheaper dish and LNB technology.
The two companies competed over the UK rights to movies. Sky operated from an industrial park in Isleworth in West London, whereas BSB had newly built offices in London (Marco Polo House). The two services subsequently merged to form British Sky Broadcasting (BSkyB). BSB's D-MAC/EuroCypher system was gradually replaced with Sky's VideoCrypt video encryption system.
In 1994, 17% of the group was floated on the London Stock Exchange (with ADRs listed on the New York Stock Exchange), while Rupert Murdoch's News Corporation held a 35% stake.
By 1998, following the launch of several more satellites to Astra's 19.2° east position, the number of channels had increased to around 60 and BSkyB launched the first subscription-based digital television platform in the UK, offering a range of 300 channels broadcast from Astra's new satellite, at 28.2° east position under the brand name Sky Digital. BSkyB's analogue service has now been discontinued, with all customers having been migrated to Sky Digital.
In May 2008, a free-to-air satellite service from the BBC and ITV was launched under the brand name Freesat, carrying a variety of channels from Astra 28.2°E, including some content in HD formats.
See also
Industry bodies
Broadcasting, Entertainment, Cinematograph and Theatre Union (BECTU), National Union of Journalists (NUJ) and Equity, trade unions for members of the broadcasting industry
Clearcast, performs clearance of television advertising copy and the final advertisements. Replaced the Broadcast Advertising Clearance Centre (BACC) on 1 January 2008
Culture, Media and Sport Select Committee, a select committee of the House of Commons of the United Kingdom, established in 1997, which oversees the Department for Culture, Media and Sport (DCMS), the government department responsible for broadcasting in the UK
Digital TV Group (DTG), an industry association for digital television, formed in 1995
Digital UK, the body in charge of digital switchover of television in the UK
Producers Alliance for Cinema and Television (PACT)
Royal Television Society (RTS), a society for the discussion, analysis and preservation of television in all its forms, past, present and future, which formed in 1927
United Kingdom Independent Broadcasting (UKIB), an affiliation of independent production companies and broadcasters, representing non-BBC interests in the European Broadcasting Union
Genres and programming
Ofcom Code on Sports and Other Listed and Designated Events, regulatory rules devised in 1997 which ensure particular sporting events are available for free via terrestrial television
Sports broadcasting contracts in the United Kingdom
British sitcom
Light entertainment
:Category:British television-related lists
List of American television series based on British television series
List of British television programmes based on American television series
List of films based on British television series
List of films based on British sitcoms
List of BBC Radio programmes adapted for television, and of television programmes adapted for radio
List of children's television series in the United Kingdom
List of UK game shows
List of longest-running UK television series
Miscellaneous
Appreciation Index (AI), a score between 0 and 100 which measures the public's approval of a particular programme, which can be used to measure attitudes to programmes with small or niche audiences
Broadcast, a weekly trade magazine for the broadcast industry
Edinburgh International Television Festival, an annual industry gathering in Edinburgh
Public service broadcasting in the United Kingdom, broadcasting intended for public benefit rather than purely commercial concerns
Public information film, government commissioned short films usually shown during television advertising breaks
Listings and general television magazines: Radio Times, Soaplife, TV & Satellite Week, TV easy, TV Quick, TVTimes, What's on TV
Notes
References
External links
The BFI TV 100 at the BFI website
BBC News coverage
British TV News
1936 establishments in the United Kingdom
British television-related lists
Cultural history of the United Kingdom
Telecommunications-related introductions in 1936
183241 | https://en.wikipedia.org/wiki/Satellite%20modem | Satellite modem | A satellite modem or satmodem is a modem used to establish data transfers using a communications satellite as a relay. A satellite modem's main function is to transform an input bitstream to a radio signal and vice versa.
There are some devices that include only a demodulator (and no modulator, thus only allowing data to be downloaded by satellite) that are also referred to as "satellite modems." These devices are used in satellite Internet access (in this case uploaded data is transferred through a conventional PSTN modem or an ADSL modem).
Satellite link
A satellite modem is not the only device needed to establish a communication channel; other equipment essential for creating a satellite link includes satellite antennas and frequency converters.
Data to be transmitted are transferred to a modem from data terminal equipment (e.g. a computer). The modem usually has an intermediate frequency (IF) output (typically 50-200 MHz); sometimes, however, the signal is modulated directly to L band. In most cases, the frequency has to be converted using an upconverter before amplification and transmission.
A modulated signal is a sequence of symbols, pieces of data represented by a corresponding signal state, e.g. one bit or a few bits per symbol, depending upon the modulation scheme being used. Recovering the symbol clock (making a local symbol clock generator synchronous with the remote one) is one of the most important tasks of a demodulator.
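As an illustration of symbol mapping, the following C sketch shows one common Gray-coded QPSK mapping, in which each pair of input bits selects the signs of the in-phase (I) and quadrature (Q) components. It is a minimal sketch for illustration only; real modems apply pulse shaping and amplitude scaling, and the exact bit-to-symbol assignment varies between standards.

```c
#include <stdint.h>

/* Map two bits to one Gray-coded QPSK symbol (I and Q each in {-1, +1}).
   A minimal illustration; the bit-to-symbol assignment varies by standard. */
typedef struct { int i; int q; } qpsk_symbol;

qpsk_symbol qpsk_map(uint8_t bits)           /* only the two LSBs are used */
{
    qpsk_symbol s;
    s.i = (bits & 0x2) ? -1 : +1;            /* first bit sets the I sign  */
    s.q = (bits & 0x1) ? -1 : +1;            /* second bit sets the Q sign */
    return s;   /* neighbouring constellation points differ in one bit */
}
```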
Similarly, a signal received from a satellite is first downconverted (by a low-noise block converter, LNB), then demodulated by a modem, and finally handled by data terminal equipment. The LNB is usually powered by the modem through the signal cable with 13 or 18 V DC.
Features
The main functions of a satellite modem are modulation and demodulation. Satellite communication standards also define error correction codes and framing formats.
Popular modulation types used for satellite communications:
Binary phase-shift keying (BPSK);
Quadrature phase-shift keying (QPSK);
Offset quadrature phase-shift keying (OQPSK);
8PSK;
Quadrature amplitude modulation (QAM), especially 16QAM.
The popular satellite error correction codes include:
Convolutional codes:
with constraint length less than 10, usually decoded using a Viterbi algorithm (see Viterbi decoder);
with constraint length more than 10, usually decoded using a Fano algorithm (see Sequential decoder);
Reed–Solomon codes, usually concatenated with convolutional codes through an interleaver;
Newer modems support more powerful error correction codes (turbo codes and LDPC codes).
Frame formats that are supported by various satellite modems include:
Intelsat business service (IBS) framing
Intermediate data rate (IDR) framing
MPEG-2 transport framing (used in DVB)
E1 and T1 framing
High-end modems also incorporate some additional features:
Multiple data interfaces (like RS-232, RS-422, V.35, G.703, LVDS, Ethernet);
Embedded Distant-end Monitor and Control (EDMAC), allowing the distant-end modem to be controlled;
Automatic Uplink Power Control (AUPC), that is, adjusting the output power to maintain a constant signal-to-noise ratio at the remote end;
Drop and insert capability for a multiplexed stream, allowing some channels in the stream to be replaced.
Internal structure
Probably the best way of understanding how a modem works is to look at its internal structure; the block diagram of a generic satellite modem breaks down into the functional blocks described below.
Analog tract
After a digital-to-analog conversion in the transmitter, the signal passes through a reconstruction filter. Then, if needed, frequency conversion is performed.
The purpose of the analog tract in the receiver is to convert the signal's frequency, to adjust its power via an automatic gain control circuit, and to extract its complex envelope components.
The input signal for the analog tract is at the intermediate frequency, or sometimes in the L band, in which case it must first be converted to an IF. The signal is then either sampled directly or processed by a four-quadrant multiplier, which produces the complex envelope components (I, Q) by multiplying the signal by the local oscillator (heterodyne) output (see superheterodyne receiver).
Finally, the signal passes through an anti-aliasing filter and is sampled (digitized).
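The four-quadrant multiplier described above can be sketched in C as a digital complex mixer: each real IF sample is multiplied by the cosine and negated sine of a local oscillator to produce the I and Q components. This is a simplified sketch; a real receiver would follow it with low-pass filtering and decimation, and the oscillator phase would be steered by the carrier recovery loop.

```c
#include <stddef.h>
#include <math.h>

/* Digital complex mixer: convert real IF samples x[] into baseband I/Q
   components by multiplying with a local oscillator at frequency f_lo.
   fs is the sample rate; low-pass filtering/decimation is omitted here. */
void complex_mix(const double *x, double *i, double *q,
                 size_t n, double f_lo, double fs)
{
    const double pi = 3.14159265358979323846;
    const double w  = 2.0 * pi * f_lo / fs;   /* LO phase step per sample */
    for (size_t k = 0; k < n; k++) {
        i[k] = x[k] *  cos(w * (double)k);    /* in-phase component   */
        q[k] = x[k] * -sin(w * (double)k);    /* quadrature component */
    }
}
```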
Modulator and demodulator
A digital modulator transforms a digital stream into a radio signal at the intermediate frequency (IF). A modulator is generally simpler than a demodulator because it doesn't have to recover symbol and carrier frequencies.
A demodulator is one of the most important parts of the receiver. The exact structure of the demodulator is defined by the modulation type, though the fundamental concepts are similar; moreover, it is possible to develop a demodulator that can process signals with different modulation types.
Digital demodulation implies that a symbol clock (and, in most cases, an intermediate frequency generator) at the receiving side has to be synchronous with those at the transmitting side. This is achieved by the following two circuits:
timing recovery circuit, determining the borders of symbols;
carrier recovery circuit, which determines the actual meaning of each symbol. There are modulation types (like frequency-shift keying) that can be demodulated without carrier recovery; this method, known as noncoherent demodulation, generally performs worse.
There are also additional components in the demodulator such as the intersymbol interference equalizer.
If the analog signal was digitized without a four-quadrant multiplier, the complex envelope has to be calculated by a digital complex mixer.
Sometimes a digital automatic gain control circuit is implemented in the demodulator.
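One classic building block for the timing recovery circuit mentioned above is the Gardner timing-error detector, which needs only two samples per symbol and works without carrier phase lock. The sketch below is a minimal illustration, assuming real-valued (e.g. BPSK) samples; the error output would normally drive a loop filter that adjusts the sampling phase.

```c
/* Gardner timing-error detector at 2 samples/symbol: 'prev' and 'curr'
   are consecutive on-time (symbol-spaced) samples and 'mid' is the sample
   halfway between them.  The sign of the returned error tells the loop
   filter whether to advance or retard the symbol clock; it is near zero
   when sampling is correctly aligned. */
double gardner_ted(double prev, double mid, double curr)
{
    return (curr - prev) * mid;
}
```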
FEC coding
Error correction techniques are essential for satellite communications because, due to a satellite's limited transmit power, the signal-to-noise ratio at the receiver is usually rather poor. Error correction works by adding artificial redundancy to a data stream at the transmitting side and using this redundancy to correct errors caused by noise and interference. This is performed by an FEC encoder, which applies an error correction code to the digital stream, thereby adding redundancy.
An FEC decoder decodes the forward error correction code used within the signal. For example, the Digital Video Broadcasting standard defines a concatenated code consisting of an inner convolutional code (the standard NASA code, punctured, with rates 1/2, 2/3, 3/4, 5/6 and 7/8), interleaving, and an outer Reed–Solomon code (block length: 204 bytes, information block: 188 bytes, able to correct up to 8 bytes in the block).
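The inner convolutional code mentioned above - constraint length 7, rate 1/2, with generator polynomials 171 and 133 (octal) - can be sketched as follows. This is an illustrative encoder only; output bit ordering and puncturing conventions differ between references and standards.

```c
#include <stdint.h>

/* Parity (XOR of all bits) of a 32-bit value. */
static int parity(uint32_t v)
{
    v ^= v >> 16; v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return (int)(v & 1u);
}

/* Rate-1/2, constraint-length-7 convolutional encoder with the common
   generators G1 = 171 and G2 = 133 (octal).  Each input bit produces two
   coded bits; puncturing to higher rates is not shown. */
void conv_encode(const uint8_t *in_bits, uint8_t *out_bits, int n)
{
    uint32_t state = 0;                               /* 7-bit shift register */
    for (int k = 0; k < n; k++) {
        state = ((state << 1) | (in_bits[k] & 1u)) & 0x7Fu;
        out_bits[2 * k]     = (uint8_t)parity(state & 0171);  /* G1 taps */
        out_bits[2 * k + 1] = (uint8_t)parity(state & 0133);  /* G2 taps */
    }
}
```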
Differential coding
There are several modulation types (such as PSK and QAM) that have a phase ambiguity, that is, a carrier can be restored in different ways. Differential coding is used to resolve this ambiguity.
When differential coding is used, the data are deliberately made to depend not only on the current symbol, but also on the previous one.
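For a binary (DBPSK-style) stream, differential coding reduces to a pair of XOR operations, as in this minimal sketch; an inverted carrier flips every received bit, but the XOR of consecutive bits is unchanged, so the decoded data survive the phase ambiguity.

```c
#include <stdint.h>

/* Differential encoding: each transmitted bit is the XOR of the current
   data bit and the previously transmitted bit (d[n] = x[n] ^ d[n-1]). */
uint8_t diff_encode(uint8_t data_bit, uint8_t *prev_tx)
{
    *prev_tx = (uint8_t)((*prev_tx ^ data_bit) & 1u);
    return *prev_tx;
}

/* Differential decoding: recover x[n] = d[n] ^ d[n-1].  If the carrier
   was recovered 180 degrees out of phase, every bit is inverted, but the
   XOR of two consecutive bits - and thus the output - is unaffected. */
uint8_t diff_decode(uint8_t rx_bit, uint8_t *prev_rx)
{
    uint8_t out = (uint8_t)((rx_bit ^ *prev_rx) & 1u);
    *prev_rx = (uint8_t)(rx_bit & 1u);
    return out;
}
```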
Scrambling
Scrambling is a technique used to randomize a data stream to eliminate long '0'-only and '1'-only sequences and to assure energy dispersal. Long '0'-only and '1'-only sequences create difficulties for the timing recovery circuit. Scramblers and descramblers are usually based on linear-feedback shift registers.
A scrambler randomizes the transmitted data stream. A descrambler restores the original stream from the scrambled one.
Scrambling shouldn't be confused with encryption, since it doesn't protect information from intruders.
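A typical additive scrambler of this kind can be sketched with a 15-bit linear-feedback shift register; the polynomial 1 + x^14 + x^15 used here is the one specified for energy dispersal in DVB, though the seed and bit-ordering conventions below are simplified for illustration. Because the same pseudo-random sequence is XORed in at both ends, the identical function serves as the descrambler.

```c
#include <stdint.h>

/* Additive scrambler using a 15-bit LFSR with polynomial 1 + x^14 + x^15
   (the DVB energy-dispersal generator).  'state' must be initialised to
   the same agreed non-zero seed on both ends; calling this function on
   already-scrambled bits with a matching state descrambles them. */
uint8_t scramble_bit(uint8_t data_bit, uint16_t *state)
{
    uint8_t prbs = (uint8_t)(((*state >> 14) ^ (*state >> 13)) & 1u);
    *state = (uint16_t)(((*state << 1) | prbs) & 0x7FFFu);  /* shift in feedback */
    return (uint8_t)((data_bit ^ prbs) & 1u);               /* XOR with PRBS */
}
```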
Multiplexing
A multiplexer transforms several digital streams into one stream. This is often referred to as 'muxing.'
Generally, a demultiplexer is a device that transforms one multiplexed data stream into several. Satellite modems do not have many outputs, so a demultiplexer here performs a drop operation, allowing the modem to choose which channels are transferred to the output.
A demultiplexer achieves this goal by maintaining frame synchronization.
Applications
Satellite modems are often used for home internet access.
There are two different types, both employing the Digital Video Broadcasting (DVB) standard as their basis:
One-way satmodems (DVB-IP modems) use a return channel not based on communication with the satellite, such as telephone or cable.
Two-way satmodems (DVB-RCS modems, also called astromodems) employ a satellite-based return channel as well; they do not need another connection. DVB-RCS is ETSI standard EN 301 790.
There are also industrial satellite modems intended to provide a permanent link.
See also
Communications satellite
Yahsat
Intelsat
Satellite Internet access
VSAT
External links
Satellite broadcasting
Modems
Telecommunications equipment
Telecommunications infrastructure
184588 | https://en.wikipedia.org/wiki/PIC%20microcontrollers | PIC microcontrollers | PIC (usually pronounced as "pick") is a family of microcontrollers made by Microchip Technology, derived from the PIC1650 originally developed by General Instrument's Microelectronics Division. The name PIC initially referred to Peripheral Interface Controller, and is currently expanded as Programmable Intelligent Computer.
The first parts of the family were available in 1976; by 2013 the company had shipped more than twelve billion individual parts, used in a wide variety of embedded systems.
The PIC was originally intended to be used with the General Instrument CP1600, the first commercially available single-chip 16-bit microprocessor. The CP1600 had a complex bus that made it difficult to interface with, and the PIC was introduced as a companion device offering ROM for program storage, RAM for temporary data handling, and a simple CPU for controlling the transfers. While this offered considerable power, GI's marketing was limited and the CP1600 was not a success. When the company spun off their chip division to form Microchip in 1985, sales of the CP1600 were all but dead. By this time, the PIC had formed a major market of its own, and it became one of the new company's primary products.
Early models had hard-written ROM for code storage, but with its spinoff the PIC was soon upgraded to use EPROM and then EEPROM, which made it much easier for end-users to program. All current models use flash memory for program storage, and newer models allow the PIC to reprogram itself. Since then the line has seen significant change; the family now spans 8-bit, 16-bit and, in the latest models, 32-bit data widths. Program instructions vary in bit-count by family of PIC, and may be 12, 14, 16, or 24 bits long. The instruction set also varies by model, with more powerful chips adding instructions for digital signal processing functions. The hardware implementations of PIC devices range from 6-pin SMD and 8-pin DIP chips up to 144-pin SMD chips, with discrete I/O pins, ADC and DAC modules, and communications ports such as UART, I2C, CAN, and even USB. Low-power and high-speed variations exist for many types.
The manufacturer supplies computer software for development known as MPLAB X, assemblers and C/C++ compilers, and programmer/debugger hardware under the MPLAB and PICKit series. Third party and some open-source tools are also available. Some parts have in-circuit programming capability; low-cost development programmers are available as well as high-volume production programmers.
PIC devices are popular with both industrial developers and hobbyists due to their low cost, wide availability, large user base, an extensive collection of application notes, availability of low cost or free development tools, serial programming, and re-programmable flash-memory capability.
History
Original concept
The original PIC was intended to be used with General Instrument's new CP1600 16-bit central processing unit (CPU). In order to fit a 16-bit data bus and address bus into a then-standard 40-pin dual inline package (DIP) chip, the two busses shared the same set of 16 connection pins. To communicate with the CPU, devices had to watch other pins on the CPU to determine whether the data on the bus was an address or data. Since only one of these was presented at a time, a device had to watch for the bus to enter address mode, check whether the address was part of its memory-mapped input/output range, "latch" that address, wait for data mode to turn on, and then read the value. Additionally, the 1600 used several external pins to select which device it was attempting to talk to, further complicating the interfacing.
As interfacing devices to the 1600 could be complex, GI also released a series of support chips with all of the required circuitry built-in. These included keyboard drivers, cassette deck interfaces for storage, and a host of similar systems. For more complex systems, GI introduced the 8-bit PIC in 1975. The idea was that a device would use the PIC to handle all the interfacing with the host computer's CP1600, but also use its own internal processor to handle the actual device it was connected to. For instance, a floppy disk drive could be implemented with a PIC talking to the CPU on one side and the floppy disk controller on the other. In keeping with this idea, what would today be known as a microcontroller, the PIC included a small amount of read-only memory (ROM) that would be written with the user's device controller code, and a separate random access memory (RAM) for buffering and working with data. These were connected separately, making the PIC a Harvard architecture system with code and data being managed on separate internal pathways.
In theory, the combination of the 1600 CPU and PIC device controllers provided a very high-performance device control system, similar in power and performance to the channel controllers seen on mainframe computers. In the floppy controller example, for instance, a single PIC could control the drive, provide a reasonable amount of buffering to improve performance, and then transfer data to and from the host computer using direct memory access (DMA) or through relatively simple code in the CPU. The downside to this approach was cost; while the PIC was not necessary for low-speed devices like a keyboard, many tasks would require one or more PICs to build out a complete system.
While the design concept had a number of attractive features, General Instrument never strongly marketed the 1600, preferring to deal only with large customers and ignoring the low-end market. This resulted in very little uptake of the system, with the Intellivision being the only really widespread use with about three million units. When GI spun off its chip division to form Microchip Technology in 1985, production of the CP1600 ended. By this time, however, the PIC had developed a large market of customers using it for a wide variety of roles, and the PIC went on to become one of the new company's primary products.
After the 1600
In 1985, General Instrument sold their microelectronics division, and the new owners cancelled almost everything - most of which was by then out of date. The PIC, however, was upgraded with an internal EPROM to produce a programmable channel controller.
At the same time Plessey in the UK released NMOS processors numbered PIC1650 and PIC1655 based on the GI design, using the same instruction sets, either user mask-programmable or in versions pre-programmed for auto-diallers and keyboard interfaces.
In 1998 Microchip introduced the PIC 16F84, a flash programmable and erasable version of its successful serial programmable PIC16C84.
In 2001, Microchip introduced more Flash programmable devices, with full production commencing in 2002.
Today, a huge variety of PICs are available with various on-board peripherals (serial communication modules, UARTs, motor control kernels, etc.) and program memory from 256 words to 64K words and more (a "word" is one assembly language instruction, varying in length from 8 to 16 bits, depending on the specific PIC micro family).
PIC and PICmicro are now registered trademarks of Microchip Technology. It is generally thought that PIC stands for Peripheral Interface Controller, although General Instrument's original acronym for the initial PIC1640 and PIC1650 devices was "Programmable Interface Controller". The acronym was quickly replaced with "Programmable Intelligent Computer".
The Microchip 16C84 (PIC16x84), introduced in 1993, was the first Microchip CPU with on-chip EEPROM memory.
By 2013, Microchip was shipping over one billion PIC microcontrollers every year.
Device families
PIC micro chips are designed with a Harvard architecture, and are offered in various device families. The baseline and mid-range families use 8-bit wide data memory, and the high-end families use 16-bit data memory. The latest series, PIC32MZ is a 32-bit MIPS-based microcontroller. Instruction words are in sizes of 12-bit (PIC10 and PIC12), 14-bit (PIC16) and 24-bit (PIC24 and dsPIC). The binary representations of the machine instructions vary by family and are shown in PIC instruction listings.
Within these families, devices may be designated PICnnCxxx (CMOS) or PICnnFxxx (Flash). "C" devices are generally classified as "Not suitable for new development" (not actively promoted by Microchip). The program memory of "C" devices is variously described as OTP, ROM, or EEPROM. As of October 2016, the only OTP product classified as "In production" is the PIC16HV540. "C" devices with quartz windows (for erasure) are in general no longer available.
PIC10 and PIC12
These devices feature a 12-bit wide code memory, a 32-byte register file, and a tiny two-level-deep call stack. They are represented by the PIC10 series, as well as by some PIC12 and PIC16 devices. Baseline devices are available in 6-pin to 40-pin packages.
Generally the first 7 to 9 bytes of the register file are special-purpose registers, and the remaining bytes are general purpose RAM. Pointers are implemented using a register pair: after writing an address to the FSR (file select register), the INDF (indirect f) register becomes an alias for the addressed register.
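The FSR/INDF mechanism can be modelled in C as a small register file in which one address acts as a window onto the register that FSR points to. The sketch below is a toy model that ignores banking; the register addresses chosen (INDF at 0x00, FSR at 0x04) match many baseline and mid-range parts but are stated here as assumptions.

```c
#include <stdint.h>

enum { INDF = 0x00, FSR = 0x04, FILE_SIZE = 32 };  /* typical baseline layout */

static uint8_t file[FILE_SIZE];  /* toy register file */

/* Reading INDF actually reads the register whose number is held in FSR. */
uint8_t read_reg(uint8_t addr)
{
    if (addr == INDF)
        return file[file[FSR] % FILE_SIZE];   /* indirect access via FSR */
    return file[addr % FILE_SIZE];            /* ordinary direct access  */
}

/* Writing INDF likewise writes through the FSR pointer. */
void write_reg(uint8_t addr, uint8_t value)
{
    if (addr == INDF)
        file[file[FSR] % FILE_SIZE] = value;
    else
        file[addr % FILE_SIZE] = value;
}
```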
If banked RAM is implemented, the bank number is selected by the high 3 bits of the FSR. This affects register numbers 16–31; registers 0–15 are global and not affected by the bank select bits.
Because of the very limited register space (5 bits), 4 rarely read registers were not assigned addresses, but written by special instructions (OPTION and TRIS).
The ROM address space is 512 words, and a CALL may only specify addresses in the first half of each 512-word page. That is, the CALL instruction specifies the low 9 bits of the address, but only the low 8 bits of that address are a parameter of the instruction, while the 9th bit (bit 8) is implicitly specified as 0 by the CALL instruction itself.
Lookup tables are implemented using a computed GOTO (an assignment to the PCL register) into a table of RETLW instructions. RETLW returns, placing in the W register an 8-bit immediate constant that is encoded into the instruction.
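In C, such a table is simply a constant array; a PIC C compiler can lower the indexed read into exactly the computed-GOTO-plus-RETLW pattern described above. The seven-segment table below is an illustrative example, not taken from any particular Microchip document.

```c
#include <stdint.h>

/* Constant lookup table: on a baseline/mid-range PIC this can compile to
   'addwf PCL,f' (computed GOTO) followed by ten RETLW instructions, each
   returning one table byte in W. */
static const uint8_t seven_seg[10] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66,   /* segment patterns for digits 0-4 */
    0x6D, 0x7D, 0x07, 0x7F, 0x6F    /* segment patterns for digits 5-9 */
};

uint8_t digit_to_segments(uint8_t d)
{
    return seven_seg[d % 10];       /* index selects which RETLW executes */
}
```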
This "baseline core" does not support interrupts; all I/O must be polled. There are some "enhanced baseline" variants with interrupt support and a four-level call stack.
PIC10F32x devices feature a mid-range 14-bit wide code memory of 256 or 512 words, a 64-byte SRAM register file, and an 8-level deep hardware stack. These devices are available in 6-pin SMD and 8-pin DIP packages (with two pins unused). One input-only pin and three I/O pins are available. A complex set of interrupts is available. The clock is an internal calibrated 16 MHz high-frequency oscillator, with speeds selectable via software, plus a 31 kHz low-power source.
PIC16
These devices feature a 14-bit wide code memory, and an improved 8-level deep call stack. The instruction set differs very little from the baseline devices, but the two additional opcode bits allow 128 registers and 2048 words of code to be directly addressed. There are a few additional miscellaneous instructions, and two additional 8-bit literal instructions, add and subtract. The mid-range core is available in the majority of devices labeled PIC12 and PIC16.
The first 32 bytes of the register space are allocated to special-purpose registers; the remaining 96 bytes are used for general-purpose RAM. If banked RAM is used, the high 16 registers (0x70–0x7F) are global, as are a few of the most important special-purpose registers, including the STATUS register which holds the RAM bank select bits. (The other global registers are FSR and INDF, the low 8 bits of the program counter PCL, the PC high preload register PCLATH, and the master interrupt control register INTCON.)
The PCLATH register supplies high-order instruction address bits when the 8 bits supplied by a write to the PCL register, or the 11 bits supplied by a GOTO or CALL instruction, is not sufficient to address the available ROM space.
PIC17
The 17 series never became popular and has been superseded by the PIC18 architecture (however, see clones below). The 17 series is not recommended for new designs, and availability of parts may be limited.
Improvements over earlier cores are 16-bit wide opcodes (allowing many new instructions), and a 16-level deep call stack. PIC17 devices were produced in packages from 40 to 68 pins.
The 17 series introduced a number of important new features:
a memory mapped accumulator
read access to code memory (table reads)
direct register to register moves (prior cores needed to move registers through the accumulator)
an external program memory interface to expand the code space
an 8-bit × 8-bit hardware multiplier
a second indirect register pair
auto-increment/decrement addressing controlled by control bits in a status register (ALUSTA)
A significant limitation was that RAM space was limited to 256 bytes (26 bytes of special function registers, and 232 bytes of general-purpose RAM), with awkward bank-switching in the models that supported more.
PIC18
In 2000, Microchip introduced the PIC18 architecture. Unlike the 17 series, it has proven to be very popular, with a large number of device variants presently in manufacture. In contrast to earlier devices, which were more often than not programmed in assembly, C has become the predominant development language.
The 18 series inherits most of the features and instructions of the 17 series, while adding a number of important new features:
call stack is 21 bits wide and much deeper (31 levels deep)
the call stack may be read and written (TOSU:TOSH:TOSL registers)
conditional branch instructions
indexed addressing mode (PLUSW)
extending the FSR registers to 12 bits, allowing them to linearly address the entire data address space
the addition of another FSR register (bringing the number up to 3)
The RAM space is 12 bits wide, addressed using a 4-bit bank select register (BSR) and an 8-bit offset in each instruction. An additional "access" bit in each instruction selects between bank 0 (a=0) and the bank selected by the BSR (a=1).
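The resulting effective-address computation is simple enough to state as arithmetic. The C sketch below follows the simplified description above, where a=0 selects bank 0; on real PIC18 parts the access bank is actually split between the bottom of bank 0 and the special-function registers at the top of bank 15.

```c
#include <stdint.h>

/* Effective 12-bit PIC18 data address from a 4-bit bank select register
   (BSR), the 8-bit offset encoded in the instruction, and the access bit.
   Simplified: a=0 is treated as plain bank 0, as in the text above. */
uint16_t pic18_data_address(uint8_t bsr, uint8_t offset, int access_bit)
{
    uint8_t bank = access_bit ? (uint8_t)(bsr & 0x0F) : 0u;
    return (uint16_t)(((uint16_t)bank << 8) | offset);
}
```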
A 1-level stack is also available for the STATUS, WREG and BSR registers. They are saved on every interrupt, and may be restored on return. If interrupts are disabled, they may also be used on subroutine call/return by setting the s bit (appending ", FAST" to the instruction).
The auto increment/decrement feature was improved by removing the control bits and adding four new indirect registers per FSR. Depending on which indirect file register is being accessed it is possible to postdecrement, postincrement, or preincrement FSR; or form the effective address by adding W to FSR.
In more advanced PIC18 devices, an "extended mode" is available which makes the addressing even more favorable to compiled code:
a new offset addressing mode; some addresses which were relative to the access bank are now interpreted relative to the FSR2 register
the addition of several new instructions, notably for manipulating the FSR registers.
As of 2021, PIC18 devices are still being developed and are fitted with core-independent peripherals (CIPs).
PIC24 and dsPIC
In 2001, Microchip introduced the dsPIC series of chips, which entered mass production in late 2004. They are Microchip's first inherently 16-bit microcontrollers. PIC24 devices are designed as general purpose microcontrollers. dsPIC devices include digital signal processing capabilities in addition.
Although still similar to earlier PIC architectures, there are significant enhancements:
All registers are 16 bits wide
Program counter is 22 bits (Bits 22:1; bit 0 is always 0)
Instructions are 24 bits wide
Data address space expanded to 64 KiB
First 2 KiB is reserved for peripheral control registers
Data bank switching is not required unless RAM exceeds 62 KiB
"f operand" direct addressing extended to 13 bits (8 KiB)
16 W registers available for register-register operations. (But operations on f operands always reference W0.)
Instructions come in byte and (16-bit) word forms
Stack is in RAM (with W15 as stack pointer); there is no hardware stack
W14 is the frame pointer
Data stored in ROM may be accessed directly ("Program Space Visibility")
Vectored interrupts for different interrupt sources
Some features are:
(16×16)-bit single-cycle multiplication and other digital signal processing operations
hardware multiply–accumulate (MAC)
hardware divide assist (19 cycles for 32/16-bit divide)
barrel shifting - For both accumulators and general purpose registers
bit reversal
hardware support for loop indexing
peripheral direct memory access
dsPICs can be programmed in C using Microchip's XC16 compiler (formerly called C30) which is a variant of GCC.
Instruction ROM is 24 bits wide. Software can access ROM in 16-bit words, where even words hold the least significant 16 bits of each instruction, and odd words hold the most significant 8 bits. The high half of odd words reads as zero. The program counter is 23 bits wide, but the least significant bit is always 0, so there are 22 modifiable bits.
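The even/odd word packing just described amounts to a small bit of address arithmetic, sketched here in C; the program memory is modelled as a flat array of 16-bit words indexed by the word address, which is an assumption made for illustration.

```c
#include <stdint.h>

/* Reassemble one 24-bit PIC24/dsPIC instruction from the 16-bit word view:
   the even word holds instruction bits 15:0 and the following odd word
   holds bits 23:16 in its low byte (its high byte reads as zero). */
uint32_t fetch_instruction(const uint16_t *prog_words, uint32_t pc)
{
    uint32_t even = prog_words[pc & ~1u];        /* PC bit 0 is always 0 */
    uint32_t odd  = prog_words[(pc & ~1u) + 1u];
    return ((odd & 0xFFu) << 16) | even;         /* 24-bit instruction  */
}
```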
Instructions come in two main varieties, with most important operations (add, xor, shifts, etc.) allowing both forms.
The first is like the classic PIC instructions, with an operation between a specified f register (i.e. the first 8K of RAM) and a single accumulator W0, with a destination select bit selecting which is updated with the result. (The W registers are memory-mapped, so the f operand may be any W register.)
The second form is more conventional, allowing three operands, which may be any of 16 W registers. The destination and one of the sources also support addressing modes, allowing the operand to be in memory pointed to by a W register.
PIC32M MIPS-based line
PIC32MX
In November 2007, Microchip introduced the PIC32MX family of 32-bit microcontrollers, based on the MIPS32 M4K core. The device can be programmed using the Microchip MPLAB C compiler for PIC32 MCUs, a variant of the GCC compiler. The first 18 models in production (PIC32MX3xx and PIC32MX4xx) are pin-to-pin compatible and share the same peripheral set with the PIC24FxxGA0xx family of (16-bit) devices, allowing the use of common libraries, software and hardware tools. Today, the family ranges from 28-pin devices in small QFN packages up to high-performance devices with Ethernet, CAN and USB OTG, offering a full range of mid-range 32-bit microcontrollers.
The PIC32 architecture brought a number of new features to the Microchip portfolio, including:
The highest execution speed: 80 MIPS (120+ Dhrystone MIPS)
The largest flash memory: 512 kB
One instruction per clock cycle execution
The first cached processor
Allows execution from RAM
Full Speed Host/Dual Role and OTG USB capabilities
Full JTAG and 2-wire programming and debugging
Real-time trace
PIC32MZ
In November 2013, Microchip introduced the PIC32MZ series of microcontrollers, based on the MIPS M14K core. The PIC32MZ series include:
252 MHz core speed, 415 DMIPS
Up to 2 MB Flash and 512 KB RAM
New peripherals including high-speed USB, crypto engine and SQI
In 2015, Microchip released the PIC32MZ EF family, using the updated MIPS M5150 Warrior M-class processor.
In 2017, Microchip introduced the PIC32MZ DA family, featuring an integrated graphics controller, graphics processor and 32 MB of DDR2 DRAM.
PIC32MM
In June 2016, Microchip introduced the PIC32MM family, specialized for low-power and low-cost applications. The PIC32MM features core-independent peripherals, sleep modes down to 500 nA, and 4 x 4 mm packages. The PIC32MM microcontrollers use the MIPS Technologies M4K, a 32-bit MIPS32 processor.
They are designed for very low power consumption and are limited to 25 MHz.
Their key advantage is support for the compressed 16-bit MIPS instruction encoding, which makes program code much more compact (by about 40%).
PIC32MK
Microchip introduced the PIC32MK family in 2017, specialized for motor control, industrial control, Industrial Internet of Things (IIoT) and multi-channel CAN applications.
Core architecture
The PIC architecture is characterized by the following attributes:
Separate code and data spaces (Harvard architecture).
Except PIC32: The MIPS M4K architecture's separate data and instruction paths are effectively merged into a single common address space by the System Bus Matrix module.
A small number of fixed-length instructions
Most instructions are single-cycle (2 clock cycles, or 4 clock cycles in 8-bit models), with one delay cycle on branches and skips
One accumulator (W0), the use of which (as source operand) is implied (i.e. is not encoded in the opcode)
All RAM locations function as registers, usable as both source and destination of arithmetic and other functions.
A hardware stack for storing return addresses
A small amount of addressable data space (32, 128, or 256 bytes, depending on the family), extended through banking
Data-space mapped CPU, port, and peripheral registers
ALU status flags are mapped into the data space
The program counter is also mapped into the data space and writable (this is used to implement indirect jumps).
There is no distinction between memory space and register space because the RAM serves the job of both memory and registers, and the RAM is usually just referred to as the register file or simply as the registers.
Data space (RAM)
PICs have a set of registers that function as general-purpose RAM. Special-purpose control registers for on-chip hardware resources are also mapped into the data space. The addressability of memory varies depending on device series, and all PIC device types have some banking mechanism to extend addressing to additional memory (though some device models have only one bank implemented). Later series of devices feature move instructions which can cover the whole addressable space, independent of the selected bank. In earlier devices, any register move had to be achieved through the accumulator.
To implement indirect addressing, a "file select register" (FSR) and "indirect register" (INDF) are used. A register number is written to the FSR, after which reads from or writes to INDF will actually be from or to the register pointed to by FSR. Later devices extended this concept with post- and pre-increment/decrement for greater efficiency in accessing sequentially stored data. This also allows FSR to be treated almost like a stack pointer (SP).
External data memory is not directly addressable except in some PIC18 devices with high pin count. However, general I/O ports can be used to implement a parallel bus or a serial interface for accessing external memory and other peripherals (using subroutines), with the caveat that such programmed memory access is (of course) much slower than access to the native memory of the PIC MCU.
Code space
The code space is generally implemented as on-chip ROM, EPROM or flash ROM. In general, there is no provision for storing code in external memory due to the lack of an external memory interface. The exceptions are PIC17 and select high pin count PIC18 devices.
Word size
All PICs handle (and address) data in 8-bit chunks. However, the unit of addressability of the code space is not generally the same as the data space. For example, PICs in the baseline (PIC12) and mid-range (PIC16) families have program memory addressable in the same wordsize as the instruction width, i.e. 12 or 14 bits respectively. In contrast, in the PIC18 series, the program memory is addressed in 8-bit increments (bytes), which differs from the instruction width of 16 bits.
In order to be clear, the program memory capacity is usually stated in number of (single-word) instructions, rather than in bytes.
Stacks
PICs have a hardware call stack, which is used to save return addresses. The hardware stack is not software-accessible on earlier devices, but this changed with the 18 series devices.
Hardware support for a general-purpose parameter stack was lacking in early series, but this greatly improved in the 18 series, making the 18 series architecture more friendly to high-level language compilers.
Instruction set
The PIC instruction set varies from about 35 instructions for the low-end PICs to over 80 instructions for the high-end PICs. The instruction set includes instructions to perform a variety of operations on registers directly, on the accumulator and a literal constant, or on the accumulator and a register, as well as for conditional execution and program branching.
Some operations, such as bit setting and testing, can be performed on any numbered register, but bi-operand arithmetic operations always involve W (the accumulator), writing the result back to either W or the other operand register. To load a constant, it is necessary to load it into W before it can be moved into another register. On the older cores, all register moves needed to pass through W, but this changed on the "high-end" cores.
PIC cores have skip instructions, which are used for conditional execution and branching. The skip instructions are "skip if bit set" and "skip if bit not set". Because cores before PIC18 had only unconditional branch instructions, conditional jumps are implemented by a conditional skip (with the opposite condition) followed by an unconditional branch. Skips are also useful for conditional execution of the immediately following instruction. It is possible to skip skip instructions: for example, the instruction sequence "skip if A; skip if B; C" will execute C if A is true or if B is false.
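The skip-chaining example can be written out in C to make the control flow explicit; this is purely an illustration of the semantics described above, with A and B standing for the two tested conditions.

```c
/* Semantics of the PIC sequence "skip if A; skip if B; C", written in C.
   "skip if A" jumps over the next instruction (the second skip) when A is
   true; "skip if B" jumps over C when B is true.  Net effect: C executes
   exactly when A is true or B is false. */
void skip_chain(int A, int B, void (*C)(void))
{
    if (A) goto run_C;   /* A true: skip the "skip if B" instruction */
    if (B) goto done;    /* B true: skip C                           */
run_C:
    C();
done:
    return;
}
```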
The 18 series introduced shadow registers, which automatically save several important processor registers during an interrupt, providing hardware support for preserving processor state when servicing interrupts.
In general, PIC instructions fall into five classes:
Operation on working register (WREG) with 8-bit immediate ("literal") operand. E.g. movlw (move literal to WREG), andlw (AND literal with WREG). One instruction peculiar to the PIC is retlw, load immediate into WREG and return, which is used with computed branches to produce lookup tables.
Operation with WREG and an indexed register. The result can be written to either the working register (e.g. addwf reg,w) or the selected register (e.g. addwf reg,f).
Bit operations. These take a register number and a bit number, and perform one of four actions: set or clear a bit, and test and skip on set/clear. The latter are used to perform conditional branches. The usual ALU status flags are available in a numbered register, so operations such as "branch on carry clear" are possible (see the sketch after this list).
Control transfers. Other than the skip instructions previously mentioned, there are only two: goto and call.
A few miscellaneous zero-operand instructions, such as return from subroutine, and sleep to enter low-power mode.
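As a rough illustration of the bit-operation class and of building a conditional branch from the status flags (STATUS and its C bit come from the standard device headers; result and no_carry are placeholder names):

        bsf     result, 7   ; set bit 7 of a register
        bcf     result, 0   ; clear bit 0 of the same register
        addwf   result, f   ; result += W, which updates the carry flag
        btfss   STATUS, C   ; skip the branch if carry is set
        goto    no_carry    ; reached only when carry is clear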
Performance
The architectural decisions are aimed at maximizing the speed-to-cost ratio. The PIC architecture was among the first scalar CPU designs and is still among the simplest and cheapest. The Harvard architecture, in which instructions and data come from separate sources, simplifies timing and microcircuit design greatly, and this benefits clock speed, price, and power consumption.
The PIC instruction set is suited to implementation of fast lookup tables in the program space. Such lookups take one instruction and two instruction cycles. Many functions can be modeled in this way. Optimization is facilitated by the relatively large program space of the PIC (e.g. 4096 × 14-bit words on the 16F690) and by the design of the instruction set, which allows embedded constants. For example, a computed branch target may be indexed by W so that execution lands on a "RETLW" instruction, which, as its name suggests, returns with a literal in W.
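A sketch of this classic idiom on a mid-range core: the caller loads an index into W and calls the table, execution lands on one RETLW entry, and that entry returns with the table value in W. It is assumed that the table does not cross a 256-word boundary, and the values here are arbitrary illustrations:

get_entry:
        addwf   PCL, f      ; PC += W: jump into the table
        retlw   0x00        ; entry 0
        retlw   0x32        ; entry 1
        retlw   0x5A        ; entry 2
        retlw   0x7F        ; entry 3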
Interrupt latency is constant at three instruction cycles. External interrupts have to be synchronized with the four-clock instruction cycle; otherwise there can be one instruction cycle of jitter. Internal interrupts are already synchronized. The constant interrupt latency allows PICs to achieve interrupt-driven low-jitter timing sequences; an example is a video sync pulse generator. This is no longer true in the newest PIC models, which have a synchronous interrupt latency of three or four cycles.
Advantages
Small instruction set to learn
RISC architecture
Built-in oscillator with selectable speeds
Easy entry level: in-circuit programming and in-circuit debugging units (PICkit) are available for less than $50
Inexpensive microcontrollers
Wide range of interfaces including I²C, SPI, USB, UART, A/D, programmable comparators, PWM, LIN, CAN, PSP, and Ethernet
Availability of processors in DIL packages makes them easy to handle for hobby use.
Limitations
One accumulator
Register-bank switching is required to access the entire RAM of many devices
Operations and registers are not orthogonal; some instructions can address RAM and/or immediate constants, while others can use the accumulator only.
The following stack limitations have been addressed in the PIC18 series, but still apply to earlier cores:
The hardware call stack is not addressable, so preemptive task switching cannot be implemented
Software-implemented stacks are not efficient, so it is difficult to generate reentrant code and support local variables
With paged program memory, there are two page sizes to worry about: one for CALL and GOTO and another for computed GOTO (typically used for table lookups). For example, on PIC16, CALL and GOTO have 11 bits of addressing, so the page size is 2048 instruction words. For computed GOTOs, where you add to PCL, the page size is 256 instruction words. In both cases, the upper address bits are provided by the PCLATH register. This register must be changed every time control transfers between pages, and it must also be preserved by any interrupt handler (see the sketch below).
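A minimal sketch of a far call under these rules, in MPASM-style syntax; far_routine is a placeholder label assumed to live on another 2K page, and HIGH() extracts the upper address bits:

        movlw   HIGH(far_routine)
        movwf   PCLATH          ; select the destination page
        call    far_routine     ; low 11 bits come from the opcode itself
        movlw   HIGH($)         ; restore PCLATH for subsequent local jumps
        movwf   PCLATH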
Compiler development
While several commercial compilers are available, in 2008 Microchip released their own C compilers, C18 and C30, for the 18F, 24F, and 30/33F lines of processors.
As of 2013, Microchip offers their XC series of compilers, for use with MPLAB X. Microchip will eventually phase out its older compilers, such as C18, and recommends using their XC series compilers for new designs.
The RISC instruction set of the PIC assembly language code can make the overall flow difficult to comprehend. Judicious use of simple macros can increase the readability of PIC assembly language. For example, the original Parallax PIC assembler ("SPASM") has macros, which hide W and make the PIC look like a two-address machine. It has macro instructions like mov b, a (move the data from address a to address b) and add b, a (add data from address a to data in address b). It also hides the skip instructions by providing three-operand branch macro instructions, such as cjne a, b, dest (compare a with b and jump to dest if they are not equal).
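A hedged sketch of what such macros might look like in MPASM macro syntax; these are illustrative reconstructions, not the original SPASM definitions:

mov     macro   dst, src        ; move the data at src to dst, hiding W
        movf    src, w
        movwf   dst
        endm

cjne    macro   a, b, dest      ; compare a with b, jump to dest if not equal
        movf    a, w
        subwf   b, w            ; W := b - a; Z is set when they are equal
        btfss   STATUS, Z       ; skip the jump when equal
        goto    dest
        endm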
Hardware features
PIC devices generally feature:
Flash memory (program memory, programmed using MPLAB devices)
SRAM (data memory)
EEPROM memory (programmable at run-time)
Sleep mode (power savings)
Watchdog timer
Various crystal or RC oscillator configurations, or an external clock
Variants
Within a series, there are still many device variants depending on what hardware resources the chip features:
General purpose I/O pins
Internal clock oscillators
8/16/32 bit timers
Synchronous/Asynchronous Serial Interface USART
MSSP Peripheral for I²C and SPI communications
Capture/Compare and PWM modules
Analog-to-digital converters (up to ~1.0 Msps)
USB, Ethernet, CAN interfacing support
External memory interface
Integrated analog RF front ends (PIC16F639 and rfPIC).
KEELOQ Rolling code encryption peripheral (encode/decode)
And many more
Trends
The first generation of PICs with EPROM storage has been almost completely replaced by chips with flash memory. Likewise, the original 12-bit instruction set of the PIC1650 and its direct descendants has been superseded by 14-bit and 16-bit instruction sets. Microchip still sells OTP (one-time-programmable) and windowed (UV-erasable) versions of some of its EPROM-based PICs for legacy support or volume orders. The Microchip website lists PICs that are not electrically erasable as OTP; UV-erasable windowed versions of these chips can be ordered.
Part number
The F in a PICmicro part number generally indicates that the part uses flash memory and can be erased electronically. Conversely, a C generally means it can only be erased by exposing the die to ultraviolet light (which is only possible if a windowed package style is used). An exception to this rule is the PIC16C84, which uses EEPROM and is therefore electrically erasable.
An L in the name indicates the part will run at a lower voltage, often with frequency limits imposed. Parts designed specifically for low-voltage operation, within a strict range of 3–3.6 volts, are marked with a J in the part number. These parts are also uniquely I/O tolerant, as they will accept up to 5 V as inputs.
Development tools
Microchip provides a freeware IDE package called MPLAB X, which includes an assembler, linker, software simulator, and debugger. They also sell C compilers for the PIC10, PIC12, PIC16, PIC18, PIC24, PIC32 and dsPIC, which integrate cleanly with MPLAB X. Free versions of the C compilers are also available with all features, but their optimizations are disabled after 60 days.
Several third parties develop C language compilers for PICs, many of which integrate to MPLAB and/or feature their own IDE. A fully featured compiler for the PICBASIC language to program PIC microcontrollers is available from meLabs, Inc. Mikroelektronika offers PIC compilers in C, BASIC and Pascal programming languages.
A graphical programming language, Flowcode, exists capable of programming 8- and 16-bit PIC devices and generating PIC-compatible C code. It exists in numerous versions from a free demonstration to a more complete professional edition.
The Proteus Design Suite is able to simulate many of the popular 8 and 16-bit PIC devices along with other circuitry that is connected to the PIC on the schematic. The program to be simulated can be developed within Proteus itself, MPLAB or any other development tool.
Device programmers
Devices called "programmers" are traditionally used to get program code into the target PIC. Most PICs that Microchip currently sells feature ICSP (In Circuit Serial Programming) and/or LVP (Low Voltage Programming) capabilities, allowing the PIC to be programmed while it is sitting in the target circuit.
Microchip offers programmers/debuggers under the MPLAB and PICKit series. MPLAB ICD4 and MPLAB REAL ICE are the current programmers and debuggers for professional engineering, while PICKit 3 is a low-cost programmer / debugger line for hobbyists and students.
Bootloading
Many of the higher end flash based PICs can also self-program (write to their own program memory), a process known as bootloading. Demo boards are available with a small bootloader factory programmed that can be used to load user programs over an interface such as RS-232 or USB, thus obviating the need for a programmer device.
Alternatively there is bootloader firmware available that the user can load onto the PIC using ICSP. After programming the bootloader onto the PIC, the user can then reprogram the device using RS232 or USB, in conjunction with specialized computer software.
The advantages of a bootloader over ICSP are faster programming speeds, immediate program execution following programming, and the ability to both debug and program using the same cable.
Third party
There are many programmers for PIC microcontrollers, ranging from the extremely simple designs which rely on ICSP to allow direct download of code from a host computer, to intelligent programmers that can verify the device at several supply voltages. Many of these complex programmers use a pre-programmed PIC themselves to send the programming commands to the PIC that is to be programmed. The intelligent type of programmer is needed to program earlier PIC models (mostly EPROM type) which do not support in-circuit programming.
Third party programmers range from plans to build your own, to self-assembly kits and fully tested ready-to-go units. Some are simple designs which require a PC to do the low-level programming signalling (these typically connect to the serial or parallel port and consist of a few simple components), while others have the programming logic built into them (these typically use a serial or USB connection, are usually faster, and are often built using PICs themselves for control).
Debugging
In-circuit debugging
All newer PIC devices feature an ICD (in-circuit debugging) interface, built into the CPU core, that allows for interactive debugging of the program in conjunction with MPLAB IDE. MPLAB ICD and MPLAB REAL ICE debuggers can communicate with this interface using the ICSP interface.
This debugging system comes at a price, however: a limited breakpoint count (1 on older devices, 3 on newer devices), loss of some I/O (with the exception of some surface-mount 44-pin PICs, which have dedicated lines for debugging), and loss of some on-chip features.
Some devices do not have on-chip debug support, due to cost or lack of pins. Some larger chips also have no debug module. To debug these devices, a special -ICD version of the chip, mounted on a daughter board that provides dedicated ports, is required. Some of these debug chips are able to operate as more than one type of chip by the use of selectable jumpers on the daughter board. This allows broadly identical architectures that do not feature all the on-chip peripheral devices to be replaced by a single -ICD chip. For example, the 12F690-ICD will function as one of six different parts, each of which features one, some, or all of five on-chip peripherals.
In-circuit emulators
Microchip offers three full in-circuit emulators: the MPLAB ICE2000 (parallel interface, a USB converter is available); the newer MPLAB ICE4000 (USB 2.0 connection); and most recently, the REAL ICE (USB 2.0 connection). All such tools are typically used in conjunction with MPLAB IDE for source-level interactive debugging of code running on the target.
Operating systems
PIC projects may utilize real-time operating systems such as FreeRTOS, AVIX RTOS, uRTOS, Salvo RTOS or other similar libraries for task scheduling and prioritization.
An open source project by Serge Vakulenko adapts 2.11BSD to the PIC32 architecture, under the name RetroBSD. This brings a familiar Unix-like operating system, including an onboard development environment, to the microcontroller, within the constraints of the onboard hardware.
Clones
Parallax
Parallax produced a series of PICmicro-like microcontrollers known as the Parallax SX. It is currently discontinued. Designed to be architecturally similar to the PIC microcontrollers used in the original versions of the BASIC Stamp, SX microcontrollers replaced the PIC in several subsequent versions of that product.
Parallax's SX are 8-bit RISC microcontrollers, using a 12-bit instruction word, which run at up to 75 MHz (75 MIPS). They include up to 4096 12-bit words of flash memory and up to 262 bytes of random access memory, an eight-bit counter and other support logic. There are software library modules to emulate I²C and SPI interfaces, UARTs, frequency generators, measurement counters and PWM and sigma-delta A/D converters. Other interfaces are relatively easy to write, and existing modules can be modified to get new features.
PKK Milandr
Russian PKK Milandr produces microcontrollers using the PIC17 architecture as the 1886 series.
Program memory consists of up to 64kB Flash memory in the 1886VE2U (1886ВЕ2У) or 8kB EEPROM in the 1886VE5U (1886ВЕ5У). The 1886VE5U (1886ВЕ5У) through 1886VE7U (1886ВЕ7У) are specified for the military temperature range of -60 °C to +125 °C. Hardware interfaces in the various parts include USB, CAN, I2C, SPI, as well as A/D and D/A converters. The 1886VE3U (1886ВЕ3У) contains a hardware accelerator for cryptographic functions according to GOST 28147-89. There are even radiation-hardened chips with the designations 1886VE8U (1886ВЕ8У) and 1886VE10U (1886ВЕ10У).
ELAN Microelectronics
ELAN Microelectronics Corp. in Taiwan make a line of microcontrollers based on the PIC16 architecture, with 13-bit instructions and a smaller (6-bit) RAM address space.
Holtek Semiconductor
Holtek Semiconductor make a large number of very cheap microcontrollers (as low as 8.5 cents in quantity) with a 14-bit instruction set strikingly similar to the PIC16.
Other manufacturers in Asia
Many ultra-low-cost OTP microcontrollers from Asian manufacturers, found in low-cost consumer electronics, are based on the PIC architecture or a modified form of it. Most clones only target the baseline parts (PIC16C5x/PIC12C50x). Microchip has attempted to sue some manufacturers when the copying is particularly egregious, without success.
See also
PIC16x84
Atmel AVR
Arduino
BASIC Atom
BASIC Stamp
OOPic
PICAXE
TI MSP430
Maximite
References
Further reading
Microcontroller Theory and Applications, with the PIC18F; 2nd Ed; M. Rafiquzzaman; Wiley; 544 pages; 2018.
Microcontroller System Design Using PIC18F Processors; Nicolas K. Haddad; IGI Global; 428 pages; 2017.
PIC Microcontroller Projects in C: Basic to Advanced (for PIC18F); 2nd Ed; Dogan Ibrahim; Newnes; 660 pages; 2014.
Microcontroller Programming: Microchip PIC; Sanchez and Canton; CRC Press; 824 pages; 2006.
PIC Microcontroller Project Book; John Iovine; TAB; 272 pages; 2000.
External links
Official Microchip website
PIC wifi projects website
Microcontrollers
Instruction set architectures
Microchip Technology hardware
Tempest (codename)

TEMPEST is a U.S. National Security Agency specification and a NATO certification referring to spying on information systems through leaking emanations, including unintentional radio or electrical signals, sounds, and vibrations. TEMPEST covers both methods to spy upon others and how to shield equipment against such spying. The protection efforts are also known as emission security (EMSEC), which is a subset of communications security (COMSEC).
The NSA methods for spying on computer emissions are classified, but some of the protection standards have been released by either the NSA or the Department of Defense. Protecting equipment from spying is done with distance, shielding, filtering, and masking. The TEMPEST standards mandate elements such as equipment distance from walls, amount of shielding in buildings and equipment, and distance separating wires carrying classified vs. unclassified materials, filters on cables, and even distance and shielding between wires or equipment and building pipes. Noise can also protect information by masking the actual data.
While much of TEMPEST is about leaking electromagnetic emanations, it also encompasses sounds and mechanical vibrations. For example, it is possible to log a user's keystrokes using the motion sensor inside smartphones. Compromising emissions are defined as unintentional intelligence-bearing signals which, if intercepted and analyzed (side-channel attack), may disclose the information transmitted, received, handled, or otherwise processed by any information-processing equipment.
History
During World War II, Bell Telephone supplied the U.S. military with the 131-B2 mixer device that encrypted teleprinter signals by XOR'ing them with key material from one-time tapes (the SIGTOT system) or, earlier, a rotor-based key generator called SIGCUM. It used electromechanical relays in its operation. Later Bell informed the Signal Corps that they were able to detect electromagnetic spikes at a distance from the mixer and recover the plain text. Meeting skepticism over whether the phenomenon they discovered in the laboratory could really be dangerous, they demonstrated their ability to recover plain text from a Signal Corps crypto center on Varick Street in Lower Manhattan. Now alarmed, the Signal Corps asked Bell to investigate further. Bell identified three problem areas: radiated signals, signals conducted on wires extending from the facility, and magnetic fields. As possible solutions, they suggested shielding, filtering and masking.
Bell developed a modified mixer, the 131-A1 with shielding and filtering, but it proved difficult to maintain and too expensive to deploy. Instead, relevant commanders were warned of the problem and advised to control a 100 ft (30 m) -diameter zone around their communications center to prevent covert interception, and things were left at that. Then in 1951, the CIA rediscovered the problem with the 131-B2 mixer and found they could recover plain text off the line carrying the encrypted signal from a quarter-mile away. Filters for signal and power lines were developed, and the recommended control-perimeter radius was extended to 200 feet (60 m), based more on what commanders could be expected to accomplish than any technical criteria.
A long process of evaluating systems and developing possible solutions followed. Other compromising effects were discovered, such as fluctuations in the power line as rotors stepped. The question of exploiting the noise of electromechanical encryption systems had been raised in the late 1940s, but was re-evaluated now as a possible threat. Acoustical emanations could reveal plain text, but only if the pick-up device was close to the source. Nevertheless, even mediocre microphones would do. Soundproofing the room made the problem worse by removing reflections and providing a cleaner signal to the recorder.
In 1956, the Naval Research Laboratory developed a better mixer that operated at much lower voltages and currents and therefore radiated far less. It was incorporated in newer NSA encryption systems. However, many users needed the higher signal levels to drive teleprinters at greater distances or where multiple teleprinters were connected, so the newer encryption devices included the option to switch the signal back up to the higher strength. NSA began developing techniques and specifications for isolating sensitive communications pathways through filtering, shielding, grounding, and physical separation: lines that carried sensitive plain text were separated from those intended to carry only non-sensitive data, the latter often extending outside of the secure environment. This separation effort became known as the Red/Black Concept. A 1958 joint policy called NAG-1 set radiation standards for equipment and installations based on a 50 ft (15 m) limit of control. It also specified the classification levels of various aspects of the TEMPEST problem. The policy was adopted by Canada and the UK the next year. Six organizations, Navy, Army, Air Force, NSA, CIA, and the State Department, were to provide the bulk of the effort for its implementation.
Difficulties quickly emerged. Computerization was becoming important to processing intelligence data, and computers and their peripherals had to be evaluated, with many of them proving to have vulnerabilities. The Friden Flexowriter, a popular I/O typewriter at the time, proved to be among the strongest emitters, readable at distances up to 3,200 ft (about 1 km) in field tests. The U.S. Communications Security Board (USCSB) produced a Flexowriter Policy that banned its use overseas for classified information and limited its use within the U.S. to the Confidential level, and then only within a 400 ft (120 m) security zone – but users found the policy onerous and impractical. Later, the NSA found similar problems with the introduction of cathode-ray-tube displays (CRTs), which were also powerful radiators.
There was a multi-year process of moving from policy recommendations to more strictly enforced TEMPEST rules. The resulting Directive 5200.19, coordinated with 22 separate agencies, was signed by Secretary of Defense Robert McNamara in December 1964, but still took months to fully implement. The NSA’s formal implementation took effect in June 1966.
Meanwhile, the problem of acoustic emanations became more critical with the discovery of some 900 microphones in U.S. installations overseas, most behind the Iron Curtain. The response was to build room-within-a-room enclosures, some transparent, nicknamed "fish bowls". Other units were fully shielded to contain electronic emanations, but were unpopular with the personnel who were supposed to work inside; they called the enclosures "meat lockers", and sometimes just left their doors open. Nonetheless, they were installed in critical locations, such as the embassy in Moscow, where two were installed: one for State-Department use and one for military attachés. A unit installed at the NSA for its key-generation equipment cost $134,000.
Tempest standards continued to evolve in the 1970s and later, with newer testing methods and more-nuanced guidelines that took account of the risks in specific locations and situations. But then as now, security needs often met with resistance. According to NSA's David G. Boak, "Some of what we still hear today in our own circles, when rigorous technical standards are whittled down in the interest of money and time, are frighteningly reminiscent of the arrogant Third Reich with their Enigma cryptomachine."
Shielding standards
Many specifics of the TEMPEST standards are classified, but some elements are public. Current United States and NATO Tempest standards define three levels of protection requirements:
NATO SDIP-27 Level A (formerly AMSG 720B) and USA NSTISSAM Level I
"Compromising Emanations Laboratory Test Standard"
This is the strictest standard for devices that will be operated in NATO Zone 0 environments, where it is assumed that an attacker has almost immediate access (e.g. neighbouring room, 1 metre; 3' distance).
NATO SDIP-27 Level B (formerly AMSG 788A) and USA NSTISSAM Level II
"Laboratory Test Standard for Protected Facility Equipment"
This is a slightly relaxed standard for devices that are operated in NATO Zone 1 environments, where it is assumed that an attacker cannot get closer than about 20 metres (65') (or where building materials ensure an attenuation equivalent to the free-space attenuation of this distance).
NATO SDIP-27 Level C (formerly AMSG 784) and USA NSTISSAM Level III
"Laboratory Test Standard for Tactical Mobile Equipment/Systems"
An even more relaxed standard for devices operated in NATO Zone 2 environments, where attackers have to deal with the equivalent of 100 metres (300') of free-space attenuation (or equivalent attenuation through building materials).
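The "free-space attenuation" invoked in these zone definitions is conventionally estimated with the standard free-space path loss formula (textbook radio engineering, not itself part of the classified standards); for a distance $d$ in metres and a frequency $f$ in hertz,

$$\mathrm{FSPL}\,(\mathrm{dB}) = 20\log_{10} d + 20\log_{10} f + 20\log_{10}\!\left(\frac{4\pi}{c}\right),$$

where $c$ is the speed of light. Because this loss grows by about 6 dB per doubling of distance, the Zone 2 assumption of 100 m of free-space attenuation is roughly 40 dB more forgiving than the Zone 0 assumption of 1 m.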
Additional standards include:
NATO SDIP-29 (formerly AMSG 719G)
"Installation of Electrical Equipment for the Processing of Classified Information"
This standard defines installation requirements, for example in respect to grounding and cable distances.
AMSG 799B
"NATO Zoning Procedures"
Defines an attenuation measurement procedure, according to which individual rooms within a security perimeter can be classified into Zone 0, Zone 1, Zone 2, or Zone 3, which then determines what shielding test standard is required for equipment that processes secret data in these rooms.
The NSA and Department of Defense have declassified some TEMPEST elements after Freedom of Information Act requests, but the documents black out many key values and descriptions. The declassified version of the TEMPEST test standard is heavily redacted, with emanation limits and test procedures blacked out. A redacted version of the introductory Tempest handbook NACSIM 5000 was publicly released in December 2000. Additionally, the current NATO standard SDIP-27 (before 2006 known as AMSG 720B, AMSG 788A, and AMSG 784) is still classified.
Despite this, some declassified documents give information on the shielding required by TEMPEST standards. For example, Military Handbook 1195 includes a chart showing electromagnetic shielding requirements at different frequencies. A declassified NSA specification for shielded enclosures offers similar shielding values, requiring "a minimum of 100 dB insertion loss from 1 KHz to 10 GHz." Since many of the current requirements are still classified, there are no publicly available correlations between this 100 dB shielding requirement and the newer zone-based shielding standards.
In addition, many separation distance requirements and other elements are provided by the declassified NSA red-black installation guidance, NSTISSAM TEMPEST/2-95.
Certification
The information-security agencies of several NATO countries publish lists of accredited testing labs and of equipment that has passed these tests:
In Canada: Canadian Industrial TEMPEST Program
In Germany: BSI German Zoned Products List
In the UK: UK CESG Directory of Infosec Assured Products, Section 12
In the U.S.: NSA TEMPEST Certification Program
The United States Army also has a Tempest testing facility, as part of the U.S. Army Information Systems Engineering Command, at Fort Huachuca, Arizona. Similar lists and facilities exist in other NATO countries.
Tempest certification must apply to entire systems, not just to individual components, since connecting a single unshielded component (such as a cable or device) to an otherwise secure system could dramatically alter the system RF characteristics.
RED/BLACK separation
TEMPEST standards require "RED/BLACK separation", i.e., maintaining distance or installing shielding between circuits and equipment used to handle plaintext classified or sensitive information that is not encrypted (RED) and secured circuits and equipment (BLACK), the latter including those carrying encrypted signals. Manufacture of TEMPEST-approved equipment must be done under careful quality control to ensure that additional units are built exactly the same as the units that were tested. Changing even a single wire can invalidate the tests.
Correlated emanations
One aspect of Tempest testing that distinguishes it from limits on spurious emissions (e.g., FCC Part 15) is a requirement of absolute minimal correlation between radiated energy or detectable emissions and any plaintext data that are being processed.
Public research
In 1985, Wim van Eck published the first unclassified technical analysis of the security risks of emanations from computer monitors. This paper caused some consternation in the security community, which had previously believed that such monitoring was a highly sophisticated attack available only to governments; van Eck successfully eavesdropped on a real system, at a range of hundreds of metres, using just $15 worth of equipment plus a television set.
As a consequence of this research, such emanations are sometimes called "van Eck radiation", and the eavesdropping technique van Eck phreaking, although government researchers were already aware of the danger: Bell Labs had noted this vulnerability to secure teleprinter communications during World War II and was able to produce 75% of the plaintext being processed in a secure facility from a distance of 80 feet (24 metres). Additionally, the NSA published Tempest Fundamentals, NSA-82-89, NACSIM 5000, National Security Agency (Classified) on February 1, 1982. In addition, the van Eck technique was successfully demonstrated to non-TEMPEST personnel in Korea during the Korean War in the 1950s.
Markus Kuhn has discovered several low-cost techniques for reducing the chances that emanations from computer displays can be monitored remotely. With CRT displays and analog video cables, filtering out high-frequency components from fonts before rendering them on a computer screen will attenuate the energy at which text characters are broadcast. With modern flat panel displays, the high-speed digital serial interface (DVI) cables from the graphics controller are a main source of compromising emanations. Adding random noise to the least significant bits of pixel values may render the emanations from flat-panel displays unintelligible to eavesdroppers, but it is not a secure method. Since DVI uses a bit code scheme that tries to transport a balanced signal of 0 bits and 1 bits, there may not be much difference between two pixel colors that differ very much in their color or intensity. The emanations can differ drastically even if only the last bit of a pixel's color is changed. The signal received by the eavesdropper also depends on the frequency where the emanations are detected. The signal can be received on many frequencies at once, and each frequency's signal differs in contrast and brightness related to a certain color on the screen. Usually, the technique of smothering the RED signal with noise is not effective unless the power of the noise is sufficient to drive the eavesdropper's receiver into saturation, thus overwhelming the receiver input.
LED indicators on computer equipment can be a source of compromising optical emanations. One such technique involves the monitoring of the lights on a dial-up modem. Almost all modems flash an LED to show activity, and it is common for the flashes to be directly taken from the data line. As such, a fast optical system can easily see the changes in the flickers from the data being transmitted down the wire.
Recent research has shown it is possible to detect the radiation corresponding to a keypress event not only from wireless (radio) keyboards, but also from traditional wired keyboards, and even from laptop keyboards. From the 1970s onward, Soviet bugging of US Embassy IBM Selectric typewriters allowed the keypress-derived mechanical motion of bails, with attached magnets, to be detected by implanted magnetometers and converted via hidden electronics to a digital radio-frequency signal. Each eight-character transmission provided Soviet access to sensitive documents, as they were being typed, at US facilities in Moscow and Leningrad.
In 2014, researchers introduced "AirHopper", a bifurcated attack pattern showing the feasibility of data exfiltration from an isolated computer to a nearby mobile phone, using FM frequency signals.
In 2015, "BitWhisper", a covert signaling channel between air-gapped computers using thermal manipulations, was introduced. "BitWhisper" supports bidirectional communication and requires no additional dedicated peripheral hardware. Later in 2015, researchers introduced GSMem, a method for exfiltrating data from air-gapped computers over cellular frequencies. The transmission – generated by a standard internal bus – renders the computer into a small cellular transmitter antenna. In February 2018, research was published describing how low-frequency magnetic fields can be used to exfiltrate sensitive data from Faraday-caged, air-gapped computers, with malware code-named 'ODINI' that can control the low-frequency magnetic fields emitted from infected computers by regulating the load of CPU cores.
In 2018, a class of side-channel attack called "Screaming Channels" was introduced at ACM and Black Hat by Eurecom researchers. This kind of attack targets mixed-signal chips – containing an analog and a digital circuit on the same silicon die – with a radio transmitter. The result of this architecture, often found in connected objects, is that the digital part of the chip leaks some metadata about its computations into the analog part, so that the metadata leak ends up encoded in the noise of the radio transmission. Using signal-processing techniques, the researchers were able to extract the cryptographic keys used during communication and decrypt the content. The authors suppose that this attack class has already been known to government intelligence agencies for many years.
In popular culture
In the television series Numb3rs, season 1 episode "Sacrifice", a wire connected to a high gain antenna was used to "read" from a computer monitor.
In the television series Spooks, season 4 episode "The Sting", a failed attempt to read information from a computer that has no network link is described.
In the novel Cryptonomicon by Neal Stephenson, characters use Van Eck phreaking to likewise read information from a computer monitor in a neighboring room.
In the television series Agents of S.H.I.E.L.D., season 1 episode "Ragtag", an office is scanned for digital signatures in the UHF spectrum.
In the video game Tom Clancy's Splinter Cell: Chaos Theory, part of the final mission involves spying on a meeting in a Tempest-hardened war room. Throughout the entire Splinter Cell series, a laser microphone is used as well.
In the video game Rainbow Six: Siege, the operator Mute has experience in TEMPEST specifications. He designed a Signal Disrupter initially to ensure that hidden microphones in sensitive meetings would not transmit, and adapted them for combat, capable of disrupting remotely activated devices like breaching charges.
In the novel series The Laundry Files by Charles Stross, the character James Angleton (high ranking officer of an ultra-secret intelligence agency) always uses low tech devices such as a typewriter or a Memex to defend against TEMPEST (despite the building being tempest-shielded).
See also
Air gap (networking) - air gaps can be breached by TEMPEST-like techniques
Computer and network surveillance
Computer security
ECHELON
MIL-STD-461
Side-channel attack
References
Sources
Code names
Cryptographic attacks
Side-channel attacks
Signals intelligence
Surveillance
United States government secrecy
Wireless

Wireless communication (or just wireless, when the context allows) is the transfer of information between two or more points that do not use an electrical conductor as a medium for the transfer. The most common wireless technologies use radio waves. With radio waves, intended distances can be short, such as a few meters for Bluetooth, or as far as millions of kilometers for deep-space radio communications. It encompasses various types of fixed, mobile, and portable applications, including two-way radios, cellular telephones, personal digital assistants (PDAs), and wireless networking. Other examples of applications of radio wireless technology include GPS units, garage door openers, wireless computer mice, keyboards and headsets, headphones, radio receivers, satellite television, broadcast television and cordless telephones. Somewhat less common methods of achieving wireless communications include the use of other electromagnetic wireless technologies, such as light, magnetic, or electric fields, or the use of sound.
The term wireless has been used twice in communications history, with slightly different meaning. It was initially used from about 1890 for the first radio transmitting and receiving technology, as in wireless telegraphy, until the new word radio replaced it around 1920. Radios in the UK that were not portable continued to be referred to as wireless sets into the 1960s. The term was revived in the 1980s and 1990s mainly to distinguish digital devices that communicate without wires, such as the examples listed in the previous paragraph, from those that require wires or cables. This became its primary usage in the 2000s, due to the advent of technologies such as mobile broadband, Wi-Fi and Bluetooth.
Wireless operations permit services, such as mobile and interplanetary communications, that are impossible or impractical to implement with the use of wires. The term is commonly used in the telecommunications industry to refer to telecommunications systems (e.g. radio transmitters and receivers, remote controls, etc.) which use some form of energy (e.g. radio waves and acoustic energy) to transfer information without the use of wires. Information is transferred in this manner over both short and long distances.
History
Photophone
The first wireless telephone conversation occurred in 1880, when Alexander Graham Bell and Charles Sumner Tainter invented the photophone, a telephone that sent audio over a beam of light. The photophone required sunlight to operate, and a clear line of sight between transmitter and receiver. These factors greatly decreased the viability of the photophone in any practical use. It would be several decades before the photophone's principles found their first practical applications in military communications and later in fiber-optic communications.
Electric wireless technology
Early wireless
A number of wireless electrical signaling schemes, including sending electric currents through water and the ground using electrostatic and electromagnetic induction, were investigated for telegraphy in the late 19th century before practical radio systems became available. These included a patented induction system by Thomas Edison allowing a telegraph on a running train to connect with telegraph wires running parallel to the tracks, a William Preece induction telegraph system for sending messages across bodies of water, and several operational and proposed telegraphy and voice earth conduction systems.
The Edison system was used by stranded trains during the Great Blizzard of 1888, and earth-conductive systems found limited use between trenches during World War I, but these systems were never economically successful.
Radio waves
In 1894, Guglielmo Marconi began developing a wireless telegraph system using radio waves, which had been known about since proof of their existence in 1888 by Heinrich Hertz, but discounted as a communication format since they seemed, at the time, to be a short-range phenomenon. Marconi soon developed a system that transmitted signals far beyond distances anyone could have predicted (due in part to the signals bouncing off the then-unknown ionosphere). Marconi and Karl Ferdinand Braun were awarded the 1909 Nobel Prize for Physics for their contribution to this form of wireless telegraphy.
Millimetre-wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves, when he patented the radio crystal detector in 1901.
Wireless revolution
The wireless revolution began in the 1990s, with the advent of digital wireless networks leading to a social revolution, and a paradigm shift from wired to wireless technology, including the proliferation of commercial wireless technologies such as cell phones, mobile telephony, pagers, wireless computer networks, cellular networks, the wireless Internet, and laptop and handheld computers with wireless connections. The wireless revolution has been driven by advances in radio frequency (RF) and microwave engineering, and the transition from analog to digital RF technology, which enabled a substantial increase in voice traffic along with the delivery of digital data such as text messaging, images and streaming media.
Modes
Wireless communications can be via:
Radio
Radio and microwave communication carry information by modulating properties of electromagnetic waves transmitted through space. Specifically, the transmitter generates artificial electromagnetic waves by applying time-varying electric currents to its antenna. The waves travel away from the antenna until they eventually reach the antenna of a receiver, which induces an electrical current in the receiving antenna. This current can be detected and demodulated to recreate the information sent by the transmitter.
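As one concrete illustration of "modulating properties of electromagnetic waves", a textbook amplitude-modulation expression (not specific to any particular system mentioned here) is

$$s(t) = A_c\left[1 + \mu\, m(t)\right]\cos(2\pi f_c t),$$

where $m(t)$ is the normalized message signal, $A_c$ and $f_c$ are the carrier amplitude and frequency, and $\mu$ is the modulation index; a receiver recovers $m(t)$ by detecting the envelope of $s(t)$.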
Free-space optical
Free-space optical communication (FSO) is an optical communication technology that uses light propagating in free space to transmit wirelessly data for telecommunications or computer networking. "Free space" means the light beams travel through the open air or outer space. This contrasts with other communication technologies that use light beams traveling through transmission lines such as optical fiber or dielectric "light pipes".
The technology is useful where physical connections are impractical due to high costs or other considerations. For example, free space optical links are used in cities between office buildings which are not wired for networking, where the cost of running cable through the building and under the street would be prohibitive. Another widely used example is consumer IR devices such as remote controls and IrDA (Infrared Data Association) networking, which is used as an alternative to WiFi networking to allow laptops, PDAs, printers, and digital cameras to exchange data.
Sonic
Sonic, especially ultrasonic short range communication involves the transmission and reception of sound.
Electromagnetic induction
Electromagnetic induction only allows short-range communication and power transmission. It has been used in biomedical situations such as pacemakers, as well as for short-range RFID tags.
Services
Common examples of wireless equipment include:
Infrared and ultrasonic remote control devices
Professional LMR (Land Mobile Radio) and SMR (Specialized Mobile Radio) typically used by business, industrial and Public Safety entities.
Consumer Two-way radio including FRS Family Radio Service, GMRS (General Mobile Radio Service) and Citizens band ("CB") radios.
The Amateur Radio Service (Ham radio).
Consumer and professional Marine VHF radios.
Airband and radio navigation equipment used by aviators and air traffic control
Cellular telephones and pagers: provide connectivity for portable and mobile applications, both personal and business.
Global Positioning System (GPS): allows drivers of cars and trucks, captains of boats and ships, and pilots of aircraft to ascertain their location anywhere on earth.
Cordless computer peripherals: the cordless mouse is a common example; wireless headphones, keyboards, and printers can also be linked to a computer via wireless using technology such as Wireless USB or Bluetooth.
Cordless telephone sets: these are limited-range devices, not to be confused with cell phones.
Satellite television: Is broadcast from satellites in geostationary orbit. Typical services use direct broadcast satellite to provide multiple television channels to viewers.
Electromagnetic spectrum
AM and FM radios and other electronic devices make use of the electromagnetic spectrum. The frequencies of the radio spectrum that are available for use for communication are treated as a public resource and are regulated by organizations such as the American Federal Communications Commission, Ofcom in the United Kingdom, the international ITU-R or the European ETSI. Their regulations determine which frequency ranges can be used for what purpose and by whom. In the absence of such control or alternative arrangements such as a privatized electromagnetic spectrum, chaos might result if, for example, airlines did not have specific frequencies to work under and an amateur radio operator was interfering with a pilot's ability to land an aircraft. Wireless communication spans the spectrum from 9 kHz to 300 GHz.
Applications
Mobile telephones
One of the best-known examples of wireless technology is the mobile phone, also known as a cellular phone, with more than 6.6 billion mobile cellular subscriptions worldwide as of the end of 2010. These wireless phones use radio waves from signal-transmission towers to enable their users to make phone calls from many locations worldwide. They can be used within range of the mobile telephone site used to house the equipment required to transmit and receive the radio signals from these instruments.
Data communications
Wireless data communications allows wireless networking between desktop computers, laptops, tablet computers, cell phones and other related devices. The various available technologies differ in local availability, coverage range and performance, and in some circumstances users employ multiple connection types and switch between them using connection manager software or a mobile VPN to handle the multiple connections as a secure, single virtual network. Supporting technologies include:
Wi-Fi is a wireless local area network that enables portable computing devices to connect easily with other devices, peripherals, and the Internet. Standardized as IEEE 802.11 a, b, g, n, ac, ax, Wi-Fi has link speeds similar to older standards of wired Ethernet. Wi-Fi has become the de facto standard for access in private homes, within offices, and at public hotspots. Some businesses charge customers a monthly fee for service, while others have begun offering it free in an effort to increase the sales of their goods.
Cellular data service offers coverage within a range of 10-15 miles from the nearest cell site. Speeds have increased as technologies have evolved, from earlier technologies such as GSM, CDMA and GPRS, through 3G, to 4G networks such as W-CDMA, EDGE or CDMA2000. As of 2018, the proposed next generation is 5G.
Low-power wide-area networks (LPWAN) bridge the gap between Wi-Fi and Cellular for low bitrate Internet of things (IoT) applications.
Mobile-satellite communications may be used where other wireless connections are unavailable, such as in largely rural areas or remote locations. Satellite communications are especially important for transportation, aviation, maritime and military use.
Wireless sensor networks are responsible for sensing noise, interference, and activity in data collection networks. This makes it possible to detect relevant quantities, monitor and collect data, formulate clear user displays, and perform decision-making functions.
Wireless data communications are used to span a distance beyond the capabilities of typical cabling in point-to-point communication and point-to-multipoint communication, to provide a backup communications link in case of normal network failure, to link portable or temporary workstations, to overcome situations where normal cabling is difficult or financially impractical, or to remotely connect mobile users or networks.
Peripherals
Peripheral devices in computing can also be connected wirelessly, as part of a Wi-Fi network or directly via an optical or radio-frequency (RF) peripheral interface. Originally these units used bulky, highly local transceivers to mediate between a computer and a keyboard and mouse; however, more recent generations have used smaller, higher-performance devices. Radio-frequency interfaces, such as Bluetooth or Wireless USB, provide greater ranges of efficient use, usually up to 10 feet, but distance, physical obstacles, competing signals, and even human bodies can all degrade the signal quality. Concerns about the security of wireless keyboards arose at the end of 2007, when it was revealed that Microsoft's implementation of encryption in some of its 27 MHz models was highly insecure.
Energy transfer
Wireless energy transfer is a process whereby electrical energy is transmitted from a power source to an electrical load that does not have a built-in power source, without the use of interconnecting wires. There are two different fundamental methods for wireless energy transfer. Energy can be transferred using either far-field methods that involve beaming power/lasers, radio or microwave transmissions or near-field using electromagnetic induction. Wireless energy transfer may be combined with wireless information transmission in what is known as Wireless Powered Communication. In 2015, researchers at the University of Washington demonstrated far-field energy transfer using Wi-Fi signals to power cameras.
Medical technologies
New wireless technologies, such as mobile body area networks (MBAN), have the capability to monitor blood pressure, heart rate, oxygen level and body temperature. The MBAN works by sending low-powered wireless signals to receivers that feed into nursing stations or monitoring sites. This technology helps mitigate the intentional and unintentional risks of infection or disconnection that arise from wired connections.
Categories of implementations, devices and standards
Cellular networks: 0G, 1G, 2G, 3G, Beyond 3G (4G), Future wireless
Cordless telephony: DECT (Digital Enhanced Cordless Telecommunications)
Land Mobile Radio or Professional Mobile Radio: TETRA, P25, OpenSky, EDACS, DMR, dPMR
List of emerging technologies
Radio station in accordance with ITU RR (article 1.61)
Radiocommunication service in accordance with ITU RR (article 1.19)
Radio communication system
Short-range point-to-point communication: Wireless microphones, Remote controls, IrDA, RFID (Radio Frequency Identification), TransferJet, Wireless USB, DSRC (Dedicated Short Range Communications), EnOcean, Near Field Communication
Wireless sensor networks: ZigBee, EnOcean; Personal area networks, Bluetooth, TransferJet, Ultra-wideband (UWB from WiMedia Alliance).
Wireless networks: Wireless LAN (WLAN), (IEEE 802.11 branded as Wi-Fi and HiperLAN), Wireless Metropolitan Area Networks (WMAN) and (LMDS, WiMAX, and HiperMAN)
See also
Comparison of wireless data standards
Digital radio
Hotspot (Wi-Fi)
Li-Fi
MiFi
Mobile (disambiguation)
Radio antenna
Radio resource management (RRM)
Timeline of radio
Tuner (radio)
Wireless access point
Wireless security
Wireless Wide Area Network (True wireless)
ISO 15118 (Vehicle to Grid)
References
Further reading
External links
Bibliography - History of wireless and radio broadcasting
Sir Jagadis Chandra Bose - The man who (almost) invented the radio
History of radio
Television terminology
iTunes

iTunes is a software program that acts as a media player, media library, mobile device management utility, and the client app for the iTunes Store. Developed by Apple Inc., it is used to purchase, play, download, and organize digital multimedia, on personal computers running the macOS and Windows operating systems, and can be used to rip songs from CDs, as well as play content with the use of dynamic, smart playlists. Options for sound optimizations exist, as well as ways to wirelessly share the iTunes library.
Originally announced by CEO Steve Jobs on January 9, 2001, iTunes' original and main focus was music, with a library offering organization, collection, and storage of users' music collections. Starting in 2005, Apple expanded on the core music features with support for digital video, podcasts, e-books, and mobile apps purchased from the iOS App Store.
Until the release of iOS 5 in 2011, all iPhones, iPod Touches and iPads required iTunes for activation and updating mobile apps. Newer iOS devices have less reliance on iTunes in order to function, though it can still be used to back up the contents of mobile devices, as well as to share files with personal computers.
Though well received in its early years, iTunes soon received increasingly significant criticism for a bloated user experience, with Apple adopting an all-encompassing feature-set in iTunes rather than sticking to its original music-based purpose. On June 3, 2019, Apple announced that iTunes in macOS Catalina would be replaced by separate apps, namely Music, Podcasts, and TV. Finder would take over the device management capabilities. This change would not affect Windows or older macOS versions.
History
SoundJam MP, released by Casady & Greene in 1998, was renamed "iTunes" when Apple purchased it in 2000. The primary developers of the software moved to Apple as part of the acquisition, and simplified SoundJam's user interface, added the ability to burn CDs, and removed its recording feature and skin support. The first version of iTunes, promotionally dubbed "World’s Best and Easiest To Use Jukebox Software," was announced on January 9, 2001. Subsequent releases of iTunes often coincided with new hardware devices, and gradually included support for new features, including "smart playlists", the iTunes Store, and new audio formats.
Platform availability
Apple released iTunes for Windows in 2003.
On April 26, 2018, iTunes was released on Microsoft Store for Windows 10, primarily to allow it to be installed on Windows 10 devices configured to only allow installation of software from Microsoft Store. Unlike Windows versions for other platforms, it is more self-contained due to technical requirements for distribution on the store (not installing background helper services such as Bonjour), and is updated automatically though the store rather than using Apple Software Update.
Music library
iTunes features a music library. Each track has attributes, called metadata, that can be edited by the user, including changing the name of the artist, album, and genre, year of release, artwork, among other additional settings. The software supports importing digital audio tracks that can then be transferred to iOS devices, as well as supporting ripping content from CDs. iTunes supports WAV, AIFF, Apple Lossless, AAC, and MP3 audio formats. It uses the Gracenote music database to provide track name listings for audio CDs. When users rip content from a CD, iTunes attempts to match songs to the Gracenote service. For self-published CDs, or those from obscure record labels, iTunes will normally only list tracks as numbered entries ("Track 1" and "Track 2") on an unnamed album by an unknown artist, requiring manual input of data.
File metadata is displayed in users' libraries in columns, including album, artist, genre, composer, and more. Users can enable or disable different columns, as well as change view settings.
Special playlists
Introduced in 2004, "Party Shuffle" selected tracks to play randomly from the library, though users could press a button to skip a song and go to the next in the list. The feature was later renamed "iTunes DJ", before being discontinued altogether, replaced by a simpler "Up Next" feature that notably lost some of "iTunes DJ"'s functionality.
Introduced in iTunes 8 in 2008, "Genius" can automatically generate a playlist of songs from the user's library that "go great together". "Genius" transmits information about the user's library to Apple anonymously, and evolves over time to enhance its recommendation system. It can also suggest purchases to fill out "holes" in the library. The feature was updated with iTunes 9 in 2009 to offer "Genius Mixes", which generated playlists based on specific music genres.
"Smart playlists" are a set of playlists that can be set to automatically filter the library based on a customized list of selection criteria, much like a database query. Multiple criteria can be entered to manage the smart playlist. Selection criteria examples include a genre like Christmas music, songs that haven't been played recently, or songs the user has listened to the most in a time period.
Library sharing
Through a "Home Sharing" feature, users can share their iTunes library wirelessly. Computer firewalls must allow network traffic, and users must specifically enable sharing in the iTunes preferences menu. iOS applications also exist that can transfer content without Internet. Additionally, users can set up a network-attached storage system, and connect to that storage system through an app.
Artwork printing
To compensate for the "boring" design of standard CDs, iTunes can print custom-made jewel case inserts. After burning a CD from a playlist, one can select that playlist and bring up a dialog box with several print options, including different "Themes" of album artworks.
Sound processing
iTunes includes sound processing features, such as equalization, "sound enhancement" and crossfade. There is also a feature called "Sound Check", which automatically adjusts the playback volume of all songs in the library to the same level.
Video
In May 2005, video support was introduced to iTunes with the release of iTunes 4.8, though it was limited to bonus features included with album purchases. The following October, Apple introduced iTunes 6, enabling support for purchasing and viewing video content from the iTunes Store. At launch, the store offered popular shows from the ABC network, including Desperate Housewives and Lost, along with Disney Channel series That's So Raven and The Suite Life of Zack and Cody. CEO Steve Jobs told the press that "We’re doing for video what we’ve done for music — we’re making it easy and affordable to purchase and download, play on your computer, and take with you on your iPod."
In 2008, Apple and select film studios introduced "iTunes Digital Copy", a feature on select DVDs and Blu-ray discs that includes a digital copy playable in iTunes and associated media players.
Podcasts
In June 2005, Apple updated iTunes with support for podcasts. Users can subscribe to podcasts, change the update frequency, and define how many episodes to download and how many to delete.
Similar to songs, "Smart playlists" can be used to control podcasts in a playlist, setting criteria such as date and number of times listened to.
Apple is credited for being the major catalyst behind the early growth of podcasting.
Books
In January 2010, Apple announced the iPad tablet, and along with it, a new app for it called iBooks (now known as Apple Books). The app allowed users to purchase e-books from the iTunes Store, manage them through iTunes, and transfer the content to their iPad.
Apps
On July 10, 2008, Apple introduced native mobile apps for its iOS operating system. On iOS, a dedicated App Store application served as the storefront for browsing, purchasing and managing applications, whereas iTunes on computers had a dedicated section for apps rather than a separate app. In September 2017, Apple updated iTunes to version 12.7, removing the App Store section in the process. However, the following month, iTunes 12.6.3 was also released, retaining the App Store, with 9to5Mac noting that the secondary release was positioned by Apple as "necessary for some businesses performing internal app deployments".
iTunes Store
Introduced on April 28, 2003, the iTunes Music Store allows users to buy and download songs, with 200,000 tracks available at launch. In its first week, customers bought more than one million songs. Music purchased was protected by FairPlay, an encryption layer referred to as digital rights management (DRM). The use of DRM, which limited devices capable of playing purchased files, sparked efforts to remove the protection mechanism. Eventually, after an open letter to the music industry by CEO Steve Jobs in February 2007, Apple introduced a selection of DRM-free music in the iTunes Store in April 2007, followed by its entire music catalog without DRM in January 2009.
In October 2005, Apple announced that movies and television shows would become available through its iTunes Store, employing the DRM protection.
iTunes U
In May 2007, Apple announced the launch of "iTunes U" via the iTunes Store, which delivers university lectures from top U.S. colleges.
With iTunes version 12.7 in August 2017, iTunes U collections became a part of the Podcasts app.
On June 10, 2020, Apple formally announced that iTunes U would be discontinued at the end of 2021.
iTunes in the Cloud and iTunes Match
In June 2011, Apple announced "iTunes in the Cloud", in which music purchases were stored on Apple's servers and made available for automatic downloading on new devices. For music the user owns, such as content ripped from CDs, the company introduced "iTunes Match", a feature that can upload content to Apple's servers, match it to its catalog, convert it to 256 kbit/s AAC format, and make it available on other devices.
Internet radio and music streaming
When iTunes was first released, it came with support for the Kerbango Internet radio tuner service. In June 2013, the company announced iTunes Radio, a free music streaming service. In June 2015, Apple announced Apple Music, its paid music streaming service, and subsequently rebranded iTunes Radio as Beats 1, a radio station accompanying Apple Music.
iPhone connectivity
iTunes was used to activate early iPhone models. Beginning with the iPhone 3G in June 2008, activation no longer required iTunes, as devices could be activated at the point of sale. Later iPhone models can be activated and set up on their own, without requiring iTunes.
Ping
With the release of iTunes 10 in September 2010, Apple announced iTunes Ping, which CEO Steve Jobs described as "social music discovery". It had features reminiscent of Facebook, including profiles and the ability to follow other users. Ping was discontinued in September 2012.
Criticism
Security
The Telegraph reported in November 2011 that Apple had been aware of a security vulnerability since 2008 that would let unauthorized third parties install "updates" to users' iTunes software. Apple fixed the issue before the Telegraph's report and told the media that "The security and privacy of our users is extremely important", though this was questioned by security researcher Brian Krebs, who told the publication that "A prominent security researcher warned Apple about this dangerous vulnerability in mid-2008, yet the company waited more than 1,200 days to fix the flaw."
Software bloat
iTunes has been repeatedly accused of being bloated as part of Apple's efforts to turn it from a simple music player into an all-encompassing multimedia platform. Former PC World editor Ed Bott accused the company of hypocrisy in its advertising attacks on Windows for similar practices.
Starting with macOS 10.15 Catalina, the role of iTunes has been taken over by independent Music, Podcasts, and TV apps, with iPhone, iPod, and iPad management moved into Finder.
See also
iTunes Festival
iTunes Store
iTunes version history
AirPlay
List of audio conversion software
Comparison of iPod managers
Dazzboard
Distribution Into iTunes
FairPlay
Comparison of feed aggregators
List of feed aggregators
Comparison of media players
Music visualization
References
External links
Official site
Apple Inc. services
Online music database clients
Computer-related introductions in 2001
Products and services discontinued in 2019
2001 software
Apple Inc. software
IOS software
IPod software
Mobile device management software
MacOS CD ripping software
Podcasting software
Internet properties established in 2001
Internet properties disestablished in 2019
Jukebox-style media players
Macintosh media players
MacOS media players
Music streaming services
Tag editors
Transactional video on demand
Windows CD ripping software
Windows CD/DVD writing software
Windows media players
Point-to-Point Tunneling Protocol

The Point-to-Point Tunneling Protocol (PPTP) is an obsolete method for implementing virtual private networks. PPTP has many well-known security issues.
PPTP uses a TCP control channel and a Generic Routing Encapsulation tunnel to encapsulate PPP packets. Many modern VPNs use various forms of UDP for this same functionality.
The PPTP specification does not describe encryption or authentication features and relies on the Point-to-Point Protocol being tunneled to implement any and all security functionalities.
The PPTP implementation that ships with the Microsoft Windows product families implements various levels of authentication and encryption natively as standard features of the Windows PPTP stack. The intended use of this protocol is to provide security levels and remote access levels comparable with typical VPN products.
History
A specification for PPTP was published in July 1999 as RFC 2637 and was developed by a vendor consortium formed by Microsoft, Ascend Communications (today part of Nokia), 3Com, and others.
PPTP has not been proposed nor ratified as a standard by the Internet Engineering Task Force.
Description
A PPTP tunnel is instantiated by communication to the peer on TCP port 1723. This TCP connection is then used to initiate and manage a GRE tunnel to the same peer. The PPTP GRE packet format is non-standard, including a new acknowledgement number field replacing the typical routing field in the GRE header. However, as in a normal GRE connection, those modified GRE packets are directly encapsulated into IP packets, and seen as IP protocol number 47. The GRE tunnel is used to carry encapsulated PPP packets, allowing the tunnelling of any protocols that can be carried within PPP, including IP, NetBEUI and IPX.
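As a rough sketch of the wire format, the enhanced GRE header described above can be packed with Python's struct module. The field layout follows RFC 2637; this is an illustration of the header only, not a working PPTP implementation.

```python
import struct

def pptp_gre_header(call_id: int, seq: int, ack: int, payload_len: int) -> bytes:
    """Pack the enhanced GRE header used by PPTP (RFC 2637).

    Byte 0 sets K (key present) and S (sequence number present);
    byte 1 sets A (acknowledgment number present) and version 1.
    The 'key' field is split into payload length and call ID.
    """
    flags_ver = 0x3081   # K + S in byte 0; A + version 1 in byte 1
    protocol = 0x880B    # protocol type for PPP
    return struct.pack("!HHHHII", flags_ver, protocol,
                       payload_len, call_id, seq, ack)

# The resulting header is carried directly in IP packets (protocol number 47).
print(pptp_gre_header(call_id=1, seq=42, ack=41, payload_len=60).hex())
```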
In the Microsoft implementation, the tunneled PPP traffic can be authenticated with PAP, CHAP, or MS-CHAP v1/v2.
Security
PPTP has been the subject of many security analyses and serious security vulnerabilities have been found in the protocol. The known vulnerabilities relate to the underlying PPP authentication protocols used, the design of the MPPE protocol as well as the integration between MPPE and PPP authentication for session key establishment.
A summary of these vulnerabilities is below:
MS-CHAP-v1 is fundamentally insecure. Tools exist to trivially extract the NT password hashes from a captured MS-CHAP-v1 exchange.
When using MS-CHAP-v1, MPPE uses the same RC4 session key for encryption in both directions of the communication flow. This can be cryptanalysed with standard methods by XORing the streams from each direction together (a toy demonstration follows this list).
MS-CHAP-v2 is vulnerable to dictionary attacks on the captured challenge response packets. Tools exist to perform this process rapidly.
In 2012, it was demonstrated that the complexity of a brute-force attack on a MS-CHAP-v2 key is equivalent to a brute-force attack on a single DES key. An online service was also demonstrated which is capable of recovering the MD4 password hash from a captured MS-CHAP-v2 exchange in 23 hours.
MPPE uses the RC4 stream cipher for encryption. There is no method for authentication of the ciphertext stream and therefore the ciphertext is vulnerable to a bit-flipping attack. An attacker could modify the stream in transit and adjust single bits to change the output stream without possibility of detection. These bit flips may be detected by the protocols themselves through checksums or other means.
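The two-direction key reuse noted above is the classic "two-time pad" failure: XORing the two ciphertext streams cancels the shared RC4 keystream, leaving the XOR of the two plaintexts. A self-contained demonstration follows (pure-Python RC4 for illustration only; RC4 is broken and must not be used in practice).

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Standard RC4: key scheduling followed by the pseudo-random generation loop.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"shared-session-key"              # same key used in both directions
p_up, p_dn = b"client->server text", b"server->client data"
c_up, c_dn = rc4(key, p_up), rc4(key, p_dn)

# XOR of ciphertexts equals XOR of plaintexts: the keystream cancels out.
assert bytes(a ^ b for a, b in zip(c_up, c_dn)) == \
       bytes(a ^ b for a, b in zip(p_up, p_dn))
```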
EAP-TLS is seen as the superior authentication choice for PPTP; however, it requires implementation of a public-key infrastructure for both client and server certificates. As such, it may not be a viable authentication option for some remote access installations. Most networks that use PPTP have to apply additional security measures or be deemed completely inappropriate for the modern internet environment; at the same time, applying such measures negates the protocol's remaining benefits to some extent.
See also
IPsec
Layer 2 Tunneling Protocol (L2TP)
Secure Socket Tunneling Protocol (SSTP)
OpenVPN, open source software application that implements VPN
WireGuard, a simple and effective VPN implementation
References
External links
Windows NT: Understanding PPTP from Microsoft
FAQ on security flaws in Microsoft's implementation, Bruce Schneier, 1998
Cryptanalysis of Microsoft's PPTP Authentication Extensions (MS-CHAPv2), Bruce Schneier, 1999
Broken cryptography algorithms
Transport layer protocols
Tunneling protocols
End-to-end

End-to-end or End to End may refer to:
End-to-end auditable voting systems, a voting system
End-to-end delay, the time for a packet to be transmitted across a network from source to destination
End-to-end encryption, a cryptographic paradigm involving uninterrupted protection of data traveling between two communicating parties
End-to-end data integrity
End-to-end principle, a design principle of the Internet
End-to-end reinforcement learning
End-to-end vector, points from one end of a polymer to the other end
Land's End to John o' Groats, the journey from "End to End" across Great Britain
End-to-end testing (see also: Verification and validation)
See also
E2E (disambiguation)
Point-to-point (telecommunications)
Transport Layer Security

Transport Layer Security (TLS), the successor of the now-deprecated Secure Sockets Layer (SSL), is a cryptographic protocol designed to provide communications security over a computer network. The protocol is widely used in applications such as email, instant messaging, and voice over IP, but its use in securing HTTPS remains the most publicly visible.
The TLS protocol aims primarily to provide cryptography, including privacy (confidentiality), integrity, and authenticity through the use of certificates, between two or more communicating computer applications. It runs in the application layer and is itself composed of two layers: the TLS record and the TLS handshake protocols.
TLS is a proposed Internet Engineering Task Force (IETF) standard, first defined in 1999, and the current version is TLS 1.3 defined in August 2018. TLS builds on the earlier SSL specifications (1994, 1995, 1996) developed by Netscape Communications for adding the HTTPS protocol to their Navigator web browser.
Description
Client-server applications use the TLS protocol to communicate across a network in a way designed to prevent eavesdropping and tampering.
Since applications can communicate either with or without TLS (or SSL), it is necessary for the client to request that the server sets up a TLS connection. One of the main ways of achieving this is to use a different port number for TLS connections. For example, port 80 is typically used for unencrypted HTTP traffic while port 443 is the common port used for encrypted HTTPS traffic. Another mechanism is for the client to make a protocol-specific request to the server to switch the connection to TLS; for example, by making a STARTTLS request when using the mail and news protocols.
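A minimal sketch of the port-based approach, using Python's standard ssl module to open an HTTPS connection on port 443 (example.org is only a placeholder host):

```python
import socket
import ssl

context = ssl.create_default_context()   # system CA store, sane defaults
with socket.create_connection(("example.org", 443)) as raw_sock:
    # The TLS handshake happens inside wrap_socket().
    with context.wrap_socket(raw_sock, server_hostname="example.org") as tls:
        print(tls.version())              # e.g. 'TLSv1.3'
        print(tls.cipher())               # negotiated cipher suite
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.org\r\n\r\n")
        print(tls.recv(256))
```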
Once the client and server have agreed to use TLS, they negotiate a stateful connection by using a handshaking procedure. The protocols use a handshake with an asymmetric cipher to establish not only cipher settings but also a session-specific shared key with which further communication is encrypted using a symmetric cipher. During this handshake, the client and server agree on various parameters used to establish the connection's security:
The handshake begins when a client connects to a TLS-enabled server requesting a secure connection and the client presents a list of supported cipher suites (ciphers and hash functions).
From this list, the server picks a cipher and hash function that it also supports and notifies the client of the decision.
The server usually then provides identification in the form of a digital certificate. The certificate contains the server name, the trusted certificate authority (CA) that vouches for the authenticity of the certificate, and the server's public encryption key.
The client confirms the validity of the certificate before proceeding.
To generate the session keys used for the secure connection, the client either:
encrypts a random number (PreMasterSecret) with the server's public key and sends the result to the server (which only the server should be able to decrypt with its private key); both parties then use the random number to generate a unique session key for subsequent encryption and decryption of data during the session
uses Diffie–Hellman key exchange to securely generate a random and unique session key for encryption and decryption that has the additional property of forward secrecy: if the server's private key is disclosed in future, it cannot be used to decrypt the current session, even if the session is intercepted and recorded by a third party (a toy sketch of this exchange follows below).
This concludes the handshake and begins the secured connection, which is encrypted and decrypted with the session key until the connection closes. If any one of the above steps fails, then the TLS handshake fails and the connection is not created.
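To make the Diffie–Hellman option above concrete, here is a toy finite-field exchange. The parameters are deliberately small stand-ins; real TLS uses large standardized groups or elliptic curves, and the raw shared secret is fed through a key-derivation function rather than used directly.

```python
import secrets

# Toy public parameters: a 64-bit prime (2**64 - 59) and a small base.
p, g = 0xFFFFFFFFFFFFFFC5, 5

a = secrets.randbelow(p - 2) + 1      # client's ephemeral secret
b = secrets.randbelow(p - 2) + 1      # server's ephemeral secret

A = pow(g, a, p)                      # sent client -> server
B = pow(g, b, p)                      # sent server -> client

# Both sides derive the same value without it ever crossing the wire.
assert pow(B, a, p) == pow(A, b, p)
shared_secret = pow(B, a, p)
```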
TLS and SSL do not fit neatly into any single layer of the OSI model or the TCP/IP model. TLS runs "on top of some reliable transport protocol (e.g., TCP)," which would imply that it is above the transport layer. It serves encryption to higher layers, which is normally the function of the presentation layer. However, applications generally use TLS as if it were a transport layer, even though applications using TLS must actively control initiating TLS handshakes and handling of exchanged authentication certificates.
When secured by TLS, connections between a client (e.g., a web browser) and a server (e.g., wikipedia.org) should have one or more of the following properties:
The connection is private (or secure) because a symmetric-key algorithm is used to encrypt the data transmitted. The keys for this symmetric encryption are generated uniquely for each connection and are based on a shared secret that was negotiated at the start of the session. The server and client negotiate the details of which encryption algorithm and cryptographic keys to use before the first byte of data is transmitted (see below). The negotiation of a shared secret is both secure (the negotiated secret is unavailable to eavesdroppers and cannot be obtained, even by an attacker who places themself in the middle of the connection) and reliable (no attacker can modify the communications during the negotiation without being detected).
The identity of the communicating parties can be authenticated using public-key cryptography. This authentication is required for the server and optional for the client.
The connection is reliable because each message transmitted includes a message integrity check using a message authentication code to prevent undetected loss or alteration of the data during transmission.
In addition to the above, careful configuration of TLS can provide additional privacy-related properties such as forward secrecy, ensuring that any future disclosure of encryption keys cannot be used to decrypt any TLS communications recorded in the past.
TLS supports many different methods for exchanging keys, encrypting data, and authenticating message integrity. As a result, secure configuration of TLS involves many configurable parameters, and not all choices provide all of the privacy-related properties described in the list above (see § Key exchange or key agreement, § Cipher, and § Data integrity below).
Attempts have been made to subvert aspects of the communications security that TLS seeks to provide, and the protocol has been revised several times to address these security threats. Developers of web browsers have repeatedly revised their products to defend against potential security weaknesses after these were discovered (see TLS/SSL support history of web browsers).
History and development
Secure Data Network System
The Transport Layer Security Protocol (TLS), together with several other basic network security platforms, was developed through a joint initiative begun in August 1986, among the National Security Agency, the National Bureau of Standards, the Defense Communications Agency, and twelve communications and computer corporations who initiated a special project called the Secure Data Network System (SDNS). The program was described in September 1987 at the 10th National Computer Security Conference in an extensive set of published papers. The innovative research program focused on designing the next generation of secure computer communications network and product specifications to be implemented for applications on public and private internets. It was intended to complement the rapidly emerging new OSI internet standards moving forward both in the U.S. government's GOSIP Profiles and in the huge ITU-ISO JTC1 internet effort internationally. Originally known as the SP4 protocol, it was renamed TLS and subsequently published in 1995 as international standard ITU-T X.274 / ISO/IEC 10736:1995.
Secure Network Programming
Early research efforts towards transport layer security included the Secure Network Programming (SNP) application programming interface (API), which in 1993 explored the approach of having a secure transport layer API closely resembling Berkeley sockets, to facilitate retrofitting pre-existing network applications with security measures.
SSL 1.0, 2.0, and 3.0
Netscape developed the original SSL protocols, and Taher Elgamal, chief scientist at Netscape Communications from 1995 to 1998, has been described as the "father of SSL". SSL version 1.0 was never publicly released because of serious security flaws in the protocol. Version 2.0, after being released in February 1995, was quickly discovered to contain a number of security and usability flaws. It used the same cryptographic keys for message authentication and encryption. It had a weak MAC construction that used the MD5 hash function with a secret prefix, making it vulnerable to length extension attacks. And it provided no protection for either the opening handshake or an explicit message close, both of which meant man-in-the-middle attacks could go undetected. Moreover, SSL 2.0 assumed a single service and a fixed domain certificate, conflicting with the widely used feature of virtual hosting in Web servers, so most websites were effectively impaired from using SSL.
These flaws necessitated the complete redesign of the protocol to SSL version 3.0. Released in 1996, it was produced by Paul Kocher working with Netscape engineers Phil Karlton and Alan Freier, with a reference implementation by Christopher Allen and Tim Dierks of Consensus Development. Newer versions of SSL/TLS are based on SSL 3.0. The 1996 draft of SSL 3.0 was published by IETF as a historical document in RFC 6101.
SSL 2.0 was deprecated in 2011 by RFC 6176. In 2014, SSL 3.0 was found to be vulnerable to the POODLE attack that affects all block ciphers in SSL; RC4, the only non-block cipher supported by SSL 3.0, is also feasibly broken as used in SSL 3.0. SSL 3.0 was deprecated in June 2015 by RFC 7568.
TLS 1.0
TLS 1.0 was first defined in RFC 2246 in January 1999 as an upgrade of SSL Version 3.0, and written by Christopher Allen and Tim Dierks of Consensus Development. As stated in the RFC, "the differences between this protocol and SSL 3.0 are not dramatic, but they are significant enough to preclude interoperability between TLS 1.0 and SSL 3.0". Tim Dierks later wrote that these changes, and the renaming from "SSL" to "TLS", were a face-saving gesture to Microsoft, "so it wouldn't look [like] the IETF was just rubberstamping Netscape's protocol".
The PCI Council suggested that organizations migrate from TLS 1.0 to TLS 1.1 or higher before June 30, 2018. In October 2018, Apple, Google, Microsoft, and Mozilla jointly announced they would deprecate TLS 1.0 and 1.1 in March 2020.
TLS 1.1
TLS 1.1 was defined in RFC 4346 in April 2006. It is an update from TLS version 1.0. Significant differences in this version include:
Added protection against cipher-block chaining (CBC) attacks.
The implicit initialization vector (IV) was replaced with an explicit IV.
Change in handling of padding errors.
Support for IANA registration of parameters.
Support for TLS versions 1.0 and 1.1 was widely deprecated by web sites around 2020, disabling access to Firefox versions before 24 and Chromium-based browsers before 29.
TLS 1.2
TLS 1.2 was defined in RFC 5246 in August 2008. It is based on the earlier TLS 1.1 specification. Major differences include:
The MD5–SHA-1 combination in the pseudorandom function (PRF) was replaced with SHA-256, with an option to use cipher suite specified PRFs.
The MD5–SHA-1 combination in the finished message hash was replaced with SHA-256, with an option to use cipher suite specific hash algorithms. However, the size of the hash in the finished message must still be at least 96 bits.
The MD5–SHA-1 combination in the digitally signed element was replaced with a single hash negotiated during handshake, which defaults to SHA-1.
Enhancement in the client's and server's ability to specify which hashes and signature algorithms they accept.
Expansion of support for authenticated encryption ciphers, used mainly for Galois/Counter Mode (GCM) and CCM mode of Advanced Encryption Standard (AES) encryption.
TLS Extensions definition and AES cipher suites were added.
All TLS versions were further refined in RFC 6176 in March 2011, removing their backward compatibility with SSL such that TLS sessions never negotiate the use of Secure Sockets Layer (SSL) version 2.0.
TLS 1.3
TLS 1.3 was defined in RFC 8446 in August 2018. It is based on the earlier TLS 1.2 specification. Major differences from TLS 1.2 include:
Separating key agreement and authentication algorithms from the cipher suites
Removing support for weak and less-used named elliptic curves
Removing support for MD5 and SHA-224 cryptographic hash functions
Requiring digital signatures even when a previous configuration is used
Integrating HKDF and the semi-ephemeral DH proposal
Replacing resumption with PSK and tickets
Supporting 1-RTT handshakes and initial support for 0-RTT
Mandating perfect forward secrecy, by means of using ephemeral keys during the (EC)DH key agreement
Dropping support for many insecure or obsolete features including compression, renegotiation, non-AEAD ciphers, non-PFS key exchange (among which are static RSA and static DH key exchanges), custom DHE groups, EC point format negotiation, Change Cipher Spec protocol, Hello message UNIX time, and the length field AD input to AEAD ciphers
Prohibiting SSL or RC4 negotiation for backwards compatibility
Integrating use of session hash
Deprecating use of the record layer version number and freezing the number for improved backwards compatibility
Moving some security-related algorithm details from an appendix to the specification and relegating ClientKeyShare to an appendix
Adding the ChaCha20 stream cipher with the Poly1305 message authentication code
Adding the Ed25519 and Ed448 digital signature algorithms
Adding the x25519 and x448 key exchange protocols
Adding support for sending multiple OCSP responses
Encrypting all handshake messages after the ServerHello
Network Security Services (NSS), the cryptography library developed by Mozilla and used by its web browser Firefox, enabled TLS 1.3 by default in February 2017. TLS 1.3 support was subsequently added — but due to compatibility issues for a small number of users, not automatically enabled — to Firefox 52.0, which was released in March 2017. TLS 1.3 was enabled by default in May 2018 with the release of Firefox 60.0.
Google Chrome set TLS 1.3 as the default version for a short time in 2017. It then removed it as the default, due to incompatible middleboxes such as Blue Coat web proxies.
During the IETF 100 Hackathon which took place in Singapore in 2017, The TLS Group worked on adapting open-source applications to use TLS 1.3. The TLS group was made up of individuals from Japan, United Kingdom, and Mauritius via the cyberstorm.mu team. This work was continued in the IETF 101 Hackathon in London, and the IETF 102 Hackathon in Montreal.
wolfSSL enabled the use of TLS 1.3 as of version 3.11.1, released in May 2017. As the first commercial TLS 1.3 implementation, wolfSSL 3.11.1 supported Draft 18 and now supports Draft 28, the final version, as well as many older versions. A series of blogs were published on the performance difference between TLS 1.2 and 1.3.
In September 2018, the popular OpenSSL project released version 1.1.1 of its library, in which support for TLS 1.3 was "the headline new feature".
Support for TLS 1.3 was first added to SChannel with Windows 11 and Windows Server 2022.
Enterprise Transport Security
The Electronic Frontier Foundation praised TLS 1.3 and expressed concern about the variant protocol Enterprise Transport Security (ETS) that intentionally disables important security measures in TLS 1.3. Originally called Enterprise TLS (eTLS), ETS is a published standard known as the 'ETSI TS103523-3', "Middlebox Security Protocol, Part3: Enterprise Transport Security". It is intended for use entirely within proprietary networks such as banking systems. ETS does not support forward secrecy so as to allow third-party organizations connected to the proprietary networks to be able to use their private key to monitor network traffic for the detection of malware and to make it easier to conduct audits. Despite the claimed benefits, the EFF warned that the loss of forward secrecy could make it easier for data to be exposed along with saying that there are better ways to analyze traffic.
Digital certificates
A digital certificate certifies the ownership of a public key by the named subject of the certificate, and indicates certain expected usages of that key. This allows others (relying parties) to rely upon signatures or on assertions made by the private key that corresponds to the certified public key. Keystores and trust stores can be in various formats, such as .pem, .crt, .pfx, and .jks.
Certificate authorities
TLS typically relies on a set of trusted third-party certificate authorities to establish the authenticity of certificates. Trust is usually anchored in a list of certificates distributed with user agent software, and can be modified by the relying party.
According to Netcraft, who monitors active TLS certificates, the market-leading certificate authority (CA) has been Symantec since the beginning of their survey (or VeriSign before the authentication services business unit was purchased by Symantec). As of 2015, Symantec accounted for just under a third of all certificates and 44% of the valid certificates used by the 1 million busiest websites, as counted by Netcraft. In 2017, Symantec sold its TLS/SSL business to DigiCert. In an updated report, it was shown that IdenTrust, DigiCert, and Sectigo are the top 3 certificate authorities in terms of market share since May 2019.
As a consequence of choosing X.509 certificates, certificate authorities and a public key infrastructure are necessary to verify the relation between a certificate and its owner, as well as to generate, sign, and administer the validity of certificates. While this can be more convenient than verifying the identities via a web of trust, the 2013 mass surveillance disclosures made it more widely known that certificate authorities are a weak point from a security standpoint, allowing man-in-the-middle attacks (MITM) if the certificate authority cooperates (or is compromised).
Algorithms
Key exchange or key agreement
Before a client and server can begin to exchange information protected by TLS, they must securely exchange or agree upon an encryption key and a cipher to use when encrypting data (see § Cipher below). Among the methods used for key exchange/agreement are: public and private keys generated with RSA (denoted TLS_RSA in the TLS handshake protocol), Diffie–Hellman (TLS_DH), ephemeral Diffie–Hellman (TLS_DHE), elliptic-curve Diffie–Hellman (TLS_ECDH), ephemeral elliptic-curve Diffie–Hellman (TLS_ECDHE), anonymous Diffie–Hellman (TLS_DH_anon), pre-shared key (TLS_PSK) and Secure Remote Password (TLS_SRP).
The TLS_DH_anon and TLS_ECDH_anon key agreement methods do not authenticate the server or the user and hence are rarely used, because they are vulnerable to man-in-the-middle attacks. Only TLS_DHE and TLS_ECDHE provide forward secrecy.
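On OpenSSL-backed stacks, a server can be restricted to the forward-secret variants. A minimal sketch with Python's ssl module follows; the certificate and key paths are hypothetical, and the cipher string uses OpenSSL's syntax.

```python
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")   # hypothetical paths

# Limit TLS 1.2 suites to ephemeral (EC)DHE key exchange so that every
# session has forward secrecy. TLS 1.3 suites are controlled separately
# and always use ephemeral key exchange.
context.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")
context.minimum_version = ssl.TLSVersion.TLSv1_2
```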
Public key certificates used during exchange/agreement also vary in the size of the public/private encryption keys used during the exchange and hence the robustness of the security provided. In July 2013, Google announced that it would no longer use 1024-bit public keys and would switch instead to 2048-bit keys to increase the security of the TLS encryption it provides to its users because the encryption strength is directly related to the key size.
Cipher
Data integrity
A message authentication code (MAC) is used for data integrity. HMAC is used for the CBC mode of block ciphers. Authenticated encryption (AEAD), such as GCM mode and CCM mode, uses an AEAD-integrated MAC and does not use HMAC. The HMAC-based PRF, or HKDF, is used for the TLS handshake.
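The HMAC primitive itself is available in Python's standard library. The sketch below shows tagging and verifying a record; it illustrates the primitive only, not TLS's exact record layout or key derivation.

```python
import hashlib
import hmac

mac_key = b"per-direction MAC key from the handshake"   # illustrative value
record = b"application data carried in one TLS record"

tag = hmac.new(mac_key, record, hashlib.sha256).digest()

# The receiver recomputes the tag over the received record and compares
# in constant time; any modification of the record changes the tag.
expected = hmac.new(mac_key, record, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```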
Applications and adoption
In applications design, TLS is usually implemented on top of Transport Layer protocols, encrypting all of the protocol-related data of protocols such as HTTP, FTP, SMTP, NNTP and XMPP.
Historically, TLS has been used primarily with reliable transport protocols such as the Transmission Control Protocol (TCP). However, it has also been implemented with datagram-oriented transport protocols, such as the User Datagram Protocol (UDP) and the Datagram Congestion Control Protocol (DCCP), usage of which has been standardized independently using the term Datagram Transport Layer Security (DTLS).
Websites
A primary use of TLS is to secure World Wide Web traffic between a website and a web browser encoded with the HTTP protocol. This use of TLS to secure HTTP traffic constitutes the HTTPS protocol.
Web browsers
The latest versions of all major web browsers support TLS 1.0, 1.1, and 1.2, and have them enabled by default. However, not all supported Microsoft operating systems support the latest version of IE. Additionally, many Microsoft operating systems currently support multiple versions of IE, but this has changed according to Microsoft's Internet Explorer Support Lifecycle Policy FAQ: "beginning January 12, 2016, only the most current version of Internet Explorer available for a supported operating system will receive technical support and security updates." The page then lists the latest supported version of IE at that date for each operating system. The next critical date is when an operating system reaches its end-of-life stage, as listed in Microsoft's Windows lifecycle fact sheet.
Browser mitigations against known attacks are not yet sufficient:
Mitigations against POODLE attack: some browsers already prevent fallback to SSL 3.0; however, this mitigation needs to be supported by not only clients but also servers. Disabling SSL 3.0 itself, implementation of "anti-POODLE record splitting", or denying CBC ciphers in SSL 3.0 is required.
Google Chrome: complete (TLS_FALLBACK_SCSV is implemented since version 33, fallback to SSL 3.0 is disabled since version 39, SSL 3.0 itself is disabled by default since version 40. Support of SSL 3.0 itself was dropped since version 44.)
Mozilla Firefox: complete (support of SSL 3.0 itself is dropped since version 39. SSL 3.0 itself is disabled by default and fallback to SSL 3.0 is disabled since version 34; TLS_FALLBACK_SCSV is implemented since version 35. In ESR, SSL 3.0 itself is disabled by default and TLS_FALLBACK_SCSV is implemented since ESR 31.3.)
Internet Explorer: partial (only in version 11, SSL 3.0 is disabled by default since April 2015. Version 10 and older are still vulnerable against POODLE.)
Opera: complete (TLS_FALLBACK_SCSV is implemented since version 20, "anti-POODLE record splitting", which is effective only with client-side implementation, is implemented since version 25, SSL 3.0 itself is disabled by default since version 27. Support of SSL 3.0 itself was dropped in version 31.)
Safari: complete (only on OS X 10.8 and later and iOS 8, CBC ciphers during fallback to SSL 3.0 are denied, but this means it will use RC4, which is also not recommended. Support of SSL 3.0 itself is dropped on OS X 10.11 and later and iOS 9.)
Mitigation against RC4 attacks:
Google Chrome disabled RC4 except as a fallback since version 43. RC4 is disabled since Chrome 48.
Firefox disabled RC4 except as a fallback since version 36. Firefox 44 disabled RC4 by default.
Opera disabled RC4 except as a fallback since version 30. RC4 is disabled since Opera 35.
Internet Explorer for Windows 7 / Server 2008 R2 and for Windows 8 / Server 2012 have set the priority of RC4 to lowest and can also disable RC4 except as a fallback through registry settings. Internet Explorer 11 Mobile for Windows Phone 8.1 disables RC4 except as a fallback if no other enabled algorithm works. Edge and IE 11 disabled RC4 completely in August 2016.
Mitigation against FREAK attack:
The Android Browser included with Android 4.0 and older is still vulnerable to the FREAK attack.
Internet Explorer 11 Mobile is still vulnerable to the FREAK attack.
Google Chrome, Internet Explorer (desktop), Safari (desktop & mobile), and Opera (mobile) have FREAK mitigations in place.
Mozilla Firefox on all platforms and Google Chrome on Windows were not affected by FREAK.
Libraries
Most SSL and TLS programming libraries are free and open source software.
BoringSSL, a fork of OpenSSL for Chrome/Chromium and Android as well as other Google applications.
Botan, a BSD-licensed cryptographic library written in C++.
BSAFE Micro Edition Suite: a multi-platform implementation of TLS written in C using a FIPS-validated cryptographic module
BSAFE SSL-J: a TLS library providing both a proprietary API and JSSE API, using FIPS-validated cryptographic module
cryptlib: a portable open source cryptography library (includes TLS/SSL implementation)
Delphi programmers may use a library called Indy, which utilizes OpenSSL, or alternatively ICS, which now supports TLS 1.3.
GnuTLS: a free implementation (LGPL licensed)
Java Secure Socket Extension (JSSE): the Java API and provider implementation (named SunJSSE)
LibreSSL: a fork of OpenSSL by OpenBSD project.
MatrixSSL: a dual licensed implementation
mbed TLS (previously PolarSSL): A tiny SSL library implementation for embedded devices that is designed for ease of use
Network Security Services: FIPS 140 validated open source library
OpenSSL: a free implementation (BSD license with some extensions)
SChannel: an implementation of SSL and TLS included with Microsoft Windows as part of its security package.
Secure Transport: an implementation of SSL and TLS used in OS X and iOS as part of their security packages.
wolfSSL (previously CyaSSL): Embedded SSL/TLS Library with a strong focus on speed and size.
A paper presented at the 2012 ACM conference on computer and communications security showed that few applications used some of these SSL libraries correctly, leading to vulnerabilities. According to the authors:
"the root cause of most of these vulnerabilities is the terrible design of the APIs to the underlying SSL libraries. Instead of expressing high-level security properties of network tunnels such as confidentiality and authentication, these APIs expose low-level details of the SSL protocol to application developers. As a consequence, developers often use SSL APIs incorrectly, misinterpreting and misunderstanding their manifold parameters, options, side effects, and return values."
Other uses
The Simple Mail Transfer Protocol (SMTP) can also be protected by TLS. These applications use public key certificates to verify the identity of endpoints.
TLS can also be used for tunnelling an entire network stack to create a VPN, which is the case with OpenVPN and OpenConnect. Many vendors have by now married TLS's encryption and authentication capabilities with authorization. There has also been substantial development since the late 1990s in creating client technology outside of Web-browsers, in order to enable support for client/server applications. Compared to traditional IPsec VPN technologies, TLS has some inherent advantages in firewall and NAT traversal that make it easier to administer for large remote-access populations.
TLS is also a standard method for protecting Session Initiation Protocol (SIP) application signaling. TLS can be used for providing authentication and encryption of the SIP signalling associated with VoIP and other SIP-based applications.
Security
Attacks against TLS/SSL
Significant attacks against TLS/SSL are listed below.
In February 2015, IETF issued an informational RFC summarizing the various known attacks against TLS/SSL.
Renegotiation attack
A vulnerability of the renegotiation procedure was discovered in August 2009 that can lead to plaintext injection attacks against SSL 3.0 and all current versions of TLS. For example, it allows an attacker who can hijack an https connection to splice their own requests into the beginning of the conversation the client has with the web server. The attacker can't actually decrypt the client–server communication, so it is different from a typical man-in-the-middle attack. A short-term fix is for web servers to stop allowing renegotiation, which typically will not require other changes unless client certificate authentication is used. To fix the vulnerability, a renegotiation indication extension was proposed for TLS. It requires the client and server to include and verify information about previous handshakes in any renegotiation handshakes. This extension has become a proposed standard and has been assigned the number RFC 5746. The RFC has been implemented by several libraries.
Downgrade attacks: FREAK attack and Logjam attack
A protocol downgrade attack (also called a version rollback attack) tricks a web server into negotiating connections with previous versions of TLS (such as SSLv2) that have long since been abandoned as insecure.
Previous modifications to the original protocols, like False Start (adopted and enabled by Google Chrome) or Snap Start, reportedly introduced limited TLS protocol downgrade attacks or allowed modifications to the cipher suite list sent by the client to the server. In doing so, an attacker might succeed in influencing the cipher suite selection in an attempt to downgrade the cipher suite negotiated to use either a weaker symmetric encryption algorithm or a weaker key exchange. A paper presented at an ACM conference on computer and communications security in 2012 demonstrated that the False Start extension was at risk: in certain circumstances it could allow an attacker to recover the encryption keys offline and to access the encrypted data.
Encryption downgrade attacks can force servers and clients to negotiate a connection using cryptographically weak keys. In 2014, a man-in-the-middle attack called FREAK was discovered affecting the OpenSSL stack, the default Android web browser, and some Safari browsers. The attack involved tricking servers into negotiating a TLS connection using cryptographically weak 512-bit encryption keys.
Logjam is a security exploit discovered in May 2015 that exploits the option of using legacy "export-grade" 512-bit Diffie–Hellman groups dating back to the 1990s. It forces susceptible servers to downgrade to cryptographically weak 512-bit Diffie–Hellman groups. An attacker can then deduce the keys the client and server determine using the Diffie–Hellman key exchange.
Cross-protocol attacks: DROWN
The DROWN attack is an exploit that attacks servers supporting contemporary SSL/TLS protocol suites by exploiting their support for the obsolete, insecure, SSLv2 protocol to leverage an attack on connections using up-to-date protocols that would otherwise be secure. DROWN exploits a vulnerability in the protocols used and the configuration of the server, rather than any specific implementation error. Full details of DROWN were announced in March 2016, together with a patch for the exploit. At that time, more than 81,000 of the top 1 million most popular websites were among the TLS protected websites that were vulnerable to the DROWN attack.
BEAST attack
On September 23, 2011 researchers Thai Duong and Juliano Rizzo demonstrated a proof of concept called BEAST (Browser Exploit Against SSL/TLS) using a Java applet to violate same origin policy constraints, for a long-known cipher block chaining (CBC) vulnerability in TLS 1.0: an attacker observing 2 consecutive ciphertext blocks C0, C1 can test if the plaintext block P1 is equal to x by choosing the next plaintext block P2 = x ⊕ C0 ⊕ C1; as per CBC operation, C2 = E(C1 ⊕ P2) = E(C1 ⊕ x ⊕ C0 ⊕ C1) = E(C0 ⊕ x), which will be equal to C1 if x = P1. Practical exploits had not been previously demonstrated for this vulnerability, which was originally discovered by Phillip Rogaway in 2002. The vulnerability exploited by the attack had been fixed with TLS 1.1 in 2006, but TLS 1.1 had not seen wide adoption prior to this attack demonstration.
RC4 as a stream cipher is immune to the BEAST attack. Therefore, RC4 was widely used as a way to mitigate the BEAST attack on the server side. However, in 2013, researchers found more weaknesses in RC4. Thereafter, enabling RC4 on the server side was no longer recommended.
Chrome and Firefox themselves are not vulnerable to the BEAST attack; however, Mozilla updated their NSS libraries to mitigate BEAST-like attacks. NSS is used by Mozilla Firefox and Google Chrome to implement SSL. Some web servers that have a broken implementation of the SSL specification may stop working as a result.
Microsoft released Security Bulletin MS12-006 on January 10, 2012, which fixed the BEAST vulnerability by changing the way that the Windows Secure Channel (SChannel) component transmits encrypted network packets from the server end. Users of Internet Explorer (prior to version 11) that run on older versions of Windows (Windows 7, Windows 8 and Windows Server 2008 R2) can restrict use of TLS to 1.1 or higher.
Apple fixed BEAST vulnerability by implementing 1/n-1 split and turning it on by default in OS X Mavericks, released on October 22, 2013.
CRIME and BREACH attacks
The authors of the BEAST attack are also the creators of the later CRIME attack, which can allow an attacker to recover the content of web cookies when data compression is used along with TLS. When used to recover the content of secret authentication cookies, it allows an attacker to perform session hijacking on an authenticated web session.
While the CRIME attack was presented as a general attack that could work effectively against a large number of protocols, including but not limited to TLS, and application-layer protocols such as SPDY or HTTP, only exploits against TLS and SPDY were demonstrated and largely mitigated in browsers and servers. The CRIME exploit against HTTP compression has not been mitigated at all, even though the authors of CRIME have warned that this vulnerability might be even more widespread than SPDY and TLS compression combined. In 2013 a new instance of the CRIME attack against HTTP compression, dubbed BREACH, was announced. Based on the CRIME attack, a BREACH attack can extract login tokens, email addresses or other sensitive information from TLS encrypted web traffic in as little as 30 seconds (depending on the number of bytes to be extracted), provided the attacker tricks the victim into visiting a malicious web link or is able to inject content into valid pages the user is visiting (e.g., a wireless network under the control of the attacker). All versions of TLS and SSL are at risk from BREACH regardless of the encryption algorithm or cipher used. Unlike previous instances of CRIME, which can be successfully defended against by turning off TLS compression or SPDY header compression, BREACH exploits HTTP compression, which cannot realistically be turned off, as virtually all web servers rely upon it to improve data transmission speeds for users. This is a known limitation of TLS, as it is susceptible to chosen-plaintext attack against the application-layer data it was meant to protect.
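The compression side channel underlying CRIME and BREACH is easy to observe with zlib: when attacker-controlled input repeats a secret already present in the stream, the compressed output shrinks. A toy sketch follows; the secret and guesses are invented for illustration.

```python
import zlib

# Response body containing a secret token (invented for this example).
secret_page = b"<html>...csrf_token=7f3a9c21d8...</html>"

def compressed_len(reflected_guess: bytes) -> int:
    # Attacker-controlled reflection compressed together with the secret.
    return len(zlib.compress(reflected_guess + secret_page))

# A guess that matches the secret typically compresses better (shorter
# output) than one that does not, leaking the token byte by byte.
print(compressed_len(b"csrf_token=7f3a9c"))   # matching prefix
print(compressed_len(b"csrf_token=XXXXXX"))   # non-matching prefix
```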
Timing attacks on padding
Earlier TLS versions were vulnerable against the padding oracle attack discovered in 2002. A novel variant, called the Lucky Thirteen attack, was published in 2013.
Some experts also recommended avoiding Triple-DES CBC. Since the last supported ciphers available to programs using Windows XP's SSL/TLS library, such as Internet Explorer on Windows XP, are RC4 and Triple-DES, and since RC4 is now deprecated (see the discussion of RC4 attacks below), this makes it difficult to support any secure version of SSL for any program using this library on XP.
A fix was published as the Encrypt-then-MAC extension to the TLS specification, released as RFC 7366. The Lucky Thirteen attack can be mitigated in TLS 1.2 by using only AES_GCM ciphers; AES_CBC remains vulnerable.
POODLE attack
On October 14, 2014, Google researchers published a vulnerability in the design of SSL 3.0, which makes CBC mode of operation with SSL 3.0 vulnerable to a padding attack (CVE-2014-3566). They named this attack POODLE (Padding Oracle On Downgraded Legacy Encryption). On average, attackers only need to make 256 SSL 3.0 requests to reveal one byte of encrypted messages.
Although this vulnerability only exists in SSL 3.0 and most clients and servers support TLS 1.0 and above, all major browsers voluntarily downgrade to SSL 3.0 if the handshakes with newer versions of TLS fail unless they provide the option for a user or administrator to disable SSL 3.0 and the user or administrator does so. Therefore, the man-in-the-middle can first conduct a version rollback attack and then exploit this vulnerability.
On December 8, 2014, a variant of POODLE was announced that impacts TLS implementations that do not properly enforce padding byte requirements.
RC4 attacks
Despite the existence of attacks on RC4 that broke its security, cipher suites in SSL and TLS that were based on RC4 were still considered secure prior to 2013 based on the way in which they were used in SSL and TLS. In 2011, the RC4 suite was actually recommended as a work around for the BEAST attack. New forms of attack disclosed in March 2013 conclusively demonstrated the feasibility of breaking RC4 in TLS, suggesting it was not a good workaround for BEAST. An attack scenario was proposed by AlFardan, Bernstein, Paterson, Poettering and Schuldt that used newly discovered statistical biases in the RC4 key table to recover parts of the plaintext with a large number of TLS encryptions. An attack on RC4 in TLS and SSL that requires 13 × 2^20 encryptions to break RC4 was unveiled on 8 July 2013 and later described as "feasible" in the accompanying presentation at a USENIX Security Symposium in August 2013. In July 2015, subsequent improvements in the attack make it increasingly practical to defeat the security of RC4-encrypted TLS.
As many modern browsers have been designed to defeat BEAST attacks (except Safari for Mac OS X 10.7 or earlier, for iOS 6 or earlier, and for Windows; see § Web browsers above), RC4 is no longer a good choice for TLS 1.0. The CBC ciphers which were affected by the BEAST attack in the past have become a more popular choice for protection. Mozilla and Microsoft recommend disabling RC4 where possible. RFC 7465 prohibits the use of RC4 cipher suites in all versions of TLS.
On September 1, 2015, Microsoft, Google and Mozilla announced that RC4 cipher suites would be disabled by default in their browsers (Microsoft Edge, Internet Explorer 11 on Windows 7/8.1/10, Firefox, and Chrome) in early 2016.
Truncation attack
A TLS (logout) truncation attack blocks a victim's account logout requests so that the user unknowingly remains logged into a web service. When the request to sign out is sent, the attacker injects an unencrypted TCP FIN message (no more data from sender) to close the connection. The server therefore doesn't receive the logout request and is unaware of the abnormal termination.
Published in July 2013, the attack causes web services such as Gmail and Hotmail to display a page that informs the user that they have successfully signed out, while ensuring that the user's browser maintains authorization with the service, allowing an attacker with subsequent access to the browser to access and take over control of the user's logged-in account. The attack does not rely on installing malware on the victim's computer; attackers need only place themselves between the victim and the web server (e.g., by setting up a rogue wireless hotspot), along with gaining subsequent access to the victim's browser.
Another possibility is that, when using FTP, the data connection can have a false FIN injected into the data stream, and if the protocol rules for exchanging close_notify alerts are not adhered to, a file can be truncated.
Unholy PAC attack
This attack, discovered in mid-2016, exploits weaknesses in the Web Proxy Autodiscovery Protocol (WPAD) to expose the URL that a web user is attempting to reach via a TLS-enabled web link. Disclosure of a URL can violate a user's privacy, not only because of the website accessed, but also because URLs are sometimes used to authenticate users. Document sharing services, such as those offered by Google and Dropbox, also work by sending a user a security token that's included in the URL. An attacker who obtains such URLs may be able to gain full access to a victim's account or data.
The exploit works against almost all browsers and operating systems.
Sweet32 attack
The Sweet32 attack breaks all 64-bit block ciphers used in CBC mode as used in TLS by exploiting a birthday attack and either a man-in-the-middle attack or injection of a malicious JavaScript into a web page. The purpose of the man-in-the-middle attack or the JavaScript injection is to allow the attacker to capture enough traffic to mount a birthday attack.
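The birthday bound behind Sweet32 is simple arithmetic: with 64-bit blocks, a ciphertext-block collision becomes likely after roughly 2^32 blocks, an amount of traffic a long-lived connection can realistically carry.

```python
block_bits = 64
blocks_until_collision = 2 ** (block_bits // 2)   # birthday bound: 2**32
traffic_bytes = blocks_until_collision * 8        # 8 bytes per 64-bit block

print(f"~{blocks_until_collision:,} blocks")                 # ~4.3 billion
print(f"~{traffic_bytes / 2**30:.0f} GiB of ciphertext")     # ~32 GiB
```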
Implementation errors: Heartbleed bug, BERserk attack, Cloudflare bug
The Heartbleed bug is a serious vulnerability specific to the implementation of SSL/TLS in the popular OpenSSL cryptographic software library, affecting versions 1.0.1 to 1.0.1f. This weakness, reported in April 2014, allows attackers to steal private keys from servers that should normally be protected. The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret private keys associated with the public certificates used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users. The vulnerability is caused by a buffer over-read bug in the OpenSSL software, rather than a defect in the SSL or TLS protocol specification.
In September 2014, a variant of Daniel Bleichenbacher's PKCS#1 v1.5 RSA Signature Forgery vulnerability was announced by Intel Security Advanced Threat Research. This attack, dubbed BERserk, is a result of incomplete ASN.1 length decoding of public key signatures in some SSL implementations, and allows a man-in-the-middle attack by forging a public key signature.
In February 2015, after media reported the hidden pre-installation of Superfish adware on some Lenovo notebooks, a researcher found a trusted root certificate on affected Lenovo machines to be insecure, as the keys could easily be accessed using the company name, Komodia, as a passphrase. The Komodia library was designed to intercept client-side TLS/SSL traffic for parental control and surveillance, but it was also used in numerous adware programs, including Superfish, that were often installed unbeknownst to the computer user. In turn, these potentially unwanted programs installed the corrupt root certificate, allowing attackers to completely control web traffic and confirm false websites as authentic.
In May 2016, it was reported that dozens of Danish HTTPS-protected websites belonging to Visa Inc. were vulnerable to attacks allowing hackers to inject malicious code and forged content into the browsers of visitors. The attacks worked because the TLS implementation used on the affected servers incorrectly reused random numbers (nonces) that are intended to be used only once, ensuring that each TLS handshake is unique.
In February 2017, an implementation error caused by a single mistyped character in code used to parse HTML created a buffer overflow error on Cloudflare servers. Similar in its effects to the Heartbleed bug discovered in 2014, this overflow error, widely known as Cloudbleed, allowed unauthorized third parties to read data in the memory of programs running on the servers—data that should otherwise have been protected by TLS.
Survey of websites vulnerable to attacks
The Trustworthy Internet Movement has estimated the ratio of websites that are vulnerable to TLS attacks.
Forward secrecy
Forward secrecy is a property of cryptographic systems which ensures that a session key derived from a set of public and private keys will not be compromised if one of the private keys is compromised in the future. Without forward secrecy, if the server's private key is compromised, not only will all future TLS-encrypted sessions using that server certificate be compromised, but also any past sessions that used it (provided, of course, that these past sessions were intercepted and stored at the time of transmission). An implementation of TLS can provide forward secrecy by requiring the use of ephemeral Diffie–Hellman key exchange to establish session keys, and some notable TLS implementations do so exclusively: e.g., Gmail and other Google HTTPS services that use OpenSSL. However, many clients and servers supporting TLS (including browsers and web servers) are not configured to implement such restrictions. In practice, unless a web service uses ephemeral Diffie–Hellman key exchange to implement forward secrecy, all of the encrypted web traffic to and from that service can be decrypted by a third party if it obtains the server's master (private) key; e.g., by means of a court order.
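A minimal sketch of the idea, using the third-party Python 'cryptography' package: each session derives its key from freshly generated ephemeral X25519 key pairs that are discarded after the handshake, so the server's long-term certificate key never protects the traffic itself (the info label is an arbitrary placeholder):

```python
# Sketch of ephemeral Diffie-Hellman key agreement for forward secrecy.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

def new_session_key() -> bytes:
    client_eph = X25519PrivateKey.generate()   # discarded after the handshake
    server_eph = X25519PrivateKey.generate()   # discarded after the handshake
    shared = client_eph.exchange(server_eph.public_key())
    # Derive a 256-bit session key from the ephemeral shared secret
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"tls-like session key").derive(shared)

# Each session gets an independent key; compromising one key (or the server's
# long-term signing key) reveals nothing about other sessions.
assert new_session_key() != new_session_key()
```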
Even where Diffie–Hellman key exchange is implemented, server-side session management mechanisms can impact forward secrecy. The use of TLS session tickets (a TLS extension) causes the session to be protected by AES128-CBC-SHA256 regardless of any other negotiated TLS parameters, including forward secrecy ciphersuites, and the long-lived TLS session ticket keys defeat the attempt to implement forward secrecy. Stanford University research in 2014 also found that of 473,802 TLS servers surveyed, 82.9% of the servers deploying ephemeral Diffie–Hellman (DHE) key exchange to support forward secrecy were using weak Diffie–Hellman parameters. These weak parameter choices could potentially compromise the effectiveness of the forward secrecy that the servers sought to provide.
Since late 2011, Google has provided forward secrecy with TLS by default to users of its Gmail service, along with Google Docs and encrypted search, among other services.
Since November 2013, Twitter has provided forward secrecy with TLS to users of its service, and about 80% of TLS-enabled websites are now configured to use cipher suites that provide forward secrecy to most web browsers.
TLS interception
TLS interception (or HTTPS interception if applied particularly to that protocol) is the practice of intercepting an encrypted data stream in order to decrypt it, read and possibly manipulate it, and then re-encrypt it and send the data on its way again. This is done by way of a "transparent proxy": the interception software terminates the incoming TLS connection, inspects the HTTP plaintext, and then creates a new TLS connection to the destination.
TLS / HTTPS interception is used as an information security measure by network operators in order to be able to scan for and protect against the intrusion of malicious content into the network, such as computer viruses and other malware. Such content could otherwise not be detected as long as it is protected by encryption, which is increasingly the case as a result of the routine use of HTTPS and other secure protocols.
A significant drawback of TLS / HTTPS interception is that it introduces new security risks of its own. One notable limitation is that it provides a point where network traffic is available unencrypted thus giving attackers an incentive to attack this point in particular in order to gain access to otherwise secure content. The interception also allows the network operator, or persons who gain access to its interception system, to perform man-in-the-middle attacks against network users. A 2017 study found that "HTTPS interception has become startlingly widespread, and that interception products as a class have a dramatically negative impact on connection security".
Protocol details
The TLS protocol exchanges records, which encapsulate the data to be exchanged in a specific format (see below). Each record can be compressed, padded, appended with a message authentication code (MAC), or encrypted, all depending on the state of the connection. Each record has a content type field that designates the type of data encapsulated, a length field and a TLS version field. The data encapsulated may be control or procedural messages of TLS itself, or simply the application data to be transferred by TLS. The specifications (cipher suite, keys, etc.) required to exchange application data by TLS are agreed upon in the "TLS handshake" between the client requesting the data and the server responding to requests. The protocol therefore defines both the structure of payloads transferred in TLS and the procedure to establish and monitor the transfer.
TLS handshake
When the connection starts, the record encapsulates a "control" protocol – the handshake messaging protocol (content type 22). This protocol is used to exchange all the information required by both sides for the exchange of the actual application data by TLS. It defines the format of messages and the order of their exchange. These may vary according to the demands of the client and server – i.e., there are several possible procedures to set up the connection. This initial exchange results in a successful TLS connection (both parties ready to transfer application data with TLS) or an alert message (as specified below).
Basic TLS handshake
A typical connection example follows, illustrating a handshake where the server (but not the client) is authenticated by its certificate:
Negotiation phase:
A client sends a ClientHello message specifying the highest TLS protocol version it supports, a random number, a list of suggested cipher suites and suggested compression methods. If the client is attempting to perform a resumed handshake, it may send a session ID. If the client can use Application-Layer Protocol Negotiation, it may include a list of supported application protocols, such as HTTP/2.
The server responds with a ServerHello message, containing the chosen protocol version, a random number, cipher suite and compression method from the choices offered by the client. To confirm or allow resumed handshakes the server may send a session ID. The chosen protocol version should be the highest that both the client and server support. For example, if the client supports TLS version 1.1 and the server supports version 1.2, version 1.1 should be selected; version 1.2 should not be selected.
The server sends its Certificate message (depending on the selected cipher suite, this may be omitted by the server).
The server sends its ServerKeyExchange message (depending on the selected cipher suite, this may be omitted by the server). This message is sent for all DHE, ECDHE and DH_anon cipher suites.
The server sends a ServerHelloDone message, indicating it is done with handshake negotiation.
The client responds with a ClientKeyExchange message, which may contain a PreMasterSecret, public key, or nothing. (Again, this depends on the selected cipher.) This PreMasterSecret is encrypted using the public key of the server certificate.
The client and server then use the random numbers and PreMasterSecret to compute a common secret, called the "master secret". All other key data (session keys such as IV, symmetric encryption key, MAC key) for this connection is derived from this master secret (and the client- and server-generated random values), which is passed through a carefully designed pseudorandom function.
The client now sends a ChangeCipherSpec record, essentially telling the server, "Everything I tell you from now on will be authenticated (and encrypted if encryption parameters were present in the server certificate)." The ChangeCipherSpec is itself a record-level protocol with content type of 20.
The client sends an authenticated and encrypted Finished message, containing a hash and MAC over the previous handshake messages.
The server will attempt to decrypt the client's Finished message and verify the hash and MAC. If the decryption or verification fails, the handshake is considered to have failed and the connection should be torn down.
Finally, the server sends a ChangeCipherSpec, telling the client, "Everything I tell you from now on will be authenticated (and encrypted, if encryption was negotiated)."
The server sends its authenticated and encrypted Finished message.
The client performs the same decryption and verification procedure as the server did in the previous step.
Application phase: at this point, the "handshake" is complete and the application protocol is enabled, with content type of 23. Application messages exchanged between client and server will also be authenticated and optionally encrypted exactly like in their Finished message.
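The sequence above is normally driven entirely by a TLS library. As a hedged illustration using Python's standard ssl module (the host name is a placeholder), the whole negotiation happens inside wrap_socket(), after which the application can inspect what was agreed:

```python
# Sketch of a server-authenticated TLS handshake from the application's side.
import socket, ssl

context = ssl.create_default_context()   # validates the server's certificate

with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        # The ClientHello/ServerHello/Certificate/Finished exchange has
        # already happened inside wrap_socket() by this point.
        print(tls.version())                 # e.g. 'TLSv1.3'
        print(tls.cipher())                  # negotiated cipher suite
        print(tls.getpeercert()["subject"])  # from the server's Certificate message
```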
Client-authenticated TLS handshake
The following full example shows a client being authenticated (in addition to the server as in the example above; see mutual authentication) via TLS using certificates exchanged between both peers.
Negotiation Phase:
A client sends a ClientHello message specifying the highest TLS protocol version it supports, a random number, a list of suggested cipher suites and compression methods.
The server responds with a ServerHello message, containing the chosen protocol version, a random number, cipher suite and compression method from the choices offered by the client. The server may also send a session id as part of the message to perform a resumed handshake.
The server sends its Certificate message (depending on the selected cipher suite, this may be omitted by the server).
The server sends its ServerKeyExchange message (depending on the selected cipher suite, this may be omitted by the server). This message is sent for all DHE, ECDHE and DH_anon ciphersuites.
The server sends a CertificateRequest message, to request a certificate from the client.
The server sends a ServerHelloDone message, indicating it is done with handshake negotiation.
The client responds with a Certificate message, which contains the client's certificate.
The client sends a ClientKeyExchange message, which may contain a PreMasterSecret, public key, or nothing. (Again, this depends on the selected cipher.) This PreMasterSecret is encrypted using the public key of the server certificate.
The client sends a CertificateVerify message, which is a signature over the previous handshake messages using the client's certificate's private key. This signature can be verified by using the client's certificate's public key. This lets the server know that the client has access to the private key of the certificate and thus owns the certificate.
The client and server then use the random numbers and PreMasterSecret to compute a common secret, called the "master secret". All other key data ("session keys") for this connection is derived from this master secret (and the client- and server-generated random values), which is passed through a carefully designed pseudorandom function.
The client now sends a ChangeCipherSpec record, essentially telling the server, "Everything I tell you from now on will be authenticated (and encrypted if encryption was negotiated)." The ChangeCipherSpec is itself a record-level protocol and has content type 20, not 22.
Finally, the client sends an encrypted Finished message, containing a hash and MAC over the previous handshake messages.
The server will attempt to decrypt the client's Finished message and verify the hash and MAC. If the decryption or verification fails, the handshake is considered to have failed and the connection should be torn down.
Finally, the server sends a ChangeCipherSpec, telling the client, "Everything I tell you from now on will be authenticated (and encrypted if encryption was negotiated)."
The server sends its own encrypted Finished message.
The client performs the same decryption and verification procedure as the server did in the previous step.
Application phase: at this point, the "handshake" is complete and the application protocol is enabled, with content type of 23. Application messages exchanged between client and server will also be encrypted exactly like in their Finished message.
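A minimal sketch of how the client-authenticated variant is typically configured, here with Python's standard ssl module; the certificate and key file names are placeholder assumptions:

```python
# Sketch of mutual (client-authenticated) TLS configuration.
import ssl

# Server side: demand a client certificate; CERT_REQUIRED makes the server
# send the CertificateRequest message during the handshake.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
server_ctx.load_verify_locations(cafile="client-ca.pem")  # CA for client certs
server_ctx.verify_mode = ssl.CERT_REQUIRED

# Client side: present a certificate and prove possession of its private key
# (the library sends Certificate and CertificateVerify on our behalf).
client_ctx = ssl.create_default_context(cafile="server-ca.pem")
client_ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")
```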
Resumed TLS handshake
Public key operations (e.g., RSA) are relatively expensive in terms of computational power. TLS provides a secure shortcut in the handshake mechanism to avoid these operations: resumed sessions. Resumed sessions are implemented using session IDs or session tickets.
Apart from the performance benefit, resumed sessions can also be used for single sign-on, as it guarantees that both the original session and any resumed session originate from the same client. This is of particular importance for the FTP over TLS/SSL protocol, which would otherwise suffer from a man-in-the-middle attack in which an attacker could intercept the contents of the secondary data connections.
TLS 1.3 handshake
The TLS 1.3 handshake was condensed to only one round trip compared to the two round trips required in previous versions of TLS/SSL.
First, the client sends a ClientHello message to the server that contains a list of supported ciphers in order of the client's preference and a guess at which key-agreement algorithm will be used, together with the corresponding key share. By making this guess in its first flight, the client eliminates a round trip. After receiving the ClientHello, the server sends a ServerHello with its key share, a certificate, the chosen cipher suite and the Finished message.
After the client receives the server's Finished message, client and server are coordinated on which cipher suite to use.
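As a small sketch with Python's standard ssl module (Python 3.7 or later; the host name is a placeholder), an application can insist on the one-round-trip TLS 1.3 handshake rather than allowing fallback to older versions:

```python
# Sketch: require the TLS 1.3 handshake; fail instead of falling back.
import socket, ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        # If the server cannot speak TLS 1.3, wrap_socket() raises SSLError
        assert tls.version() == "TLSv1.3"
        print(tls.cipher())
```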
Session IDs
In an ordinary full handshake, the server sends a session id as part of the ServerHello message. The client associates this session id with the server's IP address and TCP port, so that when the client connects again to that server, it can use the session id to shortcut the handshake. In the server, the session id maps to the cryptographic parameters previously negotiated, specifically the "master secret". Both sides must have the same "master secret" or the resumed handshake will fail (this prevents an eavesdropper from using a session id). The random data in the ClientHello and ServerHello messages virtually guarantee that the generated connection keys will be different from in the previous connection. In the RFCs, this type of handshake is called an abbreviated handshake. It is also described in the literature as a restart handshake.
Negotiation phase:
A client sends a ClientHello message specifying the highest TLS protocol version it supports, a random number, a list of suggested cipher suites and compression methods. Included in the message is the session id from the previous TLS connection.
The server responds with a ServerHello message, containing the chosen protocol version, a random number, cipher suite and compression method from the choices offered by the client. If the server recognizes the session id sent by the client, it responds with the same session id. The client uses this to recognize that a resumed handshake is being performed. If the server does not recognize the session id sent by the client, it sends a different value for its session id. This tells the client that a resumed handshake will not be performed. At this point, both the client and server have the "master secret" and random data to generate the key data to be used for this connection.
The server now sends a ChangeCipherSpec record, essentially telling the client, "Everything I tell you from now on will be encrypted." The ChangeCipherSpec is itself a record-level protocol and has type 20 and not 22.
Finally, the server sends an encrypted Finished message, containing a hash and MAC over the previous handshake messages.
The client will attempt to decrypt the server's Finished message and verify the hash and MAC. If the decryption or verification fails, the handshake is considered to have failed and the connection should be torn down.
Finally, the client sends a ChangeCipherSpec, telling the server, "Everything I tell you from now on will be encrypted."
The client sends its own encrypted Finished message.
The server performs the same decryption and verification procedure as the client did in the previous step.
Application phase: at this point, the "handshake" is complete and the application protocol is enabled, with content type of 23. Application messages exchanged between client and server will also be encrypted exactly like in their Finished message.
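A hedged sketch of resumption from the application side, using Python's standard ssl module (the host name is a placeholder; whether the server actually resumes, and whether a session ID or a ticket is used, depends on the server and the negotiated protocol version):

```python
# Sketch of TLS session resumption via Python's ssl module.
import socket, ssl

context = ssl.create_default_context()

def connect(session=None):
    raw = socket.create_connection(("example.com", 443))
    return context.wrap_socket(raw, server_hostname="example.com",
                               session=session)

first = connect()
saved = first.session        # state from the full handshake; note that under
first.close()                # TLS 1.3 the ticket may arrive only after reads

second = connect(session=saved)   # offers the saved session to the server
print(second.session_reused)      # True if the abbreviated handshake was used
second.close()
```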
Session tickets
RFC 5077 extends TLS via use of session tickets, instead of session IDs. It defines a way to resume a TLS session without requiring that session-specific state be stored at the TLS server.
When using session tickets, the TLS server stores its session-specific state in a session ticket and sends the session ticket to the TLS client for storing. The client resumes a TLS session by sending the session ticket to the server, and the server resumes the TLS session according to the session-specific state in the ticket. The session ticket is encrypted and authenticated by the server, and the server verifies its validity before using its contents.
One particular weakness of this method with OpenSSL is that it always limits encryption and authentication security of the transmitted TLS session ticket to AES128-CBC-SHA256, no matter what other TLS parameters were negotiated for the actual TLS session. This means that the state information (the TLS session ticket) is not as well protected as the TLS session itself. Of particular concern is OpenSSL's storage of the keys in an application-wide context (SSL_CTX), i.e. for the life of the application, and not allowing for re-keying of the AES128-CBC-SHA256 TLS session tickets without resetting the application-wide OpenSSL context (which is uncommon, error-prone and often requires manual administrative intervention).
TLS record
This is the general format of all TLS records.
Content type
This field identifies the Record Layer Protocol Type contained in this record.
Legacy version
This field identifies the major and minor version of TLS prior to TLS 1.3 for the contained message. For a ClientHello message, this need not be the highest version supported by the client. For TLS 1.3 and later, this must be set to 0x0303, and the versions actually supported are instead sent in the supported_versions extension.
Length
The length of "protocol message(s)", "MAC" and "padding" fields combined (i.e. q−5), not to exceed 214 bytes (16 KiB).
Protocol message(s)
One or more messages identified by the Protocol field. Note that this field may be encrypted depending on the state of the connection.
MAC and padding
A message authentication code computed over the "protocol message(s)" field, with additional key material included. Note that this field may be encrypted, or not included entirely, depending on the state of the connection.
No "MAC" or "padding" fields can be present at end of TLS records before all cipher algorithms and parameters have been negotiated and handshaked and then confirmed by sending a CipherStateChange record (see below) for signalling that these parameters will take effect in all further records sent by the same peer.
Handshake protocol
Most messages exchanged during the setup of the TLS session are based on this record, unless an error or warning occurs and needs to be signaled by an Alert protocol record (see below), or the encryption mode of the session is modified by another record (see ChangeCipherSpec protocol below).
Message type
This field identifies the handshake message type.
Handshake message data length
This is a 3-byte field indicating the length of the handshake data, not including the header.
Note that multiple handshake messages may be combined within one record.
Alert protocol
This record should not normally be sent during ordinary handshaking or application exchanges. However, this message can be sent at any time during the handshake and up to the closure of the session. If this is used to signal a fatal error, the session will be closed immediately after sending this record, so this record is used to give a reason for this closure. If the alert level is flagged as a warning, the remote endpoint can decide to close the session if it decides that the session is not reliable enough for its needs (before doing so, the remote endpoint may also send its own signal).
Level
This field identifies the level of alert. If the level is fatal, the sender should close the session immediately. Otherwise, the recipient may decide to terminate the session itself, by sending its own fatal alert and closing the session immediately after sending it. The use of Alert records is optional; however, if one is missing before the session closure, the session may be resumed automatically (with its handshakes).
Normal closure of a session after termination of the transported application should preferably be alerted with at least the Close notify Alert type (with a simple warning level) to prevent such automatic resume of a new session. Signalling explicitly the normal closure of a secure session before effectively closing its transport layer is useful to prevent or detect attacks (like attempts to truncate the securely transported data, if it intrinsically does not have a predetermined length or duration that the recipient of the secured data may expect).
Description
This field identifies which type of alert is being sent.
ChangeCipherSpec protocol
CCS protocol type
Currently only 1.
Application protocol
Length
Length of application data (excluding the protocol header and including the MAC and padding trailers)
MAC
32 bytes for the SHA-256-based HMAC, 20 bytes for the SHA-1-based HMAC, 16 bytes for the MD5-based HMAC.
Padding
Variable length; last byte contains the padding length.
Support for name-based virtual servers
From the application protocol point of view, TLS belongs to a lower layer, although the TCP/IP model is too coarse to show it. This means that the TLS handshake is usually (except in the STARTTLS case) performed before the application protocol can start. When the name-based virtual server feature is provided by the application layer, all co-hosted virtual servers share the same certificate, because the server has to select and send a certificate immediately after the ClientHello message. This is a significant problem in hosting environments, because it means either sharing the same certificate among all customers or using a different IP address for each of them.
There are two known workarounds provided by X.509:
If all virtual servers belong to the same domain, a wildcard certificate can be used. Besides the loose host name selection that might be a problem or not, there is no common agreement about how to match wildcard certificates. Different rules are applied depending on the application protocol or software used.
Add every virtual host name in the subjectAltName extension. The major problem being that the certificate needs to be reissued whenever a new virtual server is added.
To provide the server name, Transport Layer Security (TLS) Extensions allow clients to include a Server Name Indication (SNI) extension in the extended ClientHello message. This extension immediately tells the server which name the client wishes to connect to, so that the server can select the appropriate certificate to send to the client.
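As a hedged sketch of the server side of SNI, using Python's standard ssl module (3.7 or later; host names and file names are placeholders), a callback can pick the certificate matching the requested name:

```python
# Sketch: select a per-hostname certificate from the SNI value.
import ssl

contexts = {}
for host in ("a.example.com", "b.example.com"):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=f"{host}.pem", keyfile=f"{host}.key")
    contexts[host] = ctx

def choose_certificate(ssl_socket, server_name, initial_context):
    # Called mid-handshake with the name from the SNI extension
    if server_name in contexts:
        ssl_socket.context = contexts[server_name]

default_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
default_ctx.load_cert_chain(certfile="default.pem", keyfile="default.key")
default_ctx.sni_callback = choose_certificate   # used when accepting connections
```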
RFC 2817 also documents a method to implement name-based virtual hosting by upgrading HTTP to TLS via an HTTP/1.1 Upgrade header. Normally this is used to implement HTTP over TLS securely within the main "http" URI scheme (which avoids forking the URI space and reduces the number of used ports); however, few implementations currently support this.
Standards
Primary standards
The current approved version of TLS is version 1.3, which is specified in:
: "The Transport Layer Security (TLS) Protocol Version 1.3".
The current standard replaces these former versions, which are now considered obsolete:
: "The TLS Protocol Version 1.0".
: "The Transport Layer Security (TLS) Protocol Version 1.1".
: "The Transport Layer Security (TLS) Protocol Version 1.2".
As well as the never standardized SSL 2.0 and 3.0, which are considered obsolete:
Internet Draft (1995), SSL Version 2.0
: "The Secure Sockets Layer (SSL) Protocol Version 3.0".
Extensions
Other RFCs subsequently extended TLS.
Extensions to TLS 1.0 include:
: "Using TLS with IMAP, POP3 and ACAP". Specifies an extension to the IMAP, POP3 and ACAP services that allow the server and client to use transport-layer security to provide private, authenticated communication over the Internet.
: "Addition of Kerberos Cipher Suites to Transport Layer Security (TLS)". The 40-bit cipher suites defined in this memo appear only for the purpose of documenting the fact that those cipher suite codes have already been assigned.
: "Upgrading to TLS Within HTTP/1.1", explains how to use the Upgrade mechanism in HTTP/1.1 to initiate Transport Layer Security (TLS) over an existing TCP connection. This allows unsecured and secured HTTP traffic to share the same well known port (in this case, http: at 80 rather than https: at 443).
: "HTTP Over TLS", distinguishes secured traffic from insecure traffic by the use of a different 'server port'.
: "SMTP Service Extension for Secure SMTP over Transport Layer Security". Specifies an extension to the SMTP service that allows an SMTP server and client to use transport-layer security to provide private, authenticated communication over the Internet.
: "AES Ciphersuites for TLS". Adds Advanced Encryption Standard (AES) cipher suites to the previously existing symmetric ciphers.
: "Transport Layer Security (TLS) Extensions", adds a mechanism for negotiating protocol extensions during session initialisation and defines some extensions. Made obsolete by .
: "Transport Layer Security Protocol Compression Methods", specifies the framework for compression methods and the DEFLATE compression method.
: "Transport Layer Security (TLS) Protocol Compression Using Lempel-Ziv-Stac (LZS)".
: "Addition of Camellia Cipher Suites to Transport Layer Security (TLS)".
: "Addition of SEED Cipher Suites to Transport Layer Security (TLS)".
: "Securing FTP with TLS".
: "Pre-Shared Key Ciphersuites for Transport Layer Security (TLS)", adds three sets of new cipher suites for the TLS protocol to support authentication based on pre-shared keys.
Extensions to TLS 1.1 include:
: "Datagram Transport Layer Security" specifies a TLS variant that works over datagram protocols (such as UDP).
: "Transport Layer Security (TLS) Extensions" describes both a set of specific extensions and a generic extension mechanism.
: "Elliptic Curve Cryptography (ECC) Cipher Suites for Transport Layer Security (TLS)".
: "TLS Handshake Message for Supplemental Data".
: "TLS User Mapping Extension".
: "Pre-Shared Key (PSK) Ciphersuites with NULL Encryption for Transport Layer Security (TLS)".
: "Using the Secure Remote Password (SRP) Protocol for TLS Authentication". Defines the TLS-SRP ciphersuites.
: "Transport Layer Security (TLS) Session Resumption without Server-Side State".
: "Using OpenPGP Keys for Transport Layer Security (TLS) Authentication", obsoleted by .
Extensions to TLS 1.2 include:
: "AES Galois Counter Mode (GCM) Cipher Suites for TLS".
: "TLS Elliptic Curve Cipher Suites with SHA-256/384 and AES Galois Counter Mode (GCM)".
: "Transport Layer Security (TLS) Renegotiation Indication Extension".
: "Transport Layer Security (TLS) Authorization Extensions".
: "Camellia Cipher Suites for TLS"
: "Transport Layer Security (TLS) Extensions: Extension Definitions", includes Server Name Indication and OCSP stapling.
: "Using OpenPGP Keys for Transport Layer Security (TLS) Authentication".
: "Prohibiting Secure Sockets Layer (SSL) Version 2.0".
: "Addition of the ARIA Cipher Suites to Transport Layer Security (TLS)".
: "Datagram Transport Layer Security Version 1.2".
: "Addition of the Camellia Cipher Suites to Transport Layer Security (TLS)".
: "Suite B Profile for Transport Layer Security (TLS)".
: "AES-CCM Cipher Suites for Transport Layer Security (TLS)".
: "Elliptic Curve Cryptography (ECC) Brainpool Curves for Transport Layer Security (TLS)".
: "AES-CCM Elliptic Curve Cryptography (ECC) Cipher Suites for TLS".
: "Transport Layer Security (TLS) Application-Layer Protocol Negotiation Extension".
: "Encrypt-then-MAC for Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS)".
: "Prohibiting RC4 Cipher Suites".
: "TLS Fallback Signaling Cipher Suite Value (SCSV) for Preventing Protocol Downgrade Attacks".
: "Deprecating Secure Sockets Layer Version 3.0".
: "Transport Layer Security (TLS) Session Hash and Extended Master Secret Extension".
: "A Transport Layer Security (TLS) ClientHello Padding Extension".
Encapsulations of TLS include:
: "The EAP-TLS Authentication Protocol"
Informational RFCs
: "Summarizing Known Attacks on Transport Layer Security (TLS) and Datagram TLS (DTLS)"
: "Recommendations for Secure Use of Transport Layer Security (TLS) and Datagram Transport Layer Security (DTLS)"
See also
Application-Layer Protocol Negotiation – a TLS extension used for SPDY and TLS False Start
Bullrun (decryption program) – a secret anti-encryption program run by the U.S. National Security Agency
Certificate authority
Certificate Transparency
HTTP Strict Transport Security – HSTS
Key ring file
Private Communications Technology (PCT) – a historic Microsoft competitor to SSL 2.0
QUIC (Quick UDP Internet Connections) – "...was designed to provide security protection equivalent to TLS/SSL"; QUIC's main goal is to improve perceived performance of connection-oriented web applications that are currently using TCP
Server-Gated Cryptography
tcpcrypt
DTLS
TLS acceleration
References
Further reading
Creating VPNs with IPsec and SSL/TLS Linux Journal article by Rami Rosen
External links
IETF (Internet Engineering Task Force) TLS Workgroup
Computer-related introductions in 1999
Cryptographic protocols
Presentation layer protocols |
188211 | https://en.wikipedia.org/wiki/Pidgin%20%28software%29 | Pidgin (software) | Pidgin (formerly named Gaim) is a free and open-source multi-platform instant messaging client, based on a library named libpurple that has support for many instant messaging protocols. It allows the user to log in to various services simultaneously from a single application, with a single interface for both popular and obsolete protocols (from AOL to Discord), thus avoiding the hassle of dealing with new software for each device and protocol.
The number of Pidgin users was estimated to be over three million in 2007.
Pidgin is widely used for its Off-the-Record Messaging (OTR) plugin, which offers end-to-end encryption. For this reason it is included in the privacy- and anonymity-focused operating system Tails.
History
The program was originally written by Mark Spencer, an Auburn University sophomore, as an emulation of AOL's IM program AOL Instant Messenger on Linux using the GTK+ toolkit. The earliest archived release was on December 31, 1998. It was named GAIM (GTK+ AOL Instant Messenger) accordingly. The emulation was not based on reverse engineering, but instead relied on information about the protocol that AOL had published on the web. Development was assisted by some of AOL's technical staff. Support for other IM protocols was added soon thereafter.
On 6 July 2015, Pidgin scored seven out of seven points on the Electronic Frontier Foundation's secure messaging scorecard. They have received points for having communications encrypted in transit, having communications encrypted with keys the providers don't have access to (end-to-end encryption), making it possible for users to independently verify their correspondent's identities, having past communications secure if the keys are stolen (forward secrecy), having their code open to independent review (open source), having their security designs well-documented, and having recent independent security audits.
Naming dispute
In response to pressure from AOL, the program was renamed to the acronymous-but-lowercase gaim. As AOL Instant Messenger gained popularity, AOL trademarked its acronym, "AIM", leading to a lengthy legal struggle with the creators of GAIM, who kept the matter largely secret.
On April 6, 2007, the project development team announced the results of their settlement with AOL, which included a series of name changes: Gaim became Pidgin, libgaim became libpurple, and gaim-text (the command-line interface version) became finch. The name Pidgin was chosen in reference to the term "pidgin", which describes communication between people who do not share a common language. The name "purple" refers to "prpl", the internal libgaim name for an IM protocol plugin.
Due to the legal issues, version 2.0 of the software was frozen in beta stages. Following the settlement, it was announced that the first official release of Pidgin 2.0.0 was hoped to occur during the two weeks from April 8, 2007. However, Pidgin 2.0 was not released as scheduled; Pidgin developers announced on April 22, 2007 that the delay was due to the preferences directory ".gaim".
Pidgin 2.0.0 was released on May 3, 2007. Other visual changes were made to the interface in this version, including updated icons.
Features
Pidgin provides a graphical front-end for libpurple using GTK+. Libpurple supports many instant-messaging protocols.
Pidgin supports multiple operating systems, including Windows and many Unix-like systems such as Linux, the BSDs, and AmigaOS. It is included by default in the operating systems Tails and Xubuntu.
Pluggability
The program is designed to be extended with plugins. Plugins are often written by third-party developers. They can be used to add support for protocols, which is useful for those such as Skype or Discord which have licensing issues (however, the users' data and interactions are still subject to their policies and eavesdropping). They can also add other significant features. For example, the "Off-the-Record Messaging" (OTR) plugin provides end-to-end encryption.
The TLS encryption system is pluggable, allowing different TLS libraries to be easily substituted. GnuTLS is the default, and NSS is also supported. Some operating systems' ports, such as OpenBSD's, choose to use OpenSSL or LibreSSL by default instead.
Contacts
Contacts from multiple protocols can be merged into a single contact, and contacts can be given aliases or placed into groups.
To react when users log on or change status (such as moving from "Away" to "Available"), Pidgin supports on-action automated scripts called Buddy Pounces, which respond to such events in customizable ways.
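For illustration, similar event-driven automation can also be scripted externally over Pidgin's D-Bus interface. The following is a hedged Python sketch using the third-party dbus-python and PyGObject packages; the service, object path, interface, signal and method names below follow Pidgin's D-Bus documentation, but their availability depends on the installed Pidgin/libpurple version:

```python
# Hedged sketch: react to a buddy signing on, via Pidgin's D-Bus interface.
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()
obj = bus.get_object("im.pidgin.purple.PurpleService",
                     "/im/pidgin/purple/PurpleObject")
purple = dbus.Interface(obj, "im.pidgin.purple.PurpleInterface")

def on_buddy_signed_on(buddy_id):
    # React to a contact coming online, much as a Buddy Pounce would
    print("Buddy signed on:", purple.PurpleBuddyGetName(buddy_id))

purple.connect_to_signal("BuddySignedOn", on_buddy_signed_on)
GLib.MainLoop().run()   # block and dispatch signals
```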
File transfer
Pidgin supports file transfers for many protocols. It lacks some protocol-specific features like the folder sharing available from Yahoo. Direct, peer-to-peer file transfers are supported over protocols such as XMPP and MSN.
Voice and video chat
As of version 2.6 (released on August 18, 2009), Pidgin supports voice/video calls using Farstream. Currently, calls can only be initiated through the XMPP protocol.
Miscellaneous
Further features include support for themes, emoticons, spell checking, and notification area integration.
Supported protocols
The following protocols are officially supported by libpurple 2.12.0, without any extensions or plugins:
Bonjour (Apple's implementation of Zeroconf)
Gadu-Gadu
IRC
Lotus Sametime
Novell GroupWise
OSCAR (AIM, ICQ, MobileMe, ...)
SIMPLE
SILC
XMPP/Jingle (Google Talk, LJ Talk, Gizmo5, ...)
Zephyr
Some XMPP servers provide transports, which allow users to access networks using non-XMPP protocols without having to install plugins or additional software. Pidgin's support for XMPP means that these transports can be used to communicate via otherwise unsupported protocols, including not only instant messaging protocols, but also protocols such as SMS or E-mail.
Additional protocols, supported by third-party plugins, include Discord, Telegram, Microsoft OCS/LCS (extended SIP/SIMPLE), Facebook Messenger, QQ, Skype via skype4pidgin plugin, WhatsApp, Signal and the Xfire gaming network (requires the Gfire plugin).
Plugins
Various other features are supported using third-party plugins. Such features include:
End-to-end encryption, through Off-the-Record Messaging (OTR)
Notifications (such as showing "toaster" popups or Snarl notifications, or lighting LEDs on laptops)
Showing contacts what the user is listening to in various media players
Adding mathematical formulas written in LaTeX to conversations
Skype text chat via skype4pidgin and newer SkypeWeb plugin
Discord text chat via the purple-discord plugin
Watching videos directly within a conversation when receiving a video sharing website link (YouTube, Vimeo)
Mascot
The mascot of Pidgin is a purple pigeon named "The Purple Pidgin".
Criticisms
As observed by Wired in 2015, the libpurple codebase is "known for its bountiful security bugs". In 2011, security vulnerabilities were already discovered in popular OTR plugins using libpurple.
As of version 2.4 and later, the ability to manually resize the text input box of conversations was removed. This led to a fork, Carrier (originally named Funpidgin).
Passwords are stored in a plaintext file, readable by any person or program that can access the user's files. Version 3.0 of Pidgin (no announced release date) will support password storage in system keyrings such as KWallet and the GNOME Keyring.
Pidgin does not currently support pausing or reattempting file transfers.
Pidgin does not allow disabling the group sorting on the contact list.
Other notable software based on libpurple
Adium and Proteus (both for macOS)
Meebo (web-based, no longer available)
Telepathy Haze (a Tube for some of the protocols supported by the Telepathy framework)
QuteCom (cross-platform, focused on VoIP and video)
Instantbird (cross-platform, based on Mozilla's Gecko engine)
BitlBee and Minbif are IRCd-like gateways to multiple IM networks, and can be compiled with libpurple to increase functionality.
See also
Multiprotocol instant messaging application
Comparison of instant messaging protocols
Comparison of instant messaging clients
Comparison of Internet Relay Chat clients
Comparison of XMPP clients
Online chat
List of computing mascots
:Category:Computing mascots
References
External links
1998 software
Free instant messaging clients
Free software programmed in C
Instant messaging clients that use GTK
Windows instant messaging clients
AIM (software) clients
Free XMPP clients
Internet Relay Chat clients
Free Internet Relay Chat clients
Windows Internet Relay Chat clients
Portable software
Cross-platform free software
Applications using D-Bus
Yahoo! instant messaging clients
Software that uses Meson |
188371 | https://en.wikipedia.org/wiki/Reconfigurable%20computing | Reconfigurable computing | Reconfigurable computing is a computer architecture combining some of the flexibility of software with the high performance of hardware by processing with very flexible high speed computing fabrics like field-programmable gate arrays (FPGAs). The principal difference when compared to using ordinary microprocessors is the ability to make substantial changes to the datapath itself in addition to the control flow. On the other hand, the main difference from custom hardware, i.e. application-specific integrated circuits (ASICs) is the possibility to adapt the hardware during runtime by "loading" a new circuit on the reconfigurable fabric.
History
The concept of reconfigurable computing has existed since the 1960s, when Gerald Estrin's paper proposed the concept of a computer made of a standard processor and an array of "reconfigurable" hardware. The main processor would control the behavior of the reconfigurable hardware. The latter would then be tailored to perform a specific task, such as image processing or pattern matching, as quickly as a dedicated piece of hardware. Once the task was done, the hardware could be adjusted to do some other task. This resulted in a hybrid computer structure combining the flexibility of software with the speed of hardware.
In the 1980s and 1990s there was a renaissance in this area of research with many proposed reconfigurable architectures developed in industry and academia, such as: Copacobana, Matrix, GARP, Elixent, NGEN, Polyp, MereGen, PACT XPP, Silicon Hive, Montium, Pleiades, Morphosys, and PiCoGA. Such designs were feasible due to the constant progress of silicon technology that let complex designs be implemented on one chip. Some of these massively parallel reconfigurable computers were built primarily for special subdomains such as molecular evolution, neural or image processing. The world's first commercial reconfigurable computer, the Algotronix CHS2X4, was completed in 1991. It was not a commercial success, but was promising enough that Xilinx (the inventor of the Field-Programmable Gate Array, FPGA) bought the technology and hired the Algotronix staff. Later machines enabled first demonstrations of scientific principles, such as the spontaneous spatial self-organisation of genetic coding with MereGen.
Theories
Tredennick's Classification
The fundamental model of the reconfigurable computing machine paradigm, the data-stream-based anti-machine, is well illustrated by the differences from other machine paradigms that were introduced earlier, as shown by Nick Tredennick's following classification scheme of computing paradigms (see "Table 1: Nick Tredennick's Paradigm Classification Scheme").
Hartenstein's Xputer
Computer scientist Reiner Hartenstein describes reconfigurable computing in terms of an anti-machine that, according to him, represents a fundamental paradigm shift away from the more conventional von Neumann machine. Hartenstein calls this the Reconfigurable Computing Paradox: software-to-configware (software-to-FPGA) migration results in reported speed-up factors of up to more than four orders of magnitude, as well as a reduction in electricity consumption of up to almost four orders of magnitude, even though the technological parameters of FPGAs are behind the Gordon Moore curve by about four orders of magnitude and the clock frequency is substantially lower than that of microprocessors. This paradox is partly explained by the Von Neumann syndrome.
High-performance computing
High-Performance Reconfigurable Computing (HPRC) is a computer architecture combining reconfigurable computing-based accelerators like field-programmable gate array with CPUs or multi-core processors.
The increase of logic in an FPGA has enabled larger and more complex algorithms to be programmed into the FPGA. The attachment of such an FPGA to a modern CPU over a high speed bus, like PCI express, has enabled the configurable logic to act more like a coprocessor rather than a peripheral. This has brought reconfigurable computing into the high-performance computing sphere.
Furthermore, replicating an algorithm on an FPGA, or using a multiplicity of FPGAs, has enabled reconfigurable SIMD systems to be produced where several computational devices can concurrently operate on different data, which is highly parallel computing.
This heterogeneous systems technique is used in computing research and especially in supercomputing.
A 2008 paper reported speed-up factors of more than 4 orders of magnitude and energy-saving factors of up to almost 4 orders of magnitude.
Some supercomputer firms offer heterogeneous processing blocks including FPGAs as accelerators.
One research area is the twin-paradigm programming tool flow productivity obtained for such heterogeneous systems.
The US National Science Foundation has a center for high-performance reconfigurable computing (CHREC).
In April 2011 the fourth Many-core and Reconfigurable Supercomputing Conference was held in Europe.
Commercial high-performance reconfigurable computing systems are beginning to emerge with the announcement of IBM integrating FPGAs with its IBM Power microprocessors.
Partial re-configuration
Partial re-configuration is the process of changing a portion of reconfigurable hardware circuitry while the other portion keeps its former configuration. Field programmable gate arrays are often used as a support to partial reconfiguration.
Electronic hardware, like software, can be designed modularly, by creating subcomponents and then higher-level components to instantiate them. In many cases it is useful to be able to swap out one or several of these subcomponents while the FPGA is still operating.
Normally, reconfiguring an FPGA requires it to be held in reset while an external controller reloads a design onto it. Partial reconfiguration allows for critical parts of the design to continue operating while a controller either on the FPGA or off of it loads a partial design into a reconfigurable module. Partial reconfiguration also can be used to save space for multiple designs by only storing the partial designs that change between designs.
A common example for when partial reconfiguration would be useful is the case of a communication device. If the device is controlling multiple connections, some of which require encryption, it would be useful to be able to load different encryption cores without bringing the whole controller down.
Partial reconfiguration is not supported on all FPGAs. A special software flow with emphasis on modular design is required. Typically the design modules are built along well defined boundaries inside the FPGA that require the design to be specially mapped to the internal hardware.
From the functionality of the design, partial reconfiguration can be divided into two groups:
dynamic partial reconfiguration, also known as active partial reconfiguration – permits part of the device to be changed while the rest of the FPGA is still running;
static partial reconfiguration - the device is not active during the reconfiguration process. While the partial data is sent into the FPGA, the rest of the device is stopped (in the shutdown mode) and brought up after the configuration is completed.
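As a hedged illustration of how dynamic partial reconfiguration is driven in practice, the following Python sketch uses the Linux kernel's FPGA manager sysfs interface; the device path, the flag value marking a partial bitstream, and the firmware file name are platform-dependent assumptions, not a fixed API:

```python
# Hedged sketch: trigger a partial reconfiguration via Linux's fpga_manager.
FPGA = "/sys/class/fpga_manager/fpga0"   # assumed device path

def load_partial_bitstream(firmware_name: str) -> None:
    # Mark the next image as a partial one (flag value is platform-specific)
    with open(f"{FPGA}/flags", "w") as f:
        f.write("1")
    # Writing the image name (relative to /lib/firmware) starts programming;
    # the static portion of the design keeps running during the load
    with open(f"{FPGA}/firmware", "w") as f:
        f.write(firmware_name)
    with open(f"{FPGA}/state") as f:
        print("FPGA manager state:", f.read().strip())

load_partial_bitstream("partial_region_a.bin")   # placeholder bitstream name
```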
Current systems
Computer emulation
With the advent of affordable FPGA boards, students' and hobbyists' projects seek to recreate vintage computers or implement more novel architectures. Such projects are built with reconfigurable hardware (FPGAs), and some devices support emulation of multiple vintage computers using a single reconfigurable hardware (C-One).
COPACOBANA
A fully FPGA-based computer is the COPACOBANA, the Cost Optimized Codebreaker and Analyzer and its successor RIVYERA. A spin-off company SciEngines GmbH of the COPACOBANA-Project of the Universities of Bochum and Kiel in Germany continues the development of fully FPGA-based computers.
Mitrionics
Mitrionics has developed an SDK that enables software written using a single-assignment language to be compiled and executed on FPGA-based computers. The Mitrion-C software language and Mitrion processor enable software developers to write and execute applications on FPGA-based computers in the same manner as with other computing technologies, such as graphical processing units ("GPUs"), cell-based processors, parallel processing units ("PPUs"), multi-core CPUs, and traditional single-core CPU clusters. (Mitrionics has since gone out of business.)
National Instruments
National Instruments have developed a hybrid embedded computing system called CompactRIO. It consists of reconfigurable chassis housing the user-programmable FPGA, hot swappable I/O modules, real-time controller for deterministic communication and processing, and graphical LabVIEW software for rapid RT and FPGA programming.
Xilinx
Xilinx has developed two styles of partial reconfiguration of FPGA devices: module-based and difference-based. Module-based partial reconfiguration allows distinct modular parts of the design to be reconfigured, while difference-based partial reconfiguration can be used when a small change is made to a design.
Intel
Intel supports partial reconfiguration of their FPGA devices on 28 nm devices such as Stratix V, and on the 20 nm Arria 10 devices. The Intel FPGA partial reconfiguration flow for Arria 10 is based on the hierarchical design methodology in the Quartus Prime Pro software where users create physical partitions of the FPGA that can be reconfigured at runtime while the remainder of the design continues to operate. The Quartus Prime Pro software also support hierarchical partial reconfiguration and simulation of partial reconfiguration.
Classification of systems
As an emerging field, classifications of reconfigurable architectures are still being developed and refined as new architectures are developed; no unifying taxonomy has been suggested to date. However, several recurring parameters can be used to classify these systems.
Granularity
The granularity of the reconfigurable logic is defined as the size of the smallest functional unit (configurable logic block, CLB) that is addressed by the mapping tools. High granularity, also known as fine granularity, often implies greater flexibility when implementing algorithms in hardware. However, there is a penalty associated with this in terms of increased power, area and delay, due to the greater quantity of routing required per computation. Fine-grained architectures work at the bit-manipulation level, whilst coarse-grained processing elements (reconfigurable datapath units, rDPUs) are better optimised for standard data-path applications. One of the drawbacks of coarse-grained architectures is that they tend to lose some of their utilisation and performance if they need to perform smaller computations than their granularity provides; for example, a one-bit add on a four-bit-wide functional unit would waste three bits. This problem can be solved by having a coarse-grain array (reconfigurable datapath array, rDPA) and an FPGA on the same chip.
Coarse-grained architectures (rDPA) are intended for the implementation of algorithms needing word-width data paths (rDPU). As their functional blocks are optimized for large computations and typically comprise word-wide arithmetic logic units (ALU), they will perform these computations more quickly and with more power efficiency than a set of interconnected smaller functional units; this is due to the connecting wires being shorter, resulting in less wire capacitance and hence faster and lower-power designs. A potential undesirable consequence of having larger computational blocks is that, when the size of operands does not match the algorithm, an inefficient utilisation of resources can result. Often the type of applications to be run are known in advance, allowing the logic, memory and routing resources to be tailored to enhance the performance of the device whilst still providing a certain level of flexibility for future adaptation. Examples of this are domain-specific arrays aimed at gaining better performance in terms of power, area and throughput than their more generic, finer-grained FPGA cousins by reducing their flexibility.
Rate of reconfiguration
Configuration of these reconfigurable systems can happen at deployment time, between execution phases or during execution. In a typical reconfigurable system, a bit stream is used to program the device at deployment time. Fine grained systems by their own nature require greater configuration time than more coarse-grained architectures due to more elements needing to be addressed and programmed. Therefore, more coarse-grained architectures gain from potential lower energy requirements, as less information is transferred and utilised. Intuitively, the slower the rate of reconfiguration the smaller the energy consumption as the associated energy cost of reconfiguration are amortised over a longer period of time. Partial re-configuration aims to allow part of the device to be reprogrammed while another part is still performing active computation. Partial re-configuration allows smaller reconfigurable bit streams thus not wasting energy on transmitting redundant information in the bit stream. Compression of the bit stream is possible but careful analysis is to be carried out to ensure that the energy saved by using smaller bit streams is not outweighed by the computation needed to decompress the data.
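The amortisation argument can be made concrete with a little arithmetic; the numbers below are invented for illustration only:

```python
# Illustrative arithmetic only: amortising reconfiguration energy over time.
E_CONFIG_J = 0.050   # assumed energy per reconfiguration, in joules

for interval_s in (0.001, 0.1, 10.0):   # time between reconfigurations
    amortised_power_w = E_CONFIG_J / interval_s
    print(f"reconfigure every {interval_s:>6} s -> "
          f"{amortised_power_w * 1000:.1f} mW average overhead")
```

Reconfiguring every millisecond costs 50 W of average overhead in this toy example, while reconfiguring every 10 seconds costs only 5 mW, which is why slower reconfiguration rates reduce energy consumption.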
Host coupling
Often the reconfigurable array is used as a processing accelerator attached to a host processor. The level of coupling determines the type of data transfers, latency, power, throughput and overheads involved when utilising the reconfigurable logic. Some of the most intuitive designs use a peripheral bus to provide a coprocessor like arrangement for the reconfigurable array. However, there have also been implementations where the reconfigurable fabric is much closer to the processor, some are even implemented into the data path, utilising the processor registers. The job of the host processor is to perform the control functions, configure the logic, schedule data and to provide external interfacing.
Routing/interconnects
The flexibility in reconfigurable devices mainly comes from their routing interconnect. One style of interconnect made popular by FPGAs vendors, Xilinx and Altera are the island style layout, where blocks are arranged in an array with vertical and horizontal routing. A layout with inadequate routing may suffer from poor flexibility and resource utilisation, therefore providing limited performance. If too much interconnect is provided this requires more transistors than necessary and thus more silicon area, longer wires and more power consumption.
Challenges for operating systems
One of the key challenges for reconfigurable computing is to enable higher design productivity and provide an easier way to use reconfigurable computing systems for users that are unfamiliar with the underlying concepts. One way of doing this is to provide standardization and abstraction, usually supported and enforced by an operating system.
One of the major tasks of an operating system is to hide the hardware and present programs (and their programmers) with nice, clean, elegant, and consistent abstractions to work with instead. In other words, the two main tasks of an operating system are abstraction and resource management.
Abstraction is a powerful mechanism to handle complex and different (hardware) tasks in a well-defined and common manner. One of the most elementary OS abstractions is a process. A process is a running application that has the perception (provided by the OS) that it is running on its own on the underlying virtual hardware. This can be relaxed by the concept of threads, allowing different tasks to run concurrently on this virtual hardware to exploit task level parallelism. To allow different processes and threads to coordinate their work, communication and synchronization methods have to be provided by the OS.
In addition to abstraction, resource management of the underlying hardware components is necessary because the virtual computers provided to the processes and threads by the operating system need to share available physical resources (processors, memory, and devices) spatially and temporarily.
See also
Computing with Memory
Glossary of reconfigurable computing
iLAND project
M-Labs
One chip MSX
PipeRench
PSoC
Sprinter
References
Further reading
Cardoso, João M. P.; Hübner, Michael (Eds.), Reconfigurable Computing: From FPGAs to Hardware/Software Codesign, Springer, 2011.
S. Hauck and A. DeHon, Reconfigurable Computing: The Theory and Practice of FPGA-Based Computing, Morgan Kaufmann, 2008.
J. Henkel, S. Parameswaran (editors): Designing Embedded Processors. A Low Power Perspective; Springer Verlag, March 2007
J. Teich (editor) et al.: Reconfigurable Computing Systems. Special Topic Issue of Journal it — Information Technology, Oldenbourg Verlag, Munich. Vol. 49(2007) Issue 3
T.J. Todman, G.A. Constantinides, S.J.E. Wilton, O. Mencer, W. Luk and P.Y.K. Cheung, "Reconfigurable Computing: Architectures and Design Methods", IEEE Proceedings: Computer & Digital Techniques, Vol. 152, No. 2, March 2005, pp. 193–208.
A. Zomaya (editor): Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies; Springer Verlag, 2006
J. M. Arnold and D. A. Buell, "VHDL programming on Splash 2," in More FPGAs, Will Moore and Wayne Luk, editors, Abingdon EE & CS Books, Oxford, England, 1994, pp. 182–191. (Proceedings,International Workshop on Field-Programmable Logic, Oxford, 1993.)
J. M. Arnold, D. A. Buell, D. Hoang, D. V. Pryor, N. Shirazi, M. R. Thistle, "Splash 2 and its applications, "Proceedings, International Conference on Computer Design, Cambridge, 1993, pp. 482–486.
D. A. Buell and Kenneth L. Pocek, "Custom computing machines: An introduction," The Journal of Supercomputing, v. 9, 1995, pp. 219–230.
External links
Lectures on Reconfigurable Computing at Brown University
Introduction to Dynamic Partial Reconfiguration
ReCoBus-Builder project for easily implementing complex reconfigurable systems
DRESD (Dynamic Reconfigurability in Embedded System Design) research project
List of file formats

This is a list of file formats used by computers, organized by type. Filename extensions are usually noted in parentheses if they differ from the file format name or abbreviation. Many operating systems do not limit filenames to one extension shorter than four characters, as was common with some operating systems that supported the File Allocation Table (FAT) file system. Examples of operating systems that do not impose this limit include Unix-like systems, as well as Microsoft Windows NT, 95–98, and ME, which have no three-character limit on extensions for 32-bit or 64-bit applications on file systems other than pre-Windows 95 and Windows NT 3.5 versions of the FAT file system. Some filenames are given extensions longer than three characters. While MS-DOS and NT always treat the suffix after the last period in a file's name as its extension, in UNIX-like systems the final period does not necessarily mean that the text after it is the file's extension.
Some file formats, such as .txt or .text, may be listed multiple times.
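The last-period rule mentioned above can be illustrated with Python's standard os.path.splitext (the file names below are invented examples):

    import os.path

    # Only the text after the final period is treated as the extension.
    print(os.path.splitext("archive.tar.gz"))  # ('archive.tar', '.gz')
    print(os.path.splitext("README.1ST"))      # ('README', '.1ST')
    print(os.path.splitext(".bashrc"))         # ('.bashrc', '') - a leading dot alone is not an extension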
Archive and compressed
.?mn – a custom format created by Team Gastereler for opening Nintendo .arc files on PC; these files have not been publicly released.
.?Q? – files that are compressed, often by the SQ program.
7z – 7-Zip compressed file
A – a static library archive (as produced by the Unix ar tool), commonly used with C/C++ toolchains
AAPKG – ArchestrA IDE
AAC – Advanced Audio Coding
ace – ACE compressed file
ALZ – ALZip compressed file
APK – Android package: Applications installable on Android; package format of the Alpine Linux distribution
APPX – Microsoft Application Package (.appx)
AT3 – Sony's UMD data compression
.bke – BackupEarth.com data compression
ARC – pre-Zip data compression
ARC - Nintendo U8 Archive (mostly Yaz0 compressed)
ARJ – ARJ compressed file
ASS (also SAS) – a subtitles file created by Aegisub, a video typesetting application (also a Halo game engine file)
B – (B file) Similar to .a, but less compressed.
BA – Scifer Archive (.ba), Scifer External Archive Type
BB – a 3D image file made with the application Artlantis
big – Special file compression format used by Electronic Arts to compress the data for many of EA's games
BIN – compressed archive, can be read and used by CD-ROMs and Java, extractable by 7-Zip and WinRAR
bjsn – Used to store The Escapists saves on Android.
BKF (.bkf) – Microsoft backup created by NTBackup
Blend – 3D scene file used by the animation software Blender
bzip2 (.bz2) – compressed file using the bzip2 algorithm
BMP – Bitmap image
bld – Skyscraper Simulator Building
cab – A cabinet (.cab) file is a library of compressed files stored as one file. Cabinet files are used to organize installation files that are copied to the user's system.
c4 – JEDMICS image files, a DOD system
cals – JEDMICS image files, a DOD system
xaml – Extensible Application Markup Language, an XML-based markup used by Microsoft tools such as Visual Studio for describing user interfaces
CLIPFLAIR (.clipflair, .clipflair.zip) – ClipFlair Studio ClipFlair component saved state file (contains component options in XML, extra/attached files and nested components' state in child .clipflair.zip files – activities are also components and can be nested at any depth)
CPT, SEA – Compact Pro (Macintosh)
DAA – Closed-format, Windows-only compressed disk image
deb – Debian install package
DMG – an Apple compressed/encrypted format
DDZ – a file which can only be used by the "daydreamer engine" created by "fever-dreamer", a program similar to RAGS; it is mainly used to make somewhat short games
DN – Adobe Dimension CC file format
DPE – Package of AVE documents made with Aquafadas digital publishing tools.
.egg – Alzip Egg Edition compressed file
EGT (.egt) – EGT Universal Document also used to create compressed cabinet files replaces .ecab
ECAB (.ECAB, .ezip) – EGT Compressed Folder used in advanced systems to compress entire system folders, replaced by EGT Universal Document
ESD – Electronic Software Distribution, a compressed and encrypted WIM File
ESS (.ess) – EGT SmartSense File, detects files compressed using the EGT compression system.
EXE (.exe) – Windows application
Flipchart file (.flipchart) – Used in Promethean ActivInspire Flipchart Software.
GBP – two unrelated formats share this extension: 1. an archive index file created by Genie Timeline, containing references to the files (or archives) the user has chosen to back up, opened with Genie-Soft Genie Timeline on Windows; 2. a data output file created by printed circuit board (PCB) CAD packages, which can be opened with tools such as Autodesk EAGLE, Altium Designer, Viewplot, PCB Elegance, Gerbv, or gEDA.
GBS (.gbs, .ggp, .gsc) – OtterUI binary scene file
GHO (.gho, .ghs) – Norton Ghost
GIF (.gif) – Graphics Interchange Format
gzip (.gz) – Compressed file
HTML (.html) HTML code file
IPG (.ipg) – format in which Apple Inc. packages their iPod games; can be extracted with WinRAR
jar – ZIP file with manifest for use with Java applications.
JPG – Joint Photographic Experts Group image file
JPEG – Joint Photographic Experts Group image file
LBR (.Lawrence) – Lawrence Compiler Type file
LBR – Library file
LQR – LBR Library file compressed by the SQ program.
LHA (.lzh) – Lempel, Ziv, Huffman
lzip (.lz) – Compressed file
lzo
lzma – Lempel–Ziv–Markov chain algorithm compressed file
LZX
MBW (.mbw) – MBRWizard archive
MHTML – Mime HTML (Hyper-Text Markup Language) code file
MPQ Archives (.mpq) – Used by Blizzard Entertainment
BIN (.bin) – MacBinary
NL2PKG – NoLimits 2 Package (.nl2pkg)
NTH (.nth) – Nokia Theme Used by Nokia Series 40 Cellphones
OAR (.oar) – OAR archive
OSK - Compressed osu! skin archive
OSR - Compressed osu! replay archive
OSZ – Compressed osu! beatmap archive
PAK – Enhanced type of .ARC archive
PAR (.par, .par2) – Parchive
PAF (.paf) – Portable Application File
PEA (.pea) – PeaZip archive file
PNG - Portable Network Graphic - Image File
PHP (.php) – PHP code file
PYK (.pyk) – Compressed file
PK3 (.pk3) – Quake 3 archive (See note on Doom³)
PK4 (.pk4) – Doom³ archive (Opens similarly to a zip archive.)
PXZ (.pxz) – a compressed layered image file used by the image-editing website pixlr.com
py / pyw – Python code file
RAR (.rar) – RAR archive; multi-volume archives use .r01–.r99, then .s01, and so on
RAG, RAGS – game file playable in the RAGS game engine, a free program which allows people to both create and play games; games created have the format "RAG game file"
RaX – Archive file created by RaX
RBXL – Roblox Studio place file
RBXLX – Roblox Studio XML place file
RBXM - Roblox studio script file
RPM – Red Hat package/installer for Fedora, RHEL, and similar systems.
sb – Scratch file
sb2 – Scratch 2.0 file
sb3 - Scratch 3.0 file
SEN – Scifer Archive (.sen) – Scifer Internal Archive Type
SIT, SITX (.sit, .sitx) – StuffIt archives (Macintosh)
SIS/SISX – Symbian Application Package
SKB – Google SketchUp backup File
SQ (.sq) – Squish Compressed Archive
SWM – split WIM file, usually found on an OEM recovery partition to store a preinstalled Windows image, and to make recovery backup (to USB drive) easier (due to FAT32 limitations)
SZS – Nintendo Yaz0 Compressed Archive
TAR – group of files, packaged as one file
TGZ (.tar.gz) – gzipped tar file
TB (.tb) – Tabbery Virtual Desktop Tab file
TIB (.tib) – Acronis True Image backup
UHA – Ultra High Archive Compression
UUE (.uue) – unified utility engine – the generic and default format for all things UUe-related.
VIV – Archive format used to compress data for several video games, including Need For Speed: High Stakes.
VOL – video game data package.
VSA – Altiris Virtual Software Archive
WAX – Wavexpress – A ZIP alternative optimized for packages containing video, allowing multiple packaged files to be all-or-none delivered with near-instantaneous unpacking via NTFS file system manipulation.
WIM – A compressed disk image for installing Windows Vista or higher, Windows Fundamentals for Legacy PC, or restoring a system image made from Backup and Restore (Windows Vista/7)
XAP – Windows Phone Application Package
xz – xz compressed files, based on LZMA/LZMA2 algorithm
Z – Unix compress file
zoo – based on LZW
zip – popular compression format
ZIM – an open file format that stores wiki content for offline usage
Physical recordable media archiving
ISO – The generic format for most optical media, including CD-ROM, DVD-ROM, Blu-ray Disc, HD DVD and UMD.
NRG – The proprietary optical media archive format used by Nero applications.
IMG – For archiving DOS formatted floppy disks, larger optical media, and hard disk drives.
ADF – Amiga Disk Format, for archiving Amiga floppy disks
ADZ – The GZip-compressed version of ADF.
DMS – Disk Masher System, a disk-archiving system native to the Amiga.
DSK – For archiving floppy disks from a number of other platforms, including the ZX Spectrum and Amstrad CPC.
D64 – An archive of a Commodore 64 floppy disk.
SDI – System Deployment Image, used for archiving and providing "virtual disk" functionality.
MDS – DAEMON tools native disc image format used for making images from optical CD-ROM, DVD-ROM, HD DVD or Blu-ray Disc. It comes together with MDF file and can be mounted with DAEMON Tools.
MDX – New DAEMON Tools format that allows getting one MDX disc image file instead of two (MDF and MDS).
DMG – Macintosh disk image files
DAT – MPEG-1 stream, as stored in .DAT files on a Video CD
CDI – DiscJuggler image file
CUE – CDRWIN cue sheet image file
CIF – Easy CD Creator .cif format
C2D – Roxio-WinOnCD .c2d format
DAA – PowerISO .daa format
B6T – BlindWrite 6 image file
B5T – BlindWrite 5 image file
BWT – BlindWrite 4 image file
FFPPKG - FreeFire Profile Export Package
Computer-aided design
"Computer-aided" is a prefix for several categories of tools (e.g., design, manufacture, engineering) which assist professionals in their respective fields (e.g., machining, architecture, schematics).
Computer-aided design (CAD)
Computer-aided design (CAD) software assists engineers, architects and other design professionals in project design.
3DXML – Dassault Systemes graphic representation
3MF – Microsoft 3D Manufacturing Format
ACP – VA Software VA – Virtual Architecture CAD file
AMF – Additive Manufacturing File Format
AEC – DataCAD drawing format
AR – Ashlar-Vellum Argon – 3D Modeling
ART – ArtCAM model
ASC – BRL-CAD Geometry File (old ASCII format)
ASM – Solidedge Assembly, Pro/ENGINEER Assembly
BIN, BIM – Data Design System DDS-CAD
BREP – Open CASCADE 3D model (shape)
C3D – C3D Toolkit File Format
C3P - Construct3 Files
CCC – CopyCAD Curves
CCM – CopyCAD Model
CCS – CopyCAD Session
CAD – CadStd
CATDrawing – CATIA V5 Drawing document
CATPart – CATIA V5 Part document
CATProduct – CATIA V5 Assembly document
CATProcess – CATIA V5 Manufacturing document
cgr – CATIA V5 graphic representation file
ckd – KeyCreator CAD Modeling
ckt – KeyCreator CAD Modeling
CO – Ashlar-Vellum Cobalt – parametric drafting and 3D modeling
DRW – early Caddie drawing format, prior to Caddie changing to DWG
DFT – Solidedge Draft
DGN – MicroStation design file
DGK – Delcam Geometry
DMT – Delcam Machining Triangles
DXF – ASCII Drawing Interchange file format, AutoCAD
DWB – VariCAD drawing file
DWF – Autodesk's Web Design Format; AutoCAD & Revit can publish to this format; similar in concept to PDF files; Autodesk Design Review is the reader
DWG – Popular file format for Computer Aided Drafting applications, notably AutoCAD, Open Design Alliance applications, and Autodesk Inventor Drawing files
EASM – SolidWorks eDrawings assembly file
EDRW – eDrawings drawing file
EMB – Wilcom ES Designer Embroidery CAD file
EPRT – eDrawings part file
EscPcb – "esCAD pcb" data file by Electro-System (Japan)
EscSch – "esCAD sch" data file by Electro-System (Japan)
ESW – AGTEK format
EXCELLON – Excellon file
EXP – Drawing Express format
F3D – Autodesk Fusion 360 archive file
FCStd – Native file format of FreeCAD CAD/CAM package
FM – FeatureCAM Part File
FMZ – FormZ Project file
G – BRL-CAD Geometry File
GBR – Gerber file
GLM – KernelCAD model
GRB – T-FLEX CAD File
GRI – AppliCad GRIM-In file, readable text for importing roof and wall cladding job data generated by business management and accounting systems into the modelling/estimating program
GRO – AppliCad GRIM-Out file, readable text for exporting material and labour costing data and material lists generated by the modelling/estimating program to business management and accounting systems
IAM – Autodesk Inventor Assembly file
ICD – IronCAD 2D CAD file
IDW – Autodesk Inventor Drawing file
IFC – buildingSMART for sharing AEC and FM data
IGES – Initial Graphics Exchange Specification
Intergraph Standard File Formats – Intergraph
IO – Stud.io 3d model
IPN – Autodesk Inventor Presentation file
IPT – Autodesk Inventor Part file
JT – Jupiter Tesselation
MCD – Monu-CAD (Monument/Headstone Drawing file)
MDG – Model of Digital Geometric Kernel
model – CATIA V4 part document
OCD – Orienteering Computer Aided Design (OCAD) file
PAR – Solidedge Part
PIPE – PIPE-FLO Professional Piping system design file
PLN – ArchiCad project
PRT – NX (formerly Unigraphics), Pro/ENGINEER Part, CADKEY Part
PSM – Solidedge Sheet
PSMODEL – PowerSHAPE Model
PWI – PowerINSPECT File
PYT – Pythagoras File
SKP – SketchUp Model
RLF – ArtCAM Relief
RVM – AVEVA PDMS 3D Review model
RVT – Autodesk Revit project files
RFA – Autodesk Revit family files
RXF - AppliCad annotated 3D roof and wall geometry data in readable text form used to exchange 3D model geometry with other systems such as truss design software
S12 – Spirit file, by Softtech
SCAD – OpenSCAD 3D part model
SCDOC – SpaceClaim 3D Part/Assembly
SLDASM – SolidWorks Assembly drawing
SLDDRW – SolidWorks 2D drawing
SLDPRT – SolidWorks 3D part model
dotXSI – For Softimage
STEP – Standard for the Exchange of Product model data
STL – stereolithography data format used by various CAD systems and stereolithographic printing machines
STD – Power Vision Plus – Electricity Meter Data (Circutor)
TCT – TurboCAD drawing template
TCW – TurboCAD for Windows 2D and 3D drawing
UNV – I-DEAS (Integrated Design and Engineering Analysis Software) universal file
VC6 – Ashlar-Vellum Graphite – 2D and 3D drafting
VLM – Ashlar-Vellum Vellum, Vellum 2D, Vellum Draft, Vellum 3D, DrawingBoard
VS – Ashlar-Vellum Vellum Solids
WRL – Similar to STL, but includes color. Used by various CAD systems and 3D printing rapid prototyping machines. Also used for VRML models on the web.
X_B – Parasolids binary format
X_T – Parasolids
XE – Ashlar-Vellum Xenon – for associative 3D modeling
ZOFZPROJ – ZofzPCB 3D PCB model, containing mesh, netlist and BOM
Electronic design automation (EDA)
Electronic design automation (EDA), or electronic computer-aided design (ECAD), is specific to the field of electrical engineering.
BRD – Board file for EAGLE Layout Editor, a commercial PCB design tool
BSDL – Description language for testing through JTAG
CDL – Transistor-level netlist format for IC design
CPF – Power-domain specification in system-on-a-chip (SoC) implementation (see also UPF)
DEF – Gate-level layout
DSPF – Detailed Standard Parasitic Format, Analog-level parasitics of interconnections in IC design
EDIF – Vendor neutral gate-level netlist format
FSDB – Analog waveform format (see also Waveform viewer)
GDSII – stream format for the layout of integrated circuits
HEX – ASCII-coded binary format for memory dumps, such as Intel HEX (a record-parsing sketch appears after this list)
LEF – Library Exchange Format, physical abstract of cells for IC design
LIB – Library modeling (function, timing) format
MS12 – NI Multisim file
OASIS – Open Artwork System Interchange Standard
OpenAccess – Design database format with APIs
PSF – Cadence proprietary format to store simulation results/waveforms (2GB limit)
PSFXL – Cadence proprietary format to store simulation results/waveforms
SDC – Synopsys Design Constraints, format for synthesis constraints
SDF – Standard for gate-level timings
SPEF – Standard format for parasitics of interconnections in IC design
SPI, CIR – SPICE Netlist, device-level netlist and commands for simulation
SREC, S19 – S-record, ASCII-coded format for memory dumps
SST2 – Cadence proprietary format to store mixed-signal simulation results/waveforms
STIL – Standard Test Interface Language, IEEE1450-1999 standard for Test Patterns for IC
SV – SystemVerilog source file
S*P – Touchstone/EEsof Scattering parameter data file – multi-port blackbox performance, measurement or simulated
TLF – Contains timing and logical information about a collection of cells (circuit elements)
UPF – Standard for Power-domain specification in SoC implementation
V – Verilog source file
VCD – Standard format for digital simulation waveform
VHD, VHDL – VHDL source file
WGL – Waveform Generation Language, format for Test Patterns for IC
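As referenced at the HEX entry above, Intel HEX stores memory dumps as ASCII records of the form :LLAAAATT<data>CC. The sketch below is an illustrative Python parser (not tied to any particular EDA tool) that decodes one record and verifies its checksum:

    def parse_ihex_record(line):
        """Parse one Intel HEX record ':LLAAAATT<data>CC' and verify its checksum."""
        raw = bytes.fromhex(line.lstrip(":"))
        count = raw[0]                        # LL: number of data bytes
        addr = int.from_bytes(raw[1:3], "big")  # AAAA: 16-bit load address
        rectype = raw[3]                      # TT: record type (0 = data)
        data = raw[4:4 + count]
        # Two's-complement checksum: all bytes, including CC, sum to 0 mod 256.
        if sum(raw) % 256 != 0:
            raise ValueError("checksum mismatch")
        return count, addr, rectype, data

    # Canonical example record: 16 data bytes destined for address 0x0100.
    print(parse_ihex_record(":10010000214601360121470136007EFE09D2190140"))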
Test technology
Files output from Automatic Test Equipment or post-processed from such.
Standard Test Data Format
Database
4DB – 4D database Structure file
4DD – 4D database Data file
4DIndy – 4D database Structure Index file
4DIndx – 4D database Data Index file
4DR – 4D database Data resource file (in old 4D versions)
ACCDB – Microsoft Database (Microsoft Office Access 2007 and later)
ACCDE – Compiled Microsoft Database (Microsoft Office Access 2007 and later)
ADT – Sybase Advantage Database Server (ADS)
APR – Lotus Approach data entry & reports
BOX – Lotus Notes Post Office mail routing database
CHML – Krasbit Technologies Encrypted database file for 1 click integration between contact management software and the chameleon(tm) line of imaging workflow solutions
DAF – Digital Anchor data file
DAT – DOS Basic
DAT – Intersystems Caché database file
DB – Paradox
DB – SQLite
DBF – dBase II, III, IV and V, Clipper, Harbour/xHarbour, Fox/FoxPro, Oracle
DTA – Sage Sterling database file
EGT – EGT Universal Document, used to compress SQL databases to smaller files; may contain the original EGT database style
ESS – EGT SmartSense is a database of files and its compression style. Specific to EGT SmartSense
EAP – Enterprise Architect Project
FDB – Firebird Databases
FDB – Navision database file
FP, FP3, FP5, and FP7 – FileMaker Pro
FRM – MySQL table definition
GDB – Borland InterBase Databases
GTABLE – Google Drive Fusion Table
KEXI – Kexi database file (SQLite-based)
KEXIC – shortcut to a database connection for a Kexi databases on a server
KEXIS – shortcut to a Kexi database
LDB – Temporary database file, only existing when database is open
LIRS – Layered Integer Storage; stores integers with separator characters such as semicolons to create lists of data
MDA – Add-in file for Microsoft Access
MDB – Microsoft Access database
ADP – Microsoft Access project (used for accessing databases on a server)
MDE – Compiled Microsoft Database (Access)
MDF – Microsoft SQL Server Database
MYD – MySQL MyISAM table data
MYI – MySQL MyISAM table index
NCF – Lotus Notes configuration file
NSF – Lotus Notes database
NTF – Lotus Notes database design template
NV2 – QW Page NewViews object oriented accounting database
ODB – LibreOffice Base or OpenOffice Base database
ORA – Oracle tablespace files sometimes get this extension (also used for configuration files)
PCONTACT – WinIM Contact file
PDB – Palm OS Database
PDI – Portable Database Image
PDX – Corel Paradox database management
PRC – Palm OS resource database
SQL – bundled SQL queries
REC – GNU recutils database
REL – Sage Retrieve 4GL data file
RIN – Sage Retrieve 4GL index file
SDB – StarOffice's StarBase
SDF – SQL Compact Database file
sqlite – SQLite
UDL – Universal Data Link
waData – Wakanda (software) database Data file
waIndx – Wakanda (software) database Index file
waModel – Wakanda (software) database Model file
waJournal – Wakanda (software) database Journal file
WDB – Microsoft Works Database
WMDB – Windows Media Database file – The CurrentDatabase_360.wmdb file can contain file name, file properties, music, video, photo and playlist information.
Big Data (Distributed)
Avro – row-oriented data format suited to ingestion of record-based attributes; its distinguishing characteristic is that the schema is stored with the data, enabling schema evolution.
Parquet - Columnar data storage. It is typically used within the Hadoop ecosystem.
ORC – columnar data storage similar to Parquet, but with better data compression and schema-evolution handling.
Desktop publishing
AI – Adobe Illustrator
AVE / ZAVE – Aquafadas
CDR – CorelDRAW
CHP / pub / STY / CAP / CIF / VGR / FRM – Ventura Publisher – Xerox (DOS / GEM)
CPT – Corel Photo-Paint
DTP – Greenstreet Publisher, GST PressWorks
FM – Adobe FrameMaker
GDRAW – Google Drive Drawing
ILDOC – Broadvision Quicksilver document
INDD – Adobe InDesign
MCF – FotoInsight Designer
PDF – Adobe Acrobat or Adobe Reader
PMD – Adobe PageMaker
PPP – Serif PagePlus
PSD – Adobe Photoshop
PUB – Microsoft Publisher
QXD – QuarkXPress
SLA / SCD – Scribus
XCF – File format used by the GIMP, as well as other programs
Document
These files store formatted text and plain text.
0 – Plain Text Document, normally used for licensing
1ST – Plain Text Document, normally preceded by the words "README" (README.1ST)
600 – Plain Text Document, used in UNZIP history log
602 – Text602 document
ABW – AbiWord document
ACL – MS Word AutoCorrect List
AFP – Advanced Function Presentation – IBM
AMI – Lotus Ami Pro
AmigaGuide – Amiga hypertext document format
ANS – American National Standards Institute (ANSI) text
ASC – ASCII text
AWW – Ability Write
CCF – Color Chat 1.0
CSV – ASCII text as comma-separated values, used in spreadsheets and database management systems
CWK – ClarisWorks-AppleWorks document
DBK – DocBook XML sub-format
DITA – Darwin Information Typing Architecture document
DOC – Microsoft Word document
DOCM – Microsoft Word macro-enabled document
DOCX – Office Open XML document
DOT – Microsoft Word document template
DOTX – Office Open XML text document template
DWD – DavkaWriter Heb/Eng word processor file
EGT – EGT Universal Document
EPUB – EPUB open standard for e-books
EZW – Reagency Systems easyOFFER document
FDX – Final Draft
FTM – Fielded Text Meta
FTX – Fielded Text (Declared)
GDOC – Google Drive Document
HTML – HyperText Markup Language (.html, .htm)
HWP – Haansoft (Hancom) Hangul Word Processor document
HWPML – Haansoft (Hancom) Hangul Word Processor Markup Language document
LOG – Text log file
LWP – Lotus Word Pro
MBP – metadata for Mobipocket documents
MD – Markdown text document
ME – Plain text document normally preceded by the word "READ" (READ.ME)
MCW – Microsoft Word for Macintosh (versions 4.0–5.1)
Mobi – Mobipocket documents
NB – Mathematica Notebook
nb – Nota Bene Document (Academic Writing Software)
NBP – Mathematica Player Notebook
NEIS – document of the South Korean student school-record writing program (학교생활기록부 작성 프로그램)
NT – N-Triples RDF container (.nt)
NQ – N-Quads RDF container (.nq)
ODM – OpenDocument master document
ODOC – Synology Drive Office Document
ODT – OpenDocument text document
OSHEET – Synology Drive Office Spreadsheet
OTT – OpenDocument text document template
OMM – OmmWriter text document
PAGES – Apple Pages document
PAP – Papyrus word processor document
PDAX – Portable Document Archive (PDA) document index file
PDF – Portable Document Format
QUOX – Question Object File Format for Quobject Designer or Quobject Explorer
Radix-64 – Base64-encoded text, as used by OpenPGP ASCII armor
RTF – Rich Text document
RPT – Crystal Reports
SDW – StarWriter text document, used in earlier versions of StarOffice
SE – Shuttle Document
STW – OpenOffice.org XML (obsolete) text document template
Sxw – OpenOffice.org XML (obsolete) text document
TeX – TeX
INFO – Texinfo
Troff – Unix typesetting and document-formatting system
TXT – ASCII or Unicode plain text file
UOF – Uniform Office Format
UOML – Unique Object Markup Language
VIA – Revoware VIA Document Project File
WPD – WordPerfect document
WPS – Microsoft Works document
WPT – Microsoft Works document template
WRD – WordIt! document
WRF – ThinkFree Write
WRI – Microsoft Write document
XHTML (xhtml, xht) – eXtensible HyperText Markup Language
XML – eXtensible Markup Language
XPS – Open XML Paper Specification
Financial records
MYO – MYOB Limited (Windows) File
MYOB – MYOB Limited (Mac) File
TAX – TurboTax File
YNAB – You Need a Budget (YNAB) File
Financial data transfer formats
Interactive Financial Exchange (IFX) – XML-based specification for various forms of financial transactions
Open Financial Exchange (.ofx) – open standard supported by CheckFree and Microsoft and partly by Intuit; SGML and later XML based
QFX – proprietary pay-only format used only by Intuit
Quicken Interchange Format (.qif) – open standard formerly supported by Intuit
Font file
ABF – Adobe Binary Screen Font
AFM – Adobe Font Metrics
BDF – Bitmap Distribution Format
BMF – ByteMap Font Format
BRFNT - Binary Revolution Font Format
FNT – Bitmapped Font – Graphics Environment Manager (GEM)
FON – Bitmapped Font – Microsoft Windows
MGF – MicroGrafx Font
OTF – OpenType Font
PCF – Portable Compiled Format
PostScript Font – Type 1, Type 2
PFA – Printer Font ASCII
PFB – Printer Font Binary – Adobe
PFM – Printer Font Metrics – Adobe
AFM – Adobe Font Metrics
FOND – Font Description resource – Mac OS
SFD – FontForge spline font database Font
SNF – Server Normal Format
TDF – TheDraw Font
TFM – TeX font metric
TTF (.ttf, .ttc) – TrueType Font
UFO – Unified Font Object is a cross-platform, cross-application, human readable, future proof format for storing font data.
WOFF – Web Open Font Format
Geographic information system
ASC – ASCII point of interest (POI) text file
APR – ESRI ArcView 3.3 and earlier project file
DEM – USGS DEM file format
E00 – ARC/INFO interchange file format
GeoJSON – geographically located data in JSON object notation
GeoTIFF – Geographically located raster data
GML – Geography Markup Language file
GPX – XML-based interchange format
ITN – TomTom Itinerary format
MXD – ESRI ArcGIS project file, 8.0 and higher
NTF – National Transfer Format file
OV2 – TomTom POI overlay file
SHP – ESRI shapefile
TAB – MapInfo Table file format
World TIFF – geographically located raster data: a text "world file" giving corner coordinates, raster cells per unit, and rotation (a coordinate-conversion sketch appears after this list)
DTED – Digital Terrain Elevation Data
KML – Keyhole Markup Language, XML-based
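As referenced at the World TIFF entry above, the accompanying six-line world file defines an affine transform from pixel indices to map coordinates, x = A·col + B·row + C and y = D·col + E·row + F. A small Python sketch under the usual world-file line ordering (the file name is a placeholder):

    def pixel_to_world(world_file, col, row):
        """Apply the six-parameter world-file affine transform to a pixel index."""
        with open(world_file) as f:
            a, d, b, e, c, f0 = (float(line) for line in f)  # line order: A, D, B, E, C, F
        return a * col + b * row + c, d * col + e * row + f0

    # print(pixel_to_world("image.tfw", 0, 0))  # "image.tfw" is a placeholder path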
Graphical information organizers
3DT – 3D Topicscape: the database in which the meta-data of a 3D Topicscape is held; it is a form of 3D concept map (like a 3D mind-map) used to organize ideas, information, and computer files
ATY – 3D Topicscape file, produced when an association type is exported; used to permit round-trip (export Topicscape, change files and folders as desired, re-import to 3D Topicscape)
CAG – Linear Reference System
FES – 3D Topicscape file, produced when a fileless occurrence in 3D Topicscape is exported to Windows. Used to permit round-trip (export Topicscape, change files and folders as desired, re-import them to 3D Topicscape)
MGMF – MindGenius Mind Mapping Software file format
MM – FreeMind mind map file (XML)
MMP – Mind Manager mind map file
TPC – 3D Topicscape file, produced when an inter-Topicscape topic link file is exported to Windows; used to permit round-trip (export Topicscape, change files and folders as desired, re-import to 3D Topicscape)
Graphics
Color palettes
ACT – Adobe Color Table. Contains a raw color palette of 256 24-bit RGB color values (a reading sketch appears after this list).
ASE – Adobe Swatch Exchange. Used by Adobe Photoshop, Illustrator, and InDesign.
GPL – GIMP palette file. Uses a text representation of color names and RGB values. Various open source graphical editors can read this format, including GIMP, Inkscape, Krita, KolourPaint, Scribus, CinePaint, and MyPaint.
PAL – Microsoft RIFF palette file
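As noted at the ACT entry above, an Adobe Color Table is essentially 256 raw RGB triplets (768 bytes; some files carry a four-byte trailer with a color count and transparency index). A minimal Python reading sketch, with a placeholder file name:

    def read_act(path):
        """Read an Adobe Color Table: 256 RGB triplets in the first 768 bytes."""
        with open(path, "rb") as f:
            data = f.read()
        triplets = data[:768]  # any 4-byte trailer (color count, transparency) is ignored here
        return [tuple(triplets[i:i + 3]) for i in range(0, len(triplets), 3)]

    # palette = read_act("example.act")  # "example.act" is a placeholder path
    # print(len(palette), palette[0])    # 256 colors; first entry as an (R, G, B) tuple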
Color management
ICC/ICM – Color profile conforming the specification of the ICC.
Raster graphics
Raster or bitmap files store images as a group of pixels.
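To make the "group of pixels" idea concrete, this Python sketch writes a tiny image in the plain-text PGM (Portable graymap) variant listed below; the pixel values are arbitrary sample data:

    width, height = 4, 2
    pixels = [0, 64, 128, 255,
              255, 128, 64, 0]        # one gray value per pixel, row by row

    with open("tiny.pgm", "w") as f:  # "tiny.pgm" is a placeholder name
        f.write("P2\n%d %d\n255\n" % (width, height))  # header: magic, dimensions, max value
        for r in range(height):
            row = pixels[r * width:(r + 1) * width]
            f.write(" ".join(str(v) for v in row) + "\n")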
ART – America Online proprietary format
BLP – Blizzard Entertainment proprietary texture format
BMP – Microsoft Windows Bitmap formatted image
BTI – Nintendo proprietary texture format
CD5 – Chasys Draw IES image
CIT – Intergraph monochrome bitmap format
CPT – Corel PHOTO-PAINT image
CR2 – Canon camera raw format; photos have this on some Canon cameras if the quality RAW is selected in camera settings
CLIP – CLIP STUDIO PAINT format
CPL – Windows control panel file
DDS – DirectX texture file
DIB – Device-Independent Bitmap graphic
DjVu – DjVu for scanned documents
EGT – EGT Universal Document, used in EGT SmartSense to compress PNG files into an even smaller file
Exif – Exchangeable image file format (Exif) is a specification for the image format used by digital cameras
GIF – CompuServe's Graphics Interchange Format
GRF – Zebra Technologies proprietary format
ICNS – format for icons in macOS. Contains bitmap images at multiple resolutions and bitdepths with alpha channel.
ICO – a format used for icons in Microsoft Windows. Contains small bitmap images at multiple resolutions and bitdepths with 1-bit transparency or alpha channel.
IFF (.iff, .ilbm, .lbm) – ILBM
JNG – a single-frame MNG using JPEG compression and possibly an alpha channel
JPEG, JFIF (.jpg or .jpeg) – Joint Photographic Experts Group; a lossy image format widely used to display photographic images
JP2 – JPEG2000
JPS – JPEG Stereo
KRA – Krita image file
LBM – Deluxe Paint image file
MAX – ScanSoft PaperPort document
MIFF – ImageMagick's native file format
MNG – Multiple-image Network Graphics, the animated version of PNG
MSP – a format used by old versions of Microsoft Paint; replaced by BMP in Microsoft Windows 3.0
NITF – A U.S. Government standard commonly used in Intelligence systems
OTB – Over The Air bitmap, a specification designed by Nokia for black and white images for mobile phones
PBM – Portable bitmap
PC1 – Low resolution, compressed Degas picture file
PC2 – Medium resolution, compressed Degas picture file
PC3 – High resolution, compressed Degas picture file
PCF – Pixel Coordination Format
PCX – a lossless format used by ZSoft's PC Paint, popular for a time on DOS systems.
PDN – Paint.NET image file
PGM – Portable graymap
PI1 – Low resolution, uncompressed Degas picture file
PI2 – Medium resolution, uncompressed Degas picture file; also Portrait Innovations encrypted image format
PI3 – High resolution, uncompressed Degas picture file
PICT, PCT – Apple Macintosh PICT image
PNG – Portable Network Graphic (lossless, recommended for display and editing of graphic images)
PNM – Portable anymap graphic bitmap image
PNS – PNG Stereo
PPM – Portable Pixmap (Pixel Map) image
PSB – Adobe Photoshop Big image file (for large files)
PSD, PDD – Adobe Photoshop Document
PSP – Paint Shop Pro image
PX – Pixel image editor image file
PXM – Pixelmator image file
PXR – Pixar Image Computer image file
QFX – QuickLink Fax image
RAW – General term for minimally processed image data (acquired by a digital camera)
RLE – a run-length encoding image
SCT – Scitex Continuous Tone image file
SGI, RGB, INT, BW – Silicon Graphics Image
TGA (.tga, .targa, .icb, .vda, .vst, .pix) – Truevision TGA (Targa) image
TIFF (.tif or .tiff) – Tagged Image File Format (usually lossless, but many variants exist, including lossy ones)
TIFF/EP (.tif or .tiff) – Tag Image File Format / Electronic Photography, ISO 12234-2; tends to be used as a basis for other formats rather than in its own right.
VTF – Valve Texture Format
XBM – X Window System Bitmap
XCF – GIMP image (from Gimp's origin at the eXperimental Computing Facility of the University of California)
XPM – X Window System Pixmap
ZIF – Zoomable/Zoomify Image Format (a web-friendly, TIFF-based, zoomable image format)
Vector graphics
Vector graphics use geometric primitives such as points, lines, curves, and polygons to represent images.
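In contrast to rasters, a vector file records the primitives themselves. This Python sketch emits a minimal SVG document (see the SVG entry below) containing one line and one circle; the coordinates are arbitrary sample data:

    svg = (
        '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">\n'
        '  <line x1="10" y1="10" x2="90" y2="90" stroke="black"/>\n'
        '  <circle cx="50" cy="50" r="20" fill="none" stroke="red"/>\n'
        '</svg>\n'
    )
    with open("tiny.svg", "w") as f:  # "tiny.svg" is a placeholder name
        f.write(svg)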
3DV – 3-D wireframe graphics by Oscar Garcia
AMF – Additive Manufacturing File Format
AWG – Ability Draw
AI – Adobe Illustrator Document
CGM – Computer Graphics Metafile, an ISO Standard
CDR – CorelDRAW Document
CMX – CorelDRAW vector image
DP – Drawing Program file for PERQ
DRAWIO – Diagrams.net offline diagram
DXF – ASCII Drawing Interchange file Format, used in AutoCAD and other CAD-programs
E2D – 2-dimensional vector graphics used by the editor which is included in JFire
EGT – EGT Universal Document, EGT Vector Draw images are used to draw vector to a website
EPS – Encapsulated Postscript
FS – FlexiPro file
GBR – Gerber file
ODG – OpenDocument Drawing
MOVIE.BYU
RenderMan
SVG – Scalable Vector Graphics, employs XML
Scene description languages (3D vector image formats)
STL – stereolithography data format (see STL (file format)) used by various CAD systems and stereolithographic printing machines. See above.
VRML (.wrl) – Virtual Reality Modeling Language, for the creation of 3D viewable web images
X3D
SXD – OpenOffice.org XML (obsolete) Drawing
TGAX - Texture format used by Zwift
V2D – voucher design used by the voucher management included in JFire
VDOC – Vector format used in AnyCut, CutStorm, DrawCut, DragonCut, FutureDRAW, MasterCut, SignMaster, VinylMaster software by Future Corporation
VSD – Vector format used by Microsoft Visio
VSDX – Vector format used by MS Visio and opened by VSDX Annotator
VND – Vision numeric Drawing file used in TypeEdit, Gravostyle.
WMF – Windows Meta File
EMF – Enhanced (Windows) MetaFile, an extension to WMF
ART – Xara – Drawing (superseded by XAR)
XAR – Xara – Drawing
3D graphics
3D graphics are 3D models that allow building models in real-time or non-real-time 3D rendering.
3DMF – QuickDraw 3D Metafile (.3dmf)
3DM – OpenNURBS Initiative 3D Model (used by Rhinoceros 3D) (.3dm)
3MF – Microsoft 3D Manufacturing Format (.3mf)
3DS – legacy 3D Studio Model (.3ds)
ABC – Alembic (computer graphics)
AC – AC3D Model (.ac)
AMF – Additive Manufacturing File Format
AN8 – Anim8or Model (.an8)
AOI – Art of Illusion Model (.aoi)
ASM – PTC Creo assembly (.asm)
B3D – Blitz3D Model (.b3d)
BLEND – Blender (.blend)
BLOCK – Blender encrypted blend files (.block)
BMD3 – Nintendo GameCube first-party J3D proprietary model format (.bmd)
BDL4 – Nintendo GameCube and Wii first-party J3D proprietary model format (2002, 2006–2010) (.bdl)
BRRES – Nintendo Wii first-party proprietary model format 2010+ (.brres)
BFRES – Nintendo Wii U and later Switch first-party proprietary model format
C4D – Cinema 4D (.c4d)
Cal3D – Cal3D (.cal3d)
CCP4 – X-ray crystallography voxels (electron density)
CFL – Compressed File Library (.cfl)
COB – Caligari Object (.cob)
CORE3D – Coreona 3D Coreona 3D Virtual File(.core3d)
CTM – OpenCTM (.ctm)
DAE – COLLADA (.dae)
DFF – RenderWare binary stream, commonly used by Grand Theft Auto III-era games as well as other RenderWare titles
DPM – deepMesh (.dpm)
DTS – Torque Game Engine (.dts)
EGG – Panda3D Engine
FACT – Electric Image (.fac)
FBX – Autodesk FBX (.fbx)
G – BRL-CAD geometry (.g)
GLB – the binary form of glTF, required for loading in Facebook 3D Posts (.glb)
GLM – Ghoul Mesh (.glm)
glTF – the JSON-based standard developed by Khronos Group (.gltf)
IO - Bricklink Stud.io 2.0 Model File (.io)
IOB – Imagine (3D modeling software) (.iob)
JAS – Cheetah 3D file (.jas)
JMESH - Universal mesh data exchange file based on JMesh specification (.jmsh for text/JSON based, .bmsh for binary/UBJSON based)
LDR - LDraw Model File (.ldr)
LWO – Lightwave Object (.lwo)
LWS – Lightwave Scene (.lws)
LXF – LEGO Digital Designer Model file (.lxf)
LXO – Luxology Modo (software) file (.lxo)
M3D – Model3D, universal, engine-neutral format (.m3d)
MA – Autodesk Maya ASCII File (.ma)
MAX – Autodesk 3D Studio Max file (.max)
MB – Autodesk Maya Binary File (.mb)
MPD - LDraw Multi-Part Document Model File (.mpd)
MD2 – Quake 2 model format (.md2)
MD3 – Quake 3 model format (.md3)
MD5 – Doom 3 model format (.md5)
MDX – Blizzard Entertainment's own model format (.mdx)
MESH – New York University mesh format (.m)
MESH – Meshwork Model (.mesh)
MM3D – Misfit Model 3d (.mm3d)
MPO – Multi-Picture Object – This JPEG standard is used for 3d images, as with the Nintendo 3DS
MRC – voxels in cryo-electron microscopy
NIF – Gamebryo NetImmerse File (.nif)
OBJ – Wavefront .obj file (.obj)
OFF – OFF Object file format (.off)
OGEX – Open Game Engine Exchange (OpenGEX) format (.ogex)
PLY – Polygon File Format / Stanford Triangle Format (.ply)
PRC – Adobe PRC (embedded in PDF files)
PRT – PTC Creo part (.prt)
POV – POV-Ray document (.pov)
R3D – Realsoft 3D (Real-3D) (.r3d)
RWX – RenderWare Object (.rwx)
SIA – Nevercenter Silo Object (.sia)
SIB – Nevercenter Silo Object (.sib)
SKP – Google Sketchup file (.skp)
SLDASM – SolidWorks Assembly Document (.sldasm)
SLDPRT – SolidWorks Part Document (.sldprt)
SMD – Valve Studiomdl Data format (.smd)
U3D – Universal 3D format (.u3d)
USD – Universal Scene Description (.usd)
USDA – Universal Scene Description, human-readable text format (.usda)
USDC – Universal Scene Description, binary format (.usdc)
USDZ – Universal Scene Description Zip (.usdz)
VIM – Revizto visual information model format (.vimproj)
VRML97 – VRML Virtual reality modeling language (.wrl)
VUE – Vue scene file (.vue)
VWX – Vectorworks (.vwx)
WINGS – Wings3D (.wings)
W3D – Westwood 3D Model (.w3d)
X – DirectX 3D Model (.x)
X3D – Extensible 3D (.x3d)
Z3D – Zmodeler (.z3d)
ZBMX - Mecabricks Blender Add-On (.zbmx)
Links and shortcuts
Alias (Mac OS)
JNLP – Java Network Launching Protocol, an XML file used by Java Web Start for starting Java applets over the Internet
LNK – binary-format file shortcut in Microsoft Windows 95 and later
APPREF-MS – File shortcut format used by ClickOnce
NAL – ZENworks instant shortcut (launches an .EXE that is not on the local C: drive)
URL – INI file pointing to a URL bookmarks/Internet shortcut in Microsoft Windows
WEBLOC – Property list file pointing to a URL bookmarks/Internet shortcut in macOS
SYM – Symbolic link
.desktop – Desktop entry on Linux Desktop environments
Mathematical
Harwell-Boeing file format – a format designed to store sparse matrices
MML – MathML – Mathematical Markup Language
ODF – OpenDocument Math Formula
SXM – OpenOffice.org XML (obsolete) Math Formula
Object code, executable files, shared and dynamically linked libraries
.8BF files – plugins for some photo editing programs including Adobe Photoshop, Paint Shop Pro, GIMP and Helicon Filter.
.a – Objective C native static library
a.out – (no suffix for executable image, .o for object files, .so for shared object files) classic UNIX object format, now often superseded by ELF
APK – Android Application Package
APP – A folder found on macOS systems containing program code and resources, appearing as one file.
BAC – an executable image for the RSTS/E system, created using the BASIC-PLUS COMPILE command
BPL – a Win32 PE file created with Borland Delphi or C++Builder containing a package.
Bundle – a Macintosh plugin created with Xcode or make which holds executable code, data files, and folders for that code.
.Class – used in Java
COFF (no suffix for executable image, .o for object files) – UNIX Common Object File Format, now often superseded by ELF
COM files – commands used in DOS and CP/M
DCU – Delphi compiled unit
DLL – library used in Windows and OS/2 to store data, resources and code.
DOL – the format used by the GameCube and Wii, short for Dolphin, which was the codename of the GameCube.
.EAR – archives of Java enterprise applications
ELF – (no suffix for executable image, .o for object files, .so for shared object files) used in many modern Unix and Unix-like systems, including Solaris, other System V Release 4 derivatives, Linux, and BSD
expander (see bundle)
DOS executable (.exe – used in DOS)
.IPA – Apple iOS application package; a form of ZIP archive
JEFF – a file format allowing execution directly from static memory
.JAR – archives of Java class files
.XPI – PKZIP archive that can be run by Mozilla web browsers to install software.
Mach-O – (no suffix for executable image, .o for object files, .dylib and .bundle for shared object files) Mach-based systems, notably native format of macOS, iOS, watchOS, and tvOS
NetWare Loadable Module (.NLM) – the native 32-bit binaries compiled for Novell's NetWare Operating System (versions 3 and newer)
New Executable (.EXE – used in multitasking ("European") MS-DOS 4.0, 16-bit Microsoft Windows, and OS/2)
.o – un-linked object files directly from the compiler
Portable Executable (.EXE, – used in Microsoft Windows and some other systems)
Preferred Executable Format – (classic Mac OS for PowerPC applications; compatible with macOS via a classic (Mac OS X) emulator)
RLL – used in Microsoft operating systems together with a DLL file to store program resources
.s1es – Executable used for S1ES learning system.
.so – shared library, typically ELF
Value Added Process (.VAP) – the native 16-bit binaries compiled for Novell's NetWare Operating System (version 2, NetWare 286, Advanced NetWare, etc.)
.WAR – archives of Java Web applications
XBE – Xbox executable
.XAP – Windows Phone package
XCOFF – (no suffix for executable image, .o for object files, .a for shared object files) extended COFF, used in AIX
XEX – Xbox 360 executable
LIST – variable list
Object extensions
.VBX – Visual Basic extensions
.OCX – Object Control extensions
.TLB – Windows Type Library
Page description language
DVI – Device independent format
EGT – Universal Document can be used to store CSS type styles (*.egt)
PLD
PCL – Printer Command Language (Hewlett-Packard)
PDF – Portable Document Format
PostScript (.ps, .ps.gz)
SNP – Microsoft Access Report Snapshot
XPS
XSL-FO (Formatting Objects)
Configurations, Metadata
CSS – Cascading Style Sheets
XSLT, XSL – XML Style Sheet (.xslt, .xsl)
TPL – Web template (.tpl)
Personal information manager
MSG – Microsoft Outlook item (message, task, appointment, etc.)
ORG – Lotus Organizer PIM package
ORG – Emacs Org mode document: outlines, contacts, calendar, email integration
PST, OST – Microsoft Outlook email communication
SC2 – Microsoft Schedule+ calendar
Presentation
GSLIDES – Google Drive Presentation
KEY, KEYNOTE – Apple Keynote Presentation
NB – Mathematica Slideshow
NBP – Mathematica Player slideshow
ODP – OpenDocument Presentation
OTP – OpenDocument Presentation template
PEZ – Prezi Desktop Presentation
POT – Microsoft PowerPoint template
PPS – Microsoft PowerPoint Show
PPT – Microsoft PowerPoint Presentation
PPTX – Office Open XML Presentation
PRZ – Lotus Freelance Graphics
SDD – StarOffice's StarImpress
SHF – ThinkFree Show
SHOW – Haansoft(Hancom) Presentation software document
SHW – Corel Presentations slide show creation
SLP – Logix-4D Manager Show Control Project
SSPSS – SongShow Plus Slide Show
STI – OpenOffice.org XML (obsolete) Presentation template
SXI – OpenOffice.org XML (obsolete) Presentation
THMX – Microsoft PowerPoint theme template
WATCH – Dataton Watchout Presentation
Project management software
MPP – Microsoft Project
Reference management software
Formats of files used for bibliographic information (citation) management.
bib – BibTeX
enl – EndNote
ris – Research Information Systems RIS (file format)
Scientific data (data exchange)
FITS (Flexible Image Transport System) – standard data format for astronomy (.fits)
Silo – a storage format for visualization developed at Lawrence Livermore National Laboratory
SPC – spectroscopic data
EAS3 – binary format for structured data
EOSSA – Electro-Optic Space Situational Awareness format
OST (Open Spatio-Temporal) – extensible, mainly images with related data, or just pure data; meant as an open alternative for microscope images
CCP4 – X-ray crystallography voxels (electron density)
MRC – voxels in cryo-electron microscopy
HITRAN – spectroscopic data with one optical/infrared transition per line in the ASCII file (.hit)
.root – hierarchical platform-independent compressed binary format used by ROOT
Simple Data Format (SDF) – a platform-independent, precision-preserving binary data I/O format capable of handling large, multi-dimensional arrays.
MYD – Everfine LEDSpec software file for LED measurements
CSDM (Core Scientific Dataset Model) – model for multi-dimensional and correlated datasets from various spectroscopies, diffraction, microscopy, and imaging techniques (.csdf, .csdfe).
Multi-domain
NetCDF – Network common data format
HDR, HDF, H4, H5 – Hierarchical Data Format
SDXF – (Structured Data Exchange Format)
CDF – Common Data Format
CGNS – CFD General Notation System
FMF – Full-Metadata Format
Meteorology
GRIB – Grid in Binary, WMO format for weather model data
BUFR – WMO format for weather observation data
PP – UK Met Office format for weather model data
NASA-Ames – Simple text format for observation data. First used in aircraft studies of the atmosphere.
Chemistry
CML – Chemical Markup Language (CML) (.cml)
Chemical table file (CTab) (.mol, .sd, .sdf)
Joint Committee on Atomic and Molecular Physical Data (JCAMP) (.dx, .jdx)
Simplified molecular input line entry specification (SMILES) (.smi)
Mathematics
graph6, sparse6 – ASCII encoding of Adjacency matrices (.g6, .s6)
Biology
Molecular biology and bioinformatics:
AB1 – In DNA sequencing, chromatogram files used by instruments from Applied Biosystems
ACE – A sequence assembly format
ASN.1 – Abstract Syntax Notation One, an International Organization for Standardization (ISO) data representation format used to achieve interoperability between platforms. NCBI uses ASN.1 for the storage and retrieval of data such as nucleotide and protein sequences, structures, genomes, and PubMed records.
BAM – Binary Alignment/Map format (compressed SAM format)
BCF – Binary compressed VCF format
BED – The browser extensible display format is used for describing genes and other features of DNA sequences
CAF – Common Assembly Format for sequence assembly
CRAM – compressed file format for storing biological sequences aligned to a reference sequence
DDBJ – The flatfile format used by the DDBJ to represent database records for nucleotide and peptide sequences from DDBJ databases.
EMBL – The flatfile format used by the EMBL to represent database records for nucleotide and peptide sequences from EMBL databases.
FASTA – The FASTA format, for sequence data; sometimes also given as FNA or FAA (Fasta Nucleic Acid or Fasta Amino Acid). A minimal parsing sketch appears after this list.
FASTQ – The FASTQ format, for sequence data with quality. Sometimes also given as QUAL.
GCPROJ – The Genome Compiler project. Advanced format for genetic data to be designed, shared and visualized.
GenBank – The flatfile format used by the NCBI to represent database records for nucleotide and peptide sequences from the GenBank and RefSeq databases
GFF – The General feature format is used to describe genes and other features of DNA, RNA, and protein sequences
GTF – The Gene transfer format is used to hold information about gene structure
MAF – The Multiple Alignment Format stores multiple alignments for whole-genome to whole-genome comparisons
NCBI ASN.1 – Structured ASN.1 format used at National Center for Biotechnology Information for DNA and protein data
NEXUS – The Nexus file encodes mixed information about genetic sequence data in a block structured format
NeXML – XML format for phylogenetic trees
NWK – The Newick tree format represents graph-theoretical trees with edge lengths using parentheses and commas; it is widely used to hold phylogenetic trees.
PDB – structures of biomolecules deposited in Protein Data Bank, also used to exchange protein and nucleic acid structures
PHD – Phred output, from the base-calling software Phred
PLN – Protein Line Notation used in proteax software specification
SAM – Sequence Alignment Map format, in which the results of the 1000 Genomes Project will be released
SBML – The Systems Biology Markup Language is used to store biochemical network computational models
SCF – Staden chromatogram files used to store data from DNA sequencing
SFF – Standard Flowgram Format
SRA – format used by the National Center for Biotechnology Information Short Read Archive to store high-throughput DNA sequence data
Stockholm – The Stockholm format for representing multiple sequence alignments
Swiss-Prot – The flatfile format used to represent database records for protein sequences from the Swiss-Prot database
VCF – Variant Call Format, a standard created by the 1000 Genomes Project that lists and annotates the entire collection of human variants (with the exception of approximately 1.6 million variants).
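As referenced at the FASTA entry above, a FASTA file alternates ">" header lines with sequence lines. A minimal illustrative Python parser (the file name is a placeholder):

    def read_fasta(path):
        """Yield (header, sequence) pairs from a FASTA file."""
        header, seq = None, []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.startswith(">"):
                    if header is not None:
                        yield header, "".join(seq)
                    header, seq = line[1:], []
                elif line:
                    seq.append(line)
        if header is not None:
            yield header, "".join(seq)

    # for name, seq in read_fasta("example.fasta"):  # "example.fasta" is a placeholder
    #     print(name, len(seq))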
Biomedical imaging
Digital Imaging and Communications in Medicine (DICOM) (.dcm)
Neuroimaging Informatics Technology Initiative (NIfTI)
.nii – single-file (combined data and meta-data) style
.nii.gz – gzip-compressed, used transparently by some software, notably the FMRIB Software Library (FSL)
.gii – single-file (combined data and meta-data) style; NIfTI offspring for brain surface data
.img,.hdr – dual-file (separate data and meta-data, respectively) style
AFNI data, meta-data (.BRIK,.HEAD)
Massachusetts General Hospital imaging format, used by the FreeSurfer brain analysis package
.MGH – uncompressed
.MGZ – zip-compressed
Analyze data, meta-data (.img,.hdr)
Medical Imaging NetCDF (MINC) format, previously based on NetCDF; since version 2.0, based on HDF5 (.mnc)
Biomedical signals (time series)
ACQ – AcqKnowledge format for Windows/PC from Biopac Systems Inc., Goleta, CA, USA
ADICHT – LabChart format from ADInstruments Pty Ltd, Bella Vista NSW, Australia
BCI2000 – The BCI2000 project, Albany, NY, USA
BDF – BioSemi data format from BioSemi B.V. Amsterdam, Netherlands
BKR – The EEG data format developed at the University of Technology Graz, Austria
CFWB – Chart Data Format from ADInstruments Pty Ltd, Bella Vista NSW, Australia
DICOM Waveform – an extension of DICOM for storing waveform data
ecgML – A markup language for electrocardiogram data acquisition and analysis
EDF/EDF+ – European Data Format
FEF – File Exchange Format for Vital signs, CEN TS 14271
GDF v1.x – The General Data Format for biomedical signals, version 1.x
GDF v2.x – The General Data Format for biomedical signals, version 2.x
HL7aECG – Health Level 7 v3 annotated ECG
MFER – Medical waveform Format Encoding Rules
OpenXDF – Open Exchange Data Format from Neurotronics, Inc., Gainesville, FL, USA
SCP-ECG – Standard Communication Protocol for Computer assisted electrocardiography EN1064:2007
SIGIF – A digital SIGnal Interchange Format with application in neurophysiology
WFDB – Format of Physiobank
XDF – eXtensible Data Format
Other biomedical formats
Health Level 7 (HL7) – a framework for exchange, integration, sharing, and retrieval of health information electronically
xDT – a family of data exchange formats for medical records
Biometric formats
CBF – Common Biometric Format, based on CBEFF 2.0 (Common Biometric Exchange Formats Framework)
EBF – Extended Biometric Format, based on CBF but with S/MIME encryption support and semantic extensions
CBFX – XML Common Biometric Format, based upon XCBF 1.1 (OASIS XML Common Biometric Format)
EBFX – XML Extended Biometric Format, based on CBFX but with W3C XML Encryption support and semantic extensions
Source code for computer programs
ADB – Ada body
ADS – Ada specification
AHK – AutoHotkey script file
APPLESCRIPT (.applescript) – see SCPT
AS – Adobe Flash ActionScript File
AU3 – AutoIt version 3
BAT – Batch file
BAS – QBasic & QuickBASIC
BTM – Batch file
CLASS – Compiled Java binary
CLJS – ClojureScript
CMD – Batch file
Coffee – CoffeeScript
C – C
CPP – C++
CS - C#
INO – Arduino sketch (program)
EGG – Chicken
EGT – EGT Asterisk Application Source File, EGT Universal Document
ERB – Embedded Ruby, Ruby on Rails Script File
GO – Go
HTA – HTML Application
IBI – Icarus script
ICI – ICI
IJS – J script
.ipynb – IPython Notebook
ITCL – Itcl
JS – JavaScript and JScript
JSFL – Adobe JavaScript language
.kt - Kotlin
LUA – Lua
M – Mathematica package file
MRC – mIRC Script
NCF – NetWare Command File (scripting for Novell's NetWare OS)
NUC – compiled script
NUD – external module written in C++
NUT – Squirrel
O – Compiled and optimized C/C++ binary
pde – Processing (programming language), Processing script
PHP – PHP
PHP? – PHP (? = version number)
PL – Perl
PM – Perl module
PS1 – Windows PowerShell shell script
PS1XML – Windows PowerShell format and type definitions
PSC1 – Windows PowerShell console file
PSD1 – Windows PowerShell data file
PSM1 – Windows PowerShell module file
PY – Python
PYC – Python byte code files
PYO – Python optimized byte code file
R – R scripts
r – REBOL scripts
RB – Ruby
RDP – RDP connection
red – Red scripts
RS – Rust (programming language)
SB2/SB3 – Scratch
SCPT – Applescript
SCPTD – See SCPT.
SDL – State Description Language
SH – Shell script
SYJS – SyMAT JavaScript
SYPY – SyMAT Python
TCL – Tcl
TNS – TI-Nspire code/file
TS - Typescript
VBS – Visual Basic Script
XPL – XProc script/pipeline
ebuild – Gentoo Linux Portage package build script
Security
Authentication and general encryption formats are listed here.
OpenPGP Message Format – used by Pretty Good Privacy, GNU Privacy Guard, and other OpenPGP software; can contain keys, signed data, or encrypted data; can be binary or text ("ASCII armored")
Certificates and keys
GXK – Galaxkey, an encryption platform for authorized, private and confidential email communication
OpenSSH private key (.ssh) – Secure Shell private key; format generated by ssh-keygen or converted from PPK with PuTTYgen
OpenSSH public key (.pub) – Secure Shell public key; format generated by ssh-keygen or PuTTYgen
PuTTY private key (.ppk) – Secure Shell private key, in the format generated by PuTTYgen instead of the format used by OpenSSH
nSign public key (.nSign) - nSign public key in a custom format
X.509
Distinguished Encoding Rules (.cer, .crt, .der) – stores certificates
PKCS#7 SignedData (.p7b, .p7c) – commonly appears without main data, just certificates or certificate revocation lists (CRLs)
PKCS#12 (.p12, .pfx) – can store public certificates and private keys
PEM – Privacy-enhanced Electronic Mail: the full format is not widely used, but it is often used to store Distinguished Encoding Rules in Base64 format (a decoding sketch appears after this list)
PFX – Microsoft predecessor of PKCS#12
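As noted at the PEM entry above, a PEM file is DER data wrapped in Base64 between BEGIN/END lines. A minimal Python decoding sketch (the certificate file name is a placeholder):

    import base64

    def pem_to_der(path):
        """Decode a PEM file: Base64 body between the -----BEGIN/END----- lines."""
        with open(path) as f:
            body = "".join(line.strip() for line in f
                           if line.strip() and not line.startswith("-----"))
        return base64.b64decode(body)

    # der = pem_to_der("certificate.pem")  # "certificate.pem" is a placeholder path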
Encrypted files
This section shows file formats for encrypted general data, rather than a specific program's data.
AXX – Encrypted file, created with AxCrypt
EEA – An encrypted CAB, ostensibly for protecting email attachments
TC – Virtual encrypted disk container, created by TrueCrypt
KODE – Encrypted file, created with KodeFile
nSignE - An encrypted private key, created by nSign
Password files
Password files (sometimes called keychain files) contain lists of other passwords, usually encrypted.
BPW – Encrypted password file created by Bitser password manager
KDB – KeePass 1 database
KDBX – KeePass 2 database
Signal data (non-audio)
ACQ – AcqKnowledge format for Windows/PC from Biopac
ADICHT – LabChart format from ADInstruments
BKR – The EEG data format developed at the University of Technology Graz
BDF, CFG – Configuration file for Comtrade data
CFWB – Chart Data format from ADInstruments
DAT – Raw data file for Comtrade data
EDF – European data format
FEF – File Exchange Format for Vital signs
GDF – General data formats for biomedical signals
GMS – Gesture And Motion Signal format
IROCK – intelliRock Sensor Data File Format
MFER – Medical waveform Format Encoding Rules
SAC – Seismic Analysis Code, earthquake seismology data format
SCP-ECG – Standard Communication Protocol for Computer assisted electrocardiography
SEED, MSEED – Standard for the Exchange of Earthquake Data, seismological data and sensor metadata
SEGY – Reflection seismology data format
SIGIF – SIGnal Interchange Format
WIN, WIN32 – NIED/ERI seismic data format (.cnt)
Sound and music
Lossless audio
Uncompressed
8SVX – Commodore-Amiga 8-bit sound (usually in an IFF container)
16SVX – Commodore-Amiga 16-bit sound (usually in an IFF container)
AIFF, AIF, AIFC – Audio Interchange File Format
AU – Simple audio file format introduced by Sun Microsystems
BWF – Broadcast Wave Format, an extension of WAVE
CDDA – Compact Disc Digital Audio
DSF, DFF – Direct Stream Digital audio file, also used in Super Audio CD
RAW – Raw samples without any header or sync
WAV – Microsoft Wave (a container-writing sketch appears after this list)
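The difference between headerless raw samples and the WAV container referenced above can be shown with Python's standard wave module; this sketch wraps one second of silence (invented sample data) in a Microsoft Wave file:

    import wave

    frames = b"\x00\x00" * 8000                # one second of 16-bit silence at 8 kHz

    with wave.open("silence.wav", "wb") as w:  # "silence.wav" is a placeholder name
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit samples
        w.setframerate(8000)   # 8 kHz sample rate
        w.writeframes(frames)  # the raw samples gain a RIFF/WAVE header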
Compressed
RA, RM – RealAudio format
FLAC – Free lossless codec of the Ogg project
LA – Lossless audio
PAC – LPAC
APE – Monkey's Audio
OFR, OFS, OFF – OptimFROG
RKA – RKAU
SHN – Shorten
TAK – Tom's Lossless Audio Kompressor
THD – Dolby TrueHD
TTA – Free lossless audio codec (True Audio)
WV – WavPack
WMA – Windows Media Audio 9 Lossless
BRSTM – Binary Revolution Stream
DTS, DTSHD, DTSMA – DTS (sound system)
AST – Nintendo Audio Stream
AW – Nintendo Audio Sample used in first-party games
PSF – Portable Sound Format, PlayStation variant (originally PlayStation Sound Format)
Lossy audio
AC3 – Usually used for Dolby Digital tracks
AMR – For GSM and UMTS based mobile phones
MP1 – MPEG Layer 1
MP2 – MPEG Layer 2
MP3 – MPEG Layer 3
SPX – Speex (Ogg project, specialized for voice, low bitrates)
GSM – GSM Full Rate, originally developed for use in mobile phones
WMA – Windows Media Audio
AAC – Advanced Audio Coding (usually in an MPEG-4 container)
MPC – Musepack
VQF – Yamaha TwinVQ
OTS – Audio File (similar to MP3, with more data stored in the file and slightly better compression; designed for use with OtsLabs' OtsAV)
SWA – Adobe Shockwave Audio (Same compression as MP3 with additional header information specific to Adobe Director)
VOX – Dialogic ADPCM Low Sample Rate Digitized Voice
VOC – Creative Labs Sound Blaster Creative Voice 8-bit and 16-bit audio; also the output format of RCA audio recorders
DWD – DiamondWare Digitized
SMP – Turtlebeach SampleVision
OGG – Ogg Vorbis
Tracker modules and related
MOD – Soundtracker and Protracker sample and melody modules
MT2 – MadTracker 2 module
S3M – Scream Tracker 3 module
XM – Fast Tracker module
IT – Impulse Tracker module
NSF – NES Sound Format
MID, MIDI – Standard MIDI file; most often just notes and controls but occasionally also sample dumps (.mid, .rmi)
FTM – FamiTracker Project file
BTM – BambooTracker Project file
Sheet music files
ABC – ABC Notation sheet music file
DARMS – DARMS File Format, also known as the Ford-Columbia Format
ETF – Enigma Transportable Format, an abandoned sheet music exchange format
GP* – Guitar Pro sheet music and tablature file
KERN – Kern File Format sheet music file
LY – LilyPond sheet music file
MEI – Music Encoding Initiative file format that attempts to encode all musical notations
MUS, MUSX – Finale sheet music file
MXL, XML – MusicXML standard sheet music exchange format
MSCX, MSCZ – MuseScore sheet music file
SMDL – Standard Music Description Language sheet music file
SIB – Sibelius sheet music file
Other file formats pertaining to audio
NIFF – Notation Interchange File Format
PTB – Power Tab Editor tab
ASF – Advanced Systems Format
CUST – DeliPlayer custom sound format
GYM – Genesis YM2612 log
JAM – Jam music format
MNG – Background music for the Creatures game series, starting from Creatures 2
RMJ – RealJukebox Media used for RealPlayer
SID – Sound Interface Device – Commodore 64 instructions to play SID music and sound effects
SPC – Super NES sound format
TXM – Track ax media
VGM – Stands for "Video Game Music", log for several different chips
YM – Atari ST/Amstrad CPC YM2149 sound chip format
PVD – Portable Voice Document used for Oaisys & Mitel call recordings
Playlist formats
AIMPPL – AIMP Playlist format
ASX – Advanced Stream Redirector
RAM – RealAudio Metafile, for RealAudio files only
XPL – HDi playlist
XSPF – XML Shareable Playlist Format
ZPL – Xbox Music (Formerly Zune) Playlist format from Microsoft
M3U – Multimedia playlist file
PLS – Multimedia playlist, originally developed for use with the museArc
Audio editing and music production
ALS – Ableton Live set
ALC – Ableton Live clip
ALP – Ableton Live pack
ATMOS, AUDIO, METADATA – Dolby Atmos Rendering and Mastering related file
AUP – Audacity project file
AUP3 – Audacity 3.0 project file
BAND – GarageBand project file
CEL – Adobe Audition loop file (Cool Edit Loop)
CAU – Caustic project file
CPR – Steinberg Cubase project file
CWP – Cakewalk Sonar project file
DRM – Steinberg Cubase drum file
DMKIT – Image-Line's Drumaxx drum kit file
ENS – Native Instruments Reaktor Ensemble
FLP – Image Line FL Studio project file
GRIR – Native Instruments Komplete Guitar Rig Impulse Response
LOGIC – Logic Pro X project file
MMP – LMMS project file (alternatively MMPZ for compressed formats)
MMR – MAGIX Music Maker project file
MX6HS – Mixcraft 6 Home Studio project file
NPR – Steinberg Nuendo project file
OMF, OMFI – Open Media Framework Interchange; OMFI succeeds OMF (Open Media Framework)
PTX – Pro Tools 10 or later project file
PTF – Pro Tools 7 up to Pro Tools 9 project file
PTS – Legacy Pro Tools project file
RIN – Soundways RIN-M file containing sound recording participant credits and song information
RPP, RPP-BAK – REAPER project file
REAPEAKS – REAPER peak (waveform cache) file
SES – Adobe Audition multitrack session file
SFK – Sound Forge waveform cache file
SFL – Sound Forge sound file
SNG – MIDI sequence file (MidiSoft, Korg, etc.) or n-Track Studio project file
STF – StudioFactory project file. It contains all necessary patches, samples, tracks and settings to play the file
SND – Akai MPC sound file
SYN – SynFactory project file. It contains all necessary patches, samples, tracks and settings to play the file
UST – Utau Editor sequence excluding wave-file
VCLS – VocaListener project file
VPR – Vocaloid 5 Editor sequence excluding wave-file
VSQ – Vocaloid 2 Editor sequence excluding wave-file
VSQX – Vocaloid 3 & 4 Editor sequence excluding wave-file
Recorded television formats
DVR-MS – Windows XP Media Center Edition's Windows Media Center recorded television format
WTV – Windows Vista's and up Windows Media Center recorded television format
Source code for computer programs
ADA, ADB, 2.ADA – Ada (body) source
ADS, 1.ADA – Ada (specification) source
ASM, S – Assembly language source
BAS – BASIC, FreeBASIC, Visual Basic, BASIC-PLUS source, PICAXE basic
BB – Blitz Basic Blitz3D
BMX – Blitz Basic BlitzMax
C – C source
CLJ – Clojure source code
CLS – Visual Basic class
COB, CBL – COBOL source
CPP, CC, CXX, C, CBP – C++ source
CS – C# source
CSPROJ – C# project (Visual Studio .NET)
D – D source
DBA – DarkBASIC source
DBPro123 – DarkBASIC Professional project
E – Eiffel source
EFS – EGT Forever Source File
EGT – EGT Asterisk Source File, could be J, C#, VB.net, EF 2.0 (EGT Forever)
EL – Emacs Lisp source
FOR, FTN, F, F77, F90 – Fortran source
FRM – Visual Basic form
FRX – Visual Basic form stash file (binary form file)
FTH – Forth source
GED – Game Maker Extension Editable file as of version 7.0
GM6 – Game Maker Editable file as of version 6.x
GMD – Game Maker Editable file up to version 5.x
GMK – Game Maker Editable file as of version 7.0
GML – Game Maker Language script file
GO – Go source
H – C/C++ header file
HPP, HXX – C++ header file
HS – Haskell source
I – SWIG interface file
INC – Turbo Pascal included source
JAVA – Java source
L – lex source
LGT – Logtalk source
LISP – Common Lisp source
M – Objective-C source
M – MATLAB
M – Mathematica
M4 – m4 source
ML – Standard ML and OCaml source
MSQR – M² source file, created by Mattia Marziali
N – Nemerle source
NB – Nuclear Basic source
P – Parser source
PAS, PP, P – Pascal source (DPR for projects)
PHP, PHP3, PHP4, PHP5, PHPS, Phtml – PHP source
PIV – Pivot stickfigure animator
PL, PM – Perl
PLI, PL1 – PL/I
PRG – Ashton-Tate; dbII, dbIII and dbIV, db, db7, clipper, Microsoft Fox and FoxPro, harbour, xharbour, and Xbase
PRO – IDL
POL – Apcera Policy Language doclet
PY – Python source
R – R source
RED – Red source
REDS – Red/System source
RB – Ruby source
RESX – Resource file for .NET applications
RC, RC2 – Resource script files to generate resources for .NET applications
RKT, RKTL – Racket source
SCALA – Scala source
SCI, SCE – Scilab
SCM – Scheme source
SD7 – Seed7 source
SKB, SKC – Sage Retrieve 4GL Common Area (Main and Amended backup)
SKD – Sage Retrieve 4GL Database
SKF, SKG – Sage Retrieve 4GL File Layouts (Main and Amended backup)
SKI – Sage Retrieve 4GL Instructions
SKK – Sage Retrieve 4GL Report Generator
SKM – Sage Retrieve 4GL Menu
SKO – Sage Retrieve 4GL Program
SKP, SKQ – Sage Retrieve 4GL Print Layouts (Main and Amended backup)
SKS, SKT – Sage Retrieve 4GL Screen Layouts (Main and Amended backup)
SKZ – Sage Retrieve 4GL Security File
SLN – Visual Studio solution
SPIN – Spin source (for Parallax Propeller microcontrollers)
STK – Stickfigure file for Pivot stickfigure animator
SWG – SWIG source code
TCL – TCL source code
VAP – Visual Studio Analyzer project
VB – Visual Basic.NET source
VBG – Visual Studio compatible project group
VBP, VIP – Visual Basic project
VBPROJ – Visual Basic .NET project
VCPROJ – Visual C++ project
VDPROJ – Visual Studio deployment project
XPL – XProc script/pipeline
XQ – XQuery file
XSL – XSLT stylesheet
Y – yacc source
Spreadsheet
123 – Lotus 1-2-3
AB2 – Abykus worksheet
AB3 – Abykus workbook
AWS – Ability Spreadsheet
BCSV – Nintendo proprietary table format
CLF – ThinkFree Calc
CELL – Haansoft(Hancom) SpreadSheet software document
CSV – Comma-Separated Values
GSHEET – Google Drive Spreadsheet
numbers – An Apple Numbers Spreadsheet file
gnumeric – Gnumeric spreadsheet, a gzipped XML file
LCW – Lucid 3-D
ODS – OpenDocument spreadsheet
OTS – OpenDocument spreadsheet template
QPW – Quattro Pro spreadsheet
SDC – StarOffice StarCalc Spreadsheet
SLK – SYLK (SYmbolic LinK)
STC – OpenOffice.org XML (obsolete) Spreadsheet template
SXC – OpenOffice.org XML (obsolete) Spreadsheet
TAB – tab delimited columns; also TSV (Tab-Separated Values)
TXT – text file
VC – VisiCalc
WK1 – Lotus 1-2-3 up to version 2.01
WK3 – Lotus 1-2-3 version 3.0
WK4 – Lotus 1-2-3 version 4.0
WKS – Lotus 1-2-3
WKS – Microsoft Works
WQ1 – Quattro Pro DOS version
XLK – Microsoft Excel worksheet backup
XLS – Microsoft Excel worksheet sheet (97–2003)
XLSB – Microsoft Excel binary workbook
XLSM – Microsoft Excel Macro-enabled workbook
XLSX – Office Open XML worksheet sheet
XLR – Microsoft Works version 6.0
XLT – Microsoft Excel worksheet template
XLTM – Microsoft Excel Macro-enabled worksheet template
XLW – Microsoft Excel worksheet workspace (version 4.0)
Tabulated data
TSV – Tab-separated values
CSV – Comma-separated values
db – databank format; accessible by many econometric applications
dif – Data Interchange Format; accessible by many spreadsheet applications
Video
AAF – mostly intended to hold edit decisions and rendering information, but can also contain compressed media essence
3GP – common video format for mobile phones
GIF – Animated GIF (simple animation; formerly often avoided because of patent problems)
ASF – container (enables any form of compression to be used; MPEG-4 is common; video in ASF-containers is also called Windows Media Video (WMV))
AVCHD – Advanced Video Codec High Definition
AVI – container (a shell, which enables any form of compression to be used)
BIK (.bik) – Bink Video file. A video compression system developed by RAD Game Tools
BRAW – Blackmagic RAW, a video format used by Blackmagic Design cameras such as the URSA Mini Pro 12K
CAM – aMSN webcam log file
COLLAB – Blackboard Collaborate session recording
DAT – Video CD MPEG stream (created automatically when a video file is burned to a Video CD)
DSH
DVR-MS – Windows XP Media Center Edition's Windows Media Center recorded television format
FLV – Flash video (encoded to run in a flash animation)
M1V – MPEG-1 video
M2V – MPEG-2 video
NOA – rare movie format used in some Japanese eroge around 2002
FLA – Adobe Flash (for producing)
FLR – (text file which contains scripts extracted from SWF by a free ActionScript decompiler named FLARE)
SOL – Adobe Flash shared object ("Flash cookie")
STR – Sony PlayStation video stream
M4V – video container file format developed by Apple
Matroska (*.mkv) – Matroska is a container format, which enables any video format such as MPEG-4 ASP or AVC to be used along with other content such as subtitles and detailed meta information
WRAP – MediaForge (*.wrap)
MNG – mainly simple animation containing PNG and JPEG objects, often somewhat more complex than animated GIF
QuickTime (.mov) – container which enables any form of compression to be used; Sorenson codec is the most common; QTCH is the filetype for cached video and audio streams
MPEG (.mpeg, .mpg, .mpe)
THP – Nintendo proprietary movie/video format
MPEG-4 Part 14, shortened "MP4" – multimedia container (most often used for Sony's PlayStation Portable and Apple's iPod)
MXF – Material Exchange Format (standardized wrapper format for audio/visual material developed by SMPTE)
ROQ – used by Quake 3
NSV – Nullsoft Streaming Video (media container designed for streaming video content over the Internet)
Ogg – container, multimedia
RM – RealMedia
SVI – Samsung video format for portable players
SMI – SAMI Caption file (HTML like subtitle for movie files)
SMK (.smk) – Smacker video file. A video compression system developed by RAD Game Tools
SWF – Adobe Flash (for viewing)
WMV – Windows Media Video (See ASF)
WTV – Windows Vista's and up Windows Media Center recorded television format
YUV – raw video format; resolution (horizontal x vertical) and sample structure 4:2:2 or 4:2:0 must be known explicitly
WebM – video file format for web video using HTML5
Video editing, production
BRAW – Blackmagic Design RAW video file name
FCP – Final Cut Pro project file
MSWMM – Windows Movie Maker project file
PPJ & PRPROJ– Adobe Premiere Pro video editing file
IMOVIEPROJ – iMovie project file
VEG & VEG-BAK – Sony Vegas project file
SUF – Sony camera configuration file (setup.suf) produced by XDCAM-EX camcorders
WLMP – Windows Live Movie Maker project file
KDENLIVE – Kdenlive project file
VPJ – VideoPad project file
MOTN – Apple Motion project file
IMOVIEMOBILE – iMovie project file for iOS users
WFP, WVE – Wondershare Filmora project
PDS – CyberLink PowerDirector project
VPROJ – VSDC Free Video Editor project file
Video game data
List of common file formats of data for video games on systems that support filesystems, most commonly PC games.
Minecraft — files used by Mojang to develop Minecraft
MCADDON – format used by the Bedrock Edition of Minecraft for add-ons; Resource packs for the game
MCFUNCTION – format used by Minecraft for storing functions
MCMETA – format used by Minecraft for storing data for customizable texture packs for the game
MCPACK – format used by the Bedrock Edition of Minecraft for in-game texture packs; full addons for the game
MCR – format used by Minecraft for storing data for in-game worlds before version 1.2
MCTEMPLATE – format used by the Bedrock Edition of Minecraft for world templates
MCWORLD – format used by the Bedrock Edition of Minecraft for in-game worlds
NBS – format used by Note Block Studio, a tool that can be used to make note block songs for Minecraft.
TrackMania/Maniaplanet Engine – Formats used by games based on the TrackMania engine.
GBX – All user-created content is stored in this file type.
REPLAY.GBX – Stores the replay of a race.
CHALLENGE.GBX/MAP.GBX – Stores tracks/maps.
SYSTEMCONFIG.GBX – Launcher info.
TRACKMANIAVEHICLE.GBX – Info about a certain car type.
VEHICLETUNINGS.GBX – Vehicle physics.
SOLID.GBX – A block's model.
ITEM.GBX – Custom Maniaplanet item.
BLOCK.GBX – Custom Maniaplanet block.
TEXTURE.GBX – Info about a texture that is used in materials.
MATERIAL.GBX – Info about a material, such as surface type, that is used in solids.
TMEDCLASSIC.GBX – Block info.
GHOST.GBX – Player ghosts in TrackMania and TrackMania Turbo.
CONTROLSTYLE.GBX – Menu files.
SCORES.GBX – Stores info about the player's best times.
PROFILE.GBX – Stores a player's info, such as their login.
DDS – Almost every texture in the game uses this format.
PAK – Stores environment data such as valid blocks.
LOC – A locator; locators allow the game to download content such as car skins from an external server.
SCRIPT.TXT – Scripts for Maniaplanet, such as menus and game modes.
XML – ManiaLinks.
Doom engine – Formats used by games based on the Doom engine.
DEH – DeHackEd files to mutate the game executable (not officially part of the DOOM engine)
DSG – Saved game
LMP – A lump is an entry in a DOOM wad.
LMP – Saved demo recording
MUS – Music file (usually contained within a WAD file)
WAD – Data storage (contains music, maps, and textures)
Quake engine – Formats used by games based on the Quake engine.
BSP – (For Binary space partitioning) compiled map format
MAP – Raw map format used by editors like GtkRadiant or QuArK
MDL/MD2/MD3/MD5 – Model for an item used in the game
PAK/PK2 – Data storage
PK3/PK4 – used by the Quake III Arena and Quake 4 game engines, respectively, to store game data, textures etc. They are actually .zip files.
.dat – not specific file type, often generic extension for "data" files for a variety of applications
sometimes used for general data contained within the .PK3/PK4 files
.fontdat – a .dat file used for formatting game fonts
.roq – Video format
.sav – Savegame format
Unreal Engine – Formats used by games based on the Unreal engine.
U – Unreal script format
UAX – Animations format for Unreal Engine 2
UMX – Map format for Unreal Tournament
UMX – Music format for Unreal Engine 1
UNR – Map format for Unreal
UPK – Package format for cooked content in Unreal Engine 3
USX – Sound format for Unreal Engine 1 and Unreal Engine 2
UT2 – Map format for Unreal Tournament 2003 and Unreal Tournament 2004
UT3 – Map format for Unreal Tournament 3
UTX – Texture format for Unreal Engine 1 and Unreal Engine 2
UXX – Cache format; these are files a client downloaded from server (which can be converted to regular formats)
Duke Nukem 3D Engine – Formats used by games based on this engine
DMO – Save game
GRP – Data storage
MAP – Map (usually constructed with BUILD.EXE)
Diablo Engine – Formats used by Diablo by Blizzard Entertainment.
SV – Save Game
ITM – Item File
Real Virtuality Engine – Formats used by Bohemia Interactive. Operation:Flashpoint, ARMA 2, VBS2
SQF – Format used for general editing
SQM – Format used for mission files
PBO – Binarized file used for compiled models
LIP – Format that is created from WAV files to create in-game accurate lip-synch for character animations.
Source Engine – Formats used by Valve. Half-Life 2, Counter-Strike: Source, Day of Defeat: Source, Half-Life 2: Episode One, Team Fortress 2, Half-Life 2: Episode Two, Portal, Left 4 Dead, Left 4 Dead 2, Alien Swarm, Portal 2, Counter-Strike: Global Offensive, Titanfall, Insurgency, Titanfall 2, Day of Infamy
VMF – Valve Hammer Map editor raw map file
VMX – Valve Hammer Map editor backup map file
BSP – Source Engine compiled map file
MDL – Source Engine model format
SMD – Source Engine uncompiled model format
PCF – Source Engine particle effect file
HL2 – Half-Life 2 save format
DEM – Source Engine demo format
VPK – Source Engine pack format
VTF – Source Engine texture format
VMT – Source Engine material format.
Pokémon Generation V
CGB – Pokémon Black and White/Pokémon Black 2 and White 2 C-Gear skins
Other formats
ARC – used to store New Super Mario Bros. Wii level data
B – used for Grand Theft Auto saved game files
BOL – used for levels on Poing!PC
DBPF – The Sims 2, DBPF, Package
DIVA – Project DIVA timings, element coordinates, MP3 references, notes, animation poses and scores.
ESM, ESP – Master and Plugin data archives for the Creation Engine
HAMBU – format used by the Aidan's Funhouse game RGTW for storing map data
HE0, HE2, HE4 – HE games file
GCF – format used by the Steam content management system for file archives
IMG – format used by Renderware-based Grand Theft Auto games for data storage
LOVE – format used by the LOVE2D Engine
MAP – format used by Halo: Combat Evolved for archive compression, Doom³, and various other games
MCA – format used by Minecraft for storing data for in-game worlds
NBT – format used by Minecraft for storing program variables along with their (Java) type identifiers
OEC – format used by OE-Cake for scene data storage
OSB – osu! storyboard data
OSC – osu!stream combined stream data
OSF2 – free osu!stream song file
OSR – osu! replay data
OSU – osu! beatmap data
OSZ2 – paid osu!stream song file
P3D – format for panda3d by Disney
PLAGUEINC - format used by Plague Inc. for storing custom scenario information
POD – format used by Terminal Reality
RCT – Used for templates and save files in RollerCoaster Tycoon games
REP – used by Blizzard Entertainment for scenario replays in StarCraft.
SimCity 4, DBPF (.dat, .SC4Lot, .SC4Model) – All game plugins use this format, commonly with different file extensions
SMZIP – ZIP-based package for StepMania songs, themes and announcer packs.
SOLITAIRETHEME8 – A solitaire theme for Windows Solitaire
USLD – format used by Unison Shift to store level layouts.
VVVVVV – format used by VVVVVV
CPS – format used by The Powder Toy, Powder Toy save
STM – format used by The Powder Toy, Powder Toy stamp
PKG – format used by Bungie for the PC Beta of Destiny 2, for nearly all the game's assets.
CHR – format used by Team Salvato, for the character files of Doki Doki Literature Club!
Z5 – format used by Z-machine for story files in interactive fiction.
scworld – format used by Survivalcraft to store sandbox worlds.
scskin – format used by Survivalcraft to store player skins.
scbtex – format used by Survivalcraft to store block textures.
prison – format used by Prison Architect to save prisons
escape – format used by Prison Architect to save escape attempts
Video game storage media
List of the most common filename extensions used when a game's ROM image or storage medium is copied from an original read-only memory (ROM) device to an external memory such as a hard disk, for back-up purposes or for making the game playable with an emulator. In the case of cartridge-based software, if the platform-specific extension is not used, then the filename extensions ".rom" or ".bin" are usually used to clarify that the file contains a copy of the content of a ROM. ROM, disk or tape images usually consist not of one file per ROM, but of an entire file or ROM structure contained within one file on the backup medium.
A26 – Atari 2600 (.a26)
A52 – Atari 5200 (.a52)
A78 – Atari 7800 (.a78)
LNX – Atari Lynx (.lnx)
JAG,J64 – Atari Jaguar (.jag, .j64)
ISO, WBFS, WAD, WDF – Wii and WiiU (.iso, .wbfs, .wad, .wdf)
GCM, ISO – GameCube (.gcm, .iso)
NDS – Nintendo DS (.nds)
3DS – Nintendo 3DS (.3ds)
CIA – Installation File (.cia)
GB – Game Boy (.gb) (this applies to the original Game Boy and the Game Boy Color)
GBC – Game Boy Color (.gbc)
GBA – Game Boy Advance (.gba)
SAV – Game Boy Advance Saved Data Files (.sav)
SGM – Visual Boy Advance Save States (.sgm)
N64, V64, Z64, U64, USA, JAP, PAL, EUR, BIN – Nintendo 64 (.n64, .v64, .z64, .u64, .usa, .jap, .pal, .eur, .bin)
PJ – Project 64 Save States (.pj)
NES – Nintendo Entertainment System (.nes)
FDS – Famicom Disk System (.fds)
JST – Jnes Save States (.jst)
FC? – FCEUX Save States (.fc#, where # is any character, usually a number)
GG – Game Gear (.gg)
SMS – Master System (.sms)
SG – SG-1000 (.sg)
SMD,BIN – Mega Drive/Genesis (.smd or .bin)
32X – Sega 32X (.32x)
SMC,078,SFC – Super NES (.smc, .078, or .sfc) (.078 is for split ROMs, which are rare)
FIG – Super Famicom (Japanese releases are rarely .fig, above extensions are more common)
SRM – Super NES Saved Data Files (.srm)
ZST – ZSNES Save States (.zst, .zs1-.zs9, .z10-.z99)
FRZ – Snes9X Save States (.frz, .000-.008)
PCE – TurboGrafx-16/PC Engine (.pce)
NPC, NGP – Neo Geo Pocket (.npc, .ngp)
NGC – Neo Geo Pocket Color (.ngc)
VB – Virtual Boy (.vb)
INT – Intellivision (.int)
MIN – Pokémon Mini (.min)
VEC – Vectrex (.vec)
BIN – Odyssey² (.bin)
WS – WonderSwan (.ws)
WSC – WonderSwan Color (.wsc)
TZX – ZX Spectrum (.tzx) (for exact copies of ZX Spectrum games)
TAP – for tape images without copy protection
Z80,SNA – (for snapshots of the emulator RAM)
DSK – (for disk images)
TAP – Commodore 64 (.tap) (for tape images including copy protection)
T64 – (for tape images without copy protection, considerably smaller than .tap files)
D64 – (for disk images)
CRT – (for cartridge images)
ADF – Amiga (.adf) (for 880K diskette images)
ADZ – GZip-compressed version of the above.
DMS – Disk Masher System, previously used as a disk-archiving system native to the Amiga, also supported by emulators.
Virtual machines
Microsoft Virtual PC, Virtual Server
VFD – Virtual Floppy Disk (.vfd)
VHD – Virtual Hard Disk (.vhd)
VUD – Virtual Undo Disk (.vud)
VMC – Virtual Machine Configuration (.vmc)
VSV – Virtual Machine Saved State (.vsv)
EMC VMware ESX, GSX, Workstation, Player
LOG – Virtual Machine Logfile (.log)
VMDK, DSK – Virtual Machine Disk (.vmdk, .dsk)
NVRAM – Virtual Machine BIOS (.nvram)
VMEM – Virtual Machine paging file (.vmem)
VMSD – Virtual Machine snapshot metadata (.vmsd)
VMSN – Virtual Machine snapshot (.vmsn)
VMSS,STD – Virtual Machine suspended state (.vmss, .std)
VMTM – Virtual Machine team data (.vmtm)
VMX,CFG – Virtual Machine configuration (.vmx, .cfg)
VMXF – Virtual Machine team configuration (.vmxf)
VirtualBox
VDI – VirtualBox Virtual Disk Image (.vdi)
Vbox-extpack – VirtualBox extension pack (.vbox-extpack)
Parallels Workstation
HDD – Virtual Machine hard disk (.hdd)
PVS – Virtual Machine preferences/configuration (.pvs)
SAV – Virtual Machine saved state (.sav)
QEMU
COW – Copy-on-write
QCOW – QEMU copy-on-write disk image (qcow)
QCOW2 – QEMU copy-on-write disk image, version 2 (qcow2)
QED – QEMU enhanced disk format
Web page
Static
DTD – Document Type Definition (standard), MUST be public and free
HTML (.html, .htm) – HyperText Markup Language
XHTML (.xhtml, .xht) – eXtensible HyperText Markup Language
MHTML (.mht, .mhtml) – Archived HTML, store all data on one web page (text, images, etc.) in one big file
MAF (.maff) – web archive based on ZIP
Dynamically generated
ASP (.asp) – Microsoft Active Server Page
ASPX – (.aspx) – Microsoft Active Server Pages .NET (ASP.NET)
ADP – AOLserver Dynamic Page
BML – (.bml) – Better Markup Language (templating)
CFM – (.cfm) – ColdFusion
CGI – (.cgi)
iHTML – (.ihtml) – Inline HTML
JSP – (.jsp) JavaServer Pages
Lasso – (.las, .lasso, .lassoapp) – A file created or served with the Lasso Programming Language
PL – Perl (.pl)
PHP – (.php, .php?, .phtml) – ? is version number (previously abbreviated Personal Home Page, later changed to PHP: Hypertext Preprocessor)
SSI – (.shtml) – HTML with Server Side Includes (Apache)
SSI – (.stm) – HTML with Server Side Includes (Apache)
Markup languages and other web standards-based formats
Atom – (.atom, .xml) – A syndication format.
EML – (.eml) – Format used by several desktop email clients.
JSON-LD – (.jsonld) – A JSON-based serialization for linked data.
KPRX – (.kprx) – An XML-based serialization for workflow definitions generated by K2.
PS – (.ps) – An XML-based serialization for test automation scripts, called PowerScripts, for K2-based applications.
Metalink – (.metalink, .met) – A format to list metadata about downloads, such as mirrors, checksums, and other information.
RSS – (.rss, .xml) – Syndication format.
Markdown – (.markdown, .md) – Plain text formatting syntax, which is popularly used to format "readme" files.
Shuttle – (.se) – Another lightweight markup language.
Other
AXD – cookie extensions found in temporary internet folder
BDF – Binary Data Format – raw data from recovered blocks of unallocated space on a hard drive
CBP – CD Box Labeler Pro, CentraBuilder, Code::Blocks Project File, Conlab Project
CEX – SolidWorks Enterprise PDM Vault File
COL – Nintendo GameCube proprietary collision file (.col)
CREDX – CredX Dat File
DDB – Generating code for Vocaloid singers voice (see .DDI)
DDI – Vocaloid phoneme library (Japanese, English, Korean, Spanish, Chinese, Catalan)
DUPX – DuupeCheck database management tool project file
FTM – Family Tree Maker data file
FTMB – Family Tree Maker backup file
GA3 – Graphical Analysis 3
GEDCOM (.ged) – (GEnealogical Data COMmunication) format to exchange genealogy data between different genealogy software
HLP – Windows help file
IGC – flight tracks downloaded from GPS devices in the FAI's prescribed format
INF – similar format to INI file; used to install device drivers under Windows, inter alia.
JAM – JAM Message Base Format for BBSes
KMC – tests made with KatzReview's MegaCrammer
KCL – Nintendo GameCube/Wii proprietary collision file (.kcl)
KTR – Hitachi Vantara Pentaho Data Integration/Kettle Transformation Project file
LNK – Microsoft Windows format for Hyperlinks to Executables
LSM – LSMaker script file (program using layered .jpg to create special effects; specifically designed to render lightsabers from the Star Wars universe) (.lsm)
NARC – Archive format used in Nintendo DS games.
OER – AU OER Tool, Open Educational Resource editor
PA – Used to assign sound effects to materials in KCL files (.pa)
PIF – Used to run MS-DOS programs under Windows
POR – So-called "portable" SPSS files, readable by PSPP
PXZ – Compressed file to exchange media elements with PSALMO
RISE – File containing RISE generated information model evolution
SCR – Windows screen saver file
TOPC – TopicCrunch SEO Project file holding keywords, domain, and search engine settings (ASCII)
XLF – Utah State University Extensible LADAR Format
XMC – Assisted contact lists format, based on XML and used in kindergartens and schools
ZED – My Heritage Family Tree
Zone file – a text file containing a DNS zone
Cursors
ANI – Animated cursor
CUR – Cursor file
Smes – Hawk's Dock configuration file
Generalized files
General data formats
These file formats are fairly well defined by long-term use or a general standard, but the content of each file is often highly specific to particular software or has been extended by further standards for specific uses.
Text-based
CSV – comma-separated values
HTML – Hypertext Markup Language
CSS – Cascading Style Sheets
INI – a configuration text file whose format is substantially similar between applications
JSON – JavaScript Object Notation, an open data format now used by many languages, not just JavaScript
TSV – tab-separated values
XML – an open data format
YAML – an open data format
ReStructuredText – an open text format for technical documents used mainly in the Python programming language
Markdown (.md) – an open lightweight markup language to create simple but rich text, often used to format README files
AsciiDoc – an open human-readable markup document format semantically equivalent to DocBook
Generic file extensions
These are filename extensions and broad types reused frequently with differing formats or no specific format by different programs.
Binary files
Bak file (.bak, .bk) – various backup formats: some just copies of data files, some in application-specific data backup formats, some formats for general file backup programs
BIN – binary data, often memory dumps of executable code or data to be re-used by the same software that originated it
DAT – data file, usually binary data proprietary to the program that created it, or an MPEG-1 stream of Video CD
DSK – file representations of various disk storage images
RAW – raw (unprocessed) data
Text files
configuration file (.cnf, .conf, .cfg) – substantially software-specific
logfiles (.log) – usually text, but sometimes binary
plain text (.asc or .txt) – human-readable plain text, usually no more specific
Partial files
Differences and patches
diff – text file differences created by the program diff and applied as updates by patch
Incomplete transfers
!UT (.!ut) – partly complete uTorrent download
CRDOWNLOAD (.crdownload) – partly complete Google Chrome download
OPDOWNLOAD (.opdownload) – partly complete Opera download
PART (.part) – partly complete Mozilla Firefox or Transmission download
PARTIAL (.partial) – partly complete Internet Explorer or Microsoft Edge download
Temporary files
Temporary file (.temp, .tmp, various others) – sometimes in a specific format, but often just raw data in the middle of processing
Pseudo-pipeline file – used to simulate a software pipe
See also
List of filename extensions
MIME#Content-Type, a standard for referring to file formats
List of motion and gesture file formats
List of file signatures, or "magic numbers"
References
External links |
188488 | https://en.wikipedia.org/wiki/ZIP%20%28file%20format%29 | ZIP (file format) | ZIP is an archive file format that supports lossless data compression. A ZIP file may contain one or more files or directories that may have been compressed. The ZIP file format permits a number of compression algorithms, though DEFLATE is the most common. This format was originally created in 1989 and was first implemented in PKWARE, Inc.'s PKZIP utility, as a replacement for the previous ARC compression format by Thom Henderson. The ZIP format was then quickly supported by many software utilities other than PKZIP. Microsoft has included built-in ZIP support (under the name "compressed folders") in versions of Microsoft Windows since 1998 via the "Windows Plus!" add-on for Windows 98. Native support was added as of the year 2000 in Windows ME. Apple has included built-in ZIP support in Mac OS X 10.3 (via BOMArchiveHelper, now Archive Utility) and later. Most free operating systems have built-in support for ZIP in similar manners to Windows and Mac OS X.
ZIP files generally use the file extensions .zip or .ZIP and the MIME media type application/zip. ZIP is used as a base file format by many programs, usually under a different name. When navigating a file system via a user interface, graphical icons representing ZIP files often appear as a document or other object prominently featuring a zipper.
History
The file format was designed by Phil Katz of PKWARE and Gary Conway of Infinity Design Concepts. The format was created after Systems Enhancement Associates (SEA) filed a lawsuit against PKWARE claiming that the latter's archiving products, named PKARC, were derivatives of SEA's ARC archiving system. The name "zip" (meaning "move at high speed") was suggested by Katz's friend, Robert Mahoney. They wanted to imply that their product would be faster than ARC and other compression formats of the time. By distributing the zip file format within APPNOTE.TXT, compatibility with the zip file format proliferated widely on the public Internet during the 1990s.
PKWARE and Infinity Design Concepts made a joint press release on February 14, 1989, releasing the file format into the public domain.
Version history
The .ZIP File Format Specification has its own version number, which does not necessarily correspond to the version numbers for the PKZIP tool, especially with PKZIP 6 or later. At various times, PKWARE has added preliminary features that allow PKZIP products to extract archives using advanced features, but PKZIP products that create such archives are not made available until the next major release. Other companies or organizations support the PKWARE specifications at their own pace.
The .ZIP file format specification is formally named "APPNOTE - .ZIP File Format Specification" and it is published on the PKWARE.com website since the late 1990s. Several versions of the specification were not published. Specifications of some features such as BZIP2 compression, strong encryption specification and others were published by PKWARE a few years after their creation. The URL of the online specification was changed several times on the PKWARE website.
A summary of key advances in various versions of the PKWARE specification:
2.0: (1993) File entries can be compressed with DEFLATE and use traditional PKWARE encryption (ZipCrypto).
2.1: (1996) Deflate64 compression
4.5: (2001) Documented 64-bit zip format.
4.6: (2001) BZIP2 compression (not published online until the publication of APPNOTE 5.2)
5.0: (2002) SES: DES, Triple DES, RC2, RC4 supported for encryption (not published online until the publication of APPNOTE 5.2)
5.2: (2003) AES encryption support for SES (defined in APPNOTE 5.1 that was not published online) and AES from WinZip ("AE-x"); corrected version of RC2-64 supported for SES encryption.
6.1: (2004) Documented certificate storage.
6.2.0: (2004) Documented Central Directory Encryption.
6.3.0: (2006) Documented Unicode (UTF-8) filename storage. Expanded list of supported compression algorithms (LZMA, PPMd+), encryption algorithms (Blowfish, Twofish), and hashes.
6.3.1: (2007) Corrected standard hash values for SHA-256/384/512.
6.3.2: (2007) Documented compression method 97 (WavPack).
6.3.3: (2012) Document formatting changes to facilitate referencing the PKWARE Application Note from other standards using methods such as the JTC 1 Referencing Explanatory Report (RER) as directed by JTC 1/SC 34 N 1621.
6.3.4: (2014) Updates the PKWARE, Inc. office address.
6.3.5: (2018) Documented compression methods 16, 96 and 99, DOS timestamp epoch and precision, added extra fields for keys and decryption, as well as typos and clarifications.
6.3.6: (2019) Corrected typographical error.
6.3.7: (2020) Added Zstandard compression method ID 20.
6.3.8: (2020) Moved Zstandard compression method ID from 20 to 93, deprecating the former. Documented method IDs 94 and 95 (MP3 and XZ respectively).
6.3.9: (2020) Corrected a typo in Data Stream Alignment description.
WinZip, starting with version 12.1, uses the extension .zipx for ZIP files that use compression methods newer than DEFLATE; specifically, methods BZip, LZMA, PPMd, Jpeg and Wavpack. The last two are applied to appropriate file types when "Best method" compression is selected.
Standardization
In April 2010, ISO/IEC JTC 1 initiated a ballot to determine whether a project should be initiated to create an ISO/IEC International Standard format compatible with ZIP. The proposed project, entitled Document Packaging, envisaged a ZIP-compatible 'minimal compressed archive format' suitable for use with a number of existing standards including OpenDocument, Office Open XML and EPUB.
In 2015, ISO/IEC 21320-1 "Document Container File — Part 1: Core" was published which states that "Document container files are conforming Zip files". It requires the following main restrictions of the ZIP file format:
Files in ZIP archives may only be stored uncompressed, or using the "deflate" compression (i.e. the compression method field may contain the value "0" (stored) or "8" (deflated)).
The encryption features are prohibited.
The digital signature features (from SES) are prohibited.
The "patched data" features (from PKPatchMaker) are prohibited.
Archives may not span multiple volumes or be segmented.
Design
ZIP files are archives that store multiple files. ZIP allows contained files to be compressed using many different methods, as well as simply storing a file without compressing it. Each file is stored separately, allowing different files in the same archive to be compressed using different methods. Because the files in a ZIP archive are compressed individually, it is possible to extract them, or add new ones, without applying compression or decompression to the entire archive. This contrasts with the format of compressed tar files, for which such random-access processing is not easily possible.
A directory is placed at the end of a ZIP file. This identifies what files are in the ZIP and identifies where in the ZIP that file is located. This allows ZIP readers to load the list of files without reading the entire ZIP archive. ZIP archives can also include extra data that is not related to the ZIP archive. This allows for a ZIP archive to be made into a self-extracting archive (application that decompresses its contained data), by prepending the program code to a ZIP archive and marking the file as executable. Storing the catalog at the end also makes possible hiding a zipped file by appending it to an innocuous file, such as a GIF image file.
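A minimal sketch of both properties using Python's standard-library zipfile module (the file names here are hypothetical): each entry carries its own compression method, and listing the contents only parses the central directory rather than decompressing anything.

    import zipfile

    # Each entry may use its own compression method.
    with zipfile.ZipFile("example.zip", "w") as zf:
        zf.writestr("a.txt", "hello " * 1000, compress_type=zipfile.ZIP_DEFLATED)
        zf.writestr("b.bin", b"\x00" * 16, compress_type=zipfile.ZIP_STORED)

    # infolist() reads only the central directory at the end of the file;
    # no entry data is decompressed.
    with zipfile.ZipFile("example.zip") as zf:
        for info in zf.infolist():
            print(info.filename, info.compress_type, info.compress_size)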
The format uses a 32-bit CRC algorithm and includes two copies of the directory structure of the archive to provide greater protection against data loss.
Structure
A ZIP file is correctly identified by the presence of an end of central directory record which is located at the end of the archive structure in order to allow the easy appending of new files. If the end of central directory record indicates a non-empty archive, the name of each file or directory within the archive should be specified in a central directory entry, along with other metadata about the entry, and an offset into the ZIP file, pointing to the actual entry data. This allows a file listing of the archive to be performed relatively quickly, as the entire archive does not have to be read to see the list of files. The entries within the ZIP file also include this information, for redundancy, in a local file header. Because ZIP files may be appended to, only files specified in the central directory at the end of the file are valid. Scanning a ZIP file for local file headers is invalid (except in the case of corrupted archives), as the central directory may declare that some files have been deleted and other files have been updated.
For example, we may start with a ZIP file that contains files A, B and C. File B is then deleted and C updated. This may be achieved by just appending a new file C to the end of the original ZIP file and adding a new central directory that only lists file A and the new file C. When ZIP was first designed, transferring files by floppy disk was common, yet writing to disks was very time-consuming. For a large zip file, possibly spanning multiple disks, in which only a few files needed updating, it was substantially faster to read the old central directory, append the new files, then append an updated central directory, rather than reading and re-writing all the files.
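The update half of this workflow can be imitated with Python's zipfile in append mode (a sketch; zipfile cannot drop an old entry from the directory, so the rewritten central directory ends up holding two entries named "c.txt", of which readers typically resolve the later one):

    import zipfile

    with zipfile.ZipFile("archive.zip", "w") as zf:
        zf.writestr("a.txt", "A")
        zf.writestr("c.txt", "old C")

    # Mode "a" appends the replacement data and writes a fresh central
    # directory at the end; the old "c.txt" bytes remain in the file.
    # (zipfile emits a UserWarning about the duplicate name.)
    with zipfile.ZipFile("archive.zip", "a") as zf:
        zf.writestr("c.txt", "new C")

    with zipfile.ZipFile("archive.zip") as zf:
        print(zf.read("c.txt"))  # b'new C' - the later entry wins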
The order of the file entries in the central directory need not coincide with the order of file entries in the archive.
Each entry stored in a ZIP archive is introduced by a local file header with information about the file such as the comment, file size and file name, followed by optional "extra" data fields, and then the possibly compressed, possibly encrypted file data. The "Extra" data fields are the key to the extensibility of the ZIP format. "Extra" fields are exploited to support the ZIP64 format, WinZip-compatible AES encryption, file attributes, and higher-resolution NTFS or Unix file timestamps. Other extensions are possible via the "Extra" field. ZIP tools are required by the specification to ignore Extra fields they do not recognize.
The ZIP format uses specific 4-byte "signatures" to denote the various structures in the file. Each file entry is marked by a specific signature. The end of central directory record is indicated with its specific signature, and each entry in the central directory starts with the 4-byte central file header signature.
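For instance, the local file header signature of a conventional archive's first entry can be checked directly (a sketch; as the next paragraph notes, self-extracting archives legitimately start with something else):

    import struct

    LOCAL_FILE_HEADER_SIG = 0x04034b50  # the bytes "PK\x03\x04"

    with open("example.zip", "rb") as f:
        (sig,) = struct.unpack("<I", f.read(4))
    print(sig == LOCAL_FILE_HEADER_SIG)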
There is no BOF or EOF marker in the ZIP specification. Conventionally the first thing in a ZIP file is a ZIP entry, which can be identified easily by its local file header signature. However, this is not necessarily the case, as this is not required by the ZIP specification; most notably, a self-extracting archive will begin with an executable file header.
Tools that correctly read ZIP archives must scan for the end of central directory record signature, and then, as appropriate, the other, indicated, central directory records. They must not scan for entries from the top of the ZIP file, because (as previously mentioned in this section) only the central directory specifies where a file chunk starts and that it has not been deleted. Scanning could lead to false positives, as the format does not forbid other data to be between chunks, nor file data streams from containing such signatures. However, tools that attempt to recover data from damaged ZIP archives will most likely scan the archive for local file header signatures; this is made more difficult by the fact that the compressed size of a file chunk may be stored after the file chunk, making sequential processing difficult.
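A sketch of that backward search in Python; the read covers the 22-byte fixed record plus the maximum 65,535-byte trailing comment, and the false-positive caveat above still applies to pathological archives:

    import struct

    EOCD_SIG = b"PK\x05\x06"  # 0x06054b50, little-endian on disk

    with open("example.zip", "rb") as f:
        f.seek(0, 2)
        size = f.tell()
        f.seek(max(0, size - 22 - 65535))
        tail = f.read()

    pos = tail.rfind(EOCD_SIG)
    if pos < 0:
        raise ValueError("no end of central directory record found")
    # Total entry count, central directory size, central directory offset.
    entries, cd_size, cd_offset = struct.unpack("<HII", tail[pos + 10:pos + 20])
    print(entries, "entries; central directory at offset", cd_offset)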
Most of the signatures end with the short integer 0x4b50, which is stored in little-endian ordering. Viewed as an ASCII string this reads "PK", the initials of the inventor Phil Katz. Thus, when a ZIP file is viewed in a text editor the first two bytes of the file are usually "PK". (DOS, OS/2 and Windows self-extracting ZIPs have an EXE before the ZIP so start with "MZ"; self-extracting ZIPs for other operating systems may similarly be preceded by executable code for extracting the archive's content on that platform.)
The specification also supports spreading archives across multiple file-system files. Originally intended for storage of large ZIP files across multiple floppy disks, this feature is now used for sending ZIP archives in parts over email, or over other transports or removable media.
The FAT filesystem of DOS has a timestamp resolution of only two seconds; ZIP file records mimic this. As a result, the built-in timestamp resolution of files in a ZIP archive is only two seconds, though extra fields can be used to store more precise timestamps. The ZIP format has no notion of time zone, so timestamps are only meaningful if it is known what time zone they were created in.
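The two-second granularity falls directly out of the MS-DOS packed date/time encoding that ZIP records reuse, which stores seconds divided by two (a sketch of the encoding):

    def dos_time(hour: int, minute: int, second: int) -> int:
        # MS-DOS time word: 5 bits hour, 6 bits minute, 5 bits second/2.
        return (hour << 11) | (minute << 5) | (second // 2)

    def dos_date(year: int, month: int, day: int) -> int:
        # MS-DOS date word: 7 bits years since 1980, 4 bits month, 5 bits day.
        return ((year - 1980) << 9) | (month << 5) | day

    print(hex(dos_time(13, 37, 59)))  # 59 s is stored as 29, i.e. 58 s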
In September 2007, PKWARE released a revision of the ZIP specification providing for the storage of file names using UTF-8, finally adding Unicode compatibility to ZIP.
File headers
All multi-byte values in the header are stored in little-endian byte order. All length fields count the length in bytes.
Local file header
The extra field contains a variety of optional data such as OS-specific attributes. It is divided into records, each with at minimum a 16-bit signature and a 16-bit length. A ZIP64 local file extra field record, for example, has the signature 0x0001 and a length of 16 bytes (or more) so that two 64-bit values (the compressed and uncompressed sizes) may follow. Another common extra field record is 0x5455 (or "UT"), which contains 32-bit UTC UNIX timestamps.
This is immediately followed by the compressed data.
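Because every extra-field record starts with a 16-bit ID and a 16-bit length, unknown records can be skipped generically; a sketch using the raw header bytes that Python's zipfile exposes as ZipInfo.extra:

    import struct
    import zipfile

    def parse_extra(extra: bytes):
        """Yield (header_id, data) records from a ZIP extra field."""
        pos = 0
        while pos + 4 <= len(extra):
            header_id, length = struct.unpack_from("<HH", extra, pos)
            yield header_id, extra[pos + 4:pos + 4 + length]
            pos += 4 + length

    with zipfile.ZipFile("example.zip") as zf:
        for info in zf.infolist():
            for header_id, data in parse_extra(info.extra):
                if header_id == 0x0001:    # ZIP64 extended information
                    print(info.filename, "carries a ZIP64 record")
                elif header_id == 0x5455:  # "UT" extended timestamp
                    print(info.filename, "carries a UNIX timestamp record")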
Data descriptor
If the bit at offset 3 (0x08) of the general-purpose flags field is set, then the CRC-32 and file sizes are not known when the header is written. If the archive is in Zip64 format, the compressed and uncompressed size fields are 8 bytes long instead of 4 bytes long (see section 4.3.9.2). The equivalent fields in the local header (or in the Zip64 extended information extra field in the case of archives in Zip64 format) are filled with zero, and the CRC-32 and size are appended in a 12-byte structure (optionally preceded by a 4-byte signature) immediately after the compressed data:
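Whether an entry defers its sizes to a data descriptor is therefore visible in its flags; a sketch using the parsed flag bits that zipfile exposes:

    import zipfile

    with zipfile.ZipFile("example.zip") as zf:
        for info in zf.infolist():
            # Bit 3 (0x08) of the general-purpose flags marks entries whose
            # CRC-32 and sizes were written in a trailing data descriptor.
            print(info.filename, "uses data descriptor:", bool(info.flag_bits & 0x08))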
Central directory file header
The central directory entry is an expanded form of the local header:
End of central directory record (EOCD)
After all the central directory entries comes the end of central directory (EOCD) record, which marks the end of the ZIP file:
This ordering allows a ZIP file to be created in one pass, but the central directory is also placed at the end of the file in order to facilitate easy removal of files from multiple-part (e.g. "multiple floppy-disk") archives, as previously discussed.
Compression methods
The .ZIP File Format Specification documents the following compression methods: Store (no compression), Shrink (LZW), Reduce (levels 1–4; LZ77 + probabilistic), Implode, Deflate, Deflate64, bzip2, LZMA, WavPack, PPMd, and an LZ77 variant provided by the IBM z/OS CMPSC instruction. The most commonly used compression method is DEFLATE, which is described in IETF RFC 1951.
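One practical consequence: the DEFLATE stream inside a ZIP entry is stored raw, without the zlib header and trailer, so inflating one by hand requires zlib's negative window-bits mode (a sketch):

    import zlib

    # wbits = -15 selects raw DEFLATE, the form stored in ZIP entries.
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)
    raw = compressor.compress(b"hello world " * 100) + compressor.flush()

    decompressor = zlib.decompressobj(-15)
    data = decompressor.decompress(raw) + decompressor.flush()
    assert data == b"hello world " * 100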
Other methods mentioned, but not documented in detail in the specification include: PKWARE DCL Implode (old IBM TERSE), new IBM TERSE, IBM LZ77 z Architecture (PFS), and a JPEG variant. A "Tokenize" method was reserved for a third party, but support was never added.
The word Implode is overused by PKWARE: the DCL/TERSE Implode is distinct from the old PKZIP Implode, a predecessor to Deflate. The DCL Implode is undocumented, partially due to its proprietary nature held by IBM, but Mark Adler has nevertheless provided a decompressor called "blast" alongside zlib.
Encryption
ZIP supports a simple password-based symmetric encryption system generally known as ZipCrypto. It is documented in the ZIP specification, and known to be seriously flawed. In particular, it is vulnerable to known-plaintext attacks, which are in some cases made worse by poor implementations of random-number generators.
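The scheme is nevertheless widely implemented for compatibility; Python's zipfile, for example, can decrypt (but not create) ZipCrypto entries (a sketch with a hypothetical archive and password):

    import zipfile

    with zipfile.ZipFile("legacy.zip") as zf:
        # zipfile supports only traditional PKWARE ("ZipCrypto") decryption;
        # it cannot encrypt, nor read AES-encrypted entries.
        data = zf.read("secret.txt", pwd=b"password")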
New features including new compression and encryption (e.g. AES) methods have been documented in the ZIP File Format Specification since version 5.2. A WinZip-developed AES-based open standard ("AE-x" in APPNOTE) is also used by 7-Zip and Xceed, but some vendors use other formats. PKWARE SecureZIP (SES, proprietary) also supports RC2, RC4, DES, Triple DES encryption methods, Digital Certificate-based encryption and authentication (X.509), and archive header encryption. It is, however, patented (see the Strong encryption controversy section below).
File name encryption is introduced in .ZIP File Format Specification 6.2, which encrypts metadata stored in Central Directory portion of an archive, but Local Header sections remain unencrypted. A compliant archiver can falsify the Local Header data when using Central Directory Encryption. As of version 6.2 of the specification, the Compression Method and Compressed Size fields within Local Header are not yet masked.
ZIP64
The original format had a 4 GB (2^32 bytes) limit on various things (uncompressed size of a file, compressed size of a file, and total size of the archive), as well as a limit of 65,535 (2^16−1) entries in a ZIP archive. In version 4.5 of the specification (which is not the same as v4.5 of any particular tool), PKWARE introduced the "ZIP64" format extensions to get around these limitations, increasing the limits to 16 EB (2^64 bytes). In essence, it uses a "normal" central directory entry for a file, followed by an optional "zip64" directory entry, which has the larger fields.
The format of the Local file header (LOC) and Central directory entry (CEN) are the same in ZIP and ZIP64. However, ZIP64 specifies an extra field that may be added to those records at the discretion of the compressor, whose purpose is to store values that do not fit in the classic LOC or CEN records. To signal that the actual values are stored in ZIP64 extra fields, they are set to 0xFFFF or 0xFFFFFFFF in the corresponding LOC or CEN record.
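A sketch of the resulting reader logic, assuming the 32-bit size fields and the raw data of a ZIP64 extra record (header ID 0x0001) have already been pulled out of a CEN record:

    import struct

    SENTINEL = 0xFFFFFFFF

    def resolve_sizes(uncompressed32: int, compressed32: int, zip64_data: bytes):
        """Return (uncompressed, compressed), consulting the ZIP64 record
        only for fields whose classic value is the 0xFFFFFFFF sentinel.
        ZIP64 stores the present fields in order, uncompressed size first."""
        sizes, pos = [], 0
        for size32 in (uncompressed32, compressed32):
            if size32 == SENTINEL:
                (size64,) = struct.unpack_from("<Q", zip64_data, pos)
                sizes.append(size64)
                pos += 8
            else:
                sizes.append(size32)
        return tuple(sizes)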
On the other hand, the format of EOCD for ZIP64 is slightly different from the normal ZIP version.
It is also not necessarily the last record in the file: a ZIP64 End of Central Directory Locator follows (an additional 20 bytes at the end).
The File Explorer in Windows XP does not support ZIP64, but the Explorer in Windows Vista and later do. Likewise, some extension libraries support ZIP64, such as DotNetZip, QuaZIP and IO::Compress::Zip in Perl. Python's built-in zipfile supports it since 2.5 and defaults to it since 3.4. OpenJDK's built-in java.util.zip supports ZIP64 from version Java 7. The Android Java API supports ZIP64 since Android 6.0. macOS Sierra's Archive Utility notably does not support ZIP64, and can create corrupt archives when ZIP64 would be required. However, the ditto command shipped with macOS will unzip ZIP64 files. More recent versions of macOS ship with Info-ZIP's zip and unzip command-line tools, which do support Zip64: to verify, run zip -v and look for "ZIP64_SUPPORT".
Combination with other file formats
The file format allows for a comment containing up to 65,535 (2^16−1) bytes of data to occur at the end of the file after the central directory. Also, because the central directory specifies the offset of each file in the archive with respect to the start, it is possible for the first file entry to start at an offset other than zero, although some tools, for example gzip, will not process archive files that do not start with a file entry at offset zero.
This allows arbitrary data to occur in the file both before and after the ZIP archive data, and for the archive to still be read by a ZIP application. A side-effect of this is that it is possible to author a file that is both a working ZIP archive and another format, provided that the other format tolerates arbitrary data at its end, beginning, or middle. Self-extracting archives (SFX), of the form supported by WinZip, take advantage of this, in that they are executable (.exe) files that conform to the PKZIP AppNote.txt specification, and can be read by compliant zip tools or libraries.
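The tolerance for leading data is easy to demonstrate with zipfile, which, like most ZIP readers, locates the archive from the end of the file (a sketch):

    import zipfile

    with zipfile.ZipFile("inner.zip", "w") as zf:
        zf.writestr("payload.txt", "still readable")

    # Prepend arbitrary bytes; the central directory at the end still resolves.
    with open("inner.zip", "rb") as src, open("combo.bin", "wb") as dst:
        dst.write(b"arbitrary leading data, e.g. an executable stub\n")
        dst.write(src.read())

    print(zipfile.is_zipfile("combo.bin"))  # True
    with zipfile.ZipFile("combo.bin") as zf:
        print(zf.read("payload.txt"))       # b'still readable'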
This property of the format, and of the JAR format which is a variant of ZIP, can be exploited to hide rogue content (such as harmful Java classes) inside a seemingly harmless file, such as a GIF image uploaded to the web. This so-called GIFAR exploit has been demonstrated as an effective attack against web applications such as Facebook.
Limits
The minimum size of a .ZIP file is 22 bytes. Such an empty zip file contains only an End of Central Directory Record (EOCD):
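Those 22 bytes are just an EOCD record whose counts, sizes and offsets are all zero, and can be produced directly (a sketch):

    import struct
    import zipfile

    # Signature; disk numbers; entry counts; directory size and offset;
    # comment length - every field after the signature is zero.
    empty = struct.pack("<IHHHHIIH", 0x06054b50, 0, 0, 0, 0, 0, 0, 0)
    assert len(empty) == 22

    with open("empty.zip", "wb") as f:
        f.write(empty)
    print(zipfile.is_zipfile("empty.zip"))  # True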
The maximum size for both the archive file and the individual files inside it is 4,294,967,295 bytes (2^32−1 bytes, or 4 GB minus 1 byte) for standard ZIP. For ZIP64, the maximum size is 18,446,744,073,709,551,615 bytes (2^64−1 bytes, or 16 EB minus 1 byte).
Proprietary extensions
Extra field
The .ZIP file format includes an extra field facility within file headers, which can be used to store extra data not defined by existing ZIP specifications, and which allow compliant archivers that do not recognize the fields to safely skip them. Header IDs 0–31 are reserved for use by PKWARE. The remaining IDs can be used by third-party vendors for proprietary usage.
Strong encryption controversy
When WinZip 9.0 public beta was released in 2003, WinZip introduced its own AES-256 encryption, using a different file format, along with the documentation for the new specification. The encryption standards themselves were not proprietary, but PKWARE had not updated APPNOTE.TXT to include Strong Encryption Specification (SES) since 2001, which had been used by PKZIP versions 5.0 and 6.0. WinZip technical consultant Kevin Kearney and StuffIt product manager Mathew Covington accused PKWARE of withholding SES, but PKZIP chief technology officer Jim Peterson claimed that certificate-based encryption was still incomplete.
In another controversial move, PKWare applied for a patent on 16 July 2003 describing a method for combining ZIP and strong encryption to create a secure file.
In the end, PKWARE and WinZip agreed to support each other's products. On 21 January 2004, PKWARE announced the support of WinZip-based AES compression format. In a later version of WinZip beta, it was able to support SES-based ZIP files. PKWARE eventually released version 5.2 of the .ZIP File Format Specification to the public, which documented SES. The Free Software project 7-Zip also supports AES, but not SES in ZIP files (as does its POSIX port p7zip).
When using AES encryption under WinZip, the compression method is always set to 99, with the actual compression method stored in an AES extra data field. In contrast, Strong Encryption Specification stores the compression method in the basic file header segment of Local Header and Central Directory, unless Central Directory Encryption is used to mask/encrypt metadata.
Implementation
There are numerous .ZIP tools available, and numerous .ZIP libraries for various programming environments; licenses used include proprietary and free software. WinZip, WinRAR, Info-ZIP, 7-Zip, PeaZip and B1 Free Archiver are well-known .ZIP tools, available on various platforms. Some of those tools have library or programmatic interfaces.
Some development libraries licensed under open-source agreements are libzip, libarchive, and Info-ZIP. For Java: Java Platform, Standard Edition contains the package "java.util.zip" to handle standard .ZIP files; the Zip64File library specifically supports large files (larger than 4 GB) and treats .ZIP files using random access; and the Apache Ant tool contains a more complete implementation released under the Apache Software License.
The Info-ZIP implementations of the .ZIP format add support for Unix filesystem features, such as user and group IDs, file permissions, and support for symbolic links. The Apache Ant implementation is aware of these to the extent that it can create files with predefined Unix permissions. The Info-ZIP implementations also know how to use the error correction capabilities built into the .ZIP compression format. Some programs do not, and will fail on a file that has errors.
The Info-ZIP Windows tools also support NTFS filesystem permissions, and will make an attempt to translate from NTFS permissions to Unix permissions or vice versa when extracting files. This can result in potentially unintended combinations, e.g. .exe files being created on NTFS volumes with executable permission denied.
Versions of Microsoft Windows have included support for .ZIP compression in Explorer since the Microsoft Plus! pack was released for Windows 98. Microsoft calls this feature "Compressed Folders". Not all .ZIP features are supported by the Windows Compressed Folders capability. For example, creating encrypted archives is not supported in Windows 10 Home edition, although it can decrypt them. Unicode entry encoding is not supported until Windows 7, while split and spanned archives are not readable or writable by the Compressed Folders feature, nor is AES encryption supported.
Microsoft Office started using the ZIP archive format in 2006 for its Office Open XML files (.docx, .xlsx, .pptx, etc.), which became the default file format with Microsoft Office 2007.
Legacy
There are numerous other standards and formats using "zip" as part of their name. For example, zip is distinct from gzip, the latter being defined in IETF RFC 1952. Both zip and gzip primarily use the DEFLATE algorithm for compression. Likewise, the ZLIB format (IETF RFC 1950) also uses the DEFLATE compression algorithm, but specifies different headers for error and consistency checking. Other common, similarly named formats and programs with different native formats include 7-Zip, bzip2, and rzip.
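The relationship is easy to demonstrate with Python's standard library: the same DEFLATE stream sits inside all three containers, and only the framing differs (this sketch assumes nothing beyond the stdlib):

```python
import gzip
import zlib

data = b"the same DEFLATE stream behind three different wrappers"

raw = zlib.compressobj(wbits=-15)            # negative wbits: raw DEFLATE
raw_deflate = raw.compress(data) + raw.flush()

zlib_wrapped = zlib.compress(data)           # RFC 1950: header + Adler-32
gzip_wrapped = gzip.compress(data)           # RFC 1952: header + CRC-32

print(raw_deflate[:2].hex())   # no magic bytes, the stream starts at once
print(zlib_wrapped[:2].hex())  # typically '789c', the zlib header
print(gzip_wrapped[:2].hex())  # '1f8b', the gzip magic number
```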
Concerns
The theoretical maximum compression factor for a raw DEFLATE stream is about 1032 to one, but by exploiting the ZIP format in unintended ways, ZIP archives with compression ratios of billions to one can be constructed. These zip bombs unzip to extremely large sizes, overwhelming the capacity of the computer they are decompressed on.
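The per-stream limit is easy to approach with ordinary tools; actual zip bombs then multiply the ratio by nesting or overlapping entries, which this small Python demonstration deliberately does not do:

```python
import zlib

# One mebibyte of zeros compresses to roughly a kilobyte, close to
# DEFLATE's theoretical per-stream limit of about 1032:1.
payload = b"\x00" * (1024 * 1024)
packed = zlib.compress(payload, 9)
print(f"{len(payload) / len(packed):.0f}:1")  # on the order of 1000:1
```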
See also
Comparison of file archivers
Comparison of archive formats
List of archive formats
References
External links
.ZIP Application Note landing page for PKWARE's current and historical .ZIP file format specifications
ISO/IEC 21320-1:2015 — Document Container File — Part 1: Core
Zip Files: History, Explanation and Implementation
Shrink, Reduce, and Implode: The Legacy Zip Compression Methods
APPNOTE.TXT mirror
Structure of PKZip file Format specifications, graphical tables
American inventions
Archive formats |
189297 | https://en.wikipedia.org/wiki/MSN%20TV | MSN TV | MSN TV (formerly WebTV) was a web access product consisting of a thin client device which used a television for display (instead of using a computer monitor), and the online service that supported it. The device design and service was developed by WebTV Networks, Inc., a company started in 1995. The WebTV product was announced in July 1996 and later released on September 18, 1996. In April 1997, the company was purchased by Microsoft Corporation and in July 2001, was rebranded to MSN TV and absorbed into MSN.
While most thin clients developed in the mid-1990s were positioned as diskless workstations for corporate intranets, WebTV was positioned as a consumer product, primarily targeting those looking for a low-cost alternative to a computer for Internet access. The WebTV and MSN TV devices allowed a television set to be connected to the Internet, mainly for web browsing and e-mail. The WebTV/MSN TV service also offered its own exclusive features: a "walled garden" newsgroup service; news and weather reports; storage for user bookmarks (Favorites); IRC (and for a time, MSN Chat) chatrooms; a Page Builder service that let WebTV users create and host webpages that could be shared with others via a link; the ability to play background music from a predefined list of songs while browsing; dedicated sections of aggregated content covering various topics (entertainment, romance, stocks, etc.); and, a few years after Microsoft bought WebTV, integration with MSN Messenger and Hotmail. The setup consisted of a thin client in the form of a set-top box, a remote control, a wireless keyboard (sold as an option until the 2000s), and a dial-up network connection; with the introduction of Rogers Interactive TV and the MSN TV 2, broadband became an option.
The WebTV/MSN TV service lasted for 17 years, shutting down on September 30, 2013; subscribers were able to migrate their data well before that date.
The original WebTV network relied on a Solaris backend and telephone lines to deliver service to customers via dial-up, with "frontend servers" that talked directly to the boxes using a custom protocol, the WebTV Protocol (WTVP), to authenticate users and deliver content. For the MSN TV 2, however, a completely new service based on IIS servers and regular HTTP/HTTPS was used.
History
Concept
Co-founder Steve Perlman is credited with the idea for the device. He first combined computer and television as a high-school student when he decided his home PC needed a graphics display. He went on to build software for companies such as Apple and Atari. While working at General Magic, the idea of bringing TVs and computers together resurfaced.
One night, Perlman was browsing the web and came across a Campbell's soup website with recipes. He thought that the people who might be interested in what the site had to offer were not using the web. It occurred to him that if the television audience was enabled by a device to augment television viewing with receiving information or commercial offers through the television, then perhaps the web address could act as a signal and the television cable could be the conduit.
Early history
A Silicon Valley startup, WebTV Networks was founded in July 1995. Perlman brought along co-founders Bruce Leak and Phil Goldman shortly after conceiving the basic concept. The company operated out of half of a former BMW car dealership building on Alma Street in Palo Alto, California, which was being used for storage by the Museum of American Heritage. WebTV had been able to obtain the space for very low rent, but it was suboptimal for technology development.
Before incorporation, the company referred to itself as Artemis Research to disguise the nature of its business. The info page of its original website explained that it was studying "sleep deprivation, poor diet and no social life for extended periods on humans and dwarf rabbits". The dwarf rabbit reference was an inside joke among WebTV's hard-working engineers—Phil Goldman's pet house rabbit Bowser (inspiration for the General Magic logo) was often found roaming the WebTV building late into the night while the engineers were working—although WebTV actually received inquiries from real research groups conducting similar studies and seeking to exchange data.
The company hired many engineers and a few business development employees early on, having about 30 total employees by October 1995. Two early employees of Artemis were from Apple Inc: Andy Rubin, creator of the Android cell phone OS, and Joe Britt. Both men would later be two of the founders of Danger, Inc. (originally Danger Research).
WebTV Networks' business model was to license a reference design to consumer electronics companies for a WebTV Internet Terminal, a set-top box that attached to a telephone line and automatically connected to the Internet through a dial-up modem. The consumer electronics companies' income was derived from selling the WebTV set-top box. WebTV's income was derived from operating the WebTV Service, the Internet-based service to which the set-top boxes connected and for which it collected a fee from WebTV subscribers. The service provided features such as HTML-based email, and proxied websites, which were reformatted by the service before they were sent to set-top box, to make them display more efficiently on a television screen.
WebTV closed its first round of financing, US$1,500,000, from Marvin Davis in September 1995, which it used to develop its prototype set-top box, using proprietary hardware and firmware. The company also used the financing to develop the online service that the set-top boxes connected to. WebTV leveraged their limited startup funds by licensing a reference design for the appliance to Sony and Philips. Eventually other companies would also become licensees and WebTV would profit on the monthly service fees. After 22 months, the company was sold to Microsoft for $425 million, with each of the three founders receiving $64 million.
Barely surviving to reach announcement
By the spring of 1996 WebTV Networks employed approximately 70 people, many of them finishing their senior year at nearby Stanford University, or former employees of either Apple Computer or General Magic. WebTV had started negotiating with Sony to manufacture and distribute the WebTV set-top box, but negotiations had taken much longer than WebTV had expected, and WebTV had used up its initial funding. Steve Perlman liquidated his assets, ran up his credit cards and mortgaged his house to provide bridge financing while seeking additional venture capital. Because Sony had insisted upon exclusive distribution rights for the first year, WebTV had no other distribution partner in place, and just before WebTV was to close venture capital financing from Brentwood Associates, Sony sent WebTV a certified letter stating it had decided not to proceed with WebTV. It was a critical juncture for WebTV, because the Brentwood financing had been predicated on the expectation of a future relationship with Sony, and if Brentwood had decided to not proceed with the financing after being told that Sony had backed out, WebTV would have gone bankrupt and Perlman would have lost everything. But Brentwood decided to proceed with the financing despite losing Sony's involvement, and further financing from Paul Allen's Vulcan Ventures soon followed.
WebTV then proceeded to close a non-exclusive WebTV set-top box distribution deal with Philips, which provided competitive pressure causing Sony to change its mind, to resume its relationship with WebTV and also to distribute WebTV.
WebTV was announced on July 10, 1996, generating a large wave of press attention as not only the first television-based use of the World Wide Web, but also as the first consumer-electronics device to access the World Wide Web without a personal computer. After the product's announcement, the company closed additional venture financing, including investments from Microsoft Corporation, Citicorp, Seagate Technology, Inc., Soros Capital, L.P., St. Paul Venture Capital and Times Mirror Company.
The launch
WebTV was launched on September 18, 1996, within one year after its first round of financing, with WebTV set-top boxes in stores from Sony and Philips, and WebTV's online service running from servers in its tiny office, still based in the former BMW dealership.
The initial price for the WebTV set-top box was US$349 for the Sony version and US$329 for the Philips version, with a wireless keyboard available for about an extra US$50. The monthly service fee initially was US$19.95 per month for unlimited Web surfing and e-mail.
There was little difference between the first Sony and Philips WebTV set-top boxes, except for the housing and packaging. The WebTV set-top box had very limited processing and memory resources (just a 112 MHz MIPS CPU, 2 megabytes of RAM, 2 megabytes of ROM, 1 megabyte of flash memory) and relied upon a connection through a 33.6 kbit/s dialup modem to reach the WebTV Service, where powerful servers provided back-end support to the set-top boxes, enabling a full web-browsing and email experience for subscribers.
Initial sales were slow. By April 1997, WebTV had only 56,000 subscribers, but the pace of subscriber growth accelerated after that, achieving 150,000 subscribers by Autumn 1997, about 325,000 subscribers by April 1998 and about 800,000 subscribers by May 1999. WebTV achieved profitability by Spring 1998, and grossed over US$1.3 billion in revenue through its first 8 years of operation. In 2005 WebTV was still grossing US$150 million per year in revenue with 65% gross margin.
WebTV briefly classified as a weapon
Because WebTV utilized strong encryption, specifically the 128-bit encryption (not SSL) used to communicate with its proprietary service, upon launch in 1996, WebTV was classified as "munitions" (a military weapon) by the United States government and was therefore barred from export under United States security laws at the time. Because WebTV was widely distributed in consumer electronic stores under the Sony and Philips brands for only US$325, its munitions classification was used to argue that the US should no longer consider devices incorporating strong encryption to be munitions, and should permit their export. Two years later, in October 1998, WebTV obtained a special exemption permitting its export, despite the strong encryption, and shortly thereafter, laws concerning export of cryptography in the United States were changed to generally permit the export of strong encryption.
Microsoft takes notice
In February 1997, in an investor meeting with Microsoft, Steve Perlman was approached by Microsoft's Senior Vice President for Consumer Platforms Division, Craig Mundie. Despite the fact that the initial WebTV sales had been modest, Mundie expressed that Microsoft was impressed with WebTV and saw significant potential both in WebTV's product offering and in applying the technology to other Microsoft consumer and video product offerings. Microsoft offered to acquire WebTV, build a Microsoft campus in Silicon Valley around WebTV, and establish WebTV as a Microsoft division to develop television-based products and services, with Perlman as the division's president.
Discussions proceeded rapidly, involving Bill Gates, then CEO of Microsoft, personally. Gates called Perlman at his home on Easter Sunday in March 1997, and Perlman described to Gates WebTV's next generation products in development, which would be the first consumer devices to incorporate hard disks, including the WebTV Plus, and the WebTV Digital Video Recorders. Gates' interest was piqued, and negotiations between Microsoft and WebTV rapidly proceeded to closure, with both sides working around the clock to get the deal done. Negotiation time was so short that the hour lost due to the change to Daylight Saving Time the night before the planned announcement, which the parties had neglected to factor into their schedule, almost left them without enough time to finish the deal.
On April 6, 1997, 20 months after WebTV's founding, and only six weeks after negotiations with Microsoft began, during a scheduled speech at the National Association of Broadcasters conference in Las Vegas, Nevada, Craig Mundie announced that Microsoft had acquired WebTV. The acquisition price was US$503 million, but WebTV was so young a company that most of the employees' stock options had yet to be vested. As such, the vested shares at the time of the announcement amounted to US$425 million, and that was the acquisition price announced.
Subsequent to the acquisition, WebTV became a Silicon Valley-based division of Microsoft, with Steve Perlman as its president. The WebTV division began developing most of Microsoft's television-based products, including the first satellite Digital Video Recorders (the DishPlayer for EchoStar's Dish Network and UltimateTV for DirecTV), Microsoft's cable TV products, the Xbox 360 hardware, and Microsoft's Mediaroom IPTV platform.
In May 1999, America Online announced that it was going to compete directly with Microsoft in delivering Internet over television sets by introducing AOL TV.
In June 1999, Steve Perlman left Microsoft and started Rearden, a business incubator for new companies in media and entertainment technology.
MSN TV rebranding
In July 2001, six years after WebTV's founding, Microsoft rebranded WebTV as MSN TV. Contracts were terminated with all other licensed manufacturers of the WebTV hardware except RCA, leaving them as the sole manufacturer of further hardware. Promotion of the WebTV brand ended.
In later years, the number of consumers using dialup access dropped, and as the Classic and Plus clients were restricted to dialup access, their subscriber count began to drop as well. Because the WebTV client was subsidized hardware, the company had always required individual subscriptions for each box; once the subsidies ended, MSN started offering free use of MSN TV boxes to computer users who subscribed to MSN, as an incentive not to depart for discount dialup ISPs.
Broadband MSN TV
In 2001, Rogers Cable partnered with Microsoft to introduce "Rogers Interactive TV" in Canada. The service enabled Rogers' subscribers to access the Web via their TV sets, create their own websites, shop online, chat, and access e-mail. This initiative was the first broadband implementation of MSN TV.
In late 2004, Microsoft introduced the MSN TV 2. Codenamed "Deuce", it was capable of broadband access and introduced a revamped user interface and new capabilities. These included offline viewing of media (provided the user was already logged in), audio and video streaming (broadband only), Adobe Reader, support for viewing Microsoft Office documents (namely Microsoft Word), Windows Media Player, the ability to access Windows computers on a home network and function as a media player, and even the ability to use a mouse, although mouse support was most likely unintentional at first. The MSN TV 2 also kept some key features from the first generation of WebTV/MSN TV, such as its MIDI engine and the ability to play background music while browsing. The MSN TV 2 used a different online service from the original WebTV/MSN TV, but offered many of the same features, such as chatrooms, instant messaging, weather, news, aggregated "info centers", and newsgroups, and like the original, still required a subscription. For those with broadband, the fee was US$99 yearly.
For inexpensive devices, the cost of licensing an operating system is substantial. For Microsoft, however, its operating system was already a sunk cost, and when Microsoft released the MSN TV 2, it adopted standard PC architecture and used a customized version of Windows CE as the operating system. This allowed the MSN TV 2 to stay current more easily and inexpensively.
Discontinuation
By late 2009, MSN TV hardware was no longer being sold by Microsoft, although service continued for existing users for the next four years. Attempting to go to the "Buy MSN TV" section on the MSN TV website at the time resulted in the following message being shown:
"Sorry, MSN TV hardware is no longer available for purchase from Microsoft. Microsoft continues to support the subscription service for existing WebTV and MSN TV customers."
On July 1, 2013, an email was sent to subscribers stating that the MSN TV service would shut down on September 30, 2013. In the interim, subscribers were advised to convert any accounts on the first-generation service to Microsoft accounts and to migrate favorites and other data from their MSN TV accounts to SkyDrive. On September 30, 2013, the WebTV/MSN TV service fully closed. Existing customers were offered MSN Dial-Up Internet Access accounts at a promotional rate. Customer service remained available for non-technical and billing questions until January 15, 2014.
Technology
Set-top box
Since the WebTV set-top box was a dedicated web-browsing appliance that did not need to be based on a standard operating system, the cost of licensing an operating system could be avoided. All first-generation boxes featured a 64-bit MIPS RISC CPU, boot ROM and flash ROM storage (on all Classic and New Plus models), RAM, and a smart card reader, which was never significantly utilized. The web browser that ran on the set-top box was compatible with both Netscape Navigator and Microsoft Internet Explorer standards. The first WebTV Classic set-top boxes from Sony and Philips had a 33.6k modem, 2 MB of RAM, a boot ROM, and flash ROM; later models had 56k modems and increased ROM/RAM capacity. The WebTV set-top boxes leveraged the service's server-side caching proxy, which reformatted and compressed web pages before sending them to the box, a capability generally unavailable to dial-up ISP users at the time and therefore one WebTV had to develop itself. For web browsing, given WebTV's thin-client software, there was no need for a hard disk, and by putting the browser in non-volatile memory, upgrades could be downloaded from the WebTV service onto the set-top box.
The WebTV set-top box was designed so that at a specified time, it would check to see if there was any email waiting. If there was, it would illuminate a red LED on the device so the consumer would know it was worth connecting to pick up their mail.
A second model, the "Plus", was introduced a year later. This model featured a TV tuner for watching television in a picture-in-picture (PIP) window; allowed users to capture video stills from the tuner or composite inputs as JPEGs that could be uploaded to a WebTV discussion post, an email, or a "scrapbook" on the user's account for later use; and included a video-tuner feature that let users schedule a VCR, in a manner similar to what TiVo offered several years later. The Plus also included a 56k modem; support for ATVEF, a technology that allowed users to download special script-laden pages to interact with television shows; and, in original models, a 1.1 GB hard drive in place of the ROM chips used in the Classic models, mainly to accommodate large nightly downloads of television schedules. Around fall 1998, plans for a "Derby" revision of the WebTV Plus were announced, rumored to have a faster CPU and more memory. By early 1999, only one Derby unit had been produced, by Sony, as a revision of its INT-W200 Plus model; no substantial changes were made to the hardware beyond a CPU upgrade with no change in clock speed and a switch to a softmodem. As chip prices dropped, later versions of the Plus used an M-Systems DiskOnChip flash ROM instead, alongside an increase in RAM capacity to 16 MB.
WebTV produced reference designs of models incorporating a disk-based personal video recorder and a satellite tuner for EchoStar's Dish Network (referred to as the DishPlayer) and for DirecTV (called UltimateTV). In 2001, EchoStar sued Microsoft for failing to support the WebTV DishPlayer. EchoStar subsequently sought to acquire DirecTV and was the presumptive acquirer, but EchoStar was ultimately blocked by the Federal Communications Commission. While EchoStar's lawsuit against Microsoft was in process, DirecTV (presumptively acquired and controlled by EchoStar) dropped UltimateTV (thus ending Microsoft's satellite product initiatives) and picked TiVo's DirecTV product as its only Digital Video Recorder offering.
As an ease-of-use design consideration, WebTV decided early on to reformat pages rather than require sideways scrolling. As entry-level PCs evolved from the VGA resolution of 640x480 to the SVGA resolution of 800x600, and web site dimensions followed suit, reformatting PC-sized web pages to fit the 560-pixel usable width of a United States NTSC television screen became less satisfactory. The WebTV browser also rendered HTML frames as tables in order to avoid the need for a mouse.
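As a toy illustration of such server-side reformatting (this is not WebTV's actual algorithm, and the real proxy was far more sophisticated), the Python sketch below clamps explicit pixel widths to the 560-pixel TV canvas:

```python
from html.parser import HTMLParser

TV_WIDTH = 560  # usable width of a US NTSC screen, in pixels

class TVReformatter(HTMLParser):
    """Rewrites width attributes so pages fit a TV screen without
    sideways scrolling; a toy stand-in for WebTV's proxy reformatter."""
    def __init__(self):
        super().__init__()
        self.out = []
    def handle_starttag(self, tag, attrs):
        fixed = []
        for name, value in attrs:
            if name == "width" and value and value.isdigit():
                value = str(min(int(value), TV_WIDTH))
            fixed.append((name, value))
        rendered = "".join(
            f" {n}" if v is None else f' {n}="{v}"' for n, v in fixed)
        self.out.append(f"<{tag}{rendered}>")
    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")
    def handle_data(self, data):
        self.out.append(data)

r = TVReformatter()
r.feed('<table width="800"><tr><td>hello</td></tr></table>')
print("".join(r.out))  # the table is clamped to width="560"
```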
In Japan, WebTV had a small run starting around late 1997, with a couple of "Classic" Japanese units released with hard drives and twice the RAM of American Classic and old Plus units of the time. In the spring of 1999, customers were given the option of using Sega's Dreamcast video game console, which came with a built-in modem, to access WebTV. This was made possible when Sega and Microsoft collaborated to port the WebTV technology to the Dreamcast, using the Windows CE abstraction layer supported on the console and what is believed to be a version of the Internet Explorer 2.0 browser engine. The Japanese service ended sometime in March 2002.
Security
Security was a persistent issue with the WebTV/MSN TV service, primarily because the proprietary URLs used to perform certain actions on the service had very few verification procedures in place and, for a while, could easily be executed through the URL panel on the set-top box. Starting around 1998, self-proclaimed WebTV hackers figured out ways to exploit these vulnerable URLs, gaining, among other things: access to internal sections of the production WebTV service such as "Tricks", which hosted several pages designed to troubleshoot the WebTV box and service; the ability to remotely change the settings of a subscriber's box; and the ability to remotely perform actions on any account, including deleting it, since the service did not verify whether such requests came from the account holder. These hackers also found ways to connect to internal WebTV services and discovered content previously unknown to the public, including a version of Doom for WebTV Plus units that could, at one point, be downloaded from one of these services.

As WebTV hacking picked up, WebTV Networks worked hard to keep rogue users on the production service, going as far as terminating anyone involved in unauthorized usage of the WebTV service, regardless of motive. The most notable termination was that of WebTV user Matt Squadere, known by his internet handle MattMan69, whose access (along with that of others) was cut off without warning for connecting to the internal WebTV services "TestDrive" and "Weekly", which was possible from the Tricks section of WebTV using a password that was being shared around at the time. Squadere was specifically terminated after he accessed TestDrive a second time and reported it to WebTV Networks' 1-800 number, for which he was initially rewarded with a WebTV shirt. Around the same time, WebTV changed its privacy policy without warning subscribers beforehand, giving the company the legal right to terminate any user for any reason without explanation. This caused a massive uproar from subscribers regarding the fairness and ethics of WebTV Networks' legal agreements.

After this incident, it appears that further WebTV hacking endeavors were kept secret among those well known in the hacking scene rather than being reported to WebTV Networks directly, supposedly so that already-discovered methods that had not yet been closed off could remain in use. This included findings on the more technical workings of the WebTV service, such as protocol security and service URLs that were still exploitable. Some of the remaining hacks were also used to target unsuspecting WebTV users. One example, concerning unauthorized access to another user's WebTV account, is documented in the "Tricks/Hacks Archive" section of Squadere's current site, "WebTV MsnTV Secret Pics":
...I chose my victims by reading the News Groups. I would look for those punks that liked to talk sh*t, you know the ones that swore up and down that they could hack your account of fry your webtv unit but in reality they couldn't even access the home page. I would also target those lamers that thought they were cool cause they knew how to send a e-mail bomb that could power off your webtv box and thought they were king shit, lmao!
WebTV/MSN TV was also the victim of a virus dubbed "NEAT", written in July 2002 by 43-year-old David Jeansonne, which changed the local dial-up access number on victims' boxes to 911, to be dialed the next time the box connected. It was sent to 18 MSN TV users as an email attachment with the subject "NEAT", disguised as a tool that could change the colors and fonts of the MSN TV user interface. Some of the initial victims reportedly forwarded it to 3 other users, bringing the total victim count to 21. At least 10 of the victims reported having police show up at their homes as a result of their boxes dialing 911. There were also claims that the virus could mass-mail itself, although this was never properly confirmed while the virus was prevalent. Its author was arrested in February 2004 and charged with cyberterrorism.
Protocols
The first generation of the WebTV/MSN TV service appears to have used a few protocols, but the main one, carrying the majority of service communication, was WTVP, the WebTV Protocol. WTVP is a TCP-based protocol, essentially a proprietary variant of HTTP 1.0 that could serve both standard web content and specialized service content to WebTV/MSN TV users. It introduced its own protocol extensions, including 128-bit RC4-based message encryption, ticket-based authorization, a proprietary challenge-response authentication scheme that both verified clients logging in to the service and supplied them with session keys for message encryption, and persistent connections. The protocol was supported by all first-generation WebTV/MSN TV devices and the Sega Dreamcast release of WebTV until the service was discontinued in September 2013 (March 2002 in Japan). Another protocol believed to have been used by the service, dubbed "Mail Notify", is a UDP-based protocol thought to have taken part in delivering e-mail notifications to WebTV boxes; its existence has only been confirmed in a leaked Microsoft document, and it is unclear how it operated or whether it was a client-side or server-side component.

WTVP had extremely minimal documentation during WebTV's prime. Only in 2019, six years after the service shut down, did further attempts at documentation appear, initially through a third-party proof-of-concept server dubbed the "WebTV Server Emulator", which implemented only the bare minimum of the service and documented little about it. It has proven difficult to find WebTV staff who remember the protocol's technical details, let alone any with current contact information, and the few members of the WebTV hacking scene who know how the protocols work have been hesitant to release significant information about them when asked. Since 2021, the "WebTV/MSN TV Wiki" project, run by someone outside WebTV staff and its hacking scene, has attempted to explore WTVP and other technical aspects of WebTV/MSN TV in a more detailed and concise fashion, and open-source software projects have since been started that aim to create a working WebTV/MSN TV service while documenting as much of the service protocols as possible. With few resources on the technical side of WebTV and few people interested enough to work out how the service operated and share their findings publicly, however, progress on documenting these protocols has been slow.
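Since the real WTVP handshake remains undocumented, the following Python sketch is only a generic illustration of the two building blocks described above: a challenge-response login that yields a 128-bit RC4 session key. Every detail here (the HMAC-MD5 construction, the key-derivation label) is an assumption for illustration, not WebTV's actual scheme:

```python
import hashlib
import hmac
import os

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 stream cipher (shown for illustration; RC4 is long broken)."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Hypothetical challenge-response: the server issues a random challenge,
# the client proves knowledge of a shared secret, and both sides derive
# a 128-bit RC4 session key from the same material.
secret = b"per-box shared secret"
challenge = os.urandom(16)                                    # server to client
response = hmac.new(secret, challenge, hashlib.md5).digest()  # client to server

session_key = hmac.new(secret, challenge + b"session", hashlib.md5).digest()
ciphertext = rc4(session_key, b"request body goes here")
assert rc4(session_key, ciphertext) == b"request body goes here"
```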
The MSN TV 2's service was completely separate from the original one and ran on entirely different infrastructure. Like the first-generation service's protocols, it had next to no public documentation during its original run. Unlike the original iteration of WebTV/MSN TV, however, the MSN TV 2 attracted very little talent willing to study how it worked, partly because, by the time it was released, people involved in WebTV/MSN TV hacking were losing interest and some felt the MSN TV 2 was not worth the effort. As a result, the MSN TV 2 service went almost entirely undocumented. From the little information since disclosed by the few people from the original hacking scene who stuck around for the MSN TV 2, the service ran on IIS servers and used standard HTTP/HTTPS web services and webpages to communicate with set-top boxes. XML is also believed to have been one of the formats used by the MSN TV 2 service, although this has not been confirmed.
WebTV/MSN TV client hardware
Models
Confirmed
Not Confirmed
Hacking attempts
In February 2006, Chris Wade analyzed the proprietary BIOS of the MSN TV 2 set-top box and created a memory patch that allowed the box to be reflashed and used to boot Linux. An open-source solution for enabling TV output on the MSN TV 2 and similar devices was made available in 2009. There were also recorded attempts to make use of unused IDE pins on the MSN TV 2's motherboard to attach a hard drive, most likely to add storage beyond the 64 MB provided by the default CompactFlash card. Outside of these attempts, though, little was done in the realm of hacking the WebTV/MSN TV hardware.
See also
Microsoft Venus
Set-top box
SmartTV
AOL TV
Google TV (smart TV platform)
Caldera DR-WebSpyder
References
External links
"WebTV/MSN TV Wiki", focused on documenting all information about the WebTV/MSN TV product and service
Interactive television
Streaming television
MSN
Set-top box
Thin clients
Products and services discontinued in 2013
Telecommunications-related introductions in 1996
Computer-related introductions in 1996 |
189512 | https://en.wikipedia.org/wiki/Multi-function%20printer | Multi-function printer | An MFP (multi-function product/printer/peripheral), multi-functional, all-in-one (AIO), or multi-function device (MFD), is an office machine which incorporates the functionality of multiple devices in one, so as to have a smaller footprint in a home or small business setting (the SOHO market segment), or to provide centralized document management/distribution/production in a large-office setting. A typical MFP may act as a combination of some or all of the following devices: email, fax, photocopier, printer, scanner.
Types of MFPs
MFP manufacturers traditionally divided MFPs into various segments. The segments roughly divided the MFPs according to their speed in pages per minute (ppm) and duty-cycle/robustness. However, many manufacturers are beginning to avoid the segment definition for their products, as speed and basic functionality alone do not always differentiate the many features that the devices include. Two color MFPs of similar speed may end up in the same segment despite having potentially very different feature sets, and therefore very different prices. From a marketing perspective, the manufacturer of the more expensive MFP would want to differentiate its product as much as possible to justify the price difference, and therefore avoids the segment definition.
Many MFP types, regardless of the category they fall into, also come in a "printer only" variety, which is the same model without the scanner unit included. This can even occur with devices where the scanner unit physically appears highly integrated into the product.
Almost all printer manufacturers now offer multifunction printers. They are designed for home, small-business, enterprise and commercial use. Naturally, the cost, usability, robustness, throughput, output quality, etc. all vary with the various use cases. However, they all generally perform the same functions: print, scan, fax, and photocopy. In the commercial/enterprise area, most MFPs have used laser-printer technology, while personal and SOHO environments utilize inkjet methods. Typically, inkjet printers have struggled to deliver the performance and color saturation demanded by enterprise/large-business use; however, HP has recently launched a business-grade MFP using inkjet technology.
In any case, instead of rigidly defined segments based on speed, more general definitions based on intended target audience and capabilities are becoming much more common. While the sector lacks formal definitions, it is commonly agreed amongst MFP manufacturers that the products fall roughly into the following categories:
All-in-one
An All-in-one is a small desktop unit, designed for home or home-office use.
These devices focus on scan and print functionality for home use, and may come with bundled software for organising photos, simple OCR and other uses of interest to a home user. An All-in-one will always include the basic functions of Print and Scan, with most also including Copy functionality and a lesser number with Fax capabilities.
In the past, these devices were usually not networked and were generally connected by USB or parallel port. Nowadays even inexpensive all-in-one devices support Ethernet and/or Wi-Fi connections. In some cases, wireless devices require connection to a host computer by wire (usually USB) to initialize the device; once initial setup is done, they support wireless operation for all work performed thereafter.
All-in-one devices may have features oriented to home and personal use that are not found in larger devices. These functions include smart card readers, direct connection to digital cameras (e.g. PictBridge technology) and other similar uses.
The print engine of most All-in-one devices is based either on a home desktop inkjet printer or on a home desktop laser printer. They may be black-and-white or colour capable. Laser models produce better results for text, while inkjet models give more convincing results for images and are generally cheaper.
Some of these devices, like the Hewlett-Packard Photosmart C8180 printer, have a DVD burner and LightScribe functionality, with which the user can burn DVDs and etch an image onto a special LightScribe DVD or CD using software such as Roxio or the Nero AG software suite. Creating a LightScribe image takes about 10 to 25 minutes.
SOHO MFP
A large desktop or small freestanding unit, designed for Small Office/Home Office use. Often, the form factor of the MFP (desktop or freestanding) depends on the options added, such as extra paper trays.
Generally a SOHO MFP will have basic Print, Copy, Scan and Fax functionality only, but towards the larger end of the scale, may include simple document storage and retrieval, basic authentication functions and so on, making the higher end of the "SOHO" scale difficult to differentiate from the lower end of the "Office" MFP scale.
SOHO MFPs are usually networked, but may also be connected via USB or, less frequently, parallel port. SOHO MFPs may have basic finishing functionality such as duplexing, stapling and hole-punching, although this is rare. In general, document output offset, sorting and collation are standard capabilities.
By comparison to an All-in-one product, a SOHO MFP is more likely to have an automatic document feeder, greater fax capabilities and faster output-performance.
Most SOHO MFPs have their history in low-end black and white photocopiers, and the print engine is accordingly based around this type of technology.
Office MFP
A mid-sized free-standing unit, designed as a central office system.
These units are usually the most fully featured type of MFP. They include the basic Print, Copy and Scan functions with optional Fax functionality as well as networked document storage with security, authentication using common network user credentials, ability to run custom software (often a manufacturer will supply a Software development kit), advanced network scan destinations such as FTP, WebDAV, Email, SMB and NFS stores, encryption for data transmission and so on.
Office MFPs usually have moderately advanced finishing functions as options such as duplexing, stapling, holepunching, offset modes and booklet-creation.
Office MFPs are almost always networked, although some have optional or standard (but infrequently used) USB and parallel connections.
Most Office MFPs have their history in mid-range photocopiers (both colour and black-and-white), and the print engine is therefore based around this type of technology; however, Hewlett-Packard recently introduced two Office MFPs based on fixed-head inkjet technology.
Production printing MFP
A large free-standing unit, designed as a central printing-device or reprographic-department device.
These devices, while far larger and more expensive than Office MFPs, generally do not have all of the advanced network functionality of their smaller relations. They instead concentrate on high-speed, high-quality output and highly advanced finishing functionality, including book creation with cover insertion (including hot-glue binding) and so on. Production printing itself is often further divided into "light" and "heavy" production printing, with the differentiating factor being speed. A 100 ppm device, for example, falls into the light production printing category by the standards of most manufacturers.
Because of the focus on printing, while most Production Printing MFPs have a scanner, it is infrequently used and often only has very basic functionality.
There are a variety of different print engines for Production Printing MFPs, however in the "light" end of the Production Printing market, most are based on the large Office MFPs, which themselves are based on photocopier technology as described above.
Production Printing MFPs may also be known as "Print on demand" devices, or "Digital presses". This latter term can also be used to refer to the print controller controlling the MFP, however.
Characteristics
It is useful to consider the features and functions of an MFP before integrating it into a home or office environment. It is possible to have an MFP with almost all of the features and functions listed below, however a typical AIO or SOHO MFP is unlikely to incorporate many of these.
An (incomplete) list of features that an MFP may offer follows; their availability will vary depending on the MFP under consideration (in any segment):
Print features/functions
Input
Network print types available (Raw, LPR, IPP, FTP printing, print via email, etc.; a raw-printing sketch follows this feature list)
Network, USB, Parallel or other connection types
PDLs (PostScript, PCL, XPS etc.) and direct interpreters (PDF, TIFF, etc.) supported
Printer drivers available for different operating systems
Output
Ability to print directly to the MFP's internal storage function
Capability of using the MFP's finishing functions (see below under Copy features/functions)
Direct CD/DVD Label Printing (usually only available on some InkJet AIO models)
Duplex printing capability - Whether the MFP can print on both sides of a sheet of paper without manual intervention by the user
Paper formats (what kind of paper sizes and stocks the MFP can output)
Printer technology (e.g. InkJet/Laser/Color Laser)
Printing speed (typically given in pages per minute or ppm)
Resolution DPI - this is an important metric for both printing and scanning quality. (Note that print DPI is rarely greater than 600dpi actual. Some MFPs use a system similar to sub-pixel rendering on computer displays to give "enhanced" resolutions of 1200x600 or 1800x600, however it is important to note that this is not a true resolution)
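As promised under "Network print types available" above, here is a minimal sketch of raw printing (often called port-9100 or JetDirect printing): the client simply opens a TCP connection and sends bytes in a PDL the device understands. The host name below is a placeholder:

```python
import socket

def raw_print(host: str, data: bytes, port: int = 9100) -> None:
    """Send a print job to an MFP over the 'raw' port-9100 protocol.
    The printer interprets the bytes directly, so `data` should already
    be in a PDL the device supports (plain text, PCL, PostScript...)."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(data)

# Many laser MFPs print plain text; the trailing form feed (0x0C)
# ejects the page. "mfp.example.local" is a hypothetical host name.
raw_print("mfp.example.local", b"Hello from the network!\x0c")
```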
Scan features/functions
Input
Ability to retrieve a document from internal storage and send it as if it was a new "scan"
Automatic document feeder (ADF) - this allows multiple sheets of paper to be input without manually placing each piece of paper on the platen glass.
Duplex scanning capability (depends on the ADF) - Whether the MFP can scan both sides of a sheet of paper without manual intervention by the user.
Output
Scan file formats available (e.g. PDF, TIFF, JPEG, XPS, etc.)
Scan transfer methods available (e.g. FTP, WebDAV, Email, SMB, NFS, TWAIN; a scan-to-FTP sketch follows this list)
Security of scanned documents - such as PDF encryption, digital signatures and so on.
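As promised under "Scan transfer methods" above, here is a sketch of what a scan-to-FTP destination amounts to: the MFP's firmware authenticates against a configured FTP server and stores the scanned file. The Python below imitates that client side; the server details are hypothetical:

```python
from ftplib import FTP

def deliver_scan(filename: str, host: str, user: str, password: str) -> None:
    """Upload a scanned document the way a scan-to-FTP destination does:
    connect, authenticate, and STOR the file on the server."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(filename, "rb") as fh:
            ftp.storbinary(f"STOR {filename}", fh)

deliver_scan("scan0001.pdf", "files.example.local", "scans", "secret")
```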
Fax features/functions
Answering machine
Cordless telephone (generally only a consideration for AIO or smaller SOHO products)
Color Fax capability
PC Fax send and receive capability
Sent / Received Faxes Forwarding to E-mail capability (via SMTP)
TCP/IP Fax methods such as SIP Fax (Fax over IP), Network Fax (via SMTP), Internet Fax and so on
Copy features/functions
Document Finishing capabilities
Duplex output
Stapling
Single point
Staple positioning
Two point
Hole punching
International standard ISO 838 2-hole
Swedish "triohålning" 4-hole
US 3-hole
"888" 4-hole
Folding
Cover binding (generally only available on production printing models) - differs from "cover insertion", in that a cover is physically bound to the book instead of simply placing it around the other pages. Cover binding often uses hot glue to bind the cover to the finished book.
Cover insertion for booklets
Fold and centre staple (for Booklet pagination)
Half fold / crease
Tri-fold / Envelope-folding
Trimming for folded documents to avoid "creep"
Document editing modes
Booklet pagination / "perfect binding" booklet pagination
Image scaling / rotation
n-in-one (2 in 1, 4 in 1 etc.)
Page numbering / text & image stamping / watermarking
Plus, see items under "Print features/functions" output and "Scan features/functions" input
Document storage features/functions
Document storage capability of the MFP
Storage (HDD) capacity
User authentication for the stored document, and any relationship to the user authentication of the MFP (e.g. Network authentication with a server or custom software, internal only, etc.)
Network features/functions
Active Directory or other authentication functionality
Data encryption
IPv6 support
SNMP support - both private and public MIB specifications
Wireless network capability
Other features/functions
SDK availability and licensing model
Software - Many MFPs support advanced functionality through third party software such as optical character recognition. In some cases, these software components are not specific to the MFP being used, however it is important to determine this, as in other cases proprietary technologies are used that effectively tie the software to the platform.
User interface - By their nature, MFPs are complex devices. Many MFPs now include LCD screens and other user interface aids. Generally, AIO and SOHO products contain simple LCD displays, while Office MFPs contain advanced LCD panels resembling a custom computer-like user interface (some MFPs also offer optional keyboard and mouse attachments).
Internal architecture
Hardware
MFPs, like most external peripherals that are capable of functioning without a computer, are essentially a type of computer themselves. They contain memory, one or more processors, and often some kind of local storage, such as a hard disk drive or flash memory. As mentioned in the Types of MFP section, the physical print engine may be based on several technologies, however most larger MFPs are an evolution of a digital photocopier.
Security
When disposing of old printers with local storage, one should keep in mind that confidential documents (print, scan, copy jobs) are potentially still unencrypted on the printer's local storage and can be undeleted. Crypto-shredding can be a countermeasure.
Software
MFPs also run a set of instructions from their internal storage, which is comparable to a computer's operating system.
Generally, as the size and complexity of an MFP increases, the more like a computer the device becomes. It is uncommon for a small AIO or even a SOHO MFP to use a general-purpose operating system; however, many larger MFPs run Linux or VxWorks.
Additionally, many print controllers (separate from, but integral to, the MFP) also run computer operating systems, typically Linux or Microsoft Windows (often Windows NT 4.0 Embedded or Windows XP Embedded).
On top of the core operating system and firmware, the MFP will also provide several functions, equivalent to applications or in some cases daemons or services.
These functions may include (amongst many others):
Bytecode interpreters or virtual machines for internally hosted third party applications
Image conversion and processing functions
MFP Panel control for user input
Network service clients for sending of documents to different destinations
Network service servers for receiving documents for print or storage
Raster image processing functions (although, often this task is handled by a separate print controller unit instead)
Web server for remote management functions
Software
Computer systems must be equipped with the proper software to take advantage of an MFP's capabilities, an important requirement to research when considering integrating an MFP into an existing office. Some or all of the following functionality might be provided:
Device administration and configuration
Document imaging, such as ad hoc scanning
Document management such as remote scanning, document type conversion from text to PDF, OCR, etc.
Document type/paper input mode selection
Monitoring of print quotas, toner/ink levels etc.
Software development kits
In addition to specific software packages, many vendors also provide the ability for the user to develop software to communicate with the MFP through a Software development kit. Different vendors have different licensing models, from completely "closed" proprietary systems (often with large costs involved) to open strategies with no direct cost involved.
An incomplete list of these technologies is:
Nuance OmniPage
Canon MEAP (Multifunctional Embedded Application Platform)
HP Open Extensibility Platform (OXP)
Konica Minolta OpenAPI
Lexmark Embedded Solutions Framework (eSF)
Ricoh’s Device SDK
Samsung XOA - eXtensible Open Architecture
Sharp OSA (Open Systems Architecture)
Toshiba OPA (Open Platform Architecture)
Xerox EIP (Extensible Interface Platform)
In general, these technologies fall into one of two technical models - Server based, or MFP internal software.
Server based technologies use a method to communicate information to and from the MFP (often SOAP/XML based), running the operating code on a suitably powered computer on the network. This method has the advantage of being very flexible, in that the software is free to do anything that the developer can make the computer do. The only limit from the MFP itself is the capability of the MFP to display a user interface to the workings of the application. As many of the applications are based around custom printing, scanning and authentication requirements, the MFP manufacturers that use this method gravitate towards these core technologies in the user interface.
MFP internal software, by comparison, has the advantage of not requiring anything outside of the MFP. The software runs within the MFP itself and so even a complete network outage will not disrupt the software from working (unless of course the software requires a network connection for other reasons). MFP internal software is often, but not always, Java based and runs in a Java virtual machine within the MFP. The negative side to this kind of software is usually that it is much more limited in capabilities than Server based systems.
Manufacturers
MFP manufacturers/brands include
Brother
Canon
Dell
Epson
Hewlett-Packard
Kodak
Konica Minolta
Kyocera
Lexmark
Océ (Canon)
Okidata
Olivetti
Panasonic
Ricoh
Samsung
Sharp
Sindoh
Toshiba
Utax
Xerox
Infoeglobe
Note that not all of these manufacturers produce all types of MFP: some focus only on AIO products, others only on production printing, while others cover a wider range.
See also
PictBridge allows images to be printed directly from digital cameras to a printer, without a computer.
Computer printer
Canon NoteJet
References
Office equipment
Information technology management
Computer printers |
191866 | https://en.wikipedia.org/wiki/RADIUS | RADIUS | Remote Authentication Dial-In User Service (RADIUS) is a networking protocol that provides centralized authentication, authorization, and accounting (AAA) management for users who connect and use a network service. RADIUS was developed by Livingston Enterprises in 1991 as an access server authentication and accounting protocol. It was later brought into IEEE 802 and IETF standards.
RADIUS is a client/server protocol that runs in the application layer, and can use either TCP or UDP. Network access servers, which control access to a network, usually contain a RADIUS client component that communicates with the RADIUS server. RADIUS is often the back-end of choice for 802.1X authentication. A RADIUS server is usually a background process running on UNIX or Microsoft Windows.
Protocol components
RADIUS is an AAA (authentication, authorization, and accounting) protocol that manages network access. RADIUS uses two types of packets to manage the full AAA process: Access-Request, which manages authentication and authorization; and Accounting-Request, which manages accounting. Authentication and authorization are defined in RFC 2865 while accounting is described by RFC 2866.
Authentication and authorization
The user or machine sends a request to a Network Access Server (NAS) to gain access to a particular network resource using access credentials. The credentials are passed to the NAS device via the link-layer protocol—for example, Point-to-Point Protocol (PPP) in the case of many dialup or DSL providers or posted in an HTTPS secure web form.
In turn, the NAS sends a RADIUS Access Request message to the RADIUS server, requesting authorization to grant access via the RADIUS protocol.
This request includes access credentials, typically in the form of username and password or security certificate provided by the user. Additionally, the request may contain other information which the NAS knows about the user, such as its network address or phone number, and information regarding the user's physical point of attachment to the NAS.
The RADIUS server checks that the information is correct using authentication schemes such as PAP, CHAP or EAP. The user's proof of identification is verified, along with, optionally, other information related to the request, such as the user's network address or phone number, account status, and specific network service access privileges. Historically, RADIUS servers checked the user's information against a locally stored flat file database. Modern RADIUS servers can do this, or can refer to external sources—commonly SQL, Kerberos, LDAP, or Active Directory servers—to verify the user's credentials.
The RADIUS server then returns one of three responses to the NAS: 1) Access Reject, 2) Access Challenge, or 3) Access Accept.
Access Reject The user is unconditionally denied access to all requested network resources. Reasons may include failure to provide proof of identification or an unknown or inactive user account.
Access Challenge Requests additional information from the user such as a secondary password, PIN, token, or card. Access Challenge is also used in more complex authentication dialogs where a secure tunnel is established between the user machine and the RADIUS server in a way that hides the access credentials from the NAS.
Access Accept The user is granted access. Once the user is authenticated, the RADIUS server will often check that the user is authorized to use the network service requested. A given user may be allowed to use a company's wireless network, but not its VPN service, for example. Again, this information may be stored locally on the RADIUS server, or may be looked up in an external source such as LDAP or Active Directory.
Each of these three RADIUS responses may include a Reply-Message attribute which may give a reason for the rejection, the prompt for the challenge, or a welcome message for the accept. The text in the attribute can be passed on to the user in a return web page.
Authorization attributes are conveyed to the NAS stipulating terms of access to be granted. For example, the following authorization attributes may be included in an Access-Accept:
The specific IP address to be assigned to the user
The address pool from which the user's IP address should be chosen
The maximum length of time that the user may remain connected
An access list, priority queue or other restrictions on a user's access
L2TP parameters
VLAN parameters
Quality of Service (QoS) parameters
When a client is configured to use RADIUS, any user of the client presents authentication information to the client. This might be with a customizable login prompt, where the user is expected to enter their username and password. Alternatively, the user might use a link framing protocol such as the Point-to-Point Protocol (PPP), which has authentication packets which carry this information.
Once the client has obtained such information, it may choose to authenticate using RADIUS. To do so, the client creates an "Access-Request" containing such attributes as the user's name, the user's password, the ID of the client and the port ID which the user is accessing. When a password is present, it is hidden using a method based on the RSA Message Digest Algorithm MD5.
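The hiding method (RFC 2865, section 5.2) null-pads the password to a 16-byte boundary and XORs each block with MD5(shared secret + previous ciphertext block), seeding the chain with the packet's 16-byte Request Authenticator. A minimal Python sketch:

```python
import hashlib

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 User-Password hiding: null-pad to a multiple of 16 bytes,
    then XOR each 16-byte block with MD5(secret + previous ciphertext
    block); the first block uses the 16-byte Request Authenticator."""
    padded = password + b"\x00" * (-len(password) % 16)
    result, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        digest = hashlib.md5(secret + prev).digest()
        block = bytes(p ^ d for p, d in zip(padded[i:i + 16], digest))
        result += block
        prev = block
    return result

# A real client sends a random 16-byte Request Authenticator; zeros are
# used here only to make the example reproducible.
print(hide_password(b"hunter2", b"shared-secret", bytes(16)).hex())
```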
Accounting
Accounting is described in RFC 2866.
When network access is granted to the user by the NAS, an Accounting Start (a RADIUS Accounting Request packet containing an Acct-Status-Type attribute with the value "start") is sent by the NAS to the RADIUS server to signal the start of the user's network access. "Start" records typically contain the user's identification, network address, point of attachment and a unique session identifier.
Periodically, Interim Update records (a RADIUS Accounting Request packet containing an Acct-Status-Type attribute with the value "interim-update") may be sent by the NAS to the RADIUS server, to update it on the status of an active session. "Interim" records typically convey the current session duration and information on current data usage.
Finally, when the user's network access is closed, the NAS issues a final Accounting Stop record (a RADIUS Accounting Request packet containing an Acct-Status-Type attribute with the value "stop") to the RADIUS server, providing information on the final usage in terms of time, packets transferred, data transferred, reason for disconnect and other information related to the user's network access.
Typically, the client sends Accounting-Request packets until it receives an Accounting-Response acknowledgement, using some retry interval.
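As a rough sketch, not taken from any particular implementation, such a retransmission loop might look like the following in Python; building the Accounting-Request packet and validating the response are assumed to happen elsewhere:

    import socket

    def send_accounting_request(packet: bytes, server, retries: int = 3,
                                timeout: float = 2.0):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(packet, server)  # e.g. ('host', 1813), the accounting port
            try:
                response, _ = sock.recvfrom(4096)
                return response  # caller should still verify the Response Authenticator
            except socket.timeout:
                continue  # no acknowledgement yet; retransmit
        return None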
The primary purpose of this data is that the user can be billed accordingly; the data is also commonly used for statistical purposes and for general network monitoring.
Roaming
RADIUS is commonly used to facilitate roaming between ISPs, including by:
Companies which provide a single global set of credentials that are usable on many public networks;
Independent, but collaborating, institutions issuing their own credentials to their own users, that allow a visitor from one to another to be authenticated by their home institution, such as in eduroam.
RADIUS facilitates this by the use of realms, which identify where the RADIUS server should forward the AAA requests for processing.
Realms
A realm is commonly appended to a user's user name and delimited with an '@' sign, resembling an email address domain name. This is known as postfix notation for the realm. Another common usage is prefix notation, which involves prepending the realm to the username and using '\' as a delimiter.
Modern RADIUS servers allow any character to be used as a realm delimiter, although in practice '@' and '\' are usually used.
Realms can also be compounded using both prefix and postfix notation, to allow for complicated roaming scenarios; for example, somedomain.com\username@anotherdomain.com could be a valid username with two realms.
Although realms often resemble domains, it is important to note that realms are in fact arbitrary text and need not contain real domain names. Realm formats are standardized in RFC 4282, which defines a Network Access Identifier (NAI) in the form of 'user@realm'. In that specification, the 'realm' portion is required to be a domain name. However, this practice is not always followed. RFC 7542 replaced RFC 4282 in May 2015.
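A minimal realm splitter, assuming only the common '@' postfix and '\' prefix conventions (real servers make the delimiter configurable), might look like this in Python:

    def split_realm(username: str):
        if "\\" in username:                       # prefix notation: realm\user
            realm, user = username.split("\\", 1)
            return user, realm
        if "@" in username:                        # postfix notation: user@realm
            user, realm = username.rsplit("@", 1)  # rightmost '@' wins
            return user, realm
        return username, None                      # no realm present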
Proxy operations
When a RADIUS server receives an AAA request for a user name containing a realm, the server will reference a table of configured realms. If the realm is known, the server will then proxy the request to the configured home server for that domain. The behavior of the proxying server regarding the removal of the realm from the request ("stripping") is configuration-dependent on most servers. In addition, the proxying server can be configured to add, remove or rewrite AAA requests as they are proxied.
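The realm lookup itself amounts to a table query. The sketch below uses a hypothetical static configuration; both the host names and the handling of unknown realms are illustrative assumptions:

    REALM_TABLE = {  # hypothetical realm-to-home-server configuration
        "example.com": ("radius.example.com", 1812),
        "anotherdomain.com": ("aaa.anotherdomain.com", 1812),
    }

    def route_request(realm):
        if realm is None:
            return None                  # no realm: authenticate locally
        return REALM_TABLE.get(realm)    # None for unknown realms (reject per policy)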
Proxy chaining is possible in RADIUS, and authentication/authorization and accounting packets are usually routed between a NAS device and a home server through a series of proxies. Advantages of using proxy chains include scalability improvements, policy implementation and capability adjustment. In roaming scenarios, however, the NAS, the proxies and the home server are typically managed by different administrative entities, so trust among the proxies gains significance in such inter-domain applications. The absence of end-to-end security in RADIUS makes that trust all the more critical. Proxy chains are explained in RFC 2607.
Security
Roaming with RADIUS exposes users to various security and privacy concerns. Because the MD5 hash built into RADIUS is considered insecure, users' credentials could be intercepted while being proxied across the internet; for this reason, some roaming partners establish a secure tunnel between their RADIUS servers.
Packet structure
RADIUS is transported over UDP/IP on ports 1812 (authentication and authorization) and 1813 (accounting).
A RADIUS packet consists of a Code, an Identifier, a Length, an Authenticator and a list of Attributes; the fields are transmitted from left to right in that order.
Assigned RADIUS Codes (decimal) include the following: 1 Access-Request, 2 Access-Accept, 3 Access-Reject, 4 Accounting-Request, 5 Accounting-Response, 11 Access-Challenge, 12 Status-Server (experimental), 13 Status-Client (experimental) and 255 Reserved.
The Identifier field aids in matching requests and replies.
The Length field indicates the length of the entire RADIUS packet including the Code, Identifier, Length, Authenticator and optional Attribute fields.
The Authenticator is used to authenticate the reply from the RADIUS server, and is used in encrypting passwords; its length is 16 bytes.
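For replies, RFC 2865 defines the Response Authenticator as the MD5 hash of the reply's Code, Identifier and Length fields, followed by the Request Authenticator from the matching request, the reply's attributes and the shared secret. A minimal check in Python:

    import hashlib, hmac

    def verify_response(reply: bytes, request_authenticator: bytes,
                        secret: bytes) -> bool:
        header = reply[0:4]       # Code (1 octet), Identifier (1), Length (2)
        received = reply[4:20]    # the 16-octet Response Authenticator
        attributes = reply[20:]
        expected = hashlib.md5(header + request_authenticator +
                               attributes + secret).digest()
        return hmac.compare_digest(received, expected)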
Attribute value pairs
The RADIUS Attribute Value Pairs (AVP) carry data in both the request and the response for the authentication, authorization, and accounting transactions. The length of the RADIUS packet is used to determine the end of the AVPs.
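Since each AVP carries its own length octet, which covers the type and length octets themselves, walking the attribute section is straightforward. A minimal parser in Python:

    def parse_avps(data: bytes):
        avps, offset = [], 0
        while offset < len(data):
            avp_type = data[offset]
            avp_len = data[offset + 1]   # includes the two header octets
            if avp_len < 2 or offset + avp_len > len(data):
                raise ValueError("malformed attribute")
            avps.append((avp_type, data[offset + 2:offset + avp_len]))
            offset += avp_len
        return avps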
Vendor-specific attributes
RADIUS is extensible; many vendors of RADIUS hardware and software implement their own variants using Vendor-Specific Attributes (VSAs). Microsoft has published some of its VSAs. VSA definitions from many other companies remain proprietary or ad hoc; nonetheless, many VSA dictionaries can be found by downloading the source code of open-source RADIUS implementations, for example FreeRADIUS.
Security
The RADIUS protocol transmits obfuscated passwords using a shared secret and the MD5 hashing algorithm. As this particular implementation provides only weak protection of the user's credentials, additional protection, such as IPsec tunnels or physically secured data-center networks, should be used to further protect the RADIUS traffic between the NAS device and the RADIUS server. Additionally, the user's security credentials are the only part protected by RADIUS itself, yet other user-specific attributes such as tunnel-group IDs or VLAN memberships passed over RADIUS may be considered sensitive (helpful to an attacker) or private (sufficient to identify the individual client) information as well. The RadSec protocol claims to solve aforementioned security issues.
History
As more dial-up customers used the NSFnet, a request for proposal was sent out by Merit Network in 1991 to consolidate their various proprietary authentication, authorization and accounting systems. Among the early respondents was Livingston Enterprises, and an early version of RADIUS was written after a meeting. The early RADIUS server was installed on a UNIX operating system. Livingston Enterprises was acquired by Lucent, and together with Merit it took steps to gain industry acceptance for RADIUS as a protocol. Both companies offered a RADIUS server at no charge. RADIUS was published in 1997 as RFC 2058 and RFC 2059; the current versions are RFC 2865 and RFC 2866.
The original RADIUS standard specified that RADIUS is stateless and should run over the User Datagram Protocol (UDP). For authentication it was envisaged that RADIUS should support the Password Authentication Protocol (PAP) and the Challenge-Handshake Authentication Protocol (CHAP) over the Point-to-Point Protocol. Passwords are hidden by taking the MD5 hash of the packet and a shared secret, and then XORing that hash with the password. The original RADIUS also provided more than 50 attribute-value pairs, with the possibility for vendors to configure their own pairs.
The choice of the hop-by-hop security model, rather than end-to-end encryption, meant that if several proxy RADIUS servers are in use, every server must examine, perform logic on and pass on all data in a request. This exposes data such as passwords and certificates at every hop. RADIUS servers also did not have the ability to stop access to resources once an authorization had been issued. Subsequent standards such as RFC 3576 and its successor RFC 5176 allowed RADIUS servers to dynamically change a user's authorization, or to disconnect a user entirely.
Now, several commercial and open-source RADIUS servers exist. Features can vary, but most can look up the users in text files, LDAP servers, various databases, etc. Accounting records can be written to text files, various databases, forwarded to external servers, etc. SNMP is often used for remote monitoring and keep-alive checking of a RADIUS server. RADIUS proxy servers are used for centralized administration and can rewrite RADIUS packets on the fly for security reasons, or to convert between vendor dialects.
The Diameter protocol was intended as the replacement for RADIUS. While both are Authentication, Authorization, and Accounting (AAA) protocols, the use-cases for the two protocols have since diverged. Diameter is largely used in the 3G space. RADIUS is used elsewhere. One of the largest barriers to having Diameter replace RADIUS is that switches and Access Points typically implement RADIUS, but not Diameter. Diameter uses SCTP or TCP while RADIUS typically uses UDP as the transport layer. As of 2012, RADIUS can also use TCP as the transport layer with TLS for security.
Standards documentation
The RADIUS protocol is currently defined in IETF RFC documents, principally RFC 2865 (authentication and authorization) and RFC 2866 (accounting).
See also
Security Assertion Markup Language
TACACS
References
Bibliography
External links
Radius Types
An Analysis of the RADIUS Authentication Protocol
Decoding a Sniffer-trace of RADIUS Transaction
Using Wireshark to debug RADIUS
Internet protocols
Internet Standards
Application layer protocols
Computer access control protocols
Network protocols |
192134 | https://en.wikipedia.org/wiki/TACAMO | TACAMO | TACAMO (Take Charge And Move Out) is a United States military system of survivable communications links designed to be used in nuclear warfare to maintain communications between the decision-makers (the National Command Authority) and the triad of strategic nuclear weapon delivery systems. Its primary mission is serving as a signals relay, where it receives orders from a command plane such as Operation Looking Glass, and verifies and retransmits their Emergency Action Messages (EAMs) to US strategic forces. As it is a dedicated communications post, it features the ability to communicate on virtually every radio frequency band from very low frequency (VLF) up through super high frequency (SHF) using a variety of modulations, encryptions and networks, minimizing the likelihood an emergency message will be jammed by an enemy. This airborne communications capability largely replaced the land-based extremely low frequency (ELF) broadcast sites which became vulnerable to nuclear strike.
Components
The current TACAMO system comprises several components. The main part is the airborne portion, the U.S. Navy's Strategic Communications Wing One (STRATCOMWING 1), a U.S. Strategic Command (USSTRATCOM) organization based at Tinker Air Force Base, Oklahoma. STRATCOMWING 1 consists of three fleet air reconnaissance squadrons (VQ-3, VQ-4, and VQ-7) equipped with Boeing IDS E-6B Mercury TACAMO aircraft. As well as the main operating base at Tinker, there are a west coast alert base at Travis AFB, California, and an east coast alert base at NAS Patuxent River, Maryland.
History
The acronym was coined in 1961 and the first aircraft modified for TACAMO testing was a Lockheed KC-130 Hercules which in 1962 was fitted with a VLF transmitter and trailing wire antenna to test communications with the fleet ballistic missile submarines (see communication with submarines).
The Naval Air Development Center developed the required technique of "stalling" the trailing antenna to achieve the long vertical antenna needed. The VLF system is currently known as VERDIN (VERy low frequency Digital Information Network). The program was expanded in 1966 using modified C-130s designated Lockheed EC-130G/Q carrying a VLF system built by Collins Radio Company.
The first two squadrons were established in 1968: VQ-4 initially operated from Naval Air Station Patuxent River, Maryland, and VQ-3 was initially formed at NAS Barbers Point, Hawaii, before moving to Naval Air Station Agana, Guam, and later returning to NAS Barbers Point. The system known as TACAMO (from "take charge and move out") was operationally deployed in 1969. TACAMO consisted of twelve Lockheed EC-130Q aircraft equipped with VLF transmitters using long trailing wire antennas. The VLF system was repeatedly upgraded to improve signal strength.
By 1971, TACAMO IV incorporated a 200 kW transmitter and dual antenna. Actual transmission power and capabilities remain classified. Airborne ELF was tested but considered infeasible. The aircraft were upgraded to the E-6 Mercury beginning in 1990, and the E-6A was upgraded to the dual-role E-6B from 1998. With the introduction of the E-6, the Navy also stood up a Fleet Replacement Squadron (FRS), VQ-7, to provide initial training for new Naval Aviators, Naval Flight Officers and enlisted Naval Aircrewmen, and recurrent training for former TACAMO crewmembers returning to the aircraft for second and third tours.
The E-6 aircraft is based on the Boeing 707. The wings were redesigned to meet new wing-loading characteristics; the tail was redesigned after a catastrophic failure of the vertical stabilizer during flight tests. The cockpit was copied from the Boeing 737NG commercial airliner, and the landing gear was modified to handle the added weight. Larger fuel tanks were installed and the fuselage was extensively modified to accommodate the 31 antennas, including the trailing wire antenna and reel assembly. After the upgrade to the E-6B, the TACAMO aircraft—with the addition of an Airborne Launch Control System (ALCS)—took over the EC-135 Looking Glass mission formerly conducted by the USAF 55th Wing at Offutt AFB, Nebraska.
See also
Boeing E-4
Survivable Low Frequency Communications System (SLFCS)
Ground Wave Emergency Network (GWEN)
Minimum Essential Emergency Communications Network (MEECN)
Emergency Rocket Communications System (ERCS)
References
External links
Strategic Communications Wing ONE website
USSTRATCOM ABNCP Fact Sheet
"Old TACAMO" Veterans website
TACAMO Community Veterans website
2017 Popular Mechanics article on TACAMO
Military radio systems of the United States
Military communications of the United States
United States nuclear command and control |
192397 | https://en.wikipedia.org/wiki/Wireless%20access%20point | Wireless access point | In computer networking, a wireless access point (WAP), or more generally just access point (AP), is a networking hardware device that allows other Wi-Fi devices to connect to a wired network. As a standalone device, the AP may have a wired connection to a router, but, in a wireless router, it can also be an integral component of the router itself. An AP is differentiated from a hotspot which is a physical location where Wi-Fi access is available.
Connections
An AP connects directly to a wired local area network, typically Ethernet, and the AP then provides wireless connections using wireless LAN technology, typically Wi-Fi, for other devices to use that wired connection. APs support the connection of multiple wireless devices through their one wired connection.
Wireless data standards
There are many wireless data standards that have been introduced for wireless access point and wireless router technology. New standards have been created to accommodate the increasing need for faster wireless connections. Some wireless routers provide backward compatibility with older Wi-Fi technologies as many devices were manufactured for use with older standards.
802.11a
802.11b
802.11g
802.11n
802.11ac
802.11ax, also known as Wi-Fi 6
Wireless access point vs. ad hoc network
Some people confuse wireless access points with wireless ad hoc networks. An ad hoc network uses a connection between two or more devices without using a wireless access point; the devices communicate directly when in range. Because setup is easy and does not require an access point, an ad hoc network is used in situations such as a quick data exchange or a multiplayer video game. Due to its peer-to-peer layout, ad hoc Wi-Fi connections are similar to connections available using Bluetooth.
Ad hoc connections are generally not recommended for a permanent installation. Internet access via ad hoc networks, using features like Windows' Internet Connection Sharing, may work well with a small number of devices that are close to each other, but ad hoc networks do not scale well. Internet traffic will converge to the nodes with direct internet connection, potentially congesting these nodes. For internet-enabled nodes, access points have a clear advantage, with the possibility of having a wired LAN.
Limitations
It is generally recommended that one IEEE 802.11 AP should have, at a maximum, 10 to 25 clients. However, the actual maximum number of clients that can be supported can vary significantly depending on several factors, such as the type of APs in use, the density of the client environment, desired client throughput, etc. The range of communication can also vary significantly, depending on such variables as indoor or outdoor placement, height above ground, nearby obstructions, other electronic devices that might actively interfere with the signal by broadcasting on the same frequency, type of antenna, the current weather, operating radio frequency, and the power output of devices. Network designers can extend the range of APs through the use of repeaters, which amplify a radio signal, and reflectors, which only bounce it. In experimental conditions, wireless networking has operated over distances of several hundred kilometers.
Most jurisdictions have only a limited number of frequencies legally available for use by wireless networks. Usually, adjacent APs will use different frequencies (Channels) to communicate with their clients in order to avoid interference between the two nearby systems. Wireless devices can "listen" for data traffic on other frequencies, and can rapidly switch from one frequency to another to achieve better reception. However, the limited number of frequencies becomes problematic in crowded downtown areas with tall buildings using multiple APs. In such an environment, signal overlap becomes an issue causing interference, which results in signal degradation and data errors.
Wireless networking lags behind wired networking in terms of increasing bandwidth and throughput. As of 2013, high-density 256-QAM (TurboQAM), 3-antenna wireless devices for the consumer market could reach sustained real-world speeds of some 240 Mbit/s at 13 m behind two standing walls (NLOS), depending on their nature, or 360 Mbit/s at 10 m line of sight and 380 Mbit/s at 2 m line of sight (IEEE 802.11ac), or 20 to 25 Mbit/s at 2 m line of sight (IEEE 802.11g). Wired hardware of similar cost reaches closer to 1000 Mbit/s up to the specified distance of 100 m with twisted-pair cabling in optimal conditions (Category 5, known as Cat-5, or better cabling with Gigabit Ethernet). One impediment to increasing the speed of wireless communications comes from Wi-Fi's use of a shared communications medium: two stations in infrastructure mode that are communicating with each other, even over the same AP, must have each frame transmitted twice, from the sender to the AP and then from the AP to the receiver. This approximately halves the effective bandwidth, so an AP is only able to use somewhat less than half the actual over-the-air rate for data throughput. A typical 54 Mbit/s wireless connection thus actually carries TCP/IP data at 20 to 25 Mbit/s. Users of legacy wired networks expect faster speeds, and people using wireless connections keenly want to see the wireless networks catch up.
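The halving effect lends itself to a back-of-the-envelope estimate; the efficiency factor in the Python sketch below is an illustrative assumption, not a figure from any standard:

    def effective_throughput(over_the_air_mbps: float,
                             protocol_efficiency: float = 0.85) -> float:
        relayed = over_the_air_mbps / 2        # every frame crosses the air twice
        return relayed * protocol_efficiency   # assumed residual MAC/TCP overhead

    print(round(effective_throughput(54), 1))  # about 23, in line with 20 to 25 Mbit/s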
By 2012, 802.11n-based access points and client devices had taken a fair share of the marketplace; with the finalization of the 802.11n standard in 2009, inherent problems integrating products from different vendors became less prevalent.
Security
Wireless access has special security considerations. Many wired networks base the security on physical access control, trusting all the users on the local network, but if wireless access points are connected to the network, anybody within range of the AP (which typically extends farther than the intended area) can attach to the network.
The most common solution is wireless traffic encryption. Modern access points come with built-in encryption. The first generation encryption scheme, WEP, proved easy to crack; the second and third generation schemes, WPA and WPA2, are considered secure if a strong enough password or passphrase is used.
Some APs support hotspot style authentication using RADIUS and other authentication servers.
Opinions about wireless network security vary widely. For example, in a 2008 article for Wired magazine, Bruce Schneier asserted the net benefits of open Wi-Fi without passwords outweigh the risks, a position supported in 2014 by Peter Eckersley of the Electronic Frontier Foundation. The opposite position was taken by Nick Mediati in an article for PC World, in which he advocates that every wireless access point should be protected with a password.
See also
Femtocell – a local-area base station using cellular network standards such as UMTS, rather than Wi-Fi
HomePlug – wired LAN technology that has a few elements in common with Wi-Fi
Lightweight Access Point Protocol – used to manage a large set of APs
List of router firmware projects
Wi-Fi array – system of multiple APs
Wi-Fi Direct – a Wi-Fi standard that enables devices to connect with each other without requiring a (hardware) wireless access point and to communicate at typical Wi-Fi speeds
WiMAX – wide-area wireless standard that has a few elements in common with Wi-Fi
References
Access point
Network access
Telecommunications infrastructure
Access point |
192455 | https://en.wikipedia.org/wiki/XMPP | XMPP | Extensible Messaging and Presence Protocol (XMPP, originally named Jabber) is an open communication protocol designed for instant messaging (IM), presence information, and contact list maintenance. Based on XML (Extensible Markup Language), it enables the near-real-time exchange of structured data between two or more network entities. Designed to be extensible, the protocol offers a multitude of applications beyond traditional IM in the broader realm of message-oriented middleware, including signalling for VoIP, video, file transfer, gaming and other uses.
Unlike most commercial instant messaging protocols, XMPP is defined in an open standard in the application layer. The architecture of the XMPP network is similar to email; anyone can run their own XMPP server and there is no central master server. This federated open system approach allows users to interoperate with others on any server using a 'JID' user account, similar to an email address. XMPP implementations can be developed using any software license and many server, client, and library implementations are distributed as free and open-source software. Numerous freeware and commercial software implementations also exist.
Originally developed by the open-source community, the protocols were formalized as an approved instant messaging standard in 2004 and have since been continuously developed with new extensions and features. Various XMPP clients are available for both desktop and mobile platforms and devices; by 2003 the protocol was used by over ten million people worldwide on the network, according to the XMPP Standards Foundation.
Protocol characteristics
Decentralization
The XMPP network architecture is reminiscent of the Simple Mail Transfer Protocol (SMTP): it is a federated client–server model in which clients do not talk directly to one another, and anyone can run a server. By design, there is no central authoritative server as there is with messaging services such as AIM, WLM, WhatsApp or Telegram. Some confusion often arises on this point as there is a public XMPP server being run at jabber.org, to which many users subscribe. However, anyone may run their own XMPP server on their own domain.
Addressing
Every user on the network has a unique XMPP address, called JID (for historical reasons, XMPP addresses are often called Jabber IDs). The JID is structured like an email address with a username and a domain name (or IP address) for the server where that user resides, separated by an at sign (@) - for example, "alice@example.com": here alice is the username and example.com the server with which the user is registered.
Since a user may wish to log in from multiple locations, they may specify a resource. A resource identifies a particular client belonging to the user (for example home, work, or mobile). This may be included in the JID by appending a slash followed by the name of the resource. For example, the full JID of a user's mobile account could be username@example.com/mobile.
Each resource may have specified a numerical value called priority. Messages simply sent to username@example.com will go to the client with the highest priority, but those sent to username@example.com/mobile will go only to the mobile client. The highest priority is the one with the largest numerical value.
JIDs without a username part are also valid, and may be used for system messages and control of special features on the server. A resource remains optional for these JIDs as well.
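A naive JID splitter, which ignores the PRECIS string-preparation rules that real implementations must apply, can be written in a few lines of Python:

    def parse_jid(jid: str):
        bare, _, resource = jid.partition("/")   # resource follows the first '/'
        local, _, domain = bare.rpartition("@")  # localpart before '@' is optional
        return local or None, domain, resource or None

    print(parse_jid("username@example.com/mobile"))  # ('username', 'example.com', 'mobile')
    print(parse_jid("example.com"))                  # (None, 'example.com', None)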
The means to route messages based on a logical endpoint identifier (the JID), instead of by an explicit IP address, presents opportunities to use XMPP as an overlay network implementation on top of different underlay networks.
XMPP via HTTP
The original and "native" transport protocol for XMPP is Transmission Control Protocol (TCP), using open-ended XML streams over long-lived TCP connections. As an alternative to the TCP transport, the XMPP community has also developed an HTTP transport for web clients as well as users behind restricted firewalls. In the original specification, XMPP could use HTTP in two ways: polling and binding. The polling method, now deprecated, essentially implies messages stored on a server-side database are being fetched (and posted) regularly by an XMPP client by way of HTTP 'GET' and 'POST' requests. The binding method, implemented using Bidirectional-streams Over Synchronous HTTP (BOSH), allows servers to push messages to clients as soon as they are sent. This push model of notification is more efficient than polling, where many of the polls return no new data.
Because the client uses HTTP, most firewalls allow clients to fetch and post messages without any hindrances. Thus, in scenarios where the TCP port used by XMPP is blocked, a server can listen on the normal HTTP port and the traffic should pass without problems. Various websites let people sign into XMPP via a browser. Furthermore, there are open public servers that listen on standard http (port 80) and https (port 443) ports, and hence allow connections from behind most firewalls. However, the IANA-registered port for BOSH is actually 5280, not 80.
Extensibility
The XMPP Standards Foundation or XSF (formerly the Jabber Software Foundation) is active in developing open XMPP extensions, so-called XMPP Extension Protocols (XEPs). However, extensions can also be defined by any individual, software project, or organization. To maintain interoperability, common extensions are managed by the XSF. XMPP applications beyond IM include: chat rooms, network management, content syndication, collaboration tools, file sharing, gaming, remote systems control and monitoring, geolocation, middleware and cloud computing, VoIP, and identity services.
Building on its capability to support discovery across local network domains, XMPP is well-suited for cloud computing where virtual machines, networks, and firewalls would otherwise present obstacles to alternative service discovery and presence-based solutions. Cloud computing and storage systems rely on various forms of communication over multiple levels, including not only messaging between systems to relay state but also the migration or distribution of larger objects, such as storage or virtual machines. Along with authentication and in-transit data protection, XMPP can be applied at a variety of levels and may prove ideal as an extensible middleware or Message-oriented middleware (MOM) protocol.
Current limitations
At the moment, XMPP does not support Quality of Service (QoS); assured delivery of messages has to be built on top of the XMPP layer. There are two XEPs proposed to deal with this issue: XEP-0184 (Message Delivery Receipts), which is currently a draft standard, and XEP-0333 (Chat Markers), which is considered experimental.
Since XML is text based, normal XMPP has a higher network overhead compared to purely binary solutions. This issue was being addressed by the experimental XEP-0322: Efficient XML Interchange (EXI) Format, where XML is serialized in a very efficient binary manner, especially in schema-informed mode. This XEP is currently deferred.
In-band binary data transfer is limited. Binary data must be first base64 encoded before it can be transmitted in-band. Therefore, any significant amount of binary data (e.g., file transfers) is best transmitted out-of-band, using in-band messages to coordinate. The best example of this is the Jingle XMPP Extension Protocol, XEP-0166.
Features
Peer-to-peer sessions
Using the extension called Jingle, XMPP can provide an open means to support machine-to-machine or peer-to-peer communications across a diverse set of networks. This feature is mainly used for IP telephony (VoIP).
Multi-user chat
XMPP supports conferences with multiple users, using the specification Multi-User Chat (MUC) (XEP-0045). From the point of view of a normal user, it is comparable to Internet Relay Chat (IRC).
Security and encryption
XMPP servers can be isolated (e.g., on a company intranet), and secure authentication (SASL) and point-to-point encryption (TLS) have been built into the core XMPP specifications, with end-to-end encryption available through extensions.
Off-the-Record Messaging (OTR) is one such extension, enabling encryption of messages and data. It has since been superseded by a newer extension, multi-end-to-multi-end encryption (OMEMO, XEP-0384), which provides end-to-end encryption between users. This gives a higher level of security by encrypting all data at the source client and decrypting it only at the target client; the server operator cannot decrypt the data it is forwarding.
Messages can also be encrypted with OpenPGP, for example with the software Gajim.
Service discovery
While several service discovery protocols exist today (such as zeroconf or the Service Location Protocol), XMPP provides a solid base for the discovery of services residing locally or across a network, and the availability of these services (via presence information), as specified by XEP-0030 DISCO.
Connecting to other protocols
One of the original design goals of the early Jabber open-source community was enabling users to connect to multiple instant messaging systems (especially non-XMPP systems) through a single client application. This was done through entities called transports or gateways to other instant messaging protocols like ICQ, AIM or Yahoo Messenger, but also to protocols such as SMS, IRC or email. Unlike multi-protocol clients, XMPP provides this access at the server level by communicating via special gateway services running alongside an XMPP server. Any user can "register" with one of these gateways by providing the information needed to log on to that network, and can then communicate with users of that network as though they were XMPP users. Thus, such gateways function as client proxies (the gateway authenticates on the user's behalf on the non-XMPP service). As a result, any client that fully supports XMPP can access any network with a gateway without extra code in the client, and without the need for the client to have direct access to the Internet. However, the client proxy model may violate terms of service on the protocol used (although such terms of service are not legally enforceable in several countries) and also requires the user to send their IM username and password to the third-party site that operates the transport (which may raise privacy and security concerns).
Another type of gateway is a server-to-server gateway, which enables a non-XMPP server deployment to connect to native XMPP servers using the built in interdomain federation features of XMPP. Such server-to-server gateways are offered by several enterprise IM software products, including:
IBM Lotus Sametime
Skype for Business Server (formerly named Microsoft Lync Server and Microsoft Office Communications Server – OCS)
Software
XMPP is implemented by many clients, servers, and code libraries. These implementations are provided under a variety of software licenses.
Servers
Numerous XMPP server implementations exist; well-known ones include ejabberd and Prosody.
Clients
A large number of XMPP clients exist on various modern and legacy platforms, including both graphical and command-line clients. According to the XMPP website, some of the most popular software includes Conversations (Android), Converse.js (web browser, Linux, Windows, macOS), Gajim (Windows, Linux), Monal (macOS, iOS), and Swift.IM (macOS, Windows, Linux).
Other clients include: Bombus, ChatSecure, Coccinella, JWChat.org, MCabber, Miranda, Pidgin, Psi, Tkabber, Trillian, and Xabber.
Deployment and distribution
There are thousands of XMPP servers worldwide, many of them public, as well as private individuals or organizations running their own servers without commercial intent. Numerous websites show lists of public XMPP servers where users may register (for example, the XMPP.net website).
Several large public IM services natively use or used XMPP, including LiveJournal's "LJ Talk", Nimbuzz, and HipChat. Various hosting services, such as DreamHost, enable hosting customers to choose XMPP services alongside more traditional web and email services. Specialized XMPP hosting services also exist in form of cloud so that domain owners need not directly run their own XMPP servers, including Cisco Webex Connect, Chrome.pl, Flosoft.biz, i-pobox.net, and hosted.im.
XMPP is also used in deployments of non-IM services, including smart grid systems such as demand response applications, message-oriented middleware, and as a replacement for SMS to provide text messaging on many smartphone clients.
Non-native deployments
Some of the largest messaging providers use, or have been using, various forms of XMPP based protocols in their backend systems without necessarily exposing this fact to their end users. One example is Google, which in August 2005 introduced Google Talk, a combination VoIP and IM system that uses XMPP for instant messaging and as a base for a voice and file transfer signaling protocol called Jingle. The initial launch did not include server-to-server communications; Google enabled that feature on January 17, 2006. Google has since added video functionality to Google Talk, also using the Jingle protocol for signaling. In May 2013, Google announced XMPP compatibility would be dropped from Google Talk for server-to-server federation, although it would retain client-to-server support. In January 2008, AOL introduced experimental XMPP support for its AOL Instant Messenger (AIM) service, allowing AIM users to communicate using XMPP. However, in March 2008, this service was discontinued. As of May 2011, AOL offers limited XMPP support. In February 2010, the social-networking site Facebook opened up its chat feature to third-party applications via XMPP. Some functionality was unavailable through XMPP, and support was dropped in April 2014. Similarly, in December 2011, Microsoft released an XMPP interface to its Microsoft Messenger service. Skype, its de facto successor, also provided limited XMPP support. Apache Wave is another example.
XMPP is the de facto standard for private chat in gaming related platforms such as Origin, and PlayStation, as well as the now discontinued Xfire and Raptr. Two notable exceptions are Steam and Xbox LIVE; both use their own proprietary messaging protocols.
History and development
Jeremie Miller began working on the Jabber technology in 1998 and released the first version of the jabberd server on January 4, 1999. The early Jabber community focused on open-source software, mainly the jabberd server, but its major outcome proved to be the development of the XMPP protocol.
The Internet Engineering Task Force (IETF) formed an XMPP working group in 2002 to formalize the core protocols as an IETF instant messaging and presence technology. The early Jabber protocol, as developed in 1999 and 2000, formed the basis for XMPP as published in RFC 3920 and RFC 3921 in October 2004 (the primary changes during formalization by the IETF's XMPP Working Group were the addition of TLS for channel encryption and SASL for authentication). The XMPP Working group also produced specifications RFC 3922 and RFC 3923. In 2011, RFC 3920 and RFC 3921 were superseded by RFC 6120 and RFC 6121 respectively, with RFC 6122 specifying the XMPP address format. In 2015, RFC 6122 was superseded by RFC 7622. In addition to these core protocols standardized at the IETF, the XMPP Standards Foundation (formerly the Jabber Software Foundation) is active in developing open XMPP extensions.
The first IM service based on XMPP was Jabber.org, which has operated continuously and offered free accounts since 1999. From 1999 until February 2006, the service used jabberd as its server software, at which time it migrated to ejabberd (both of which are free software application servers). In January 2010, the service migrated to the proprietary M-Link server software produced by Isode Ltd.
In September 2008, Cisco Systems acquired Jabber, Inc., the creators of the commercial product Jabber XCP.
The XMPP Standards Foundation (XSF) develops and publishes extensions to XMPP through a standards process centered on XMPP Extension Protocols (XEPs, previously known as Jabber Enhancement Proposals - JEPs). The following extensions are in especially wide use:
Data Forms
Service Discovery
Multi-User Chat
Publish-Subscribe and Personal Eventing Protocol
XHTML-IM
File Transfer
Entity Capabilities
HTTP Binding
Jingle for voice and video
Internet of Things
XMPP features such as federation across domains, publish/subscribe, authentication and its security even for mobile endpoints are being used to implement the Internet of Things. Several XMPP extensions are part of the experimental implementation: Efficient XML Interchange (EXI) Format; Sensor Data; Provisioning; Control; Concentrators; Discovery.
These efforts are documented on a page in the XMPP wiki dedicated to Internet of Things and the XMPP IoT mailing list.
Specifications and standards
The IETF XMPP working group has produced a series of Request for Comments (RFC) documents:
RFC 3920 (superseded by RFC 6120)
RFC 3921 (superseded by RFC 6121)
RFC 4622 (superseded by RFC 5122)
RFC 6122 (superseded by RFC 7622)
The most important and most widely implemented of these specifications are:
RFC 6120, Extensible Messaging and Presence Protocol (XMPP): Core, which describes client–server messaging using two open-ended XML streams. XML streams consist of <presence/>, <message/> and <iq/> (info/query) stanzas. A connection is authenticated with Simple Authentication and Security Layer (SASL) and encrypted with Transport Layer Security (TLS).
RFC 6121, Extensible Messaging and Presence Protocol (XMPP): Instant Messaging and Presence, which describes instant messaging (IM), the most common application of XMPP.
RFC 7622, Extensible Messaging and Presence Protocol (XMPP): Address Format, which describes the rules for XMPP addresses, also called JabberIDs or JIDs. Currently JIDs use PRECIS (as defined in RFC 7564) for handling of Unicode characters outside the ASCII range.
Competing standards
XMPP has often been regarded as a competitor to SIMPLE, based on Session Initiation Protocol (SIP), as the standard protocol for instant messaging and presence notification.
The XMPP extension for multi-user chat can be seen as a competitor to Internet Relay Chat (IRC), although IRC is far simpler, has far fewer features, and is far more widely used.
The XMPP extensions for publish-subscribe provide many of the same features as the Advanced Message Queuing Protocol (AMQP).
See also
XMPP clients
Comparison of instant messaging clients
Comparison of instant messaging protocols
Comparison of XMPP server software
Secure communication
SIMPLE
Matrix (protocol)
References
External links
Open list of public XMPP servers
xmpp-iot.org - the XMPP-IoT (Internet of Things) initiative
Real-Time Communications Quick Start Guide
Jabber User Guide
, interviewed by Randal Schwartz and Leo Laporte
Application layer protocols
Cloud standards
Cross-platform software
Instant messaging protocols
Online chat
Open standards
XML-based standards |
194112 | https://en.wikipedia.org/wiki/Public%20key%20infrastructure | Public key infrastructure | A public key infrastructure (PKI) is a set of roles, policies, hardware, software and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. The purpose of a PKI is to facilitate the secure electronic transfer of information for a range of network activities such as e-commerce, internet banking and confidential email. It is required for activities where simple passwords are an inadequate authentication method and more rigorous proof is required to confirm the identity of the parties involved in the communication and to validate the information being transferred.
In cryptography, a PKI is an arrangement that binds public keys with respective identities of entities (like people and organizations). The binding is established through a process of registration and issuance of certificates at and by a certificate authority (CA). Depending on the assurance level of the binding, this may be carried out by an automated process or under human supervision. When done over a network, this requires using a secure certificate enrollment or certificate management protocol such as CMP.
The PKI role that may be delegated by a CA to assure valid and correct registration is called a registration authority (RA). Basically, an RA is responsible for accepting requests for digital certificates and authenticating the entity making the request. The Internet Engineering Task Force's RFC 3647 defines an RA as "An entity that is responsible for one or more of the following functions: the identification and authentication of certificate applicants, the approval or rejection of certificate applications, initiating certificate revocations or suspensions under certain circumstances, processing subscriber requests to revoke or suspend their certificates, and approving or rejecting requests by subscribers to renew or re-key their certificates. RAs, however, do not sign or issue certificates (i.e., an RA is delegated certain tasks on behalf of a CA)." While Microsoft may have referred to a subordinate CA as an RA, this is incorrect according to the X.509 PKI standards. RAs do not have the signing authority of a CA and only manage the vetting and provisioning of certificates. So in the Microsoft PKI case, the RA functionality is provided either by the Microsoft Certificate Services web site or through Active Directory Certificate Services which enforces Microsoft Enterprise CA and certificate policy through certificate templates and manages certificate enrollment (manual or auto-enrollment). In the case of Microsoft Standalone CAs, the function of RA does not exist since all of the procedures controlling the CA are based on the administration and access procedure associated with the system hosting the CA and the CA itself rather than Active Directory. Most non-Microsoft commercial PKI solutions offer a stand-alone RA component.
An entity must be uniquely identifiable within each CA domain on the basis of information about that entity. A third-party validation authority (VA) can provide this entity information on behalf of the CA.
The X.509 standard defines the most commonly used format for public key certificates.
Capabilities
PKI provides "trust services" - in plain terms, trusting the actions or outputs of entities, be they people or computers. Trust service objectives address one or more of the following capabilities: Confidentiality, Integrity and Authenticity (CIA).
Confidentiality: Assurance that no entity can maliciously or unwittingly view a payload in clear text. Data is encrypted to make it secret, such that even if it was read, it appears as gibberish. Perhaps the most common use of PKI for confidentiality purposes is in the context of Transport Layer Security (TLS). TLS is a capability underpinning the security of data in transit, i.e. during transmission. A classic example of TLS for confidentiality is when using an internet browser to log on to a service hosted on an internet based web site by entering a password.
Integrity: Assurance that if an entity changed (tampered) with transmitted data in the slightest way, it would be obvious it happened as its integrity would have been compromised. Often it is not of utmost importance to prevent the integrity being compromised (tamper proof), however, it is of utmost importance that if integrity is compromised there is clear evidence of it having done so (tamper evident).
Authenticity: Assurance that you have certainty of what you are connecting to, or evidencing your legitimacy when connecting to a protected service. The former is termed server-side authentication - typically used when authenticating to a web server using a password. The latter is termed client-side authentication - sometimes used when authenticating using a smart card (hosting a digital certificate and private key).
Design
Public key cryptography is a cryptographic technique that enables entities to securely communicate on an insecure public network, and reliably verify the identity of an entity via digital signatures.
A public key infrastructure (PKI) is a system for the creation, storage, and distribution of digital certificates which are used to verify that a particular public key belongs to a certain entity. The PKI creates digital certificates which map public keys to entities, securely stores these certificates in a central repository and revokes them if needed.
A PKI consists of:
A certificate authority (CA) that stores, issues and signs the digital certificates;
A registration authority (RA) which verifies the identity of entities requesting their digital certificates to be stored at the CA;
A central directory—i.e., a secure location in which keys are stored and indexed;
A certificate management system managing things like the access to stored certificates or the delivery of the certificates to be issued;
A certificate policy stating the PKI's requirements concerning its procedures. Its purpose is to allow outsiders to analyze the PKI's trustworthiness.
Methods of certification
Broadly speaking, there have traditionally been three approaches to getting this trust: certificate authorities (CAs), web of trust (WoT), and simple public key infrastructure (SPKI).
Certificate authorities
The primary role of the CA is to digitally sign and publish the public key bound to a given user. This is done using the CA's own private key, so that trust in the user key relies on one's trust in the validity of the CA's key. When the CA is a third party separate from the user and the system, then it is called the Registration Authority (RA), which may or may not be separate from the CA. The key-to-user binding is established, depending on the level of assurance the binding has, by software or under human supervision.
The term trusted third party (TTP) may also be used for certificate authority (CA). Moreover, PKI is itself often used as a synonym for a CA implementation.
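To make the signing step concrete, here is a sketch using the Python cryptography package. It produces a self-signed certificate, where subject and issuer are the same entity, which is how a root CA's own certificate is created; the name is a placeholder:

    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Example Root CA")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)          # self-signed: subject and issuer coincide
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.utcnow())
        .not_valid_after(datetime.utcnow() + timedelta(days=3650))
        .sign(key, hashes.SHA256())  # the CA signs with its own private key
    )

Signing an end-entity certificate works the same way, except that the subject name and public key come from the requester while the issuer name and signing key remain the CA's.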
Certificate revocation
Authorities in the WebPKI provide revocation services to allow invalidation of previously issued certificates. According to the Baseline Requirements by the CA/Browser forum, the CAs must maintain revocation status until certificate expiration. The status must be delivered using Online Certificate Status Protocol. Most revocation statuses on the Internet disappear soon after the expiration of the certificates.
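A client-side status query can be sketched with the same Python cryptography package; loading the certificates and discovering the responder URL are elided, so treat this as an outline rather than production code:

    from cryptography import x509
    from cryptography.x509 import ocsp
    from cryptography.hazmat.primitives import hashes, serialization

    def build_ocsp_request(cert: x509.Certificate,
                           issuer: x509.Certificate) -> bytes:
        builder = ocsp.OCSPRequestBuilder()
        builder = builder.add_certificate(cert, issuer, hashes.SHA1())
        # POST these DER bytes to the responder URL found in the certificate's
        # Authority Information Access extension, with the HTTP header
        # Content-Type: application/ocsp-request.
        return builder.build().public_bytes(serialization.Encoding.DER)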
Issuer market share
In this model of trust relationships, a CA is a trusted third party – trusted both by the subject (owner) of the certificate and by the party relying upon the certificate.
A Netcraft report from 2015, the industry standard for monitoring active Transport Layer Security (TLS) certificates, states that "Although the global [TLS] ecosystem is competitive, it is dominated by a handful of major CAs — three certificate authorities (Symantec, Sectigo, GoDaddy) account for three-quarters of all issued [TLS] certificates on public-facing web servers. The top spot has been held by Symantec (or VeriSign before it was purchased by Symantec) ever since [our] survey began, with it currently accounting for just under a third of all certificates. To illustrate the effect of differing methodologies, amongst the million busiest sites Symantec issued 44% of the valid, trusted certificates in use — significantly more than its overall market share."
Following major issues in how certificate issuance was managed, all major players gradually distrusted Symantec-issued certificates starting in 2017.
Temporary certificates and single sign-on
This approach involves a server that acts as an offline certificate authority within a single sign-on system. A single sign-on server will issue digital certificates into the client system, but never stores them. Users can execute programs, etc. with the temporary certificate. It is common to find this solution variety with X.509-based certificates.
Starting in September 2020, TLS certificate validity was reduced to 13 months.
Web of trust
An alternative approach to the problem of public authentication of public key information is the web-of-trust scheme, which uses self-signed certificates and third-party attestations of those certificates. The singular term "web of trust" does not imply the existence of a single web of trust, or common point of trust, but rather one of any number of potentially disjoint "webs of trust". Examples of implementations of this approach are PGP (Pretty Good Privacy) and GnuPG (an implementation of OpenPGP, the standardized specification of PGP). Because PGP and other OpenPGP implementations allow the use of e-mail digital signatures for self-publication of public key information, it is relatively easy to implement one's own web of trust.
One of the benefits of the web of trust, such as in PGP, is that it can interoperate with a PKI CA fully trusted by all parties in a domain (such as an internal CA in a company) that is willing to guarantee certificates, as a trusted introducer. If the "web of trust" is completely trusted then, because of the nature of a web of trust, trusting one certificate is granting trust to all the certificates in that web. A PKI is only as valuable as the standards and practices that control the issuance of certificates and including PGP or a personally instituted web of trust could significantly degrade the trustworthiness of that enterprise's or domain's implementation of PKI.
The web of trust concept was first put forth by PGP creator Phil Zimmermann in 1992 in the manual for PGP version 2.0.
Simple public key infrastructure
Another alternative, which does not deal with public authentication of public key information, is the simple public key infrastructure (SPKI) that grew out of three independent efforts to overcome the complexities of X.509 and PGP's web of trust. SPKI does not associate users with persons, since the key is what is trusted, rather than the person. SPKI does not use any notion of trust, as the verifier is also the issuer. This is called an "authorization loop" in SPKI terminology, where authorization is integral to its design. This type of PKI is especially useful for making integrations of PKI that do not rely on third parties for certificate authorization, certificate information, etc. A good example of this is an air-gapped network in an office.
Decentralized PKI
Decentralized identifiers (DIDs) eliminate dependence on centralized registries for identifiers as well as centralized certificate authorities for key management, which is the standard in hierarchical PKI. In cases where the DID registry is a distributed ledger, each entity can serve as its own root authority. This architecture is referred to as decentralized PKI (DPKI).
History
Developments in PKI occurred in the early 1970s at the British intelligence agency GCHQ, where James Ellis, Clifford Cocks and others made important discoveries related to encryption algorithms and key distribution. Because developments at GCHQ are highly classified, the results of this work were kept secret and not publicly acknowledged until the mid-1990s.
The public disclosure of both secure key exchange and asymmetric key algorithms in 1976 by Diffie, Hellman, Rivest, Shamir, and Adleman changed secure communications entirely. With the further development of high-speed digital electronic communications (the Internet and its predecessors), a need became evident for ways in which users could securely communicate with each other, and as a further consequence of that, for ways in which users could be sure with whom they were actually interacting.
Assorted cryptographic protocols were invented and analyzed within which the new cryptographic primitives could be effectively used. With the invention of the World Wide Web and its rapid spread, the need for authentication and secure communication became still more acute. Commercial reasons alone (e.g., e-commerce, online access to proprietary databases from web browsers) were sufficient. Taher Elgamal and others at Netscape developed the SSL protocol ('https' in Web URLs); it included key establishment, server authentication (prior to v3, one-way only), and so on. A PKI structure was thus created for Web users/sites wishing secure communications.
Vendors and entrepreneurs saw the possibility of a large market, started companies (or new projects at existing companies), and began to agitate for legal recognition and protection from liability. An American Bar Association technology project published an extensive analysis of some of the foreseeable legal aspects of PKI operations (see ABA digital signature guidelines), and shortly thereafter, several U.S. states (Utah being the first in 1995) and other jurisdictions throughout the world began to enact laws and adopt regulations. Consumer groups raised questions about privacy, access, and liability considerations, which were more taken into consideration in some jurisdictions than in others.
The enacted laws and regulations differed, there were technical and operational problems in converting PKI schemes into successful commercial operation, and progress has been much slower than pioneers had imagined it would be.
By the first few years of the 21st century, the underlying cryptographic engineering was clearly not easy to deploy correctly. Operating procedures (manual or automatic) were not easy to correctly design (nor even if so designed, to execute perfectly, which the engineering required). The standards that existed were insufficient.
PKI vendors have found a market, but it is not quite the market envisioned in the mid-1990s, and it has grown both more slowly and in somewhat different ways than were anticipated. PKIs have not solved some of the problems they were expected to, and several major vendors have gone out of business or been acquired by others. PKI has had the most success in government implementations; the largest PKI implementation to date is the Defense Information Systems Agency (DISA) PKI infrastructure for the Common Access Cards program.
Uses
PKIs of one type or another, and from any of several vendors, have many uses, including providing public keys and bindings to user identities which are used for:
Encryption and/or sender authentication of e-mail messages (e.g., using OpenPGP or S/MIME);
Encryption and/or authentication of documents (e.g., the XML Signature or XML Encryption standards if documents are encoded as XML);
Authentication of users to applications (e.g., smart card logon, client authentication with SSL/TLS). There's experimental usage for digitally signed HTTP authentication in the Enigform and mod_openpgp projects;
Bootstrapping secure communication protocols, such as Internet key exchange (IKE) and SSL/TLS. In both of these, initial set-up of a secure channel (a "security association") uses asymmetric key—i.e., public key—methods, whereas actual communication uses faster symmetric key—i.e., secret key—methods;
Mobile signatures are electronic signatures that are created using a mobile device and rely on signature or certification services in a location independent telecommunication environment;
The Internet of things requires secure communication between mutually trusted devices. A public key infrastructure enables devices to obtain and renew X.509 certificates, which are used to establish trust between devices and to encrypt communications using TLS.
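The bootstrapping pattern referenced in the list above can be made concrete with a short sketch. The following Python fragment, a minimal sketch using the pyca/cryptography package, performs an ephemeral asymmetric exchange and then switches to a symmetric cipher for bulk data; the particular combination of X25519, HKDF, and AES-GCM is an illustrative assumption, not the exact construction used by IKE or SSL/TLS.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each side generates an ephemeral asymmetric key pair.
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# After exchanging public keys, both sides compute the same shared secret.
shared = alice_priv.exchange(bob_priv.public_key())
assert shared == bob_priv.exchange(alice_priv.public_key())

# Derive a 128-bit symmetric session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=16,
                   salt=None, info=b"demo session").derive(shared)

# Bulk traffic now uses the faster symmetric cipher.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"application data", None)
assert AESGCM(session_key).decrypt(nonce, ciphertext, None) == b"application data"
```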
Open source implementations
OpenSSL is the simplest form of CA and tool for PKI. It is a toolkit, developed in C, that is included in all major Linux distributions, and can be used both to build a (simple) CA and to PKI-enable applications (see the sketch after this list). (Apache licensed)
EJBCA is a full-featured, enterprise-grade, CA implementation developed in Java. It can be used to set up a CA both for internal use and as a service. (LGPL licensed)
XiPKI is a CA and OCSP responder with SHA-3 support, implemented in Java. (Apache licensed)
XCA is a graphical interface and database. XCA uses OpenSSL for the underlying PKI operations.
DogTag is a full-featured CA developed and maintained as part of the Fedora Project.
CFSSL is an open source toolkit developed by CloudFlare for signing, verifying, and bundling TLS certificates. (BSD 2-clause licensed)
Vault is a tool for securely managing secrets (TLS certificates included), developed by HashiCorp. (Mozilla Public License 2.0 licensed)
Boulder, an ACME-based CA written in Go. Boulder is the software that runs Let's Encrypt.
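As a companion to the OpenSSL entry above, the following minimal sketch shows what "building a (simple) CA" amounts to: generating a key pair and issuing a self-signed root certificate. It uses Python's pyca/cryptography package rather than the OpenSSL command line, and the name and validity period are assumptions chosen for illustration.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate the CA's key pair; the private key must be kept secure.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
now = datetime.datetime.now(datetime.timezone.utc)

# Issue a self-signed root certificate: subject and issuer are the same.
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                   critical=True)  # marks this certificate as a CA
    .sign(key, hashes.SHA256())
)
print(cert.subject)
```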
Criticism
Some argue that purchasing certificates for securing websites by SSL/TLS and securing software by code signing is a costly venture for small businesses. However, the emergence of free alternatives, such as Let's Encrypt, has changed this. HTTP/2, the latest version of the HTTP protocol, allows unsecured connections in theory; in practice, major browser companies have made it clear that they would support this protocol only over a PKI-secured TLS connection. Web browser implementations of HTTP/2, including Chrome, Firefox, Opera, and Edge, support HTTP/2 only over TLS by using the ALPN extension of the TLS protocol. This would mean that, to get the speed benefits of HTTP/2, website owners would be forced to purchase SSL/TLS certificates controlled by corporations.
Currently the majority of web browsers are shipped with pre-installed intermediate certificates issued and signed by certificate authorities, whose public keys are certified by so-called root certificates. This means browsers need to carry a large number of different certificate providers, increasing the risk of a key compromise.
When a key is known to be compromised, the problem can be addressed by revoking the certificate, but such a compromise is not easily detectable and can be a huge security breach. Browsers have to issue a security patch to revoke intermediate certificates issued by a compromised root certificate authority.
See also
Cryptographic agility (crypto-agility)
Certificate Management Protocol (CMP)
Certificate Management over CMS (CMC)
Simple Certificate Enrollment Protocol (SCEP)
Enrollment over Secure Transport (EST)
Automated Certificate Management Environment (ACME)
References
External links
Market share trends for SSL certificate authorities (W3Techs)
Public-key cryptography
Key management
IT infrastructure
Transport Layer Security
Internet Key Exchange
In computing, Internet Key Exchange (IKE, sometimes IKEv1 or IKEv2, depending on version) is the protocol used to set up a security association (SA) in the IPsec protocol suite. IKE builds upon the Oakley protocol and ISAKMP. IKE uses X.509 certificates for authentication ‒ either pre-shared or distributed using DNS (preferably with DNSSEC) ‒ and a Diffie–Hellman key exchange to set up a shared session secret from which cryptographic keys are derived. In addition, a security policy for every peer which will connect must be manually maintained.
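The Diffie–Hellman exchange at the heart of IKE can be illustrated with a toy computation. This is a minimal sketch: the prime below is far too small to be secure and is an assumption chosen for readability, while real IKE groups use primes of 2048 bits or more, or elliptic curves.

```python
import secrets

p = 0xFFFFFFFB  # small demo prime (NOT a real IKE group)
g = 5

a = secrets.randbelow(p - 2) + 1   # initiator's private value
b = secrets.randbelow(p - 2) + 1   # responder's private value

A = pow(g, a, p)  # initiator sends g^a mod p
B = pow(g, b, p)  # responder sends g^b mod p

# Both sides now hold the same secret without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)
print(hex(pow(B, a, p)))
```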
History
The Internet Engineering Task Force (IETF) originally defined IKE in November 1998 in a series of publications (Request for Comments) known as RFC 2407, RFC 2408 and RFC 2409:
RFC 2407 defined the Internet IP Security Domain of Interpretation for ISAKMP.
RFC 2408 defined the Internet Security Association and Key Management Protocol (ISAKMP).
RFC 2409 defined the Internet Key Exchange (IKE).
RFC 4306 updated IKE to version two (IKEv2) in December 2005. RFC 4718 clarified some open details in October 2006. RFC 5996 combined these two documents plus additional clarifications into the updated IKEv2, published in September 2010. A later update upgraded the document from Proposed Standard to Internet Standard, published as RFC 7296 in October 2014.
The parent organization of the IETF, The Internet Society (ISOC), has maintained the copyrights of these standards as freely available to the Internet community.
Architecture
Most IPsec implementations consist of an IKE daemon that runs in user space and an IPsec stack in the kernel that processes the actual IP packets.
User-space daemons have easy access to mass storage containing configuration information, such as the IPsec endpoint addresses, keys and certificates, as required. Kernel modules, on the other hand, can process packets efficiently and with minimum overhead—which is important for performance reasons.
The IKE protocol uses UDP packets, usually on port 500, and generally requires 4–6 packets with 2–3 round trips to create an SA (security association) on both sides. The negotiated key material is then given to the IPsec stack. For instance, this could be an AES key, information identifying the IP endpoints and ports that are to be protected, as well as what type of IPsec tunnel has been created. The IPsec stack, in turn, intercepts the relevant IP packets if and where appropriate and performs encryption/decryption as required. Implementations vary on how the interception of the packets is done—for example, some use virtual devices, others take a slice out of the firewall, etc.
IKEv1 consists of two phases: phase 1 and phase 2.
IKEv1 phases
IKE phase one's purpose is to establish a secure authenticated communication channel by using the Diffie–Hellman key exchange algorithm to generate a shared secret key to encrypt further IKE communications. This negotiation results in one single bi-directional ISAKMP Security Association (SA). The authentication can be performed using either pre-shared key (shared secret), signatures, or public key encryption. Phase 1 operates in either Main Mode or Aggressive Mode. Main Mode protects the identity of the peers and the hash of the shared key by encrypting them; Aggressive Mode does not.
During IKE phase two, the IKE peers use the secure channel established in Phase 1 to negotiate Security Associations on behalf of other services like IPsec. The negotiation results in a minimum of two unidirectional security associations (one inbound and one outbound). Phase 2 operates only in Quick Mode.
Problems with IKE
Originally, IKE had numerous configuration options but lacked a general facility for automatic negotiation of a well-known default case that is universally implemented. Consequently, both sides of an IKE exchange had to agree exactly on the type of security association they wanted to create – option by option – or a connection could not be established. Further complications arose from the fact that in many implementations the debug output was difficult to interpret, if there was any facility to produce diagnostic output at all.
The IKE specifications were open to a significant degree of interpretation, bordering on design faults (Dead-Peer-Detection being a case in point), giving rise to different IKE implementations not being able to create an agreed-upon security association at all for many combinations of options, however correctly configured they might appear at either end.
Improvements with IKEv2
The IKEv2 protocol was described in Appendix A of RFC 4306 in 2005. The following issues were addressed:
Fewer Request for Comments (RFCs): The specifications for IKE were covered in at least three RFCs, more if one takes into account NAT traversal and other extensions that are in common use. IKEv2 combines these in one RFC as well as making improvements to support for NAT traversal (Network Address Translation (NAT)) and firewall traversal in general.
Standard Mobility support: There is a standard extension for IKEv2 named the Mobility and Multihoming Protocol (MOBIKE, RFC 4555) (see also IPsec) used to support mobility and multihoming for IKEv2 and the Encapsulating Security Payload (ESP). By use of this extension, IKEv2 and IPsec can be used by mobile and multihomed users.
NAT traversal: The encapsulation of IKE and ESP in User Datagram Protocol (UDP port 4500) enables these protocols to pass through a device or firewall performing NAT.
Stream Control Transmission Protocol (SCTP) support: IKEv2 allows for the SCTP protocol, as used in Internet telephony (Voice over IP, VoIP).
Simple message exchange: IKEv2 has one four-message initial exchange mechanism where IKE provided eight distinctly different initial exchange mechanisms, each one of which had slight advantages and disadvantages.
Fewer cryptographic mechanisms: IKEv2 uses cryptographic mechanisms to protect its packets that are very similar to what IPsec ESP uses to protect the IPsec packets. This led to simpler implementations and certifications for Common Criteria and FIPS 140-2 (Federal Information Processing Standard), which require each cryptographic implementation to be separately validated.
Reliability and State management: IKEv2 uses sequence numbers and acknowledgments to provide reliability and mandates some error-processing logistics and shared state management. IKE could end up in a dead state due to the lack of such reliability measures, where both parties were expecting the other to initiate an action which never eventuated. Workarounds (such as Dead-Peer-Detection) were developed but not standardized. This meant that different implementations of workarounds were not always compatible.
Denial of Service (DoS) attack resilience: IKEv2 does not perform much processing until it determines if the requester actually exists. This addressed some of the DoS problems suffered by IKE which would perform a lot of expensive cryptographic processing from spoofed locations.
Supposing HostA has a Security Parameter Index (SPI) of A and HostB has an SPI of B, the scenario would look like this:
HostA -------------------------------------------------- HostB
|HDR(A,0),sai1,kei,Ni--------------------------> |
| <----------------------------HDR(A,0),N(cookie)|
|HDR(A,0),N(cookie),sai1,kei,Ni----------------> |
| <--------------------------HDR(A,B),SAr1,ker,Nr|
If HostB (the responder) is experiencing large amounts of half-open IKE connections, it will send an unencrypted reply message of IKE_SA_INIT to HostA (the initiator) with a notify message of type COOKIE, and will expect HostA to send an IKE_SA_INIT request with that cookie value in a notify payload to HostB. This is to ensure that the initiator is really capable of handling an IKE response from the responder.
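A responder can generate such a cookie statelessly, for example by keying a hash over values the initiator must echo back. The sketch below follows the general recipe suggested by RFC 7296 (initiator nonce, address, and SPI mixed with a periodically rotated local secret); the exact field layout and cookie length are assumptions.

```python
import hashlib
import hmac
import os

SECRET = os.urandom(32)  # rotated periodically by the responder

def make_cookie(nonce_i: bytes, addr_i: bytes, spi_i: bytes) -> bytes:
    """Derive a cookie from values the initiator must echo back."""
    return hmac.new(SECRET, nonce_i + addr_i + spi_i, hashlib.sha256).digest()[:16]

# The responder stores nothing per initiator: when the retried IKE_SA_INIT
# arrives carrying the cookie, the responder recomputes and compares.
cookie = make_cookie(b"Ni", b"\xc0\x00\x02\x01", b"\x00" * 8)
assert hmac.compare_digest(cookie, make_cookie(b"Ni", b"\xc0\x00\x02\x01", b"\x00" * 8))
```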
Protocol extensions
The IETF ipsecme working group has standardized a number of extensions, with the goal of modernizing the IKEv2 protocol and adapting it better to high-volume production environments. These extensions include:
IKE session resumption: the ability to resume a failed IKE/IPsec "session" after a failure, without the need to go through the entire IKE setup process (RFC 5723).
IKE redirect: redirection of incoming IKE requests, allowing for simple load-balancing between multiple IKE endpoints (RFC 5685).
IPsec traffic visibility: special tagging of ESP packets that are authenticated but not encrypted, with the goal of making it easier for middleboxes (such as intrusion detection systems) to analyze the flow (RFC 5840).
Mutual EAP authentication: support for EAP-only (i.e., certificate-less) authentication of both of the IKE peers; the goal is to allow for modern password-based authentication methods to be used (RFC 5998).
Quick crash detection: minimizing the time until an IKE peer detects that its opposite peer has crashed (RFC 6290).
High availability extensions: improving IKE/IPsec-level protocol synchronization between a cluster of IPsec endpoints and a peer, to reduce the probability of dropped connections after a failover event (RFC 6311).
Implementations
IKE is supported as part of the IPsec implementation in Windows 2000, Windows XP, Windows Server 2003, Windows Vista and Windows Server 2008. The ISAKMP/IKE implementation was jointly developed by Cisco and Microsoft.
Microsoft Windows 7 and Windows Server 2008 R2 partially support IKEv2 (RFC 4306) as well as MOBIKE (RFC 4555) through the VPN Reconnect feature (also known as Agile VPN).
There are several open source implementations of IPsec with associated IKE capabilities. On Linux, the Libreswan, Openswan and strongSwan implementations provide an IKE daemon which can configure (i.e., establish SAs with) the KLIPS or XFRM/NETKEY kernel-based IPsec stacks. XFRM/NETKEY is the native Linux IPsec implementation, available as of kernel version 2.6.
The Berkeley Software Distributions also have an IPsec implementation and IKE daemon, and most importantly a cryptographic framework (OpenBSD Cryptographic Framework, OCF), which makes supporting cryptographic accelerators much easier. OCF has recently been ported to Linux.
A significant number of network equipment vendors have created their own IKE daemons (and IPsec implementations), or license a stack from one another.
There are a number of implementations of IKEv2, and some of the companies dealing in IPsec certification and interoperability testing have begun holding workshops for testing, as well as updating certification requirements to deal with IKEv2 testing.
The following open source implementations of IKEv2 are currently available:
OpenIKEv2
strongSwan
Libreswan
Openswan
Racoon from the KAME project
iked from the OpenBSD project
Rockhopper VPN Software
Vulnerabilities
Leaked NSA presentations released by Der Spiegel indicate that IKE is being exploited in an unknown manner to decrypt IPsec traffic, as is ISAKMP. The researchers who discovered the Logjam attack state that breaking a 1024-bit Diffie–Hellman group would break 66% of VPN servers, 18% of the top million HTTPS domains, and 26% of SSH servers, which the researchers claim is consistent with the leaks. This claim was refuted by both Eyal Ronen and Adi Shamir in their paper "Critical Review of Imperfect Forward Secrecy" and by Paul Wouters of Libreswan in an article "66% of VPN's are not in fact broken".
IPsec VPN configurations which allow for negotiation of multiple configurations are subject to MITM-based downgrade attacks between the offered configurations, with both IKEv1 and IKEv2. This can be avoided by careful segregation of client systems onto multiple service access points with stricter configurations.
Both versions of the IKE standard are susceptible to an offline dictionary attack when a low-entropy password is used. For IKEv1 this is true for both main mode and aggressive mode.
See also
Computer network
Group Domain of Interpretation
IPsec
Kerberized Internet Negotiation of Keys
Key-agreement protocol
References
External links
RFC 2407 Internet Security Association and Key Management Protocol (ISAKMP), Internet Engineering Task Force (IETF)
RFC 2409 The Internet Key Exchange (IKE), Internet Engineering Task Force (IETF)
RFC 7296: Internet Key Exchange Protocol Version 2 (IKEv2), Internet Engineering Task Force (IETF)
Overview of IKE (from Cisco)
IPsec
Cryptographic protocols
Tandberg Data
Tandberg Data GmbH is a company focused on data storage products, especially tape drives (streamers), headquartered in Dortmund, Germany. It is the only company still selling drives that use the QIC (also known as SLR) and VXA formats, and also produces LTO drives along with autoloaders, tape libraries, NAS devices, RDX removable disk drives, media, and virtual tape libraries.
Tandberg Data used to manufacture computer terminals (e.g. TDV 2200), keyboards, and other hardware.
They have offices in Dortmund, Germany; Tokyo, Japan; Singapore; Guangzhou, China and Westminster, Colorado, USA.
History
Tandberg radio factory was founded in Oslo on January 25, 1933 by Vebjørn Tandberg.
In 1970, Tandberg produced its first data tape drives.
In December 1978, Tandbergs Radiofabrikk went bankrupt.
In January 1979, Siemens and the state of Norway established Tandberg Data, rescuing the data storage and display divisions from the ashes. Siemens held 51% of the new company and controlled it. The other divisions of Tandberg went to Norsk Data.
In 1981, Tandberg became a founding member of the QIC committee for standardising interfaces and recording formats, and produced its first streaming linear tape drive.
In 1984, Tandberg Data went public.
In 1990, Siemens sold most of its shares when merging its computer business with Nixdorf.
In 1991, the terminal business was split off as Tandberg Data Display, which ended up in the Swedish company MultiQ.
In 2003, Tandberg Storage and its subsidiary O-Mass were split off and became separate companies, also listed on the Oslo Stock Exchange. Tandberg Data was the largest owner of Tandberg Storage with a 33.48% stake.
On August 30, 2006, Tandberg Data purchased the assets of Exabyte. Combined revenue was expected to be USD 215 million for 2006.
On May 15, 2007, Tandberg Data sold all remaining Tandberg Storage shares.
On January 9, 2008, Pat Clarke was promoted to CEO of Tandberg Data.
On September 12, 2008, Tandberg Data announced the reacquisition of Tandberg Storage.
On April 24, 2009, Tandberg Data ASA and Tandberg Storage ASA filed for bankruptcy.
On May 19, 2009, Tandberg Data announced that the new holding company, TAD Holding AS, was established, owning all global Tandberg Data subsidiaries, including Tandberg Storage ASA. Cyrus Capital is the majority shareholder and owner of the newly established company. Operations in Norway continue in the newly formed company Tandberg Data Norge AS.
On January 22, 2014 Tandberg Data was acquired by Overland Storage.
Tandberg Storage
Tandberg Storage ASA was a magnetic tape data storage company based in Lysaker, Norway, and a subsidiary of Tandberg Data. It was spun off from Tandberg Data in 2003 to focus exclusively on tape drives and was reacquired by the same company in 2008. Tandberg Storage developed four drive series, all based on Linear Tape-Open (LTO) specifications. Manufacturing was outsourced to the Chinese-based Lafè Peripherals International. Tandberg Storage also owned 93.5% of O-Mass AS. The company was declared bankrupt together with Tandberg Data in 2009.
History
Tandberg Storage was established as a spin-off of Tandberg Data on 22 May 2003. Tandberg Storage had previously been an integrated part of Tandberg Data, but management wanted the two companies to follow separate research and development strategies. While Tandberg Data retained responsibility on complete storage and automation systems, Tandberg Storage would focus on advanced tape-drive technologies. Tandberg Storage was established with 37 research and development employees, plus a 93.5% ownership of O-Mass. The company was listed on the Oslo Stock Exchange on 2 October 2003, with the owners of Tandberg Data receiving all the shares in Tandberg Storage.
The initial goal of the company was to develop an LTO-2 linear tape-open drive within a half-height form factor. While the underlying technology had been developed, the main components still needed to be developed, in particular the drive mechanism. A working system was demonstrated in December 2003, and in June 2004 the first complete prototype could be tested. In October, the test program started, and from December verification was initiated with the LTO Committee. The drive was approved on 11 March 2005. In the second half of 2005, Tandberg Storage developed Serial Attached SCSI and application and data integration. These were both launched in 2006. In 2005, the company also started development of a half-height LTO-3 drive. The product was launched in 2007. The following year, a no-encryption LTO-4 was launched.
In November 2008, Tandberg Storage merged with Tandberg Data, with the latter paying the former's owners in shares. Both companies had been having financial problems, and the cooperation between the two had been difficult during 2008. Tandberg Storage was at the time the largest supplier to Tandberg Data. By merging, the managements hoped to gain synergy effects between the two companies. Until the announcement of the merger in September, Tandberg Storage's share price had fallen 89% since the start of the year. Following the announcement, the share price fell a further 35%. The take-over involved a refinancing of the debt in Tandberg Storage. Tandberg Storage remained a subsidiary.
Operations
The company was based at Lysaker in Bærum, just outside Oslo, Norway. Of the 54 employees in 2007, 45 worked within research and development. The main competitors offering LTO drives were Hewlett-Packard, IBM and Quantum.
Products
Tandberg Storage produced a full range of Linear Tape-Open drives, with capacities between 100 and 800 gigabytes. Manufactured by Lafè Peripherals International of China, four models were available, all built around a common half-height aluminum casting. All drives, except the TS200, had variable transfer-rate systems to match host transfer speeds. According to the company, the drives had the lowest power consumption in the industry and did not require external fans. In 2006, Tandberg Storage held a 26% worldwide market share.
Other Tandberg companies
For other Tandberg companies see Tandberg (disambiguation)#Companies
Tandbergs Radiofabrikk – The original Tandberg company.
Tandberg – The parent company that spun off Tandberg Data. It now focuses on video conferencing.
Tandberg Storage – The storage research and development company spun off from Tandberg Data and later reacquired.
O-Mass – O-Mass AS was a research and development subsidiary responsible for the development of a new read-write head technology that could allow tape capacities to reach 10 terabytes. A conceptual 2 TB demonstration was produced. Tandberg Storage owned 93.5% of the company, while Imation of the United States held 6.5%. Three people worked for O-Mass.
References
External links
Company homepage
Electronics companies of Germany
Manufacturing companies based in Dortmund
Electronics companies established in 1979
Tandberg
1979 establishments in West Germany
German companies established in 1979
Hebern rotor machine
The Hebern Rotor Machine was an electro-mechanical encryption machine built by combining the mechanical parts of a standard typewriter with the electrical parts of an electric typewriter, connecting the two through a scrambler. It is the first example (though just barely) of a class of machines known as rotor machines that would become the primary form of encryption during World War II and for some time after, and which included such famous examples as the German Enigma.
History
Edward Hugh Hebern was a building contractor who was jailed in 1908 for stealing a horse. It is claimed that, with time on his hands, he started thinking about the problem of encryption, and eventually devised a means of mechanizing the process with a typewriter. He filed his first patent application for a cryptographic machine (not a rotor machine) in 1912. At the time he had no funds to be able to spend time working on such a device, but he continued to produce designs. Hebern made his first drawings of a rotor-based machine in 1917, and in 1918 he built a model of it. In 1921 he applied for a patent for it, which was issued in 1924. He continued to make improvements, adding more rotors. Agnes Driscoll, the chief civilian employee of the US Navy's cryptography operation (later to become OP-20-G) between WWI and WWII, spent some time working with Hebern before returning to Washington and OP-20-G in the mid-'20s.
Hebern was so convinced of the future success of the system that he formed the Hebern Electric Code company with money from several investors. Over the next few years he repeatedly tried to sell the machines both to the US Navy and Army, as well as to commercial interests such as banks. None was terribly interested, as at the time cryptography was not widely considered important outside governments. It was probably because of William F. Friedman's confidential analysis of the Hebern machine's weaknesses (substantial, though repairable) that its sales to the US government were so limited; Hebern was never told of them. Perhaps the best indication of a general distaste for such matters was the statement by Henry Stimson in his memoirs that "Gentlemen do not read each other's mail." It was Stimson, as Secretary of State under Hoover, who withdrew State Department support for Herbert Yardley's American Black Chamber, leading to its closing.
Eventually his investors ran out of patience, and sued Hebern for stock manipulation. He spent another brief period in jail, but never gave up on the idea of his machine. In 1931 the Navy finally purchased several systems, but this was to be his only real sale.
There were three other patents for rotor machines issued in 1919, and several other rotor machines were designed independently at about the same time. The most successful and widely used was the Enigma machine.
Description
The key to the Hebern design was a disk with electrical contacts on either side, known today as a rotor. Linking the contacts on either side of the rotor were wires, with each letter on one side being wired to another on the far side in a random fashion. The wiring encoded a single substitution alphabet.
When the user pressed a key on the typewriter keyboard, a small amount of current from a battery flowed through the key into one of the contacts on the input side of the disk, through the wiring, and back out a different contact. The power then operated the mechanicals of an electric typewriter to type the encrypted letter, or alternately simply lit a bulb or paper tape punch from a teletype machine.
Normally such a system would be no better than the single-alphabet systems of the 16th century. However the rotor in the Hebern machine was geared to the keyboard on the typewriter, so that after every keypress, the rotor turned and the substitution alphabet thus changed slightly. This turns the basic substitution into a polyalphabetic one similar to the well known Vigenère cipher, with the exception that it required no manual lookup of the keys or cyphertext. Operators simply turned the rotor to a pre-chosen starting position and started typing. To decrypt the message, they turned the rotor around in its socket so it was "backwards", thus reversing all the substitutions. They then typed in the ciphertext and out came the plaintext.
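The single-rotor scheme, including decryption by running the wiring in reverse, can be captured in a few lines. In this minimal sketch the wiring is an arbitrary example, not Hebern's actual rotor, and the model of rotor offsets is a simplification.

```python
import string

ALPHA = string.ascii_uppercase
WIRING = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"  # arbitrary example substitution

def crypt(text, start, decrypt=False):
    out = []
    for i, ch in enumerate(text):
        offset = (start + i) % 26  # the rotor advances after every keypress
        c = (ALPHA.index(ch) + offset) % 26
        if decrypt:
            # Run the wiring in reverse, like turning the rotor "backwards".
            c = WIRING.index(ALPHA[c])
        else:
            c = ALPHA.index(WIRING[c])
        out.append(ALPHA[(c - offset) % 26])
    return "".join(out)

ciphertext = crypt("ATTACKATDAWN", start=7)
assert crypt(ciphertext, start=7, decrypt=True) == "ATTACKATDAWN"
print(ciphertext)
```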
Better yet, several rotors can be placed such that the output of the first is connected to the input of the next. In this case the first rotor operates as before, turning once with each keypress. Additional rotors are then spun with a cam on the one beside it, each one being turned one position after the one beside it rotates a full turn. In this way the number of such alphabets increases dramatically. For a rotor with 26 letters in its alphabet, five such rotors "stacked" in this fashion allows for 26^5 = 11,881,376 different possible substitutions.
William F. Friedman attacked the Hebern machine soon after it came on the market in the 1920s. He quickly "solved" any machine that was built similar to the Hebern, in which the rotors were stacked with the rotor at one end or the other turning with each keypress, the so-called fast rotor. In these cases the resulting ciphertext consisted of a series of single-substitution cyphers, each one 26 letters long. He showed that fairly standard techniques could be used against such systems, given enough effort.
Of course, this fact was itself a great secret. This may explain why the Army and Navy were unwilling to use Hebern's design, much to his surprise.
References
External links
The Hebern Code machines
Cryptographic hardware
Rotor machines
Zigbee
Zigbee is an IEEE 802.15.4-based specification for a suite of high-level communication protocols used to create personal area networks with small, low-power digital radios, such as for home automation, medical device data collection, and other low-power, low-bandwidth needs; it is designed for small-scale projects that need a wireless connection. Hence, Zigbee is a low-power, low-data-rate, close-proximity (i.e., personal area) wireless ad hoc network.
The technology defined by the Zigbee specification is intended to be simpler and less expensive than other wireless personal area networks (WPANs), such as Bluetooth or more general wireless networking such as Wi-Fi. Applications include wireless light switches, home energy monitors, traffic management systems, and other consumer and industrial equipment that requires short-range low-rate wireless data transfer.
Its low power consumption limits transmission distances to 10–100 meters line-of-sight, depending on power output and environmental characteristics. Zigbee devices can transmit data over long distances by passing data through a mesh network of intermediate devices to reach more distant ones. Zigbee is typically used in low data rate applications that require long battery life and secure networking. (Zigbee networks are secured by 128-bit symmetric encryption keys.) Zigbee has a defined rate of 250 kbit/s, best suited for intermittent data transmissions from a sensor or input device.
Zigbee was conceived in 1998, standardized in 2003, and revised in 2006. The name refers to the waggle dance of honey bees after their return to the beehive.
Overview
Zigbee is a low-cost, low-power, wireless mesh network standard targeted at battery-powered devices in wireless control and monitoring applications. Zigbee delivers low-latency communication. Zigbee chips are typically integrated with radios and with microcontrollers. Zigbee operates in the industrial, scientific and medical (ISM) radio bands: 2.4 GHz in most jurisdictions worldwide, with some devices also using 784 MHz in China, 868 MHz in Europe and 915 MHz in the US and Australia; even in those regions, however, 2.4 GHz is used for most commercial Zigbee devices intended for home use. Data rates vary from 20 kbit/s (868 MHz band) to 250 kbit/s (2.4 GHz band).
Zigbee builds on the physical layer and media access control defined in IEEE standard 802.15.4 for low-rate wireless personal area networks (WPANs). The specification includes four additional key components: network layer, application layer, Zigbee Device Objects (ZDOs) and manufacturer-defined application objects. ZDOs are responsible for some tasks, including keeping track of device roles, managing requests to join a network, as well as device discovery and security.
The Zigbee network layer natively supports both star and tree networks, and generic mesh networking. Every network must have one coordinator device. Within star networks, the coordinator must be the central node. Both trees and meshes allow the use of Zigbee routers to extend communication at the network level. Another defining feature of Zigbee is its facilities for carrying out secure communications, protecting establishment and transport of cryptographic keys, ciphering frames, and controlling devices. It builds on the basic security framework defined in IEEE 802.15.4.
History
Zigbee-style self-organizing ad hoc digital radio networks were conceived in the 1990s. The Zigbee specification, based on IEEE 802.15.4-2003, was ratified on December 14, 2004. The Zigbee Alliance announced availability of Specification 1.0 on June 13, 2005, known as the ZigBee 2004 Specification.
Cluster library
In September 2006, the Zigbee 2006 Specification was announced, obsoleting the 2004 stack. The 2006 specification replaces the message and key–value pair structure used in the 2004 stack with a cluster library. The library is a set of standardised commands, attributes and global artifacts organised under groups known as clusters, with names such as Smart Energy, Home Automation, and Zigbee Light Link.
In January 2017, Zigbee Alliance renamed the library to Dotdot and announced it as a new protocol to be represented by an emoticon (||:). They also announced it will now additionally run over other network types using Internet Protocol and will interconnect with other standards such as Thread. Since its unveiling, Dotdot has functioned as the default application layer for almost all Zigbee devices.
Zigbee Pro
Zigbee Pro, also known as Zigbee 2007, was finalized in 2007. A Zigbee Pro device may join and operate on a legacy Zigbee network and vice versa. Due to differences in routing options, Zigbee Pro devices must become non-routing Zigbee end devices (ZEDs) on a legacy Zigbee network, and legacy Zigbee devices must become ZEDs on a Zigbee Pro network. It operates using the 2.4 GHz ISM band, and adds a sub-GHz band.
Use cases
Zigbee protocols are intended for embedded applications requiring low power consumption and tolerating low data rates. The resulting network will use very little power—individual devices must have a battery life of at least two years to pass certification.
Typical application areas include:
Home automation
Wireless sensor networks
Industrial control systems
Embedded sensing
Medical data collection
Smoke and intruder warning
Building automation
Remote wireless microphone configuration
Zigbee is not for situations with high mobility among nodes. Hence, it is not suitable for tactical ad hoc radio networks in the battlefield, where high data rates and high mobility are present and needed.
Application profiles
The first Zigbee application profile, Home Automation, was announced November 2, 2007. Additional application profiles have since been published.
The Zigbee Smart Energy 2.0 specifications define an Internet Protocol-based communication protocol to monitor, control, inform, and automate the delivery and use of energy and water. It is an enhancement of the Zigbee Smart Energy version 1 specifications. It adds services for plug-in electric vehicle charging, installation, configuration and firmware download, prepay services, user information and messaging, load control, demand response and common information and application profile interfaces for wired and wireless networks. It is being developed by partners including:
HomeGrid Forum, responsible for marketing and certifying ITU-T G.hn technology and products
HomePlug Powerline Alliance
SAE International (formerly the Society of Automotive Engineers)
IPSO Alliance
SunSpec Alliance
Wi-Fi Alliance
Zigbee Smart Energy relies on Zigbee IP, a network layer that routes standard IPv6 traffic over IEEE 802.15.4 using 6LoWPAN header compression.
In 2009, the Radio Frequency for Consumer Electronics Consortium (RF4CE) and Zigbee Alliance agreed to jointly deliver a standard for radio frequency remote controls. Zigbee RF4CE is designed for a broad range of consumer electronics products, such as TVs and set-top boxes. It promised many advantages over existing remote control solutions, including richer communication and increased reliability, enhanced features and flexibility, interoperability, and no line-of-sight barrier. The Zigbee RF4CE specification uses a subset of Zigbee functionality, allowing it to run on smaller memory configurations in lower-cost devices, such as remote controls for consumer electronics.
Radio hardware
The radio design used by Zigbee has few analog stages and uses digital circuits wherever possible. Products that integrate the radio and microcontroller into a single module are available.
The Zigbee qualification process involves a full validation of the requirements of the physical layer. All radios derived from the same validated semiconductor mask set would enjoy the same RF characteristics. Zigbee radios have very tight constraints on power and bandwidth. An uncertified physical layer that malfunctions can increase the power consumption of other devices on a Zigbee network. Thus, radios are tested with guidance given by Clause 6 of the 802.15.4-2006 Standard.
This standard specifies operation in the unlicensed 2.4 to 2.4835 GHz (worldwide), 902 to 928 MHz (Americas and Australia) and 868 to 868.6 MHz (Europe) ISM bands. Sixteen channels are allocated in the 2.4 GHz band, spaced 5 MHz apart, though using only 2 MHz of bandwidth each. The radios use direct-sequence spread spectrum coding, which is managed by the digital stream into the modulator. Binary phase-shift keying (BPSK) is used in the 868 and 915 MHz bands, and offset quadrature phase-shift keying (OQPSK) that transmits two bits per symbol is used in the 2.4 GHz band.
The raw, over-the-air data rate is 250 kbit/s per channel in the 2.4 GHz band, 40 kbit/s per channel in the 915 MHz band, and 20 kbit/s in the 868 MHz band. The actual data throughput will be less than the maximum specified bit rate due to the packet overhead and processing delays. For indoor applications at 2.4 GHz transmission distance is 10–20 m, depending on the construction materials, the number of walls to be penetrated and the output power permitted in that geographical location. The output power of the radios is generally 0–20 dBm (1–100 mW).
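The channel plan quoted above is easy to check: under IEEE 802.15.4 numbering, the sixteen 2.4 GHz channels are numbers 11 through 26, with center frequencies 5 MHz apart starting at 2405 MHz. A two-line sketch:

```python
# Channels 11 to 26 in the 2.4 GHz band, 5 MHz apart starting at 2405 MHz.
for k in range(11, 27):
    print(f"channel {k}: {2405 + 5 * (k - 11)} MHz")
```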
Device types and operating modes
There are three classes of Zigbee devices:
Zigbee coordinator (ZC): The most capable device, the coordinator forms the root of the network tree and may bridge to other networks. There is precisely one Zigbee coordinator in each network since it is the device that started the network originally (the Zigbee LightLink specification also allows operation without a Zigbee coordinator, making it more usable for off-the-shelf home products). It stores information about the network, including acting as the trust center and repository for security keys.
Zigbee router (ZR): As well as running an application function, a router can act as an intermediate router, passing data on from other devices.
Zigbee end device (ZED): Contains just enough functionality to talk to the parent node (either the coordinator or a router); it cannot relay data from other devices. This relationship allows the node to be asleep a significant amount of the time thereby giving long battery life. A ZED requires the least amount of memory and thus can be less expensive to manufacture than a ZR or ZC.
The current Zigbee protocols support beacon-enabled and non-beacon-enabled networks. In non-beacon-enabled networks, an unslotted CSMA/CA channel access mechanism is used. In this type of network, Zigbee routers typically have their receivers continuously active, requiring additional power. However, this allows for heterogeneous networks in which some devices receive continuously while others transmit only when necessary. The typical example of a heterogeneous network is a wireless light switch: the Zigbee node at the lamp may constantly receive, since it is reliably powered by the mains supply to the lamp, while a battery-powered light switch would remain asleep until the switch is thrown, at which point it wakes up, sends a command to the lamp, receives an acknowledgment, and returns to sleep. In such a network the lamp node will be at least a Zigbee router, if not the Zigbee coordinator; the switch node is typically a Zigbee end device. In beacon-enabled networks, Zigbee routers transmit periodic beacons to confirm their presence to other network nodes. Nodes may sleep between beacons, thus extending their battery life. Beacon intervals depend on data rate; they may range from 15.36 milliseconds to 251.65824 seconds at 250 kbit/s, from 24 milliseconds to 393.216 seconds at 40 kbit/s and from 48 milliseconds to 786.432 seconds at 20 kbit/s. Long beacon intervals require precise timing, which can be expensive to implement in low-cost products.
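These beacon-interval ranges follow from the IEEE 802.15.4 rule that the interval is a 960-symbol base superframe duration scaled by a power of two, with the beacon order running from 0 to 14. The per-band symbol rates in the sketch below are assumptions consistent with the article's figures.

```python
BASE_SYMBOLS = 960  # aBaseSuperframeDuration in symbols (beacon order 0)

# Assumed symbol rates: 62.5 ksymbol/s at 250 kbit/s (4 bits/symbol O-QPSK),
# 40 ksymbol/s at 40 kbit/s and 20 ksymbol/s at 20 kbit/s (1 bit/symbol BPSK).
for rate_kbps, symbols_per_s in ((250, 62_500), (40, 40_000), (20, 20_000)):
    shortest = BASE_SYMBOLS / symbols_per_s        # beacon order 0
    longest = shortest * 2 ** 14                   # beacon order 14
    print(f"{rate_kbps} kbit/s: {round(shortest * 1000, 2)} ms "
          f"to {round(longest, 5)} s")
```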
In general, the Zigbee protocols minimize the time the radio is on, so as to reduce power use. In beaconing networks, nodes only need to be active while a beacon is being transmitted. In non-beacon-enabled networks, power consumption is decidedly asymmetrical: Some devices are always active while others spend most of their time sleeping.
Except for Smart Energy Profile 2.0, Zigbee devices are required to conform to the IEEE 802.15.4-2003 Low-rate Wireless Personal Area Network (LR-WPAN) standard. The standard specifies the lower protocol layers: the physical layer (PHY) and the media access control portion of the data link layer. The basic channel access mode is carrier-sense multiple access with collision avoidance (CSMA/CA). That is, the nodes communicate in a way somewhat analogous to how humans converse: a node briefly checks to see that other nodes are not talking before it starts. There are three notable exceptions in which CSMA/CA is not used:
Message acknowledgments
Beacons are sent on a fixed-timing schedule.
Devices in beacon-enabled networks that have low-latency, real-time requirements may also use guaranteed time slots.
Network layer
The main functions of the network layer are to ensure correct use of the MAC sublayer and provide a suitable interface for use by the next upper layer, namely the application layer. The network layer deals with network functions such as connecting, disconnecting, and setting up networks. It can establish a network, allocate addresses, and add and remove devices. This layer makes use of star, mesh and tree topologies.
The data entity of the network layer creates and manages protocol data units at the direction of the application layer and performs routing according to the current topology. The control entity handles the configuration of new devices and establishes new networks. It can determine whether a neighboring device belongs to the network and discovers new neighbors and routers.
The routing protocol used by the network layer is AODV. To find a destination device, the source broadcasts a route request to all of its neighbors. The neighbors then rebroadcast the request to their neighbors, and onward, until the destination is reached. Once the destination is reached, a route reply is sent via unicast transmission following the lowest-cost path back to the source. Once the source receives the reply, it updates its routing table with the destination address of the next hop in the path and the associated path cost.
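A much-simplified sketch of this discovery process follows: the route request floods outward hop by hop, and the reply walks the discovered path back toward the source, installing a next-hop entry at each node. The sequence numbers and link-cost metrics of real AODV are omitted, so this is an illustration of the flood-and-reply idea rather than the full protocol.

```python
from collections import deque

def discover_route(neighbors, source, dest):
    """Flood a route request (BFS) and walk the reply back to the source."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == dest:
            break
        for n in neighbors[node]:
            if n not in parent:   # each node rebroadcasts the request once
                parent[n] = node
                queue.append(n)
    # Unicast route reply: install a next-hop entry at every node on the path.
    next_hop, node = {}, dest
    while parent[node] is not None:
        next_hop[parent[node]] = node
        node = parent[node]
    return next_hop

mesh = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(discover_route(mesh, "A", "D"))  # e.g. {'B': 'D', 'A': 'B'}
```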
Application layer
The application layer is the highest-level layer defined by the specification and is the effective interface of the Zigbee system to its end users. It comprises the majority of components added by the Zigbee specification: both ZDO (Zigbee device object) and its management procedures, together with application objects defined by the manufacturer, are considered part of this layer. This layer binds tables, sends messages between bound devices, manages group addresses, reassembles packets and also transports data. It is responsible for providing service to Zigbee device profiles.
Main components
The ZDO (Zigbee device object), a protocol in the Zigbee protocol stack, is responsible for overall device management, security keys, and policies. It is responsible for defining the role of a device as either coordinator or end device, as mentioned above, but also for the discovery of new devices on the network and the identification of their offered services. It may then go on to establish secure links with external devices and reply to binding requests accordingly.
The application support sublayer (APS) is the other main standard component of the stack, and as such it offers a well-defined interface and control services. It works as a bridge between the network layer and the other elements of the application layer: it keeps up-to-date binding tables in the form of a database, which can be used to find appropriate devices depending on the services that are needed and those the different devices offer. As the union between both specified layers, it also routes messages across the layers of the protocol stack.
Communication models
An application may consist of communicating objects which cooperate to carry out the desired tasks. Tasks will typically be largely local to each device, for instance, the control of each household appliance. The focus of Zigbee is to distribute work among many different devices which reside within individual Zigbee nodes which in turn form a network.
The objects that form the network communicate using the facilities provided by APS, supervised by ZDO interfaces. Within a single device, up to 240 application objects can exist, numbered in the range 1–240. 0 is reserved for the ZDO data interface and 255 for broadcast; the 241-254 range is not currently in use but may be in the future.
Two services are available for application objects to use (in Zigbee 1.0):
The key-value pair service (KVP) is meant for configuration purposes. It enables description, request and modification of object attributes through a simple interface based on get, set and event primitives, some allowing a request for a response. Configuration uses XML.
The message service is designed to offer a general approach to information treatment, avoiding the necessity to adapt application protocols and potential overhead incurred by KVP. It allows arbitrary payloads to be transmitted over APS frames.
Addressing is also part of the application layer. A network node consists of an IEEE 802.15.4-conformant radio transceiver and one or more device descriptions (collections of attributes that can be polled or set, or can be monitored through events). The transceiver is the basis for addressing, and devices within a node are specified by an endpoint identifier in the range 1 to 240.
Communication and device discovery
For applications to communicate, the devices that support them must use a common application protocol (types of messages, formats and so on); these sets of conventions are grouped in profiles. Furthermore, binding is decided upon by matching input and output unique within the context of a given profile and associated to an incoming or outgoing data flow in a device. Binding tables contain source and destination pairs.
Depending on the available information, device discovery may follow different methods. When the network address is known, the IEEE address can be requested using unicast communication. When it is not, petitions are broadcast. End devices will simply respond with the requested address while a network coordinator or a router will also send the addresses of all the devices associated with it.
This permits external devices to find out about devices in a network and the services that they offer, which endpoints can report when queried by the discovering device (which has previously obtained their addresses). Matching services can also be used.
The use of cluster identifiers enforces the binding of complementary entities using the binding tables, which are maintained by Zigbee coordinators, as the table must always be available within a network and coordinators are most likely to have a permanent power supply. Backups, managed by higher-level layers, may be needed by some applications. Binding requires an established communication link; after it exists, whether to add a new node to the network is decided, according to the application and security policies.
Communication can happen right after the association. Direct addressing uses both radio address and endpoint identifier, whereas indirect addressing uses every relevant field (address, endpoint, cluster, and attribute) and requires that they are sent to the network coordinator, which maintains associations and translates requests for communication. Indirect addressing is particularly useful to keep some devices very simple and minimize their need for storage. Besides these two methods, broadcast to all endpoints in a device is available, and group addressing is used to communicate with groups of endpoints belonging to a specified set of devices.
Security services
As one of its defining features, Zigbee provides facilities for carrying out secure communications, protecting establishment and transport of cryptographic keys, cyphering frames, and controlling devices. It builds on the basic security framework defined in IEEE 802.15.4. This part of the architecture relies on the correct management of symmetric keys and the correct implementation of methods and security policies.
Basic security model
The basic mechanism to ensure confidentiality is the adequate protection of all keying material. Trust must be assumed in the initial installation of the keys, as well as in the processing of security information. For an implementation to globally work, its general conformance to specified behaviors is assumed.
Keys are the cornerstone of the security architecture; as such their protection is of paramount importance, and keys are never supposed to be transported through an insecure channel. A momentary exception to this rule occurs during the initial phase of the addition to the network of a previously unconfigured device. The Zigbee network model must take particular care of security considerations, as ad hoc networks may be physically accessible to external devices. Also the state of the working environment cannot be predicted.
Within the protocol stack, different network layers are not cryptographically separated, so access policies are needed, and conventional design assumed. The open trust model within a device allows for key sharing, which notably decreases potential cost. Nevertheless, the layer which creates a frame is responsible for its security. If malicious devices may exist, every network layer payload must be ciphered, so unauthorized traffic can be immediately cut off. The exception, again, is the transmission of the network key, which confers a unified security layer to the grid, to a new connecting device.
Security architecture
Zigbee uses 128-bit keys to implement its security mechanisms. A key can be associated either to a network, being usable by both Zigbee layers and the MAC sublayer, or to a link, acquired through pre-installation, agreement or transport. Establishment of link keys is based on a master key which controls link key correspondence. Ultimately, at least, the initial master key must be obtained through a secure medium (transport or pre-installation), as the security of the whole network depends on it. Link and master keys are only visible to the application layer. Different services use different one-way variations of the link key to avoid leaks and security risks.
Key distribution is one of the most important security functions of the network. A secure network will designate one special device which other devices trust for the distribution of security keys: the trust center. Ideally, devices will have the trust center address and initial master key preloaded; if a momentary vulnerability is allowed, it will be sent as described above. Typical applications without special security needs will use a network key provided by the trust center (through the initially insecure channel) to communicate.
Thus, the trust center maintains the network key and provides point-to-point security. Devices will only accept communications originating from a key supplied by the trust center, except for the initial master key. The security architecture is distributed among the network layers as follows:
The MAC sublayer is capable of single-hop reliable communications. As a rule, the security level it is to use is specified by the upper layers.
The network layer manages routing, processing received messages and being capable of broadcasting requests. Outgoing frames will use the adequate link key according to the routing if it is available; otherwise, the network key will be used to protect the payload from external devices.
The application layer offers key establishment and transport services to both ZDO and applications.
The security levels infrastructure is based on CCM*, which adds encryption- and integrity-only features to CCM.
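Plain AES-CCM, on which CCM* builds, can be demonstrated with Python's pyca/cryptography package. The 13-byte nonce length matches Zigbee's CCM* usage, but the key handling, header, and payload bytes below are illustrative assumptions rather than the Zigbee frame format.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)   # Zigbee keys are 128 bits
aesccm = AESCCM(key, tag_length=8)          # CCM permits shortened tags

nonce = os.urandom(13)                      # Zigbee's CCM* nonce is 13 bytes
header = b"\x48\x02"                        # authenticated but not encrypted
payload = b"toggle lamp"                    # encrypted and authenticated

ct = aesccm.encrypt(nonce, payload, header)        # returns ciphertext || tag
assert aesccm.decrypt(nonce, ct, header) == payload
```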
According to the German computer e-magazine Heise Online, Zigbee Home Automation 1.2 is using fallback keys for encryption negotiation which are known and cannot be changed. This makes the encryption highly vulnerable.
Simulation
Network simulators, like ns2, OMNeT++, OPNET, and NetSim can be used to simulate IEEE 802.15.4 Zigbee networks.
These simulators come with open source C or C++ libraries for users to modify. This way users can determine the validity of new algorithms before hardware implementation.
See also
Comparison of 802.15.4 radio modules
Comparison of wireless data standards
Connected Home over IP
Mobile ad hoc networks
Thread
References
External links
IEEE 802
Home automation
Building automation
Personal area networks
Mesh networking
Computer-related introductions in 2004
Wireless communication systems
Rotor machine
In cryptography, a rotor machine is an electro-mechanical stream cipher device used for encrypting and decrypting messages. Rotor machines were the cryptographic state-of-the-art for a prominent period of history; they were in widespread use in the 1920s–1970s. The most famous example is the German Enigma machine, the output of which was deciphered by the Allies during World War II, producing intelligence code-named Ultra.
Description
The primary component of a rotor machine is a set of rotors, also termed wheels or drums, which are rotating disks with an array of electrical contacts on either side. The wiring between the contacts implements a fixed substitution of letters, replacing them in some complex fashion. On its own, this would offer little security; however, before or after encrypting each letter, the rotors advance positions, changing the substitution. By this means, a rotor machine produces a complex polyalphabetic substitution cipher, which changes with every key press.
Background
In classical cryptography, one of the earliest encryption methods was the simple substitution cipher, where letters in a message were systematically replaced using some secret scheme. Monoalphabetic substitution ciphers used only a single replacement scheme — sometimes termed an "alphabet"; this could be easily broken, for example, by using frequency analysis. Somewhat more secure were schemes involving multiple alphabets, polyalphabetic ciphers. Because such schemes were implemented by hand, only a handful of different alphabets could be used; anything more complex would be impractical. However, using only a few alphabets left the ciphers vulnerable to attack. The invention of rotor machines mechanised polyalphabetic encryption, providing a practical way to use a much larger number of alphabets.
The earliest cryptanalytic technique was frequency analysis, in which letter patterns unique to every language could be used to discover information about the substitution alphabet(s) in use in a mono-alphabetic substitution cipher. For instance, in English, the plaintext letters E, T, A, O, I, N and S, are usually easy to identify in ciphertext on the basis that since they are very frequent (see ETAOIN SHRDLU), their corresponding ciphertext letters will also be as frequent. In addition, bigram combinations like NG, ST and others are also very frequent, while others are rare indeed (Q followed by anything other than U for instance). The simplest frequency analysis relies on one ciphertext letter always being substituted for a plaintext letter in the cipher: if this is not the case, deciphering the message is more difficult. For many years, cryptographers attempted to hide the telltale frequencies by using several different substitutions for common letters, but this technique was unable to fully hide patterns in the substitutions for plaintext letters. Such schemes were being widely broken by the 16th century.
In the mid-15th century, a new technique was invented by Alberti, now known generally as polyalphabetic ciphers, which recognised the virtue of using more than a single substitution alphabet; he also invented a simple technique for "creating" a multitude of substitution patterns for use in a message. Two parties exchanged a small amount of information (referred to as the key) and used it to create many substitution alphabets, and so many different substitutions for each plaintext letter over the course of a single plaintext. The idea is simple and effective, but proved more difficult to use than might have been expected. Many ciphers were only partial implementations of Alberti's, and so were easier to break than they might have been (e.g. the Vigenère cipher).
Not until the 1840s (Babbage) was any technique known which could reliably break any of the polyalphabetic ciphers. His technique also looked for repeating patterns in the ciphertext, which provide clues about the length of the key. Once this is known, the message essentially becomes a series of messages, each as long as the length of the key, to which normal frequency analysis can be applied. Charles Babbage, Friedrich Kasiski, and William F. Friedman are among those who did most to develop these techniques.
Cipher designers tried to get users to use a different substitution for every letter, but this usually meant a very long key, which was a problem in several ways. A long key takes longer to convey (securely) to the parties who need it, and so mistakes are more likely in key distribution. Also, many users do not have the patience to carry out lengthy, letter-perfect evolutions, and certainly not under time pressure or battlefield stress. The 'ultimate' cipher of this type would be one in which such a 'long' key could be generated from a simple pattern (ideally automatically), producing a cipher in which there are so many substitution alphabets that frequency counting and statistical attacks would be effectively impossible. Enigma, and the rotor machines generally, were just what was needed since they were seriously polyalphabetic, using a different substitution alphabet for each letter of plaintext, and automatic, requiring no extraordinary abilities from their users. Their messages were, generally, much harder to break than any previous ciphers.
Mechanization
It is straightforward to create a machine for performing simple substitution. In an electrical system with 26 switches attached to 26 light bulbs, any one of the switches will illuminate one of the bulbs.
If each switch is operated by a key on a typewriter, and the bulbs are labelled with letters, then such a system can be used for encryption by choosing the wiring between the keys and the bulb: for example, typing the letter A would make the bulb labelled Q light up. However, the wiring is fixed, providing little security.
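To make the fixed-wiring idea concrete, here is a minimal Python sketch (the permutation is invented for illustration, not any historical wiring). Because the wiring never changes, the same plaintext letter always produces the same ciphertext letter, which is exactly the weakness frequency analysis exploits.

```python
import string

# A fixed "wiring": each plaintext letter lights exactly one output lamp.
# The permutation below is an arbitrary illustration.
WIRING = dict(zip(string.ascii_uppercase, "QWERTYUIOPASDFGHJKLZXCVBNM"))

def encrypt_fixed(plaintext: str) -> str:
    # The same letter always maps to the same output: a monoalphabetic
    # substitution, vulnerable to simple frequency analysis.
    return "".join(WIRING[c] for c in plaintext.upper() if c.isalpha())

print(encrypt_fixed("ATTACKATDAWN"))  # every repeated A becomes the same Q
```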
Rotor machines change the interconnecting wiring with each key stroke. The wiring is placed inside a rotor, and then rotated with a gear every time a letter is pressed.
So while pressing A the first time might generate a Q, the next time it might generate a J. Every letter pressed on the keyboard advances the rotor position and selects a new substitution, implementing a polyalphabetic substitution cipher.
Depending on the size of the rotor, this may, or may not, be more secure than hand ciphers. If the rotor has only 26 positions on it, one for each letter, then all messages will have a (repeating) key 26 letters long. Although the key itself (mostly hidden in the wiring of the rotor) might not be known, the methods for attacking these types of ciphers don't need that information. So while such a single rotor machine is certainly easy to use, it is no more secure than any other partial polyalphabetic cipher system.
But this is easy to correct. Simply stack more rotors next to each other, and gear them together. After the first rotor spins "all the way", make the rotor beside it spin one position. Now you would have to type 26 × 26 = 676 letters (for the Latin alphabet) before the key repeats, and yet it still only requires you to communicate a key of two letters/numbers to set things up. If a period of 676 letters is not long enough, another rotor can be added, resulting in a period 17,576 letters long.
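A toy model of this stacking, with odometer-style stepping, might look like the following sketch. The rotor wirings happen to be the published tables for Enigma rotors I and II, though any permutations would do, and the stepping here advances rotors after each letter as a simplification of real machines.

```python
import string

ALPHABET = string.ascii_uppercase

def make_rotor(wiring: str):
    """Turn a wiring string into a list of output indices."""
    return [ALPHABET.index(c) for c in wiring]

ROTORS = [make_rotor("EKMFLGDQVZNTOWYHXUSPAIBRCJ"),
          make_rotor("AJDKSIRUXBLHWTMCQGZNPYFVOE")]

def encrypt(plaintext, positions):
    out = []
    for ch in plaintext.upper():
        if not ch.isalpha():
            continue
        x = ALPHABET.index(ch)
        # Pass the signal through each rotor, offset by its position.
        for rotor, pos in zip(ROTORS, positions):
            x = (rotor[(x + pos) % 26] - pos) % 26
        out.append(ALPHABET[x])
        # Odometer stepping: the first rotor advances on every key press,
        # and a rotor that wraps back to 0 advances its neighbour once.
        for i in range(len(positions)):
            positions[i] = (positions[i] + 1) % 26
            if positions[i] != 0:
                break
    return "".join(out)

# With two 26-position rotors the substitution repeats only after
# 26 * 26 = 676 key presses.
print(encrypt("AAAA", [0, 0]))  # the same letter enciphers differently
```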
In order to be as easy to decipher as encipher, some rotor machines, most notably the Enigma machine, embodied a symmetric-key algorithm, i.e., encrypting twice with the same settings recovers the original message (see involution).
History
Invention
The concept of a rotor machine occurred to a number of inventors independently at a similar time.
In 2003, it emerged that the first inventors were two Dutch naval officers, Theo A. van Hengel (1875–1939) and R. P. C. Spengler (1875–1955) in 1915 (De Leeuw, 2003). Previously, the invention had been ascribed to four inventors working independently and at much the same time: Edward Hebern, Arvid Damm, Hugo Koch and Arthur Scherbius.
In the United States, Edward Hugh Hebern built a rotor machine using a single rotor in 1917. He became convinced he would get rich selling such a system to the military, the Hebern Rotor Machine, and produced a series of different machines with one to five rotors. His success was limited, however, and he went bankrupt in the 1920s. He sold a small number of machines to the US Navy in 1931.
In Hebern's machines the rotors could be opened up and the wiring changed in a few minutes, so a single mass-produced system could be sold to a number of users who would then produce their own rotor keying. Decryption consisted of taking out the rotor(s) and turning them around to reverse the circuitry. Unknown to Hebern, William F. Friedman of the US Army's SIS promptly demonstrated a flaw in the system that allowed the ciphers from it, and from any machine with similar design features, to be cracked with enough work.
Another early rotor machine inventor was Dutchman Hugo Koch, who filed a patent on a rotor machine in 1919. At about the same time in Sweden, Arvid Gerhard Damm invented and patented another rotor design. However, the rotor machine was ultimately made famous by Arthur Scherbius, who filed a rotor machine patent in 1918. Scherbius later went on to design and market the Enigma machine.
The Enigma machine
The most widely known rotor cipher device is the German Enigma machine used during World War II, of which there were a number of variants.
The standard Enigma model, Enigma I, used three rotors. At the end of the stack of rotors was an additional, non-rotating disk, the "reflector," wired such that the input was connected electrically back out to another contact on the same side and thus was "reflected" back through the three-rotor stack to produce the ciphertext.
When current was sent into most other rotor cipher machines, it would travel through the rotors and out the other side to the lamps. In the Enigma, however, it was "reflected" back through the disks before going to the lamps. The advantage of this was that there was nothing that had to be done to the setup in order to decipher a message; the machine was "symmetrical".
The Enigma's reflector guaranteed that no letter could be enciphered as itself, so an A could never turn back into an A. This helped Polish and, later, British efforts to break the cipher. (See Cryptanalysis of the Enigma.)
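A small sketch can make the reflector's properties concrete. The rotor and reflector pairings below are invented, and stepping is omitted, but the structure (out through the rotor, off the reflector, back through the same rotor) is what guarantees both the reciprocal property and that no letter enciphers to itself.

```python
import string

ALPHABET = string.ascii_uppercase

# A toy reflector: a fixed pairing of letters (here A<->B, C<->D, ...).
# Real reflectors used irregular pairings, but any pairing behaves the same.
REFLECTOR = {a: b for pair in zip(ALPHABET[0::2], ALPHABET[1::2])
             for a, b in (pair, pair[::-1])}

# A toy rotor held in a fixed position (stepping omitted for brevity).
ROTOR = dict(zip(ALPHABET, "QWERTYUIOPASDFGHJKLZXCVBNM"))
ROTOR_INV = {v: k for k, v in ROTOR.items()}

def press(letter: str) -> str:
    # The signal goes through the rotor, bounces off the reflector,
    # and comes back through the same rotor in reverse.
    return ROTOR_INV[REFLECTOR[ROTOR[letter]]]

for c in "HELLO":
    assert press(press(c)) == c   # enciphering twice restores the letter
    assert press(c) != c          # ...and no letter maps to itself
```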
Scherbius joined forces with a mechanical engineer named Ritter and formed Chiffriermaschinen AG in Berlin before demonstrating Enigma to the public in Bern in 1923, and then in 1924 at the World Postal Congress in Stockholm. In 1927 Scherbius bought Koch's patents, and in 1928 they added a plugboard, essentially a non-rotating manually rewireable fourth rotor, on the front of the machine. After the death of Scherbius in 1929, Willi Korn was in charge of further technical development of Enigma.
As with other early rotor machine efforts, Scherbius had limited commercial success. However, the German armed forces, responding in part to revelations that their codes had been broken during World War I, adopted the Enigma to secure their communications. The Reichsmarine adopted Enigma in 1926, and the German Army began to use a different variant around 1928.
The Enigma (in several variants) was the rotor machine that Scherbius's company and its successor, Heimsoeth & Rinke, supplied to the German military and to such agencies as the Nazi party security organization, the SD.
The Poles broke the German Army Enigma beginning in December 1932, not long after it had been put into service. On July 25, 1939, just five weeks before Hitler's invasion of Poland, the Polish General Staff's Cipher Bureau shared its Enigma-decryption methods and equipment with the French and British as the Poles' contribution to the common defense against Nazi Germany. Dilly Knox had already broken Spanish Nationalist messages on a commercial Enigma machine in 1937 during the Spanish Civil War.
A few months later, using the Polish techniques, the British began reading Enigma ciphers in collaboration with Polish Cipher Bureau cryptologists who had escaped Poland, overrun by the Germans, to reach Paris. The Poles continued breaking German Army Enigma—along with Luftwaffe Enigma traffic—until work at Station PC Bruno in France was shut down by the German invasion of May–June 1940.
The British continued breaking Enigma and, assisted eventually by the United States, extended the work to German Naval Enigma traffic (which the Poles had been reading before the war), most especially to and from U-boats during the Battle of the Atlantic.
Various machines
During World War II (WWII), both the Germans and Allies developed additional rotor machines. The Germans used the Lorenz SZ 40/42 and Siemens and Halske T52 machines to encipher teleprinter traffic which used the Baudot code; this traffic was known as Fish to the Allies. The Allies developed the Typex (British) and the SIGABA (American). During the war the Swiss began development on an Enigma improvement which became the NEMA machine, which was put into service after World War II. There was even a Japanese-developed variant of the Enigma in which the rotors sat horizontally; it was apparently never put into service. The Japanese PURPLE machine was not a rotor machine, being built around electrical stepping switches, but was conceptually similar.
Rotor machines continued to be used even in the computer age. The KL-7 (ADONIS), an encryption machine with 8 rotors, was widely used by the U.S. and its allies from the 1950s until the 1980s. The last Canadian message encrypted with a KL-7 was sent on June 30, 1983. The Soviet Union and its allies used a 10-rotor machine called Fialka well into the 1970s.
A unique rotor machine called the Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark. This unusual device is inspired by Enigma, but makes use of 40-point rotors, allowing letters, numbers and some punctuation; each rotor contains 509 parts.
A software implementation of a rotor machine was used in the crypt command that was part of early UNIX operating systems. It was among the first software programs to run afoul of U.S. export regulations which classified cryptographic implementations as munitions.
List of rotor machines
BID/60 (Singlet)
Combined Cipher Machine
Enigma machine
Fialka
Hagelin's machines including
C-36
C-52
CD-57
M-209
Hebern rotor machine
HX-63
KL-7
Lacida
Lorenz SZ 40/42
M-325
Mercury
NEMA
OMI cryptograph
Portex
RED
Siemens and Halske T52
SIGABA
SIGCUM
Typex
References
Friedrich L. Bauer, "An error in the history of rotor encryption devices", Cryptologia 23(3), July 1999, page 206.
Cipher A. Deavours, Louis Kruh, "Machine Cryptography and Modern Cryptanalysis", Artech House, 1985.
Karl de Leeuw, "The Dutch invention of the rotor machine, 1915–1923." Cryptologia 27(1), January 2003, pp. 73–94.
External links
Site with cipher machine images, many of rotor machines
Rotor machine photographs
Timeline of Cipher Machines |
196882 | https://en.wikipedia.org/wiki/Burrows%E2%80%93Abadi%E2%80%93Needham%20logic | Burrows–Abadi–Needham logic | Burrows–Abadi–Needham logic (also known as the BAN logic) is a set of rules for defining and analyzing information exchange protocols. Specifically, BAN logic helps its users determine whether exchanged information is trustworthy, secured against eavesdropping, or both. BAN logic starts with the assumption that all information exchanges happen on media vulnerable to tampering and public monitoring. This has evolved into the popular security mantra, "Don't trust the network."
A typical BAN logic sequence includes three steps:
Verification of message origin
Verification of message freshness
Verification of the origin's trustworthiness.
BAN logic uses postulates and definitions – like all axiomatic systems – to analyze authentication protocols. Use of the BAN logic often accompanies a security protocol notation formulation of a protocol and is sometimes given in papers.
Language type
BAN logic, and logics in the same family, are decidable: there exists an algorithm that takes a set of BAN hypotheses and a purported conclusion and answers whether or not the conclusion is derivable from the hypotheses. The proposed algorithms use a variant of magic sets.
Alternatives and criticism
BAN logic inspired many other similar formalisms, such as GNY logic. Some of these try to repair one weakness of BAN logic: the lack of a good semantics with a clear meaning in terms of knowledge and possible universes. However, starting in the mid-1990s, crypto protocols were analyzed in operational models (assuming perfect cryptography) using model checkers, and numerous bugs were found in protocols that were "verified" with BAN logic and related formalisms. In some cases a protocol was reasoned to be secure by BAN analysis but was in fact insecure. This has led to the abandonment of BAN-family logics in favor of proof methods based on standard invariance reasoning.
Basic rules
The definitions and their implications are below (P and Q are network agents, X is a message,
and K is an encryption key):
P believes X: P acts as if X is true, and may assert X in other messages.
P has jurisdiction over X: P's beliefs about X should be trusted.
P said X: At one time, P transmitted (and believed) message X, although P might no longer believe X.
P sees X: P receives message X, and can read and repeat X.
{X}K: X is encrypted with key K.
fresh(X): X has not previously been sent in any message.
key(K, P↔Q): P and Q may communicate with shared key K.
The meaning of these definitions is captured in a series of postulates:
If P believes key(K, P↔Q), and P sees {X}K, then P believes (Q said X).
If P believes (Q said X) and P believes fresh(X), then P believes (Q believes X).
P must believe that X is fresh here. If X is not known to be fresh, then it might be an obsolete message, replayed by an attacker.
If P believes (Q has jurisdiction over X) and P believes (Q believes X), then P believes X.
There are several other technical postulates having to do with composition of messages. For example, if P believes that Q said <X, Y>, the concatenation of X and Y, then P also believes that Q said X, and P also believes that Q said Y.
Using this notation, the assumptions behind an authentication protocol can be formalized. Using the postulates, one can prove that certain agents believe that they can communicate using certain keys. If the proof fails, the point of failure usually suggests an attack which compromises the protocol.
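As a concrete illustration, the three postulates above can be applied mechanically. The following Python sketch is a toy forward-chaining engine over an ad-hoc tuple encoding of BAN statements; the encoding, including flattening key(K, P↔Q) to an unordered pair, is a choice made here for brevity and is not standard notation.

```python
# Toy forward-chaining engine for three BAN postulates. Statements are
# nested tuples, e.g. ("believes", "P", ("key", "K", ("P", "Q"))).

def close(facts):
    """Apply the postulates repeatedly until no new beliefs appear."""
    facts = set(facts)
    while True:
        new = set()
        for f in facts:
            if f[0] != "believes":
                continue
            p, body = f[1], f[2]
            for g in facts:
                # Message meaning: P believes key(K, P<->Q) and P sees {X}K
                #   =>  P believes (Q said X)
                if (body[0] == "key" and g[0] == "sees" and g[1] == p
                        and g[2][0] == "enc" and g[2][1] == body[1]):
                    pair = body[2]
                    q = pair[0] if pair[1] == p else pair[1]
                    new.add(("believes", p, ("said", q, g[2][2])))
                # Nonce verification: P believes (Q said X) and
                # P believes fresh(X)  =>  P believes (Q believes X)
                if (body[0] == "said"
                        and g == ("believes", p, ("fresh", body[2]))):
                    new.add(("believes", p, ("believes", body[1], body[2])))
                # Jurisdiction: P believes (Q controls X) and
                # P believes (Q believes X)  =>  P believes X
                if (body[0] == "controls"
                        and g == ("believes", p, ("believes", body[1], body[2]))):
                    new.add(("believes", p, body[2]))
        if new <= facts:
            return facts
        facts |= new
```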
BAN logic analysis of the Wide Mouth Frog protocol
A very simple protocol — the Wide Mouth Frog protocol — allows two agents, A and B, to establish secure communications, using a trusted authentication server, S, and synchronized clocks all around. Using standard notation, the protocol's two messages (analyzed step by step below) are:
1 A→S: {t, key(Kab, A↔B)}Kas
2 S→B: {t, A, A believes key(Kab, A↔B)}Kbs
Agents A and B are equipped with keys Kas and Kbs, respectively, for communicating securely with S. So we have assumptions:
A believes key(Kas, A↔S)
S believes key(Kas, A↔S)
B believes key(Kbs, B↔S)
S believes key(Kbs, B↔S)
Agent A wants to initiate a secure conversation with B. It therefore invents a key, Kab, which it will use to communicate with B. A believes that this key is secure, since it made up the key itself:
A believes key(Kab, A↔B)
B is willing to accept this key, as long as it is sure that it came from A:
B believes (A has jurisdiction over key(K, A↔B))
Moreover, B is willing to trust S to accurately relay keys from A:
B believes (S has jurisdiction over (A believes key(K, A↔B)))
That is, if B believes that S believes that A wants to use a particular key to communicate with B, then B will trust S and believe it also.
The goal is to have
B believes key(Kab, A↔B)
A reads the clock, obtaining the current time t, and sends the following message:
1 A→S: {t, key(Kab, A↔B)}Kas
That is, it sends its chosen session key and the current time to S, encrypted with Kas, the key it shares with the authentication server.
Since S believes that key(Kas, A↔S), and S sees {t, key(Kab, A↔B)}Kas, S concludes that A actually said {t, key(Kab, A↔B)}. (In particular, S believes that the message was not manufactured out of whole cloth by some attacker.)
Since the clocks are synchronized, we can assume
S believes fresh(t)
Since S believes fresh(t) and S believes A said {t, key(Kab, A↔B)}, S believes that A actually believes that key(Kab, A↔B). (In particular, S believes that the message was not replayed by some attacker who captured it at some time in the past.)
S then forwards the key to B:
2 S→B: {t, A, A believes key(Kab, A↔B)}Kbs
Because message 2 is encrypted with Kbs, and B believes key(Kbs, B↔S), B now believes that S said {t, A, A believes key(Kab, A↔B)}. Because the clocks are synchronized, B believes fresh(t), and so fresh(A believes key(Kab, A↔B)). Because B believes that S's statement is fresh, B believes that S believes that (A believes key(Kab, A↔B)). Because B believes that S is authoritative about what A believes, B believes that (A believes key(Kab, A↔B)). Because B believes that A is authoritative about session keys between A and B, B believes key(Kab, A↔B). B can now contact A directly, using Kab as a secret session key.
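Under the same ad-hoc tuple encoding, B's side of this derivation can be handed to the close function sketched earlier. The assumptions below mirror the ones stated above, with the timestamp t folded directly into the freshness assumption for brevity.

```python
# B's view of the Wide Mouth Frog run, in the toy encoding used earlier.
kab = ("key", "Kab", ("A", "B"))
a_says_kab = ("believes", "A", kab)

facts = {
    ("believes", "B", ("key", "Kbs", ("B", "S"))),    # B's server key
    ("sees", "B", ("enc", "Kbs", a_says_kab)),        # message 2 arrives
    ("believes", "B", ("fresh", a_says_kab)),         # from the timestamp t
    ("believes", "B", ("controls", "S", a_says_kab)), # B trusts S's relaying
    ("believes", "B", ("controls", "A", kab)),        # B trusts A's key choice
}

# The chain B believes (S said ...) -> (S believes ...) -> (A believes kab)
# -> (B believes kab) matches the prose derivation above.
assert ("believes", "B", kab) in close(facts)
```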
Now let's suppose that we abandon the assumption that the clocks are synchronized. In that case, S gets message 1 from A with {t, key(Kab, A↔B)}, but it can no longer conclude that t is fresh. It knows that A sent this message at some time in the past (because it is encrypted with Kas) but not that this is a recent message, so S doesn't believe that A necessarily wants to continue to use the key Kab. This points directly at an attack on the protocol: an attacker who can capture messages can guess one of the old session keys Kab. (This might take a long time.) The attacker then replays the old {t, key(Kab, A↔B)} message, sending it to S. If the clocks aren't synchronized (perhaps as part of the same attack), S might believe this old message and request that B use the old, compromised key over again.
The original Logic of Authentication paper (linked below) contains this example and many others, including analyses of the Kerberos handshake protocol, and two versions of the Andrew Project RPC handshake (one of which is defective).
References
Further reading
Source: The Burrows–Abadi–Needham logic
Theory of cryptography
Automated theorem proving |
197489 | https://en.wikipedia.org/wiki/SIM%20card | SIM card | A SIM card, also known as subscriber identity module or subscriber identification module (SIM), is an integrated circuit intended to securely store the international mobile subscriber identity (IMSI) number and its related key, which are used to identify and authenticate subscribers on mobile telephony devices (such as mobile phones and computers). It is also possible to store contact information on many SIM cards. SIM cards are always used on GSM phones; for CDMA phones, they are needed only for LTE-capable handsets. SIM cards can also be used in satellite phones, smart watches, computers, or cameras.
The SIM circuit is part of the function of a universal integrated circuit card (UICC) physical smart card, which is usually made of PVC with embedded contacts and semiconductors. SIM cards are transferable between different mobile devices. The first UICC smart cards were the size of credit and bank cards; sizes were reduced several times over the years, usually keeping electrical contacts the same, so that a larger card could be cut down to a smaller size.
A SIM card contains a unique serial number (ICCID), international mobile subscriber identity (IMSI) number, security authentication and ciphering information, temporary information related to the local network, a list of the services the user has access to, and two passwords: a personal identification number (PIN) for ordinary use, and a personal unblocking key (PUK) for PIN unlocking. In Europe, the serial SIM number (SSN) is also sometimes accompanied by an international article number (IAN) or a European article number (EAN) required when registering online for the subscription of a prepaid card.
History and procurement
The SIM card is a type of smart card, the basis for which is the silicon integrated circuit (IC) chip. The idea of incorporating a silicon IC chip onto a plastic card originates from the late 1960s. Smart cards have since used MOS integrated circuit chips, along with MOS memory technologies such as flash memory and EEPROM (electrically erasable programmable read-only memory).
The SIM was initially specified by the European Telecommunications Standards Institute in the specification with the number TS 11.11. This specification describes the physical and logical behaviour of the SIM. With the development of UMTS, the specification work was partially transferred to 3GPP. 3GPP is now responsible for the further development of applications like SIM (TS 51.011) and USIM (TS 31.102) and ETSI for the further development of the physical card UICC.
The first SIM card was developed in 1991 by Munich smart-card maker Giesecke & Devrient, who sold the first 300 SIM cards to the Finnish wireless network operator Radiolinja.
Today, SIM cards are ubiquitous, allowing over 7 billion devices to connect to cellular networks around the world. According to the International Card Manufacturers Association (ICMA), there were 5.4 billion SIM cards manufactured globally in 2016 creating over $6.5 billion in revenue for traditional SIM card vendors. The rise of cellular IoT and 5G networks is predicted to drive the growth of the addressable market for SIM card manufacturers to over 20 billion cellular devices by 2020. The introduction of embedded-SIM (eSIM) and remote SIM provisioning (RSP) from the GSMA may disrupt the traditional SIM card ecosystem with the entrance of new players specializing in "digital" SIM card provisioning and other value-added services for mobile network operators.
Design
There are three operating voltages for SIM cards: 5 V, 3 V and 1.8 V (ISO/IEC 7816-3 classes A, B and C, respectively). The operating voltage of the majority of SIM cards launched before 1998 was 5 V. SIM cards produced subsequently are compatible with 3 V and 5 V. Modern cards support 5 V, 3 V and 1.8 V.
Modern SIM cards allow applications to load when the SIM is in use by the subscriber. These applications communicate with the handset or a server using SIM Application Toolkit, which was initially specified by 3GPP in TS 11.14. (There is an identical ETSI specification with different numbering.) ETSI and 3GPP maintain the SIM specifications. The main specifications are: ETSI TS 102 223 (the toolkit for smartcards), ETSI TS 102 241 (API), ETSI TS 102 588 (application invocation), and ETSI TS 131 111 (toolkit for more SIM-likes). SIM toolkit applications were initially written in native code using proprietary APIs. To provide interoperability of the applications, ETSI chose Java Card. A multi-company collaboration called GlobalPlatform defines some extensions on the cards, with additional APIs and features like more cryptographic security and RFID contactless use added.
Data
SIM cards store network-specific information used to authenticate and identify subscribers on the network. The most important of these are the ICCID, IMSI, authentication key (Ki), local area identity (LAI) and operator-specific emergency number. The SIM also stores other carrier-specific data such as the SMSC (Short Message service center) number, service provider name (SPN), service dialling numbers (SDN), advice-of-charge parameters and value-added service (VAS) applications. (Refer to GSM 11.11.)
SIM cards can come in various data capacities, from 8 KB to at least 256 KB. All can store a maximum of 250 contacts on the SIM, but while the 32 KB version has room for 33 mobile network codes (MNCs) or network identifiers, the 64 KB version has room for 80 MNCs. This is used by network operators to store data on preferred networks, mostly used when the SIM is not in its home network but is roaming. The network operator that issued the SIM card can use this to have a phone connect to a preferred network that is more economic for the provider instead of having to pay the network operator that the phone discovered first. This does not mean that a phone containing this SIM card can connect to a maximum of only 33 or 80 networks, but it means that the SIM card issuer can specify only up to that number of preferred networks. If a SIM is outside these preferred networks, it uses the first or best available network.
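As a rough illustration of how the preferred-network list might be consulted while roaming, consider this sketch; the PLMN codes and the selection policy are invented, and real handsets follow more elaborate 3GPP selection procedures.

```python
# The SIM's preferred-PLMN list, in priority order (codes invented).
PREFERRED_PLMNS = ["26201", "20810", "23420"]

def pick_network(visible):
    """Pick a roaming network: preferred entries first, then first found."""
    for plmn in PREFERRED_PLMNS:
        if plmn in visible:
            return plmn
    return visible[0]

print(pick_network(["23430", "23420"]))   # chooses the preferred 23420
```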
ICCID
Each SIM is internationally identified by its integrated circuit card identifier (ICCID). ICCID is the identifier of the actual SIM card itself – i.e. an identifier for the SIM chip. Nowadays ICCID numbers are also used to identify eSIM profiles, and not only physical SIM cards. ICCIDs are stored in the SIM cards and are also engraved or printed on the SIM card body during a process called personalisation. The ICCID is defined by the ITU-T recommendation E.118 as the primary account number. Its layout is based on ISO/IEC 7812. According to E.118, the number can be up to 19 digits long, including a single check digit calculated using the Luhn algorithm. However, the GSM Phase 1 defined the ICCID length as an opaque data field, 10 octets (20 digits) in length, whose structure is specific to a mobile network operator.
The number is composed of the following subparts:
Issuer identification number (IIN)
Maximum of seven digits:
Major industry identifier (MII), 2 fixed digits, 89 for telecommunication purposes.
Country code, 2 or 3 digits, as defined by ITU-T recommendation E.164.
NANP countries, apart from Canada, use 01, i.e. prepending a zero to their common calling code +1
Canada uses 302
Russia uses 701, i.e. appending 01 to its calling code +7
Kazakhstan uses 997, even though it shares the calling code +7 with Russia
Issuer identifier, 1–4 digits.
Often identical to the mobile network code (MNC).
Individual account identification
Individual account identification number. Its length is variable, but every number under one IIN has the same length.
Often identical to the mobile subscription identification number (MSIN).
Check digit
Single digit calculated from the other digits using the Luhn algorithm.
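A short sketch of the Luhn computation used for the final ICCID digit follows; the 18-digit body below is made up, though the leading 89 is the real telecommunications prefix.

```python
def luhn_check_digit(body: str) -> int:
    """Compute the Luhn check digit for an ICCID body (all digits but the last)."""
    total = 0
    # Walk the body from its rightmost digit; double every second digit,
    # subtracting 9 whenever the doubled value exceeds 9.
    for i, ch in enumerate(reversed(body)):
        d = int(ch)
        if i % 2 == 0:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return (10 - total % 10) % 10

# Hypothetical 18-digit body: "89" (telecom MII) + country code + issuer
# and account digits; only the leading "89" is meaningful here.
body = "894412345678901234"
print(body + str(luhn_check_digit(body)))   # full 19-digit ICCID
```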
With the GSM Phase 1 specification using 10 octets into which ICCID is stored as packed BCD, the data field has room for 20 digits with hexadecimal digit "F" being used as filler when necessary.
In practice, this means that on GSM SIM cards there are 20-digit (19+1) and 19-digit (18+1) ICCIDs in use, depending upon the issuer. However, a single issuer always uses the same size for its ICCIDs.
To confuse matters more, SIM factories seem to have varying ways of delivering electronic copies of SIM personalization datasets. Some datasets are without the ICCID checksum digit, others are with the digit.
As required by E.118, the ITU-T updates a list of all current internationally assigned IIN codes in its Operational Bulletins which are published twice a month (the last as of January 2019 was No. 1163 from 1 January 2019). ITU-T also publishes complete lists: as of January 2019, the list issued on 1 December 2018 was current, having all issuer identifier numbers before 1 December 2018.
International mobile subscriber identity (IMSI)
SIM cards are identified on their individual operator networks by a unique international mobile subscriber identity (IMSI). Mobile network operators connect mobile phone calls and communicate with their market SIM cards using their IMSIs. The format is:
The first three digits represent the mobile country code (MCC).
The next two or three digits represent the mobile network code (MNC). Three-digit MNC codes are allowed by E.212 but are mainly used in the United States and Canada. One MCC can have both 2 digit and 3 digit MNCs, an example is 350 007.
The next digits represent the mobile subscriber identification number (MSIN). Normally there are 10 digits, but can be fewer in the case of a 3-digit MNC or if national regulations indicate that the total length of the IMSI should be less than 15 digits.
Digits are different from country to country.
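A sketch of splitting an IMSI into its parts: the MNC length cannot be determined from the IMSI alone, so real implementations consult per-operator tables. The lookup set below is a made-up stand-in, and, as noted above, even a single MCC can mix 2- and 3-digit MNCs.

```python
# Invented stand-in for a per-operator MNC-length table.
THREE_DIGIT_MNC_MCCS = {"310", "311", "302"}   # illustrative only

def split_imsi(imsi: str):
    mcc = imsi[:3]                              # mobile country code
    mnc_len = 3 if mcc in THREE_DIGIT_MNC_MCCS else 2
    mnc = imsi[3:3 + mnc_len]                   # mobile network code
    msin = imsi[3 + mnc_len:]                   # subscriber number
    return mcc, mnc, msin

print(split_imsi("310150123456789"))   # ('310', '150', '123456789')
```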
Authentication key (Ki)
The Ki is a 128-bit value used in authenticating the SIMs on a GSM mobile network (for USIM network, you still need K but other parameters are also needed). Each SIM holds a unique Ki assigned to it by the operator during the personalization process. The Ki is also stored in a database (termed authentication center or AuC) on the carrier's network.
The SIM card is designed to prevent someone from getting the Ki by using the smart-card interface. Instead, the SIM card provides a function, Run GSM Algorithm, that the phone uses to pass data to the SIM card to be signed with the Ki. This, by design, makes using the SIM card mandatory unless the Ki can be extracted from the SIM card, or the carrier is willing to reveal the Ki. In practice, the GSM cryptographic algorithm for computing a signed response (SRES_1/SRES_2: see steps 3 and 4, below) from the Ki has certain vulnerabilities that can allow the extraction of the Ki from a SIM card and the making of a duplicate SIM card.
Authentication process:
When the mobile equipment starts up, it obtains the international mobile subscriber identity (IMSI) from the SIM card, and passes this to the mobile operator, requesting access and authentication. The mobile equipment may have to pass a PIN to the SIM card before the SIM card reveals this information.
The operator network searches its database for the incoming IMSI and its associated Ki.
The operator network then generates a random number (RAND, which is a nonce) and signs it with the Ki associated with the IMSI (and stored on the SIM card), computing another number, that is split into the Signed Response 1 (SRES_1, 32 bits) and the encryption key Kc (64 bits).
The operator network then sends the RAND to the mobile equipment, which passes it to the SIM card. The SIM card signs it with its Ki, producing Signed Response 2 (SRES_2) and Kc, which it gives to the mobile equipment. The mobile equipment passes SRES_2 on to the operator network.
The operator network then compares its computed SRES_1 with the computed SRES_2 that the mobile equipment returned. If the two numbers match, the SIM is authenticated and the mobile equipment is granted access to the operator's network. Kc is used to encrypt all further communications between the mobile equipment and the operator.
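The challenge-response flow in the steps above can be re-enacted in miniature. The sketch below substitutes HMAC-SHA256 for the operator's real A3/A8 algorithms (such as COMP128), which are deliberately not implemented here; only the shape of the exchange (shared Ki, random challenge, matching SRES, derived Kc) follows the description.

```python
import hashlib
import hmac
import os

def a3_a8(ki: bytes, rand: bytes):
    # HMAC-SHA256 as a stand-in keyed function, not the real algorithm.
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    return digest[:4], digest[4:12]      # SRES (32 bits), Kc (64 bits)

ki = os.urandom(16)        # 128-bit shared secret personalised into the SIM
rand = os.urandom(16)      # the network's random challenge (a nonce)

sres_1, kc_net = a3_a8(ki, rand)   # computed by the network's AuC
sres_2, kc_sim = a3_a8(ki, rand)   # computed inside the SIM card

assert sres_1 == sres_2    # match => subscriber is authenticated
assert kc_net == kc_sim    # both ends now share the cipher key Kc
```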
Location area identity
The SIM stores network state information, which is received from the location area identity (LAI). Operator networks are divided into location areas, each having a unique LAI number. When the device changes locations, it stores the new LAI to the SIM and sends it back to the operator network with its new location. If the device is power cycled, it takes data off the SIM, and searches for the prior LAI.
SMS messages and contacts
Most SIM cards store a number of SMS messages and phone book contacts. It stores the contacts in simple "name and number" pairs. Entries that contain multiple phone numbers and additional phone numbers are usually not stored on the SIM card. When a user tries to copy such entries to a SIM, the handset's software breaks them into multiple entries, discarding information that is not a phone number. The number of contacts and messages stored depends on the SIM; early models stored as few as five messages and 20 contacts, while modern SIM cards can usually store over 250 contacts.
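A sketch of the flattening described above, where a handset copies a multi-number contact into single "name and number" SIM entries; the label-suffix convention is invented, and real handsets differ.

```python
# One SIM entry per number, with an invented one-letter label suffix.
contact = {"name": "Alice",
           "numbers": {"mobile": "+15550100", "work": "+15550101"}}

sim_entries = [(f"{contact['name']}/{label[0].upper()}", number)
               for label, number in contact["numbers"].items()]
print(sim_entries)   # [('Alice/M', '+15550100'), ('Alice/W', '+15550101')]
```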
Formats
SIM cards have been made smaller over the years; functionality is independent of format. Full-size SIM was followed by mini-SIM, micro-SIM, and nano-SIM. SIM cards are also made to embed in devices.
All versions of the non-embedded SIM cards share the same ISO/IEC 7816 pin arrangement.
Full-size SIM
The full-size SIM (or 1FF, 1st form factor) was the first form factor to appear. It was the size of a credit card (85.60 mm × 53.98 mm × 0.76 mm). Later smaller SIMs are often supplied embedded in a full-size card from which they can be removed.
Mini-SIM
The mini-SIM (or 2FF) card has the same contact arrangement as the full-size SIM card and is normally supplied within a full-size card carrier, attached by a number of linking pieces. This arrangement (defined in ISO/IEC 7810 as ID-1/000) lets such a card be used in a device that requires a full-size card or in a device that requires a mini-SIM card, after breaking the linking pieces. As the full-size SIM is no longer used, some suppliers refer to the mini-SIM as a "standard SIM" or "regular SIM".
Micro-SIM
The micro-SIM (or 3FF) card has the same thickness and contact arrangements, but reduced length and width as shown in the table above.
The micro-SIM was introduced by the European Telecommunications Standards Institute (ETSI) along with SCP, 3GPP (UTRAN/GERAN), 3GPP2 (CDMA2000), ARIB, GSM Association (GSMA SCaG and GSMNA), GlobalPlatform, Liberty Alliance, and the Open Mobile Alliance (OMA) for the purpose of fitting into devices too small for a mini-SIM card.
The form factor was mentioned in the December 1998 3GPP SMG9 UMTS Working Party, which is the standards-setting body for GSM SIM cards, and the form factor was agreed upon in late 2003.
The micro-SIM was designed for backward compatibility. The major issue for backward compatibility was the contact area of the chip. Retaining the same contact area makes the micro-SIM compatible with the prior, larger SIM readers through the use of plastic cutout surrounds. The SIM was also designed to run at the same speed (5 MHz) as the prior version. The same size and positions of pins resulted in numerous "how-to" tutorials and YouTube videos with detailed instructions on how to cut a mini-SIM card down to micro-SIM size.
The chairman of EP SCP, Dr Klaus Vedder, said
Micro-SIM cards were introduced by various mobile service providers for the launch of the original iPad, and later for smartphones, from April 2010. The iPhone 4 was the first smartphone to use a micro-SIM card in June 2010, followed by many others.
Nano-SIM
The nano-SIM (or 4FF) card was introduced on 11 October 2012, when mobile service providers in various countries started to supply it for phones that supported the format. The nano-SIM measures 12.3 mm × 8.8 mm × 0.67 mm and reduces the previous format to the contact area while maintaining the existing contact arrangements. A small rim of isolating material is left around the contact area to avoid short circuits with the socket. The nano-SIM is 0.67 mm thick, compared to the 0.76 mm of its predecessors. 4FF cards can be put into adapters for use with devices designed for 2FF or 3FF SIMs, and are made thinner for that purpose, and telephone companies give due warning about this.
The iPhone 5, released in September 2012, was the first device to use a nano-SIM card, followed by other handsets.
Security
In July 2013, Karsten Nohl, a security researcher from SRLabs, described vulnerabilities in some SIM cards that supported DES, which, despite its age, is still used by some operators. The attack could lead to the phone being remotely cloned or let someone steal payment credentials from the SIM. Further details of the research were provided at BlackHat on 31 July 2013.
In response, the International Telecommunication Union said that the development was "hugely significant" and that it would be contacting its members.
In February 2015, it was reported by The Intercept that the NSA and GCHQ had stolen the encryption keys (Ki's) used by Gemalto (the manufacturer of 2 billion SIM cards annually), enabling these intelligence agencies to monitor voice and data communications without the knowledge or approval of cellular network providers or judicial oversight. Having finished its investigation, Gemalto claimed that it had “reasonable grounds” to believe that the NSA and GCHQ carried out an operation to hack its network in 2010 and 2011, but said the number of possibly stolen keys would not have been massive.
In September 2019, Cathal Mc Daid, a security researcher from AdaptiveMobile Security, described how vulnerabilities in some SIM cards that contained the S@T Browser library were being actively exploited. This vulnerability was named Simjacker. Attackers were using the vulnerability to track the location of thousands of mobile phone users in several countries. Further details of the research were provided at VirusBulletin on 3 October 2019.
Developments
When GSM was already in use, the specifications were further developed and enhanced with functionality such as SMS and GPRS. These development steps are referred to as releases by ETSI. Within these development cycles, the SIM specification was enhanced as well: new voltage classes, formats and files were introduced.
USIM
In GSM-only times, the SIM consisted of the hardware and the software. With the advent of UMTS, this naming was split: the SIM was now an application and hence only software. The hardware part was called UICC. This split was necessary because UMTS introduced a new application, the universal subscriber identity module (USIM). The USIM brought, among other things, security improvements like mutual authentication and longer encryption keys and an improved address book.
UICC
"SIM cards" in developed countries today are usually UICCs containing at least a SIM application and a USIM application. This configuration is necessary because older GSM only handsets are solely compatible with the SIM application and some UMTS security enhancements rely on the USIM application.
Other variants
On cdmaOne networks, the equivalent of the SIM card is the R-UIM and the equivalent of the SIM application is the CSIM.
A virtual SIM is a mobile phone number provided by a mobile network operator that does not require a SIM card to connect phone calls to a user's mobile phone.
Embedded-SIM (eSIM)
An embedded SIM (eSIM) is a form of programmable SIM that is embedded directly into a device. The surface mount format provides the same electrical interface as the full size, 2FF and 3FF SIM cards, but is soldered to a circuit board as part of the manufacturing process. In M2M applications where there is no requirement to change the SIM card, this avoids the requirement for a connector, improving reliability and security. An eSIM can be provisioned remotely; end-users can add or remove operators without the need to physically swap a SIM from the device.
Usage in mobile phone standards
The use of SIM cards is mandatory in GSM devices.
The satellite phone networks Iridium, Thuraya and Inmarsat's BGAN also use SIM cards. Sometimes, these SIM cards work in regular GSM phones and also allow GSM customers to roam in satellite networks by using their own SIM cards in a satellite phone.
Japan's 2G PDC system (which was shut down in 2012; SoftBank Mobile has already shut down PDC from 31 March 2010) also specifies a SIM, but this has never been implemented commercially. The specification of the interface between the Mobile Equipment and the SIM is given in the RCR STD-27 annexe 4. The Subscriber Identity Module Expert Group was a committee of specialists assembled by the European Telecommunications Standards Institute (ETSI) to draw up the specifications (GSM 11.11) for interfacing between smart cards and mobile telephones. In 1994, the name SIMEG was changed to SMG9.
Japan's current and next-generation cellular systems are based on W-CDMA (UMTS) and CDMA2000 and all use SIM cards. However, Japanese CDMA2000-based phones are locked to the R-UIM they are associated with and thus, the cards are not interchangeable with other Japanese CDMA2000 handsets (though they may be inserted into GSM/WCDMA handsets for roaming purposes outside Japan).
CDMA-based devices originally did not use a removable card, and the service for these phones is bound to a unique identifier contained in the handset itself. This is most prevalent in operators in the Americas. The first publication of the TIA-820 standard (also known as 3GPP2 C.S0023) in 2000 defined the Removable User Identity Module (R-UIM). Card-based CDMA devices are most prevalent in Asia.
The equivalent of a SIM in UMTS is called the universal integrated circuit card (UICC), which runs a USIM application. The UICC is still colloquially called a SIM card.
SIM and carriers
The SIM card introduced a new and significant business opportunity for MVNOs (mobile virtual network operators), which lease capacity from one of the network operators rather than owning or operating a cellular telecoms network, and only provide a SIM card to their customers. MVNOs first appeared in Denmark, Hong Kong, Finland and the UK. Today they exist in over 50 countries, including most of Europe, the United States, Canada, Mexico, Australia and parts of Asia, and account for approximately 10% of all mobile phone subscribers around the world.
On some networks, the mobile phone is locked to its carrier SIM card, meaning that the phone only works with SIM cards from the specific carrier. This is more common in markets where mobile phones are heavily subsidised by the carriers, and the business model depends on the customer staying with the service provider for a minimum term (typically 12, 18 or 24 months). SIM cards that are issued by providers with an associated contract are called SIM-only deals. Common examples are the GSM networks in the United States, Canada, Australia, the UK and Poland. Many businesses offer the ability to remove the SIM lock from a phone, effectively making it possible to then use the phone on any network by inserting a different SIM card. Mostly, GSM and 3G mobile handsets can easily be unlocked and used on any suitable network with any SIM card.
In countries where the phones are not subsidised, e.g., India, Israel and Belgium, all phones are unlocked. Where the phone is not locked to its SIM card, the users can easily switch networks by simply replacing the SIM card of one network with that of another while using only one phone. This is typical, for example, among users who may want to optimise their carrier's traffic by different tariffs to different friends on different networks, or when travelling internationally.
In 2016, carriers started using the concept of automatic SIM reactivation whereby they let users reuse expired SIM cards instead of purchasing new ones when they wish to re-subscribe to that operator. This is particularly useful in countries where prepaid calls dominate and where competition drives high churn rates, as users had to return to a carrier shop to purchase a new SIM each time they wanted to churn back to an operator.
SIM-only
Commonly sold as a product by mobile telecommunications companies, "SIM-only" refers to a type of legally binding contract between a mobile network provider and a customer. The contract itself takes the form of a credit agreement and is subject to a credit check.
Within a SIM-only contract, the mobile network provider supplies their customer with just one piece of hardware, a SIM card, which includes an agreed amount of network usage in exchange for a monthly payment. Network usage within a SIM-only contract can be measured in minutes, text, data or any combination of these. The duration of a SIM-only contract varies depending on the deal selected by the customer, but in the UK they are available over 1, 3, 6, and 12-month periods.
SIM-only contracts differ from mobile phone contracts in that they do not include any hardware other than a SIM card. In terms of network usage, SIM-only is typically more cost-effective than other contracts because the provider does not charge more to offset the cost of a mobile device over the contract period. The short contract length is one of the key features of SIM-only made possible by the absence of a mobile device.
SIM-only is increasing in popularity very quickly. In 2010, pay-monthly mobile phone subscriptions grew from 41 per cent to 49 per cent of all UK mobile phone subscriptions. According to German research company GfK, 250,000 SIM-only mobile contracts were taken up in the UK during July 2012 alone, the highest figure since GfK began keeping records.
Increasing smartphone penetration combined with financial concerns is leading customers to save money by moving onto a SIM-only when their initial contract term is over.
Multiple-SIM devices
Dual SIM devices have two SIM card slots for the use of two SIM cards, from one or multiple carriers. Multiple SIM devices are commonplace in developing markets such as in Africa, East Asia, South Asia and Southeast Asia, where variable billing rates, network coverage and speed make it desirable for consumers to use multiple SIMs from competing networks. Dual-SIM phones are also useful to separate one's personal phone number from a business phone number, without having to carry multiple devices. Some popular devices, such as the BlackBerry KeyOne, have dual-SIM variants; however, dual-SIM devices were not common in the US or Europe due to lack of demand. This has changed with mainline products from Apple and Google featuring either two SIM slots or a combination of a physical SIM slot and an eSIM.
Thin SIM
A thin SIM (or overlay SIM or SIM overlay) is a very thin device shaped like a SIM card, approximately 120 microns thick. It has contacts on its front and back. It is used by sticking it on top of a regular SIM card. It provides its own functionality while passing through the functionality of the SIM card underneath. It can be used to bypass the mobile operating network and run custom applications, particularly on non-programmable cell phones.
Its top surface is a connector that connects to the phone in place of the normal SIM. Its bottom surface is a connector that connects to the SIM in place of the phone. With electronics, it can modify signals in either direction, thus presenting a modified SIM to the phone, and/or presenting a modified phone to the SIM. It is a similar concept to the Game Genie, which connects between a game console and a game cartridge, creating a modified game. Similar devices have also been developed for iPhones to circumvent SIM card restrictions on carrier-locked models.
In 2014, Equitel, an MVNO operated by Kenya's Equity Bank, announced its intention to begin issuing thin SIMs to customers, raising security concerns among competitors, particularly concerning the safety of mobile money accounts. However, after months of security testing and legal hearings before the country's Parliamentary Committee on Energy, Information and Communications, the Communications Authority of Kenya (CAK) gave the bank the green light to roll out its thin SIM cards.
See also
Apple SIM
GSM 03.48
International Mobile Equipment Identity (IMEI)
IP Multimedia Services Identity Module (ISIM)
Mobile broadband
Mobile equipment identifier (MEID)
Mobile signature
Regional lockout
SIM cloning
SIM connector
Single Wire Protocol (SWP)
Tethering
Transponder
GSM USSD codes Unstructured Supplementary Service Data: list of standard GSM codes for network and SIM related functions
VMAC
W-SIM (Willcom-SIM)
References
External links
GSM 11.11 – Specification of the Subscriber Identity Module-Mobile Equipment (SIM-ME) interface.
GSM 11.14 – Specification of the SIM Application Toolkit for the Subscriber Identity Module-Mobile Equipment (SIM-ME) interface
GSM 03.48 – Specification of the security mechanisms for SIM application toolkit
GSM 03.48 Java API – API and realization of GSM 03.48 in Java
ITU-T E.118 – The International Telecommunication Charge Card 2006 ITU-T
German inventions
Mobile phone standards
Cryptographic hardware
Smart cards
Computer access control |
198222 | https://en.wikipedia.org/wiki/Mo | Mo | Mo or MO may refer to:
Arts and entertainment
Fictional characters
Mo, a girl in the Horrible Histories TV series
Mo, also known as Mortimer, in the novel Inkheart by Cornelia Funke
Mo, in the webcomic Jesus and Mo
Mo, the main character in the Mo's Mischief children's book series
Mo, an ophthalmosaurus from The Land Before Time franchise
MO (Maintenance Operator), a robot in the Filmation series Young Sentinels
Mo, a main character in Zoey's Extraordinary Playlist
M-O (Microbe Obliterator), a robot in film WALL-E
Mo the clown, a character played by Roy Rene, 20th-century Australian stage comedian
Mo Effanga, in the BBC medical drama series Holby City
Mo Harris, in the BBC soap opera EastEnders
Little Mo Mitchell, in the BBC soap opera EastEnders
Films
"Mo" (魔 demon), original title of The Boxer's Omen, a 1983 Hong Kong film
Mo (2010 film), a television movie about British politician Mo Mowlam
Mo (2016 film), a Tamil horror film
Music
M.O. (album), a 2013 album by American hip hop artist Nelly
M.O, an English pop trio
The MO, a Dutch pop band
Mo Awards, annual awards for Australian live entertainment
Yamaha MO, a music synthesizer
Other arts and entertainment
Mo (Oz), a fictional country in the book The Magical Monarch of Mo by L. Frank Baum
Businesses and organizations
Altria Group, formerly Philip Morris (New York Stock Exchange symbol MO)
Calm Air (IATA airline designator MO), an airline based in Thompson, Manitoba, Canada
Milicja Obywatelska, a state police institution in Poland from 1944 to 1990
Language
Mo (kana), Romanisation of the Japanese kana も and モ
Mo language (disambiguation)
Moldavian language (deprecated ISO 639-1 language code "mo")
People
Mo (given name)
Emperor Mo (disambiguation), the posthumous name of various Chinese emperors
Mo (Chinese surname)
Mo (Korean surname)
MØ, Danish singer songwriter Karen Marie Ørsted (born 1988)
Mr. Mo (rapper), rapper, member of the group Jim Crow
Mr. Mo (singer), member of the Danish band Kaliber
Mo Twister, Filipino radio DJ and TV host Mohan Gumatay (born 1977)
Mo Hayder, a pen name of British crime novelist Beatrice Clare Dunkel (1962–2021)
Mo (wrestler), ring name of Robert Horne (born 1964), professional wrestler
Places
Norway
Mo i Rana, a town in Rana municipality, Nordland county
Mo, Agder, a village in Vegårshei municipality, Agder county
Mo, Møre og Romsdal, a village in Surnadal municipality, Møre og Romsdal county
Mo, Norway, a village in Nord-Odal municipality, Innlandet county
Mo, Telemark, a former municipality in the old Telemark county
Mo, Vestland, a village in Modalen municipality, Vestland county
Mo Church (disambiguation), a list of several churches by this name in Norway
Elsewhere
County Mayo, Ireland (vehicle plate code MO)
Macau (ISO 3166-1 alpha-2 country code MO)
Missouri, US (postal abbreviation)
Moscow Oblast, Russia
Province of Modena, Italy (vehicle plate code MO)
Religion
Mo (divination), a traditional Tibetan Buddhist technique of divination
Mo (religion), an animist religion of the Zhuang people of China
Modern Orthodox Judaism, a movement that attempts to synthesize Orthodox Jewish values with the secular world
Science and technology
Computing
.mo, country code top level domain of Macau
Magneto-optical drive (magneto-optical storage), a data storage medium
Microsoft Office, an office software suite
Mode of operation, in encryption block ciphers
Motivating operation, a term describing the effectiveness of consequences in operant conditioning
Other uses in science and technology
Mo (grist mill) (磨), ancient Chinese stone implements used to grind grain into flour
Magnus and Oberhettinger aka "Formulas and Theorems for the Functions of Mathematical Physics", a mathematics book on special functions
Manual override, a mechanism wherein control is taken from an automated system and given to the user
Metalorganics, also known as organometallics, in chemistry and materials science
Molecular orbital, a mathematical function describing the wave-like behavior of an electron in a molecule
Molybdenum (symbol Mo), a chemical element
Momentary open (MO), a group of electrical switches
Vehicles
MO-class small guard ship, a class of small ships produced before and during World War II for the Soviet Navy
Morris Oxford MO, an automobile produced by Morris Motors of the United Kingdom from 1948 to 1954
Other uses
Mo (Chinese zoology), a name that semantically changed from "giant panda", to "a mythical chimera", to "tapir"
Modus operandi (abbreviation m.o.), Latin meaning "mode of operation"; distinctive behavior patterns of an entity
Month (abbreviation mo.), a unit of time of approximately 30 days
Operation Mo, or the Port Moresby Operation, a Japanese plan to take the Australian Territory of New Guinea during World War II
See also
Meaux (disambiguation)
mho, in physics, the reciprocal of the "ohm" unit of resistance
Mø (disambiguation)
Mobile (disambiguation)
Moe (disambiguation)
Moe's (disambiguation)
Mohs (disambiguation)
Mow (disambiguation) |
198584 | https://en.wikipedia.org/wiki/Laptop | Laptop | A laptop, laptop computer, or notebook computer is a small, portable personal computer (PC) with a screen and alphanumeric keyboard. These typically have a clam shell form factor with the screen mounted on the inside of the upper lid and the keyboard on the inside of the lower lid, although 2-in-1 PCs with a detachable keyboard are often marketed as laptops or as having a laptop mode. Laptops are folded shut for transportation, and thus are suitable for mobile use. Its name comes from lap, as it was deemed practical to be placed on a person's lap when being used. Today, laptops are used in a variety of settings, such as at work, in education, for playing games, web browsing, for personal multimedia, and general home computer use.
As of 2021, in American English, the terms 'laptop computer' and 'notebook computer' are used interchangeably; in other dialects of English one or the other may be preferred. Although the terms 'notebook computers' or 'notebooks' originally referred to a specific size of laptop (originally smaller and lighter than mainstream laptops of the time), the terms have come to mean the same thing and notebook no longer refers to any specific size.
Laptops combine all the input/output components and capabilities of a desktop computer, including the display screen, small speakers, a keyboard, data storage device, sometimes an optical disc drive, pointing devices (such as a touch pad or pointing stick), with an operating system, a processor and memory into a single unit. Most modern laptops feature integrated webcams and built-in microphones, while many also have touchscreens. Laptops can be powered either from an internal battery or by an external power supply from an AC adapter. Hardware specifications, such as the processor speed and memory capacity, significantly vary between different types, models and price points.
Design elements, form factor and construction can also vary significantly between models depending on the intended use. Examples of specialized models of laptops include rugged notebooks for use in construction or military applications, as well as low production cost laptops such as those from the One Laptop per Child (OLPC) organization, which incorporate features like solar charging and semi-flexible components not found on most laptop computers. Portable computers, which later developed into modern laptops, were originally considered to be a small niche market, mostly for specialized field applications, such as in the military, for accountants, or traveling sales representatives. As portable computers evolved into modern laptops, they became widely used for a variety of purposes.
History
As the personal computer (PC) became feasible in 1971, the idea of a portable personal computer soon followed. A "personal, portable information manipulator" was imagined by Alan Kay at Xerox PARC in 1968, and described in his 1972 paper as the "Dynabook". The IBM Special Computer APL Machine Portable (SCAMP) was demonstrated in 1973. This prototype was based on the IBM PALM processor. The IBM 5100, the first commercially available portable computer, appeared in September 1975, and was based on the SCAMP prototype.
As 8-bit CPU machines became widely accepted, the number of portables increased rapidly. The first "laptop-sized notebook computer" was the Epson HX-20, invented (patented) by Suwa Seikosha's Yukio Yokozawa in July 1980, introduced at the COMDEX computer show in Las Vegas by Japanese company Seiko Epson in 1981, and released in July 1982. It had an LCD screen, a rechargeable battery, and a calculator-size printer, in a chassis the size of an A4 notebook. It was described as a "laptop" and "notebook" computer in its patent.
The portable microcomputer Portal, of the French company R2E Micral CCMC, officially appeared in September 1980 at the Sicob show in Paris. It was designed and marketed by the studies and developments department of R2E Micral at the request of the company CCMC, which specialized in payroll and accounting. It was based on an 8-bit Intel 8085 processor clocked at 2 MHz, and was equipped with 64 KB of central RAM, a keyboard with 58 alphanumeric keys and 11 numeric keys (in separate blocks), a 32-character screen, a floppy disk drive with a capacity of 140,000 characters, a thermal printer with a speed of 28 characters per second, an asynchronous channel, and a 220 V power supply. It weighed 12 kg and measured 45 × 45 × 15 cm. It provided total mobility. Its operating system was aptly named Prologue.
The Osborne 1, released in 1981, was a luggable computer that used the Zilog Z80 and weighed . It had no battery, a cathode ray tube (CRT) screen, and dual single-density floppy drives. Both Tandy/RadioShack and Hewlett-Packard (HP) also produced portable computers of varying designs during this period. The first laptops using the flip form factor appeared in the early 1980s. The Dulmont Magnum was released in Australia in 1981–82, but was not marketed internationally until 1984–85. The US$8,150 GRiD Compass 1101, released in 1982, was used at NASA and by the military, among others. The Sharp PC-5000, Ampere, and Gavilan SC were released in 1983. The Gavilan SC was described as a "laptop" by its manufacturer, while the Ampere had a modern clamshell design. The Toshiba T1100 won acceptance not only among PC experts but also in the mass market as a way to have PC portability.
From 1983 onward, several new input techniques were developed and included in laptops, including the touchpad (Gavilan SC, 1983), the pointing stick (IBM ThinkPad 700, 1992), and handwriting recognition (Linus Write-Top, 1987). Some CPUs, such as the 1990 Intel i386SL, were designed to use minimal power to increase the battery life of portable computers, and were supported in some designs by dynamic power management features such as Intel SpeedStep and AMD PowerNow!.
Displays reached 640x480 (VGA) resolution by 1988 (Compaq SLT/286), and color screens started becoming a common upgrade in 1991, with increases in resolution and screen size occurring frequently until the introduction of 17" screen laptops in 2003. Hard drives started to be used in portables, encouraged by the introduction of 3.5" drives in the late 1980s, and became common in laptops starting with the introduction of 2.5" and smaller drives around 1990; capacities have typically lagged behind those of physically larger desktop drives.
Laptop webcams commonly have 720p (HD) resolution, with 480p still found in lower-end models. The earliest known laptops with 1080p (Full HD) webcams, such as the Samsung 700G7C, were released in the early 2010s.
Optical disc drives became common in full-size laptops around 1997; this initially consisted of CD-ROM drives, which were supplanted by CD-R, DVD, and Blu-ray drives with writing capability over time. Starting around 2011, the trend shifted against internal optical drives, and as of 2021, they have largely disappeared; they are still readily available as external peripherals.
Etymology
While the terms laptop and notebook are used interchangeably today, there is some question as to the original etymology and specificity of either term. The term laptop appears to have been coined in the early 1980s to describe a mobile computer that could be used on one's lap, and to distinguish these devices from earlier, much heavier portable computers (informally called "luggables"). The term notebook appears to have gained currency somewhat later, as manufacturers started producing even smaller portable devices, further reducing their weight and size and incorporating a display roughly the size of A4 paper; these were marketed as notebooks to distinguish them from bulkier mainstream or desktop-replacement laptops.
Types
Since the introduction of portable computers during the late 1970s, their form has changed significantly, spawning a variety of visually and technologically differing subclasses. Except where there is a distinct legal trademark around a term (notably, Ultrabook), there are rarely hard distinctions between these classes and their usage has varied over time and between different sources. Since the late 2010s, the use of more specific terms has become less common, with sizes distinguished largely by the size of the screen.
Smaller and larger laptops
There were in the past a number of marketing categories for smaller and larger laptop computers. These included "subnotebook" models; low-cost "netbooks"; "ultra-mobile PCs", whose size class overlapped with devices like smartphones and handheld tablets; and "desktop replacement" laptops, machines notably larger and heavier than typical in order to accommodate more powerful processors or graphics hardware. All of these terms have fallen out of favor as the size of mainstream laptops has gone down and their capabilities have gone up; except for niche models, laptop sizes tend to be distinguished by the size of the screen, and, for more powerful models, by any specialized purpose the machine is intended for, such as a "gaming laptop" or a "mobile workstation" for professional use.
Convertible, hybrid, 2-in-1
The latest trend of technological convergence in the portable computer industry spawned a broad range of devices, which combined features of several previously separate device types. The hybrids, convertibles, and 2-in-1s emerged as crossover devices, which share traits of both tablets and laptops. All such devices have a touchscreen display designed to allow users to work in a tablet mode, using either multi-touch gestures or a stylus/digital pen.
Convertibles are devices with the ability to conceal a hardware keyboard. Keyboards on such devices can be flipped, rotated, or slid behind the back of the chassis, thus transforming the device from a laptop into a tablet. Hybrids have a keyboard detachment mechanism, and due to this feature, all critical components are situated in the part with the display. 2-in-1s can have a hybrid or a convertible form, often dubbed 2-in-1 detachables and 2-in-1 convertibles respectively, but are distinguished by the ability to run a desktop OS, such as Windows 10. 2-in-1s are often marketed as laptop replacement tablets.
2-in-1s are often very thin, around , and light devices with a long battery life. 2-in-1s are distinguished from mainstream tablets as they feature an x86-architecture CPU (typically a low- or ultra-low-voltage model), such as the Intel Core i5, run a full-featured desktop OS like Windows 10, and have a number of typical laptop I/O ports, such as USB 3 and Mini DisplayPort.
2-in-1s are designed to be used not only as a media consumption device but also as valid desktop or laptop replacements, due to their ability to run desktop applications, such as Adobe Photoshop. It is possible to connect multiple peripheral devices, such as a mouse, keyboard, and several external displays to a modern 2-in-1.
Microsoft Surface Pro-series devices and the Surface Book are examples of modern 2-in-1 detachables, whereas Lenovo Yoga-series computers are a variant of 2-in-1 convertibles. While the older Surface RT and Surface 2 have the same chassis design as the Surface Pro, their use of ARM processors and Windows RT classifies them not as 2-in-1s but as hybrid tablets. Similarly, a number of hybrid laptops run a mobile operating system, such as Android. These include Asus's Transformer Pad devices, examples of hybrids with a detachable keyboard design, which do not fall into the category of 2-in-1s.
Rugged laptop
A rugged laptop is designed to reliably operate in harsh usage conditions such as strong vibrations, extreme temperatures, and wet or dusty environments. Rugged laptops are bulkier, heavier, and much more expensive than regular laptops, and thus are seldom seen in regular consumer use.
Hardware
The basic components of laptops function identically to their desktop counterparts. Traditionally they were miniaturized and adapted to mobile use, although desktop systems increasingly use the same smaller, lower-power parts which were originally developed for mobile use. The design restrictions on power, size, and cooling of laptops limit the maximum performance of laptop parts compared to that of desktop components, although that difference has increasingly narrowed.
In general, laptop components are not intended to be replaceable or upgradable by the end-user, except for components that can be detached; in the past, batteries and optical drives were commonly exchangeable. This restriction is one of the major differences between laptops and desktop computers, because the large "tower" cases used in desktop computers are designed so that new motherboards, hard disks, sound cards, RAM, and other components can be added. Memory and storage can often be upgraded with some disassembly, but with the most compact laptops, there may be no upgradeable components at all.
Intel, Asus, Compal, Quanta, and some other laptop manufacturers have created the Common Building Block standard for laptop parts to address some of the inefficiencies caused by the lack of standards and inability to upgrade components.
The following sections summarize the differences and distinguishing features of laptop components in comparison to desktop personal computer parts.
Display
Internally, a display is usually an LCD panel, although occasionally OLEDs are used; these interface to the laptop using the LVDS or embedded DisplayPort protocol. Externally, the screen can have either a glossy or a matte (anti-glare) finish. As of 2021, mainstream consumer laptops tend to come with either 13" or 15"-16" screens; 14" models are more popular among business machines. Larger and smaller models are available, but less common – there is no clear dividing line in minimum or maximum size. Machines small enough to be handheld (screens in the 6–8" range) can be marketed either as very small laptops or as "handheld PCs," while the distinction between the largest laptops and "All-in-One" desktops is whether they fold for travel.
Sizes
In the past, there was a broader range of marketing terms (both formal and informal) to distinguish between different sizes of laptops. These included Netbooks, subnotebooks, Ultra-mobile PC, and Desktop replacement computers; these are sometimes still used informally, although they are essentially dead in terms of manufacturer marketing.
Resolution
Having a higher-resolution display allows more items to fit onscreen at a time, improving the user's ability to multitask, although at higher resolutions on smaller screens, the extra resolution may only serve to display sharper graphics and text rather than increasing the usable area. Since the introduction of the MacBook Pro with Retina display in 2012, there has been an increase in the availability of "HiDPI" (high pixel density) displays; as of 2021, this is generally considered to be anything higher than 1920 pixels wide. This has increasingly converged around 4K (3840-pixel-wide) resolutions.
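The pixel-density figure usually quoted is pixels per inch (PPI): the diagonal resolution in pixels divided by the diagonal screen size in inches. The short Python sketch below illustrates the calculation; the panel size and resolutions are illustrative examples, not a claim about any particular model.

```python
import math

def pixels_per_inch(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixel density: diagonal resolution in pixels divided by diagonal size in inches."""
    diagonal_px = math.sqrt(width_px ** 2 + height_px ** 2)
    return diagonal_px / diagonal_inches

# Comparing a common 15.6" panel at 1920x1080 with the same size at 4K (3840x2160):
print(round(pixels_per_inch(1920, 1080, 15.6)))  # ~141 PPI
print(round(pixels_per_inch(3840, 2160, 15.6)))  # ~282 PPI
```

At the same screen size, doubling the resolution in each dimension doubles the pixel density, which is why 4K panels on typical laptop screens fall squarely into "HiDPI" territory.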
External displays can be connected to most laptops, and models with a Mini DisplayPort can handle up to three.
Refresh rates and 3D
The earliest laptops known to feature a display with a doubled refresh rate of 120 Hz and an active shutter 3D system were released in 2011 by Dell (M17x) and Samsung (700G7A).
Central processing unit
A laptop's central processing unit (CPU) has advanced power-saving features and produces less heat than one intended purely for desktop use. Mainstream laptop CPUs made after 2018 have four processor cores, although some inexpensive models still have 2-core CPUs, and 6-core and 8-core models are also available.
At low and mainstream price and performance levels, there is no longer a significant performance difference between laptop and desktop CPUs; at the high end, however, the fastest desktop CPUs still substantially outperform the fastest laptop processors, at the expense of massively higher power consumption and heat generation: the fastest laptop processors top out at 56 watts of heat, while the fastest desktop processors top out at 150 watts.
A wide range of CPUs designed for laptops has been available from Intel, AMD, and other manufacturers. On non-x86 architectures, Motorola and IBM produced the chips for the former PowerPC-based Apple laptops (iBook and PowerBook). Between around 2000 and 2014, most full-size laptops had socketed, replaceable CPUs; on thinner models, the CPU was soldered to the motherboard and was not replaceable or upgradable without replacing the motherboard. Since 2015, Intel has not offered new laptop CPU models with interchangeable pins, preferring ball grid array chip packages which have to be soldered; as of 2021, only a few rare models use socketed desktop parts.
In the past, some laptops have used a desktop processor instead of the laptop version, gaining higher performance at the cost of greater weight, heat, and limited battery life; this is not unknown as of 2021, but since around 2010, the practice has been restricted to small-volume gaming models. Laptop CPUs are rarely able to be overclocked; most use locked processors. Even on gaming models where unlocked processors are available, the cooling system in most laptops is often very close to its limits, and there is rarely headroom for an overclocking-related operating temperature increase.
Graphics processing unit
On most laptops, a graphics processing unit (GPU) is integrated into the CPU to conserve power and space. This was introduced by Intel with the Core i-series of mobile processors in 2010, and by AMD with similar accelerated processing unit (APU) processors later that year.
Before that, lower-end machines tended to use graphics processors integrated into the system chipset, while higher-end machines had a separate graphics processor. In the past, laptops lacking a separate graphics processor were limited in their utility for gaming and professional applications involving 3D graphics, but the capabilities of CPU-integrated graphics have converged with the low-end of dedicated graphics processors since the mid-2010s.
Higher-end laptops intended for gaming or professional 3D work still come with dedicated, and in some cases even dual, graphics processors on the motherboard or as an internal expansion card. Since 2011, these almost always involve switchable graphics, so that when there is no demand for the higher-performance dedicated graphics processor, the more power-efficient integrated graphics processor is used. Nvidia Optimus and AMD Hybrid Graphics are examples of this sort of switchable-graphics system.
Memory
Since around the year 2000, most laptops have used SO-DIMM RAM, although, as of 2021, an increasing number of models use memory soldered to the motherboard. Before 2000, most laptops used proprietary memory modules if their memory was upgradable.
In the early 2010s, high end laptops such as the 2011 Samsung 700G7A have passed the 10 GB RAM barrier, featuring 16 GB of RAM.
When upgradeable, memory slots are sometimes accessible from the bottom of the laptop for ease of upgrading; in other cases, accessing them requires significant disassembly. Most laptops have two memory slots, although some will have only one, either for cost savings or because some amount of memory is soldered. Some high-end models have four slots; these are usually mobile engineering workstations, although a few high-end models intended for gaming do as well.
As of 2021, 8 GB RAM is most common, with lower-end models occasionally having 4 GB. Higher-end laptops may come with 16 GB of RAM or more.
Internal storage
The earliest laptops most often used floppy disks for storage, although a few used either RAM disks or tape; by the late 1980s, hard disk drives had become the standard form of storage.
Between 1990 and 2009, almost all laptops typically had a hard disk drive (HDD) for storage; since then, solid-state drives (SSD) have gradually come to supplant hard drives in all but some inexpensive consumer models. Solid-state drives are faster and more power-efficient, as well as eliminating the hazard of drive and data corruption caused by a laptop's physical impacts, as they use no mechanical parts such as a rotational platter. In many cases, they are more compact as well. Initially, in the late 2000s, SSDs were substantially more expensive than HDDs, but as of 2021 prices on smaller capacity (under 1 terabyte) drives have converged; larger capacity drives remain more expensive than comparable-sized HDDs.
Since around 1990, where a hard drive is present it will typically be a 2.5-inch drive; some very compact laptops support even smaller 1.8-inch HDDs, and a very small number used 1" Microdrives. Some SSDs are built to match the size/shape of a laptop hard drive, but increasingly they have been replaced with smaller mSATA or M.2 cards. SSDs using the newer and much faster NVM Express standard for connecting are only available as cards.
As of 2021, many laptops no longer contain space for a 2.5" drive, accepting only M.2 cards; a few of the smallest have storage soldered to the motherboard. Those that do retain a drive bay typically hold a single 2.5-inch drive, though a small number of laptops with screens wider than 15 inches can house two drives.
A variety of external HDDs or NAS data storage servers with support of RAID technology can be attached to virtually any laptop over such interfaces as USB, FireWire, eSATA, or Thunderbolt, or over a wired or wireless network to further increase space for the storage of data. Many laptops also incorporate a card reader which allows for use of memory cards, such as those used for digital cameras, which are typically SD or microSD cards. This enables users to download digital pictures from an SD card onto a laptop, thus enabling them to delete the SD card's contents to free up space for taking new pictures.
Removable media drive
Optical disc drives capable of playing CD-ROMs, compact discs (CD), DVDs, and in some cases, Blu-ray discs (BD), were nearly universal on full-sized models between the mid-1990s and the early 2010s. As of 2021, drives are uncommon in compact or premium laptops; they remain available in some bulkier models, but the trend towards thinner and lighter machines is gradually eliminating these drives and players – when needed they can be connected via USB instead.
Inputs
An alphanumeric keyboard is used to enter text, data, and other commands (e.g., function keys), while a touchpad (also called a trackpad), a pointing stick, or both are used to control the position of the cursor on the screen. Some touchpads have buttons separate from the touch surface, while others share the surface. A quick double-tap is typically registered as a click, and operating systems may recognize multi-finger touch gestures.
An external keyboard and mouse may be connected using a USB port or wirelessly, via Bluetooth or similar technology. Some laptops have multitouch touchscreen displays, either available as an option or standard. Most laptops have webcams and microphones, which can be used to communicate with other people with both moving images and sound, via web conferencing or video-calling software.
Laptops typically have USB ports and a combined headphone/microphone jack, for use with headphones, a combined headset, or an external mic. Many laptops have a card reader for reading digital camera SD cards.
Input/output (I/O) ports
On a typical laptop there are several USB ports; laptops that use only the older USB Type-A connectors instead of USB-C typically also have an external monitor port (VGA, DVI, HDMI, or Mini DisplayPort, occasionally more than one), and an audio in/out port (often in the form of a single socket) is common. It is possible to connect up to three external displays to a 2014-era laptop via a single Mini DisplayPort, using multi-stream transport technology.
Apple, in a 2015 version of its MacBook, transitioned from a number of different I/O ports to a single USB-C port. This port can be used both for charging and for connecting a variety of devices through the use of aftermarket adapters. Google, with its updated version of the Chromebook Pixel, shows a similar transition trend towards USB-C, although it keeps older USB Type-A ports for better compatibility with older devices. Although common until the end of the 2000s, Ethernet network ports are rarely found on modern laptops, due to the widespread use of wireless networking such as Wi-Fi. Legacy ports such as a PS/2 keyboard/mouse port, serial port, parallel port, or FireWire are provided on some models, but they are increasingly rare. On Apple's systems, and on a handful of other laptops, there are also Thunderbolt ports; Thunderbolt 3 uses the USB-C connector. Laptops typically have a headphone jack so that the user can connect external headphones or amplified speaker systems for listening to music or other audio.
Expansion cards
In the past, a PC Card (formerly PCMCIA) or ExpressCard slot for expansion was often present on laptops to allow adding and removing functionality, even when the laptop is powered on; these are becoming increasingly rare since the introduction of USB 3.0. Some internal subsystems such as Ethernet, Wi-Fi, or a wireless cellular modem can be implemented as replaceable internal expansion cards, usually accessible under an access cover on the bottom of the laptop. The standard for such cards is PCI Express, which comes in both mini and even smaller M.2 sizes. In newer laptops, it is not uncommon to also see Micro SATA (mSATA) functionality on PCI Express Mini or M.2 card slots allowing the use of those slots for SATA-based solid-state drives.
Battery and power supply
Since the late 1990s, laptops have typically used lithium-ion or lithium-polymer batteries. These replaced the older nickel–metal hydride batteries typically used in the 1990s and the nickel–cadmium batteries used in most of the earliest laptops. A few of the oldest laptops used non-rechargeable or lead–acid batteries.
Battery life is highly variable by model and workload and can range from one hour to nearly a day. A battery's performance gradually decreases over time; a substantial reduction in capacity is typically evident after one to three years of regular use, depending on the charging and discharging pattern and the design of the battery. Some laptop and battery designs can provide up to 24 hours of continuous operation, assuming average power consumption levels; an example is the HP EliteBook 6930p when used with its ultra-capacity battery.
Laptops with removable batteries may support larger replacement batteries with extended capacity.
A laptop's battery is charged using an external power supply, which is plugged into a wall outlet. The power supply outputs a DC voltage typically in the range of 7.2–24 volts. The power supply is usually external and connected to the laptop through a DC connector cable. In most cases, it can charge the battery and power the laptop simultaneously. When the battery is fully charged, the laptop continues to run on power supplied by the external power supply, avoiding battery use. If the power supply used is not strong enough to power the computing components and charge the battery simultaneously, the battery may charge more quickly if the laptop is turned off or sleeping. The charger typically adds about to the overall transporting weight of a laptop, although some models are substantially heavier or lighter. Most 2016-era laptops use a smart battery, a rechargeable battery pack with a built-in battery management system (BMS). The smart battery can internally measure voltage and current, and deduce charge level and State of Health (SoH) parameters, indicating the state of the cells.
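The charge-level and State of Health figures a smart battery reports can be approximated by simple coulomb counting: integrating the measured current over time and comparing the result against the pack's capacity. The Python sketch below is a minimal illustration of that bookkeeping; the class, field names, and capacity figures are assumptions made for the example, not any vendor's BMS firmware.

```python
from dataclasses import dataclass

@dataclass
class SmartBattery:
    """Minimal coulomb-counting model of a smart battery's bookkeeping."""
    design_capacity_mah: float       # capacity when the pack was new
    full_charge_capacity_mah: float  # usable capacity as the cells age
    remaining_mah: float             # charge currently stored

    def step(self, current_ma: float, dt_hours: float) -> None:
        # Integrate measured current over time; negative current = discharge.
        self.remaining_mah += current_ma * dt_hours
        self.remaining_mah = min(max(self.remaining_mah, 0.0),
                                 self.full_charge_capacity_mah)

    def state_of_charge(self) -> float:
        return self.remaining_mah / self.full_charge_capacity_mah

    def state_of_health(self) -> float:
        return self.full_charge_capacity_mah / self.design_capacity_mah

battery = SmartBattery(5000, 4200, 2100)
battery.step(current_ma=-1500, dt_hours=0.5)   # 30 minutes at a 1.5 A draw
print(f"Charge level:    {battery.state_of_charge():.0%}")  # ~32%
print(f"State of Health: {battery.state_of_health():.0%}")  # 84%
```

Real battery controllers refine this basic accounting with voltage and temperature measurements, but the ratio of remaining charge to full-charge capacity (SoC) and of full-charge capacity to design capacity (SoH) is the core of what the pack reports.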
Power connectors
Historically, laptops have used DC connectors, typically cylindrical/barrel-shaped coaxial power connectors. Some vendors, such as Lenovo, have made intermittent use of a rectangular connector.
Some connector heads feature a center pin that allows the end device to determine the power supply type by measuring the resistance between it and the connector's negative pole (outer surface). Vendors may block charging if a power supply is not recognized as an original part, which can deny the legitimate use of universal third-party chargers.
With the advent of USB-C, portable electronics made increasing use of it for both power delivery and data transfer. Its support for 20 V (a common laptop power supply voltage) at up to 5 A typically suffices for low- to mid-range laptops, but some machines with higher power demands, such as gaming laptops, depend on dedicated DC connectors to handle currents beyond 5 A, some even above 10 A, without risking overheating. Additionally, dedicated DC connectors are more durable and less prone to wear and tear from frequent reconnection, as their design is less delicate.
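The 5 A ceiling can be made concrete with the relation I = P / V: at the common 20 V supply voltage, any laptop drawing more than 100 W needs more than 5 A. The short Python sketch below runs that arithmetic for a few hypothetical power draws (the wattage figures are illustrative assumptions, not specifications of particular models).

```python
def required_current_amps(power_watts: float, volts: float = 20.0) -> float:
    """Current a supply must deliver at a given voltage (I = P / V)."""
    return power_watts / volts

# Hypothetical power draws, assuming the common 20 V supply voltage:
for watts in (65, 100, 180, 230):
    amps = required_current_amps(watts)
    verdict = "within" if amps <= 5 else "exceeds"
    print(f"{watts:3d} W -> {amps:4.1f} A ({verdict} the 5 A USB-C limit)")
```

A 180 W gaming laptop, for example, would need 9 A at 20 V, which is why such machines retain dedicated DC connectors.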
Cooling
Waste heat from the operation is difficult to remove in the compact internal space of a laptop. The earliest laptops used passive cooling; this gave way to heat sinks placed directly on the components to be cooled, but when these hot components are deep inside the device, a large space-wasting air duct is needed to exhaust the heat. Modern laptops instead rely on heat pipes to rapidly move waste heat towards the edges of the device, to allow for a much smaller and compact fan and heat sink cooling system. Waste heat is usually exhausted away from the device operator towards the rear or sides of the device. Multiple air intake paths are used since some intakes can be blocked, such as when the device is placed on a soft conforming surface like a chair cushion. Secondary device temperature monitoring may reduce performance or trigger an emergency shutdown if it is unable to dissipate heat, such as if the laptop were to be left running and placed inside a carrying case. Aftermarket cooling pads with external fans can be used with laptops to reduce operating temperatures.
Docking station
A docking station (sometimes referred to simply as a dock) is a laptop accessory that contains multiple ports and, in some cases, expansion slots or bays for fixed or removable drives. A laptop connects to and disconnects from a docking station, typically through a single large proprietary connector. Docking stations are especially popular in corporate computing environments because they can transform a laptop into a full-featured desktop replacement while still allowing its easy release. This ability can be advantageous to "road warrior" employees who have to travel frequently for work yet also come into the office. If more ports are needed, or their position on a laptop is inconvenient, one can use a cheaper passive device known as a port replicator. These devices mate to the connectors on the laptop, such as through USB or FireWire.
Charging trolleys
Laptop charging trolleys, also known as laptop trolleys or laptop carts, are mobile storage containers used to charge multiple laptops, netbooks, and tablet computers at the same time. The trolleys are used in schools that have replaced their traditional static computer labs of desktop "tower" computers but do not have enough plug sockets in an individual classroom to charge all of the devices. The trolleys can be wheeled between rooms and classrooms so that all students and teachers in a particular building can access fully charged IT equipment.
Laptop charging trolleys are also used to deter and protect against opportunistic and organized theft. Schools, especially those with open-plan designs, are often prime targets for thieves who steal high-value items. Laptops, netbooks, and tablets are among the highest-value portable items in a school. Moreover, laptops can easily be concealed under clothing and stolen from buildings. Many types of laptop-charging trolleys are designed and constructed to protect against theft. They are generally made of steel, and the laptops remain locked up while not in use. Although the trolleys can be moved between areas from one classroom to another, they can often be mounted or locked to the floor or walls to prevent thieves from stealing the laptops, especially overnight.
Solar panels
In some laptops, solar panels are able to generate enough solar power for the laptop to operate. The One Laptop Per Child Initiative released the OLPC XO-1 laptop which was tested and successfully operated by use of solar panels. Presently, they are designing an OLPC XO-3 laptop with these features. The OLPC XO-3 can operate with 2 watts of electricity because its renewable energy resources generate a total of 4 watts. Samsung has also designed the NC215S solar–powered notebook that will be sold commercially in the U.S. market.
Accessories
A common accessory for laptops is a laptop sleeve, laptop skin, or laptop case, which provides a degree of protection from scratches. Sleeves, which are distinguished by being relatively thin and flexible, are most commonly made of neoprene, with sturdier ones made of low-resilience polyurethane. Some laptop sleeves are wrapped in ballistic nylon to provide some measure of waterproofing. Bulkier and sturdier cases can be made of metal with polyurethane padding inside and may have locks for added security. Metal, padded cases also offer protection against impacts and drops. Another common accessory is a laptop cooler, a device that helps lower the internal temperature of the laptop either actively or passively. A common active method involves using electric fans to draw heat away from the laptop, while a passive method might involve propping the laptop up on some type of pad so it can receive more airflow. Some stores sell laptop pads that enable a reclining person on a bed to use a laptop.
Modularity
In earlier models of laptops, some components, such as the keyboard, battery, hard disk, memory modules, and CPU cooling fan, could easily be replaced without completely opening the bottom of the case.
In recent models, these components reside inside the chassis; replacing most of them requires removing the top or bottom cover, and sometimes the motherboard, and then reassembling the machine.
In some types, solder and glue are used to mount components such as RAM, storage, and batteries, making repairs additionally difficult.
Obsolete features
Features that certain early models of laptops used to have that are not available in most current laptops include:
Reset ("cold restart") button in a hole (needed a thin metal tool to press)
Instant power off button in a hole (needed a thin metal tool to press)
Integrated charger or power adapter inside the laptop
Floppy disk drive
Serial port
Parallel port
Modem
Shared PS/2 input device port
IrDA
S-video port
S/PDIF audio port
PC Card / PCMCIA slot
ExpressCard slot
CD/DVD Drives (starting with 2013 models)
VGA port (starting with 2013 models)
Comparison with desktops
Advantages
Portability is usually the first feature mentioned in any comparison of laptops versus desktop PCs. Physical portability allows a laptop to be used in many places—not only at home and the office but also during commuting and flights, in coffee shops, in lecture halls and libraries, at clients' locations or a meeting room, etc. Within a home, portability enables laptop users to move their devices from the living room to the dining room to the family room. Portability offers several distinct advantages:
Productivity: Using a laptop in places where a desktop PC cannot be used can help employees and students increase their productivity on work or school tasks, such as an office worker reading work e-mails during an hour-long commute by train, or a student doing homework at the university coffee shop during a break between lectures.
Immediacy: Carrying a laptop means having instant access to information, including personal and work files. This allows better collaboration between coworkers or students, as a laptop can be flipped open to look at a report, document, spreadsheet, or presentation anytime and anywhere.
Up-to-date information: If a person has more than one desktop PC, a problem of synchronization arises: changes made on one computer are not automatically propagated to the others. There are ways to resolve this problem, including physical transfer of updated files (using a USB flash memory stick or CD-ROMs) or using synchronization software over the Internet, such as cloud computing. However, transporting a single laptop to both locations avoids the problem entirely, as the files exist in a single location and are always up-to-date.
Connectivity: In the 2010s, a proliferation of Wi-Fi wireless networks and cellular broadband data services (HSDPA, EVDO and others) in many urban centers, combined with near-ubiquitous Wi-Fi support by modern laptops meant that a laptop could now have easy Internet and local network connectivity while remaining mobile. Wi-Fi networks and laptop programs are especially widespread at university campuses.
Other advantages of laptops:
Size: Laptops are smaller than desktop PCs. This is beneficial when space is at a premium, for example in small apartments and student dorms. When not in use, a laptop can be closed and put away in a desk drawer.
Low power consumption: Laptops are several times more power-efficient than desktops. A typical laptop uses 20–120 W, compared to 100–800 W for desktops. This can be particularly beneficial for large businesses, which run hundreds of personal computers, multiplying the potential savings, and for homes where a computer runs 24/7 (such as a home media server, print server, etc.); a rough annual-energy comparison appears after this list.
Quiet: Laptops are typically much quieter than desktops, due both to the components (quieter, slower 2.5-inch hard drives) and to less heat production leading to the use of fewer and slower cooling fans.
Battery: A charged laptop can continue to be used in case of a power outage and is not affected by short power interruptions and blackouts. A desktop PC needs an uninterruptible power supply (UPS) to handle short interruptions, blackouts, and spikes; achieving on-battery time of more than 20–30 minutes for a desktop PC requires a large and expensive UPS.
All-in-One: Designed to be portable, most 2010-era laptops have all components integrated into the chassis (however, some small laptops may not have an internal CD/CDR/DVD drive, so an external drive needs to be used). For desktops (excluding all-in-ones), this is usually divided into the desktop "tower" (the unit with the CPU, hard drive, power supply, etc.), keyboard, mouse, display screen, and optional peripherals such as speakers.
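To give a rough sense of scale for the power-consumption advantage noted above, the Python sketch below compares annual energy use for a 40 W laptop and a 200 W desktop running eight hours a day; both draw figures are assumed values chosen from within the ranges quoted earlier, not measurements of specific machines.

```python
HOURS_PER_DAY = 8
DAYS_PER_YEAR = 365

def annual_kwh(watts: float) -> float:
    """Energy consumed per year at a constant draw, in kilowatt-hours."""
    return watts * HOURS_PER_DAY * DAYS_PER_YEAR / 1000

print(f"Laptop  (40 W):  {annual_kwh(40):.0f} kWh/year")   # ~117 kWh
print(f"Desktop (200 W): {annual_kwh(200):.0f} kWh/year")  # ~584 kWh
```

Under these assumptions the desktop uses roughly five times the energy, a gap that compounds across a fleet of hundreds of machines.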
Disadvantages
Compared to desktop PCs, laptops have disadvantages in the following areas:
Performance
While the performance of mainstream desktops and laptops is comparable, and the cost of laptops has fallen less rapidly than that of desktops, laptops remain more expensive than desktop PCs at the same performance level. The upper limits of laptop performance remain much lower than those of the highest-end desktops (especially "workstation class" machines with two processor sockets), and "leading-edge" features usually appear first in desktops and only later, as the underlying technology matures, are adapted to laptops.
For Internet browsing and typical office applications, where the computer spends the majority of its time waiting for the next user input, even relatively low-end laptops (such as Netbooks) can be fast enough for some users. Most higher-end laptops are sufficiently powerful for high-resolution movie playback, some 3D gaming and video editing and encoding. However, laptop processors can be disadvantaged when dealing with a higher-end database, maths, engineering, financial software, virtualization, etc. This is because laptops use the mobile versions of processors to conserve power, and these lag behind desktop chips when it comes to performance. Some manufacturers work around this performance problem by using desktop CPUs for laptops.
Upgradeability
The upgradeability of laptops is very limited compared to thoroughly standardized desktops. In general, hard drives and memory can be upgraded easily. Optical drives and internal expansion cards may be upgraded if they follow an industry standard, but all other internal components, including the motherboard, CPU, and graphics, are not always intended to be upgradeable. The reasons for limited upgradeability are both technical and economic: there is no industry-wide standard form factor for laptops; each major laptop manufacturer pursues its own proprietary design and construction, with the result that laptops are difficult to upgrade and have high repair costs. Moreover, starting with 2013 models, laptops have become increasingly integrated, with most components (CPU, SSD, RAM, keyboard, etc.) soldered to the motherboard to reduce size, further limiting upgradeability. Devices such as sound cards, network adapters, hard and optical drives, and numerous other peripherals are available as external additions, but these upgrades usually impair the laptop's portability, because they add cables and boxes to the setup and often have to be disconnected and reconnected when the laptop is on the move.
Ergonomics and health effects
Wrists
Prolonged use of laptops can cause repetitive strain injury because of their small, flat keyboard and trackpad pointing devices. Usage of separate, external ergonomic keyboards and pointing devices is recommended to prevent injury when working for long periods of time; they can be connected to a laptop easily by USB, Bluetooth or via a docking station. Some health standards require ergonomic keyboards at workplaces.
Neck and spine
A laptop's integrated screen often requires users to lean over for a better view, which can cause neck or spinal injuries. A larger and higher-quality external screen can be connected to almost any laptop to alleviate this and to provide additional screen space for more productive work. Another solution is to use a computer stand.
Possible effect on fertility
A study by State University of New York researchers found that heat generated from laptops can increase the temperature of the lap of male users when balancing the computer on their lap, potentially putting sperm count at risk. The study, which included roughly two dozen men between the ages of 21 and 35, found that the sitting position required to balance a laptop can increase scrotum temperature by as much as . However, further research is needed to determine whether this directly affects male sterility. A later 2010 study of 29 males published in Fertility and Sterility found that men who kept their laptops on their laps experienced scrotal hyperthermia (overheating) in which their scrotal temperatures increased by up to . The resulting heat increase, which could not be offset by a laptop cushion, may increase male infertility.
A common practical solution to this problem is to place the laptop on a table or desk or to use a book or pillow between the body and the laptop. Another solution is to obtain a cooling unit for the laptop. These are usually USB powered and consist of a hard thin plastic case housing one, two, or three cooling fans – with the entire assembly designed to sit under the laptop in question – which results in the laptop remaining cool to the touch, and greatly reduces laptop heat buildup.
Thighs
Heat generated from using a laptop on the lap can also cause skin discoloration on the thighs known as "toasted skin syndrome".
Durability
Laptops are generally less durable than desktop PCs. However, durability also depends on the user: with proper maintenance, a laptop can work longer.
Equipment wear
Because of their portability, laptops are subject to more wear and physical damage than desktops. Components such as screen hinges, latches, power jacks, and power cords deteriorate gradually from ordinary use and may have to be replaced. A liquid spill onto the keyboard, a rather minor mishap with a desktop system (given that a basic keyboard costs about US$20), can damage the internals of a laptop and destroy the computer, or result in a costly repair or the replacement of the entire laptop. One study found that a laptop is three times more likely to break during the first year of use than a desktop. To maintain a laptop, it is recommended to clean it every three months to remove dirt, debris, dust, and food particles. Most cleaning kits consist of a lint-free or microfiber cloth for the LCD screen and keyboard, compressed air for getting dust out of the cooling fan, and a cleaning solution. Harsh chemicals such as bleach should not be used to clean a laptop, as they can damage it.
Heating and cooling
Laptops rely on extremely compact cooling systems involving a fan and heat sink that can fail from blockage caused by accumulated airborne dust and debris. Most laptops do not have any type of removable dust collection filter over the air intake for these cooling systems, resulting in a system that gradually runs hotter and louder as the years pass. In some cases, the laptop starts to overheat even at idle load levels. This dust is usually stuck where the fan and heat sink meet, where it cannot be removed by casual cleaning and vacuuming. Most of the time, compressed air can dislodge the dust and debris but may not entirely remove it; after the device is turned on, the loose debris is reaccumulated into the cooling system by the fans. Complete disassembly is usually required to clean the laptop entirely, although preventative maintenance such as regular cleaning of the heat sink via compressed air can prevent dust build-up. Many laptops are difficult for the average user to disassemble and contain components that are sensitive to electrostatic discharge (ESD).
Battery life
Battery life is limited because battery capacity drops with time, eventually requiring replacement after as little as a year. A new battery typically stores enough energy to run the laptop for three to five hours, depending on usage, configuration, and power management settings. As it ages, the battery's energy storage will dissipate progressively until it lasts only a few minutes. The battery is often easily replaceable, and a higher-capacity model may be obtained for longer run times. Some laptops (specifically ultrabooks) do not have the usual removable battery and must be brought to the service center of the manufacturer or a third-party laptop service center to have their battery replaced. Replacement batteries can also be expensive.
Security and privacy
Because they are valuable, commonly used, portable, and easy to hide in a backpack or other type of travel bag, laptops are often stolen. Every day, over 1,600 laptops go missing from U.S. airports. The cost of stolen business or personal data, and of the resulting problems (identity theft, credit card fraud, breach of privacy), can be many times the value of the stolen laptop itself. Consequently, the physical protection of laptops and the safeguarding of the data they contain are both of great importance. Most laptops have a Kensington security slot, which can be used to tether them to a desk or other immovable object with a security cable and lock. In addition, modern operating systems and third-party software offer disk encryption functionality, which renders the data on the laptop's hard drive unreadable without a key or a passphrase. As of 2015, some laptops also include additional security elements, such as eye recognition software and fingerprint scanning components.
Software such as LoJack for Laptops, Laptop Cop, and GadgetTrack has been engineered to help people locate and recover stolen laptops. Setting a password on the laptop's firmware (protecting against entering firmware setup or booting), on the internal HDD/SSD (protecting against accessing it and loading an operating system onto it), and on every user account of the operating system are additional security measures that a user should take. Fewer than 5% of lost or stolen laptops are recovered by the companies that own them; however, that figure may improve as companies and software solutions specializing in laptop recovery proliferate. In the 2010s, the common availability of webcams on laptops raised privacy concerns. In Robbins v. Lower Merion School District (Eastern District of Pennsylvania, 2010), school-issued laptops loaded with special software enabled staff from two high schools to take secret webcam shots of students at home, via the students' laptops.
Sales
Manufacturers
There are many laptop brands and manufacturers. Several major brands that offer notebooks in various classes are listed in the adjacent box.
The major brands usually offer good service and support, including well-executed documentation and driver downloads that remain available for many years after a particular laptop model is no longer produced. Capitalizing on service, support, and brand image, laptops from major brands are more expensive than laptops by smaller brands and ODMs. Some brands specialize in a particular class of laptops, such as gaming laptops (Alienware), high-performance laptops (HP Envy), netbooks (EeePC) and laptops for children (OLPC).
Many brands, including the major ones, do not design and do not manufacture their laptops. Instead, a small number of Original Design Manufacturers (ODMs) design new models of laptops, and the brands choose the models to be included in their lineup. In 2006, 7 major ODMs manufactured 7 of every 10 laptops in the world, with the largest one (Quanta Computer) having 30% of the world market share. Therefore, identical models are available both from a major label and from a low-profile ODM in-house brand.
Market share
Battery-powered portable computers had just 2% worldwide market share in 1986. However, laptops have become increasingly popular, both for business and personal use. Around 109 million notebook PCs shipped worldwide in 2007, a growth of 33% compared to 2006. In 2008 it was estimated that 145.9 million notebooks were sold, and that the number would grow in 2009 to 177.7 million. The third quarter of 2008 was the first time when worldwide notebook PC shipments exceeded desktops, with 38.6 million units versus 38.5 million units.
May 2005 was the first time notebooks outsold desktops in the US over the course of a full month; at the time notebooks sold for an average of $1,131 while desktops sold for an average of $696. Broken down by operating system, the average selling price (ASP) of Microsoft Windows laptops declined in 2008/2009, possibly due to low-cost netbooks, drawing an average of US$689 at U.S. retail stores in August 2008. In 2009, the ASP had fallen further, to $602 by January and to $560 in February. While the Windows ASP fell by $129 in those seven months, the ASP of Apple macOS laptops declined just $12, from $1,524 to $1,512.
Disposal
The list of materials that go into a laptop computer is long, and many of the substances used, such as beryllium (used in beryllium-copper alloy contacts in some connectors and sockets), lead (used in lead-tin solder), chromium, and mercury compounds (used in CCFL LCD backlights), are toxic or carcinogenic to humans. Although these toxins are relatively harmless when the laptop is in use, concerns that discarded laptops pose a serious health risk and cause toxic environmental damage were strong enough that the Waste Electrical and Electronic Equipment Directive (WEEE Directive) in Europe specified that all laptop computers must be recycled by law. Similarly, the U.S. Environmental Protection Agency (EPA) has outlawed the landfill dumping and incineration of discarded laptop computers.
Most laptop computers begin the recycling process with a method known as demanufacturing, which involves the physical separation of the laptop's components. These components are then either grouped by material (e.g., plastic, metal, and glass) for recycling, or, for more complex items such as circuit boards, hard drives, and batteries, sent for more advanced materials separation.
Corporate laptop recycling can require an additional process known as data destruction. The data destruction process ensures that all information or data that has been stored on a laptop hard drive can never be retrieved again. Below is an overview of some of the data protection and environmental laws and regulations applicable to laptop recycling and data destruction:
Data Protection Act 1998 (DPA)
EU Privacy Directive (Due 2016)
Financial Conduct Authority
Sarbanes-Oxley Act
PCI-DSS Data Security Standard
Waste, Electronic & Electrical Equipment Directive (WEEE)
Basel Convention
Bank Secrecy Act (BSA)
FACTA (Fair and Accurate Credit Transactions Act)
FDA Security Regulations (21 C.F.R. part 11)
Gramm-Leach-Bliley Act (GLBA)
HIPAA (Health Insurance Portability and Accountability Act)
NIST SP 800-53
NIST SP 800-171
Identity Theft and Assumption Deterrence Act
Patriot Act of 2001
US Safe Harbor Provisions
Various state laws
JAN 6/3
DCID
Extreme use
The ruggedized Grid Compass computer was used aboard the Space Shuttle since the early days of the program. The first commercial laptop used in space was a Macintosh Portable, flown in 1991 aboard Space Shuttle mission STS-43. Apple and other laptop computers continue to be flown aboard crewed spaceflights, though the only long-duration flight-certified laptop for the International Space Station is the ThinkPad. As of 2011, over 100 ThinkPads were aboard the ISS. Laptops used aboard the International Space Station and other spaceflights are generally the same ones that can be purchased by the general public, but modifications are made as needed to allow them to be used safely and effectively in a weightless environment, such as updating the cooling systems to function without relying on hot air rising, and accommodating the lower cabin air pressure. Laptops operated in harsh usage environments and conditions, such as strong vibrations, extreme temperatures, and wet or dusty conditions, differ from those used in space in that they are custom-designed for the task and do not use commercial off-the-shelf hardware.
See also
List of computer size categories
List of laptop brands and manufacturers
Netbook
Smartbook
Chromebook
Ultrabook
Smartphone
Subscriber Identity Module
Mobile broadband
Mobile Internet device (MID)
Personal digital assistant
VIA OpenBook
Tethering
XJACK
Open-source computer hardware
Novena
Portal laptop computer
Mobile modem
Stereoscopy glasses
Notes
References
Classes of computers
Japanese inventions
Mobile computers
Office equipment
Personal computers
1980s neologisms |