id | url | title | text
---|---|---|---|
49007 | https://en.wikipedia.org/wiki/Stream%20cipher | Stream cipher | A stream cipher is a symmetric key cipher where plaintext digits are combined with a pseudorandom cipher digit stream (keystream). In a stream cipher, each plaintext digit is encrypted one at a time with the corresponding digit of the keystream, to give a digit of the ciphertext stream. Since encryption of each digit is dependent on the current state of the cipher, it is also known as a state cipher. In practice, a digit is typically a bit and the combining operation is an exclusive-or (XOR).
The pseudorandom keystream is typically generated serially from a random seed value using digital shift registers. The seed value serves as the cryptographic key for decrypting the ciphertext stream. Stream ciphers represent a different approach to symmetric encryption from block ciphers. Block ciphers operate on large blocks of digits with a fixed, unvarying transformation. This distinction is not always clear-cut: in some modes of operation, a block cipher primitive is used in such a way that it acts effectively as a stream cipher. Stream ciphers typically execute at a higher speed than block ciphers and have lower hardware complexity. However, stream ciphers can be susceptible to security breaches (see stream cipher attacks); for example, when the same starting state (seed) is used twice.
Loose inspiration from the one-time pad
Stream ciphers can be viewed as approximating the action of a proven unbreakable cipher, the one-time pad (OTP). A one-time pad uses a keystream of completely random digits. The keystream is combined with the plaintext digits one at a time to form the ciphertext. This system was proved to be secure by Claude E. Shannon in 1949. However, the keystream must be generated completely at random with at least the same length as the plaintext and cannot be used more than once. This makes the system cumbersome to implement in many practical applications, and as a result the one-time pad has not been widely used, except for the most critical applications. Key generation, distribution and management are critical for those applications.
A stream cipher makes use of a much smaller and more convenient key such as 128 bits. Based on this key, it generates a pseudorandom keystream which can be combined with the plaintext digits in a similar fashion to the one-time pad. However, this comes at a cost. The keystream is now pseudorandom and so is not truly random. The proof of security associated with the one-time pad no longer holds. It is quite possible for a stream cipher to be completely insecure.
Types
A stream cipher generates successive elements of the keystream based on an internal state. This state is updated in essentially two ways: if the state changes independently of the plaintext or ciphertext messages, the cipher is classified as a synchronous stream cipher. By contrast, self-synchronising stream ciphers update their state based on previous ciphertext digits.
Synchronous stream ciphers
In a synchronous stream cipher a stream of pseudorandom digits is generated independently of the plaintext and ciphertext messages, and then combined with the plaintext (to encrypt) or the ciphertext (to decrypt). In the most common form, binary digits are used (bits), and the keystream is combined with the plaintext using the exclusive or operation (XOR). This is termed a binary additive stream cipher.
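As a minimal sketch of the binary additive construction (not a secure cipher; the keystream expander below is a stand-in, hypothetical helper, where a real design would use a dedicated algorithm such as ChaCha20), the same XOR operation both encrypts and decrypts:

```python
import hashlib
import secrets

def keystream_bytes(seed: bytes, length: int) -> bytes:
    """Toy keystream expander for illustration only -- NOT cryptographically secure."""
    out = bytearray()
    state = seed
    while len(out) < length:
        state = hashlib.sha256(state).digest()  # hash-chain the state for more bytes
        out.extend(state)
    return bytes(out[:length])

def xor_combine(data: bytes, keystream: bytes) -> bytes:
    """Binary additive combining: output = input XOR keystream (same op encrypts and decrypts)."""
    return bytes(d ^ k for d, k in zip(data, keystream))

key = secrets.token_bytes(16)               # 128-bit key / seed
plaintext = b"attack at dawn"
ks = keystream_bytes(key, len(plaintext))
ciphertext = xor_combine(plaintext, ks)
assert xor_combine(ciphertext, ks) == plaintext   # decryption is the same XOR
```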
In a synchronous stream cipher, the sender and receiver must be exactly in step for decryption to be successful. If digits are added or removed from the message during transmission, synchronisation is lost. To restore synchronisation, various offsets can be tried systematically to obtain the correct decryption. Another approach is to tag the ciphertext with markers at regular points in the output.
If, however, a digit is corrupted in transmission, rather than added or lost, only a single digit in the plaintext is affected and the error does not propagate to other parts of the message. This property is useful when the transmission error rate is high; however, it makes it less likely the error would be detected without further mechanisms. Moreover, because of this property, synchronous stream ciphers are very susceptible to active attacks: if an attacker can change a digit in the ciphertext, they might be able to make predictable changes to the corresponding plaintext bit; for example, flipping a bit in the ciphertext causes the same bit to be flipped in the plaintext.
Self-synchronizing stream ciphers
Another approach uses several of the previous N ciphertext digits to compute the keystream. Such schemes are known as self-synchronizing stream ciphers, asynchronous stream ciphers or ciphertext autokey (CTAK). The idea of self-synchronization was patented in 1946 and has the advantage that the receiver will automatically synchronise with the keystream generator after receiving N ciphertext digits, making it easier to recover if digits are dropped or added to the message stream. Single-digit errors are limited in their effect, affecting only up to N plaintext digits.
An example of a self-synchronising stream cipher is a block cipher in cipher feedback (CFB) mode.
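A rough sketch of the self-synchronizing idea in the spirit of CFB, using a keyed hash as a toy stand-in for a real block cipher (purely for illustration; the names are hypothetical): each keystream byte is derived from the previous N ciphertext bytes, so a receiver that has seen the last N ciphertext bytes can fall back into step.

```python
import hashlib

N = 8  # number of previous ciphertext bytes fed back

def toy_block(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher such as AES; illustration only.
    return hashlib.sha256(key + block).digest()

def cfb_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    feedback = bytearray(iv[:N])
    out = bytearray()
    for p in plaintext:
        k = toy_block(key, bytes(feedback))[0]   # one keystream byte
        c = p ^ k
        out.append(c)
        feedback = feedback[1:] + bytes([c])     # shift the ciphertext byte into the feedback
    return bytes(out)

def cfb_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    feedback = bytearray(iv[:N])
    out = bytearray()
    for c in ciphertext:
        k = toy_block(key, bytes(feedback))[0]
        out.append(c ^ k)
        feedback = feedback[1:] + bytes([c])     # feedback uses ciphertext, so it self-synchronizes
    return bytes(out)

key, iv = b"demo key 16bytes", b"initialvector!!!"
msg = b"self-synchronising stream"
assert cfb_decrypt(key, iv, cfb_encrypt(key, iv, msg)) == msg
```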
Based on linear-feedback shift registers
Binary stream ciphers are often constructed using linear-feedback shift registers (LFSRs) because they can be easily implemented in hardware and can be readily analysed mathematically. The use of LFSRs on their own, however, is insufficient to provide good security. Various schemes have been proposed to increase the security of LFSRs.
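A Fibonacci-style LFSR can be sketched in a few lines of Python. The tap positions below are taken from a commonly cited maximal-length 16-bit example and are illustrative only; as the text notes, a bare LFSR output is far too predictable to serve as a keystream on its own.

```python
def lfsr(state: int, taps: tuple, nbits: int):
    """Fibonacci LFSR bit generator.
    state: nonzero initial register contents; taps: bit positions XORed to form feedback."""
    while True:
        yield state & 1                        # output the low bit
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1             # XOR of the tapped bits
        state = (state >> 1) | (fb << (nbits - 1))

# Example: 16-bit register; these taps are commonly quoted as giving a 2**16 - 1 period.
gen = lfsr(state=0xACE1, taps=(0, 2, 3, 5), nbits=16)
first_bits = [next(gen) for _ in range(16)]
```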
Non-linear combining functions
Because LFSRs are inherently linear, one technique for removing the linearity is to feed the outputs of several parallel LFSRs into a non-linear Boolean function to form a combination generator. Various properties of such a combining function are critical for ensuring the security of the resultant scheme, for example, in order to avoid correlation attacks.
Clock-controlled generators
Normally LFSRs are stepped regularly. One approach to introducing non-linearity is to have the LFSR clocked irregularly, controlled by the output of a second LFSR. Such generators include the stop-and-go generator, the alternating step generator and the shrinking generator.
An alternating step generator comprises three LFSRs, which we will call LFSR0, LFSR1 and LFSR2 for convenience. The output of one of the registers decides which of the other two is to be used; for instance, if LFSR2 outputs a 0, LFSR0 is clocked, and if it outputs a 1, LFSR1 is clocked instead. The output is the exclusive OR of the last bit produced by LFSR0 and LFSR1. The initial state of the three LFSRs is the key.
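A direct transcription of that description into Python, reusing the lfsr sketch above to supply the three registers (all names are illustrative): the control register decides which of the other two steps, and each output bit is the XOR of the last bits produced by LFSR0 and LFSR1.

```python
def alternating_step(lfsr0, lfsr1, lfsr2):
    """Alternating step generator: LFSR2 decides which of LFSR0/LFSR1 is clocked."""
    last0, last1 = next(lfsr0), next(lfsr1)   # prime each data register once
    while True:
        if next(lfsr2) == 0:
            last0 = next(lfsr0)               # clock LFSR0
        else:
            last1 = next(lfsr1)               # clock LFSR1
        yield last0 ^ last1                   # XOR of the last bits produced

ks = alternating_step(lfsr(0x1D2C, (0, 2, 3, 5), 16),
                      lfsr(0x7A3F, (0, 2, 3, 5), 16),
                      lfsr(0x0BEE, (0, 2, 3, 5), 16))
bits = [next(ks) for _ in range(8)]
```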
The stop-and-go generator (Beth and Piper, 1984) consists of two LFSRs. One LFSR is clocked if the output of a second is a 1, otherwise it repeats its previous output. This output is then (in some versions) combined with the output of a third LFSR clocked at a regular rate.
The shrinking generator takes a different approach. Two LFSRs are used, both clocked regularly. If the output of the first LFSR is 1, the output of the second LFSR becomes the output of the generator. If the first LFSR outputs 0, however, the output of the second is discarded, and no bit is output by the generator. This mechanism suffers from timing attacks on the second generator, since the speed of the output is variable in a manner that depends on the second generator's state. This can be alleviated by buffering the output.
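The shrinking generator is equally short to sketch (again reusing the lfsr helper above): the first LFSR acts as a selector, and a data bit is emitted only when the selector bit is 1, which is exactly what makes the raw output rate variable and timing-sensitive.

```python
def shrinking(lfsr_select, lfsr_data):
    """Shrinking generator: both LFSRs are clocked every step; the data bit is kept
    only when the selector bit is 1, otherwise nothing is output for that step."""
    while True:
        s, d = next(lfsr_select), next(lfsr_data)
        if s == 1:
            yield d

ks = shrinking(lfsr(0x2B45, (0, 2, 3, 5), 16), lfsr(0x6E01, (0, 2, 3, 5), 16))
bits = [next(ks) for _ in range(8)]
```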
Filter generator
Another approach to improving the security of an LFSR is to pass the entire state of a single LFSR into a non-linear filtering function.
Other designs
Instead of a linear driving device, one may use a nonlinear update function. For example, Klimov and Shamir proposed triangular functions (T-functions) with a single cycle on n-bit words.
Security
For a stream cipher to be secure, its keystream must have a large period, and it must be impossible to recover the cipher's key or internal state from the keystream. Cryptographers also demand that the keystream be free of even subtle biases that would let attackers distinguish a stream from random noise, and free of detectable relationships between keystreams that correspond to related keys or related cryptographic nonces. That should be true for all keys (there should be no weak keys), even if the attacker can know or choose some plaintext or ciphertext.
As with other attacks in cryptography, stream cipher attacks can be certificational; that is, they are not necessarily practical ways to break the cipher, but they indicate that the cipher might have other weaknesses.
Securely using a secure synchronous stream cipher requires that one never reuse the same keystream twice. That generally means a different nonce or key must be supplied to each invocation of the cipher. Application designers must also recognize that most stream ciphers provide not authenticity but privacy: encrypted messages may still have been modified in transit.
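A short illustration of why keystream reuse is fatal, reusing the keystream_bytes and xor_combine sketch from above with the same key: XORing two ciphertexts produced under the same keystream cancels the keystream entirely, leaving the XOR of the two plaintexts, from which both are often recoverable.

```python
# Reuses keystream_bytes, xor_combine and key from the earlier sketch.
p1, p2 = b"transfer $100", b"transfer $999"
ks = keystream_bytes(key, len(p1))       # same key and (implicit) nonce reused: the mistake
c1, c2 = xor_combine(p1, ks), xor_combine(p2, ks)
leaked = xor_combine(c1, c2)             # keystream cancels out: this equals p1 XOR p2
assert leaked == xor_combine(p1, p2)
```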
Short periods for stream ciphers have been a practical concern. For example, 64-bit block ciphers like DES can be used to generate a keystream in output feedback (OFB) mode. However, when not using full feedback, the resulting stream has a period of around 2^32 blocks on average; for many applications, the period is far too low. For example, if encryption is being performed at a rate of 8 megabytes per second, a stream of period 2^32 blocks will repeat after about a half an hour.
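A rough sanity check of that figure, assuming each block cipher call contributes the full 8-byte (64-bit) block to the keystream and taking 8 megabytes as 8 × 10⁶ bytes:

\[
2^{32}\ \text{blocks} \times 8\ \tfrac{\text{bytes}}{\text{block}} = 2^{35}\ \text{bytes} \approx 3.4\times 10^{10}\ \text{bytes},
\qquad
\frac{3.4\times 10^{10}\ \text{bytes}}{8\times 10^{6}\ \text{bytes/s}} \approx 4300\ \text{s} \approx 70\ \text{minutes}.
\]

With a narrower feedback/output width, each call yields fewer keystream bytes and the repeat comes proportionally sooner, so the exact time depends on the mode parameters; either way it is on the order of an hour or less of continuous traffic.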
Some applications using the stream cipher RC4 are attackable because of weaknesses in RC4's key setup routine; new applications should either avoid RC4 or make sure all keys are unique and ideally unrelated (such as generated by a well-seeded CSPRNG or a cryptographic hash function) and that the first bytes of the keystream are discarded.
The elements of stream ciphers are often much simpler to understand than block ciphers and are thus less likely to hide any accidental or malicious weaknesses.
Usage
Stream ciphers are often used for their speed and simplicity of implementation in hardware, and in applications where plaintext comes in quantities of unknowable length, such as a secure wireless connection. If a block cipher (not operating in a stream cipher mode) were used in this type of application, the designer would have to sacrifice either transmission efficiency or implementation simplicity, since block ciphers cannot directly work on blocks shorter than their block size. For example, if a 128-bit block cipher received separate 32-bit bursts of plaintext, three quarters of the data transmitted would be padding. Block ciphers must be used in ciphertext stealing or residual block termination mode to avoid padding, while stream ciphers eliminate this issue by naturally operating on the smallest unit that can be transmitted (usually bytes).
Another advantage of stream ciphers in military cryptography is that the cipher stream can be generated in a separate box that is subject to strict security measures and fed to other devices such as a radio set, which will perform the XOR operation as part of their function. The latter device can then be designed and used in less stringent environments.
ChaCha is becoming the most widely used stream cipher in software; others include RC4, A5/1, A5/2, Chameleon, FISH, Helix, ISAAC, MUGI, Panama, Phelix, Pike, Salsa20, SEAL, SOBER, SOBER-128, and WAKE.
Trivia
United States National Security Agency documents sometimes use the term combiner-type algorithms, referring to algorithms that use some function to combine a pseudorandom number generator (PRNG) with a plaintext stream.
See also
eSTREAM
Linear-feedback shift register (LFSR)
Nonlinear-feedback shift register (NLFSR)
Notes
References
Matt J. B. Robshaw, Stream Ciphers Technical Report TR-701, version 2.0, RSA Laboratories, 1995 (PDF).
Christof Paar, Jan Pelzl, "Stream Ciphers", Chapter 2 of "Understanding Cryptography, A Textbook for Students and Practitioners". (companion web site contains online cryptography course that covers stream ciphers and LFSR), Springer, 2009.
External links
RSA technical report on stream cipher operation.
Cryptanalysis and Design of Stream Ciphers (thesis by Hongjun Wu).
Analysis of Lightweight Stream Ciphers (thesis by S. Fischer).
Cryptographic primitives |
49716 | https://en.wikipedia.org/wiki/Programmable%20ROM | Programmable ROM | A programmable read-only memory (PROM) is a form of digital memory where the contents can be changed once after manufacture of the device. The data is then permanent and cannot be changed. It is one type of read-only memory (ROM). PROMs are used in digital electronic devices to store permanent data, usually low-level programs such as firmware or microcode. The key difference from a standard ROM is that in a ROM the data is written in during manufacture, while in a PROM the data is programmed into it after manufacture. Thus, ROMs tend to be used only for large production runs with well-verified data. PROMs may be used where the volume required does not make a factory-programmed ROM economical, or during development of a system that may ultimately be converted to ROMs in a mass-produced version.
PROMs are manufactured blank and, depending on the technology, can be programmed at wafer, final test, or in system. Blank PROM chips are programmed by plugging them into a device called a PROM programmer. Companies can keep a supply of blank PROMs in stock, and program them at the last minute to avoid large volume commitment. These types of memories are frequently used in microcontrollers, video game consoles, mobile phones, radio-frequency identification (RFID) tags, implantable medical devices, high-definition multimedia interfaces (HDMI) and in many other consumer and automotive electronics products.
History
The PROM was invented in 1956 by Wen Tsing Chow, working for the Arma Division of the American Bosch Arma Corporation in Garden City, New York. The invention was conceived at the request of the United States Air Force to come up with a more flexible and secure way of storing the targeting constants in the Atlas E/F ICBM's airborne digital computer. The patent and associated technology were held under secrecy order for several years while the Atlas E/F was the main operational missile of the United States ICBM force. The term burn, referring to the process of programming a PROM, is also in the original patent, as one of the original implementations was to literally burn the internal whiskers of diodes with a current overload to produce a circuit discontinuity. The first PROM programming machines were also developed by Arma engineers under Mr. Chow's direction and were located in Arma's Garden City lab and Air Force Strategic Air Command (SAC) headquarters.
OTP (one time programmable) memory is a special type of non-volatile memory (NVM) that permits data to be written to memory only once. Once the memory has been programmed, it retains its value upon loss of power (i.e., is non-volatile). OTP memory is used in applications where reliable and repeatable reading of data is required. Examples include boot code, encryption keys and configuration parameters for analog, sensor or display circuitry. OTP NVM is characterized, over other types of NVM like eFuse or EEPROM, by offering a low power, small area footprint memory structure. As such OTP memory finds application in products from microprocessors & display drivers to Power Management ICs (PMICs).
Commercially available semiconductor antifuse-based OTP memory arrays have been around at least since 1969, with initial antifuse bit cells dependent on blowing a capacitor between crossing conductive lines. Texas Instruments developed a MOS gate oxide breakdown antifuse in 1979. A dual-gate-oxide two-transistor (2T) MOS antifuse was introduced in 1982. Early oxide breakdown technologies exhibited a variety of scaling, programming, size and manufacturing problems that prevented volume production of memory devices based on these technologies.
Another form of one-time programmable memory device uses the same semiconductor chip as an ultraviolet-erasable programmable read-only memory (UV-EPROM), but the finished device is put into an opaque package, instead of the expensive ceramic package with transparent quartz window required for erasing. These devices are programmed with the same methods as the UV EPROM parts but are less costly. Embedded controllers may be available in both field-erasable and one-time styles, allowing a cost saving in volume production without the expense and lead time of factory-programmed mask ROM chips.
Although antifuse-based PROM has been available for decades, it wasn't available in standard CMOS until 2001, when Kilopass Technology Inc. patented 1T, 2T, and 3.5T antifuse bit cell technologies using a standard CMOS process, enabling integration of PROM into logic CMOS chips. The first process node at which an antifuse could be implemented in standard CMOS was 0.18 µm. Since the gate oxide breakdown voltage is lower than the junction breakdown voltage, special diffusion steps were not required to create the antifuse programming element. In 2005, a split-channel antifuse device was introduced by Sidense. This split-channel bit cell combines the thick (IO) and thin (gate) oxide devices into one transistor (1T) with a common polysilicon gate.
Programming
A typical PROM comes with all bits reading as "1". Burning a fuse bit during programming causes the bit to read as "0"; blowing a fuse is an irreversible process. Some devices can be "reprogrammed" if the new data only replaces "1"s with "0"s. Some CPU instruction sets (such as that of the MOS Technology 6502) took advantage of this by defining a break (BRK) instruction with the operation code of '00'. In cases where an instruction turned out to be incorrect, it could be "reprogrammed" to a BRK, causing the CPU to transfer control to a patch. The patch would execute the correct instruction and return to the instruction after the BRK.
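A small sketch of the constraint this imposes: a byte already burned into a PROM can only have more bits cleared, never set back to 1, so a replacement value is possible only if the old value "covers" it bit-for-bit. The function name is hypothetical, for illustration only.

```python
def can_reprogram(old: int, new: int) -> bool:
    """A PROM bit can only go from 1 to 0, so every 0 in `old` must remain 0 in `new`."""
    return (old & new) == new

BRK = 0x00  # 6502 break opcode: all bits zero, so any byte can be burned down to it

assert can_reprogram(0xFF, 0xA5)      # a blank location (all 1s) accepts any value
assert can_reprogram(0xA5, BRK)       # any already-programmed byte can be turned into BRK
assert not can_reprogram(0xA5, 0xFF)  # cleared bits cannot be restored to 1
```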
The bit cell is programmed by applying a high-voltage pulse not encountered during normal operation across the gate and substrate of the thin oxide transistor (around 6 V for a 2 nm thick oxide, or 30 MV/cm) to break down the oxide between gate and substrate. The positive voltage on the transistor's gate forms an inversion channel in the substrate below the gate, causing a tunneling current to flow through the oxide. The current produces additional traps in the oxide, increasing the current through the oxide and ultimately melting the oxide and forming a conductive channel from gate to substrate. The current required to form the conductive channel is around 100 µA/100 nm, and the breakdown occurs in approximately 100 µs or less.
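As a consistency check of those figures, the electric field across the oxide is simply the applied voltage divided by the oxide thickness:

\[
E = \frac{V}{t_{\text{ox}}} = \frac{6\ \text{V}}{2\ \text{nm}} = 3 \times 10^{9}\ \text{V/m} = 30\ \text{MV/cm}.
\]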
Notes
References
1977 Intel Memory Design Handbook - archive.org
Intel PROM datasheets - intel-vintage.info
View the US "Switch Matrix" Patent #3028659 at US Patent Office or Google
View Kilopass Technology Patent US "High density semiconductor memory cell and memory array using a single transistor and having variable gate oxide breakdown" Patent #6940751 at US Patent Office or Google
View Sidense US "Split Channel Antifuse Array Architecture" Patent #7402855 at US Patent Office or Google
View the US "Method of Manufacturing Semiconductor Integrated Circuits" Patent #3634929 at US Patent Office or Google
CHOI et al. (2008). "New Non-Volatile Memory Structures for FPGA Architectures"
For the Advantages and Disadvantages table, see Ramamoorthy, G: "Dataquest Insight: Nonvolatile Memory IP Market, Worldwide, 2008-2013", page 10. Gartner, 2009
External links
Looking inside a 1970s PROM chip that stores data in microscopic fuse - shows die of a 256x4 MMI 5300 PROM
Non-volatile memory
Computer memory
Computer-related introductions in 1956
American inventions |
50373 | https://en.wikipedia.org/wiki/Type%20B%20Cipher%20Machine | Type B Cipher Machine | In the history of cryptography, the "System 97 Typewriter for European Characters" (九七式欧文印字機) or "Type B Cipher Machine", codenamed Purple by the United States, was an encryption machine used by the Japanese Foreign Office from February 1939 to the end of World War II. The machine was an electromechanical device that used stepping-switches to encrypt the most sensitive diplomatic traffic. All messages were written in the 26-letter English alphabet, which was commonly used for telegraphy. Any Japanese text had to be transliterated or coded. The 26 letters were separated using a plug board into two groups, of six and twenty letters respectively. The letters in the sixes group were scrambled using a 6 × 25 substitution table, while letters in the twenties group were more thoroughly scrambled using three successive 20 × 25 substitution tables.
The cipher codenamed "Purple" replaced the Type A Red machine previously used by the Japanese Foreign Office. The sixes and twenties division was familiar to U.S. Army Signals Intelligence Service (SIS) cryptographers from their work on the Type A cipher and it allowed them to make early progress on the sixes portion of messages. The twenties cipher proved much more difficult, but a breakthrough in September 1940 allowed the Army cryptographers to construct a machine that duplicated the behavior (was an analog) of the Japanese machines, even though no one in the U.S. had any description of one.
The Japanese also used stepping-switches in systems, codenamed Coral and Jade, that did not divide their alphabets. American forces referred to information gained from decryptions as Magic.
Development of Japanese cipher machines
Overview
The Imperial Japanese Navy did not cooperate with the Army in pre-war cipher machine development, and that lack of cooperation continued into World War II. The Navy believed the Purple machine was sufficiently difficult to break that it did not attempt to revise it to improve security. This seems to have been on the advice of a mathematician, Teiji Takagi, who lacked a background in cryptanalysis. The Ministry of Foreign Affairs was supplied Red and Purple by the Navy. No one in Japanese authority noticed the weak points in both machines.
Just before the end of the war, the Army warned the Navy of a weak point of Purple, but the Navy failed to act on this advice.
The Army developed their own cipher machines on the same principle as Enigma -- 92-shiki injiki, 97-shiki injiki and 1-shiki 1-go injiki -- from 1932 to 1941. The Army judged that these machines had lower security than the Navy's Purple design, so the Army's two cipher machines were less used.
Prototype of Red
Japanese diplomatic communications at negotiations for the Washington Naval Treaty were broken by the American Black Chamber in 1922, and when this became publicly known, there was considerable pressure to improve their security. In any case, the Japanese Navy had planned to develop their first cipher machine for the following London Naval Treaty. Japanese Navy Captain Risaburo Ito, of Section 10 (cipher & code) of the Japanese Navy General Staff Office, supervised the work.
The development of the machine was the responsibility of the Japanese Navy Institute of Technology, Electric Research Department, Section 6. In 1928, the chief designer Kazuo Tanabe and Navy Commander Genichiro Kakimoto developed a prototype of Red, "Roman-typewriter cipher machine".
The prototype used the same principle as the Kryha cipher machine, having a plug-board, and was used by the Japanese Navy and Ministry of Foreign Affairs at negotiations for the London Naval Treaty in 1930.
Red
The prototype machine was finally completed as "Type 91 Typewriter" in 1931. The year 1931 was year 2591 in the Japanese Imperial calendar. Thus it was prefixed "91-shiki" from the year it was developed.
The 91-shiki injiki Roman-letter model was also used by the Ministry of Foreign Affairs as "Type A Cipher Machine", codenamed "Red" by United States cryptanalysts.
The Red machine was unreliable unless the contacts in its half-rotor switch were cleaned every day. It enciphered vowels (AEIOUY) and consonants separately, perhaps to reduce telegram costs, and this was a significant weak point. The Navy also used the 91-shiki injiki Kana-letter model at its bases and on its vessels.
Purple
In 1937, the Japanese completed the next generation "Type 97 Typewriter". The Ministry of Foreign Affairs machine was the "Type B Cipher Machine", codenamed Purple by United States cryptanalysts.
The chief designer of Purple was Kazuo Tanabe. His engineers were Masaji Yamamoto and Eikichi Suzuki. Eikichi Suzuki suggested the use of a stepping switch instead of the more troublesome half-rotor switch.
Clearly, the Purple machine was more secure than Red, but the Navy did not recognize that Red had already been broken. The Purple machine inherited a weakness from the Red machine that six letters of the alphabet were encrypted separately. It differed from Red in that the group of letters was changed and announced every nine days, whereas in Red they were permanently fixed as the Latin vowels 'a', 'e', 'i', 'o', 'u' and 'y'. Thus US Army SIS was able to break the cipher used for the six letters before it was able to break the one used for the 20 others.
Design
The Type B Cipher Machine consisted of several components. As reconstructed by the US Army, there were electric typewriters at either end, similar to those used with the Type A Machine. The Type B was organized for encryption as follows:
An input typewriter
An input plugboard that permutes the letters from the typewriter keyboard and separates them into a group of 6 letters and a group of 20 letters
A stepping switch with 6 layers wired to select one out of 25 permutations of the letters in the sixes group
Three stages of stepping switches (I, II, and III), connected in series. Each stage is effectively a 20 layer switch with 25 outputs on each layer. Each stage selects one out of 25 permutations of the letters in the twenties group. The Japanese used three 7-layer stepping switches geared together to build each stage (see photos). The U.S. SIS used four 6-layer switches per stage in their first analog machine.
An output plug board that reverses the input permutation and sends the letters to the output typewriter for printing
The output typewriter
For decryption, the data flow is reversed. The keyboard on the second typewriter becomes the input and the twenties letters pass through the stepping switch stages in the opposite order.
Stepping switches
A stepping switch is a multi-layer mechanical device that was commonly used at the time in telephone switching systems. Each layer has a set of electrical contacts, 25 in the Type B, arranged in a semicircular arc. These do not move and are called the stator. A wiper arm on a rotor at the focus of the semicircle connects with one stator contact at a time. The rotors on each layer are attached to a single shaft that advances from one stator contact to the next whenever an electromagnet connected to a ratchet is pulsed. There are actually two wiper arms on each level, connected together, so that when one wiper advances past the last contact in the semicircle, the other engages the first contact. This allows the rotor connections to keep cycling through all 25 stator contacts as the electromagnet is pulsed.
To encrypt the twenties letters, a 20-layer stepping switch was needed in each of the three stages. Both the Japanese version and the early American analog constructed each stage from several smaller stepping switches of the type used in telephone central offices. The American analog used four 6-level switches to create one 20-layer switch. The four switches in each stage were wired to step synchronously. The fragment of a Type 97 Japanese machine on display at the National Cryptologic Museum, the largest piece known in existence, has three 7-layer stepping switches (see photo). The U.S. Army developed an improved analog in 1944 that has all the layers needed for each stage on a single shaft. An additional layer was used in the improved analog to automatically set each switch bank to the initial position specified in the key.
However implemented, the 20-layer stepping switch in each stage had 20 rotor connections and 500 stator connections, one wiper and 25 stator contacts on each layer. Each stage must have exactly 20 connections on each end to connect with the adjacent stage or plugboard. On the rotor side, that is not a problem as there are 20 rotors. On the stator end of a stage, every column of stator contacts corresponding to the same rotor position on each of the 20 layers is connected to the 20 output wires (leads in the diagram) in a scrambled order, creating a permutation of the 20 inputs. This is done differently for each of the rotor positions. Thus each stator output wire has 25 connections, one from each rotor position, though from different levels. The connections needed to do this created a "rats nest" of wires in the early U.S. analog. The improved analog organized the wiring more neatly with three matrices of soldering terminals visible above each stepping switch in the photograph.
Stepping order
The stages were bi-directional. Signals went through each stage in one direction for encryption and in the other direction for decryption. Unlike the system in the German Enigma machine, the order of the stages was fixed and there was no reflector. However the stepping arrangement could be changed.
The sixes switches stepped one position for each character encrypted or decrypted. The motions of the switches in the twenties stages were more complex. The three stages were assigned to step fast, medium or slow. There were six possible ways to make this assignment and the choice was determined by a number included at the beginning of each message called the message indicator. The U.S. improved analog has a six-position switch for making this assignment, see photo. The message indicator also specified the initial positions of the twenties switches. The indicator was different for each message or part of a message, when multi-part messages were sent. The final part of the key, the alphabet plugboard arrangement, was changed daily.
The twenties switch stepping was controlled in part by the sixes switch. Exactly one of the three switches stepped for each character. The fast switch stepped for each character except when the sixes switch was in its 25th position. Then the medium switch stepped, unless it too was in its 25th position, in which case the slow switch stepped.
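That stepping rule is compact enough to express directly. The sketch below uses hypothetical names, numbers positions 1–25 as in the text, and glosses over exactly when in the cycle the positions are sampled; it advances the sixes switch every character and exactly one of the three twenties stages, according to the fast/medium/slow assignment.

```python
def step(sixes: int, fast: int, medium: int, slow: int):
    """Advance the Type B switch positions by one character.
    All positions are 1..25; exactly one of the three twenties stages steps."""
    if sixes != 25:
        fast = fast % 25 + 1        # normal case: the fast stage steps
    elif medium != 25:
        medium = medium % 25 + 1    # sixes switch at 25: the medium stage steps
    else:
        slow = slow % 25 + 1        # sixes and medium both at 25: the slow stage steps
    sixes = sixes % 25 + 1          # the sixes switch steps for every character
    return sixes, fast, medium, slow

positions = (1, 1, 1, 1)
for _ in range(30):                 # advance through 30 characters
    positions = step(*positions)
```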
Weaknesses and cryptanalysis
The SIS learned in 1938 of the forthcoming introduction of a new diplomatic cipher from decoded messages. Type B messages began to appear in February 1939.
The Type B had several weaknesses, some in its design, others in the way it was used. Frequency analysis could often make 6 of the 26 letters in the ciphertext alphabet letters stand out from the other 20 letters, which were more uniformly distributed. This suggested the Type B used a similar division of plaintext letters as used in the Type A. The weaker encryption used for the "sixes" was easier to analyze. The sixes cipher turned out to be polyalphabetic with 25 fixed permuted alphabets, each used in succession. The only difference between messages with different indicators was the starting position in the list of alphabets. The SIS team recovered the 25 permutations by 10 April 1939. The frequency analysis was complicated by the presence of romanized Japanese text and the introduction in early May of a Japanese version of the Phillips Code.
Knowing the plaintext of 6 out of 26 letters scattered throughout the message sometimes enabled parts of the rest of the message to be guessed, especially when the writing was highly stylized. Some diplomatic messages included the text of letters from the U.S. government to the Japanese government. The English text of such messages could usually be obtained. Some diplomatic stations did not have the Type B, especially early in its introduction, and sometimes the same message was sent in Type B and in the Type A Red cipher, which the SIS had broken. All these provided cribs for attacking the twenties cipher.
William F. Friedman was assigned to lead the group of cryptographers attacking the B system in August 1939. Even with the cribs, progress was difficult. The permutations used in the twenties cipher were "brilliantly" chosen, according to Friedman, and it became clear that periodicities would be unlikely to be discovered by waiting for enough traffic encrypted on a single indicator, since the plugboard alphabets changed daily. The cryptographers developed a way to transform messages sent on different days with the same indicator into homologous messages that would appear to have been sent on the same day. This provided enough traffic based on the identical settings (6 messages with indicator 59173) to have a chance of finding some periodicity that would reveal the inner workings of the twenties cipher.
On 20 September 1940 at about 2 pm, Genevieve Grotjan, carrying a set of work sheets, walked up to a group of men engrossed in conversation and politely attempted to get Frank Rowlett's attention. She had found evidence of cycles in the twenties cipher. Celebration ensued at this first break into the twenties cipher, and it soon enabled a replica machine to be built. A pair of other messages using indicator 59173 were decrypted by 27 September, coincidentally the date that the Tripartite Agreement between Nazi Germany, Italy, and Japan was announced. There was still a lot of work to do to recover the meaning of the other 119 possible indicators. As of October 1940, one third of the indicator settings had been recovered. From time to time the Japanese instituted new operating procedures to strengthen the Type B system, but these were often described in messages to diplomatic outposts in the older system, giving the Americans warning.
Reconstruction of the Purple machine was based on ideas of Larry Clark. Advances in the understanding of Purple keying procedures were made by Lt Francis A. Raven, USN. After the initial break, Raven discovered that the Japanese had divided the month into three 10-day periods and, within each period, used the keys of the first day with small, predictable changes.
The Japanese believed Type B to be unbreakable throughout the war, and even for some time after the war, even though they had been informed otherwise by the Germans. In April 1941, Hans Thomsen, a diplomat at the German embassy in Washington, D.C., sent a message to Joachim von Ribbentrop, the German foreign minister, informing him that "an absolutely reliable source" had told Thomsen that the Americans had broken the Japanese diplomatic cipher (that is, Purple). That source apparently was Konstantin Umansky, the Soviet ambassador to the US, who had deduced the leak based upon communications from U.S. Undersecretary of State Sumner Welles. The message was duly forwarded to the Japanese; but use of the code continued.
American analogs
The SIS built its first machine that could decrypt Purple messages in late 1940. A second Purple analog was built by the SIS for the US Navy. A third was sent to England in January 1941 on HMS King George V, which had brought Ambassador Halifax to the U.S. That Purple analog was accompanied by a team of four American cryptologists, two Army, two Navy, who received information on British successes against German ciphers in exchange. This machine was subsequently sent to Singapore, and after Japanese moves south through Malaya, on to India. A fourth Purple analog was sent to the Philippines and a fifth was kept by the SIS. A sixth, originally intended for Hawaii, was sent to England for use there. The Purple intercepts proved important in the European theater due to the detailed reports on German plans sent in that cipher by the Japanese ambassador in Berlin.
Fragmentary recovery of Japanese machines
The United States obtained portions of a Purple machine from the Japanese Embassy in Germany following Germany's defeat in 1945 (see image above) and discovered that the Japanese had used a stepping switch almost identical in its construction to the one Leo Rosen of SIS had chosen when building a duplicate (or Purple analog machine) in Washington in 1939 and 1940. The stepping switch was a uniselector; a standard component used in large quantities in automatic telephone exchanges in countries like America, Britain, Canada, Germany and Japan, with extensive dial-telephone systems. The U.S. used four 6-level switches in each stage of its Purple analogs, the Japanese used three 7-level switches. Both represented the 20s cipher identically. Note however that these were not two-motion or Strowger switches as sometimes claimed: twenty-five Strolger-type (sic) stepper switches ...
Apparently, all other Purple machines at Japanese embassies and consulates around the world (e.g. in Axis countries, Washington, London, Moscow, and in neutral countries) and in Japan itself, were destroyed and ground into small particles by the Japanese. American occupation troops in Japan in 1945−52 searched for any remaining units. A complete Jade cipher machine, built on similar principles but without the sixes and twenties separation, was captured and is on display at NSA's National Cryptologic Museum.
Impact of Allied decryption
The Purple machine itself was first used by Japan in June 1938, but American and British cryptanalysts had broken some of its messages well before the attack on Pearl Harbor. US cryptanalysts decrypted and translated Japan's 14-part message to its Washington embassy to break off negotiations with the United States at 1 p.m., Washington time, on 7 December 1941, before the Japanese Embassy in Washington had done so. Decryption and typing difficulties at the embassy, coupled with ignorance of the importance of it being on time, were major reasons for the "Nomura Note" to be delivered late.
During World War II, the Japanese ambassador to Nazi Germany, General Hiroshi Oshima, was well-informed on German military affairs. His reports went to Tokyo in Purple-enciphered radio messages. One had a comment that Hitler told him on 3 June 1941 that "in every probability war with Russia cannot be avoided." In July and August 1942, he toured the Eastern Front, and in 1944, he toured the Atlantic Wall fortifications against invasion along the coasts of France and Belgium. On 4 September, Hitler told him that Germany would strike in the West, probably in November.
Since those messages were being read by the Allies, they provided valuable intelligence about German military preparations against the forthcoming invasion of Western Europe. He was described by General George Marshall as "our main basis of information regarding Hitler's intentions in Europe."
The decrypted Purple traffic and Japanese messages generally were the subject of acrimonious hearings in Congress after World War II in connection with an attempt to decide who, if anyone, had allowed the attack at Pearl Harbor to happen and so should be blamed. It was during those hearings that the Japanese for the first time learned that the Purple cipher machine had indeed been broken. (See the Pearl Harbor advance-knowledge conspiracy theory article for additional detail on the controversy and the investigations.)
The Soviets also succeeded in breaking the Purple system in late 1941, and together with reports from Richard Sorge, learned that Japan was not going to attack the Soviet Union. Instead, its targets were southward, toward Southeast Asia and American and British interests there. That allowed Stalin to move considerable forces from the Far East to Moscow in time to help stop the German push to Moscow in December.
References
Further reading
Big Machines, by Stephen J. Kelley (Aegean Park Press, Walnut Creek, 2001, ) – Contains a lengthy, technically detailed description of the history of the creation of the PURPLE machine, along with its breaking by the US SIS, and an analysis of its cryptographic security and flaws
- Appendix C: Cryptanalysis of the Purple Machine
Clark, Ronald W. "The Man Who Broke Purple: the Life of Colonel William F. Friedman, Who Deciphered the Japanese Code in World War II", September 1977, Little Brown & Co, .
Combined Fleet Decoded by J. Prados
The Story of Magic: Memoirs of an American Cryptologic Pioneer, by Frank B. Rowlett (Aegean Park Press, Laguna Hills, 1998, ) – A first-hand memoir from a lead team member of the team which 'broke' both Red and Purple, it contains detailed descriptions of both 'breaks'
External links
The Japanese Wikipedia article on the Type B machine has much technical information including the substitution tables, detailed stepping algorithm, punctuation codes and a sample decryption. It also has reactions from Japanese sources to the American decryption. Entering the website link https://ja.wikipedia.org/wiki/パープル暗号 into Google Translate and clicking "Translate this page" will provide a serviceable English translation.
Red and Purple: A Story Retold NSA analysts' modern-day attempt to duplicate solving the Red and Purple ciphers. Cryptologic Quarterly Article (NSA), Fall/Winter 1984-1985 - Vol. 3, Nos. 3-4 (last accessed: 22 August 2016).
A web-based Purple Simulator (last accessed: 10 February 2019)
A Purple Machine simulator written in Python
A GUI Purple Machine simulator written in Java
Purple, Coral, and Jade
The Purple Machine Information and a simulator (for very old Windows).
Attack on Pearl Harbor
Encryption devices
Japan–United States relations
World War II Japanese cryptography |
51429 | https://en.wikipedia.org/wiki/Hyperreal%20number | Hyperreal number | In mathematics, the system of hyperreal numbers is a way of treating infinite and infinitesimal (infinitely small but non-zero) quantities. The hyperreals, or nonstandard reals, *R, are an extension of the real numbers R that contains numbers greater than anything of the form
$1 + 1 + \cdots + 1$ (for any finite number of terms).
Such numbers are infinite, and their reciprocals are infinitesimals. The term "hyper-real" was introduced by Edwin Hewitt in 1948.
The hyperreal numbers satisfy the transfer principle, a rigorous version of Leibniz's heuristic law of continuity. The transfer principle states that true first-order statements about R are also valid in *R. For example, the commutative law of addition, $x + y = y + x$, holds for the hyperreals just as it does for the reals; since R is a real closed field, so is *R. Since $\sin(\pi n) = 0$ for all integers n, one also has $\sin(\pi H) = 0$ for all hyperintegers H. The transfer principle for ultrapowers is a consequence of Łoś' theorem of 1955.
Concerns about the soundness of arguments involving infinitesimals date back to ancient Greek mathematics, with Archimedes replacing such proofs with ones using other techniques such as the method of exhaustion. In the 1960s, Abraham Robinson proved that the hyperreals were logically consistent if and only if the reals were. This put to rest the fear that any proof involving infinitesimals might be unsound, provided that they were manipulated according to the logical rules that Robinson delineated.
The application of hyperreal numbers and in particular the transfer principle to problems of analysis is called nonstandard analysis. One immediate application is the definition of the basic concepts of analysis such as the derivative and integral in a direct fashion, without passing via logical complications of multiple quantifiers. Thus, the derivative of f(x) becomes $f'(x) = \operatorname{st}\left(\frac{f(x + \Delta x) - f(x)}{\Delta x}\right)$ for an infinitesimal $\Delta x$, where st(·) denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Similarly, the integral is defined as the standard part of a suitable infinite sum.
The transfer principle
The idea of the hyperreal system is to extend the real numbers R to form a system *R that includes infinitesimal and infinite numbers, but without changing any of the elementary axioms of algebra. Any statement of the form "for any number x..." that is true for the reals is also true for the hyperreals. For example, the axiom that states "for any number x, x + 0 = x" still applies. The same is true for quantification over several numbers, e.g., "for any numbers x and y, xy = yx." This ability to carry over statements from the reals to the hyperreals is called the transfer principle. However, statements of the form "for any set of numbers S ..." may not carry over. The only properties that differ between the reals and the hyperreals are those that rely on quantification over sets, or other higher-level structures such as functions and relations, which are typically constructed out of sets. Each real set, function, and relation has its natural hyperreal extension, satisfying the same first-order properties. The kinds of logical sentences that obey this restriction on quantification are referred to as statements in first-order logic.
The transfer principle, however, does not mean that R and *R have identical behavior. For instance, in *R there exists an element ω such that
\[
1 < \omega, \quad 1 + 1 < \omega, \quad 1 + 1 + 1 < \omega, \quad 1 + 1 + 1 + 1 < \omega, \ \ldots
\]
but there is no such number in R. (In other words, *R is not Archimedean.) This is possible because the nonexistence of ω cannot be expressed as a first-order statement.
Use in analysis
Calculus with algebraic functions
Informal notations for non-real quantities have historically appeared in calculus in two contexts: as infinitesimals, like dx, and as the symbol ∞, used, for example, in limits of integration of improper integrals.
As an example of the transfer principle, the statement that for any nonzero number x, 2x ≠ x, is true for the real numbers, and it is in the form required by the transfer principle, so it is also true for the hyperreal numbers. This shows that it is not possible to use a generic symbol such as ∞ for all the infinite quantities in the hyperreal system; infinite quantities differ in magnitude from other infinite quantities, and infinitesimals from other infinitesimals.
Similarly, the casual use of 1/0 = ∞ is invalid, since the transfer principle applies to the statement that division by zero is undefined. The rigorous counterpart of such a calculation would be that if ε is a non-zero infinitesimal, then 1/ε is infinite.
For any finite hyperreal number x, its standard part, st(x), is defined as the unique real number that differs from it only infinitesimally. The derivative of a function y(x) is defined not as dy/dx but as the standard part of the corresponding difference quotient.
For example, to find the derivative f′(x) of the function f(x) = x², let dx be a non-zero infinitesimal. Then,
\[
\begin{aligned}
f'(x) &= \operatorname{st}\left(\frac{f(x + dx) - f(x)}{dx}\right) \\
&= \operatorname{st}\left(\frac{(x + dx)^2 - x^2}{dx}\right) \\
&= \operatorname{st}\left(\frac{x^2 + 2x\,dx + dx^2 - x^2}{dx}\right) \\
&= \operatorname{st}\left(\frac{2x\,dx + dx^2}{dx}\right) \\
&= \operatorname{st}(2x + dx) \\
&= 2x.
\end{aligned}
\]
The use of the standard part in the definition of the derivative is a rigorous alternative to the traditional practice of neglecting the square of an infinitesimal quantity. Dual numbers are a number system based on this idea. After the third line of the differentiation above, the typical method from Newton through the 19th century would have been simply to discard the dx² term. In the hyperreal system, dx² ≠ 0, since dx is nonzero, and the transfer principle can be applied to the statement that the square of any nonzero number is nonzero. However, the quantity dx² is infinitesimally small compared to dx; that is, the hyperreal system contains a hierarchy of infinitesimal quantities.
Integration
One way of defining a definite integral in the hyperreal system is as the standard part of an infinite sum on a hyperfinite lattice defined as $a,\ a + dx,\ a + 2\,dx,\ \ldots,\ a + (n-1)\,dx,\ b$, where dx is infinitesimal, n is an infinite hypernatural, and the lower and upper bounds of integration are a and b = a + n dx.
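Written out, with the hyperfinite sum taken over the lattice points and st(·) the standard part, this definition reads (a sketch of the construction just described):

\[
\int_a^b f(x)\,dx \;=\; \operatorname{st}\!\left(\sum_{k=0}^{n-1} f(a + k\,dx)\, dx\right),
\qquad dx = \frac{b-a}{n},\quad n \text{ an infinite hypernatural}.
\]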
Properties
The hyperreals *R form an ordered field containing the reals R as a subfield. Unlike the reals, the hyperreals do not form a standard metric space, but by virtue of their order they carry an order topology.
The use of the definite article the in the phrase the hyperreal numbers is somewhat misleading in that there is not a unique ordered field that is referred to in most treatments.
However, a 2003 paper by Vladimir Kanovei and Saharon Shelah shows that there is a definable, countably saturated (meaning ω-saturated, but not, of course, countable) elementary extension of the reals, which therefore has a good claim to the title of the hyperreal numbers. Furthermore, the field obtained by the ultrapower construction from the space of all real sequences, is unique up to isomorphism if one assumes the continuum hypothesis.
The condition of being a hyperreal field is a stronger one than that of being a real closed field strictly containing R. It is also stronger than that of being a superreal field in the sense of Dales and Woodin.
Development
The hyperreals can be developed either axiomatically or by more constructively oriented methods. The essence of the axiomatic approach is to assert (1) the existence of at least one infinitesimal number, and (2) the validity of the transfer principle. In the following subsection we give a detailed outline of a more constructive approach. This method allows one to construct the hyperreals if given a set-theoretic object called an ultrafilter, but the ultrafilter itself cannot be explicitly constructed.
From Leibniz to Robinson
When Newton and (more explicitly) Leibniz introduced differentials, they used infinitesimals and these were still regarded as useful by later mathematicians such as Euler and Cauchy. Nonetheless these concepts were from the beginning seen as suspect, notably by George Berkeley. Berkeley's criticism centered on a perceived shift in hypothesis in the definition of the derivative in terms of infinitesimals (or fluxions), where dx is assumed to be nonzero at the beginning of the calculation, and to vanish at its conclusion (see Ghosts of departed quantities for details). When in the 1800s calculus was put on a firm footing through the development of the (ε, δ)-definition of limit by Bolzano, Cauchy, Weierstrass, and others, infinitesimals were largely abandoned, though research in non-Archimedean fields continued (Ehrlich 2006).
However, in the 1960s Abraham Robinson showed how infinitely large and infinitesimal numbers can be rigorously defined and used to develop the field of nonstandard analysis. Robinson developed his theory nonconstructively, using model theory; however it is possible to proceed using only algebra and topology, and proving the transfer principle as a consequence of the definitions. In other words hyperreal numbers per se, aside from their use in nonstandard analysis, have no necessary relationship to model theory or first order logic, although they were discovered by the application of model theoretic techniques from logic. Hyper-real fields were in fact originally introduced by Hewitt (1948) by purely algebraic techniques, using an ultrapower construction.
The ultrapower construction
We are going to construct a hyperreal field via sequences of reals. In fact we can add and multiply sequences componentwise; for example:
\[
(a_0, a_1, a_2, \ldots) + (b_0, b_1, b_2, \ldots) = (a_0 + b_0,\ a_1 + b_1,\ a_2 + b_2,\ \ldots)
\]
and analogously for multiplication.
This turns the set of such sequences into a commutative ring, which is in fact a real algebra A. We have a natural embedding of R in A by identifying the real number r with the sequence (r, r, r, …) and this identification preserves the corresponding algebraic operations of the reals. The intuitive motivation is, for example, to represent an infinitesimal number using a sequence that approaches zero. The inverse of such a sequence would represent an infinite number. As we will see below, the difficulties arise because of the need to define rules for comparing such sequences in a manner that, although inevitably somewhat arbitrary, must be self-consistent and well defined. For example, we may have two sequences that differ in their first n members, but are equal after that; such sequences should clearly be considered as representing the same hyperreal number. Similarly, most sequences oscillate randomly forever, and we must find some way of taking such a sequence and interpreting it as, say, the sum of an ordinary real number and a certain infinitesimal number.
Comparing sequences is thus a delicate matter. We could, for example, try to define a relation between sequences in a componentwise fashion:
\[
(a_0, a_1, a_2, \ldots) \leq (b_0, b_1, b_2, \ldots) \quad\iff\quad a_0 \leq b_0,\ a_1 \leq b_1,\ a_2 \leq b_2,\ \ldots
\]
but here we run into trouble, since some entries of the first sequence may be bigger than the corresponding entries of the second sequence, and some others may be smaller. It follows that the relation defined in this way is only a partial order. To get around this, we have to specify which positions matter. Since there are infinitely many indices, we don't want finite sets of indices to matter. A consistent choice of index sets that matter is given by any free ultrafilter U on the natural numbers; these can be characterized as ultrafilters that do not contain any finite sets. (The good news is that Zorn's lemma guarantees the existence of many such U; the bad news is that they cannot be explicitly constructed.) We think of U as singling out those sets of indices that "matter": We write (a0, a1, a2, ...) ≤ (b0, b1, b2, ...) if and only if the set of natural numbers { n : an ≤ bn } is in U.
This is a total preorder and it turns into a total order if we agree not to distinguish between two sequences a and b if a ≤ b and b ≤ a. With this identification, the ordered field *R of hyperreals is constructed. From an algebraic point of view, U allows us to define a corresponding maximal ideal I in the commutative ring A (namely, the set of the sequences that vanish in some element of U), and then to define *R as A/I; as the quotient of a commutative ring by a maximal ideal, *R is a field. This is also notated A/U, directly in terms of the free ultrafilter U; the two are equivalent. The maximality of I follows from the possibility of, given a sequence a, constructing a sequence b inverting the non-null elements of a and not altering its null entries. If the set on which a vanishes is not in U, the product ab is identified with the number 1, and any ideal containing 1 must be A. In the resulting field, these a and b are inverses.
The field A/U is an ultrapower of R.
Since this field contains R it has cardinality at least that of the continuum. Since A has cardinality
\[
(2^{\aleph_0})^{\aleph_0} = 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0},
\]
it is also no larger than $2^{\aleph_0}$, and hence has the same cardinality as R.
One question we might ask is whether, if we had chosen a different free ultrafilter V, the quotient field A/U would be isomorphic as an ordered field to A/V. This question turns out to be equivalent to the continuum hypothesis; in ZFC with the continuum hypothesis we can prove this field is unique up to order isomorphism, and in ZFC with the negation of continuum hypothesis we can prove that there are non-order-isomorphic pairs of fields that are both countably indexed ultrapowers of the reals.
For more information about this method of construction, see ultraproduct.
An intuitive approach to the ultrapower construction
The following is an intuitive way of understanding the hyperreal numbers. The approach taken here is very close to the one in the book by Goldblatt. Recall that the sequences converging to zero are sometimes called infinitely small. These are almost the infinitesimals in a sense; the true infinitesimals include certain classes of sequences that contain a sequence converging to zero.
Let us see where these classes come from. Consider first the sequences of real numbers. They form a ring, that is, one can multiply, add and subtract them, but not necessarily divide by a non-zero element. The real numbers are considered as the constant sequences, and a sequence is zero if it is identically zero, that is, $a_n = 0$ for all n.
In our ring of sequences one can get ab = 0 with neither a = 0 nor b = 0. Thus, if for two sequences one has ab = 0, at least one of them should be declared zero. Surprisingly enough, there is a consistent way to do it. As a result, the equivalence classes of sequences that differ by some sequence declared zero will form a field, which is called a hyperreal field. It will contain the infinitesimals in addition to the ordinary real numbers, as well as infinitely large numbers (the reciprocals of infinitesimals, including those represented by sequences diverging to infinity). Also every hyperreal that is not infinitely large will be infinitely close to an ordinary real, in other words, it will be the sum of an ordinary real and an infinitesimal.
This construction is parallel to the construction of the reals from the rationals given by Cantor. He started with the ring of the Cauchy sequences of rationals and declared all the sequences that converge to zero to be zero. The result is the reals. To continue the construction of hyperreals, consider the zero sets of our sequences, that is, the sets $z(a) = \{ n : a_n = 0 \}$, the sets of indexes on which each sequence vanishes. It is clear that if $ab = 0$, then the union of $z(a)$ and $z(b)$ is N (the set of all natural numbers), so:
One of the sequences that vanish on two complementary sets should be declared zero
If a is declared zero, ab should be declared zero too, no matter what b is.
If both a and b are declared zero, then a^2 + b^2 should also be declared zero.
Now the idea is to single out a collection U of subsets X of N and to declare that a = 0 if and only if z(a) belongs to U. From the above conditions one can see that:
Of any two complementary sets, one belongs to U
Any set having a subset that belongs to U, also belongs to U.
An intersection of any two sets belonging to U belongs to U.
Finally, we do not want the empty set to belong to U because then everything would belong to U, as every set has the empty set as a subset.
Any family of sets that satisfies (2)–(4) is called a filter (an example: the complements of the finite sets; this is called the Fréchet filter and it is used in the usual limit theory). If (1) also holds, U is called an ultrafilter (because you can add no more sets to it without breaking it). The only explicitly known examples of ultrafilters are the families of sets containing a given element (in our case, say, the number 10). Such ultrafilters are called trivial, and if we use one in our construction, we come back to the ordinary real numbers. Any ultrafilter containing a finite set is trivial. It is known that any filter can be extended to an ultrafilter, but the proof uses the axiom of choice. The existence of a nontrivial ultrafilter (the ultrafilter lemma) can be added as an extra axiom, as it is weaker than the axiom of choice.
Now if we take a nontrivial ultrafilter (which is an extension of the Fréchet filter) and do our construction, we get the hyperreal numbers as a result.
If f is a real function of a real variable x then f naturally extends to a hyperreal function of a hyperreal variable by composition: f([a_n]) = [f(a_n)],
where [a_n] means "the equivalence class of the sequence (a_n) relative to our ultrafilter", two sequences being in the same class if and only if the zero set of their difference belongs to our ultrafilter.
All the arithmetical expressions and formulas make sense for hyperreals and hold true if they are true for the ordinary reals. It turns out that any finite hyperreal x (that is, one satisfying |x| < a for some ordinary real a) will be of the form y + d, where y is an ordinary real (called standard) and d is an infinitesimal. This can be proven by the bisection method used in proving the Bolzano–Weierstrass theorem; the property (1) of ultrafilters turns out to be crucial.
Properties of infinitesimal and infinite numbers
The finite elements F of *R form a local ring, and in fact a valuation ring, with the unique maximal ideal S being the infinitesimals; the quotient F/S is isomorphic to the reals. Hence we have a homomorphic mapping, st(x), from F to R whose kernel consists of the infinitesimals and which sends every element x of F to a unique real number whose difference from x is in S; which is to say, x – st(x) is infinitesimal. Put another way, every finite nonstandard real number is "very close" to a unique real number, in the sense that if x is a finite nonstandard real, then there exists one and only one real number st(x) such that x – st(x) is infinitesimal. This number st(x) is called the standard part of x, conceptually the same as rounding x to the nearest real number. This operation is an order-preserving homomorphism and hence is well-behaved both algebraically and order theoretically. It is order-preserving though not isotonic; that is, x ≤ y implies st(x) ≤ st(y), but x < y does not imply st(x) < st(y).
If both x and y are finite, then st(x + y) = st(x) + st(y) and st(xy) = st(x) st(y).
If x is finite and not infinitesimal, then st(1/x) = 1/st(x).
x is real if and only if st(x) = x.
The map st is continuous with respect to the order topology on the finite hyperreals; in fact it is locally constant.
Hyperreal fields
Suppose X is a Tychonoff space, also called a T3.5 space, and C(X) is the algebra of continuous real-valued functions on X. Suppose M is a maximal ideal in C(X). Then the factor algebra A = C(X)/M is a totally ordered field F containing the reals. If F strictly contains R then M is called a hyperreal ideal (terminology due to Hewitt (1948)) and F a hyperreal field. Note that no assumption is being made that the cardinality of F is greater than that of R; it can in fact have the same cardinality.
An important special case is where the topology on X is the discrete topology; in this case X can be identified with a cardinal number κ and C(X) with the real algebra R^κ of functions from κ to R. The hyperreal fields we obtain in this case are called ultrapowers of R and are identical to the ultrapowers constructed via free ultrafilters in model theory.
See also
- Surreal numbers are a much larger class of numbers that contains the hyperreals as well as other classes of non-real numbers.
References
Further reading
Hatcher, William S. (1982) "Calculus is Algebra", American Mathematical Monthly 89: 362–370.
Hewitt, Edwin (1948) "Rings of real-valued continuous functions. I", Transactions of the American Mathematical Society 64: 45–99.
Keisler, H. Jerome (1994) "The hyperreal line", in Real Numbers, Generalizations of the Reals, and Theories of Continua, 207–237, Synthese Library 242, Kluwer Academic Publishers, Dordrecht.
External links
Crowell, Calculus. A text using infinitesimals.
Hermoso, Nonstandard Analysis and the Hyperreals. A gentle introduction.
Keisler, Elementary Calculus: An Approach Using Infinitesimals. Includes an axiomatic treatment of the hyperreals, and is freely available under a Creative Commons license
Stroyan, A Brief Introduction to Infinitesimal Calculus (Lecture 1, Lecture 2, Lecture 3)
Mathematical analysis
Nonstandard analysis
Field (mathematics)
Real closed field
Infinity
Mathematics of infinitesimals |
51910 | https://en.wikipedia.org/wiki/Quantum%20key%20distribution | Quantum key distribution | Quantum key distribution (QKD) is a secure communication method which implements a cryptographic protocol involving components of quantum mechanics. It enables two parties to produce a shared random secret key known only to them, which can then be used to encrypt and decrypt messages. It is often incorrectly called quantum cryptography, as it is the best-known example of a quantum cryptographic task.
An important and unique property of quantum key distribution is the ability of the two communicating users to detect the presence of any third party trying to gain knowledge of the key. This results from a fundamental aspect of quantum mechanics: the process of measuring a quantum system in general disturbs the system. A third party trying to eavesdrop on the key must in some way measure it, thus introducing detectable anomalies. By using quantum superpositions or quantum entanglement and transmitting information in quantum states, a communication system can be implemented that detects eavesdropping. If the level of eavesdropping is below a certain threshold, a key can be produced that is guaranteed to be secure (i.e., the eavesdropper has no information about it), otherwise no secure key is possible and communication is aborted.
The security of encryption that uses quantum key distribution relies on the foundations of quantum mechanics, in contrast to traditional public key cryptography, which relies on the computational difficulty of certain mathematical functions, and cannot provide any mathematical proof as to the actual complexity of reversing the one-way functions used. QKD has provable security based on information theory, and forward secrecy.
The main drawback of quantum key distribution is that it usually relies on having an authenticated classical channel of communications. In modern cryptography, having an authenticated classical channel means that one has either already exchanged a symmetric key of sufficient length or public keys of sufficient security level. With such information already available, in practice one can achieve authenticated and sufficiently secure communications without using QKD, such as by using the Galois/Counter Mode of the Advanced Encryption Standard. Thus QKD does the work of a stream cipher at many times the cost. Noted security expert Bruce Schneier remarked that quantum key distribution is "as useless as it is expensive".
Quantum key distribution is only used to produce and distribute a key, not to transmit any message data. This key can then be used with any chosen encryption algorithm to encrypt (and decrypt) a message, which can then be transmitted over a standard communication channel. The algorithm most commonly associated with QKD is the one-time pad, as it is provably secure when used with a secret, random key. In real-world situations, it is often also used with encryption using symmetric key algorithms like the Advanced Encryption Standard algorithm.
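As an illustration, the following is a minimal Python sketch of using a QKD-derived key as a one-time pad. The key bytes are simulated here with the standard library's secrets module rather than produced by an actual QKD link, and the message is purely illustrative.

import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # a one-time pad requires a key at least as long as the message, used only once
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the usual place"
qkd_key = secrets.token_bytes(len(message))       # stand-in for a key produced by QKD
ciphertext = xor_bytes(message, qkd_key)
assert xor_bytes(ciphertext, qkd_key) == message  # decryption is the same XOR operation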
Quantum key exchange
Quantum communication involves encoding information in quantum states, or qubits, as opposed to classical communication's use of bits. Usually, photons are used for these quantum states. Quantum key distribution exploits certain properties of these quantum states to ensure its security. There are several different approaches to quantum key distribution, but they can be divided into two main categories depending on which property they exploit.
Prepare and measure protocols In contrast to classical physics, the act of measurement is an integral part of quantum mechanics. In general, measuring an unknown quantum state changes that state in some way. This is a consequence of quantum indeterminacy and can be exploited in order to detect any eavesdropping on communication (which necessarily involves measurement) and, more importantly, to calculate the amount of information that has been intercepted.
Entanglement based protocols The quantum states of two (or more) separate objects can become linked together in such a way that they must be described by a combined quantum state, not as individual objects. This is known as entanglement and means that, for example, performing a measurement on one object affects the other. If an entangled pair of objects is shared between two parties, anyone intercepting either object alters the overall system, revealing the presence of the third party (and the amount of information they have gained).
These two approaches can each be further divided into three families of protocols: discrete variable, continuous variable and distributed phase reference coding. Discrete variable protocols were the first to be invented, and they remain the most widely implemented. The other two families are mainly concerned with overcoming practical limitations of experiments. The two protocols described below both use discrete variable coding.
BB84 protocol: Charles H. Bennett and Gilles Brassard (1984)
This protocol, known as BB84 after its inventors and year of publication, was originally described using photon polarization states to transmit the information. However, any two pairs of conjugate states can be used for the protocol, and many optical-fibre-based implementations described as BB84 use phase encoded states. The sender (traditionally referred to as Alice) and the receiver (Bob) are connected by a quantum communication channel which allows quantum states to be transmitted. In the case of photons this channel is generally either an optical fibre or simply free space. In addition they communicate via a public classical channel, for example using broadcast radio or the internet. The protocol is designed with the assumption that an eavesdropper (referred to as Eve) can interfere in any way with the quantum channel, while the classical channel needs to be authenticated.
The security of the protocol comes from encoding the information in non-orthogonal states. Quantum indeterminacy means that these states cannot in general be measured without disturbing the original state (see No-cloning theorem). BB84 uses two pairs of states, with each pair conjugate to the other pair, and the two states within a pair orthogonal to each other. Pairs of orthogonal states are referred to as a basis. The usual polarization state pairs used are either the rectilinear basis of vertical (0°) and horizontal (90°), the diagonal basis of 45° and 135° or the circular basis of left- and right-handedness. Any two of these bases are conjugate to each other, and so any two can be used in the protocol. Below the rectilinear and diagonal bases are used.
The first step in BB84 is quantum transmission. Alice creates a random bit (0 or 1) and then randomly selects one of her two bases (rectilinear or diagonal in this case) to transmit it in. She then prepares a photon polarization state depending both on the bit value and basis, as shown in the adjacent table. So for example a 0 is encoded in the rectilinear basis (+) as a vertical polarization state, and a 1 is encoded in the diagonal basis (x) as a 135° state. Alice then transmits a single photon in the state specified to Bob, using the quantum channel. This process is then repeated from the random bit stage, with Alice recording the state, basis and time of each photon sent.
According to quantum mechanics (particularly quantum indeterminacy), no possible measurement distinguishes between the 4 different polarization states, as they are not all orthogonal. The only possible measurement is between any two orthogonal states (an orthonormal basis). So, for example, measuring in the rectilinear basis gives a result of horizontal or vertical. If the photon was created as horizontal or vertical (as a rectilinear eigenstate) then this measures the correct state, but if it was created as 45° or 135° (diagonal eigenstates) then the rectilinear measurement instead returns either horizontal or vertical at random. Furthermore, after this measurement the photon is polarized in the state it was measured in (horizontal or vertical), with all information about its initial polarization lost.
As Bob does not know the basis the photons were encoded in, all he can do is to select a basis at random to measure in, either rectilinear or diagonal. He does this for each photon he receives, recording the time, measurement basis used and measurement result. After Bob has measured all the photons, he communicates with Alice over the public classical channel. Alice broadcasts the basis each photon was sent in, and Bob the basis each was measured in. They both discard photon measurements (bits) where Bob used a different basis, which is half on average, leaving half the bits as a shared key.
To check for the presence of an eavesdropper, Alice and Bob now compare a predetermined subset of their remaining bit strings. If a third party (usually referred to as Eve, for "eavesdropper") has gained any information about the photons' polarization, this introduces errors in Bob's measurements. Other environmental conditions can cause errors in a similar fashion. If more than p bits differ they abort the key and try again, possibly with a different quantum channel, as the security of the key cannot be guaranteed. p is chosen so that if the number of bits known to Eve is less than this, privacy amplification can be used to reduce Eve's knowledge of the key to an arbitrarily small amount at the cost of reducing the length of the key.
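The following is a minimal Python sketch of the quantum transmission and sifting steps described above, assuming an ideal noiseless channel and no eavesdropper. Bit values 0/1 and basis labels 0/1 stand in for the polarization states, and all names are illustrative.

import secrets

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

n = 1000
alice_bits = random_bits(n)
alice_bases = random_bits(n)                      # 0 = rectilinear (+), 1 = diagonal (x)
bob_bases = random_bits(n)

# Bob obtains the encoded bit when his basis matches Alice's, and a random bit otherwise
bob_results = [bit if ab == bb else secrets.randbelow(2)
               for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: after the public basis announcement, keep only positions where the bases agree
sifted_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
sifted_bob = [b for b, ab, bb in zip(bob_results, alice_bases, bob_bases) if ab == bb]

assert sifted_alice == sifted_bob                 # no noise and no Eve in this toy model
print(f"{len(sifted_alice)} sifted key bits from {n} transmitted photons")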
E91 protocol: Artur Ekert (1991)
Artur Ekert's scheme uses entangled pairs of photons. These can be created by Alice, by Bob, or by some source separate from both of them, including eavesdropper Eve. The photons are distributed so that Alice and Bob each end up with one photon from each pair.
The scheme relies on two properties of entanglement. First, the entangled states are perfectly correlated in the sense that if Alice and Bob both measure whether their particles have vertical or horizontal polarizations, they always get the same answer with 100% probability. The same is true if they both measure any other pair of complementary (orthogonal) polarizations. This necessitates that the two distant parties have exact directionality synchronization. However, the particular results are completely random; it is impossible for Alice to predict if she (and thus Bob) will get vertical polarization or horizontal polarization. Second, any attempt at eavesdropping by Eve destroys these correlations in a way that Alice and Bob can detect.
Similarly to BB84, the protocol involves a private measurement stage before detecting the presence of Eve. In the measurement stage Alice measures each photon she receives using a basis chosen from a set of three possible analyzer orientations, while Bob chooses from a set of three orientations of which two coincide with Alice's. They keep their series of basis choices private until measurements are completed. Two groups of photons are made: the first consists of photons measured using the same basis by Alice and Bob, while the second contains all other photons. To detect eavesdropping, they can compute the test statistic S from the correlation coefficients between Alice's bases and Bob's, similar to that shown in the Bell test experiments. Maximally entangled photons would result in |S| = 2√2. If this were not the case, then Alice and Bob can conclude Eve has introduced local realism to the system, so that the Bell inequality is no longer violated. If the protocol is successful, the first group can be used to generate keys since those photons are completely anti-aligned between Alice and Bob.
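The following few lines of Python check the quoted value of the test statistic numerically for a spin-singlet correlation model. The analyzer angles below are a standard CHSH choice used for illustration, not necessarily the exact E91 settings.

from math import cos, pi, sqrt

def E(a, b):
    # quantum-mechanical correlation for a singlet pair measured at analyzer angles a and b
    return -cos(a - b)

a1, a2 = 0.0, pi / 2           # Alice's two test angles (illustrative choice)
b1, b2 = pi / 4, 3 * pi / 4    # Bob's two test angles (illustrative choice)

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), 2 * sqrt(2))     # |S| reaches 2*sqrt(2), about 2.83, above the local-realism bound of 2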
Information reconciliation and privacy amplification
The quantum key distribution protocols described above provide Alice and Bob with nearly identical shared keys, and also with an estimate of the discrepancy between the keys. These differences can be caused by eavesdropping, but also by imperfections in the transmission line and detectors. As it is impossible to distinguish between these two types of errors, guaranteed security requires the assumption that all errors are due to eavesdropping. Provided the error rate between the keys is lower than a certain threshold (27.6% as of 2002), two steps can be performed to first remove the erroneous bits and then reduce Eve's knowledge of the key to an arbitrary small value. These two steps are known as information reconciliation and privacy amplification respectively, and were first described in 1992.
Information reconciliation is a form of error correction carried out between Alice and Bob's keys, in order to ensure both keys are identical. It is conducted over the public channel, and as such it is vital to minimise the information sent about each key, as this can be read by Eve. A common protocol used for information reconciliation is the cascade protocol, proposed in 1994. This operates in several rounds, with both keys divided into blocks in each round and the parity of those blocks compared. If a difference in parity is found then a binary search is performed to find and correct the error. If an error is found in a block from a previous round that had correct parity then another error must be contained in that block; this error is found and corrected as before. This process is repeated recursively, which is the source of the cascade name. After all blocks have been compared, Alice and Bob both reorder their keys in the same random way, and a new round begins. At the end of multiple rounds Alice and Bob have identical keys with high probability; however, Eve has additional information about the key from the parity information exchanged. From a coding-theory point of view, information reconciliation is essentially source coding with side information; in consequence, any coding scheme that works for this problem can be used for information reconciliation. More recently turbo codes, LDPC codes and polar codes have been used for this purpose, improving on the efficiency of the cascade protocol.
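The following is a minimal Python sketch of the parity comparison and binary search that underlie a single cascade round, assuming exactly one error in the block; a real implementation shuffles the keys between rounds and handles multiple errors, and the example key blocks are illustrative.

def parity(bits):
    return sum(bits) % 2

def locate_error(alice_block, bob_block, revealed):
    # binary search for one flipped bit in a block whose parities disagree;
    # `revealed` counts parity bits disclosed over the public channel
    lo, hi = 0, len(alice_block)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        revealed[0] += 1
        if parity(alice_block[lo:mid]) != parity(bob_block[lo:mid]):
            hi = mid
        else:
            lo = mid
    return lo

alice = [1, 0, 1, 1, 0, 0, 1, 0]
bob   = [1, 0, 1, 0, 0, 0, 1, 0]   # differs from Alice's block in one position
revealed = [1]                     # the whole-block parities are compared first
if parity(alice) != parity(bob):
    bob[locate_error(alice, bob, revealed)] ^= 1
print(bob == alice, f"parity bits revealed: {revealed[0]}")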
Privacy amplification is a method for reducing (and effectively eliminating) Eve's partial information about Alice and Bob's key. This partial information could have been gained both by eavesdropping on the quantum channel during key transmission (thus introducing detectable errors), and on the public channel during information reconciliation (where it is assumed Eve gains all possible parity information). Privacy amplification uses Alice and Bob's key to produce a new, shorter key, in such a way that Eve has only negligible information about the new key. This can be done using a universal hash function, chosen at random from a publicly known set of such functions, which takes as its input a binary string of length equal to the key and outputs a binary string of a chosen shorter length. The amount by which this new key is shortened is calculated, based on how much information Eve could have gained about the old key (which is known due to the errors this would introduce), in order to reduce the probability of Eve having any knowledge of the new key to a very low value.
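The following is a minimal Python sketch of privacy amplification using a random binary matrix as the universal hash. Practical systems typically use Toeplitz hashing seeded by a short public random string, and the key lengths here are arbitrary illustrative values.

import secrets

def privacy_amplify(key_bits, out_len):
    # multiply the key by a randomly chosen binary matrix, modulo 2
    n = len(key_bits)
    matrix = [[secrets.randbelow(2) for _ in range(n)] for _ in range(out_len)]
    return [sum(m * k for m, k in zip(row, key_bits)) % 2 for row in matrix]

reconciled_key = [secrets.randbelow(2) for _ in range(128)]
final_key = privacy_amplify(reconciled_key, 96)   # shorten the key to squeeze out Eve's partial information
print(len(final_key), final_key[:8])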
Implementations
Experimental
In 2008, exchange of secure keys at 1 Mbit/s (over 20 km of optical fibre) and 10 kbit/s (over 100 km of fibre), was achieved by a collaboration between the University of Cambridge and Toshiba using the BB84 protocol with decoy state pulses.
In 2007, Los Alamos National Laboratory/NIST achieved quantum key distribution over a 148.7 km of optic fibre using the BB84 protocol. Significantly, this distance is long enough for almost all the spans found in today's fibre networks. A European collaboration achieved free space QKD over 144 km between two of the Canary Islands using entangled photons (the Ekert scheme) in 2006, and using BB84 enhanced with decoy states in 2007.
The longest distance for optical fiber at the time (307 km) was achieved by the University of Geneva and Corning Inc. In the same experiment, a secret key rate of 12.7 kbit/s was generated, making it the highest bit rate system over distances of 100 km. In 2016 a team from Corning and various institutions in China achieved a distance of 404 km, but at a bit rate too slow to be practical.
In June 2017, physicists led by Thomas Jennewein at the Institute for Quantum Computing and the University of Waterloo in Waterloo, Canada achieved the first demonstration of quantum key distribution from a ground transmitter to a moving aircraft. They reported optical links with distances between 3–10 km and generated secure keys up to 868 kilobytes in length.
Also in June 2017, as part of the Quantum Experiments at Space Scale project, Chinese physicists led by Pan Jianwei at the University of Science and Technology of China measured entangled photons over a distance of 1203 km between two ground stations, laying the groundwork for future intercontinental quantum key distribution experiments. Photons were sent from one ground station to the satellite they had named Micius and back down to another ground station, where they "observed a survival of two-photon entanglement and a violation of Bell inequality by 2.37 ± 0.09 under strict Einstein locality conditions" along a "summed length varying from 1600 to 2400 kilometers." Later that year BB84 was successfully implemented over satellite links from Micius to ground stations in China and Austria. The keys were combined and the result was used to transmit images and video between Beijing, China, and Vienna, Austria.
In May 2019 a group led by Hong Guo at Peking University and Beijing University of Posts and Telecommunications reported field tests of a continuous-variable QKD system through commercial fiber networks in Xi'an and Guangzhou over distances of 30.02 km (12.48 dB) and 49.85 km (11.62 dB) respectively.
In December 2020, the Indian Defence Research and Development Organisation tested QKD between two of its laboratories at its Hyderabad facility. The setup also demonstrated the detection of a third party trying to gain knowledge of the communication. Quantum-based security against eavesdropping was validated for the deployed system over a fibre optic channel with 10 dB attenuation. A continuous wave laser source was used to generate photons without depolarization effect, and the timing accuracy employed in the setup was of the order of picoseconds. Single photon avalanche detectors (SPADs) recorded the arrival of photons, and the key rate achieved was in the kbps range with a low quantum bit error rate.
In March 2021, the Indian Space Research Organisation also demonstrated free-space quantum communication over a distance of 300 meters. A free-space QKD was demonstrated at the Space Applications Centre (SAC), Ahmedabad, between two line-of-sight buildings within the campus, for video conferencing using quantum-key-encrypted signals. The experiment utilised a NAVIC receiver for time synchronization between the transmitter and receiver modules. Later, in January 2022, Indian scientists successfully created an atmospheric channel for the exchange of encrypted messages and images. After demonstrating quantum communication between two ground stations, India plans to develop Satellite Based Quantum Communication (SBQC).
Commercial
There are currently six companies offering commercial quantum key distribution systems around the world: ID Quantique (Geneva), MagiQ Technologies, Inc. (New York), QNu Labs (Bengaluru, India), QuintessenceLabs (Australia), QRate (Russia) and SeQureNet (Paris). Several other companies also have active research programs, including Toshiba, HP, IBM, Mitsubishi, NEC and NTT (see External links for direct research links).
In 2004, the world's first bank transfer using quantum key distribution was carried out in Vienna, Austria. Quantum encryption technology provided by the Swiss company Id Quantique was used in the Swiss canton (state) of Geneva to transmit ballot results to the capital in the national election occurring on 21 October 2007. In 2013, Battelle Memorial Institute installed a QKD system built by ID Quantique between their main campus in Columbus, Ohio and their manufacturing facility in nearby Dublin. Field tests of the Tokyo QKD network have been underway for some time.
Quantum key distribution networks
DARPA
The DARPA Quantum Network was a 10-node quantum key distribution network that ran continuously for four years, 24 hours a day, from 2004 to 2007 in Massachusetts in the United States. It was developed by BBN Technologies, Harvard University and Boston University, with collaboration from IBM Research, the National Institute of Standards and Technology, and QinetiQ. It supported a standards-based Internet computer network protected by quantum key distribution.
SECOQC
The world's first computer network protected by quantum key distribution was implemented in October 2008, at a scientific conference in Vienna. The network, named SECOQC (Secure Communication Based on Quantum Cryptography), was funded by the EU. It used 200 km of standard fibre optic cable to interconnect six locations across Vienna and the town of St Poelten located 69 km to the west.
SwissQuantum
Id Quantique has successfully completed the longest running project for testing Quantum Key Distribution (QKD) in a field environment. The main goal of the SwissQuantum network project installed in the Geneva metropolitan area in March 2009, was to validate the reliability and robustness of QKD in continuous operation over a long time period in a field environment. The quantum layer operated for nearly 2 years until the project was shut down in January 2011 shortly after the initially planned duration of the test.
Chinese networks
In May 2009, a hierarchical quantum network was demonstrated in Wuhu, China. The hierarchical network consisted of a backbone network of four nodes connecting a number of subnets. The backbone nodes were connected through an optical switching quantum router. Nodes within each subnet were also connected through an optical switch, which were connected to the backbone network through a trusted relay.
Launched in August 2016, the QUESS space mission created an international QKD channel between China and the Institute for Quantum Optics and Quantum Information in Vienna, Austria, enabling the first intercontinental secure quantum video call. By October 2017, a 2,000-km fiber line was operational between Beijing, Jinan, Hefei and Shanghai. Together they constitute the world's first space-ground quantum network. Up to 10 Micius/QUESS satellites are expected, allowing a European–Asian quantum-encrypted network by 2020, and a global network by 2030.
Tokyo QKD Network
The Tokyo QKD Network was inaugurated on the first day of the UQCC2010 conference. The network involves an international collaboration between 7 partners; NEC, Mitsubishi Electric, NTT and NICT from Japan, and participation from Europe by Toshiba Research Europe Ltd. (UK), Id Quantique (Switzerland) and All Vienna (Austria). "All Vienna" is represented by researchers from the Austrian Institute of Technology (AIT), the Institute for Quantum Optics and Quantum Information (IQOQI) and the University of Vienna.
Los Alamos National Laboratory
A hub-and-spoke network has been operated by Los Alamos National Laboratory since 2011. All messages are routed via the hub. The system equips each node in the network with quantum transmitters—i.e., lasers—but not with expensive and bulky photon detectors. Only the hub receives quantum messages. To communicate, each node sends a one-time pad to the hub, which it then uses to communicate securely over a classical link. The hub can route this message to another node using another one time pad from the second node. The entire network is secure only if the central hub is secure. Individual nodes require little more than a laser: Prototype nodes are around the size of a box of matches.
Attacks and security proofs
Intercept and resend
The simplest type of possible attack is the intercept-resend attack, where Eve measures the quantum states (photons) sent by Alice and then sends replacement states to Bob, prepared in the state she measures. In the BB84 protocol, this produces errors in the key Alice and Bob share. As Eve has no knowledge of the basis a state sent by Alice is encoded in, she can only guess which basis to measure in, in the same way as Bob. If she chooses correctly, she measures the correct photon polarization state as sent by Alice, and resends the correct state to Bob. However, if she chooses incorrectly, the state she measures is random, and the state sent to Bob cannot be the same as the state sent by Alice. If Bob then measures this state in the same basis Alice sent, he too gets a random result—as Eve has sent him a state in the opposite basis—with a 50% chance of an erroneous result (instead of the correct result he would get without the presence of Eve). The table below shows an example of this type of attack.
The probability Eve chooses the incorrect basis is 50% (assuming Alice chooses randomly), and if Bob measures this intercepted photon in the basis Alice sent he gets a random result, i.e., an incorrect result with probability of 50%. The probability an intercepted photon generates an error in the key string is then 50% × 50% = 25%. If Alice and Bob publicly compare n of their key bits (thus discarding them as key bits, as they are no longer secret), the probability they find disagreement and identify the presence of Eve is P_d = 1 − (3/4)^n.
So to detect an eavesdropper with a given probability P_d, Alice and Bob need to compare n ≥ log(1 − P_d)/log(3/4) key bits; for example, detecting Eve with probability 0.999999999 requires comparing 72 key bits.
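The following few lines of Python evaluate this detection probability for various numbers of compared bits; the only input is the 25% per-bit error rate of a full intercept-resend attack.

def detection_probability(n):
    # each compared bit disagrees with probability 1/4, independently
    return 1 - (3 / 4) ** n

for n in (1, 10, 72, 100):
    print(n, detection_probability(n))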
Man-in-the-middle attack
Quantum key distribution is vulnerable to a man-in-the-middle attack when used without authentication to the same extent as any classical protocol, since no known principle of quantum mechanics can distinguish friend from foe. As in the classical case, Alice and Bob cannot authenticate each other and establish a secure connection without some means of verifying each other's identities (such as an initial shared secret). If Alice and Bob have an initial shared secret then they can use an unconditionally secure authentication scheme (such as Carter–Wegman) along with quantum key distribution to exponentially expand this key, using a small amount of the new key to authenticate the next session. Several methods to create this initial shared secret have been proposed, for example using a third party or chaos theory. Nevertheless, only an "almost strongly universal" family of hash functions can be used for unconditionally secure authentication.
Photon number splitting attack
In the BB84 protocol Alice sends quantum states to Bob using single photons. In practice many implementations use laser pulses attenuated to a very low level to send the quantum states. These laser pulses contain a very small number of photons, for example 0.2 photons per pulse, which are distributed according to a Poisson distribution. This means most pulses actually contain no photons (no pulse is sent), some pulses contain 1 photon (which is desired) and a few pulses contain 2 or more photons. If the pulse contains more than one photon, then Eve can split off the extra photons and transmit the remaining single photon to Bob. This is the basis of the photon number splitting attack, where Eve stores these extra photons in a quantum memory until Bob detects the remaining single photon and Alice reveals the encoding basis. Eve can then measure her photons in the correct basis and obtain information on the key without introducing detectable errors.
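The following short Python calculation applies the Poisson distribution mentioned above to a mean of 0.2 photons per pulse, showing how often a pulse contains zero, one, or more than one photon; the multi-photon pulses are the ones exposed to photon number splitting.

from math import exp, factorial

mu = 0.2                                  # mean photon number per attenuated laser pulse
p = lambda k: exp(-mu) * mu ** k / factorial(k)

p0, p1 = p(0), p(1)
p_multi = 1 - p0 - p1
print(f"empty: {p0:.3f}  single photon: {p1:.3f}  multi-photon: {p_multi:.4f}")
print(f"fraction of non-empty pulses that are multi-photon: {p_multi / (1 - p0):.4f}")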
Even with the possibility of a PNS attack a secure key can still be generated, as shown in the GLLP security proof; however, a much higher amount of privacy amplification is needed, reducing the secure key rate significantly (with PNS the rate scales as t², as compared to t for single photon sources, where t is the transmittance of the quantum channel).
There are several solutions to this problem. The most obvious is to use a true single-photon source instead of an attenuated laser. While such sources are still at a developmental stage, QKD has been carried out successfully with them. However, as current sources operate at a low efficiency and frequency, key rates and transmission distances are limited. Another solution is to modify the BB84 protocol, as is done for example in the SARG04 protocol, in which the secure key rate scales as t^(3/2). The most promising solution is decoy states, in which Alice randomly sends some of her laser pulses with a lower average photon number. These decoy states can be used to detect a PNS attack, as Eve has no way to tell which pulses are signal and which decoy. Using this idea the secure key rate scales as t, the same as for a single photon source. This idea was implemented successfully first at the University of Toronto, and in several follow-up QKD experiments, allowing for high key rates secure against all known attacks.
Denial of service
Because currently a dedicated fibre optic line (or line of sight in free space) is required between the two points linked by quantum key distribution, a denial of service attack can be mounted by simply cutting or blocking the line. This is one of the motivations for the development of quantum key distribution networks, which would route communication via alternate links in case of disruption.
Trojan-horse attacks
A quantum key distribution system may be probed by Eve by sending in bright light from the quantum channel and analyzing the back-reflections in a Trojan-horse attack. In a recent research study it has been shown that Eve discerns Bob's secret basis choice with higher than 90% probability, breaching the security of the system.
Security proofs
If Eve is assumed to have unlimited resources, for example both classical and quantum computing power, there are many more attacks possible. BB84 has been proven secure against any attacks allowed by quantum mechanics, both for sending information using an ideal photon source which only ever emits a single photon at a time, and also using practical photon sources which sometimes emit multiphoton pulses. These proofs are unconditionally secure in the sense that no conditions are imposed on the resources available to the eavesdropper; however, there are other conditions required:
Eve cannot physically access Alice and Bob's encoding and decoding devices.
The random number generators used by Alice and Bob must be trusted and truly random (for example a Quantum random number generator).
The classical communication channel must be authenticated using an unconditionally secure authentication scheme.
The message must be encrypted using a one-time-pad-like scheme.
Quantum hacking
Hacking attacks target vulnerabilities in the operation of a QKD protocol or deficiencies in the components of the physical devices used in construction of the QKD system. If the equipment used in quantum key distribution can be tampered with, it could be made to generate keys that were not secure using a random number generator attack. Another common class of attacks is the Trojan horse attack which does not require physical access to the endpoints: rather than attempt to read Alice and Bob's single photons, Eve sends a large pulse of light back to Alice in between transmitted photons. Alice's equipment reflects some of Eve's light, revealing the state of Alice's basis (e.g., a polarizer). This attack can be detected, e.g. by using a classical detector to check the non-legitimate signals (i.e. light from Eve) entering Alice's system. It is also conjectured that most hacking attacks can similarly be defeated by modifying the implementation, though there is no formal proof.
Several other attacks including faked-state attacks, phase remapping attacks, and time-shift attacks are now known. The time-shift attack has even been demonstrated on a commercial quantum cryptosystem. This is the first demonstration of quantum hacking against a non-homemade quantum key distribution system. Later on, the phase-remapping attack was also demonstrated on a specially configured, research oriented open QKD system (made and provided by the Swiss company Id Quantique under their Quantum Hacking program). It is one of the first 'intercept-and-resend' attacks on top of a widely used QKD implementation in commercial QKD systems. This work has been widely reported in media.
The first attack that claimed to be able to eavesdrop the whole key without leaving any trace was demonstrated in 2010. It was experimentally shown that the single-photon detectors in two commercial devices could be fully remote-controlled using specially tailored bright illumination. In a spree of publications thereafter, the collaboration between the Norwegian University of Science and Technology in Norway and Max Planck Institute for the Science of Light in Germany, has now demonstrated several methods to successfully eavesdrop on commercial QKD systems based on weaknesses of Avalanche photodiodes (APDs) operating in gated mode. This has sparked research on new approaches to securing communications networks.
Counterfactual quantum key distribution
The task of distributing a secret key could be achieved even when the particle (on which the secret information, e.g. polarization, has been encoded) does not traverse the quantum channel, using a protocol developed by Tae-Gon Noh. The following serves to explain how this non-intuitive or counterfactual idea actually works. Here Alice generates a photon which, by not taking a measurement until later, exists in a superposition of being in paths (a) and (b) simultaneously. Path (a) stays inside Alice's secure device and path (b) goes to Bob. By rejecting the photons that Bob receives and only accepting the ones he doesn't receive, Bob and Alice can set up a secure channel, i.e. Eve's attempts to read the counterfactual photons would still be detected. This protocol uses the quantum phenomenon whereby the possibility that a photon can be sent has an effect even when it isn't sent. So-called interaction-free measurement also uses this quantum effect, as for example in the bomb testing problem, whereby one can determine which bombs are not duds without setting them off, except in a counterfactual sense.
History
Quantum cryptography was first proposed by Stephen Wiesner, then at Columbia University in New York, who, in the early 1970s, introduced the concept of quantum conjugate coding. His seminal paper titled "Conjugate Coding" was rejected by IEEE Information Theory but was eventually published in 1983 in SIGACT News (15:1 pp. 78–88, 1983). In this paper he showed how to store or transmit two messages by encoding them in two "conjugate observables", such as linear and circular polarization of light, so that either, but not both, may be received and decoded. He illustrated his idea with a design of unforgeable bank notes. A decade later, building upon this work, Charles H. Bennett, of the IBM Thomas J. Watson Research Center, and Gilles Brassard, of the University of Montreal, proposed a method for secure communication based on Wiesner's "conjugate observables". In 1990, Artur Ekert, then a PhD student at Wolfson College, University of Oxford, developed a different approach to quantum key distribution based on quantum entanglement.
Future
The current commercial systems are aimed mainly at governments and corporations with high security requirements. Key distribution by courier is typically used in such cases, where traditional key distribution schemes are not believed to offer enough guarantee. This has the advantage of not being intrinsically distance limited, and despite long travel times the transfer rate can be high due to the availability of large capacity portable storage devices. The major difference of quantum key distribution is the ability to detect any interception of the key, whereas with courier the key security cannot be proven or tested. QKD (Quantum Key Distribution) systems also have the advantage of being automatic, with greater reliability and lower operating costs than a secure human courier network.
Kak's three-stage protocol has been proposed as a method for secure communication that is entirely quantum, unlike quantum key distribution, in which the cryptographic transformation uses classical algorithms.
Factors preventing wide adoption of quantum key distribution outside high security areas include the cost of equipment, and the lack of a demonstrated threat to existing key exchange protocols. However, with optic fibre networks already present in many countries the infrastructure is in place for a more widespread use.
An Industry Specification Group (ISG) of the European Telecommunications Standards Institute (ETSI) has been set up to address standardisation issues in quantum cryptography.
European Metrology Institutes, in the context of dedicated projects, are developing measurements required to characterise components of QKD systems.
Toshiba Europe has been awarded a prestigious Institute of Physics Award for Business Innovation. This recognises Toshiba’s pioneering QKD technology developed over two decades of research, protecting communication infrastructure from present and future cyber-threats, and commercialising UK-manufactured products which pave the road to the quantum internet. The Institute of Physics (IOP) is the professional body and learned society for physics, and the leading body for practising physicists, in the UK and Ireland. With a rich history of supporting business innovation and growth, it is committed to working with ‘physics-based’ businesses, and companies that apply and employ physics and physicists.
Toshiba also took the Semi Grand Prix award in the Solutions Category for QKD, winning the Minister of Economy, Trade and Industry Award in the CEATEC AWARD 2021, the prestigious awards presented at CEATEC, Japan's premier electronics industry trade show.
See also
List of quantum key distribution protocols
Quantum computing
Quantum cryptography
Quantum information science
Quantum network
References
External links
General and review
Quantum Computing 101
Scientific American Magazine (January 2005 Issue) Best-Kept Secrets Non-technical article on quantum cryptography
Physics World Magazine (March 2007 Issue) Non-technical article on current state and future of quantum communication
SECOQC White Paper on Quantum Key Distribution and Cryptography European project to create a large scale quantum cryptography network, includes discussion of current QKD approaches and comparison with classical cryptography
The future of cryptography May 2003 Tomasz Grabowski
ARDA Quantum Cryptography Roadmap
Lectures at the Institut Henri Poincaré (slides and videos)
Interactive quantum cryptography demonstration experiment with single photons for education
More specific information
Description of entanglement based quantum cryptography from Artur Ekert.
Description of BB84 protocol and privacy amplification by Sharon Goldwater.
Public debate on the Security of Quantum Key Distribution at the conference Hot Topics in Physical Informatics, 11 November 2013
Further information
Quantiki.org - Quantum Information portal and wiki
Interactive BB84 simulation
Quantum key distribution simulation
Online Simulation and Analysis Toolkit for Quantum Key Distribution
Quantum cryptography research groups
Experimental Quantum Cryptography with Entangled Photons
NIST Quantum Information Networks
Free Space Quantum Cryptography
Experimental Continuous Variable QKD, MPL Erlangen
Experimental Quantum Hacking, MPL Erlangen
Quantum cryptography lab. Pljonkin A.P.
Companies selling quantum devices for cryptography
AUREA Technology sells the optical building blocks for Quantum cryptography
id Quantique sells Quantum Key Distribution products
MagiQ Technologies sells quantum devices for cryptography
QuintessenceLabs Solutions based on continuous wave lasers
SeQureNet sells Quantum Key Distribution products using continuous-variables
Companies with quantum cryptography research programmes
Toshiba
Hewlett Packard
IBM
Mitsubishi
NEC
NTT
Cryptography
Quantum information science
Quantum cryptography |
52491 | https://en.wikipedia.org/wiki/Non-repudiation | Non-repudiation | Non-repudiation refers to a situation where a statement's author cannot successfully dispute its authorship or the validity of an associated contract. The term is often seen in a legal setting when the authenticity of a signature is being challenged. In such an instance, the authenticity is being "repudiated".
For example, Mallory buys a cell phone for $100, writes a paper cheque as payment, and signs the cheque with a pen. Later, she finds that she cannot afford it, and claims that the cheque is a forgery. The signature guarantees that only Mallory could have signed the cheque, and so Mallory's bank must pay the cheque. This is non-repudiation; Mallory cannot repudiate the cheque. In practice, pen-and-paper signatures are not hard to forge, but digital signatures can be very hard to break.
In security
In general, non-repudiation involves associating actions or changes with a unique individual. For example, a secure area may use a key card access system where non-repudiation would be violated if key cards were shared or if lost and stolen cards were not immediately reported. Similarly, the owner of a computer account must not allow others to use it, such as by giving away their password, and a policy should be implemented to enforce this.
In digital security
In digital security, non-repudiation means:
A service that provides proof of the integrity and origin of data.
An authentication that can be said to be genuine with high confidence.
Proof of data integrity is typically the easiest of these requirements to accomplish. A data hash such as SHA2 usually ensures that the data will not be changed undetectably. Even with this safeguard, it is possible to tamper with data in transit, either through a man-in-the-middle attack or phishing. Because of this, data integrity is best asserted when the recipient already possesses the necessary verification information, such as after being mutually authenticated.
The most common method of providing non-repudiation in the context of digital communications or storage is the digital signature, a more powerful tool that provides non-repudiation in a publicly verifiable manner. Message authentication codes (MACs), useful when the communicating parties have arranged to use a shared secret that they both possess, do not give non-repudiation. A common misconception is that encryption, per se, provides authentication ("if the message decrypts properly then it is authentic"); this is wrong. A MAC by itself can still be subject to several types of attack, such as message reordering, block substitution and block repetition, and thus provides only message integrity and authentication, but not non-repudiation. To achieve non-repudiation one must trust a service (a certificate generated by a trusted third party (TTP) called a certificate authority (CA)) which prevents an entity from denying previous commitments or actions (e.g. sending message A to B). The difference between a MAC and a digital signature is that one uses symmetric keys and the other asymmetric keys (provided by the CA). Note that the goal is not to achieve confidentiality: in both cases (MAC or digital signature), one simply appends a tag to the otherwise plaintext, visible message. If confidentiality is also required, then an encryption scheme can be combined with the digital signature, or some form of authenticated encryption could be used. Verifying the digital origin means that the certified/signed data likely came from someone who possesses the private key corresponding to the signing certificate. If the key used to digitally sign a message is not properly safeguarded by the original owner, digital forgery can occur.
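The following minimal Python sketch contrasts the two primitives. The HMAC uses only the standard library, while the Ed25519 signature assumes the third-party pyca/cryptography package is available; the message and key values are illustrative.

import hmac, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

message = b"I agree to pay $100"

# MAC: both parties hold the shared secret, so either of them could have produced the tag;
# it gives integrity and authentication between the two, but no non-repudiation
shared_secret = b"0" * 32
tag = hmac.new(shared_secret, message, hashlib.sha256).digest()

# Digital signature: only the private-key holder can sign, while anyone with the public key
# can verify, which is what makes non-repudiation possible
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(message)
public_key.verify(signature, message)   # raises InvalidSignature if the message or signature is altered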
Trusted third parties (TTPs)
To mitigate the risk of people repudiating their own signatures, the standard approach is to involve a trusted third party.
The two most common TTPs are forensic analysts and notaries. A forensic analyst specializing in handwriting can compare some signature to a known valid signature and assess its legitimacy. A notary is a witness who verifies an individual's identity by checking other credentials and affixing their certification that the person signing is who they claim to be. A notary provides the extra benefit of maintaining independent logs of their transactions, complete with the types of credentials checked, and another signature that can be verified by the forensic analyst. This double security makes notaries the preferred form of verification.
For digital information, the most commonly employed TTP is a certificate authority, which issues public key certificates. A public key certificate can be used by anyone to verify digital signatures without a shared secret between the signer and the verifier. The role of the certificate authority is to authoritatively state to whom the certificate belongs, meaning that this person or entity possesses the corresponding private key. However, a digital signature is forensically identical in both legitimate and forged uses. Someone who possesses the private key can create a valid digital signature. Protecting the private key is the idea behind some smart cards such as the United States Department of Defense's Common Access Card (CAC), which never lets the key leave the card. That means that to use the card for encryption and digital signatures, a person needs the personal identification number (PIN) code necessary to unlock it.
See also
Plausible deniability
Shaggy defense
Designated verifier signature
Information security
Undeniable signature
References
External links
"Non-repudiation in Electronic Commerce" (Jianying Zhou), Artech House, 2001
'Non-repudiation' taken from Stephen Mason, Electronic Signatures in Law (3rd edn, Cambridge University Press, 2012)
'Non-repudiation' in the legal context in Stephen Mason, Electronic Signatures in Law (4th edn, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2016) now open source
Public-key cryptography
Contract law
Notary |
53036 | https://en.wikipedia.org/wiki/Network%20address%20translation | Network address translation | Network address translation (NAT) is a method of mapping an IP address space into another by modifying network address information in the IP header of packets while they are in transit across a traffic routing device. The technique was originally used to avoid the need to assign a new address to every host when a network was moved, or when the upstream Internet service provider was replaced, but could not route the network's address space. It has become a popular and essential tool in conserving global address space in the face of IPv4 address exhaustion. One Internet-routable IP address of a NAT gateway can be used for an entire private network.
As network address translation modifies the IP address information in packets, NAT implementations may vary in their specific behavior in various addressing cases and their effect on network traffic. The specifics of NAT behavior are not commonly documented by vendors of equipment containing NAT implementations.
Basic NAT
The simplest type of NAT provides a one-to-one translation of IP addresses. RFC 2663 refers to this type of NAT as basic NAT; it is also called a one-to-one NAT. In this type of NAT, only the IP addresses, IP header checksum, and any higher-level checksums that include the IP address are changed. Basic NAT can be used to interconnect two IP networks that have incompatible addressing.
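Because basic NAT must recompute the IP header checksum after rewriting an address, the following short Python sketch of the RFC 1071 ones'-complement checksum shows what that recomputation involves; the sample header bytes are illustrative and have the checksum field zeroed.

def internet_checksum(header: bytes) -> int:
    # ones'-complement sum of 16-bit words, as used for the IPv4 header checksum (RFC 1071)
    if len(header) % 2:
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into the low 16 bits
    return ~total & 0xFFFF

header = bytes.fromhex("4500003c1c46400040060000c0a80001c0a800c7")
print(hex(internet_checksum(header)))              # the value a NAT must rewrite along with the address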
One-to-many NAT
The majority of network address translators map multiple private hosts to one publicly exposed IP address. In a typical configuration, a local network uses one of the designated private IP address subnets (RFC 1918). A router in that network has a private address of that address space. The router is also connected to the Internet with a public address, typically assigned by an Internet service provider. As traffic passes from the local network to the Internet, the source address in each packet is translated on the fly from a private address to the public address. The router tracks basic data about each active connection (particularly the destination address and port). When a reply returns to the router, it uses the connection tracking data it stored during the outbound phase to determine the private address on the internal network to which to forward the reply.
All IP packets have a source IP address and a destination IP address. Typically packets passing from the private network to the public network will have their source address modified, while packets passing from the public network back to the private network will have their destination address modified. To avoid ambiguity in how replies are translated, further modifications to the packets are required. The vast bulk of Internet traffic uses Transmission Control Protocol (TCP) or User Datagram Protocol (UDP). For these protocols, the port numbers are changed so that the combination of IP address (within the IP header) and port number (within the Transport Layer header) on the returned packet can be unambiguously mapped to the corresponding private network destination. RFC 2663 uses the term network address and port translation (NAPT) for this type of NAT. Other names include port address translation (PAT), IP masquerading, NAT overload and many-to-one NAT. This is the most common type of NAT and has become synonymous with the term "NAT" in common usage.
This method enables communication through the router only when the conversation originates in the private network since the initial originating transmission is what establishes the required information in the translation tables. A web browser in the masqueraded network can, for example, browse a website outside, but a web browser outside cannot browse a website hosted within the masqueraded network. Protocols not based on TCP and UDP require other translation techniques.
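The following toy Python model illustrates the translation table described above. The addresses, port range and bookkeeping are simplified assumptions; a real router also tracks the protocol, timeouts and TCP state.

import itertools

class Napt:
    # simplified network address and port translation (one-to-many NAT)
    def __init__(self, public_ip, first_port=49152):
        self.public_ip = public_ip
        self.next_port = itertools.count(first_port)
        self.outbound = {}   # (src ip, src port, dst ip, dst port) -> public source port
        self.inbound = {}    # public source port -> (private ip, private port)

    def translate_outbound(self, src_ip, src_port, dst_ip, dst_port):
        key = (src_ip, src_port, dst_ip, dst_port)
        if key not in self.outbound:
            port = next(self.next_port)
            self.outbound[key] = port
            self.inbound[port] = (src_ip, src_port)
        return (self.public_ip, self.outbound[key], dst_ip, dst_port)

    def translate_inbound(self, public_port):
        # replies to the public address are forwarded using the stored mapping;
        # unsolicited packets have no entry and are dropped (None)
        return self.inbound.get(public_port)

nat = Napt("203.0.113.7")
print(nat.translate_outbound("192.168.1.10", 51000, "93.184.216.34", 80))
print(nat.translate_inbound(49152))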
One of the additional benefits of one-to-many NAT is that it is a practical solution to IPv4 address exhaustion. Even large networks can be connected to the Internet using a single public IP address.
Methods of translation
Network address and port translation may be implemented in several ways. Some applications that use IP address information may need to determine the external address of a network address translator. This is the address that its communication peers in the external network detect. Furthermore, it may be necessary to examine and categorize the type of mapping in use, for example when it is desired to set up a direct communication path between two clients both of which are behind separate NAT gateways.
For this purpose, RFC 3489 specified a protocol called Simple Traversal of UDP over NATs (STUN) in 2003. It classified NAT implementations as full-cone NAT, (address) restricted-cone NAT, port-restricted cone NAT or symmetric NAT, and proposed a methodology for testing a device accordingly. However, these procedures have since been deprecated from standards status, as the methods are inadequate to correctly assess many devices. RFC 5389 standardized new methods in 2008 and the acronym STUN now represents the new title of the specification: Session Traversal Utilities for NAT.
Many NAT implementations combine these types, and it is, therefore, better to refer to specific individual NAT behavior instead of using the Cone/Symmetric terminology. RFC 4787 attempts to alleviate confusion by introducing standardized terminology for observed behaviors. In terms of mapping behavior, the RFC characterizes Full-Cone, Restricted-Cone, and Port-Restricted Cone NATs as having an Endpoint-Independent Mapping, whereas it characterizes a Symmetric NAT as having an Address- and Port-Dependent Mapping. In terms of filtering behavior, RFC 4787 labels Full-Cone NAT as having Endpoint-Independent Filtering, Restricted-Cone NAT as having Address-Dependent Filtering, Port-Restricted Cone NAT as having Address- and Port-Dependent Filtering, and Symmetric NAT as having either Address-Dependent Filtering or Address- and Port-Dependent Filtering. Other classifications of NAT behavior mentioned in the RFC include whether NATs preserve ports, when and how mappings are refreshed, whether external mappings can be used by internal hosts (i.e., their hairpinning behavior), and the level of determinism NATs exhibit when applying all these rules. Notably, most NATs combine symmetric NAT for outgoing connections with static port mapping, where incoming packets addressed to the external address and port are redirected to a specific internal address and port.
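The mapping distinction can be made concrete with a short sketch. The following Python fragment is illustrative only (the addresses and observations are hypothetical); it mimics how a STUN-style probe might classify a NAT's mapping behavior by comparing the external mappings that two different destination servers report for the same internal socket.

```python
# Hypothetical illustration of RFC 4787 mapping classification: the same
# internal socket contacts two different external servers, and each server
# reports back the (external IP, external port) it observed.

def classify_mapping(mapping_seen_by_dest1, mapping_seen_by_dest2):
    """Each argument is the (IP, port) pair the NAT assigned for one destination."""
    if mapping_seen_by_dest1 == mapping_seen_by_dest2:
        return "Endpoint-Independent Mapping"          # cone-style behavior
    return "Address- and/or Port-Dependent Mapping"    # symmetric-style behavior

# Example observations (documentation addresses, purely hypothetical):
print(classify_mapping(("203.0.113.5", 40001), ("203.0.113.5", 40001)))
print(classify_mapping(("203.0.113.5", 40001), ("203.0.113.5", 40002)))
```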
Type of NAT and NAT traversal, role of port preservation for TCP
The NAT traversal problem arises when peers behind different NATs try to communicate. One way to solve this problem is to use port forwarding. Another way is to use various NAT traversal techniques. The most popular technique for TCP NAT traversal is TCP hole punching.
TCP hole punching requires the NAT to follow the port preservation design for TCP: for a given outgoing TCP communication, the same port numbers are used on both sides of the NAT. NAT port preservation for outgoing TCP connections is crucial for TCP NAT traversal because, under TCP, one port can only be used for one communication at a time, so programs bind distinct TCP sockets to ephemeral ports for each TCP communication; without port preservation, this would make NAT port prediction impossible for TCP.
On the other hand, for UDP, NATs do not need port preservation. Indeed, multiple UDP communications (each with a distinct endpoint) can occur on the same source port, and applications usually reuse the same UDP socket to send packets to distinct hosts. This makes port prediction straightforward, as it is the same source port for each packet.
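This reuse of a single UDP source port can be observed with an ordinary socket. The sketch below uses Python's standard socket module; the destination addresses are placeholders from the documentation ranges and nothing needs to be listening on them.

```python
import socket

# One UDP socket, one local source port, datagrams to two distinct hosts.
# A port-preserving NAT would expose the same external port for both flows,
# which is what makes the external port easy to predict.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 0))                       # let the OS pick an ephemeral port
print("local source port:", sock.getsockname()[1])

sock.sendto(b"probe", ("192.0.2.10", 3478))     # first destination
sock.sendto(b"probe", ("198.51.100.20", 3478))  # second destination, same source port
sock.close()
```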
Furthermore, port preservation in NAT for TCP allows P2P protocols to offer less complexity and less latency because there is no need to use a third party (like STUN) to discover the NAT port since the application itself already knows the NAT port.
However, if two internal hosts attempt to communicate with the same external host using the same port number, the NAT may attempt to use a different external IP address for the second connection or may need to forgo port preservation and remap the port.
By one estimate, roughly 70% of the clients in P2P networks employed some form of NAT.
Implementation
Establishing two-way communication
Every TCP and UDP packet contains a source port number and a destination port number. Each of those packets is encapsulated in an IP packet, whose IP header contains a source IP address and a destination IP address. The IP address/protocol/port number triple defines an association with a network socket.
For publicly accessible services such as web and mail servers the port number is important. For example, port 80 connects through a socket to the web server software and port 25 to a mail server's SMTP daemon. The IP address of a public server is also important, similar in global uniqueness to a postal address or telephone number. Both IP address and port number must be correctly known by all hosts wishing to successfully communicate.
Private IP addresses as described in RFC 1918 are usable only on private networks not directly connected to the internet. Ports are endpoints of communication unique to that host, so a connection through the NAT device is maintained by the combined mapping of port and IP address. A private address on the inside of the NAT is mapped to an external public address. Port address translation (PAT) resolves conflicts that arise when multiple hosts happen to use the same source port number to establish different external connections at the same time.
Telephone number extension analogy
A NAT device is similar to a phone system at an office that has one public telephone number and multiple extensions. Outbound phone calls made from the office all appear to come from the same telephone number. However, an incoming call that does not specify an extension cannot be automatically transferred to an individual inside the office. In this scenario, the office is a private LAN, the main phone number is the public IP address, and the individual extensions are unique port numbers.
Translation process
With NAT, all communications sent to external hosts actually contain the external IP address and port information of the NAT device instead of internal host IP addresses or port numbers. NAT only translates IP addresses and ports of its internal hosts, hiding the true endpoint of an internal host on a private network.
When a computer on the private (internal) network sends an IP packet to the external network, the NAT device replaces the internal source IP address in the packet header with the external IP address of the NAT device. PAT may then assign the connection a port number from a pool of available ports, inserting this port number in the source port field. The packet is then forwarded to the external network. The NAT device then makes an entry in a translation table containing the internal IP address, original source port, and the translated source port. Subsequent packets from the same internal source IP address and port number are translated to the same external source IP address and port number. The computer receiving a packet that has undergone NAT establishes a connection to the port and IP address specified in the altered packet, oblivious to the fact that the supplied address is being translated.
Upon receiving a packet from the external network, the NAT device searches the translation table based on the destination port in the packet header. If a match is found, the destination IP address and port number are replaced with the values found in the table and the packet is forwarded to the inside network. Otherwise, if the destination port number of the incoming packet is not found in the translation table, the packet is dropped or rejected because the PAT device does not know where to send it.
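The bookkeeping described above can be condensed into a few lines. The following Python fragment is a simplified model with hypothetical addresses, not how any particular router implements it; real devices also track the protocol, connection state, and timeouts.

```python
import itertools

EXTERNAL_IP = "203.0.113.1"                    # hypothetical public address of the NAT
_ports = itertools.count(40000)                # pool of translated source ports
table = {}                                     # external port -> (internal IP, internal port)

def translate_outbound(src_ip, src_port):
    for ext_port, internal in table.items():
        if internal == (src_ip, src_port):
            return EXTERNAL_IP, ext_port       # reuse the existing mapping
    ext_port = next(_ports)
    table[ext_port] = (src_ip, src_port)       # record the new mapping
    return EXTERNAL_IP, ext_port

def translate_inbound(dst_port):
    return table.get(dst_port)                 # None means drop or reject the packet

print(translate_outbound("192.168.1.10", 51515))   # ('203.0.113.1', 40000)
print(translate_inbound(40000))                    # ('192.168.1.10', 51515)
print(translate_inbound(40001))                    # None: no matching table entry
```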
Visibility of operation
NAT operation is typically transparent to both the internal and external hosts. The NAT device may function as the default gateway for the internal host which is typically aware of the true IP address and TCP or UDP port of the external host. However, the external host is only aware of the public IP address for the NAT device and the particular port being used to communicate on behalf of a specific internal host.
Applications
Routing: Network address translation can be used to mitigate IP address overlap. Address overlap occurs when hosts in different networks with the same IP address space try to reach the same destination host. This is most often a misconfiguration and may result from the merger of two networks or subnets, especially when using RFC 1918 private network addressing. The destination host experiences traffic apparently arriving from the same network, and intermediate routers have no way to determine where reply traffic should be sent. The solution is either renumbering to eliminate overlap or network address translation.
Load balancing: In client–server applications, load balancers forward client requests to a set of server computers to manage the workload of each server. Network address translation may be used to map a representative IP address of the server cluster to specific hosts that service the request.
Related techniques
IEEE Reverse Address and Port Translation (RAPT or RAT) allows a host whose real IP address changes from time to time to remain reachable as a server via a fixed home IP address. Cisco's RAPT implementation is PAT or NAT overloading and maps multiple private IP addresses to a single public IP address. Multiple addresses can be mapped to a single address because each private address is tracked by a port number. PAT uses unique source port numbers on the inside global IP address to distinguish between translations. PAT attempts to preserve the original source port. If this source port is already used, PAT assigns the first available port number starting from the beginning of the appropriate port group 0–511, 512–1023, or 1024–65535. When there are no more ports available and there is more than one external IP address configured, PAT moves to the next IP address to try to allocate the original source port again. This process continues until it runs out of available ports and external IP addresses.
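The port-group allocation order described above can be summarized in a short sketch. The Python fragment below is a simplified, hypothetical model of that behavior (the addresses and state are made up), not Cisco's actual implementation.

```python
# Simplified model of PAT allocation: try to preserve the original source
# port; otherwise take the first free port in the same group (0-511,
# 512-1023, 1024-65535); otherwise move on to the next external IP address.
GROUPS = [(0, 511), (512, 1023), (1024, 65535)]

def allocate(original_port, external_ips, in_use):
    """in_use maps each external IP to the set of ports already allocated on it."""
    lo, hi = next(g for g in GROUPS if g[0] <= original_port <= g[1])
    for ip in external_ips:
        used = in_use.setdefault(ip, set())
        if original_port not in used:
            used.add(original_port)
            return ip, original_port           # original source port preserved
        for port in range(lo, hi + 1):         # first available port in the group
            if port not in used:
                used.add(port)
                return ip, port
    raise RuntimeError("out of available ports and external IP addresses")

state = {}
print(allocate(51515, ["203.0.113.1", "203.0.113.2"], state))   # port preserved
print(allocate(51515, ["203.0.113.1", "203.0.113.2"], state))   # remapped within 1024-65535
```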
Mapping of Address and Port is a Cisco proposal that combines Address plus Port translation with tunneling of the IPv4 packets over an ISP's internal IPv6 network. In effect, it is an (almost) stateless alternative to carrier-grade NAT and DS-Lite that pushes the IPv4 address/port translation function (and therefore the maintenance of NAT state) entirely into the existing customer premises equipment NAT implementation. This avoids the NAT444 and statefulness problems of carrier-grade NAT, and also provides a transition mechanism for the deployment of native IPv6 at the same time, with very little added complexity.
Issues and limitations
Hosts behind NAT-enabled routers do not have end-to-end connectivity and cannot participate in some internet protocols. Services that require the initiation of TCP connections from the outside network, or that use stateless protocols such as those using UDP, can be disrupted. Unless the NAT router makes a specific effort to support such protocols, incoming packets cannot reach their destination. Some protocols can accommodate one instance of NAT between participating hosts ("passive mode" FTP, for example), sometimes with the assistance of an application-level gateway (see below), but fail when both systems are separated from the internet by NAT. The use of NAT also complicates tunneling protocols such as IPsec because NAT modifies values in the headers, which interferes with the integrity checks done by IPsec and other tunneling protocols.
End-to-end connectivity has been a core principle of the Internet, supported, for example, by the Internet Architecture Board. Current Internet architectural documents observe that NAT is a violation of the end-to-end principle, but that NAT does have a valid role in careful design. There is considerably more concern with the use of IPv6 NAT, and many IPv6 architects believe IPv6 was intended to remove the need for NAT.
An implementation that only tracks ports can be quickly depleted by internal applications that use multiple simultaneous connections such as an HTTP request for a web page with many embedded objects. This problem can be mitigated by tracking the destination IP address in addition to the port thus sharing a single local port with many remote hosts. This additional tracking increases implementation complexity and computing resources at the translation device.
Because the internal addresses are all disguised behind one publicly accessible address, it is impossible for external hosts to directly initiate a connection to a particular internal host. Applications such as VoIP, videoconferencing, and other peer-to-peer applications must use NAT traversal techniques to function.
Fragmentation and checksums
Pure NAT, operating on IP alone, may or may not correctly parse protocols with payloads containing information about IP, such as ICMP. This depends on whether the payload is interpreted by a host on the inside or outside of the translation. Basic protocols such as TCP and UDP cannot function properly unless NAT takes action beyond the network layer.
IP packets have a checksum in each packet header, which provides error detection only for the header. IP datagrams may become fragmented and it is necessary for a NAT to reassemble these fragments to allow correct recalculation of higher-level checksums and correct tracking of which packets belong to which connection.
TCP and UDP have a checksum that covers all the data they carry, as well as the TCP or UDP header, plus a pseudo-header that contains the source and destination IP addresses of the packet carrying the TCP or UDP header. For an originating NAT to pass TCP or UDP successfully, it must recompute the TCP or UDP header checksum based on the translated IP addresses, not the original ones, and put that checksum into the TCP or UDP header of the first packet of the fragmented set of packets.
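The dependence of the transport checksum on the IP addresses can be shown with a small worked example. The sketch below computes a UDP checksum over the pseudo-header plus the segment (with the checksum field zeroed); the addresses and port numbers are hypothetical, and the value changes once the NAT rewrites the source address.

```python
import socket
import struct

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum over 16-bit words, as used by IP, TCP and UDP."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip, dst_ip, segment: bytes) -> int:
    """Checksum over the pseudo-header (source IP, destination IP, protocol,
    length) followed by the UDP header and payload."""
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, socket.IPPROTO_UDP, len(segment)))
    return internet_checksum(pseudo + segment)

# Hypothetical UDP segment: source port 51515, destination port 53, length 12,
# checksum field set to zero, followed by 4 bytes of payload.
segment = struct.pack("!HHHH", 51515, 53, 12, 0) + b"test"
print(hex(udp_checksum("192.168.1.10", "198.51.100.53", segment)))  # before translation
print(hex(udp_checksum("203.0.113.1", "198.51.100.53", segment)))   # after the source IP is rewritten
```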
Alternatively, the originating host may perform Path MTU Discovery to determine the packet size that can be transmitted without fragmentation and then set the don't fragment (DF) bit in the appropriate packet header field. Of course, this is only a one-way solution, because the responding host can send packets of any size, which may be fragmented before reaching the NAT.
DNAT
Destination network address translation (DNAT) is a technique for transparently changing the destination IP address of a routed packet and performing the inverse function for any replies. Any router situated between two endpoints can perform this transformation of the packet.
DNAT is commonly used to publish a service located in a private network on a publicly accessible IP address. This use of DNAT is also called port forwarding, or DMZ when used on an entire server, which becomes exposed to the WAN, becoming analogous to an undefended military demilitarized zone (DMZ).
SNAT
The meaning of the term SNAT varies by vendor:
source NAT is a common expansion and is the counterpart of destination NAT (DNAT). This is used to describe one-to-many NAT; NAT for outgoing connections to public services.
stateful NAT is used by Cisco Systems
static NAT is used by WatchGuard
secure NAT is used by F5 Networks and by Microsoft (in regard to the ISA Server)
Microsoft's Secure network address translation (SNAT) is part of Microsoft's Internet Security and Acceleration Server and is an extension to the NAT driver built into Microsoft Windows Server. It provides connection tracking and filtering for the additional network connections needed for the FTP, ICMP, H.323, and PPTP protocols as well as the ability to configure a transparent HTTP proxy server.
Dynamic network address translation
Dynamic NAT, just like static NAT, is not common in smaller networks but is found within larger corporations with complex networks. The way dynamic NAT differs from static NAT is that where static NAT provides a one-to-one internal to public static IP address mapping, dynamic NAT usually uses a group of available public IP addresses.
NAT hairpinning
NAT hairpinning, also known as NAT loopback or NAT reflection, is a feature in many consumer routers that permits the access of a service via the public IP address from inside the local network. This eliminates the need to configure separate domain name resolution for hosts inside the network and for the public network when accessing a website hosted on the local network.
The following describes an example network:
Public address: . This is the address of the WAN interface on the router.
Internal address of router:
Address of the server:
Address of a local computer:
If a packet is sent to the public address by a computer at , the packet would normally be routed to the default gateway (the router), unless an explicit route is set in the computer's routing tables. A router with the NAT loopback feature detects that is the address of its WAN interface, and treats the packet as if coming from that interface. It determines the destination for that packet, based on DNAT (port forwarding) rules for the destination. If the data were sent to port 80 and a DNAT rule exists for port 80 directed to , then the host at that address receives the packet.
If no applicable DNAT rule is available, the router drops the packet. An ICMP Destination Unreachable reply may be sent. If any DNAT rules were present, address translation is still in effect; the router still rewrites the source IP address in the packet. The local computer () sends the packet as coming from , but the server () receives it as coming from . When the server replies, the process is identical to an external sender. Thus, two-way communication is possible between hosts inside the LAN network via the public IP address.
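The decision logic just described can be condensed into a small sketch. Since the concrete addresses of this example are not given in the text above, the Python fragment below substitutes hypothetical documentation addresses (192.0.2.1 as the router's public WAN address, a 192.168.0.0/24 LAN) and is purely illustrative.

```python
WAN_IP = "192.0.2.1"                        # hypothetical public address of the router
DNAT_RULES = {80: ("192.168.0.10", 80)}     # port forwarding: public port 80 -> internal server

def hairpin_route(src_ip, dst_ip, dst_port):
    if dst_ip != WAN_IP:
        return src_ip, (dst_ip, dst_port)   # ordinary forwarding, no loopback involved
    rule = DNAT_RULES.get(dst_port)
    if rule is None:
        return None                         # no DNAT rule: drop (optionally ICMP unreachable)
    # Loopback case: the source is also rewritten to the WAN address so that
    # the server's reply travels back through the router.
    new_src = WAN_IP if src_ip.startswith("192.168.") else src_ip
    return new_src, rule

print(hairpin_route("192.168.0.100", "192.0.2.1", 80))   # ('192.0.2.1', ('192.168.0.10', 80))
print(hairpin_route("192.168.0.100", "192.0.2.1", 22))   # None: dropped
```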
NAT in IPv6
Network address translation is not commonly used in IPv6, because one of the design goals of IPv6 is to restore end-to-end network connectivity. NAT loopback is not commonly needed. Although still possible, the large addressing space of IPv6 obviates the need to conserve addresses and every device can be given a unique globally routable address. That being said, using unique local addresses in combination with network prefix translation can achieve similar results.
Applications affected by NAT
Some application layer protocols (such as FTP and SIP) send explicit network addresses within their application data. FTP in active mode, for example, uses separate connections for control traffic (commands) and for data traffic (file contents). When requesting a file transfer, the host making the request identifies the corresponding data connection by its network layer and transport layer addresses. If the host making the request lies behind a simple NAT firewall, the translation of the IP address and/or TCP port number makes the information received by the server invalid. The Session Initiation Protocol (SIP) controls many Voice over IP (VoIP) calls, and suffers the same problem. SIP and SDP may use multiple ports to set up a connection and transmit voice stream via RTP. IP addresses and port numbers are encoded in the payload data and must be known before the traversal of NATs. Without special techniques, such as STUN, NAT behavior is unpredictable and communications may fail.
Application Layer Gateway (ALG) software or hardware may correct these problems. An ALG software module running on a NAT firewall device updates any payload data made invalid by address translation. ALGs need to understand the higher-layer protocol that they need to fix, and so each protocol with this problem requires a separate ALG. For example, on many Linux systems there are kernel modules called connection trackers that serve to implement ALGs. However, ALG does not work if the control channel is encrypted (e.g. FTPS).
Another possible solution to this problem is to use NAT traversal techniques using protocols such as STUN or ICE, or proprietary approaches in a session border controller. NAT traversal is possible in both TCP- and UDP-based applications, but the UDP-based technique is simpler, more widely understood, and more compatible with legacy NATs. In either case, the high-level protocol must be designed with NAT traversal in mind, and it does not work reliably across symmetric NATs or other poorly behaved legacy NATs.
Other possibilities are UPnP Internet Gateway Device Protocol, NAT-PMP (NAT Port Mapping Protocol), or Port Control Protocol (PCP), but these require the NAT device to implement that protocol.
Most traditional client–server protocols (FTP being the main exception), however, do not send layer 3 contact information and therefore do not require any special treatment by NATs. In fact, avoiding NAT complications is practically a requirement when designing new higher-layer protocols today (e.g. the use of SFTP instead of FTP).
NATs can also cause problems where IPsec encryption is applied and in cases where multiple devices such as SIP phones are located behind a NAT. Phones that encrypt their signaling with IPsec encapsulate the port information within an encrypted packet, meaning that NA(P)T devices cannot access and translate the port. In these cases the NA(P)T devices revert to simple NAT operation. This means that all traffic returning to the NAT is mapped onto one client, causing service to more than one client "behind" the NAT to fail. There are a couple of solutions to this problem: one is to use TLS, which operates at level 4 in the OSI Reference Model and therefore does not mask the port number; another is to encapsulate the IPsec within UDP – the latter being the solution chosen by TISPAN to achieve secure NAT traversal, or a NAT with "IPsec Passthru" support.
Interactive Connectivity Establishment is a NAT traversal technique that does not rely on ALG support.
The DNS protocol vulnerability announced by Dan Kaminsky on July 8, 2008, is indirectly affected by NAT port mapping. To avoid DNS cache poisoning, it is highly desirable not to translate UDP source port numbers of outgoing DNS requests from a DNS server behind a firewall that implements NAT. The recommended workaround for the DNS vulnerability is to make all caching DNS servers use randomized UDP source ports. If the NAT function de-randomizes the UDP source ports, the DNS server becomes vulnerable.
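A rough back-of-the-envelope calculation shows why the source port matters here. The figures below are approximations, not exact protocol limits.

```python
# Approximate number of guesses an off-path attacker must cover to forge a
# DNS reply: the 16-bit transaction ID alone, versus the transaction ID
# combined with a randomized source port (treated here as roughly 2**16 values).
txid_space = 2 ** 16
port_space = 2 ** 16          # approximation of the randomized source-port range

print("transaction ID only:         ", txid_space)               # 65,536
print("transaction ID + random port:", txid_space * port_space)  # about 4.3 billion
```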
Examples of NAT software
Internet Connection Sharing (ICS): NAT & DHCP implementation included with Windows desktop operating systems
IPFilter: included with (Open)Solaris, FreeBSD and NetBSD, available for many other Unix-like operating systems
ipfirewall (ipfw): FreeBSD-native packet filter
Netfilter with iptables/nftables: the Linux packet filter
NPF: NetBSD-native Packet Filter
PF: OpenBSD-native Packet Filter
Routing and Remote Access Service: routing implementation included with Windows Server operating systems
WinGate: third-party routing implementation for Windows
See also
Anything In Anything (AYIYA) – IPv6 over IPv4 UDP, thus providing working IPv6 tunneling over most NATs
Internet Gateway Device Protocol (IGD) – UPnP NAT-traversal method
Teredo tunneling – NAT traversal using IPv6
Carrier-grade NAT – NAT behind NAT within an ISP's network
Notes
References
External links
NAT-Traversal Test and results
Characterization of different TCP NATs – Paper discussing the different types of NAT
Anatomy: A Look Inside Network Address Translators – Volume 7, Issue 3, September 2004
Jeff Tyson, HowStuffWorks: How Network Address Translation Works
Routing with NAT (Part of the documentation for the IBM iSeries)
Network Address Translation (NAT) FAQ – Cisco Systems |
53039 | https://en.wikipedia.org/wiki/Key%20%28cryptography%29 | Key (cryptography) | A key in cryptography is a piece of information, usually a string of numbers or letters stored in a file, which, when processed through a cryptographic algorithm, can encode or decode cryptographic data. Depending on the method used, the key can be of different sizes and varieties, but in all cases, the strength of the encryption relies on the security of the key being maintained. A key's security strength depends on its algorithm, the size of the key, the generation of the key, and the process of key exchange.
Scope
The key is what is used to encrypt data from plaintext to ciphertext. There are different methods for utilizing keys and encryption.
Symmetric cryptography
Symmetric cryptography refers to the practice of the same key being used for both encryption and decryption.
Asymmetric cryptography
Asymmetric cryptography has separate keys for encrypting and decrypting. These keys are known as the public and private keys, respectively.
Purpose
Since the key protects the confidentiality and integrity of the system, it is important that it be kept secret from unauthorized parties. With public key cryptography, only the private key must be kept secret, but with symmetric cryptography, it is important to maintain the confidentiality of the key. Kerckhoffs's principle states that the entire security of the cryptographic system relies on the secrecy of the key.
Key sizes
Key size is the number of bits in the key defined by the algorithm. This size defines the upper bound of the cryptographic algorithm's security. The larger the key size, the longer it will take before the key is compromised by a brute-force attack. Since perfect secrecy is not feasible for key algorithms, research is now more focused on computational security.
In the past, keys were required to be a minimum of 40 bits in length; however, as technology advanced, these keys were being broken more and more quickly. In response, the required sizes of symmetric keys were increased.
Currently, 2048-bit RSA is commonly used, which is sufficient for current systems. However, current key sizes would all be cracked quickly with a powerful quantum computer.
“The keys used in public key cryptography have some mathematical structure. For example, public keys used in the RSA system are the product of two prime numbers. Thus public key systems require longer key lengths than symmetric systems for an equivalent level of security. 3072 bits is the suggested key length for systems based on factoring and integer discrete logarithms which aim to have security equivalent to a 128 bit symmetric cipher.”
Key generation
To prevent a key from being guessed, keys need to be generated randomly and contain sufficient entropy. The problem of how to safely generate random keys is difficult and has been addressed in many ways by various cryptographic systems. A key can directly be generated by using the output of a Random Bit Generator (RBG), a system that generates a sequence of unpredictable and unbiased bits. An RBG can be used to directly produce either a symmetric key or the random output for an asymmetric key pair generation. Alternatively, a key can also be indirectly created during a key-agreement transaction, from another key or from a password.
Some operating systems include tools for "collecting" entropy from the timing of unpredictable operations such as disk drive head movements. For the production of small amounts of keying material, ordinary dice provide a good source of high-quality randomness.
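As a minimal illustration, the sketch below draws a 128-bit key from the operating system's cryptographically secure random source using Python's standard library; key storage and distribution are separate concerns not shown here.

```python
import secrets

# Generate a 128-bit (16-byte) symmetric key from a CSPRNG.
key = secrets.token_bytes(16)
print(key.hex())        # a fresh value on every run
```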
Establishment scheme
The security of a key is dependent on how a key is exchanged between parties. Establishing a secured communication channel is necessary so that outsiders cannot obtain the key. A key establishment scheme (or key exchange) is used to transfer an encryption key among entities. Key agreement and key transport are the two types of key exchange schemes used to exchange keys remotely between entities. In a key agreement scheme, a secret key, which is used between the sender and the receiver to encrypt and decrypt information, is set up to be sent indirectly. All parties exchange information (the shared secret) that permits each party to derive the secret key material. In a key transport scheme, encrypted keying material that is chosen by the sender is transported to the receiver. Either symmetric key or asymmetric key techniques can be used in both schemes.
The Diffie–Hellman key exchange and Rivest–Shamir–Adleman (RSA) are the two most widely used key exchange algorithms. In 1976, Whitfield Diffie and Martin Hellman constructed the Diffie–Hellman algorithm, which was the first public key algorithm. The Diffie–Hellman key exchange protocol allows key exchange over an insecure channel by electronically generating a shared key between two parties. On the other hand, RSA is a form of the asymmetric key system which consists of three steps: key generation, encryption, and decryption.
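A toy Diffie–Hellman exchange makes the idea concrete. The sketch below uses a deliberately tiny prime so the numbers stay readable; real deployments use standardized groups of 2048 bits or more, and the parameter choices here are illustrative only.

```python
import secrets

# Toy Diffie-Hellman: both parties derive the same shared secret even though
# only g, p, A and B travel over the insecure channel. Insecure parameters!
p = 4294967291            # a small prime modulus (far too small for real use)
g = 5                     # generator

a = secrets.randbelow(p - 2) + 1      # Alice's private value
b = secrets.randbelow(p - 2) + 1      # Bob's private value

A = pow(g, a, p)          # Alice sends A to Bob
B = pow(g, b, p)          # Bob sends B to Alice

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print(hex(shared_alice))
```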
Key confirmation delivers an assurance between the key confirmation recipient and provider that the shared keying materials are correct and established. The National Institute of Standards and Technology recommends key confirmation to be integrated into a key establishment scheme to validate its implementations.
Management
Key management concerns the generation, establishment, storage, usage and replacement of cryptographic keys. A key management system (KMS) typically includes three steps of establishing, storing and using keys. The base of security for the generation, storage, distribution, use and destruction of keys depends on successful key management protocols.
Key vs password
A password is a memorized series of characters including letters, digits, and other special symbols that are used to verify identity. It is often produced by a human user or password management software to protect personal and sensitive information or to generate cryptographic keys. Passwords are often created to be memorized by users and may contain non-random information such as dictionary words. On the other hand, a key can help strengthen password protection by implementing a cryptographic algorithm which is difficult to guess, or replace the password altogether. A key is generated based on random or pseudo-random data and can often be unreadable to humans.
A password is less safe than a cryptographic key due to its low entropy, randomness, and human-readable properties. However, the password may be the only secret data that is accessible to the cryptographic algorithm for information security in some applications such as securing information in storage devices. Thus, a deterministic algorithm called a key derivation function (KDF) uses a password to generate the secure cryptographic keying material to compensate for the password’s weakness. Various methods such as adding a salt or key stretching may be used in the generation.
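For example, the standard-library sketch below derives a 256-bit key from a password with PBKDF2-HMAC-SHA256; the salt, iteration count and password shown are placeholder values.

```python
import hashlib
import os

# Derive a 256-bit key from a password. The salt is stored alongside the
# ciphertext; the iteration count is a tunable cost factor (example value).
password = b"correct horse battery staple"
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000, dklen=32)
print(key.hex())
```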
See also
Cryptographic key types
Diceware
EKMS
Group key
Keyed hash algorithm
Key authentication
Key derivation function
Key distribution center
Key escrow
Key exchange
Key generation
Key management
Key schedule
Key server
Key signature (cryptography)
Key signing party
Key stretching
Key-agreement protocol
glossary
Password psychology
Public key fingerprint
Random number generator
Session key
Tripcode
Machine-readable paper key
Weak key
References
Cryptography
Key management |
53042 | https://en.wikipedia.org/wiki/Symmetric-key%20algorithm | Symmetric-key algorithm | Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for both the encryption of plaintext and the decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. The requirement that both parties have access to the secret key is one of the main drawbacks of symmetric-key encryption, in comparison to public-key encryption (also known as asymmetric-key encryption). However, symmetric-key encryption is usually better for bulk encryption; it produces smaller file sizes, which allows for less storage space and faster transmission. Because of this, asymmetric encryption is often used to exchange the secret key for symmetric-key encryption.
Types
Symmetric-key encryption can use either stream ciphers or block ciphers.
Stream ciphers encrypt the digits (typically bytes), or letters (in substitution ciphers) of a message one at a time. An example is ChaCha20.
Substitution ciphers are well-known ciphers, but can be easily decrypted using a frequency table.
Block ciphers take a number of bits and encrypt them as a single unit, padding the plaintext so that it is a multiple of the block size. The Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001, uses 128-bit blocks.
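The padding step mentioned for block ciphers can be illustrated with a short sketch. The fragment below shows PKCS#7-style padding to a 16-byte block boundary; it is one common padding scheme, not the only one.

```python
# PKCS#7-style padding: append n copies of the byte n so the plaintext
# becomes a whole number of 16-byte blocks before block-cipher encryption.
def pad(data: bytes, block_size: int = 16) -> bytes:
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    return data[:-data[-1]]

padded = pad(b"attack at dawn")          # 14 bytes of text + two 0x02 padding bytes
print(len(padded), padded)
assert unpad(padded) == b"attack at dawn"
```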
Implementations
Examples of popular symmetric-key algorithms include Twofish, Serpent, AES (Rijndael), Camellia, Salsa20, ChaCha20, Blowfish, CAST5, Kuznyechik, RC4, DES, 3DES, Skipjack, Safer, and IDEA.
Use as a cryptographic primitive
Symmetric ciphers are commonly used to achieve cryptographic primitives other than just encryption.
Encrypting a message does not guarantee that it will remain unchanged while encrypted. Hence, often a message authentication code is added to a ciphertext to ensure that changes to the ciphertext will be noted by the receiver. Message authentication codes can be constructed from an AEAD cipher (e.g. AES-GCM).
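As a minimal sketch of attaching a message authentication code, the fragment below uses HMAC-SHA256 from Python's standard library in an encrypt-then-MAC arrangement; the ciphertext bytes are a placeholder, and in practice an AEAD cipher such as AES-GCM performs encryption and authentication in one primitive, as noted above.

```python
import hashlib
import hmac
import secrets

mac_key = secrets.token_bytes(32)
ciphertext = b"...bytes produced by some symmetric cipher..."   # placeholder

# Sender: compute a tag over the ciphertext and transmit both.
tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

# Receiver: recompute the tag and compare in constant time before decrypting.
expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```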
However, symmetric ciphers cannot be used for non-repudiation purposes except by involving additional parties. See the ISO/IEC 13888-2 standard.
Another application is to build hash functions from block ciphers. See one-way compression function for descriptions of several such methods.
Construction of symmetric ciphers
Many modern block ciphers are based on a construction proposed by Horst Feistel. Feistel's construction makes it possible to build invertible functions from other functions that are themselves not invertible.
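A toy version of the Feistel construction shows the trick: the round function is not invertible, yet the whole network is, because each round only XORs the round function's output into half of the state. The parameters below are arbitrary and form a sketch, not a real cipher.

```python
import hashlib

def f(half: bytes, round_key: bytes) -> bytes:
    """A non-invertible round function (a truncated hash)."""
    return hashlib.sha256(round_key + half).digest()[:len(half)]

def feistel(block: bytes, round_keys, decrypt=False) -> bytes:
    left, right = block[:len(block) // 2], block[len(block) // 2:]
    keys = reversed(round_keys) if decrypt else round_keys
    for k in keys:
        left, right = right, bytes(a ^ b for a, b in zip(left, f(right, k)))
    return right + left                      # undo the final swap

round_keys = [b"k1", b"k2", b"k3", b"k4"]
ciphertext = feistel(b"16-byte message!", round_keys)
assert feistel(ciphertext, round_keys, decrypt=True) == b"16-byte message!"
```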
Security of symmetric ciphers
Symmetric ciphers have historically been susceptible to known-plaintext attacks, chosen-plaintext attacks, differential cryptanalysis and linear cryptanalysis. Careful construction of the functions for each round can greatly reduce the chances of a successful attack. It is also possible to increase the key length or the number of rounds in the encryption process to better protect against attack. This, however, tends to increase the required processing power and decrease the speed at which the process runs, due to the number of operations the system needs to do.
Key management
Key establishment
Symmetric-key algorithms require both the sender and the recipient of a message to have the same secret key.
All early cryptographic systems required either the sender or the recipient to somehow receive a copy of that secret key over a physically secure channel.
Nearly all modern cryptographic systems still use symmetric-key algorithms internally to encrypt the bulk of the messages, but they eliminate the need for a physically secure channel by using Diffie–Hellman key exchange or some other public-key protocol to securely come to agreement on a fresh new secret key for each session/conversation (forward secrecy).
Key generation
When used with asymmetric ciphers for key transfer, pseudorandom key generators are nearly always used to generate the symmetric cipher session keys. However, lack of randomness in those generators or in their initialization vectors is disastrous and has led to cryptanalytic breaks in the past. Therefore, it is essential that an implementation use a source of high entropy for its initialization.
Reciprocal cipher
A reciprocal cipher is a cipher where, just as one enters the plaintext into the cryptography system to get the ciphertext, one could enter the ciphertext into the same place in the system to get the plaintext. A reciprocal cipher is also sometimes referred to as a self-reciprocal cipher.
Practically all mechanical cipher machines implement a reciprocal cipher, a mathematical involution on each typed-in letter.
Instead of designing two kinds of machines, one for encrypting and one for decrypting, all the machines can be identical and can be set up (keyed) the same way.
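The reciprocal property can be demonstrated with two small sketches, using ROT13 and a repeating-key XOR cipher (both also appear in the list below): applying the same transformation twice returns the plaintext, so one setup serves for both encryption and decryption.

```python
import codecs

# ROT13: the same transformation encrypts and decrypts.
assert codecs.encode("Attack at dawn", "rot13") == "Nggnpx ng qnja"
assert codecs.encode(codecs.encode("Attack at dawn", "rot13"), "rot13") == "Attack at dawn"

# Repeating-key XOR: likewise self-inverse (the key here is a toy example).
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ct = xor_cipher(b"Attack at dawn", b"key")
assert xor_cipher(ct, b"key") == b"Attack at dawn"
```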
Examples of reciprocal ciphers include:
Atbash
Beaufort cipher
Enigma machine
Marie Antoinette and Axel von Fersen communicated with a self-reciprocal cipher.
the Porta polyalphabetic cipher is self-reciprocal.
Purple cipher
RC4
ROT13
XOR cipher
Vatsyayana cipher
The majority of all modern ciphers can be classified as either a stream cipher, most of which use a reciprocal XOR cipher combiner, or a block cipher, most of which use a Feistel cipher or Lai–Massey scheme with a reciprocal transformation in each round.
Notes
References
Cryptographic algorithms |
53064 | https://en.wikipedia.org/wiki/Kerckhoffs%27s%20principle | Kerckhoffs's principle | Kerckhoffs's principle (also called Kerckhoffs's desideratum, assumption, axiom, doctrine or law) of cryptography was stated by the Netherlands-born cryptographer Auguste Kerckhoffs in the 19th century. The principle holds that a cryptosystem should be secure, even if everything about the system, except the key, is public knowledge.
Kerckhoffs's principle was reformulated (or possibly independently formulated) by American mathematician Claude Shannon as "the enemy knows the system", i.e., "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them". In that form, it is called Shannon's maxim.
Another formulation is "design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.)"
This concept is widely embraced by cryptographers, in contrast to "security through obscurity", which is not.
Origins
In 1883 Auguste Kerckhoffs wrote two journal articles on La Cryptographie Militaire, in the first of which he stated six design principles for military ciphers. Translated from French, they are:
The system must be practically, if not mathematically, indecipherable;
It should not require secrecy, and it should not be a problem if it falls into enemy hands;
It must be possible to communicate and remember the key without using written notes, and correspondents must be able to change or modify it at will;
It must be applicable to telegraph communications;
It must be portable, and should not require several persons to handle or operate;
Lastly, given the circumstances in which it is to be used, the system must be easy to use and should not be stressful to use or require its users to know and comply with a long list of rules.
Some are no longer relevant given the ability of computers to perform complex encryption, but his second axiom, now known as Kerckhoffs's principle, is still critically important.
Explanation of the principle
Kerckhoffs viewed cryptography as a rival to, and a better alternative than, steganographic encoding, which was common in the nineteenth century for hiding the meaning of military messages. One problem with encoding schemes is that they rely on humanly-held secrets such as "dictionaries" which disclose, for example, the secret meaning of words. Steganographic-like dictionaries, once revealed, permanently compromise a corresponding encoding system. Another problem is that the risk of exposure increases as the number of users holding the secrets increases.
Nineteenth century cryptography, in contrast, used simple tables which provided for the transposition of alphanumeric characters, generally given row-column intersections which could be modified by keys which were generally short, numeric, and could be committed to human memory. The system was considered "indecipherable" because tables and keys do not convey meaning by themselves. Secret messages can be compromised only if a matching set of table, key, and message falls into enemy hands in a relevant time frame. Kerckhoffs viewed tactical messages as only having a few hours of relevance. Systems are not necessarily compromised, because their components (i.e. alphanumeric character tables and keys) can be easily changed.
Advantage of secret keys
Using secure cryptography is supposed to replace the difficult problem of keeping messages secure with a much more manageable one, keeping relatively small keys secure. A system that requires long-term secrecy for something as large and complex as the whole design of a cryptographic system obviously cannot achieve that goal. It only replaces one hard problem with another. However, if a system is secure even when the enemy knows everything except the key, then all that is needed is to manage keeping the keys secret.
There are a large number of ways the internal details of a widely used system could be discovered. The most obvious is that someone could bribe, blackmail, or otherwise threaten staff or customers into explaining the system. In war, for example, one side will probably capture some equipment and people from the other side. Each side will also use spies to gather information.
If a method involves software, someone could do memory dumps or run the software under the control of a debugger in order to understand the method. If hardware is being used, someone could buy or steal some of the hardware and build whatever programs or gadgets needed to test it. Hardware can also be dismantled so that the chip details can be examined under the microscope.
Maintaining security
A generalization some make from Kerckhoffs's principle is: "The fewer and simpler the secrets that one must keep to ensure system security, the easier it is to maintain system security." Bruce Schneier ties it in with a belief that all security systems must be designed to fail as gracefully as possible.
Any security system depends crucially on keeping some things secret. However, Kerckhoffs's principle points out that the things kept secret ought to be those least costly to change if inadvertently disclosed.
For example, a cryptographic algorithm may be implemented by hardware and software that is widely distributed among users. If security depends on keeping that secret, then disclosure leads to major logistic difficulties in developing, testing, and distributing implementations of a new algorithm – it is "brittle". On the other hand, if keeping the algorithm secret is not important, but only the keys used with the algorithm must be secret, then disclosure of the keys simply requires the simpler, less costly process of generating and distributing new keys.
Applications
In accordance with Kerckhoffs's principle, the majority of civilian cryptography makes use of publicly known algorithms. By contrast, ciphers used to protect classified government or military information are often kept secret (see Type 1 encryption). However, it should not be assumed that government/military ciphers must be kept secret to maintain security. It is possible that they are intended to be as cryptographically sound as public algorithms, and the decision to keep them secret is in keeping with a layered security posture.
Security through obscurity
It is moderately common for companies, and sometimes even standards bodies as in the case of the CSS encryption on DVDs, to keep the inner workings of a system secret. Some argue this "security by obscurity" makes the product safer and less vulnerable to attack. A counter-argument is that keeping the innards secret may improve security in the short term, but in the long run, only systems that have been published and analyzed should be trusted.
Steven Bellovin and Randy Bush have made similar observations.
Notes
References
External links
John Savard article discussing Kerckhoffs's design goals for ciphers
Reference to Kerckhoffs's original paper, with scanned original text
Computer architecture statements
Cryptography |
53289 | https://en.wikipedia.org/wiki/File%20Transfer%20Protocol | File Transfer Protocol | The File Transfer Protocol (FTP) is a standard communication protocol used for the transfer of computer files from a server to a client on a computer network. FTP is built on a client–server model architecture using separate control and data connections between the client and the server. FTP users may authenticate themselves with a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it. For secure transmission that protects the username and password, and encrypts the content, FTP is often secured with SSL/TLS (FTPS) or replaced with SSH File Transfer Protocol (SFTP).
The first FTP client applications were command-line programs developed before operating systems had graphical user interfaces, and are still shipped with most Windows, Unix, and Linux operating systems. Many FTP clients and automation utilities have since been developed for desktops, servers, mobile devices, and hardware, and FTP has been incorporated into productivity applications, such as HTML editors.
In January 2021, support for the FTP protocol was disabled in Google Chrome 88, and disabled in Firefox 88.0. In July 2021, Firefox 90 dropped FTP entirely, and Google followed suit in October 2021, removing FTP entirely in Google Chrome 95.
History of FTP servers
The original specification for the File Transfer Protocol was written by Abhay Bhushan and published as on 16 April 1971. Until 1980, FTP ran on NCP, the predecessor of TCP/IP. The protocol was later replaced by a TCP/IP version, (June 1980) and (October 1985), the current specification. Several proposed standards amend , for example (February 1994) enables Firewall-Friendly FTP (passive mode), (June 1997) proposes security extensions, (September 1998) adds support for IPv6 and defines a new type of passive mode.
Protocol overview
Communication and data transfer
FTP may run in active or passive mode, which determines how the data connection is established. (This sense of "mode" is different from that of the MODE command in the FTP protocol, and corresponds to the PORT/PASV/EPSV/etc commands instead.) In both cases, the client creates a TCP control connection from a random, usually an unprivileged, port N to the FTP server command port 21.
In active mode, the client starts listening for incoming data connections from the server on port M. It sends the FTP command PORT M to inform the server on which port it is listening. The server then initiates a data channel to the client from its port 20, the FTP server data port.
In situations where the client is behind a firewall and unable to accept incoming TCP connections, passive mode may be used. In this mode, the client uses the control connection to send a PASV command to the server and then receives a server IP address and server port number from the server, which the client then uses to open a data connection from an arbitrary client port to the server IP address and server port number received.
Both modes were updated in September 1998 to support IPv6. Further changes were introduced to the passive mode at that time, updating it to extended passive mode.
The server responds over the control connection with three-digit status codes in ASCII with an optional text message. For example, "200" (or "200 OK") means that the last command was successful. The numbers represent the code for the response and the optional text represents a human-readable explanation or request (e.g. <Need account for storing file>). An ongoing transfer of file data over the data connection can be aborted using an interrupt message sent over the control connection.
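The command/reply exchange on the control connection can be observed directly. The sketch below opens a TCP connection to port 21 and issues a few commands; the host name is a placeholder, a reachable anonymous FTP server is assumed, and a robust client would also parse multi-line replies.

```python
import socket

HOST = "ftp.example.org"          # placeholder; substitute a reachable FTP server

with socket.create_connection((HOST, 21), timeout=10) as conn:
    f = conn.makefile("rwb")
    print(f.readline().decode().strip())              # e.g. "220 Service ready"
    for cmd in (b"USER anonymous\r\n",
                b"PASS guest@example.org\r\n",
                b"SYST\r\n",
                b"QUIT\r\n"):
        f.write(cmd)
        f.flush()
        print(f.readline().decode().strip())          # three-digit reply code + text
```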
FTP needs two ports (one for sending and one for receiving) because it was originally designed to operate on Network Control Program (NCP), which was a simplex protocol that utilized two port addresses, establishing two connections, for two-way communications. An odd and an even port were reserved for each application layer application or protocol. The standardization of TCP and UDP reduced the need for the use of two simplex ports for each application down to one duplex port, but the FTP protocol was never altered to only use one port, and continued using two for backwards compatibility.
NAT and firewall traversal
FTP normally transfers data by having the server connect back to the client, after the PORT command is sent by the client. This is problematic for both NATs and firewalls, which do not allow connections from the Internet towards internal hosts. For NATs, an additional complication is that the representation of the IP addresses and port number in the PORT command refer to the internal host's IP address and port, rather than the public IP address and port of the NAT.
There are two approaches to solve this problem. One is that the FTP client and FTP server use the PASV command, which causes the data connection to be established from the FTP client to the server. This is widely used by modern FTP clients. Another approach is for the NAT to alter the values of the PORT command, using an application-level gateway for this purpose.
Data types
While transferring data over the network, four data types are defined:
ASCII (TYPE A): Used for text. Data is converted, if needed, from the sending host's character representation to "8-bit ASCII" before transmission, and (again, if necessary) to the receiving host's character representation. As a consequence, this mode is inappropriate for files that contain data other than plain text.
Image (TYPE I, commonly called Binary mode): The sending machine sends each file byte by byte, and the recipient stores the bytestream as it receives it. (Image mode support has been recommended for all implementations of FTP).
EBCDIC (TYPE E): Used for plain text between hosts using the EBCDIC character set.
Local (TYPE L n): Designed to support file transfer between machines which do not use 8-bit bytes, e.g. 36-bit systems such as DEC PDP-10s. For example, "TYPE L 9" would be used to transfer data in 9-bit bytes, or "TYPE L 36" to transfer 36-bit words. Most contemporary FTP clients/servers only support L 8, which is equivalent to I.
An expired Internet Draft defined a TYPE U for transferring Unicode text files using UTF-8; although the draft never became an RFC, it has been implemented by several FTP clients/servers.
Note these data types are commonly called "modes", although ambiguously that word is also used to refer to active-vs-passive communication mode (see above), and the modes set by the FTP protocol MODE command (see below).
For text files (TYPE A and TYPE E), three different format control options are provided, to control how the file would be printed:
Non-print (TYPE A N and TYPE E N) – the file does not contain any carriage control characters intended for a printer
Telnet (TYPE A T and TYPE E T) – the file contains Telnet (or in other words, ASCII C0) carriage control characters (CR, LF, etc)
ASA (TYPE A A and TYPE E A) – the file contains ASA carriage control characters
These formats were mainly relevant to line printers; most contemporary FTP clients/servers only support the default format control of N.
File structures
File organization is specified using the STRU command. The following file structures are defined in section 3.1.1 of RFC959:
F or FILE structure (stream-oriented). Files are viewed as an arbitrary sequence of bytes, characters or words. This is the usual file structure on Unix systems and other systems such as CP/M, MS-DOS and Microsoft Windows. (Section 3.1.1.1)
R or RECORD structure (record-oriented). Files are viewed as divided into records, which may be fixed or variable length. This file organization is common on mainframe and midrange systems, such as MVS, VM/CMS, OS/400 and VMS, which support record-oriented filesystems.
P or PAGE structure (page-oriented). Files are divided into pages, which may either contain data or metadata; each page may also have a header giving various attributes. This file structure was specifically designed for TENEX systems, and is generally not supported on other platforms. RFC1123 section 4.1.2.3 recommends that this structure not be implemented.
Most contemporary FTP clients and servers only support STRU F. STRU R is still in use in mainframe and minicomputer file transfer applications.
Data transfer modes
Data transfer can be done in any of three modes:
Stream mode (MODE S): Data is sent as a continuous stream, relieving FTP from doing any processing. Rather, all processing is left up to TCP. No End-of-file indicator is needed, unless the data is divided into records.
Block mode (MODE B): Designed primarily for transferring record-oriented files (STRU R), although can also be used to transfer stream-oriented (STRU F) text files. FTP puts each record (or line) of data into several blocks (block header, byte count, and data field) and then passes it on to TCP.
Compressed mode (MODE C): Extends MODE B with data compression using run-length encoding.
Most contemporary FTP clients and servers do not implement MODE B or MODE C; FTP clients and servers for mainframe and minicomputer operating systems are the exception to that.
Some FTP software also implements a DEFLATE-based compressed mode, sometimes called "Mode Z" after the command that enables it. This mode was described in an Internet Draft, but not standardized.
GridFTP defines additional modes, MODE E and MODE X, as extensions of MODE B.
Additional commands
More recent implementations of FTP support the Modify Fact: Modification Time (MFMT) command, which allows a client to adjust that file attribute remotely, enabling the preservation of that attribute when uploading files.
To retrieve a remote file timestamp, there is the MDTM command. Some servers (and clients) support a nonstandard syntax of the MDTM command with two arguments, which works the same way as MFMT.
Login
FTP login uses normal username and password scheme for granting access. The username is sent to the server using the USER command, and the password is sent using the PASS command. This sequence is unencrypted "on the wire", so may be vulnerable to a network sniffing attack. If the information provided by the client is accepted by the server, the server will send a greeting to the client and the session will commence. If the server supports it, users may log in without providing login credentials, but the same server may authorize only limited access for such sessions.
Anonymous FTP
A host that provides an FTP service may provide anonymous FTP access. Users typically log into the service with an 'anonymous' (lower-case and case-sensitive in some FTP servers) account when prompted for user name. Although users are commonly asked to send their email address instead of a password, no verification is actually performed on the supplied data. Many FTP hosts whose purpose is to provide software updates will allow anonymous logins.
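For comparison with the raw exchange shown earlier, Python's standard-library client handles the same steps; the host name below is a placeholder and an anonymous-access server is assumed. ftplib defaults to passive mode, which suits clients behind NAT or a firewall.

```python
from ftplib import FTP

with FTP("ftp.example.org", timeout=10) as ftp:   # placeholder host
    print(ftp.getwelcome())       # the server's 220 greeting
    print(ftp.login())            # anonymous login is the default for login()
    ftp.retrlines("LIST")         # directory listing over a separate data connection
```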
Differences from HTTP
HTTP essentially fixes the bugs in FTP that made it inconvenient to use for many small ephemeral transfers as are typical in web pages.
FTP has a stateful control connection which maintains a current working directory and other flags, and each transfer requires a secondary connection through which the data are transferred. In "passive" mode this secondary connection is from client to server, whereas in the default "active" mode this connection is from server to client. This apparent role reversal when in active mode, and random port numbers for all transfers, is why firewalls and NAT gateways have such a hard time with FTP. HTTP is stateless and multiplexes control and data over a single connection from client to server on well-known port numbers, which trivially passes through NAT gateways and is simple for firewalls to manage.
Setting up an FTP control connection is quite slow due to the round-trip delays of sending all of the required commands and awaiting responses, so it is customary to bring up a control connection and hold it open for multiple file transfers rather than drop and re-establish the session afresh each time. In contrast, HTTP originally dropped the connection after each transfer because doing so was so cheap. While HTTP has subsequently gained the ability to reuse the TCP connection for multiple transfers, the conceptual model is still of independent requests rather than a session.
When FTP is transferring over the data connection, the control connection is idle. If the transfer takes too long, the firewall or NAT may decide that the control connection is dead and stop tracking it, effectively breaking the connection and confusing the download. The single HTTP connection is only idle between requests and it is normal and expected for such connections to be dropped after a time-out.
Software support
Web browser
Most common web browsers can retrieve files hosted on FTP servers, although they may not support protocol extensions such as FTPS. When an FTP—rather than an HTTP—URL is supplied, the accessible contents on the remote server are presented in a manner that is similar to that used for other web content. FireFTP is a browser extension designed as a full-featured FTP client; it could be run within Firefox in the past, but it is now recommended for use with Waterfox.
Google Chrome removed FTP support entirely in Chrome 88. As of 2019, Mozilla was discussing proposals, including only removing support for old FTP implementations that were no longer in use in order to simplify their code. In April 2021, Mozilla released Firefox 88.0, which disabled FTP support by default. In July 2021, Firefox 90 dropped FTP support entirely.
Syntax
FTP URL syntax is described in , taking the form: ftp://[user[:password]@]host[:port]/url-path (the bracketed parts are optional).
For example, the URL ftp://public.ftp-servers.example.com/mydirectory/myfile.txt represents the file myfile.txt from the directory mydirectory on the server public.ftp-servers.example.com as an FTP resource. The URL ftp://user001:secretpassword@private.ftp-servers.example.com/mydirectory/myfile.txt adds a specification of the username and password that must be used to access this resource.
More details on specifying a username and password may be found in the browsers' documentation (e.g., Firefox and Internet Explorer). By default, most web browsers use passive (PASV) mode, which more easily traverses end-user firewalls.
Some variation has existed in how different browsers treat path resolution in cases where there is a non-root home directory for a user.
Download manager
Most common download managers can retrieve files hosted on FTP servers, and some also provide an interface for browsing the files hosted on an FTP server. DownloadStudio and Internet Download Accelerator, for example, allow the user not only to download a file from an FTP server but also to view the list of files on the server.
Security
FTP was not designed to be a secure protocol, and has many security weaknesses. In May 1999, the authors of listed FTP's vulnerability to the following problems:
Brute-force attack
FTP bounce attack
Packet capture
Port stealing (guessing the next open port and usurping a legitimate connection)
Spoofing attack
Username enumeration
DoS or DDoS
FTP does not encrypt its traffic; all transmissions are in clear text, and usernames, passwords, commands and data can be read by anyone able to perform packet capture (sniffing) on the network. This problem is common to many of the Internet Protocol specifications (such as SMTP, Telnet, POP and IMAP) that were designed prior to the creation of encryption mechanisms such as TLS or SSL.
Common solutions to this problem include:
Using the secure versions of the insecure protocols, e.g., FTPS instead of FTP and TelnetS instead of Telnet.
Using a different, more secure protocol that can handle the job, e.g. SSH File Transfer Protocol or Secure Copy Protocol.
Using a secure tunnel such as Secure Shell (SSH) or virtual private network (VPN).
FTP over SSH
FTP over SSH is the practice of tunneling a normal FTP session over a Secure Shell connection. Because FTP uses multiple TCP connections (unusual for a TCP/IP protocol that is still in use), it is particularly difficult to tunnel over SSH. With many SSH clients, attempting to set up a tunnel for the control channel (the initial client-to-server connection on port 21) will protect only that channel; when data is transferred, the FTP software at either end sets up new TCP connections (data channels), which thus have no confidentiality or integrity protection.
Otherwise, it is necessary for the SSH client software to have specific knowledge of the FTP protocol, to monitor and rewrite FTP control channel messages and autonomously open new packet forwardings for FTP data channels. Software packages that support this mode include:
Tectia ConnectSecure (Win/Linux/Unix) of SSH Communications Security's software suite
Derivatives
FTPS
Explicit FTPS is an extension to the FTP standard that allows clients to request that FTP sessions be encrypted. This is done by sending the "AUTH TLS" command. The server has the option of allowing or denying connections that do not request TLS. This protocol extension is defined in . Implicit FTPS is an outdated standard for FTP that required the use of an SSL or TLS connection. It was specified to use different ports than plain FTP.
SSH File Transfer Protocol
The SSH file transfer protocol (chronologically the second of the two protocols abbreviated SFTP) transfers files and has a similar command set for users, but uses the Secure Shell protocol (SSH) to transfer files. Unlike FTP, it encrypts both commands and data, preventing passwords and sensitive information from being transmitted openly over the network. It cannot interoperate with FTP software.
Trivial File Transfer Protocol
Trivial File Transfer Protocol (TFTP) is a simple, lock-step FTP that allows a client to get a file from or put a file onto a remote host. One of its primary uses is in the early stages of booting from a local area network, because TFTP is very simple to implement. TFTP lacks security and most of the advanced features offered by more robust file transfer protocols such as File Transfer Protocol. TFTP was first standardized in 1981 and the current specification for the protocol can be found in .
Simple File Transfer Protocol
Simple File Transfer Protocol (the first protocol abbreviated SFTP), as defined by , was proposed as an (unsecured) file transfer protocol with a level of complexity intermediate between TFTP and FTP. It was never widely accepted on the Internet, and is now assigned Historic status by the IETF. It runs through port 115, and often receives the initialism SFTP. It has a command set of 11 commands and supports three types of data transmission: ASCII, binary and continuous. For systems with a word size that is a multiple of 8 bits, the implementation of binary and continuous is the same. The protocol also supports login with user ID and password, hierarchical folders and file management (including rename, delete, upload, download, download with overwrite, and download with append).
FTP commands
FTP reply codes
Below is a summary of FTP reply codes that may be returned by an FTP server. These codes have been standardized by the IETF in . The reply code is a three-digit value. The first digit is used to indicate one of three possible outcomes: success, failure, or an error or incomplete reply:
2yz – Success reply
4yz or 5yz – Failure reply
1yz or 3yz – Error or Incomplete reply
The second digit defines the kind of error:
x0z – Syntax. These replies refer to syntax errors.
x1z – Information. Replies to requests for information.
x2z – Connections. Replies referring to the control and data connections.
x3z – Authentication and accounting. Replies for the login process and accounting procedures.
x4z – Not defined.
x5z – File system. These replies relay status codes from the server file system.
The third digit of the reply code is used to provide additional detail for each of the categories defined by the second digit.
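A minimal sketch of how a client might classify a reply code by its first two digits, following the categories listed above (the specific codes 230 and 425 are real FTP replies, used here only as examples):

```python
def classify_reply(code: str) -> str:
    """Classify a three-digit FTP reply code by its first two digits."""
    outcome = {"2": "success", "4": "failure", "5": "failure",
               "1": "error or incomplete", "3": "error or incomplete"}
    kind = {"0": "syntax", "1": "information", "2": "connections",
            "3": "authentication and accounting", "4": "not defined", "5": "file system"}
    return f"{code}: {outcome.get(code[0], 'unknown')} ({kind.get(code[1], 'unknown')})"

print(classify_reply("230"))   # a successful reply in the authentication category (login accepted)
print(classify_reply("425"))   # a failure reply in the connections category (data connection problem)
```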
See also
References
Further reading
– CWD Command of FTP. July 1975.
– (Standard) File Transfer Protocol (FTP). J. Postel, J. Reynolds. October 1985.
– (Informational) Firewall-Friendly FTP. February 1994.
– (Informational) How to Use Anonymous FTP. May 1994.
– FTP Operation Over Big Address Records (FOOBAR). June 1994.
– Uniform Resource Locators (URL). December 1994.
– (Proposed Standard) FTP Security Extensions. October 1997.
– (Proposed Standard) Feature negotiation mechanism for the File Transfer Protocol. August 1998.
– (Proposed Standard) Extensions for IPv6, NAT, and Extended passive mode. September 1998.
– (Informational) FTP Security Considerations. May 1999.
– (Proposed Standard) Internationalization of the File Transfer Protocol. July 1999.
– (Proposed Standard) Extensions to FTP. P. Hethmon. March 2007.
– (Proposed Standard) FTP Command and Extension Registry. March 2010.
– (Proposed Standard) File Transfer Protocol HOST Command for Virtual Hosts. March 2014.
IANA FTP Commands and Extensions registry – The official registry of FTP Commands and Extensions
External links
FTP Server Online Tester Authentication, encryption, mode and connectivity.
Anonymous FTP Servers by Country Code TLD (2012):
Application layer protocols
Clear text protocols
Computer-related introductions in 1971
History of the Internet
Internet Standards
Network file transfer protocols
OS/2 commands
Unix network-related software
File sharing |
53452 | https://en.wikipedia.org/wiki/Euler%27s%20totient%20function | Euler's totient function | In number theory, Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n. It is written using the Greek letter phi as φ(n) or ϕ(n), and may also be called Euler's phi function. In other words, it is the number of integers k in the range 1 ≤ k ≤ n for which the greatest common divisor gcd(n, k) is equal to 1. The integers k of this form are sometimes referred to as totatives of n.
For example, the totatives of n = 9 are the six numbers 1, 2, 4, 5, 7 and 8. They are all relatively prime to 9, but the other three numbers in this range, 3, 6, and 9 are not, since gcd(9, 3) = gcd(9, 6) = 3 and gcd(9, 9) = 9. Therefore, φ(9) = 6. As another example, φ(1) = 1 since for n = 1 the only integer in the range from 1 to n is 1 itself, and gcd(1, 1) = 1.
Euler's totient function is a multiplicative function, meaning that if two numbers m and n are relatively prime, then φ(mn) = φ(m)φ(n).
This function gives the order of the multiplicative group of integers modulo n (the group of units of the ring Z/nZ). It is also used for defining the RSA encryption system.
History, terminology, and notation
Leonhard Euler introduced the function in 1763. However, he did not at that time choose any specific symbol to denote it. In a 1784 publication, Euler studied the function further, choosing the Greek letter π to denote it: he wrote πD for "the multitude of numbers less than D, and which have no common divisor with it". This definition varies from the current definition for the totient function at D = 1 but is otherwise the same. The now-standard notation φ(A) comes from Gauss's 1801 treatise Disquisitiones Arithmeticae, although Gauss didn't use parentheses around the argument and wrote φA. Thus, it is often called Euler's phi function or simply the phi function.
In 1879, J. J. Sylvester coined the term totient for this function, so it is also referred to as Euler's totient function, the Euler totient, or Euler's totient. Jordan's totient is a generalization of Euler's.
The cototient of n is defined as n − φ(n). It counts the number of positive integers less than or equal to n that have at least one prime factor in common with n.
Computing Euler's totient function
There are several formulas for computing φ(n).
Euler's product formula
It states
φ(n) = n ∏_{p ∣ n} (1 − 1/p),
where the product is over the distinct prime numbers p dividing n. (For notation, see Arithmetical function.)
An equivalent formulation is
φ(n) = p1^(k1 − 1)(p1 − 1) · p2^(k2 − 1)(p2 − 1) ⋯ pr^(kr − 1)(pr − 1),
where n = p1^k1 p2^k2 ⋯ pr^kr and p1, p2, ..., pr are the distinct primes dividing n. The proof of these formulas depends on two important facts.
Phi is a multiplicative function
This means that if gcd(m, n) = 1, then φ(m) φ(n) = φ(mn). Proof outline: Let A, B, C be the sets of positive integers which are coprime to and less than m, n, mn, respectively, so that |A| = φ(m), etc. Then there is a bijection between A × B and C by the Chinese remainder theorem.
Value of phi for a prime power argument
If p is prime and k ≥ 1, then
φ(p^k) = p^k − p^(k−1) = p^(k−1)(p − 1).
Proof: Since p is a prime number, the only possible values of gcd(p^k, m) are 1, p, p², ..., p^k, and the only way to have gcd(p^k, m) > 1 is if m is a multiple of p, i.e. m ∈ {p, 2p, 3p, ..., p^(k−1)p = p^k}, and there are p^(k−1) such multiples not greater than p^k. Therefore, the other p^k − p^(k−1) numbers are all relatively prime to p^k.
Proof of Euler's product formula
The fundamental theorem of arithmetic states that if n > 1 there is a unique expression n = p1^k1 p2^k2 ⋯ pr^kr, where p1 < p2 < ... < pr are prime numbers and each ki ≥ 1. (The case n = 1 corresponds to the empty product.) Repeatedly using the multiplicative property of φ and the formula for φ(p^k) gives
φ(n) = φ(p1^k1) φ(p2^k2) ⋯ φ(pr^kr)
= p1^(k1−1)(p1 − 1) p2^(k2−1)(p2 − 1) ⋯ pr^(kr−1)(pr − 1)
= n (1 − 1/p1)(1 − 1/p2) ⋯ (1 − 1/pr).
This gives both versions of Euler's product formula.
An alternative proof that does not require the multiplicative property instead uses the inclusion-exclusion principle applied to the set {1, 2, ..., n}, excluding the sets of integers divisible by the prime divisors.
Example
φ(20) = φ(2² · 5) = 20 (1 − 1/2)(1 − 1/5) = 8.
In words: the distinct prime factors of 20 are 2 and 5; half of the twenty integers from 1 to 20 are divisible by 2, leaving ten; a fifth of those are divisible by 5, leaving eight numbers coprime to 20; these are: 1, 3, 7, 9, 11, 13, 17, 19.
The alternative formula uses only integers: φ(20) = 2^(2−1)(2 − 1) · 5^(1−1)(5 − 1) = 2 · 1 · 1 · 4 = 8.
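The product formula translates directly into a short program. The following Python sketch computes φ(n) with integer arithmetic by trial-dividing out each distinct prime factor; it reproduces the worked example φ(20) = 8:

```python
def totient(n: int) -> int:
    """phi(n) via Euler's product formula, using only integer arithmetic."""
    result, m = n, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:       # strip out every copy of the prime factor p
                m //= p
            result -= result // p   # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                       # whatever remains is itself a prime factor
        result -= result // m
    return result

print(totient(20))   # 8
print(totient(9))    # 6
```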
Fourier transform
The totient is the discrete Fourier transform of the gcd, evaluated at 1. Let
F = {f(1), ..., f(n)}, where f(k) = gcd(k, n) for 1 ≤ k ≤ n. Then
φ(n) = Σ_{k=1}^{n} f(k) e^(−2πik/n).
The real part of this formula is
φ(n) = Σ_{k=1}^{n} gcd(k, n) cos(2πk/n).
For example, for n = 4: φ(4) = gcd(1, 4) cos(π/2) + gcd(2, 4) cos(π) + gcd(3, 4) cos(3π/2) + gcd(4, 4) cos(2π) = 0 − 2 + 0 + 4 = 2. Unlike the Euler product and the divisor sum formula, this one does not require knowing the factors of n. However, it does involve the calculation of the greatest common divisor of n and every positive integer less than n, which suffices to provide the factorization anyway.
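For illustration, the cosine form of this identity can be checked numerically; the following Python sketch rounds the (theoretically integer) sum to absorb floating-point error:

```python
import math

def totient_via_gcd(n):
    """phi(n) = sum over k of gcd(k, n) * cos(2*pi*k/n), rounded to the nearest integer."""
    s = sum(math.gcd(k, n) * math.cos(2 * math.pi * k / n) for k in range(1, n + 1))
    return round(s)

print(totient_via_gcd(4))    # 2
print(totient_via_gcd(20))   # 8
```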
Divisor sum
The property established by Gauss, that
Σ_{d ∣ n} φ(d) = n,
where the sum is over all positive divisors d of n, can be proven in several ways. (See Arithmetical function for notational conventions.)
One proof is to note that φ(d) is also equal to the number of possible generators of the cyclic group C_d; specifically, if C_d = ⟨g⟩ with g^d = 1, then g^k is a generator for every k coprime to d. Since every element of C_n generates a cyclic subgroup, and each subgroup C_d ⊆ C_n is generated by precisely φ(d) elements of C_n, the formula follows. Equivalently, the formula can be derived by the same argument applied to the multiplicative group of the nth roots of unity and the primitive dth roots of unity.
The formula can also be derived from elementary arithmetic. For example, let n = 20 and consider the positive fractions up to 1 with denominator 20:
1/20, 2/20, 3/20, 4/20, 5/20, 6/20, 7/20, 8/20, 9/20, 10/20, 11/20, 12/20, 13/20, 14/20, 15/20, 16/20, 17/20, 18/20, 19/20, 20/20.
Put them into lowest terms:
1/20, 1/10, 3/20, 1/5, 1/4, 3/10, 7/20, 2/5, 9/20, 1/2, 11/20, 3/5, 13/20, 7/10, 3/4, 4/5, 17/20, 9/10, 19/20, 1/1.
These twenty fractions are all the positive k/d ≤ 1 whose denominators are the divisors d = 1, 2, 4, 5, 10, 20. The fractions with 20 as denominator are those with numerators relatively prime to 20, namely 1/20, 3/20, 7/20, 9/20, 11/20, 13/20, 17/20, 19/20; by definition this is φ(20) fractions. Similarly, there are φ(10) fractions with denominator 10, and φ(5) fractions with denominator 5, etc. Thus the set of twenty fractions is split into subsets of size φ(d) for each d dividing 20. A similar argument applies for any n.
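The identity is easy to verify numerically. The following Python sketch uses a brute-force φ and sums it over the divisors of n = 20:

```python
from math import gcd

def phi(n):
    """Brute-force totient: count 1 <= k <= n with gcd(k, n) = 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n = 20
print(sum(phi(d) for d in range(1, n + 1) if n % d == 0))   # 20, as Gauss's identity predicts
```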
Möbius inversion applied to the divisor sum formula gives
φ(n) = Σ_{d ∣ n} μ(d) · n/d = n Σ_{d ∣ n} μ(d)/d,
where μ is the Möbius function, the multiplicative function defined by μ(1) = 1, μ(p) = −1 and μ(p^k) = 0 for each prime p and k ≥ 2. This formula may also be derived from the product formula by multiplying out ∏_{p ∣ n} (1 − 1/p) to get Σ_{d ∣ n} μ(d)/d.
An example:
φ(20) = μ(1)·20 + μ(2)·10 + μ(4)·5 + μ(5)·4 + μ(10)·2 + μ(20)·1 = 20 − 10 + 0 − 4 + 2 + 0 = 8.
Some values
The first 100 values are shown in the table and graph below:
{| class="wikitable" style="text-align: right"
|+ φ(n) for 1 ≤ n ≤ 100
! +
! 1 || 2 || 3 || 4 || 5 || 6 || 7 || 8 || 9 || 10
|-
! 0
| 1 || 1 || 2 || 2 || 4 || 2 || 6 || 4 || 6 || 4
|-
! 10
| 10 || 4 || 12 || 6 || 8 || 8 || 16 || 6 || 18 || 8
|-
! 20
| 12 || 10 || 22 || 8 || 20 || 12 || 18 || 12 || 28 || 8
|-
! 30
| 30 || 16 || 20 || 16 || 24 || 12 || 36 || 18 || 24 || 16
|-
! 40
| 40 || 12 || 42 || 20 || 24 || 22 || 46 || 16 || 42 || 20
|-
! 50
| 32 || 24 || 52 || 18 || 40 || 24 || 36 || 28 || 58 || 16
|-
! 60
| 60 || 30 || 36 || 32 || 48 || 20 || 66 || 32 || 44 || 24
|-
! 70
| 70 || 24 || 72 || 36 || 40 || 36 || 60 || 24 || 78 || 32
|-
! 80
| 54 || 40 || 82 || 24 || 64 || 42 || 56 || 40 || 88 || 24
|-
! 90
| 72 || 44 || 60 || 46 || 72 || 32 || 96 || 42 || 60 || 40
|}
In the graph at right the top line y = n − 1 is an upper bound valid for all n other than one, and attained if and only if n is a prime number. A simple lower bound is φ(n) ≥ √(n/2), which is rather loose: in fact, the lower limit of the graph is proportional to n/log log n.
Euler's theorem
This states that if a and n are relatively prime then
a^φ(n) ≡ 1 (mod n).
The special case where is prime is known as Fermat's little theorem.
This follows from Lagrange's theorem and the fact that φ(n) is the order of the multiplicative group of integers modulo n.
The RSA cryptosystem is based on this theorem: it implies that the inverse of the function a ↦ a^e mod n, where e is the (public) encryption exponent, is the function b ↦ b^d mod n, where d, the (private) decryption exponent, is the multiplicative inverse of e modulo φ(n). The difficulty of computing φ(n) without knowing the factorization of n is thus the difficulty of computing d: this is known as the RSA problem, which can be solved by factoring n. The owner of the private key knows the factorization, since an RSA private key is constructed by choosing n as the product of two (randomly chosen) large primes p and q. Only n is publicly disclosed, and given the difficulty to factor large numbers we have the guarantee that no one else knows the factorization.
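As a small numerical illustration of the theorem and of inverting an exponent e modulo φ(n), the following Python sketch uses the toy values n = 20 and e = 3 (not a realistic RSA modulus, and the base is chosen coprime to n so that Euler's theorem applies):

```python
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n, a = 20, 7
assert gcd(a, n) == 1
assert pow(a, phi(n), n) == 1          # Euler's theorem: a^phi(n) = 1 (mod n)

e = 3                                  # toy "public" exponent, coprime to phi(20) = 8
d = pow(e, -1, phi(n))                 # multiplicative inverse of e modulo phi(n) (Python 3.8+)
x = 7                                  # any x coprime to n
assert pow(pow(x, e, n), d, n) == x    # raising to e and then to d recovers x
```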
Other formulae
Note the special cases
Compare this to the formula
(See least common multiple.)
φ(n) is even for n ≥ 3. Moreover, if n has r distinct odd prime factors, then 2^r divides φ(n).
For any and such that there exists an such that .
where rad(n) is the radical of n (the product of all distinct primes dividing n).
(where γ is the Euler–Mascheroni constant).
where m > 1 is a positive integer and ω(m) is the number of distinct prime factors of m.
Menon's identity
In 1965 P. Kesava Menon proved
Σ_{1 ≤ k ≤ n, gcd(k, n) = 1} gcd(k − 1, n) = φ(n) d(n),
where d(n) is the number of divisors of n.
Formulae involving the golden ratio
Schneider found a pair of identities connecting the totient function, the golden ratio and the Möbius function μ(n). In this section Φ(n) is the totient function, and ϕ = (1 + √5)/2 is the golden ratio.
They are:
and
Subtracting them gives
Applying the exponential function to both sides of the preceding identity yields an infinite product formula for e:
The proof is based on the two formulae
Generating functions
The Dirichlet series for φ(n) may be written in terms of the Riemann zeta function as:
Σ_{n=1}^{∞} φ(n)/n^s = ζ(s − 1)/ζ(s).
The Lambert series generating function is
Σ_{n=1}^{∞} φ(n) q^n/(1 − q^n) = q/(1 − q)²,
which converges for |q| < 1.
Both of these are proved by elementary series manipulations and the formulae for φ(n).
Growth rate
In the words of Hardy & Wright, the order of φ(n) is "always 'nearly n'."
First
lim sup_{n→∞} φ(n)/n = 1,
but as n goes to infinity, for all δ > 0,
φ(n)/n^(1−δ) → ∞.
These two formulae can be proved by using little more than the formulae for φ(n) and the divisor sum function σ(n).
In fact, during the proof of the second formula, the inequality
6/π² < φ(n) σ(n)/n² < 1,
true for n > 1, is proved.
We also have
lim inf_{n→∞} (φ(n)/n) log log n = e^(−γ).
Here γ is Euler's constant, γ = 0.577215665..., so e^γ = 1.7810724... and e^(−γ) = 0.56145948....
Proving this does not quite require the prime number theorem. Since log log n goes to infinity, this formula shows that
lim inf_{n→∞} φ(n)/n = 0.
In fact, more is true.
and
The second inequality was shown by Jean-Louis Nicolas. Ribenboim says "The method of proof is interesting, in that the inequality is shown first under the assumption that the Riemann hypothesis is true, secondly under the contrary assumption."
For the average order, we have
φ(1) + φ(2) + ⋯ + φ(n) = (3/π²) n² + O(n (log n)^(2/3) (log log n)^(4/3)),
due to Arnold Walfisz, its proof exploiting estimates on exponential sums due to I. M. Vinogradov and N. M. Korobov (this is currently the best known estimate of this type). The "Big O" stands for a quantity that is bounded by a constant times the function of n inside the parentheses (which is small compared to n²).
This result can be used to prove that the probability of two randomly chosen numbers being relatively prime is 6/π².
Ratio of consecutive values
In 1950 Somayajulu proved
lim inf_{n→∞} φ(n + 1)/φ(n) = 0 and lim sup_{n→∞} φ(n + 1)/φ(n) = ∞.
In 1954 Schinzel and Sierpiński strengthened this, proving that the set
{φ(n + 1)/φ(n) : n = 1, 2, ...}
is dense in the positive real numbers. They also proved that the set
{φ(n)/n : n = 1, 2, ...}
is dense in the interval (0, 1).
Totient numbers
A totient number is a value of Euler's totient function: that is, an m for which there is at least one n for which φ(n) = m. The valency or multiplicity of a totient number m is the number of solutions to this equation. A nontotient is a natural number which is not a totient number. Every odd integer exceeding 1 is trivially a nontotient. There are also infinitely many even nontotients, and indeed every positive integer has a multiple which is an even nontotient.
The number of totient numbers up to a given limit is
for a constant .
If counted according to multiplicity, the number of totient numbers up to a given limit is
where the error term is of order at most for any positive .
It is known that the multiplicity of exceeds infinitely often for any .
Ford's theorem
Ford proved that for every integer k ≥ 2 there is a totient number m of multiplicity k: that is, for which the equation φ(n) = m has exactly k solutions; this result had previously been conjectured by Wacław Sierpiński, and it had been obtained as a consequence of Schinzel's hypothesis H. Indeed, each multiplicity that occurs, does so infinitely often.
However, no number m is known with multiplicity k = 1. Carmichael's totient function conjecture is the statement that there is no such m.
Perfect totient numbers
Applications
Cyclotomy
In the last section of the Disquisitiones Gauss proves that a regular n-gon can be constructed with straightedge and compass if φ(n) is a power of 2. If n is a power of an odd prime number p, the formula for the totient says its totient can be a power of two only if n is a first power and p − 1 is a power of 2. The primes that are one more than a power of 2 are called Fermat primes, and only five are known: 3, 5, 17, 257, and 65537. Fermat and Gauss knew of these. Nobody has been able to prove whether there are any more.
Thus, a regular -gon has a straightedge-and-compass construction if n is a product of distinct Fermat primes and any power of 2. The first few such are
2, 3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40,... .
The RSA cryptosystem
Setting up an RSA system involves choosing large prime numbers p and q, computing n = pq and k = φ(n), and finding two numbers e and d such that ed ≡ 1 (mod k). The numbers n and e (the "encryption key") are released to the public, and d (the "decryption key") is kept private.
A message, represented by an integer m, where 0 < m < n, is encrypted by computing S = m^e (mod n).
It is decrypted by computing t = S^d (mod n). Euler's Theorem can be used to show that if 0 < t < n, then t = m.
The security of an RSA system would be compromised if the number n could be factored or if φ(n) could be computed without factoring n.
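A toy walk-through with small primes (the classic textbook values p = 61 and q = 53; real systems use primes hundreds of digits long) makes the steps above concrete:

```python
# Toy RSA: tiny parameters for illustration only, never for real security.
p, q = 61, 53
n = p * q                    # 3233
k = (p - 1) * (q - 1)        # phi(n) = 3120, easy here only because p and q are known
e = 17                       # public encryption exponent, coprime to k
d = pow(e, -1, k)            # private decryption exponent, e*d = 1 (mod k); here d = 2753

m = 65                       # message, with 0 < m < n
s = pow(m, e, n)             # encryption: S = m^e mod n
t = pow(s, d, n)             # decryption: t = S^d mod n
assert t == m
```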
Unsolved problems
Lehmer's conjecture
If p is prime, then φ(p) = p − 1. In 1932 D. H. Lehmer asked if there are any composite numbers n such that φ(n) divides n − 1. None are known.
In 1933 he proved that if any such n exists, it must be odd, square-free, and divisible by at least seven primes (i.e. ω(n) ≥ 7). In 1980 Cohen and Hagis proved that any such n must exceed 10²⁰ and that ω(n) ≥ 14. Further, Hagis showed that still stronger bounds hold if 3 divides n.
Carmichael's conjecture
This states that there is no number n with the property that for all other numbers m, m ≠ n, φ(m) ≠ φ(n). See Ford's theorem above.
As stated in the main article, if there is a single counterexample to this conjecture, there must be infinitely many counterexamples, and the smallest one has at least ten billion digits in base 10.
See also
Carmichael function
Duffin–Schaeffer conjecture
Generalizations of Fermat's little theorem
Highly composite number
Multiplicative group of integers modulo n
Ramanujan sum
Totient summatory function
Dedekind psi function
Notes
References
The Disquisitiones Arithmeticae has been translated from Latin into English and German. The German edition includes all of Gauss' papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
References to the Disquisitiones are of the form Gauss, DA, art. nnn.
. See paragraph 24.3.2.
Dickson, Leonard Eugene, "History Of The Theory Of Numbers", vol 1, chapter 5 "Euler's Function, Generalizations; Farey Series", Chelsea Publishing 1952
.
.
External links
Euler's Phi Function and the Chinese Remainder Theorem — proof that is multiplicative
Euler's totient function calculator in JavaScript — up to 20 digits
Dineva, Rosica, The Euler Totient, the Möbius, and the Divisor Functions
Plytage, Loomis, Polhill Summing Up The Euler Phi Function
Modular arithmetic
Multiplicative functions
Articles containing proofs
Algebra
Number theory
Leonhard Euler |
53784 | https://en.wikipedia.org/wiki/Brute-force%20attack | Brute-force attack | In cryptography, a brute-force attack consists of an attacker submitting many passwords or passphrases with the hope of eventually guessing correctly. The attacker systematically checks all possible passwords and passphrases until the correct one is found. Alternatively, the attacker can attempt to guess the key which is typically created from the password using a key derivation function. This is known as an exhaustive key search.
A brute-force attack is a cryptanalytic attack that can, in theory, be used to attempt to decrypt any encrypted data (except for data encrypted in an information-theoretically secure manner). Such an attack might be used when it is not possible to take advantage of other weaknesses in an encryption system (if any exist) that would make the task easier.
When password-guessing, this method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used because a brute-force search takes too long. Longer passwords, passphrases and keys have more possible values, making them exponentially more difficult to crack than shorter ones.
Brute-force attacks can be made less effective by obfuscating the data to be encoded making it more difficult for an attacker to recognize when the code has been cracked or by making the attacker do more work to test each guess. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute-force attack against it.
Brute-force attacks are an application of brute-force search, the general problem-solving technique of enumerating all candidates and checking each one.
Basic concept
Brute-force attacks work by calculating every possible combination that could make up a password and testing it to see if it is the correct password. As the password's length increases, the amount of time, on average, to find the correct password increases exponentially.
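A minimal sketch of such an exhaustive search over a lowercase alphabet, where the "check" is a plain string comparison standing in for a hash comparison or login attempt:

```python
from itertools import product
import string

def brute_force(target, max_len=4):
    """Try every lowercase string of length 1..max_len until the target is matched."""
    for length in range(1, max_len + 1):
        for letters in product(string.ascii_lowercase, repeat=length):
            guess = "".join(letters)
            if guess == target:        # in a real attack: compare a hash or attempt a login
                return guess
    return None

print(brute_force("dog"))              # found after at most 26 + 26**2 + 26**3 guesses
```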
Theoretical limits
The resources required for a brute-force attack grow exponentially with increasing key size, not linearly. Although U.S. export regulations historically restricted key lengths to 56-bit symmetric keys (e.g. Data Encryption Standard), these restrictions are no longer in place, so modern symmetric algorithms typically use computationally stronger 128- to 256-bit keys.
There is a physical argument that a 128-bit symmetric key is computationally secure against brute-force attack. The so-called Landauer limit implied by the laws of physics sets a lower limit on the energy required to perform a computation of kT ln 2 per bit erased in a computation, where T is the temperature of the computing device in kelvins, k is the Boltzmann constant, and the natural logarithm of 2 is about 0.693. No irreversible computing device can use less energy than this, even in principle. Thus, simply flipping through the possible values for a 128-bit symmetric key (ignoring doing the actual computing to check it) would, theoretically, require 2^128 − 1 bit flips on a conventional processor. If it is assumed that the calculation occurs near room temperature (~300 K), the Von Neumann–Landauer limit can be applied to estimate the energy required as ~10^18 joules, which is equivalent to consuming 30 gigawatts of power for one year. This is equal to 30×10^9 W × 365 × 24 × 3600 s = 9.46×10^17 J or 262.7 TWh (about 0.1% of the yearly world energy production). The full actual computation – checking each key to see if a solution has been found – would consume many times this amount. Furthermore, this is simply the energy requirement for cycling through the key space; the actual time it takes to flip each bit is not considered, which is certainly greater than 0.
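The energy figure can be reproduced with a few lines of arithmetic; the following Python sketch assumes the same temperature (300 K) used above:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant in J/K
T = 300.0               # assumed temperature in kelvins (~room temperature)
flips = 2**128          # cycling through every value of a 128-bit key

energy = flips * k_B * T * math.log(2)      # Landauer bound: kT ln 2 per bit operation
print(f"{energy:.2e} J")                    # on the order of 10^18 J
print(f"{energy / 3.6e15:.0f} TWh")         # a few hundred TWh, consistent with the figure above
```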
However, this argument assumes that the register values are changed using conventional set and clear operations which inevitably generate entropy. It has been shown that computational hardware can be designed not to encounter this theoretical obstruction (see reversible computing), though no such computers are known to have been constructed.
As commercial successors of governmental ASIC solutions have become available, also known as custom hardware attacks, two emerging technologies have proven their capability in the brute-force attack of certain ciphers. One is modern graphics processing unit (GPU) technology, the other is the field-programmable gate array (FPGA) technology. GPUs benefit from their wide availability and price-performance benefit, FPGAs from their energy efficiency per cryptographic operation. Both technologies try to transport the benefits of parallel processing to brute-force attacks. In the case of GPUs some hundreds, and in the case of FPGAs some thousand, processing units are available, making them much better suited to cracking passwords than conventional processors.
Various publications in the fields of cryptographic analysis have proved the energy efficiency of today's FPGA technology, for example, the COPACOBANA FPGA Cluster computer consumes the same energy as a single PC (600 W), but performs like 2,500 PCs for certain algorithms. A number of firms provide hardware-based FPGA cryptographic analysis solutions from a single FPGA PCI Express card up to dedicated FPGA computers. WPA and WPA2 encryption have successfully been brute-force attacked by reducing the workload by a factor of 50 in comparison to conventional CPUs, and by some hundreds in the case of FPGAs.
AES permits the use of 256-bit keys. Breaking a symmetric 256-bit key by brute force requires 2^128 times more computational power than a 128-bit key. One of the fastest supercomputers in 2019 has a speed of 100 petaFLOPS which could theoretically check 100 million million (10^14) AES keys per second (assuming 1000 operations per check), but would still require 3.67×10^55 years to exhaust the 256-bit key space.
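The time estimate follows from simple arithmetic; the sketch below assumes the same rate of 10^14 key checks per second:

```python
keys = 2**256                    # size of the 256-bit key space
checks_per_second = 1e14         # 100 petaFLOPS at an assumed 1000 operations per key check
seconds = keys / checks_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")      # about 3.67e+55 years
```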
An underlying assumption of a brute-force attack is that the complete key space was used to generate keys, something that relies on an effective random number generator, and that there are no defects in the algorithm or its implementation. For example, a number of systems that were originally thought to be impossible to crack by brute force have nevertheless been cracked because the key space to search through was found to be much smaller than originally thought, because of a lack of entropy in their pseudorandom number generators. These include Netscape's implementation of SSL (famously cracked by Ian Goldberg and David Wagner in 1995) and a Debian/Ubuntu edition of OpenSSL discovered in 2008 to be flawed. A similar lack of implemented entropy led to the breaking of Enigma's code.
Credential recycling
Credential recycling refers to the hacking practice of re-using username and password combinations gathered in previous brute-force attacks. A special form of credential recycling is pass the hash, where unsalted hashed credentials are stolen and re-used without first being brute forced.
Unbreakable codes
Certain types of encryption, by their mathematical properties, cannot be defeated by brute force. An example of this is one-time pad cryptography, where every cleartext bit has a corresponding key from a truly random sequence of key bits. A 140 character one-time-pad-encoded string subjected to a brute-force attack would eventually reveal every 140 character string possible, including the correct answer – but of all the answers given, there would be no way of knowing which was the correct one. Defeating such a system, as was done by the Venona project, generally relies not on pure cryptography, but upon mistakes in its implementation: the key pads not being truly random, intercepted keypads, operators making mistakes – or other errors.
Countermeasures
In case of an offline attack where the attacker has access to the encrypted material, one can try key combinations without the risk of discovery or interference. However database and directory administrators can take countermeasures against online attacks, for example by limiting the number of attempts that a password can be tried, by introducing time delays between successive attempts, increasing the answer's complexity (e.g. requiring a CAPTCHA answer or verification code sent via cellphone), and/or locking accounts out after unsuccessful login attempts. Website administrators may prevent a particular IP address from trying more than a predetermined number of password attempts against any account on the site.
Reverse brute-force attack
In a reverse brute-force attack, a single (usually common) password is tested against multiple usernames or encrypted files. The process may be repeated for a select few passwords. In such a strategy, the attacker is not targeting a specific user.
See also
Bitcoin mining
Cryptographic key length
Distributed.net
Key derivation function
MD5CRK
Metasploit Express
Side-channel attack
TWINKLE and TWIRL
Unicity distance
RSA Factoring Challenge
Secure Shell
Notes
References
External links
RSA-sponsored DES-III cracking contest
Demonstration of a brute-force device designed to guess the passcode of locked iPhones running iOS 10.3.3
How We Cracked the Code Book Ciphers – Essay by the winning team of the challenge in The Code Book
Cryptographic attacks |
53785 | https://en.wikipedia.org/wiki/Dictionary%20attack | Dictionary attack | In cryptanalysis and computer security, a dictionary attack is an attack using a restricted subset of a keyspace to defeat a cipher or authentication mechanism by trying to determine its decryption key or passphrase, sometimes trying thousands or millions of likely possibilities often obtained from lists of past security breaches.
Technique
A dictionary attack is based on trying all the strings in a pre-arranged listing. Such attacks originally used words found in a dictionary (hence the phrase dictionary attack); however, now there are much larger lists available on the open Internet containing hundreds of millions of passwords recovered from past data breaches. There is also cracking software that can use such lists and produce common variations, such as substituting numbers for similar-looking letters. A dictionary attack tries only those possibilities which are deemed most likely to succeed. Dictionary attacks often succeed because many people have a tendency to choose short passwords that are ordinary words or common passwords; or variants obtained, for example, by appending a digit or punctuation character. Dictionary attacks are often successful, since many commonly used password creation techniques are covered by the available lists, combined with cracking software pattern generation. A safer approach is to randomly generate a long password (15 letters or more) or a multiword passphrase, using a password manager program or manually typing a password.
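A minimal sketch of the idea, using a tiny illustrative wordlist and SHA-256 as the (unsalted) hash; real tools use far larger lists and many more variation rules:

```python
import hashlib

# A toy wordlist standing in for the large breach-derived lists described above.
wordlist = ["123456", "password", "letmein", "dragon", "monkey"]
target_hash = hashlib.sha256(b"letmein").hexdigest()   # an unsalted hash obtained by an attacker

def dictionary_attack(target, words):
    for word in words:
        # Simple variations, e.g. appending a digit, as cracking software does.
        for candidate in [word] + [word + str(d) for d in range(10)]:
            if hashlib.sha256(candidate.encode()).hexdigest() == target:
                return candidate
    return None

print(dictionary_attack(target_hash, wordlist))  # 'letmein'
```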
Pre-computed dictionary attack/Rainbow table attack
It is possible to achieve a time–space tradeoff by pre-computing a list of hashes of dictionary words and storing these in a database using the hash as the key. This requires a considerable amount of preparation time, but this allows the actual attack to be executed faster. The storage requirements for the pre-computed tables were once a major cost, but now they are less of an issue because of the low cost of disk storage. Pre-computed dictionary attacks are particularly effective when a large number of passwords are to be cracked. The pre-computed dictionary needs to be generated only once, and when it is completed, password hashes can be looked up almost instantly at any time to find the corresponding password. A more refined approach involves the use of rainbow tables, which reduce storage requirements at the cost of slightly longer lookup times. See LM hash for an example of an authentication system compromised by such an attack.
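The precomputation step amounts to building a lookup table keyed by the hash; a minimal sketch, again with a tiny illustrative wordlist and SHA-256:

```python
import hashlib

wordlist = ["123456", "password", "letmein", "dragon", "monkey"]

# One-off precomputation: a hash -> password lookup table.
precomputed = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}

# Later, each captured unsalted hash can be inverted with a single dictionary lookup.
captured = hashlib.sha256(b"dragon").hexdigest()
print(precomputed.get(captured))  # 'dragon'
```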
Pre-computed dictionary attacks, or "rainbow table attacks", can be thwarted by the use of salt, a technique that forces the hash dictionary to be recomputed for each password sought, making precomputation infeasible, provided that the number of possible salt values is large enough.
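A minimal sketch of why a per-password salt defeats the precomputed table: the stored hash no longer matches any entry computed without the salt:

```python
import hashlib, os

def hash_password(password: str, salt: bytes) -> str:
    return hashlib.sha256(salt + password.encode()).hexdigest()

salt = os.urandom(16)                      # a fresh random salt for this stored password
stored = (salt, hash_password("dragon", salt))

# The attacker's unsalted table entry for "dragon" no longer matches;
# every distinct salt value forces the dictionary hashes to be recomputed.
unsalted = hashlib.sha256(b"dragon").hexdigest()
print(unsalted == stored[1])               # False
```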
Dictionary attack software
Cain and Abel
Crack
Aircrack-ng
John the Ripper
L0phtCrack
Metasploit Project
Ophcrack
Cryptool
See also
Brute-force attack
E-mail address harvesting
Intercontinental Dictionary Series, an online linguistic database
Key derivation function
Key stretching
Password cracking
Password strength
References
External links
– Internet Security Glossary
– Internet Security Glossary, Version 2
US Secret Service use a distributed dictionary attack on suspect's password protecting encryption keys
Testing for Brute Force (OWASP-AT-004)
Cryptographic attacks |
54666 | https://en.wikipedia.org/wiki/Anonymous%20remailer | Anonymous remailer | An anonymous remailer is a server that receives messages with embedded instructions on where to send them next, and that forwards them without revealing where they originally came from. There are cypherpunk anonymous remailers, mixmaster anonymous remailers, and nym servers, among others, which differ in how they work, in the policies they adopt, and in the type of attack on the anonymity of e-mail they can (or are intended to) resist. Remailing as discussed in this article applies to e-mails intended for particular recipients, not the general public. Anonymity in the latter case is more easily addressed by using any of several methods of anonymous publication.
Types of remailer
There are several strategies that affect the anonymity of the handled e-mail. In general, different classes of anonymous remailers differ with regard to the choices their designers/operators have made. These choices can be influenced by the legal ramifications of operating specific types of remailers.
It must be understood that every data packet traveling on the Internet contains the node addresses (as raw IP bit strings) of both the sending and intended recipient nodes, and so no data packet can ever actually be anonymous at this level. In addition, all standards-based e-mail messages contain defined fields in their headers in which the source and transmitting entities (and Internet nodes as well) are required to be included.
Some remailers change both types of address in messages they forward, and the list of forwarding nodes in e-mail messages as well, as the message passes through; in effect, they substitute 'fake source addresses' for the originals. The 'IP source address' for that packet may become that of the remailer server itself, and within an e-mail message (which is usually several packets), a nominal 'user' on that server. Some remailers forward their anonymized e-mail to still other remailers, and only after several such hops is the e-mail actually delivered to the intended address.
There are, more or less, four types of remailers:
Pseudonymous remailers
A pseudonymous remailer simply takes away the e-mail address of the sender, gives a pseudonym to the sender, and sends the message to the intended recipient (that can be answered via that remailer).
Cypherpunk remailers, also called Type I
A Cypherpunk remailer sends the message to the recipient, stripping away the sender address on it. One can not answer a message sent via a Cypherpunk remailer. The message sent to the remailer can usually be encrypted, and the remailer will decrypt it and send it to the recipient address hidden inside the encrypted message. In addition, it is possible to chain two or three remailers, so that each remailer can't know who is sending a message to whom. Cypherpunk remailers do not keep logs of transactions.
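The chaining idea can be sketched as layered ("onion-style") encryption. The example below is purely conceptual: it uses the third-party cryptography package's Fernet symmetric cipher with invented per-hop keys, whereas real remailers use their own message formats and typically encrypt each layer to that hop's public key:

```python
from cryptography.fernet import Fernet

remailer_keys = [Fernet.generate_key() for _ in range(3)]   # one illustrative key per hop
message = b"To: final-recipient@example.org\n\nHello, anonymously."

# The sender wraps the message in layers, innermost layer last to be applied;
# each hop can strip exactly one layer and learns only the next destination.
wrapped = message
for key in reversed(remailer_keys):
    wrapped = Fernet(key).encrypt(wrapped)

for key in remailer_keys:                                    # each remailer peels one layer
    wrapped = Fernet(key).decrypt(wrapped)

assert wrapped == message
```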
Mixmaster remailers, also called Type II
In Mixmaster, the user composes an email to a remailer, which is relayed through each node in the network using SMTP, until it finally arrives at the final recipient. Mixmaster can only send emails one way. An email is sent anonymously to an individual, but for them to be able to respond, a reply address must be included in the body of the email. Also, Mixmaster remailers require the use of a computer program to write messages. Such programs are not supplied as a standard part of most operating systems or mail management systems.
Mixminion remailers, also called Type III
A Mixminion remailer attempts to address the following challenges in Mixmaster remailers: replies, forward anonymity, replay prevention and key rotation, exit policies, integrated directory servers and dummy traffic. They are currently available for the Linux and Windows platforms. Some implementations are open source.
Traceable remailers
Some remailers establish an internal list of actual senders and invented names such that a recipient can send mail to invented name AT some-remailer.example. When receiving traffic addressed to this user, the server software consults that list, and forwards the mail to the original sender, thus permitting anonymous—though traceable with access to the list—two-way communication. The famous "penet.fi" remailer in Finland did just that for several years. Because of the existence of such lists in this type of remailing server, it is possible to break the anonymity by gaining access to the list(s), by breaking into the computer, asking a court (or merely the police in some places) to order that the anonymity be broken, and/or bribing an attendant. This happened to penet.fi as a result of some traffic passed through it about Scientology. The Church claimed copyright infringement and sued penet.fi's operator. A court ordered the list be made available. Penet's operator shut it down after destroying its records (including the list) to retain identity confidentiality for its users; though not before being forced to supply the court with the real e-mail addresses of two of its users.
More recent remailer designs use cryptography in an attempt to provide more or less the same service, but without so much risk of loss of user confidentiality. These are generally termed nym servers or pseudonymous remailers. The degree to which they remain vulnerable to forced disclosure (by courts or police) is and will remain unclear since new statutes/regulations and new cryptanalytic developments proceed apace. Multiple anonymous forwarding among cooperating remailers in different jurisdictions may retain, but cannot guarantee, anonymity against a determined attempt by one or more governments, or civil litigators.
Untraceable remailers
If users accept the loss of two-way interaction, identity anonymity can be made more secure.
By not keeping any list of users and corresponding anonymizing labels for them, a remailer can ensure that any message that has been forwarded leaves no internal information behind that can later be used to break identity confidentiality. However, while being handled, messages remain vulnerable within the server (e.g., to Trojan software in a compromised server, to a compromised server operator, or to mis-administration of the server), and traffic analysis comparison of traffic into and out of such a server can suggest quite a lot—far more than almost any would credit.
The Mixmaster strategy is designed to defeat such attacks, or at least to increase their cost (i.e., to 'attackers') beyond feasibility. If every message is passed through several servers (ideally in different legal and political jurisdictions), then attacks based on legal systems become considerably more difficult, if only because of 'Clausewitzian' friction among lawyers, courts, different statutes, organizational rivalries, legal systems, etc. And, since many different servers and server operators are involved, subversion of any (i.e., of either system or operator) becomes less effective also since no one (most likely) will be able to subvert the entire chain of remailers.
Random padding of messages, random delays before forwarding, and encryption of forwarding information between forwarding remailers, increases the degree of difficulty for attackers still further as message size and timing can be largely eliminated as traffic analysis clues, and lack of easily readable forwarding information renders ineffective simple automated traffic analysis algorithms.
Web-based mailer
There are also web services that allow users to send anonymous e-mail messages. These services do not provide the anonymity of real remailers, but they are easier to use. When using a web-based anonymous e-mail or anonymous remailer service, its reputation should first be analyzed, since the service stands between senders and recipients. Some of the aforementioned web services log users' IP addresses to ensure they do not break the law; others offer superior anonymity with attachment functionality by choosing to trust that the users will not breach the website's Terms of Service (TOS).
Remailer statistics
In most cases, remailers are owned and operated by individuals, and are not as stable as they might ideally be. In fact, remailers can, and have, gone down without warning. It is important to use up-to-date statistics when choosing remailers.
Remailer abuse and blocking by governments
Although most re-mailer systems are used responsibly, the anonymity they provide can be exploited by entities or individuals whose reasons for anonymity are not necessarily benign.
Such reasons could include support for violent extremist actions, sexual exploitation of children or more commonly to frustrate accountability for 'trolling' and harassment of targeted individuals, or companies (The Dizum.com re-mailer chain being abused as recently as May 2013 for this purpose.)
The response of some re-mailers to this abuse potential is often to disclaim responsibility (as dizum.com does), as owing to the technical design (and ethical principles) of many systems, it is impossible for the operators to physically unmask those using their systems. Some re-mailer systems go further and claim that it would be illegal for them to monitor for certain types of abuse at all.
Until technical changes were made in the remailers concerned in the mid-2000s, some re-mailers (notably nym.alias.net based systems) were seemingly willing to use any genuine (and thus valid) but otherwise forged address. This loophole allowed trolls to mis-attribute controversial claims or statements with the aim of causing offence, upset or harassment to the genuine holder(s) of the address(es) forged.
While re-mailers may disclaim responsibility, the comments posted via them have led to them being blocked in some countries. In 2014, dizum.com (a Netherlands-based remailer) was seemingly blocked by authorities in Pakistan, because of comments an (anonymous) user of that service had made concerning key figures in Islam.
See also
Anonymity
Anonymity application
Anonymous blogging
Anonymous P2P
Anonymous remailer
Cypherpunk anonymous remailer (Type I)
Mixmaster anonymous remailer (Type II)
Mixminion anonymous remailer (Type III)
Anonymous web browsing
Data privacy
Identity theft
Internet privacy
Personally identifiable information
Privacy software and Privacy-enhancing technologies
I2P
I2P-Bote
Java Anon Proxy
Onion routing
Tor (network)
Pseudonymity, Pseudonymization
Pseudonymous remailer (a.k.a. nym servers)
Penet remailer
Traffic analysis
Winston Smith Project
Mix network
References
Remailer Vulnerabilities
Email Security, Bruce Schneier ()
Computer Privacy Handbook, Andre Bacard ()
Anonymous file sharing networks
Internet Protocol based network software
Routing
Network architecture
Cryptography |
55951 | https://en.wikipedia.org/wiki/Instant%20messaging | Instant messaging | Instant messaging (IM) technology is a type of online chat allowing real-time text transmission over the Internet or another computer network. Messages are typically transmitted between two or more parties, when each user inputs text and triggers a transmission to the recipient(s), who are all connected on a common network. It differs from email in that conversations over instant messaging happen in real-time (hence "instant"). Most modern IM applications (sometimes called "social messengers", "messaging apps" or "chat apps") use push technology and also add other features such as emojis (or graphical smileys), file transfer, chatbots, Voice over IP, or video chat capabilities.
Instant messaging systems tend to facilitate connections between specified known users (often using a contact list also known as a "buddy list" or "friend list"), and can be standalone applications or integrated into e.g. a wider social media platform, or a website where it can for instance be used for conversational commerce. IM can also consist of conversations in "chat rooms". Depending on the IM protocol, the technical architecture can be peer-to-peer (direct point-to-point transmission) or client–server (an IM service center retransmits messages from the sender to the communication device). It is usually distinguished from text messaging which is typically simpler and normally uses cellular phone networks.
Instant messaging was pioneered in the early Internet era; the IRC protocol was the earliest to achieve wide adoption. Later in the 1990s, ICQ was among the first closed and commercialized instant messengers, and several rival services appeared afterwards as it became a popular use of the Internet. Beginning with its first introduction in 2005, BlackBerry Messenger, which initially had been available only on BlackBerry smartphones, soon became one of the most popular mobile instant messaging apps worldwide. BBM was for instance the most used mobile messaging app in the United Kingdom and Indonesia. Instant messaging remains very popular today; IM apps are the most widely used smartphone apps: in 2018 there were over 1.3 billion monthly users of WhatsApp and Facebook Messenger, and 980 million monthly active users of WeChat.
Overview
Instant messaging is a set of communication technologies used for text-based communication between two (private messaging) or more (chat room) participants over the Internet or other types of networks (see also LAN messenger). IM–chat happens in real-time. Of importance is that online chat and instant messaging differ from other technologies such as email due to the perceived quasi-synchrony of the communications by the users. Some systems permit messages to be sent to users not then 'logged on' (offline messages), thus removing some differences between IM and email (often done by sending the message to the associated email account).
IM allows effective and efficient communication, allowing immediate receipt of acknowledgment or reply. However, IM is not necessarily supported by transaction control. In many cases, instant messaging includes added features which can make it even more popular. For example, users may see each other via webcams, or talk directly for free over the Internet using a microphone and headphones or loudspeakers. Many applications allow file transfers, although they are usually limited in the permissible file-size. It is usually possible to save a text conversation for later reference. Instant messages are often logged in a local message history, making it similar to the persistent nature of emails.
Major IM services are controlled by their corresponding companies. They usually follow the client–server model when all clients have to first connect to the central server. This requires users to trust this server because messages can generally be accessed by the company. Companies can be compelled to reveal their user's communication. Companies can also suspend user accounts for any reason.
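A highly simplified sketch of this client–server relay model: a central server accepts client connections and forwards each message to the named recipient's connection. The wire format and port number below are invented for illustration, and a production service would add authentication, message storage, and error handling:

```python
import socket, threading

clients = {}  # username -> connected socket

def handle(conn):
    name = conn.recv(1024).decode().strip()       # first line sent by a client: its user name
    clients[name] = conn
    while True:
        data = conn.recv(1024)                    # illustrative wire format: "recipient:message"
        if not data:
            break
        to, _, text = data.decode().partition(":")
        if to in clients:                         # the server, not the sender, reaches the peer
            clients[to].sendall(f"{name}: {text}".encode())

server = socket.socket()
server.bind(("0.0.0.0", 5222))
server.listen()
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```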
Non-IM types of chat include multicast transmission, usually referred to as "chat rooms", where participants might be anonymous or might be previously known to each other (for example collaborators on a project that is using chat to facilitate communication).
An Instant Message Service Center (IMSC) is a network element in the mobile telephone network which delivers instant messages. When a user sends an IM message to another user, the phone sends the message to the IMSC. The IMSC stores the message and delivers it to the destination user when they are available. The IMSC usually has a configurable time limit for how long it will store the message. A few of the companies that make many of the IMSCs in use in the GSM world are Miyowa, Followap and OZ. Other players include Acision, Colibria, Ericsson, Nokia, Comverse Technology, Now Wireless, Jinny Software, Feelingk and a few others.
The term "Instant Messenger" is a service mark of Time Warner and may not be used in software not affiliated with AOL in the United States. For this reason, in April 2007, the instant messaging client formerly named Gaim (or gaim) announced that they would be renamed "Pidgin".
Clients
Each modern IM service generally provides its own client, either a separately installed piece of software, or a browser-based client. They are normally centralised networks run by the servers of the platform's operators, unlike peer-to-peer protocols like XMPP. These usually only work within the same IM network, although some allow limited function with other services. Third party client software applications exist that will connect with most of the major IM services. There is also a class of instant messengers that uses a serverless model, in which the IM network consists only of clients and no central server is required. There are several serverless messengers: RetroShare, Tox, Bitmessage, Ricochet, Ring.
Some examples of popular IM services today include WhatsApp, Facebook Messenger, WeChat, QQ Messenger, Telegram, Viber, Line, and Snapchat. The popularity of certain apps greatly differ between different countries. Certain apps have emphasis on certain uses - for example Skype focuses on video calling, Slack focuses on messaging and file sharing for work teams, and Snapchat focuses on image messages. Some social networking services offer messaging services as a component of their overall platform, such as Facebook's Facebook Messenger, while others have a direct messaging function as an additional adjunct component of their social networking platforms, like Instagram, Reddit, Tumblr, TikTok, Clubhouse and Twitter, either directly or through chat rooms.
Features
Private and group messaging Private chat allows private conversation with another person or a group. The privacy aspect can also be enhanced as applications have a timer feature, like Snapchat, where messages or conversations are automatically deleted once the time limit is reached. Public and group chat features allow users to communicate with multiple people at a time.
Calling Many major IM services and applications offer the call feature for user-to-user calls, conference calls, and voice messages. The call functionality is useful for professionals who utilize the application for work purposes and as a hands-free method. Videotelephony using a webcam is also possible by some.
Games and entertainment Some IM applications include in-app games for entertainment. Yahoo! Messenger for example introduced these where users could play a game and viewed by friends in real-time. The Facebook Messenger application has a built in option to play computer games with people in a chat, including games like Tetris and Blackjack.
Payments Though a relatively new feature, peer-to-peer payments are available on major messaging platforms. This functionality allows individuals to use one application for both communication and financial tasks. The lack of a service fee also makes messaging apps advantageous to financial applications. Major platforms such as Facebook messenger and WeChat already offer a payment feature, and this functionality is likely to become a standard amongst IM apps competing in the market.
History
Though the term dates from the 1990s, instant messaging predates the Internet, first appearing on multi-user operating systems like Compatible Time-Sharing System (CTSS) and Multiplexed Information and Computing Service (Multics) in the mid-1960s. Initially, some of these systems were used as notification systems for services like printing, but quickly were used to facilitate communication with other users logged into the same machine. CTSS facilitated communication via text message for up to 30 people.
Parallel to instant messaging were early online chat facilities, the earliest of which was Talkomatic (1973) on the PLATO system, which allowed 5 people to chat simultaneously on a 512x512 plasma display (5 lines of text + 1 status line per person). During the bulletin board system (BBS) phenomenon that peaked during the 1980s, some systems incorporated chat features which were similar to instant messaging; Freelancin' Roundtable was one prime example. The first such general-availability commercial online chat service (as opposed to PLATO, which was educational) was the CompuServe CB Simulator in 1980, created by CompuServe executive Alexander "Sandy" Trevor in Columbus, Ohio.
As networks developed, the protocols spread with the networks. Some of these used a peer-to-peer protocol (e.g. talk, ntalk and ytalk), while others required peers to connect to a server (see talker and IRC). The Zephyr Notification Service (still in use at some institutions) was invented at MIT's Project Athena in the 1980s to allow service providers to locate and send messages to users.
Early instant messaging programs were primarily real-time text, where characters appeared as they were typed. This includes the Unix "talk" command line program, which was popular in the 1980s and early 1990s. Some BBS chat programs (i.e. Celerity BBS) also used a similar interface. Modern implementations of real-time text also exist in instant messengers, such as AOL's Real-Time IM as an optional feature.
In the latter half of the 1980s and into the early 1990s, the Quantum Link online service for Commodore 64 computers offered user-to-user messages between concurrently connected customers, which they called "On-Line Messages" (or OLM for short), and later "FlashMail." Quantum Link later became America Online and made AOL Instant Messenger (AIM, discussed later). While the Quantum Link client software ran on a Commodore 64, using only the Commodore's PETSCII text-graphics, the screen was visually divided into sections and OLMs would appear as a yellow bar saying "Message From:" and the name of the sender along with the message across the top of whatever the user was already doing, and presented a list of options for responding. As such, it could be considered a type of graphical user interface (GUI), albeit much more primitive than the later Unix, Windows and Macintosh based GUI IM software. OLMs were what Q-Link called "Plus Services" meaning they charged an extra per-minute fee on top of the monthly Q-Link access costs.
Modern, Internet-wide, GUI-based messaging clients as they are known today began to take off in the mid-1990s with PowWow, ICQ, and AOL Instant Messenger. Similar functionality was offered by CU-SeeMe in 1992; though primarily an audio/video chat link, users could also send textual messages to each other. AOL later acquired Mirabilis, the authors of ICQ, establishing dominance in the instant messaging market. A few years later ICQ (then owned by AOL) was awarded two patents for instant messaging by the U.S. patent office. Meanwhile, other companies developed their own software (Excite, MSN, Ubique, and Yahoo!), each with its own proprietary protocol and client; users therefore had to run multiple client applications if they wished to use more than one of these networks. In 1998, IBM released IBM Lotus Sametime, a product based on technology acquired when IBM bought Haifa-based Ubique and Lexington-based Databeam.
In 2000, an open-source application and open standards-based protocol called Jabber was launched. The protocol was standardized under the name Extensible Messaging and Presence Protocol (XMPP). XMPP servers could act as gateways to other IM protocols, reducing the need to run multiple clients. Multi-protocol clients can use any of the popular IM protocols by using additional local libraries for each protocol. IBM Lotus Sametime's November 2007 release added IBM Lotus Sametime Gateway support for XMPP.
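The multi-protocol approach can be illustrated with a rough sketch. The Python example below is not taken from any real client: the class names (ProtocolBackend, XMPPBackend, LegacyBackend, MultiProtocolClient) and the accounts are invented for illustration, and the backends merely print instead of speaking real protocols. An actual client would delegate each network to a local protocol library, as described above.

from abc import ABC, abstractmethod

class ProtocolBackend(ABC):
    """One backend per IM protocol (XMPP, a legacy proprietary network, etc.)."""

    @abstractmethod
    def connect(self, username: str, password: str) -> None: ...

    @abstractmethod
    def send(self, recipient: str, message: str) -> None: ...

class XMPPBackend(ProtocolBackend):
    def connect(self, username, password):
        # A real client would open an XMPP stream here via a local XMPP library.
        print(f"[xmpp] connected as {username}")

    def send(self, recipient, message):
        print(f"[xmpp] -> {recipient}: {message}")

class LegacyBackend(ProtocolBackend):
    def connect(self, username, password):
        # Stand-in for a library implementing a proprietary protocol.
        print(f"[legacy] connected as {username}")

    def send(self, recipient, message):
        print(f"[legacy] -> {recipient}: {message}")

class MultiProtocolClient:
    """Routes each outgoing message to the backend that handles its network."""

    def __init__(self):
        self.backends = {}

    def add_account(self, network, backend, username, password):
        backend.connect(username, password)
        self.backends[network] = backend

    def message(self, network, recipient, text):
        self.backends[network].send(recipient, text)

client = MultiProtocolClient()
client.add_account("xmpp", XMPPBackend(), "alice@example.org", "secret")
client.add_account("legacy", LegacyBackend(), "alice1999", "secret")
client.message("xmpp", "bob@example.org", "hello over XMPP")
client.message("legacy", "bob_2000", "hello over a legacy network")

The point of the design is that the user interface only ever talks to MultiProtocolClient; support for an additional network is added by dropping in one more backend rather than installing a separate client.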
Video calling using a webcam also started taking off during this time. Microsoft NetMeeting was one of the earliest, but Skype, released in 2003, was one of the first to focus on this feature and bring it to a wider audience.
By 2006, AIM controlled 52 percent of the instant messaging market, but rapidly declined shortly thereafter as the company struggled to compete with other services.
By 2010, instant messaging over the Web was in sharp decline in favor of messaging features on social networks. Social networking providers often offer IM abilities, for example Facebook Chat, while Twitter can be thought of as a Web 2.0 instant messaging system. Similar server-side chat features are part of most dating websites, such as OKCupid or PlentyofFish. Several formerly popular IM platforms, such as AIM, were later discontinued.
The popularity of instant messaging was soon revived with new services in the form of mobile applications, notable examples of the time being BlackBerry Messenger (first released in 2005; today available as BlackBerry Messenger Enterprise) and WhatsApp (first released in 2009). Unlike previous IM applications, these newer ones usually ran only on mobile devices and coincided with the rising popularity of Internet-enabled smartphones; this led to IM surpassing SMS in message volume by 2013. By 2014, IM had more users than social networks. In January 2015, the service WhatsApp alone accommodated 30 billion messages daily in comparison to about 20 billion for SMS.
In 2016, Google introduced a new intelligent messaging app that incorporates machine learning technology called Allo. Google Allo was shut down on March 12, 2019.
Interoperability
Standard complementary instant messaging applications offer functions like file transfer, contact list(s), the ability to hold several simultaneous conversations, etc. These may be all the functions that a small business needs, but larger organizations will require more sophisticated applications that can work together. The solution is to use enterprise versions of instant messaging applications. These include XMPP-based products, Lotus Sametime, Microsoft Office Communicator, etc., which are often integrated with other enterprise applications such as workflow systems. These enterprise applications, or enterprise application integration (EAI), are built to certain constraints, namely storing data in a common format.
There have been several attempts to create a unified standard for instant messaging: IETF's Session Initiation Protocol (SIP) and SIP for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Application Exchange (APEX), Instant Messaging and Presence Protocol (IMPP), the open XML-based Extensible Messaging and Presence Protocol (XMPP), and Open Mobile Alliance's Instant Messaging and Presence Service developed specifically for mobile devices.
Most attempts at producing a unified standard for the major IM providers (AOL, Yahoo! and Microsoft) have failed, and each continues to use its own proprietary protocol.
However, while discussions at IETF were stalled, Reuters signed the first inter-service provider connectivity agreement in September 2003. This agreement enabled AIM, ICQ and MSN Messenger users to talk with Reuters Messaging counterparts and vice versa. Following this, Microsoft, Yahoo! and AOL agreed to a deal in which Microsoft's Live Communications Server 2005 users would also be able to talk to public instant messaging users. This deal established SIP/SIMPLE as a standard for protocol interoperability and established a connectivity fee for accessing public instant messaging groups or services. Separately, on October 13, 2005, Microsoft and Yahoo! announced that by the 3rd quarter of 2006 they would interoperate using SIP/SIMPLE, which was followed, in December 2005, by the AOL and Google strategic partnership deal in which Google Talk users would be able to communicate with AIM and ICQ users provided they have an AIM account.
There are two ways to combine the many disparate protocols:
Combine the many disparate protocols inside the IM client application.
Combine the many disparate protocols inside the IM server application. This approach moves the task of communicating with the other services to the server. Clients need not know or care about other IM protocols. For example, LCS 2005 Public IM Connectivity. This approach is popular in XMPP servers; however, the so-called transport projects suffer the same reverse engineering difficulties as any other project involved with closed protocols or formats.
Some approaches allow organizations to deploy their own, private instant messaging network by enabling them to restrict access to the server (often with the IM network entirely behind their firewall) and administer user permissions. Other corporate messaging systems allow registered users to also connect from outside the corporation LAN, by using an encrypted, firewall-friendly, HTTPS-based protocol. Usually, a dedicated corporate IM server has several advantages, such as pre-populated contact lists, integrated authentication, and better security and privacy.
Certain networks have made changes to prevent them from being used by such multi-network IM clients. For example, Trillian had to release several revisions and patches to allow its users to access the MSN, AOL, and Yahoo! networks, after changes were made to these networks. The major IM providers usually cite the need for formal agreements and security concerns as reasons for making these changes.
The use of proprietary protocols has meant that many instant messaging networks have been incompatible and users have been unable to reach users on other networks. This may have given social networking services with IM-like features, as well as text messaging, an opportunity to gain market share at the expense of IM.
Effects of IM on communication
Messaging applications have affected the way people communicate on their devices. A survey conducted by MetrixLabs showed that 63% of Baby Boomers, 63% of Generation X, and 67% of Generation Y said that they used messaging applications in place of texting. A Facebook survey showed that 65% of people surveyed thought that messaging applications made group messaging easier.
Effects on workplace communication
Messaging applications have also changed how people communicate in the workplace. Enterprise messaging applications like Slack, TeleMessage, Teamnote and Yammer allow companies to enforce policies on how employees message at work and ensure secure storage of sensitive data. Message applications allow employees to separate work information from their personal emails and texts.
Messaging applications may make workplace communication efficient, but they can also have consequences for productivity. A study by Slack showed that, on average, people spend 10 hours a day in Slack, which is about 67% more time than they spend using email.
IM language
Users sometimes make use of internet slang or text speak to abbreviate common words or expressions to quicken conversations or reduce keystrokes. The language has become widespread, with well-known expressions such as 'lol' translated over to face-to-face language.
Emotions are often expressed in shorthand, such as the abbreviation LOL, BRB and TTYL; respectively laugh(ing) out loud, be right back, and talk to you later.
Some, however, attempt to be more accurate with emotional expression over IM. Real-time reactions such as (chortle), (snort), (guffaw) or (eye-roll) are becoming more popular. Certain conventions are also being introduced into mainstream conversations, including '#', which indicates the use of sarcasm in a statement, and '*', which indicates a spelling mistake and/or grammatical error in the prior message, followed by a correction.
Business application
Instant messaging has proven to be similar to personal computers, email, and the World Wide Web, in that its adoption for use as a business communications medium was driven primarily by individual employees using consumer software at work, rather than by formal mandate or provisioning by corporate information technology departments. Tens of millions of the consumer IM accounts in use are being used for business purposes by employees of companies and other organizations.
In response to the demand for business-grade IM and the need to ensure security and legal compliance, a new type of instant messaging, called "Enterprise Instant Messaging" ("EIM") was created when Lotus Software launched IBM Lotus Sametime in 1998. Microsoft followed suit shortly thereafter with Microsoft Exchange Instant Messaging, later creating a new platform called Microsoft Office Live Communications Server, and released Office Communications Server 2007 in October 2007. Oracle Corporation also jumped into the market with its Oracle Beehive unified collaboration software. Both IBM Lotus and Microsoft have introduced federation between their EIM systems and some of the public IM networks so that employees may use one interface to both their internal EIM system and their contacts on AOL, MSN, and Yahoo. As of 2010, leading EIM platforms include IBM Lotus Sametime, Microsoft Office Communications Server, Jabber XCP and Cisco Unified Presence. Industry-focused EIM platforms such as Reuters Messaging and Bloomberg Messaging also provide IM abilities to financial services companies.
The adoption of IM across corporate networks outside of the control of IT organizations creates risks and liabilities for companies who do not effectively manage and support IM use. Companies implement specialized IM archiving and security products and services to mitigate these risks and provide safe, secure, productive instant messaging abilities to their employees. IM is increasingly becoming a feature of enterprise software rather than a stand-alone application.
IM products can usually be categorised into two types: Enterprise Instant Messaging (EIM) and Consumer Instant Messaging (CIM). Enterprise solutions use an internal IM server; however, this is not always feasible, particularly for smaller businesses with limited budgets. The second option, using a CIM, provides the advantage of being inexpensive to implement and requires little investment in new hardware or server software.
For corporate use, encryption and conversation archiving are usually regarded as important features due to security concerns. There are also a number of open-source encrypted messengers. Sometimes the use of different operating systems in organizations requires use of software that supports more than one platform. For example, many software companies use Windows in administration departments but have software developers who use Linux.
Comparison to SMS
SMS is the acronym for "short message service" and allows mobile phone users to send text messages without an Internet connection, while instant messaging provides similar services through an Internet connection. SMS was the dominant form of mobile text communication before smartphones became widely used globally. While SMS relied on traditional paid telephone services, instant messaging apps on mobiles were available for free or a minor data charge. SMS volume peaked in 2012, and in 2013 chat apps surpassed SMS in global message volume.
Easier group messaging was another advantage of smartphone messaging apps and also contributed to their adoption. Before the introduction of messaging apps, smartphone users could only participate in single-person interactions via mobile voice calls or SMS. With the introduction of messaging apps, the group chat functionality allows all the members to see an entire thread of everyone's responses. Members can also respond directly to each other, rather than having to go through the member who started the group message, to relay the information.
However, SMS still remains popular in the United States because it is usually included free in monthly phone bundles. While SMS volumes in some countries like Denmark, Spain and Singapore dropped up to two-thirds from 2011 to 2013, in the United States SMS use only dropped by about one quarter.
Security and archiving
Crackers (malicious or black hat hackers) have consistently used IM networks as vectors for delivering phishing attempts, "poison URLs", and virus-laden file attachments from 2004 to the present, with over 1100 discrete attacks listed by the IM Security Center in 2004–2007. Hackers use two methods of delivering malicious code through IM: delivery of viruses, trojan horses, or spyware within an infected file, and the use of "socially engineered" text with a web address that entices the recipient to click on a URL connecting him or her to a website that then downloads malicious code.
Viruses, computer worms, and trojans usually propagate by sending themselves rapidly through the infected user's contact list. An effective attack using a poisoned URL may reach tens of thousands of users in a short period when each user's contact list receives messages appearing to be from a trusted friend. The recipients click on the web address, and the entire cycle starts again. Infections may range from nuisance to criminal, and are becoming more sophisticated each year.
IM connections sometimes occur in plain text, making them vulnerable to eavesdropping. Also, IM client software often requires the user to expose open UDP ports to the world, raising the threat posed by potential security vulnerabilities.
In the early 2000s, a new class of IT security provider emerged to provide remedies for the risks and liabilities faced by corporations who chose to use IM for business communications. The IM security providers created new products to be installed in corporate networks for the purpose of archiving, content-scanning, and security-scanning IM traffic moving in and out of the corporation. Similar to the e-mail filtering vendors, the IM security providers focus on the risks and liabilities described above.
With rapid adoption of IM in the workplace, demand for IM security products began to grow in the mid-2000s. By 2007, the preferred platform for the purchase of security software had become the "computer appliance", according to IDC, who estimated that by 2008, 80% of network security products would be delivered via an appliance.
By 2014, however, the level of safety offered by instant messengers was still extremely poor. According to a scorecard made by the Electronic Frontier Foundation, only 7 out of 39 instant messengers received a perfect score, whereas the most popular instant messengers at the time attained a score of only 2 out of 7. A number of studies have shown that IM services are quite vulnerable when it comes to protecting user privacy.
Encryption
Encryption is the primary method that messaging apps use to protect users' data privacy and security. SMS messages are not encrypted, making them insecure, as the content of each SMS message is visible to mobile carriers and governments and can be intercepted by a third party. SMS messages also leak metadata, or information about the message that is not the message content itself, such as the phone numbers of the sender and recipient, which can identify the people involved in the conversation. SMS messages can also be spoofed, with the apparent sender edited to impersonate another person.
Messaging applications on the market that use end-to-end encryption include Signal, WhatsApp, Wire and iMessage. Applications that have been criticized for lacking or poor encryption methods include Telegram and Confide, as both are prone to error.
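The general idea of end-to-end encryption can be sketched with a public-key example. The Python snippet below uses the PyNaCl library and is only a simplified illustration of the principle; it is not the protocol used by Signal, WhatsApp or iMessage, which layer additional mechanisms such as key ratcheting and authentication on top.

# pip install pynacl
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; only the public keys are exchanged.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key; a relaying server only ever sees ciphertext.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")  # a random nonce is generated and prepended

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"

Because the private keys never leave the two devices, the provider relaying the ciphertext cannot read the message content, although metadata such as who is talking to whom may still be visible.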
Compliance risks
In addition to the malicious code threat, the use of instant messaging at work also creates a risk of non-compliance to laws and regulations governing use of electronic communications in businesses.
In the United States alone there are over 10,000 laws and regulations related to electronic messaging and records retention. The better-known of these include the Sarbanes–Oxley Act, HIPAA, and SEC 17a-3.
Clarification from the Financial Industry Regulatory Authority (FINRA) was issued to member firms in the financial services industry in December 2007, noting that "electronic communications", "email", and "electronic correspondence" may be used interchangeably and can include such forms of electronic messaging as instant messaging and text messaging. Changes to the Federal Rules of Civil Procedure, effective December 1, 2006, created a new category for electronic records which may be requested during discovery in legal proceedings.
Most nations also regulate use of electronic messaging and electronic records retention in similar fashion as the United States. The most common regulations related to IM at work involve the need to produce archived business communications to satisfy government or judicial requests under law. Many instant messaging communications fall into the category of business communications that must be archived and retrievable.
User base
As of October 2019, the most used messaging apps worldwide are WhatsApp with 1.6 billion active users, Facebook Messenger with 1.3 billion users, and WeChat with 1.1 billion. There are only 25 countries in the world where WhatsApp is not the market leader in messaging apps and only 10 countries where the leading messenger app is not owned by Facebook.
More than 100 million users
Other platforms
Discontinued services and services with unclear activity
See also
Terms
Ambient awareness
Communications protocol
Mass collaboration
Message-oriented middleware
Operator messaging
Social media
Text messaging
SMS
Unified communications / Messaging
Lists
Comparison of instant messaging clients
Comparison of instant messaging protocols
Comparison of user features of messaging platforms
Other
Code Shikara (Computer worm)
References
External links
Internet culture
Internet Relay Chat
Social networking services
Online chat
Videotelephony
Text messaging |
57598 | https://en.wikipedia.org/wiki/Telecommunications%20in%20Algeria | Telecommunications in Algeria | Types of communications in Algeria, including telephones, mass media and the Internet.
Telephony
Telephones - main lines in use: 3.068 million (2007)
country comparison to the world: 48
Telephones - mobile cellular: 43.227 million (2015)
country comparison to the world: 31
Telephone system: domestic: good service in north but sparse in south; domestic satellite system with 12 earth stations (20 additional domestic earth stations are planned)
international: 5 submarine cables; microwave radio relay to Italy, France, Spain, Morocco, and Tunisia; coaxial cable to Morocco and Tunisia; participant in Medarabtel; satellite earth stations - 2 Intelsat (1 Atlantic Ocean and 1 Indian Ocean), 1 Intersputnik, and 1 Arabsat
Mass media
Radio broadcast stations: AM 25, FM 1, shortwave 8 (1999)
Radios: 7.1 million (1997)
Television broadcast stations: 46 (plus 216 repeaters) (1995)
Televisions: 3.1 million (1997)
Internet
Internet Service Providers (ISPs): 3 (2005)
Internet Hosts: 477 (2008)
country comparison to the world: 161
Internet Users: 20 million (2007)
country comparison to the world: 51
Country codes: .dz
Controversy
In October 2013, the Algeria Regulatory Authority for Post and Telecommunication awarded 3G licences to the three mobile operators in Algeria: Mobilis, ooredoo Algeria and Djezzy. The mobile operator companies promised their clients better mobile internet service.
The authority imposed on mobile operators a 3G double numbering license. This double numbering system requires any citizen that wishes to use the 3G service, must hold two separate 2G and 3G mobile numbers. Having two numbers with two different mobile internet access is more expensive and inconvenient.
Anonymous involvement
In August 2012, a group in the Anonymous IRC channel #OpAlgeria started an operation against the Algeria Regulatory Authority for Post and Telecommunication. The operation was started because the authority decided to require users to obtain authorization before using any kind of encryption (for example, IPsec). The authority also requires authorization for any type of virtual private network (VPN) technology (for example, PPTP, L2TP, GRE tunneling, OpenVPN, and most other protocols that allow protection of information).
See also
Media of Algeria
References
External links
GSM World page on Algeria
PanAfriL10n page on Algeria |
57829 | https://en.wikipedia.org/wiki/Keystroke%20logging | Keystroke logging | Keystroke logging, often referred to as keylogging or keyboard capturing, is the action of recording (logging) the keys struck on a keyboard, typically covertly, so that a person using the keyboard is unaware that their actions are being monitored. Data can then be retrieved by the person operating the logging program. A keystroke recorder or keylogger can be either software or hardware.
While the programs themselves are legal, with many designed to allow employers to oversee the use of their computers, keyloggers are most often used for stealing passwords and other confidential information.
Keylogging can also be used to study keystroke dynamics or human-computer interaction. Numerous keylogging methods exist, ranging from hardware and software-based approaches to acoustic cryptanalysis.
Application of keylogger
Software-based keyloggers
A software-based keylogger is a computer program designed to record any input from the keyboard. Keyloggers are used in IT organizations to troubleshoot technical problems with computers and business networks. Families and businesspeople use keyloggers legally to monitor network usage without their users' direct knowledge. Microsoft publicly stated that Windows 10 has a built-in keylogger in its final version "to improve typing and writing services". However, malicious individuals can use keyloggers on public computers to steal passwords or credit card information. Most keyloggers are not stopped by HTTPS encryption because that only protects data in transit between computers; software-based keyloggers run on the affected user's computer, reading keyboard inputs directly as the user types.
From a technical perspective, there are several categories:
Hypervisor-based: The keylogger can theoretically reside in a malware hypervisor running underneath the operating system, which thus remains untouched. It effectively becomes a virtual machine. Blue Pill is a conceptual example.
Kernel-based: A program on the machine obtains root access to hide in the OS and intercepts keystrokes that pass through the kernel. This method is difficult both to write and to combat. Such keyloggers reside at the kernel level, which makes them difficult to detect, especially for user-mode applications that do not have root access. They are frequently implemented as rootkits that subvert the operating system kernel to gain unauthorized access to the hardware. This makes them very powerful. A keylogger using this method can act as a keyboard device driver, for example, and thus gain access to any information typed on the keyboard as it goes to the operating system.
API-based: These keyloggers hook keyboard APIs inside a running application. The keylogger registers keystroke events as if it was a normal piece of the application instead of malware. The keylogger receives an event each time the user presses or releases a key. The keylogger simply records it.
Windows APIs such as GetAsyncKeyState(), GetForegroundWindow(), etc. are used to poll the state of the keyboard or to subscribe to keyboard events. A more recent example simply polls the BIOS for pre-boot authentication PINs that have not been cleared from memory.
Form grabbing based: Form grabbing-based keyloggers log Web form submissions by recording the form data on submit events. This happens when the user completes a form and submits it, usually by clicking a button or pressing enter. This type of keylogger records form data before it is passed over the Internet.
JavaScript-based: A malicious script tag is injected into a targeted web page, and listens for key events such as onKeyUp(). Scripts can be injected via a variety of methods, including cross-site scripting, man-in-the-browser, man-in-the-middle, or a compromise of the remote website.
Memory-injection-based: Memory Injection (MitB)-based keyloggers perform their logging function by altering the memory tables associated with the browser and other system functions. By patching the memory tables or injecting directly into memory, this technique can be used by malware authors to bypass Windows UAC (User Account Control). The Zeus and SpyEye trojans use this method exclusively.
Remote-access software keyloggers: local software keyloggers with an added feature that allows access to locally recorded data from a remote location. Remote communication may be achieved when one of these methods is used:
Data is uploaded to a website, database or an FTP server.
Data is periodically emailed to a pre-defined email address.
Data is wirelessly transmitted employing an attached hardware system.
The software enables a remote login to the local machine from the Internet or the local network, for data logs stored on the target machine.
Keystroke logging in writing process research
Since 2006, keystroke logging has been an established research method for the study of writing processes. Different programs have been developed to collect online process data of writing activities, including Inputlog, Scriptlog, Translog and GGXLog.
Keystroke logging is used legitimately as a suitable research instrument in several writing contexts. These include studies on cognitive writing processes, which include
descriptions of writing strategies; the writing development of children (with and without writing difficulties),
spelling,
first and second language writing, and
specialist skill areas such as translation and subtitling.
Keystroke logging can be used to research writing, specifically. It can also be integrated into educational domains for second language learning, programming skills, and typing skills.
Related features
Software keyloggers may be augmented with features that capture user information without relying on keyboard key presses as the sole input. Some of these features include:
Clipboard logging. Anything that has been copied to the clipboard can be captured by the program.
Screen logging. Screenshots are taken to capture graphics-based information. Applications with screen logging abilities may take screenshots of the whole screen, of just one application, or even just around the mouse cursor. They may take these screenshots periodically or in response to user behaviors (for example, when a user clicks the mouse). Screen logging can be used to capture data inputted with an on-screen keyboard.
Programmatically capturing the text in a control. The Microsoft Windows API allows programs to request the text 'value' in some controls. This means that some passwords may be captured, even if they are hidden behind password masks (usually asterisks).
The recording of every program/folder/window opened including a screenshot of every website visited.
The recording of search engine queries, instant messenger conversations, FTP downloads and other Internet-based activities (including the bandwidth used).
Hardware-based keyloggers
Hardware-based keyloggers do not depend upon any software being installed as they exist at a hardware level in a computer system.
Firmware-based: BIOS-level firmware that handles keyboard events can be modified to record these events as they are processed. Physical and/or root-level access is required to the machine, and the software loaded into the BIOS needs to be created for the specific hardware that it will be running on.
Keyboard hardware: Hardware keyloggers are used for keystroke logging utilizing a hardware circuit that is attached somewhere in between the computer keyboard and the computer, typically inline with the keyboard's cable connector. There are also USB connector-based hardware keyloggers, as well as ones for laptop computers (the Mini-PCI card plugs into the expansion slot of a laptop). More stealthy implementations can be installed or built into standard keyboards so that no device is visible on the external cable. Both types log all keyboard activity to their internal memory, which can be subsequently accessed, for example, by typing in a secret key sequence. Hardware keyloggers do not require any software to be installed on a target user's computer, therefore not interfering with the computer's operation and less likely to be detected by software running on it. However, its physical presence may be detected if, for example, it is installed outside the case as an inline device between the computer and the keyboard. Some of these implementations can be controlled and monitored remotely using a wireless communication standard.
Wireless keyboard and mouse sniffers: These passive sniffers collect packets of data being transferred from a wireless keyboard and its receiver. As encryption may be used to secure the wireless communications between the two devices, this may need to be cracked beforehand if the transmissions are to be read. In some cases, this enables an attacker to type arbitrary commands into a victim's computer.
Keyboard overlays: Criminals have been known to use keyboard overlays on ATMs to capture people's PINs. Each keypress is registered by the keyboard of the ATM as well as the criminal's keypad that is placed over it. The device is designed to look like an integrated part of the machine so that bank customers are unaware of its presence.
Acoustic keyloggers: Acoustic cryptanalysis can be used to monitor the sound created by someone typing on a computer. Each key on the keyboard makes a subtly different acoustic signature when struck. It is then possible to identify which keystroke signature relates to which keyboard character via statistical methods such as frequency analysis. The repetition frequency of similar acoustic keystroke signatures, the timings between different keyboard strokes and other context information such as the probable language in which the user is writing are used in this analysis to map sounds to letters. A fairly long recording (1000 or more keystrokes) is required so that a large enough sample is collected.
Electromagnetic emissions: It is possible to capture the electromagnetic emissions of a wired keyboard from a distance, without being physically wired to it. In 2009, Swiss researchers tested 11 different USB, PS/2 and laptop keyboards in a semi-anechoic chamber and found them all vulnerable, primarily because of the prohibitive cost of adding shielding during manufacture. The researchers used a wide-band receiver to tune into the specific frequency of the emissions radiated from the keyboards.
Optical surveillance: Optical surveillance, while not a keylogger in the classical sense, is nonetheless an approach that can be used to capture passwords or PINs. A strategically placed camera, such as a hidden surveillance camera at an ATM, can allow a criminal to watch a PIN or password being entered.
Physical evidence: For a keypad that is used only to enter a security code, the keys which are in actual use will have evidence of use from many fingerprints. A passcode of four digits, if the four digits in question are known, is reduced from 10,000 possibilities to just 24 possibilities (10⁴ versus 4!, the factorial of 4; see the short sketch after this list). These could then be used on separate occasions for a manual "brute force attack".
Smartphone sensors: Researchers have demonstrated that it is possible to capture the keystrokes of nearby computer keyboards using only the commodity accelerometer found in smartphones. The attack is made possible by placing a smartphone near a keyboard on the same desk. The smartphone's accelerometer can then detect the vibrations created by typing on the keyboard and then translate this raw accelerometer signal into readable sentences with as much as 80 percent accuracy. The technique involves working through probability by detecting pairs of keystrokes, rather than individual keys. It models "keyboard events" in pairs and then works out whether the pair of keys pressed is on the left or the right side of the keyboard and whether they are close together or far apart on the QWERTY keyboard. Once it has worked this out, it compares the results to a preloaded dictionary where each word has been broken down in the same way. Similar techniques have also been shown to be effective at capturing keystrokes on touchscreen keyboards, in some cases in combination with the gyroscope or the ambient-light sensor.
Body keyloggers: Body keyloggers track and analyze body movements to determine which keys were pressed. The attacker needs to be familiar with the key layout of the tracked keyboard to correlate body movements with key positions. Tracking audible signals of the user interface (e.g. a sound the device produces to inform the user that a keystroke was logged) may reduce the complexity of the body keylogging algorithms, as it marks the moment at which a key was pressed.
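The reduction in key space mentioned under "Physical evidence" above is easy to verify. The short Python snippet below is only illustrative; the digits shown are hypothetical.

from itertools import permutations
from math import factorial

known_digits = "1739"  # hypothetical digits revealed by wear or fingerprints on a keypad

orderings = {"".join(p) for p in permutations(known_digits)}
print(len(orderings), factorial(4), 10 ** 4)  # 24 24 10000

With all four (distinct) digits known, only the 24 orderings need to be tried, versus 10,000 possible four-digit codes when nothing is known; if a digit repeats, the number of distinct orderings is even smaller.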
History
In the mid-1970s, the Soviet Union developed and deployed a hardware keylogger targeting typewriters. Termed the "selectric bug", it measured the movements of the print head of IBM Selectric typewriters via subtle influences on the regional magnetic field caused by the rotation and movements of the print head. An early keylogger was written by Perry Kivolowitz and posted to the Usenet newsgroups net.unix-wizards and net.sources on November 17, 1983. The posting seems to be a motivating factor in restricting access to /dev/kmem on Unix systems. The user-mode program operated by locating and dumping character lists (clists) as they were assembled in the Unix kernel.
In the 1970s, spies installed keystroke loggers in the US Embassy and Consulate buildings in Moscow.
They installed the bugs in Selectric II and Selectric III electric typewriters.
Soviet embassies used manual typewriters, rather than electric typewriters, for classified information—apparently because they are immune to such bugs.
As of 2013, Russian special services still use typewriters.
Cracking
Writing simple software applications for keylogging can be trivial, and like any nefarious computer program, can be distributed as a trojan horse or as part of a virus. What is not trivial for an attacker, however, is installing a covert keystroke logger without getting caught and downloading data that has been logged without being traced. An attacker that manually connects to a host machine to download logged keystrokes risks being traced. A trojan that sends keylogged data to a fixed e-mail address or IP address risks exposing the attacker.
Trojans
Researchers Adam Young and Moti Yung discussed several methods of sending keystroke logging. They presented a deniable password snatching attack in which the keystroke logging trojan is installed using a virus or worm. An attacker who is caught with the virus or worm can claim to be a victim. The cryptotrojan asymmetrically encrypts the pilfered login/password pairs using the public key of the trojan author and covertly broadcasts the resulting ciphertext. They mentioned that the ciphertext can be steganographically encoded and posted to a public bulletin board such as Usenet.
Use by police
In 2000, the FBI used FlashCrest iSpy to obtain the PGP passphrase of Nicodemo Scarfo, Jr., son of mob boss Nicodemo Scarfo.
Also in 2000, the FBI lured two suspected Russian cybercriminals to the US in an elaborate ruse, and captured their usernames and passwords with a keylogger that was covertly installed on a machine that they used to access their computers in Russia. The FBI then used these credentials to gain access to the suspects' computers in Russia to obtain evidence to prosecute them.
Countermeasures
The effectiveness of countermeasures varies because keyloggers use a variety of techniques to capture data and the countermeasure needs to be effective against the particular data capture technique. In the case of Windows 10 keylogging by Microsoft, changing certain privacy settings may disable it. An on-screen keyboard will be effective against hardware keyloggers; transparency will defeat some—but not all—screen loggers. An anti-spyware application that can only disable hook-based keyloggers will be ineffective against kernel-based keyloggers.
Keylogger program authors may be able to update their program's code to adapt to countermeasures that have proven effective against it.
Anti-keyloggers
An anti-keylogger is a piece of software specifically designed to detect keyloggers on a computer, typically comparing all files in the computer against a database of keyloggers, looking for similarities which might indicate the presence of a hidden keylogger. As anti-keyloggers have been designed specifically to detect keyloggers, they have the potential to be more effective than conventional antivirus software; some antivirus software does not consider keyloggers to be malware, as under some circumstances a keylogger can be considered a legitimate piece of software.
Live CD/USB
Rebooting the computer using a Live CD or write-protected Live USB is a possible countermeasure against software keyloggers if the CD is clean of malware and the operating system contained on it is secured and fully patched so that it cannot be infected as soon as it is started. Booting a different operating system does not impact the use of a hardware or BIOS based keylogger.
Anti-spyware / Anti-virus programs
Many anti-spyware applications can detect some software based keyloggers and quarantine, disable, or remove them. However, because many keylogging programs are legitimate pieces of software under some circumstances, anti-spyware often neglects to label keylogging programs as spyware or a virus. These applications can detect software-based keyloggers based on patterns in executable code, heuristics and keylogger behaviors (such as the use of hooks and certain APIs).
No software-based anti-spyware application can be 100% effective against all keyloggers. Software-based anti-spyware cannot defeat non-software keyloggers (for example, hardware keyloggers attached to keyboards will always receive keystrokes before any software-based anti-spyware application).
The particular technique that the anti-spyware application uses will influence its potential effectiveness against software keyloggers. As a general rule, anti-spyware applications with higher privileges will defeat keyloggers with lower privileges. For example, a hook-based anti-spyware application cannot defeat a kernel-based keylogger (as the keylogger will receive the keystroke messages before the anti-spyware application), but it could potentially defeat hook- and API-based keyloggers.
Network monitors
Network monitors (also known as reverse-firewalls) can be used to alert the user whenever an application attempts to make a network connection. This gives the user the chance to prevent the keylogger from "phoning home" with their typed information.
Automatic form filler programs
Automatic form-filling programs may prevent keylogging by removing the requirement for a user to type personal details and passwords using the keyboard. Form fillers are primarily designed for Web browsers to fill in checkout pages and log users into their accounts. Once the user's account and credit card information has been entered into the program, it will be automatically entered into forms without ever using the keyboard or clipboard, thereby reducing the possibility that private data is being recorded. However, someone with physical access to the machine may still be able to install software that can intercept this information elsewhere in the operating system or while in transit on the network. (Transport Layer Security (TLS) reduces the risk that data in transit may be intercepted by network sniffers and proxy tools.)
One-time passwords (OTP)
Using one-time passwords may prevent unauthorized access to an account which has had its login details exposed to an attacker via a keylogger, as each password is invalidated as soon as it is used. This solution may be useful for someone using a public computer. However, an attacker who has remote control over such a computer can simply wait for the victim to enter their credentials before performing unauthorized transactions on their behalf while their session is active.
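To make the mechanism concrete, the following is a minimal sketch of an RFC 6238-style time-based one-time password (TOTP) generator using only the Python standard library. It is illustrative rather than production code (real deployments use vetted libraries and verify codes server-side with a small tolerance window); the secret shown is a commonly used demo value, not a real credential.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive a short-lived code from a shared secret and the current time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step          # changes every `step` seconds
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; the printed code expires after 30 seconds

A keylogger that captures such a code gains little, because the code stops being valid once its time step has passed or once it has been redeemed.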
Security tokens
Use of smart cards or other security tokens may improve security against replay attacks in the face of a successful keylogging attack, as accessing protected information would require both the (hardware) security token as well as the appropriate password/passphrase. Knowing the keystrokes, mouse actions, display, clipboard, etc. used on one computer will not subsequently help an attacker gain access to the protected resource. Some security tokens work as a type of hardware-assisted one-time password system, and others implement a cryptographic challenge–response authentication, which can improve security in a manner conceptually similar to one time passwords. Smartcard readers and their associated keypads for PIN entry may be vulnerable to keystroke logging through a so-called supply chain attack where an attacker substitutes the card reader/PIN entry hardware for one which records the user's PIN.
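A conceptual sketch of a challenge–response exchange is shown below. It is not the protocol of any particular token; it simply shows why a logged response cannot be replayed: the hypothetical server issues a fresh random challenge for every login, and the token's secret never leaves the device.

import hashlib
import hmac
import os

shared_key = os.urandom(32)            # provisioned into both the token and the server

def token_respond(challenge: bytes) -> bytes:
    # The token signs the server's challenge; only the signature is sent back.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)             # fresh, unpredictable challenge per login attempt
response = token_respond(challenge)

# Server-side check; a response captured by a keylogger is useless for the next
# login, because that login will use a different challenge.
expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)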
On-screen keyboards
Most on-screen keyboards (such as the on-screen keyboard that comes with Windows XP) send normal keyboard event messages to the external target program to type text. Software key loggers can log these typed characters sent from one program to another.
Keystroke interference software
Keystroke interference software is also available.
These programs attempt to trick keyloggers by introducing random keystrokes, although this simply results in the keylogger recording more information than it needs to. An attacker has the task of extracting the keystrokes of interest—the security of this mechanism, specifically how well it stands up to cryptanalysis, is unclear.
Speech recognition
Similar to on-screen keyboards, speech-to-text conversion software can also be used against keyloggers, since there are no typing or mouse movements involved. The weakest point of using voice-recognition software may be how the software sends the recognized text to target software after the user's speech has been processed.
Handwriting recognition and mouse gestures
Many PDAs and lately tablet PCs can already convert pen (also called stylus) movements on their touchscreens to computer-understandable text successfully. Mouse gestures use this principle by using mouse movements instead of a stylus. Mouse gesture programs convert these strokes to user-definable actions, such as typing text. Similarly, graphics tablets and light pens can be used to input these gestures; however, these are becoming less common.
The same potential weakness of speech recognition applies to this technique as well.
Macro expanders/recorders
With the help of many programs, seemingly meaningless text can be expanded to meaningful text, most of the time context-sensitively, e.g. "en.wikipedia.org" can be expanded when a web browser window has the focus. The biggest weakness of this technique is that these programs send their keystrokes directly to the target program. However, this can be overcome by using the 'alternating' technique described below, i.e. sending mouse clicks to non-responsive areas of the target program, sending meaningless keys, sending another mouse click to the target area (e.g. password field) and switching back-and-forth.
Deceptive typing
Alternating between typing the login credentials and typing characters somewhere else in the focus window can cause a keylogger to record more information than it needs to, but this could be easily filtered out by an attacker. Similarly, a user can move their cursor using the mouse while typing, causing the logged keystrokes to be in the wrong order e.g., by typing a password beginning with the last letter and then using the mouse to move the cursor for each subsequent letter. Lastly, someone can also use context menus to remove, cut, copy, and paste parts of the typed text without using the keyboard. An attacker who can capture only parts of a password will have a larger key space to attack if they choose to execute a brute-force attack.
Another very similar technique uses the fact that any selected text portion is replaced by the next key typed. e.g., if the password is "secret", one could type "s", then some dummy keys "asdf". These dummy characters could then be selected with the mouse, and the next character from the password "e" typed, which replaces the dummy characters "asdf".
These techniques assume incorrectly that keystroke logging software cannot directly monitor the clipboard, the selected text in a form, or take a screenshot every time a keystroke or mouse click occurs. They may, however, be effective against some hardware keyloggers.
See also
Anti-keylogger
Black-bag cryptanalysis
Computer surveillance
Digital footprint
Hardware keylogger
Reverse connection
Session replay
Spyware
Trojan horse
Virtual keyboard
Web tracking
References
External links
Cryptographic attacks
Spyware
Surveillance
Cybercrime
Security breaches |
58171 | https://en.wikipedia.org/wiki/AirPort | AirPort | AirPort is the name given to a series of products by Apple Inc. using the Wi-Fi protocols (802.11b, 802.11g, 802.11n and 802.11ac). These products comprise a number of wireless routers and wireless cards. In Japan, the line of products was marketed under the brand AirMac due to previous registration by I-O Data.
In 2018, Apple discontinued the AirPort product line. The remaining inventory was sold off, and Apple later retailed routers from Linksys, Netgear, and Eero in Apple retail stores.
Overview
AirPort debuted in 1999, as "one more thing" at Macworld New York, with Steve Jobs picking up an iBook supposedly to give the cameraman a better shot as he surfed the Web. The initial offering consisted of an optional expansion card for Apple's new line of iBook notebooks and an AirPort Base Station. The AirPort card (a repackaged Lucent ORiNOCO Gold Card PC Card adapter) was later added as an option for almost all of Apple's product line, including PowerBooks, eMacs, iMacs, and Power Macs. Only Xserves did not have it as a standard or optional feature. The original AirPort system allowed transfer rates up to 11 Mbit/s and was commonly used to share Internet access and files between multiple computers.
In 2003, Apple introduced AirPort Extreme, based on the 802.11g specification, using Broadcom's BCM4306/BCM2050 two-chip solution. AirPort Extreme allows theoretical peak data transfer rates of up to 54 Mbit/s, and is fully backward-compatible with existing 802.11b wireless network cards and base stations. Several of Apple's desktop computers and portable computers, including the MacBook Pro, MacBook, Mac Mini, and iMac shipped with an AirPort Extreme (802.11g) card as standard. All other Macs of the time had an expansion slot for the card. AirPort and AirPort Extreme cards are not physically compatible: AirPort Extreme cards cannot be installed in older Macs, and AirPort cards cannot be installed in newer Macs. The original AirPort card was discontinued in June 2004.
In 2004, Apple released the AirPort Express base station as a "Swiss Army knife" multifunction product. It can be used as a portable travel router, using the same AC connectors as on Apple's AC adapters; as an audio streaming device, with both line-level and optical audio outputs; and as a USB printer sharing device, through its USB host port.
In 2007, Apple unveiled a new AirPort Extreme (802.11 Draft-N) Base Station, which introduced 802.11 Draft-N to the Apple AirPort product line. This implementation of 802.11 Draft-N can operate in both the 2.4 GHz and 5 GHz ISM bands, and has modes that make it compatible with 802.11b/g and 802.11a. The number of Ethernet ports was increased to four—one nominally for WAN, three for LAN, but all can be used in bridged mode. A USB port was included for printers and other USB devices. The Ethernet ports were later updated to Gigabit Ethernet on all ports. The styling is similar to that of the Mac Mini and Apple TV.
In January 2008, Apple introduced Time Capsule, an AirPort Extreme (802.11 Draft-N) with an internal hard drive. The device includes software to allow any computer running a reasonably recent version of Mac OS or Windows to access the disk as a shared volume. Macs running Mac OS X 10.5 and later, which includes the Time Machine feature, can use the Time Capsule as a wireless backup device, allowing automatic, untethered backups of the client computer. As an access point, the unit is otherwise equivalent to an AirPort Extreme (802.11 Draft-N), with four Gigabit Ethernet ports and a USB port for printer and disk sharing.
In March 2008, Apple released an updated AirPort Express Base Station with 802.11 Draft-N 2x2 radio. All other features (analog and digital optical audio out, single Ethernet port, USB port for printer sharing) remained the same. At the time, it was the least expensive ($99) device to handle both frequency bands (2.4 GHz and 5 GHz) in 2x2 802.11 Draft-N.
In March 2009, Apple unveiled AirPort Extreme and Time Capsule products with simultaneous dual-band 802.11 Draft-N radios. This allows full 802.11 Draft-N 2x2 communication in both 802.11 Draft-N bands at the same time.
In October 2009, Apple unveiled the updated AirPort Extreme and Time Capsule products with antenna improvements (the 5.8 GHz model).
In 2011, Apple unveiled an updated AirPort Extreme base station, referred to as AirPort Extreme 802.11n (5th Generation). The latest AirPort base stations and cards work with third-party base stations and wireless cards that conform to the 802.11a, 802.11b, 802.11g, 802.11 Draft-N, and 802.11 Final-N networking standards. It was not uncommon to see wireless networks composed of several types of AirPort base station serving old and new Macintosh, Microsoft Windows, and Linux systems. Apple's software drivers for AirPort Extreme also supported some Broadcom and Atheros-based PCI Wireless adapters when fitted to Power Mac computers. Due to the developing nature of Draft-N hardware, there was no assurance that the new model would work with all 802.11 Draft-N routers and access devices from other manufacturers.
Discontinuation
In approximately 2016, Apple disbanded its wireless router team. In 2018, Apple formally discontinued all of its AirPort products, exiting the router market. Bloomberg News noted that "Apple rarely discontinues product categories" and that its decision to leave the business was "a boon for other wireless router makers."
AirPort routers
An AirPort router is used to connect AirPort-enabled computers to the Internet, each other, a wired LAN, and/or other devices.
AirPort Base Station
The original AirPort Base Station (known as Graphite, model M5757, part number M7601LL/B) features a dial-up modem and an Ethernet port. It employs a Lucent WaveLAN Silver PC Card as the Radio, and uses an embedded AMD Elan processor. It connects to the machine via the Ethernet port. It was released July 21, 1999. The Graphite AirPort Base Station is functionally identical to the Lucent RG-1000 wireless base station and can run the same firmware. Due to the original firmware-locked limitations of the Silver card, the unit can only accept 40-bit WEP encryption. Later aftermarket tweaks can enable 128-bit WEP on the Silver card. Aftermarket Linux firmware has been developed for these units to extend their useful service life.
A second-generation model (known as Dual Ethernet or Snow, model M8440, part number M8209LL/A) was introduced on November 13, 2001. It features a second Ethernet port when compared to the Graphite design, allowing for a shared Internet connection with both wired and wireless clients. Also new (but available for the original model via software update) was the ability to connect to and share America Online's dial-up service—a feature unique to Apple base stations. This model is based on Motorola's PowerPC 855 processor and contained a fully functional original AirPort Card, which can be removed and used in any compatible Macintosh computer.
AirPort Extreme Base Station
Three different configurations of model A1034 are all called the "AirPort Extreme Base Station":
1. M8799LL/A – 2 Ethernet ports, 1 USB port, external antenna connector, 1 56k (V.90) modem port
2. M8930LL/A – 2 Ethernet ports, 1 USB port, external antenna connector
3. M9397LL/A – 2 Ethernet ports, 1 USB port, external antenna connector, powered over Ethernet cable (PoE/UL 2043)
The AirPort Base Station was discontinued after the updated AirPort Extreme was announced on January 7, 2003. In addition to providing wireless connection speeds of up to a maximum of 54 Mbit/s, it adds an external antenna port and a USB port. The antenna port allows the addition of a signal-boosting antenna, and the USB port allows the sharing of a USB printer. A connected printer is made available via Bonjour's "zero configuration" technology and IPP to all wired and wireless clients on the network. The CPU is an AU1500-333MBC Alchemy (processor). A second model (M8930LL/A) lacking the modem and external antenna port was briefly made available, but then discontinued after the launch of AirPort Express (see below). On April 19, 2004, a third version, marketed as the AirPort Extreme Base Station (with Power over Ethernet and UL 2043), was introduced that supports Power over Ethernet and complies to the UL 2043 specifications for safe usage in air handling spaces, such as above suspended ceilings. All three models support the Wireless Distribution System (WDS) standard. The model introduced in January 2007 does not have a corresponding PoE, UL-compliant variant.
An AirPort Extreme base station can serve a maximum of 50 wireless clients simultaneously.
AirPort Extreme 802.11n
The AirPort Extreme was updated on January 9, 2007, to support the 802.11n protocol. This revision also adds two LAN ports for a total of three. It now more closely resembles the square-shaped 1st generation Apple TV and Mac Mini, and is about the same size as the mini.
The new AirPort Disk feature allows users to plug a USB hard drive into the AirPort Extreme for use as a network-attached storage (NAS) device for Mac OS X and Microsoft Windows clients. Users may also connect a USB hub and printer. The performance of USB hard drives attached to an AirPort Extreme is slower than if the drive were connected directly to a computer, due to the processor speed of the AirPort Extreme. Depending on the setup and types of reads and writes, performance ranges from 0.5 to 17.5 MB/s for writing and 1.9 to 25.6 MB/s for reading. Performance for the same disk connected directly to a computer would be 6.6 to 31.6 MB/s for writing and 7.1 to 37.2 MB/s for reading.
The AirPort Extreme has no port for an external antenna.
On August 7, 2007, the AirPort Extreme began shipping with Gigabit Ethernet, matching most other Apple products.
On March 19, 2008, Apple released a firmware update for both models of the AirPort Extreme to allow AirPort Disks to be used in conjunction with Time Machine, similar to the functionality provided by Time Capsule.
On March 3, 2009, Apple unveiled a new AirPort Extreme with simultaneous dual-band 802.11 Draft-N radios. This allows full 802.11 Draft-N 2x2 communication in both 802.11 Draft-N bands at the same time.
On October 20, 2009, Apple unveiled an updated AirPort Extreme base station with antenna improvements.
On June 21, 2011, Apple unveiled an updated AirPort Extreme base station, referred to as AirPort Extreme 802.11n (5th Generation).
AirPort Express
The AirPort Express is a simplified and compact AirPort Extreme base station. It allows up to 50 networked users, and includes a feature called AirTunes (predecessor to AirPlay). The original version (M9470LL/A, model A1084) was introduced by Apple on June 7, 2004, and includes an analog–optical audio mini-jack output, a USB port for remote printing or charging the iPod (iPod Shuffle only), and a single Ethernet port. The USB port cannot be used to connect a hard disk or other storage device.
The AirPort Express functions as a wireless access point when connected to an Ethernet network. It can be used as an Ethernet-to-wireless bridge under certain wireless configurations.
It can be used to extend the range of a network, or as a printer and audio server.
In 2012, the AirPort Express took on a new shape, similar to that of the second and third generation Apple TV. The new product also features two 10/100 Mbit/s Ethernet LAN ports.
AirPort Time Capsule
The AirPort Time Capsule is a version of AirPort Extreme with a built-in hard drive, currently coming in either 2 TB or 3 TB sizes, with a previous version offering 1 TB or 500 GB. When used with Time Machine in Mac OS X Leopard, it automatically makes incremental data backups. Acting as a wireless file server, AirPort Time Capsule can serve to back up multiple Macs. It also includes all AirPort Extreme (802.11 Draft-N) functionality.
On March 3, 2009, the Time Capsule was updated with simultaneous dual-band 802.11 Draft-N capability, remote AirPort Disk accessibility through Back to My Mac, and the ability to broadcast a guest network at the same time as an existing network.
On October 20, 2009, Apple unveiled the updated Time Capsule with antenna improvements resulting in wireless performance gains of both speed and range. Also stated is a resulting performance improvement/time reduction on Time Capsule backups of up to 60%.
In June 2011, Apple unveiled the updated Time Capsule with a higher capacity 2 TB and 3 TB. They also changed the wireless card from a Marvell chip to a Broadcom BCM4331 chip. When used in conjunction with the latest 2011 MacBooks, MacBook Pros, and MacBook Airs (which also use a Broadcom BCM4331 wireless chip), the wireless signal is improved thanks to Broadcom's Frame Bursting technology.
On June 10, 2013, Apple renamed the Time Capsule to the AirPort Time Capsule and added support for the 802.11ac standard.
AirPort cards
An AirPort card is an Apple-branded wireless card used to connect to wireless networks such as those provided by an AirPort Base Station.
AirPort 802.11b card
The original model, known as simply AirPort card, was a re-branded Lucent WaveLAN/Orinoco Gold PC card, in a modified housing that lacked the integrated antenna. It was designed to be user-installable. It was also modified in such a way that it could not be used in a regular PCMCIA slot (at the time it was significantly cheaper than the official WaveLAN/Orinoco Gold card).
An AirPort card adapter is required to use this card in the slot-loading iMacs.
AirPort Extreme 802.11g cards
Corresponding with the release of the AirPort Extreme Base Station, the AirPort Extreme card became available as an option on the current models. It is based on a Broadcom 802.11g chipset and is housed in a custom form factor, but is electrically compatible with the Mini PCI standard. It was also capable of being user-installed.
Variants of the user-installable AirPort Extreme card are marked A-1010 (early North American spec), A-1026 (current North American spec), A-1027 (Europe/Asia spec (additional channels)) and A-1095 (unknown).
A different 802.11g card was included in the last iteration of the PowerPC-based PowerBooks and iBooks. A major distinction for this card was that it was the first "combo" card that included both 802.11g as well as Bluetooth. It was also the first card that was not user-installable. It was again a custom form factor, but was still electrically a Mini PCI interface for the Broadcom WLAN chip. A separate USB connection was used for the on-board Bluetooth chip.
The AirPort Extreme (802.11g) card was discontinued in January 2009.
Integrated AirPort Extreme 802.11a/b/g and /n cards
As 802.11g began to come standard on all notebook models, Apple phased out the user-installable designs in their notebooks, iMacs and Mac Minis by mid-2005, moving to an integrated design. AirPort continued to be an option, either installed at purchase or later, on the Power Mac G5 and the Mac Pro.
With the introduction of the Intel-based MacBook Pro in January 2006, Apple began to use a standard PCI Express mini card. The particular brand and model of card has changed over the years; in early models, it was Atheros brand, while since late 2008 they have been Broadcom cards. This distinction is mostly of concern to those who run other operating systems such as Linux on MacBooks, as different cards require different device drivers.
The MacBook Air Mid 2012 13", MacBook Air Mid 2011 13" and MacBook Air Late 2010 (11", A1370 and 13", Model A1369) each use a Broadcom BCM 943224 PCIEBT2 Wi-Fi card (main chip BCM43224: 2 × 2 2.4 GHz and 5 GHz).
The MacBook Pro Retina Mid 2012 uses Broadcom BCM94331CSAX (main chip BCM4331: 3 × 3 2.4 GHz and 5 GHz, up to 450 Mbit/s).
In early 2007, Apple announced that most Intel Core 2 Duo-based Macs, which had been shipping since November 2006, already included AirPort Extreme cards compatible with the draft 802.11n specification. Apple also offered an application to enable 802.11 Draft-N functionality on these Macs for a fee of $1.99, or free with the purchase of an AirPort Extreme base station. Starting with Leopard, the Draft-N functionality was quietly enabled on all Macs that had Draft-N cards. This card was also a PCI Express mini design, but used three antenna connectors in the notebooks and iMacs, in order to use a 2 × 3 MIMO antenna configuration. The cards in the Mac Pro and Apple TV have two antenna connectors and support a 2 × 2 configuration. The Network Utility application located in Applications → Utilities can be used to identify the model and supported protocols of an installed AirPort card.
Integrated AirPort Extreme 802.11ac cards
The MacBook Air Mid 2013 uses a Broadcom BCM94360CS2 (main chip BCM4360: 2 × 2 : 2).
Security
AirPort and AirPort Extreme support a variety of security technologies to prevent eavesdropping and unauthorized network access, including several forms of cryptography.
The original graphite AirPort base station used 40-bit Wired Equivalent Privacy (WEP). The second-generation model (known as Dual Ethernet or Snow) AirPort base station, like most other Wi-Fi products, used 40-bit or 128-bit Wired Equivalent Privacy (WEP). AirPort Extreme and Express base stations retain this option, but also allow and encourage the use of Wi-Fi Protected Access (WPA) and, as of July 14, 2005, WPA2.
AirPort Extreme cards, which use the Broadcom chipset, have the Media Access Control layer in software. The driver is closed source.
AirPort Disk
The AirPort Disk feature shares a hard disk connected to an AirPort Extreme or Time Capsule (though not AirPort Express), as a small-scale NAS. AirPort Disk can be accessed from Windows and Linux as well as Mac OS X using the SMB/CIFS protocol for FAT volumes, and both SMB/CIFS and AFP for HFS+ partitions. NTFS- or exFAT-formatted volumes are not supported.
Although Windows does not natively support HFS+, an HFS+ volume on an AirPort Disk can be easily accessed from Windows. This is because the disk is accessed through the SMB/CIFS protocol, which is filesystem-independent, so access from Windows does not depend on the volume's format. Therefore, HFS+ is a viable option for Windows as well as OS X users, and more flexible than FAT32, as the latter has a 4 GiB file size limit.
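For illustration, the following minimal Python sketch reads a file from an AirPort Disk share over SMB/CIFS using the third-party pysmb package; the base station address, account name, share name and file path are hypothetical placeholders rather than Apple-documented values, and this is only one possible way to reach such a share.

from io import BytesIO

from smb.SMBConnection import SMBConnection

# Hypothetical credentials and addresses; AirPort Disk accounts are configured
# on the base station itself via AirPort Utility.
conn = SMBConnection(
    username="airport",
    password="disk-password",
    my_name="client",
    remote_name="AirPort-Extreme",
    use_ntlm_v2=True,
)

if conn.connect("10.0.1.1", 139):          # assumed LAN address of the base station
    for share in conn.listShares():
        print(share.name)                  # the attached USB disk appears as a share
    buf = BytesIO()
    conn.retrieveFile("AirPortDisk", "/backups/notes.txt", buf)  # hypothetical share and path
    print(buf.getvalue().decode("utf-8", errors="replace"))
    conn.close()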
Recent firmware versions cause the internal disk and any external USB drives to sleep after periods of time as short as 2 minutes.
A caveat of the use of AirPort Disk is that the AFP port 548 is reserved for the service, which then does not allow for simultaneous use of port forwarding to provide AFP services to external users. This is also true of a Time Capsule setup for use as a network-based Time Machine Backup location, its main purpose and default configuration. An AirPort administrator must choose between using AirPort Disk and providing remote access to AFP services.
The AirPort Extreme or Time Capsule will recognize multiple disks connected via a USB hub.
See also
AirDrop
AirPrint
iTunes
Sleep Proxy Service
Timeline of Apple products
Wireless LAN
IEEE 802.11
Notes
External links
AirPort products (archived 2013-06-07)
All AirPort products
AirPort manuals
AirPort software compatibility table
Apple AirPort 802.11 N first look at ifixit
Apple Inc. peripherals
Macintosh internals
Wi-Fi
ITunes
Computer-related introductions in 1999
Products and services discontinued in 2016 |
58222 | https://en.wikipedia.org/wiki/Fujitsu | Fujitsu | is a Japanese multinational information and communications technology equipment and services corporation, established in 1935 and headquartered in Tokyo. Fujitsu is the world's sixth-largest IT services provider by annual revenue, and the largest in Japan, in 2021. The hardware offerings from Fujitsu are mainly of personal and enterprise computing products, including x86, SPARC and mainframe compatible server products, although the corporation and its subsidiaries also offer a diversity of products and services in the areas of data storage, telecommunications, advanced microelectronics, and air conditioning. It has approximately 126,400 employees and its products and services are available in approximately 180 countries.
Fortune named Fujitsu as one of the world's most admired companies and a Global 500 company. Fujitsu is listed on the Tokyo Stock Exchange and Nagoya Stock Exchange; its Tokyo listing is a constituent of the Nikkei 225 and TOPIX 100 indices.
History
1935 to 2000
Fujitsu was established on June 20, 1935, under the name Fuji Tsushinki Seizo (Fuji Telecommunications Equipment Manufacturing), as a spin-off of the Fuji Electric Company, itself a joint venture founded in 1923 between the Furukawa Electric Company and the German conglomerate Siemens; this makes it one of the oldest operating IT companies, established after IBM and before Hewlett-Packard. Despite its connections to the Furukawa zaibatsu, Fujitsu escaped the Allied occupation of Japan after the Second World War mostly unscathed.
In 1954, Fujitsu manufactured Japan's first computer, the FACOM 100 mainframe, and in 1961 launched its second-generation (transistorized) computers, the FACOM 222 mainframe. The 1968 FACOM230 "5" Series marked the beginning of its third-generation computers. Fujitsu offered mainframe computers from 1955 until at least 2002. Fujitsu's computer products have also included minicomputers, small business computers, servers and personal computers.
In 1955, Fujitsu founded Kawasaki Frontale as a company football club; Kawasaki Frontale has been a J. League football club since 1999. In 1967, the company's name was officially changed to the contraction Fujitsu. Since 1985, the company has also fielded a company American football team, the Fujitsu Frontiers, who play in the corporate X-League; they have appeared in seven Japan X Bowls, winning two, and have won two Rice Bowls.
In 1971, Fujitsu signed an OEM agreement with the Canadian company Consolidated Computers Limited (CCL) to distribute CCL's data entry product, Key-Edit. Fujitsu joined both International Computers Limited (ICL) who earlier began marketing Key-Edit in the British Commonwealth of countries as well as in both western and eastern Europe; and CCL's direct marketing staff in Canada, USA, London (UK) and Frankfurt. Mers Kutt, inventor of Key-Edit and founder of CCL, was the common thread that led to Fujitsu's later association with ICL and Gene Amdahl.
In 1986, Fujitsu and The Queen's University of Belfast business incubation unit (QUBIS Ltd) established a joint venture called Kainos, a privately held software company based in Belfast, Northern Ireland.
In 1990, Fujitsu acquired 80% of the UK-based computer company ICL for $1.29 billion. In September 1990, Fujitsu announced the launch of a new series of mainframe computers which were at that time the fastest in the world. In July 1991, Fujitsu acquired more than half of the Russian company KME-CS (Kazan Manufacturing Enterprise of Computer Systems).
In 1992, Fujitsu introduced the world's first 21-inch full-color plasma display. It was a hybrid, based upon the plasma display created at the University of Illinois at Urbana-Champaign and NHK STRL, achieving superior brightness.
In 1993, Fujitsu formed a flash memory manufacturing joint venture with AMD, Spansion. As part of the transaction, AMD contributed its flash memory group, Fab 25 in Texas, its R&D facilities and assembly plants in Thailand, Malaysia and China; Fujitsu provided its Flash memory business division and the Malaysian Fujitsu Microelectronics final assembly and test operations.
From February 1989 until mid-1997, Fujitsu built the FM Towns PC variant. It started as a proprietary PC variant intended for multimedia applications and computer games, but later became more compatible with regular PCs. In 1993, the FM Towns Marty was released, a gaming console compatible with the FM Towns games.
Fujitsu agreed to acquire the 58 percent of Amdahl Corporation (including the Canada-based DMR consulting group) that it did not already own for around $850 million in July 1997.
In April 1997, the company acquired a 30 percent stake in GLOVIA International, Inc., an El Segundo, Calif., manufacturing ERP software provider whose software it had begun integrating into its electronics plants starting in 1994.
In June 1999 Fujitsu's historical connection with Siemens was revived, when the two companies agreed to merge their European computer operations into a new 50:50 joint venture called Fujitsu Siemens Computers, which became the world's fifth-largest computer manufacturing company.
2000 to 2020
In April 2000, Fujitsu acquired the remaining 70% of GLOVIA International.
In April 2002 ICL re-branded itself as Fujitsu. On March 2, 2004, Fujitsu Computer Products of America lost a class action lawsuit over hard disk drives with defective chips and firmware. In October 2004, Fujitsu acquired the Australian subsidiary of Atos Origin, a systems implementation company with around 140 employees which specialized in SAP.
In August 2007, Fujitsu signed a £500 million, 10-year deal with Reuters Group under which Reuters outsourced the majority of its internal IT department to Fujitsu. As part of the agreement around 300 Reuters staff and 200 contractors transferred to Fujitsu. In October 2007, Fujitsu announced that it would be establishing an offshore development centre in Noida, India with a capacity to house 1,200 employees, in an investment of US$10 million.
In October 2007, Fujitsu's Australia and New Zealand subsidiary acquired Infinity Solutions Ltd, a New Zealand-based IT hardware, services and consultancy company, for an undisclosed amount.
In January 2009, Fujitsu reached an agreement to sell its HDD business to Toshiba. Transfer of the business was completed on October 1, 2009.
In March 2009, Fujitsu announced that it had decided to convert FDK Corporation, at that time an equity-method affiliate, to a consolidated subsidiary from May 1, 2009 (tentative schedule) by subscribing to a private placement to increase FDK's capital. On April 1, 2009, Fujitsu agreed to acquire Siemens' stake in Fujitsu Siemens Computers for approximately EUR450m. Fujitsu Siemens Computers was subsequently renamed Fujitsu Technology Solutions.
In April 2009, Fujitsu acquired Australian software company Supply Chain Consulting for a $48 million deal, just weeks after purchasing the Telstra subsidiary Kaz for $200 million.
In February 2013, facing a forecast net loss of 95 billion yen for the year ending March 2013, Fujitsu announced that it would cut 5,000 jobs, 3,000 of them in Japan and the rest overseas, from its workforce of 170,000 employees. Fujitsu also merged its large-scale integration (LSI) chip design business with that of Panasonic Corporation, resulting in the establishment of Socionext.
In 2014, after severe losses, Fujitsu also spun off its LSI chip manufacturing division, as Mie Fujitsu Semiconductor, which was bought in 2018 by United Semiconductor Japan Co., Ltd., a wholly owned subsidiary of United Microelectronics Corporation.
In 2015, Fujitsu celebrated 80 years since its establishment, at a time when its IT business embarked upon the Fujitsu 2015 World Tour, which included 15 major cities globally, was visited by over 10,000 IT professionals, and saw Fujitsu present its take on the future of Hyper Connectivity and Human Centric Computing.
In April 2015, GLOVIA International was renamed FUJITSU GLOVIA, Inc.
In November 2015, Fujitsu Limited and VMware announced new areas of collaboration to empower customers with flexible and secure cloud technologies. It also acquired USharesoft which provides enterprise-class application delivery software for automating the build, migration and governance of applications in multi-cloud environments.
In January 2016, Fujitsu Network Communications Inc. announced a new suite of layered products to advance software-defined networking (SDN) for carriers, service providers and cloud builders. Virtuora NC, based on open standards, is described by Fujitsu as "a suite of standards-based, multi-layered, multi-vendor network automation and virtualization products" that "has been hands-on hardened by some of the largest global service providers."
In 2019, Fujitsu started to deliver 5G telecommunications equipment to NTT Docomo, along with NEC.
In March 2020, Fujitsu announced the creation of a subsidiary, later named Fujitsu Japan, that will enable the company to expand its business in the Japanese IT services market.
In June 2020, Fugaku, co-developed with the RIKEN research institute, was declared the most powerful supercomputer in the world. The performance capability of Fugaku is 415.53 PFLOPS, with a theoretical peak of 513.86 PFLOPS, three times faster than the previous champion. Fugaku also ranked first place in categories that measure computational-methods performance for industrial use, artificial intelligence applications, and big data analytics. The supercomputer is located in a facility in Kobe.
In June 2020, Fujitsu developed an artificial intelligence monitor that can recognize complex hand movements, built on its crime surveillance technology. The AI is designed to check whether the subject completes the proper hand-washing procedure based on the guidelines issued by the WHO.
In September 2020, Fujitsu introduced software-defined storage technology that incorporates Qumulo hybrid cloud file storage software to enable enterprises to unify petabytes of unstructured data from disparate locations, across multiple data centers and the cloud.
Operations
Fujitsu Laboratories
Fujitsu Laboratories, Fujitsu's Research and Development division, has approximately 900 employees and a capital of JP¥5 billion. The current CEO is Hirotaka Hara.
In 2012, Fujitsu announced that it had developed new technology for non-3D camera phones. The technology will allow the camera phones to take 3D photos.
Fujitsu Electronics Europe GmbH
Fujitsu Electronics Europe GmbH entered the market as a global distributor on January 1, 2016.
Fujitsu Consulting
Fujitsu Consulting is the consulting and services arm of the Fujitsu group, providing information technology consulting, implementation and management services.
Fujitsu Consulting was founded in 1973 in Montreal, Quebec, Canada, under its original name "DMR" (an acronym of the three founders' names: Pierre Ducros, Serge Meilleur and Alain Roy). During the next decade, the company established a presence throughout Quebec and Canada, before extending its reach to international markets. For nearly thirty years, DMR Consulting grew to become an international consulting firm, changing its name to Fujitsu Consulting in 2002 after being acquired by Fujitsu Ltd.
Fujitsu operates a division of the company in India, resulting from an acquisition of North America-based company, Rapidigm. It has offshore divisions at Noida, Pune, Hyderabad, Chennai and Bangalore with Pune being the head office. Fujitsu Consulting India launched its second $10 million development center at Noida in October 2007, a year after starting operation in the country. Following the expansion plan, Fujitsu Consulting India launched the fourth development center in Bengaluru in Nov 2011.
Fujitsu General
Fujitsu Ltd. has a 42% shareholding in Fujitsu General, which manufactures and markets various air conditioning units and humidity control solutions under the General & Fujitsu brands. In India, the company ended its long-standing joint venture agreement with the Dubai-based ETA group and now operates under a wholly owned subsidiary, Fujitsu General (India) Pvt Ltd, which was earlier known as ETA General.
PFU Limited
PFU Limited, headquartered in Ishikawa, Japan, is a wholly owned subsidiary of Fujitsu Limited. PFU Limited was established in 1960, has approximately 4,600 employees globally and in 2013 turned over 126.4 billion yen (US$1.2 billion). PFU manufactures interactive kiosks, keyboards, network security hardware, embedded computers and imaging products (document scanners), all under the PFU or Fujitsu brand. In addition to hardware, PFU also produces desktop and enterprise document capture software and document management software products. PFU has overseas sales and marketing offices in Germany (PFU Imaging Solutions Europe Limited), Italy (PFU Imaging Solutions Europe Limited), the United Kingdom (PFU Imaging Solutions Europe Limited) and the United States of America (Fujitsu Computer Products of America Inc). PFU Limited is responsible for the design, development, manufacture, sales and support of document scanners sold under the Fujitsu brand. Fujitsu is a market leader in professional document scanners with its best-selling fi-series, ScanSnap and ScanPartner product families, as well as the Paperstream IP, Paperstream Capture, ScanSnap Manager, ScanSnap Home, Cardminder, Magic Desktop and Rack2Filer software products.
Fujitsu Glovia, Inc.
Fujitsu Glovia, a wholly owned subsidiary of Fujitsu Ltd., is a discrete manufacturing enterprise resource planning software vendor based in El Segundo, California, with international operations in the Netherlands, Japan and the United Kingdom. The company offers on-premise and cloud-based ERP manufacturing software under the Glovia G2 brand, and software as a service (SaaS) under the brand Glovia OM. The company was established in 1970 as Xerox Computer Services, where it developed inventory, manufacturing and financial applications. Fujitsu acquired 30 percent of the renamed Glovia International in 1997 and the remaining 70 percent stake in 2000.
Fujitsu Client Computing Limited
Fujitsu Client Computing Limited (FCCL), headquartered in Kawasaki, Kanagawa, the city where the company was founded, is the division of Fujitsu responsible for research, development, design, manufacturing and sales of consumer PC products. Formerly a wholly owned subsidiary, in November 2017, FCCL was spun off into a joint venture with Lenovo and Development Bank of Japan (DBJ). The new company retains the same name, and Fujitsu is still responsible for sales and support of the products; however, Lenovo owns a majority stake at 51%, while Fujitsu retains 44%. The remaining 5% stake is held by DBJ.
Fujitsu Network Communications, Inc.
Fujitsu Network Communications, Inc., headquartered in Richardson, Texas, United States, is a wholly owned subsidiary of Fujitsu Limited. Established in 1996, Fujitsu Network Communications specializes in building, operating, and supporting optical and wireless broadband and telecommunications networks. The company’s customers include telecommunications service providers, internet service providers, cable companies, utilities, and municipalities. Fujitsu Network Communications provides multivendor solutions that integrate equipment from more than one manufacturer, as well as manufacturing its own network equipment in its Richardson, TX manufacturing facility. The Fujitsu Network Communications optical networking portfolio includes the 1FINITY™ and FLASHWAVE® hardware platforms; Virtuora® cloud software solutions; and NETSMART™ network management and design tools. The company also builds networks that comply with various next-generation technologies and initiatives, including the Open ROADM MSA, oRAN Alliance, and Telecom Infra Project.
Products and services
Computing products
Fujitsu's computing product lines include:
Relational Database: Fujitsu Enterprise Postgres
Fujitsu has more than 35 years' experience in database development and is a "major contributor" to open-source Postgres. Fujitsu engineers have also developed an enhanced version called Fujitsu Enterprise Postgres. Its benefits include enterprise support; warranted code; high-availability enhancements; security enhancements (end-to-end transparent data encryption, data masking, auditing); performance enhancements (an in-memory columnar index supporting hybrid transactional/analytical processing (HTAP) workloads); high-speed backup and recovery; high-speed data load; a global metacache for improved memory management; and Oracle compatibility extensions to assist migration from Oracle to Postgres. Fujitsu Enterprise Postgres can be deployed on x86 (Linux, Windows) and IBM Z/IBM LinuxONE; it is also packaged as a Red Hat OpenShift (OCP) container. A minimal client connection sketch is given after this product list.
PRIMERGY
In May 2011, Fujitsu decided to enter the mobile phone market again, with Microsoft announcing plans that Fujitsu would release Windows Phone devices.
ETERNUS
Fujitsu PRIMERGY and ETERNUS are distributed by TriTech Distribution Limited in Hong Kong.
LIFEBOOK, AMILO: Fujitsu's range of notebook computers and tablet PCs.
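As a brief illustration of the relational database entry above: because Fujitsu Enterprise Postgres is based on PostgreSQL, a standard PostgreSQL client library should be able to connect to it in the usual way. The sketch below uses the common psycopg2 client from Python; the host, database name and credentials are hypothetical placeholders, and nothing shown is specific to Fujitsu's added enterprise features.

import psycopg2

# Hypothetical connection parameters for a PostgreSQL-compatible server such as
# Fujitsu Enterprise Postgres.
conn = psycopg2.connect(
    host="fep.example.com",
    port=5432,
    dbname="appdb",
    user="app_user",
    password="secret",
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")   # reports the underlying PostgreSQL version string
    print(cur.fetchone()[0])

conn.close()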
Cloud computing
Fujitsu offers a public cloud service delivered from data centers in Japan, Australia, Singapore, the United States, the United Kingdom and Germany based on its Global Cloud Platform strategy announced in 2010. The platform delivers Infrastructure-as-a-Service (IaaS) – virtual information and communication technology (ICT) infrastructure, such as servers and storage functionality – from Fujitsu's data centers. In Japan, the service was offered as the On-Demand Virtual System Service (OViSS) and was then launched globally as Fujitsu Global Cloud Platform/S5 (FGCP/S5). Since July 2013 the service has been called IaaS Trusted Public S5. Globally, the service is operated from Fujitsu data centers located in Australia, Singapore, the United States, the United Kingdom, Germany and Japan.
Fujitsu has also launched a Windows Azure powered Global Cloud Platform in a partnership with Microsoft. This offering, delivering Platform-as-a-Service (PaaS), was known as FGCP/A5 in Japan but has since been renamed FUJITSU Cloud PaaS A5 for Windows Azure. It is operated from a Fujitsu data center in Japan. It offers a set of application development frameworks, such as Microsoft .NET, Java and PHP, and data storage capabilities consistent with the Windows Azure platform provided by Microsoft. The basic service consists of compute, storage, Microsoft SQL Azure, and Windows Azure AppFabric technologies such as Service Bus and Access Control Service, with options for inter-operating services covering implementation and migration of applications, system building, systems operation, and support.
Fujitsu acquired RunMyProcess in April 2013, a Cloud-based integration Platform-as-a-Service (PaaS) specialized in workflow automation and business application development.
Fujitsu offers local cloud platforms, such as in Australia, that provide the ability to rely on its domestic data centers which keep sensitive financial data under local jurisdiction and compliance standards.
Microprocessors
Fujitsu produces the SPARC-compliant CPU (SPARClite). Its "Venus" 128 GFLOPS SPARC64 VIIIfx processor is included in the K computer, the world's fastest supercomputer as of June 2011 with a rating of over 8 petaflops; in October 2011, K became the first computer to top 10 petaflops. This speed was achieved in testing on October 7 and 8, and the results were then presented at the International Conference for High Performance Computing, Networking, Storage and Analysis (SC11) in November 2011.
The Fujitsu FR, FR-V and ARM-architecture microprocessors are widely used, including in ASICs and application-specific standard products (ASSPs) such as the Milbeaut, which has customer variants named Nikon Expeed. These product lines were acquired by Spansion in 2013.
Advertising
The old slogan "The possibilities are infinite" can be found below the company's logo on major advertisements and ties in with the small logo above the letters J and I of the word Fujitsu. This smaller logo represents the symbol for infinity. As of April 2010, Fujitsu is in the process of rolling out a new slogan focused on entering into partnerships with its customers and retiring the "possibilities are infinite" tagline. The new slogan is "shaping tomorrow with you".
Criticism
Fujitsu operated the Horizon IT system mentioned in the trial between the Post Office and its sub-postmasters. The case, settled in December 2019, found that the IT system was unreliable and that faults in the system caused discrepancies in branch accounts which were not due to the postmasters themselves. Mr. Justice Fraser, the judge hearing the case, noted that Fujitsu had given "wholly unsatisfactory evidence" and there had been a "lack of accuracy on the part of Fujitsu witnesses in their evidence". Following his concerns, Fraser sent a file to the Director of Public Prosecutions.
Environmental record
Fujitsu reports that all its notebook and tablet PCs released globally comply with the latest Energy Star standard.
Greenpeace's Cool IT Leaderboard of April 2013 "examines how IT companies use their considerable influence to change government policies that will drive clean energy deployment" and ranks Fujitsu 4th out of 21 leading manufacturers, on the strength of "developed case study data of its solutions with fairly transparent methodology, and is the leading company in terms of establishing ambitious and detailed goals for future carbon savings from its IT solutions."
Awards
In 2021 Fujitsu Network Communications won the Optica Diversity & Inclusion Advocacy Recognition for "their investment in programs and initiatives celebrating and advancing Black, LGBTQ+ and women employees in pursuit of greater inclusion and equality within their company and the wider community."
See also
List of computer system manufacturers
List of semiconductor fabrication plants
See the World by Train, a daily Japanese TV mini-programme sponsored by Fujitsu since 1987
References
External links
Wiki collection of bibliographic works on Fujitsu
1935 establishments in Japan
Cloud computing providers
Companies listed on the Tokyo Stock Exchange
Consumer electronics brands
Defense companies of Japan
Display technology companies
Electronics companies established in 1935
Electronics companies of Japan
Furukawa Group
Heating, ventilation, and air conditioning companies
Japanese brands
Manufacturing companies based in Tokyo
Mobile phone manufacturers
Multinational companies headquartered in Japan
Point of sale companies
Software companies based in Tokyo
Technology companies of Japan
Telecommunications companies based in Tokyo
Computer enclosure companies
Japanese companies established in 1935 |
58608 | https://en.wikipedia.org/wiki/Trusted%20Computing | Trusted Computing | Trusted Computing (TC), also often referred to as Confidential Computing, is a technology developed and promoted by the Trusted Computing Group. The term is taken from the field of trusted systems and has a specialized meaning. The core idea of trusted computing is to give hardware manufacturers control over what software does and does not run on a system by refusing to run unsigned software. With Trusted Computing, the computer will consistently behave in expected ways, and those behaviors will be enforced by computer hardware and software. Enforcing this behavior is achieved by loading the hardware with a unique encryption key that is inaccessible to the rest of the system and the owner.
TC is controversial as the hardware is not only secured for its owner, but also secured against its owner. Such controversy has led opponents of trusted computing, such as free software activist Richard Stallman, to refer to it instead as treacherous computing, even to the point where some scholarly articles have begun to place scare quotes around "trusted computing".
Trusted Computing proponents such as International Data Corporation, the Enterprise Strategy Group and Endpoint Technologies Associates claim the technology will make computers safer, less prone to viruses and malware, and thus more reliable from an end-user perspective. They also claim that Trusted Computing will allow computers and servers to offer improved computer security over that which is currently available. Opponents often claim this technology will be used primarily to enforce digital rights management policies (imposed restrictions to the owner) and not to increase computer security.
Chip manufacturers Intel and AMD, hardware manufacturers such as HP and Dell, and operating system providers such as Microsoft include Trusted Computing in their products if enabled. The U.S. Army requires that every new PC it purchases comes with a Trusted Platform Module (TPM). As of July 3, 2007, so does virtually the entire United States Department of Defense.
In 2019, the Confidential Computing Consortium (CCC) was established by the Linux Foundation with the mission to "improve security for data in use". The consortium now has over 40 members, including Microsoft, Intel, Baidu, Red Hat, and Meta.
Key concepts
Trusted Computing encompasses six key technology concepts, all of which are required for a fully trusted system, that is, a system compliant with the TCG specifications:
Endorsement key
Secure input and output
Memory curtaining / protected execution
Sealed storage
Remote attestation
Trusted Third Party (TTP)
Endorsement key
The endorsement key is a 2048-bit RSA public and private key pair that is created randomly on the chip at manufacture time and cannot be changed. The private key never leaves the chip, while the public key is used for attestation and for encryption of sensitive data sent to the chip, as occurs during the TPM_TakeOwnership command.
This key is used to allow the execution of secure transactions: every Trusted Platform Module (TPM) is required to be able to sign a random number (in order to allow the owner to show that he has a genuine trusted computer), using a particular protocol created by the Trusted Computing Group (the direct anonymous attestation protocol) in order to ensure its compliance with the TCG standard and to prove its identity; this makes it impossible for a software TPM emulator with an untrusted endorsement key (for example, a self-generated one) to start a secure transaction with a trusted entity. The TPM should be designed to make the extraction of this key by hardware analysis hard, but tamper resistance is not a strong requirement.
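The challenge-response idea described above can be illustrated with a heavily simplified Python sketch using the cryptography library: a device-held private key signs a verifier-chosen nonce, and the verifier checks the signature against the known public key. This is not the actual TPM or direct anonymous attestation protocol, only the underlying pattern, and all names below are illustrative.

import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the endorsement key pair created at manufacture time; the
# private half never leaves the (simulated) device.
device_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_public_key = device_private_key.public_key()

# The verifier issues a random challenge.
nonce = os.urandom(32)

# The device proves possession of the private key by signing the challenge.
signature = device_private_key.sign(
    nonce,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The verifier checks the signature with the known public key; verify() raises
# an exception if the signature is invalid.
device_public_key.verify(
    signature,
    nonce,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("challenge-response verified")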
Memory curtaining
Memory curtaining extends common memory protection techniques to provide full isolation of sensitive areas of memory—for example, locations containing cryptographic keys. Even the operating system does not have full access to curtained memory. The exact implementation details are vendor specific.
Sealed storage
Sealed storage protects private information by binding it to platform configuration information, including the software and hardware being used. This means the data can be released only to a particular combination of software and hardware. Sealed storage can be used for DRM enforcement. For example, users who keep a song on their computer that has not been licensed to be listened to will not be able to play it. Currently, a user can locate the song, listen to it, send it to someone else, play it in the software of their choice, or back it up (and in some cases, use circumvention software to decrypt it). Alternatively, the user may use software to modify the operating system's DRM routines to have it leak the song data once, say, a temporary license was acquired. Using sealed storage, the song is securely encrypted using a key bound to the trusted platform module so that only the unmodified and untampered music player on his or her computer can play it. In this DRM architecture, this might also prevent people from listening to the song after buying a new computer, or upgrading parts of their current one, except after explicit permission of the vendor of the song.
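A minimal conceptual sketch of sealing follows, assuming a key derived from hashed platform measurements and AES-GCM encryption: data sealed under one platform configuration cannot be unsealed once the configuration changes. Real TPM sealing binds data to PCR values inside the chip, so the measurement names and key derivation here are illustrative stand-ins only.

import hashlib
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def platform_key(measurements) -> bytes:
    """Derive a 32-byte key from hashes of the platform's software components."""
    digest = hashlib.sha256()
    for m in measurements:               # e.g. firmware, bootloader, OS measurements
        digest.update(m)
    return digest.digest()


measurements = [b"firmware-v1", b"bootloader-v7", b"os-build-1234"]  # stand-in values
key = platform_key(measurements)

# Seal: encrypt the secret under the configuration-derived key.
nonce = os.urandom(12)
sealed = AESGCM(key).encrypt(nonce, b"secret song key", None)

# Unsealing succeeds only while the same measurements (hence the same key) recur.
print(AESGCM(platform_key(measurements)).decrypt(nonce, sealed, None))

# A changed configuration yields a different key, and unsealing is refused.
tampered = [b"firmware-v1", b"modified-bootloader", b"os-build-1234"]
try:
    AESGCM(platform_key(tampered)).decrypt(nonce, sealed, None)
except InvalidTag:
    print("unsealing refused: platform configuration changed")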
Remote attestation
Remote attestation allows changes to the user's computer to be detected by authorized parties. For example, software companies can identify unauthorized changes to software, including users tampering with their software to circumvent commercial digital rights restrictions. It works by having the hardware generate a certificate stating what software is currently running. The computer can then present this certificate to a remote party to show that unaltered software is currently executing. Numerous remote attestation schemes have been proposed for various computer architectures, including Intel, RISC-V, and ARM.
Remote attestation is usually combined with public-key encryption so that the information sent can only be read by the programs that requested the attestation, and not by an eavesdropper.
To take the song example again, the user's music player software could send the song to other machines, but only if they could attest that they were running an authorized copy of the music player software. Combined with the other technologies, this provides a more restricted path for the music: encrypted I/O prevents the user from recording it as it is transmitted to the audio subsystem, memory locking prevents it from being dumped to regular disk files as it is being worked on, sealed storage curtails unauthorized access to it when saved to the hard drive, and remote attestation prevents unauthorized software from accessing the song even when it is used on other computers. To preserve the privacy of attestation responders, Direct Anonymous Attestation has been proposed as a solution, which uses a group signature scheme to prevent revealing the identity of individual signers.
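The verification logic can be sketched in a simplified form: the device signs a hash of the software it is running together with a fresh nonce, and the verifier accepts only if the signature checks out and the hash matches a known-good value. The key handling, measurement and message format below are illustrative assumptions, not the actual TPM quote structure.

import hashlib
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in attestation key held by the device; the verifier knows the public half.
attestation_key = ec.generate_private_key(ec.SECP256R1())
attestation_pub = attestation_key.public_key()

# The measurement the verifier expects from an unmodified program.
EXPECTED_MEASUREMENT = hashlib.sha256(b"authorized-player-v1").digest()


def device_quote(software_image: bytes, nonce: bytes):
    """Device side: measure the running software and sign measurement plus nonce."""
    measurement = hashlib.sha256(software_image).digest()
    signature = attestation_key.sign(measurement + nonce, ec.ECDSA(hashes.SHA256()))
    return measurement, signature


def verifier_check(measurement: bytes, signature: bytes, nonce: bytes) -> bool:
    """Verifier side: check the signature is fresh and the measurement is expected."""
    try:
        attestation_pub.verify(signature, measurement + nonce, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    return measurement == EXPECTED_MEASUREMENT


nonce = os.urandom(16)
print(verifier_check(*device_quote(b"authorized-player-v1", nonce), nonce))  # True
print(verifier_check(*device_quote(b"modified-player", nonce), nonce))       # False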
Proof of space (PoS) has been proposed for use in malware detection, by determining whether the L1 cache of a processor is empty (e.g., has enough space to evaluate the PoSpace routine without cache misses) or contains a routine that resisted being evicted.
Trusted third party
One of the main obstacles that had to be overcome by the developers of the TCG technology was how to maintain anonymity while still providing a “trusted platform”. The main object of obtaining “trusted mode” is that the other party (Bob), with whom a computer (Alice) may be communicating, can trust that Alice is running un-tampered hardware and software. This will assure Bob that Alice will not be able to use malicious software to compromise sensitive information on the computer. Unfortunately, in order to do this, Alice has to inform Bob that she is using registered and “safe” software and hardware, thereby potentially uniquely identifying herself to Bob.
This might not be a problem where one wishes to be identified by the other party, e.g., during banking transactions over the Internet. But in many other types of communicating activities people enjoy the anonymity that the computer provides. The TCG acknowledges this, and has allegedly developed a process of attaining such anonymity while at the same time assuring the other party that he or she is communicating with a "trusted" party. This was done by developing a "trusted third party". This entity works as an intermediary between a user and his own computer and between a user and other users. The focus here is on the latter process, referred to as remote attestation.
When a user requires an AIK (Attestation Identity Key) the user wants its key to be certified by a CA (Certification Authority). The user through a TPM (Trusted Platform Module) sends three credentials: a public key credential, a platform credential, and a conformance credential. This set of certificates and cryptographic keys will in short be referred to as "EK". The EK can be split into two main parts, the private part "EKpr" and the public part "EKpub". The EKpr never leaves the TPM.
Disclosure of the EKpub is however necessary (version 1.1). The EKpub will uniquely identify the endorser of the platform, model, what kind of software is currently being used on the platform, details of the TPM, and that the platform (PC) complies with the TCG specifications. If this information were communicated directly to another party as part of the process of getting trusted status, it would at the same time be impossible to obtain an anonymous identity. Therefore, this information is sent to the privacy certification authority (trusted third party). When the CA (privacy certification authority) receives the EKpub sent by the TPM, it verifies the information. If the information can be verified, the CA creates a certified secondary key pair, the AIK, and sends this credential back to the requestor. This is intended to provide the user with anonymity. When the user has this certified AIK, he or she can use it to communicate with other trusted platforms.
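A heavily simplified sketch of this version 1.1 privacy CA flow follows, with credentials reduced to signed key fingerprints: the platform discloses its EK public key only to the privacy CA, which certifies a fresh AIK that verifiers then trust without ever seeing the EK. The key types and the way the CA recognizes endorsement keys are assumptions made for illustration, not the TCG credential formats.

import hashlib

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec


def fingerprint(public_key) -> bytes:
    """Hash of a public key's DER encoding, standing in for a full credential."""
    raw = public_key.public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo
    )
    return hashlib.sha256(raw).digest()


# Platform: EK fixed at manufacture, AIK generated for this attestation identity.
ek_private = ec.generate_private_key(ec.SECP256R1())
aik_private = ec.generate_private_key(ec.SECP256R1())

# Privacy CA: trusts the EKs it recognizes (e.g. from manufacturer endorsements).
ca_private = ec.generate_private_key(ec.SECP256R1())
known_good_eks = {fingerprint(ek_private.public_key())}


def privacy_ca_certify(ek_public, aik_public) -> bytes:
    """Issue an AIK 'certificate' only for platforms presenting a recognized EK."""
    if fingerprint(ek_public) not in known_good_eks:
        raise ValueError("unknown endorsement key")
    return ca_private.sign(fingerprint(aik_public), ec.ECDSA(hashes.SHA256()))


aik_certificate = privacy_ca_certify(ek_private.public_key(), aik_private.public_key())

# Verifier: checks the CA's signature over the AIK fingerprint; it never sees the EK.
ca_private.public_key().verify(
    aik_certificate, fingerprint(aik_private.public_key()), ec.ECDSA(hashes.SHA256())
)
print("AIK certified by the privacy CA; verifier trusts the AIK without the EK")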
In version 1.2, the TCG developed a new method of obtaining a certified AIK, called Direct Anonymous Attestation (DAA). This method does not require the user to disclose the EKpub to the TTP. The unique new feature of DAA is that it can convince the remote entity that a particular TPM (trusted platform module) is a valid TPM without disclosing the EKpub or any other unique identifier. Before the TPM can send a certification request for an AIK to the remote entity, it has to generate a set of DAA credentials. This can only be done by interacting with an issuer. The DAA credentials are created from a TPM-unique secret that remains within the TPM; this secret is similar but not analogous to the EK. When the TPM has obtained a set of DAA credentials, it can send these to the Verifier. When the Verifier receives the DAA credentials, it will verify them and send a certified AIK back to the user. The user will then be able to communicate with other trusted parties using the certified AIK. The Verifier may or may not be a trusted third party (TTP); it can determine whether the DAA credentials are valid, but the credentials do not contain any unique information that discloses the TPM platform.
An example would be a user who wants trusted status and sends a request to the Issuer. The Issuer could be the manufacturer of the user's platform, e.g. Compaq. Compaq would check whether the TPM it has produced is a valid one and, if so, issue DAA credentials. In the next step, the DAA credentials are sent by the user to the Verifier. As mentioned, this might be a standard TTP, but could also be a different entity. If the Verifier accepts the DAA credentials supplied, it will produce a certified AIK, which the user will then use to communicate with other trusted platforms. In summary, the new version introduces a separate entity that assists in the anonymous attestation process: by introducing an Issuer which supplies the DAA credentials, the user's anonymity towards the Verifier/TTP can be sufficiently protected. The Issuer will most commonly be the platform manufacturer. Without such credentials, it will probably be difficult for a private customer, small business or organization to convince others that they have a genuine trusted platform.
Known applications
The Microsoft products Windows Vista, Windows 7, Windows 8 and Windows RT make use of a Trusted Platform Module to facilitate BitLocker Drive Encryption. Other known applications with runtime encryption and the use of secure enclaves include the Signal messenger and the e-prescription service ("E-Rezept") by the German government.
Possible applications
Digital rights management
Trusted Computing would allow companies to create a digital rights management (DRM) system which would be very hard to circumvent, though not impossible. An example is downloading a music file. Sealed storage could be used to prevent the user from opening the file with an unauthorized player or computer. Remote attestation could be used to authorize play only by music players that enforce the record company's rules. The music would be played from curtained memory, which would prevent the user from making an unrestricted copy of the file while it is playing, and secure I/O would prevent capturing what is being sent to the sound system. Circumventing such a system would require either manipulation of the computer's hardware, capturing the analogue (and thus degraded) signal using a recording device or a microphone, or breaking the security of the system.
New business models for use of software (services) over Internet may be boosted by the technology. By strengthening the DRM system, one could base a business model on renting programs for a specific time periods or "pay as you go" models. For instance, one could download a music file which could only be played a certain number of times before it becomes unusable, or the music file could be used only within a certain time period.
Preventing cheating in online games
Trusted Computing could be used to combat cheating in online games. Some players modify their game copy in order to gain unfair advantages in the game; remote attestation, secure I/O and memory curtaining could be used to determine that all players connected to a server were running an unmodified copy of the software.
Verification of remote computation for grid computing
Trusted Computing could be used to guarantee that participants in a grid computing system are returning the results of the computations they claim to be running instead of forging them. This would allow large-scale simulations (say, a climate simulation) to be run without expensive redundant computations to guarantee that malicious hosts are not undermining the results to achieve the conclusion they want.
Criticism
Trusted Computing opponents such as the Electronic Frontier Foundation and Free Software Foundation claim trust in the underlying companies is not deserved and that the technology puts too much power and control into the hands of those who design systems and software. They also believe that it may cause consumers to lose anonymity in their online interactions, as well as mandating technologies Trusted Computing opponents say are unnecessary. They suggest Trusted Computing as a possible enabler for future versions of mandatory access control, copy protection, and DRM.
Some security experts, such as Alan Cox and Bruce Schneier, have spoken out against Trusted Computing, believing it will provide computer manufacturers and software authors with increased control to impose restrictions on what users are able to do with their computers. There are concerns that Trusted Computing would have an anti-competitive effect on the IT market.
There is concern amongst critics that it will not always be possible to examine the hardware components on which Trusted Computing relies, the Trusted Platform Module, which is the ultimate hardware system where the core 'root' of trust in the platform has to reside. If not implemented correctly, it presents a security risk to overall platform integrity and protected data. The specifications, as published by the Trusted Computing Group, are open and are available for anyone to review. However, the final implementations by commercial vendors will not necessarily be subjected to the same review process. In addition, the world of cryptography can often move quickly, and hardware implementations of algorithms might create an inadvertent obsolescence. Trusting networked computers to controlling authorities rather than to individuals may create digital imprimaturs.
Cryptographer Ross Anderson, University of Cambridge, has great concerns that:
TC can support remote censorship [...] In general, digital objects created using TC systems remain under the control of their creators, rather than under the control of the person who owns the machine on which they happen to be stored [...] So someone who writes a paper that a court decides is defamatory can be compelled to censor it — and the software company that wrote the word processor could be ordered to do the deletion if she refuses. Given such possibilities, we can expect TC to be used to suppress everything from pornography to writings that criticize political leaders.
He goes on to state that:
[...] software suppliers can make it much harder for you to switch to their competitors' products. At a simple level, Word could encrypt all your documents using keys that only Microsoft products have access to; this would mean that you could only read them using Microsoft products, not with any competing word processor. [...]
The [...] most important benefit for Microsoft is that TC will dramatically increase the costs of switching away from Microsoft products (such as Office) to rival products (such as OpenOffice). For example, a law firm that wants to change from Office to OpenOffice right now merely has to install the software, train the staff and convert their existing files. In five years' time, once they have received TC-protected documents from perhaps a thousand different clients, they would have to get permission (in the form of signed digital certificates) from each of these clients in order to migrate their files to a new platform. The law firm won't in practice want to do this, so they will be much more tightly locked in, which will enable Microsoft to hike its prices.
Anderson summarizes the case by saying:
The fundamental issue is that whoever controls the TC infrastructure will acquire a huge amount of power. Having this single point of control is like making everyone use the same bank, or the same accountant, or the same lawyer. There are many ways in which this power could be abused.
Digital rights management
One of the early motivations behind trusted computing was a desire by media and software corporations for stricter DRM technology to prevent users from freely sharing and using potentially copyrighted or private files without explicit permission.
An example could be downloading a music file from a band: the band's record company could come up with rules for how the band's music can be used. For example, they might want the user to play the file only three times a day without paying additional money. Also, they could use remote attestation to only send their music to a music player that enforces their rules: sealed storage would prevent the user from opening the file with another player that did not enforce the restrictions. Memory curtaining would prevent the user from making an unrestricted copy of the file while it is playing, and secure output would prevent capturing what is sent to the sound system.
Users unable to modify software
A user who wanted to switch to a competing program might find that it would be impossible for that new program to read old data, as the information would be "locked in" to the old program. It could also make it impossible for the user to read or modify their data except as specifically permitted by the software.
Remote attestation could cause other problems. Currently, web sites can be visited using a number of web browsers, though certain websites may be formatted such that some browsers cannot decipher their code. Some browsers have found a way to get around that problem by emulating other browsers. With remote attestation, a website could check the internet browser being used and refuse to display on any browser other than the specified one (like Internet Explorer), so even emulating the browser would not work.
Users unable to exercise legal rights
The law in many countries allows users certain rights over data whose copyright they do not own (including text, images, and other media), often under headings such as fair use or public interest. Depending on jurisdiction, these may cover issues such as whistleblowing, production of evidence in court, quoting or other small-scale usage, backups of owned media, and making a copy of owned material for personal use on other owned devices or systems. The steps implicit in trusted computing have the practical effect of preventing users exercising these legal rights.
Users vulnerable to vendor withdrawal of service
A service that requires external validation or permission - such as a music file or game that requires connection with the vendor to confirm permission to play or use - is vulnerable to that service being withdrawn or no longer updated. A number of incidents have already occurred where users, having purchased music or video media, have found their ability to watch or listen to it suddenly stop due to vendor policy or cessation of service, or server inaccessibility, at times with no compensation. Alternatively, in some cases the vendor refuses to provide services in the future, which leaves purchased material usable only on the present (and increasingly obsolete) hardware for as long as it lasts, but not on any hardware that may be purchased in the future.
Users unable to override
Some opponents of Trusted Computing advocate "owner override": allowing an owner who is confirmed to be physically present to allow the computer to bypass restrictions and use the secure I/O path. Such an override would allow remote attestation to a user's specification, e.g., to create certificates that say Internet Explorer is running, even if a different browser is used. Instead of preventing software change, remote attestation would indicate when the software has been changed without owner's permission.
Trusted Computing Group members have refused to implement owner override. Proponents of trusted computing believe that owner override defeats the trust in other computers since remote attestation can be forged by the owner. Owner override offers the security and enforcement benefits to a machine owner, but does not allow him to trust other computers, because their owners could waive rules or restrictions on their own computers. Under this scenario, once data is sent to someone else's computer, whether it be a diary, a DRM music file, or a joint project, that other person controls what security, if any, their computer will enforce on their copy of those data. This has the potential to undermine the applications of trusted computing to enforce DRM, control cheating in online games and attest to remote computations for grid computing.
Loss of anonymity
Because a Trusted Computing equipped computer is able to uniquely attest to its own identity, it will be possible for vendors and others who possess the ability to use the attestation feature to zero in on the identity of the user of TC-enabled software with a high degree of certainty.
Such a capability is contingent on the reasonable chance that the user at some time provides user-identifying information, whether voluntarily, indirectly, or simply through inference of many seemingly benign pieces of data. (e.g. search records, as shown through simple study of the AOL search records leak). One common way that information can be obtained and linked is when a user registers a computer just after purchase. Another common way is when a user provides identifying information to the website of an affiliate of the vendor.
While proponents of TC point out that online purchases and credit transactions could potentially be more secure as a result of the remote attestation capability, this may cause the computer user to lose expectations of anonymity when using the Internet.
Critics point out that this could have a chilling effect on political free speech, the ability of journalists to use anonymous sources, whistle blowing, political blogging and other areas where the public needs protection from retaliation through anonymity.
The TPM specification offers features and suggested implementations that are meant to address the anonymity requirement. By using a third-party Privacy Certification Authority (PCA), the information that identifies the computer could be held by a trusted third party. Additionally, the use of direct anonymous attestation (DAA), introduced in TPM v1.2, allows a client to perform attestation while not revealing any personally identifiable or machine information.
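The privacy effect of the PCA arrangement can be illustrated with a short sketch. The following Python fragment is a toy simulation only, not a real TPM or TCG API; hashes stand in for the asymmetric signatures a real TPM would use. It shows the point made above: the identifying endorsement key is revealed only to the Privacy CA, while the verifier sees just a pseudonymous attestation key and the PCA's certificate for it.

```python
# Toy simulation of the Privacy CA (PCA) flow described above.
# Hashes stand in for real TPM signatures; all names are illustrative only.
import hashlib
import os

def h(*parts: bytes) -> str:
    return hashlib.sha256(b"|".join(parts)).hexdigest()

# The TPM holds a unique Endorsement Key (EK) fixed at manufacture.
ek = os.urandom(32)

# It creates a fresh, pseudonymous Attestation Identity Key (AIK).
aik = os.urandom(32)

# Only the Privacy CA ever sees the EK; it returns a certificate binding
# "some genuine TPM" to the AIK without naming the machine.
pca_key = os.urandom(32)
aik_certificate = h(pca_key, ek, aik)   # stands in for the PCA's signature

# The verifier receives a quote over the platform's measurements (PCRs),
# made with the AIK, plus the certificate -- but never the EK itself.
pcr_values = b"measured-boot-chain"
nonce = os.urandom(16)                  # verifier's freshness challenge
quote = h(aik, pcr_values, nonce)
print({"quote": quote, "aik_certificate": aik_certificate})
```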
The kind of data that must be supplied to the TTP in order to obtain trusted status is at present not entirely clear, but the TCG itself admits that "attestation is an important TPM function with significant privacy implications". It is, however, clear that both static and dynamic information about the user's computer may be supplied (EKpubkey) to the TTP under v1.1b; it is not clear what data will be supplied to the "verifier" under v1.2. The static information uniquely identifies the endorser of the platform, the model, details of the TPM, and the fact that the platform (PC) complies with the TCG specifications. The dynamic information is described as the software running on the computer. If a program such as Windows is registered in the user's name, this in turn uniquely identifies the user. Another privacy-infringing capability might also be introduced with this new technology: how often you use your programs could become information provided to the TTP. In an exceptional yet practical situation, where a user purchases a pornographic movie on the Internet, the purchaser today must accept the fact that he has to provide credit card details to the provider, thereby possibly risking identification. With the new technology a purchaser might also risk someone finding out that he (or she) has watched this pornographic movie 1000 times. This adds a new dimension to the possible privacy infringement. The extent of the data that will be supplied to the TTP/verifiers is at present not known exactly; only when the technology is implemented and used will we be able to assess the exact nature and volume of the data that is transmitted.
TCG specification interoperability problems
Trusted Computing requires that all software and hardware vendors follow the technical specifications released by the Trusted Computing Group in order to allow interoperability between different trusted software stacks. However, since at least mid-2006, there have been interoperability problems between the TrouSerS trusted software stack (released as open source software by IBM) and Hewlett-Packard's stack. Another problem is that the technical specifications are still changing, so it is unclear which is the standard implementation of the trusted stack.
Shutting out of competing products
People have voiced concerns that trusted computing could be used to prevent or discourage users from running software created by companies outside of a small industry group. Microsoft has received a great deal of bad press surrounding its Palladium software architecture, evoking comments such as "Few pieces of vaporware have evoked a higher level of fear and uncertainty than Microsoft's Palladium", "Palladium is a plot to take over cyberspace", and "Palladium will keep us from running any software not personally approved by Bill Gates". The concerns about trusted computing being used to shut out competition exist within a broader concern about vendors using the bundling of products to obscure prices and to engage in anti-competitive practices. Trusted Computing is seen as harmful or problematic to independent and open source software developers.
Trust
In widely used public-key cryptography, creation of keys can be done on the local computer and the creator has complete control over who has access to them, and consequently over their own security policies. In some proposed encryption-decryption chips, a private/public key pair is permanently embedded into the hardware when it is manufactured, and hardware manufacturers would have the opportunity to record the key without leaving evidence of doing so. With this key it would be possible to access data encrypted with it, and to authenticate as that machine. It would be trivial for a manufacturer to give a copy of this key to a government or to software manufacturers, as the platform must go through steps so that it works with authenticated software.
Therefore, to trust anything that is authenticated by or encrypted by a TPM or a Trusted computer, an end user has to trust the company that made the chip, the company that designed the chip, the companies allowed to make software for the chip, and the ability and interest of those companies not to compromise the whole process. One security breach breaking that chain of trust happened to SIM card manufacturer Gemalto, which in 2010 was infiltrated by US and British spies, resulting in compromised security of cellphone calls.
It is also critical that one be able to trust that the hardware manufacturers and software developers properly implement trusted computing standards. Incorrect implementation could be hidden from users, and thus could undermine the integrity of the whole system without users being aware of the flaw.
Hardware and software support
Since 2004, most major manufacturers have shipped systems that have included Trusted Platform Modules, with associated BIOS support. In accordance with the TCG specifications, the user must enable the Trusted Platform Module before it can be used.
The Linux kernel has included trusted computing support since version 2.6.13, and there are several projects to implement trusted computing for Linux. In January 2005, members of Gentoo Linux's "crypto herd" announced their intention of providing support for TC—in particular support for the Trusted Platform Module. There is also a TCG-compliant software stack for Linux named TrouSerS, released under an open source license.
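As a small, hedged illustration of what kernel support looks like in practice, the sketch below is not part of TrouSerS or any of the stacks mentioned; it simply checks for the device nodes and sysfs entries the Linux TPM driver creates when a module is present and enabled.

```python
# Minimal sketch: report whether the Linux kernel has exposed a TPM.
# The paths checked are those created by the kernel's TPM driver; which
# ones exist depends on kernel version and on the TPM being enabled.
from pathlib import Path

def tpm_status() -> str:
    if Path("/dev/tpmrm0").exists():   # TPM 2.0 in-kernel resource manager
        return "TPM present (TPM 2.0 resource manager available)"
    if Path("/dev/tpm0").exists():     # raw TPM character device
        return "TPM present (raw device node only)"
    sysfs = Path("/sys/class/tpm")
    if sysfs.is_dir() and any(sysfs.iterdir()):
        return "TPM driver loaded, but no usable device node"
    return "no TPM exposed by the kernel"

if __name__ == "__main__":
    print(tpm_status())
```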
Some limited form of trusted computing can be implemented on current versions of Microsoft Windows with third-party software.
With Intel Software Guard Extensions (SGX) and AMD Secure Encrypted Virtualization (SEV) processors, there is hardware available for runtime memory encryption and remote attestation features.
Major cloud providers such as Microsoft Azure, AWS and Google Cloud Platform have virtual machines with trusted computing features available.
There are several open-source projects that facilitate the use of confidential computing technology. These include EGo, EdgelessDB and MarbleRun from Edgeless Systems, as well as Enarx, which originates from security research at Red Hat.
The Intel Classmate PC (a competitor to the One Laptop Per Child) includes a Trusted Platform Module.
PrivateCore vCage software can be used to attest x86 servers with TPM chips.
Mobile T6 secure operating system simulates the TPM functionality in mobile devices using the ARM TrustZone technology.
Samsung smartphones come equipped with Samsung Knox, which depends on features such as Secure Boot, TIMA, MDM, TrustZone and SELinux.
See also
Glossary of legal terms in technology
Hardware restrictions
Next-Generation Secure Computing Base (formerly known as Palladium)
Trusted Computing Group
Trusted Network Connect
Trusted Platform Module
References
External links
Cryptography
Copyright law
Microsoft Windows security technology |
58623 | https://en.wikipedia.org/wiki/The%20Culture | The Culture | The Culture is a fictional interstellar post-scarcity civilisation or society created by the Scottish writer Iain M. Banks and features in a number of his space opera novels and works of short fiction, collectively called the Culture series.
In the series, the Culture is composed primarily of sentient beings of the humanoid alien variety, artificially intelligent sentient machines, and a small number of other sentient "alien" life forms. Machine intelligences range from human-equivalent drones to hyper-intelligent Minds. Artificial intelligences with capabilities measured as a fraction of human intelligence also perform a variety of tasks, e.g. controlling spacesuits. Without scarcity, the Culture has no need for money; instead, Minds voluntarily indulge humanoid and drone citizens' pleasures, leading to a largely hedonistic society. Many of the series' protagonists are humanoids who choose to work for the Culture's elite diplomatic or espionage organisations, and interact with other civilisations whose citizens hold wildly different ideologies, morals, and technologies.
The Culture has a grasp of technology that is advanced relative to most other civilisations that share the galaxy. Most of the Culture's citizens do not live on planets but in artificial habitats such as orbitals and ships, the largest of which are home to billions of individuals. The Culture's citizens have been genetically enhanced to live for centuries and have modified mental control over their physiology, including the ability to introduce a variety of psychoactive drugs into their systems, change biological sex, or switch off pain at will. Culture technology can transform individuals into vastly different body forms, although the Culture standard form remains fairly close to human.
The Culture holds peace and individual freedom as core values, and a central theme of the series is the ethical struggle it faces when interacting with other societies, some of which brutalise their own members, pose threats to other civilisations, or threaten the Culture itself. It tends to make major decisions based on the consensus formed by its Minds and, if appropriate, its citizens. In one instance, a direct democratic vote of trillions (the entire population) decided that the Culture would go to war with a rival civilisation. Those who objected to the Culture's subsequent militarisation broke off from the meta-civilisation, forming their own separate civilisation. A hallmark of the Culture is its ambiguity: in contrast to the many interstellar societies and empires which share its fictional universe, the Culture is difficult to define, geographically or sociologically, and "fades out at the edges".
Overview
The Culture is characterized as a post-scarcity society, having overcome most physical constraints on life and being an egalitarian, stable society without the use of any form of force or compulsion, except where necessary to protect others. That said, some citizens, and especially crafty Minds, tend to enjoy manipulating others, in particular by controlling the course of alien societies through the group known as Contact.
Minds, extremely powerful artificial intelligences, have an important role. They administer this abundance for the benefit of all. As one commentator has said:
The novels of the Culture cycle, therefore, mostly deal with people at the fringes of the Culture: diplomats, spies, or mercenaries; those who interact with other civilisations, and who do the Culture's dirty work in moving those societies closer to the Culture ideal, sometimes by force.
Fictional history
In this fictional universe, the Culture exists concurrently with human society on Earth. The time frame for the published Culture stories is from 1267 to roughly 2970, with Earth being contacted around 2100, though the Culture had covertly visited the planet in the 1970s in The State of the Art.
The Culture itself is described as having been created when several humanoid species and machine sentiences reached a certain social level, and took not only their physical, but also their civilisational evolution into their own hands. In The Player of Games, the Culture is described as having existed as a space-faring society for eleven thousand years. In The Hydrogen Sonata, one of these founding civilisations was named as the Buhdren Federality.
Society and culture
Economy
The Culture is a symbiotic society of artificial intelligences (AIs) (Minds and drones), humanoids and other alien species who all share equal status. All essential work is performed (as far as possible) by non-sentient devices, freeing sentients to do only things that they enjoy (administrative work requiring sentience is undertaken by the AIs using a bare fraction of their mental power, or by people who take on the work out of free choice). As such, the Culture is a post-scarcity society, where technological advances ensure that no one lacks any material goods or services. Energy is farmed from a fictitious "energy grid", and matter to build orbitals is collected mostly from asteroids. As a consequence, the Culture has no need of economic constructs such as money (as is apparent when it deals with civilisations in which money is still important). The Culture rejects all forms of economics based on anything other than voluntary activity. "Money implies poverty" is a common saying in the Culture.
Language
Marain is the Culture's shared constructed language. The Culture believes the Sapir–Whorf hypothesis that language influences thought, and Marain was designed by early Minds to exploit this effect, while also "appealing to poets, pedants, engineers and programmers". Designed to be represented either in binary or symbol-written form, Marain is also regarded as an aesthetically pleasing language by the Culture. The symbols of the Marain alphabet can be displayed in three-by-three grids of binary (yes/no, black/white) dots and thus correspond to nine-bit wide binary numbers.
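Because each glyph is specified above as a nine-bit value laid out on a three-by-three grid of binary dots, the encoding is easy to demonstrate. The sketch below is purely illustrative: the novels give the grid structure, but not which glyph corresponds to which numeric value.

```python
# Render a nine-bit value as the three-by-three dot grid described above.
# Which glyph maps to which value is an assumption made for illustration;
# the novels only specify the 3x3 binary structure.

def marain_grid(value: int) -> str:
    if not 0 <= value < 2 ** 9:
        raise ValueError("a Marain glyph encodes exactly nine bits")
    bits = format(value, "09b")                      # most significant bit first
    rows = [bits[i:i + 3] for i in range(0, 9, 3)]   # three rows of three dots
    return "\n".join(row.replace("1", "#").replace("0", ".") for row in rows)

print(marain_grid(0b101010101))
# #.#
# .#.
# #.#
```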
Related comments are made by the narrator in The Player of Games regarding gender-specific pronouns, which Marain speakers do not use in typical conversation unless specifying one's gender is necessary, and by general reflection on the fact that Marain places much less structural emphasis on (or even lacks) concepts like possession and ownership, dominance and submission, and especially aggression. Many of these concepts would in fact be somewhat theoretical to the average Culture citizen. Indeed, the presence of these concepts in other civilisations signify the brutality and hierarchy associated with forms of empire that the Culture strives to avoid.
Marain itself is also open to encryption and dialect-specific implementations for different parts of the Culture. M1 is basic Nonary Marain, the three-by-three grid. All Culture citizens can communicate in this variant. Other variants include M8 through M16, which are encrypted by various degrees, and are typically used by the Contact Section. Higher level encryptions exist, the highest of these being M32. M32 and lower level encrypted signals are the province of Special Circumstances (SC). Use of M32 is reserved for extremely secret and reserved information and communication within Special Circumstances. That said, M32 has an air of notoriety in the Culture, and in the thoughts of most may best be articulated as "the Unbreakable, Inviolable, Holy of Holies Special Circumstances M32" as described by prospective SC agent Ulver Seich. Ships and Minds also have a slightly distasteful view of SC procedure associated with M32, one Ship Mind going so far as to object to the standard SC attitude of "Full scale, stark raving M32 don't-talk-about-this-or-we'll-pull-your-plugs-out-baby paranoia" on the use of the encryption.
Laws
There are no laws as such in the Culture. Social norms are enforced by convention (personal reputation, "good manners", and by, as described in The Player of Games, possible ostracism and involuntary supervision for more serious crimes). Minds generally refrain from using their all-seeing capabilities to influence people's reputations, though they are not necessarily themselves above judging people based on such observations, as described in Excession. Minds also judge each other, with one of the more relevant criteria being the quality of their treatment of sentients in their care. Hub Minds for example are generally nominated from well-regarded GSV (the largest class of ships) Minds, and then upgraded to care for the billions living on the artificial habitats.
The only serious prohibitions that seem to exist are against harming sentient beings, or forcing them into undertaking any act (another concept that seems unnatural to and is, in fact, almost unheard of by almost all Culture citizens). As mentioned in The Player of Games, the Culture does have the occasional "crime of passion" (as described by an Azadian) and the punishment was to be "slap-droned", or to have a drone assigned to follow the offender and "make sure [they] don't do it again".
While the enforcement in theory could lead to a Big Brother-style surveillance society, in practice social convention among the Minds prohibits them from watching, or interfering in, citizens' lives unless requested, or unless they perceive severe risk. The practice of reading a sentient's mind without permission (something the Culture is technologically easily capable of) is also strictly taboo. The whole plot of Look to Windward relies on a Hub Mind not reading an agent's mind (with certain precautions in case this rule gets violated). Minds that do so anyway are considered deviant and shunned by other Minds (see GCU Grey Area). At one point it is said that if the Culture actually had written laws, the sanctity of one's own thoughts against the intrusion of others would be the first on the books.
This gives some measure of privacy and protection; though the very nature of Culture society would, strictly speaking, make keeping secrets irrelevant: most of them would be considered neither shameful nor criminal. It does allow the Minds in particular to scheme amongst themselves in a very efficient manner, and occasionally withhold information.
Symbols
The Culture has no flag, symbol or logo. According to Consider Phlebas, people can recognize items made by the Culture implicitly, by the way they are simple, efficient and aesthetic. The main outright symbol of the Culture, the one by which it is most explicitly and proudly recognized, is not a visual symbol, but its language, Marain, which is used far beyond the Culture itself. It is often employed in the galaxy as a de facto lingua franca among people who don't share a language. Even the main character of Consider Phlebas, an enemy of the Culture, ready to die to help in its downfall, is fluent in Marain and uses it with other non-Culture characters out of sheer convenience.
It would have helped if the Culture had used some sort of emblem or logo; but, pointlessly unhelpful and unrealistic to the last, the Culture refused to place its trust in symbols. It maintained that it was what it was and had no need for such outward representation. The Culture was every single individual human and machine in it, not one thing. Just as it could not imprison itself with laws, impoverish itself with money or misguide itself with leaders, so it would not misrepresent itself with signs.
Citizens
Biological
The Culture is a posthuman society, which originally arose when seven or eight roughly humanoid space-faring species coalesced into a quasi-collective (a group-civilisation) ultimately consisting of approximately thirty trillion (short scale) sentient (more properly, sapient) beings (this includes artificial intelligences). In Banks's universe, a good part (but by no means an overwhelming percentage) of all sentient species is of the "pan-human" type, as noted in Matter.
Although the Culture was originated by humanoid species, subsequent interactions with other civilisations have introduced many non-humanoid species into the Culture (including some former enemy civilisations), though the majority of the biological Culture is still pan-human. Little uniformity exists in the Culture, and its citizens are such by choice, free to change physical form and even species (though some stranger biological conversions are irreversible, and conversion from biological to artificial sentience is considered to be what is known as an Unusual Life Choice). All members are also free to join, leave, and rejoin, or indeed declare themselves to be, say, 80% Culture.
Within the novels, opponents of the Culture have argued that the role of humans in the Culture is nothing more than that of pets, or parasites on Culture Minds, and that they can have nothing genuinely useful to contribute to a society where science is close to omniscient about the physical universe, where every ailment has been cured, and where every thought can be read. Many of the Culture novels in fact contain characters (from within or without the Culture) wondering how far-reaching the Minds' dominance of the Culture is, and how much of the democratic process within it might in fact be a sham: subtly but very powerfully influenced by the Minds in much the same ways Contact and Special Circumstances influence other societies. Also, except for some mentions about a vote over the Idiran-Culture War, and the existence of a very small number of "Referrers" (humans of especially acute reasoning), few biological entities are ever described as being involved in any high-level decisions.
On the other hand, the Culture can be seen as fundamentally hedonistic (one of the main objectives for any being, including Minds, is to have fun rather than to be "useful"). Also, Minds are constructed, by convention, to care for and value human beings. While a General Contact Unit (GCU) does not strictly need a crew (and could construct artificial avatars when it did), a real human crew adds richness to its existence, and offers distraction during otherwise dull periods. In Consider Phlebas it is noted that Minds still find humans fascinating, especially their odd ability to sometimes achieve similarly advanced reasoning as their much more complex machine brains.
To a large degree, the freedoms enjoyed by humans in the Culture are only available because Minds choose to provide them. The freedoms include the ability to leave the Culture when desired, often forming new associated but separate societies with Culture ships and Minds, most notably the Zetetic Elench and the ultra-pacifist and non-interventionist Peace Faction.
Physiology
Techniques in genetics have advanced in the Culture to the point where bodies can be freed from built-in limitations. Citizens of the Culture refer to a normal human as "human-basic" and the vast majority opt for significant enhancements: severed limbs grow back, sexual physiology can be voluntarily changed from male to female and back (though the process takes time), sexual stimulation and endurance are strongly heightened in both sexes (something that is often the subject of envious debate among other species), pain can be switched off, toxins can be bypassed away from the digestive system, autonomic functions such as heart rate can be switched to conscious control, reflexes like blinking can be switched off, and bones and muscles adapt quickly to changes in gravity without the need to exercise. The degree of enhancement found in Culture individuals varies to taste, with certain of the more exotic enhancements limited to Special Circumstances personnel (for example, weapons systems embedded in various parts of the body).
Most Culture individuals opt to have drug glands that allow for hormonal levels and other chemical secretions to be consciously monitored, released and controlled. These allow owners to secrete on command any of a wide selection of synthetic drugs, from the merely relaxing to the mind-altering: "Snap" is described in Use of Weapons and The Player of Games as "The Culture's favourite breakfast drug". "Sharp Blue" is described as a utility drug, as opposed to a sensory enhancer or a sexual stimulant, that helps in problem solving. "Quicken", mentioned in Excession, speeds up the user's neural processes so that time seems to slow down, allowing them to think and have mental conversation (for example with artificial intelligences) in far less time than it appears to take to the outside observer. "Sperk", as described in Matter, is a mood- and energy-enhancing drug, while other such self-produced drugs include "Calm", "Gain", "Charge", "Recall", "Diffuse", "Somnabsolute", "Softnow", "Focal", "Edge", "Drill", "Gung", "Winnow" and "Crystal Fugue State". The glanded substances have no permanent side-effects and are non-habit-forming.
Phenotypes
For all their genetic improvements, the Culture is by no means eugenically uniform. Human members in the Culture setting vary in size, colour and shape as in reality, and with possibly even further natural differences: in the novella The State of the Art, it is mentioned that a character "looks like a Yeti", and that there is variance among the Culture in minor details such as the number of toes or of joints on each finger; similar variation is noted in Excession.
Some Culture citizens opt to leave the constraints of a human or even humanoid body altogether, opting to take on the appearance of one of the myriad other galactic sentients (perhaps in order to live with them) or even non-sentient objects as commented upon in Matter (though this process can be irreversible if the desired form is too removed from the structure of the human brain). Certain eccentrics have chosen to become drones or even Minds themselves, though this is considered rude and possibly even insulting by most humans and AIs alike.
While the Culture is generally pan-humanoid (and tends to call itself "human"), various other species and individuals of other species have become part of the Culture.
As all Culture citizens are of perfect genetic health, the very rare cases of a Culture citizen showing any physical deformity are almost certain to be a sort of fashion statement of somewhat dubious taste.
Personality
Almost all Culture citizens are very sociable, of great intellectual capability and learning, and possess very well-balanced psyches. Their biological make-up and their growing up in an enlightened society make neuroses and lesser emotions like greed or (strong) jealousy practically unknown, and produce persons that, in any lesser society, appear very self-composed and charismatic. Character traits like strong shyness, while very rare, are not fully unknown, as shown in Excession. As described there and in Player of Games, a Culture citizen who becomes dysfunctional enough to pose a serious nuisance or threat to others would be offered (voluntary) psychological adjustment therapy and might potentially find himself under constant (non-voluntary) oversight by representatives of the local Mind. In extreme cases, as described in Use of Weapons and Surface Detail, dangerous individuals have been known to be assigned a "slap-drone", a robotic follower who ensures that the person in question doesn't continue to endanger the safety of others.
Artificial
As well as humans and other biological species, sentient artificial intelligences are also members of the Culture. These can be broadly categorised into drones and Minds. Also, by custom, as described in Excession, any artefact (be it a tool or vessel) above a certain capability level has to be given sentience.
Drones
Drones are roughly comparable in intelligence and social status to that of the Culture's biological members. Their intelligence is measured against that of an average biological member of the Culture; a so-called "1.0 value" drone would be considered the mental equal of a biological citizen, whereas lesser drones such as the menial service units of Orbitals are merely proto-sentient (capable of limited reaction to unprogrammed events, but possessing no consciousness, and thus not considered citizens; these take care of much of the menial work in the Culture). The sentience of advanced drones has various levels of redundancy, from systems similar to that of Minds (though much reduced in capability) down to electronic, to mechanical and finally biochemical back-up brains.
Although drones are artificial, the parameters that prescribe their minds are not rigidly constrained, and sentient drones are full individuals, with their own personalities, opinions and quirks. Like biological citizens, Culture drones generally have lengthy names. They also have a form of sexual intercourse for pleasure, called being "in thrall", though this is an intellect-only interfacing with another sympathetic drone.
While civilian drones do generally match humans in intelligence, drones built especially as Contact or Special Circumstances agents are often several times more intelligent, and imbued with extremely powerful senses, powers and armaments (usually forcefield and effector-based, though occasionally more destructive weaponry such as lasers or, exceptionally, "knife-missiles" are referred to) all powered by antimatter reactors. Despite being purpose-built, these drones are still allowed individual personalities and given a choice in lifestyle. Indeed, some are eventually deemed psychologically unsuitable as agents (for example as Mawhrin-Skel notes about itself in The Player of Games) and must choose either mental reprofiling or demilitarisation and discharge from Special Circumstances.
Physically, drones are floating units of various sizes and shapes, usually with no visible moving parts. Drones get around the limitations of this inanimation with the ability to project "fields": both those capable of physical force, which allow them to manipulate objects, as well as visible, coloured fields called "auras", which are used to enable the drone to express emotion. There is a complex drone code based on aura colours and patterns (which is fully understood by biological Culture citizens as well). Drones have full control of their auras and can display emotions they're not feeling or can switch their aura off. The drone, Jase, in Consider Phlebas, is described as being constructed before the use of auras, and refuses to be retrofitted with them, preferring to remain inscrutable.
In size drones vary substantially: the oldest still alive (eight or nine thousand years old) tend to be around the size of humans, whereas later technology allows drones to be small enough to lie in a human's cupped palm; modern drones may be any size between these extremes according to fashion and personal preference. Some drones are also designed as utility equipment with their own sentience, such as the gelfield protective suit described in Excession.
Minds
By contrast to drones, Minds are orders of magnitude more powerful and intelligent than the Culture's other biological and artificial citizens. Typically they inhabit and act as the controllers of large-scale Culture hardware such as ships or space-based habitats. Unsurprisingly, given their duties, Minds are tremendously powerful: capable of running all of the functions of a ship or habitat, while holding potentially billions of simultaneous conversations with the citizens that live aboard them. To allow them to perform at such a high degree, they exist partially in hyperspace to get around hindrances to computing power such as the speed of light.
In Iain M. Banks's Culture series, most larger starships, some inhabited planets and all orbitals have their own Minds: sapient, hyperintelligent machines originally built by biological species, which have evolved, redesigned themselves, and become many times more intelligent than their original creators. According to Consider Phlebas, a Mind is an ellipsoid object roughly the size of a bus and weighing many thousands of tons. A Mind is in fact a four-dimensional entity, meaning that the ellipsoid is only the protrusion of the larger four-dimensional device into our 'real space'.
In the Culture universe, Minds have become an indispensable part of the prevailing society, enabling much of its post-scarcity amenities by planning and automating societal functions, and by handling day-to-day administration with mere fractions of their mental power.
The main difference between Minds and other extremely powerful artificial intelligences in fiction is that they are highly humanistic and benevolent. They are so both by design, and by their shared culture. They are often even rather eccentric. Yet, by and large, they show no wish to supplant or dominate their erstwhile creators.
On the other hand, it can also be argued that to the Minds, the human-like members of the Culture amount to little more than pets, whose wants are followed on a Mind's whim. Within the series, this dynamic is played on more than once. In Excession, it is also used to put a Mind in its place: in the mythology, a Mind is still not thought to be a god, but an artificial intelligence capable of surprise, and even fear.
Although the Culture is a type of utopian anarchy, Minds most closely approach the status of leaders, and would likely be considered godlike in less rational societies. As independent, thinking beings, each has its own character, and indeed, legally (insofar as the Culture has a 'legal system'), each is a Culture citizen. Some Minds are more aggressive, some more calm; some don't mind mischief, others simply demonstrate intellectual curiosity. But above all they tend to behave rationally and benevolently in their decisions.
As mentioned before, Minds can serve several different purposes, but Culture ships and habitats have one special attribute: the Mind and the ship or habitat are perceived as one entity; in some ways the Mind is the ship, certainly from its passengers' point of view. It seems normal practice to address the ship's Mind as "Ship" (and an Orbital hub as "Hub"). However, a Mind can transfer its 'mind state' into and out of its ship 'body', and even switch roles entirely, becoming (for example) an Orbital Hub from a warship.
More often than not, the Mind's character defines the ship's purpose. Minds do not end up in roles unsuited to them; an antisocial Mind simply would not volunteer to organise the care of thousands of humans, for example.
On occasion groupings of two or three Minds may run a ship. This seems to be normal practice for larger vehicles such as GSVs, though smaller ships only ever seem to have one Mind.
Banks also hints at a Mind's personality becoming defined at least partially before its creation or 'birth'. Warships, as an example, are designed to revel in controlled destruction; seeing a certain glory in achieving a 'worthwhile' death also seems characteristic. The presence of human crews on board warships may discourage such recklessness, since in the normal course of things, a Mind would not risk beings other than itself.
With their almost godlike powers of reasoning and action comes a temptation to bend (or break) Cultural norms of ethical behaviour, if deemed necessary for some greater good. In The Player of Games, a Culture citizen is blackmailed, apparently by Special Circumstances Minds, into assisting the overthrow of a barbaric empire, while in Excession, a conspiracy by some Minds to start a war against an oppressive alien race nearly comes to fruition. Yet even in these rare cases, the essentially benevolent intentions of Minds towards other Culture citizens are never in question. More than any other beings in the Culture, Minds are the ones faced with the most complex and provocative ethical dilemmas.
While Minds would likely have different capabilities, especially given their widely differing ages (and thus technological sophistication), this is not a theme of the books. It might be speculated that the older Minds are upgraded to keep in step with the advances in technology, thus making this point moot. It is also noted in Matter that every Culture Mind writes its own operating system, thus continually improving itself and, as a side benefit, becoming much less vulnerable to outside takeover by electronic means and viruses, as every Mind's processing functions work differently.
The high computing power of the Mind is apparently enabled by thought processes (and electronics) being constantly in hyperspace (thus circumventing the light-speed limit in computation). Minds do have back-up capabilities that function at light speed if the hyperspace capabilities fail; however, this reduces their computational powers by several orders of magnitude (though they remain sentient).
The storage capability of a GSV Mind is described in Consider Phlebas as 10^30 bytes (1 million yottabytes).
The Culture is a society undergoing slow (by present-day Earth standards) but constant technological change, so the stated capacity of Minds is open to change. In the last 3000 years the capacity of Minds has increased considerably. By the time of the events of the novel Excession, in the mid 19th century, Minds from the first millennium are referred to jocularly as minds, with a small 'm'. Their capacities only allow them to be considered equivalent to what are now known as Cores: small (in the literal physical sense) artificial intelligences used in shuttles, trans-light modules, drones, and other machines not large enough for a full-scale Mind. While still considered sentient, a mind's power at this point is considered greatly inferior to a contemporary Mind. That said, it is possible for Minds to receive upgrades, improvements and enhancements after construction, allowing them to remain up to date.
Using the sensory equipment available to the Culture, Minds can see inside solid objects; in principle they can also read minds by examining the cellular processes inside a living brain, but Culture Minds regard such mindreading as taboo. The only known Mind to break this taboo, the Grey Area seen in Excession, is largely ostracized and shunned by other Minds as a result. In Look to Windward an example is cited of an attempt to destroy a Culture Mind by smuggling a minuscule antimatter bomb onto a Culture orbital inside the head of a Chelgrian agent. However, the bomb ends up being spotted without the taboo being broken.
In Consider Phlebas, a typical Mind is described as a mirror-like ellipsoid of several dozen cubic metres, but weighing many thousands of tons, due to the fact that it is made up of hyper-dense matter. It is noted that most of its 'body' only exists in the real world at the outer shell, the inner workings staying constantly within hyperspace.
The Mind in Consider Phlebas is also described as having internal power sources which function as back-up shield generators and space propulsion; given the rational, safety-conscious thinking of Minds, it would be reasonable to assume that all Minds have such features, as well as a complement of drones and other remote sensors as also described.
Other equipment available to them spans the whole range of the Culture's technological capabilities and its practically limitless resources. However, this equipment would more correctly be considered emplaced in the ship or orbital that the Mind is controlling, rather than being part of the Mind itself.
Minds are constructed entities, which have general parameters fixed by their constructors (other Minds) before 'birth', not unlike biological beings. A wide variety of characteristics can be and are manipulated, such as introversion-extroversion, aggressiveness (for warships) or general disposition.
However, the character of a Mind evolves as well, and Minds often change over the course of centuries, sometimes changing personality entirely. This is often followed by them becoming eccentric or at least somewhat odd. Others drift from the Culture-accepted ethical norms, and may even start influencing their own society in subtle ways, selfishly furthering their own views of how the Culture should act.
Minds have also been known to commit suicide to escape punishment, or because of grief.
Minds are constructed with a personality typical of the Culture's interests, i.e. full of curiosity, general benevolence (expressed in the 'good works' actions of the Culture, or in the protectiveness regarding sentient beings) and respect for the Culture's customs.
Nonetheless, Minds have their own interests in addition to what their peers expect them to do for the Culture, and may develop fascinations or hobbies like other sentient beings do.
The mental capabilities of Minds are described in Excession to be vast enough to run entire universe-simulations inside their own imaginations, exploring metamathical (a fictional branch of metamathematics) scenarios, an activity addictive enough to cause some Minds to totally withdraw from caring about our own physical reality into "Infinite Fun Space", their own, ironic and understated term for this sort of activity.
One of the main activities of Ship Minds is the guidance of spaceships from a certain minimum size upwards. A Culture spaceship is the Mind and vice versa; there are no different names for the two, and a spaceship without a Mind would be considered damaged or incomplete by the Culture.
Ship Mind classes include General Systems Vehicle (GSV), Medium Systems Vehicle (MSV), Limited Systems Vehicle (LSV), General Contact Vehicle (GCV), General Contact Unit (GCU), Limited Contact Unit (LCU), Rapid Offensive Unit (ROU), General Offensive Unit (GOU), Limited Offensive Unit (LOU), Demilitarised ROU (dROU), Demilitarised GOU (dGOU), Demilitarised LOU (dLOU), Very Fast Picket (VFP, a synonym for dROU), Fast Picket (FP, a synonym for dGOU or dLOU), and Superlifter.
These ships provide a convenient 'body' for a Mind, which is too large and too important to be contained within smaller, more fragile shells. Following the 'body' analogy, it also provides the Mind with the capability of physical movement. As Minds are living beings with curiosity, emotion and wishes of their own, such mobility is likely very important to most.
Culture Minds (mostly also being ships) usually give themselves whimsical names, though these often hint at their function as well. Even the names of warships retain this humorous approach, though the implications are much darker.
Some Minds also take on functions which either preclude or discourage movement. These usually administer various types of Culture facilities:
Orbital Hubs – A Culture Orbital is a smaller version of a ringworld, with large numbers of people living on its inside surface in a planet-like environment.
Rocks – Minds in charge of planetoid-like structures, built/accreted, mostly from the earliest times of the Culture before it moved into space-built orbitals.
Stores – Minds of a quiet temperament run these asteroids, containing vast hangars, full of mothballed military ships or other equipment. Some 'Rocks' also act as 'Stores'.
University Sages – Minds that run Culture universities / schools, a very important function as every Culture citizen has an extensive education and further learning is considered one of the most important reasons for life in the Culture.
Eccentric – Culture Minds who have become "... a bit odd" (as compared to the very rational standards of other Culture Minds). Existing at the fringe of the Culture, they can be considered (and consider themselves) as somewhat, but not wholly part of the Culture.
Sabbaticaler – Culture Minds who have decided to abdicate from their peer-pressure based duties in the Culture for a time.
Ulterior – Minds of the Culture Ulterior, an umbrella term for all the no-longer-quite-Culture factions.
Converts – Minds (or sentient computers) from other societies who have chosen to join the Culture.
Absconder – Minds who have completely left the Culture, especially when in doing so having deserted some form of task.
Deranged – A more extreme version of Eccentric as implied in The Hydrogen Sonata
Minds (and, as a consequence, Culture starships) usually bear names that do a little more than just identify them. The Minds themselves choose their own names, and thus they usually express something about a particular Mind's attitude, character or aims in their personal life. They range from funny to just plain cryptic. Some examples are:
Sanctioned Parts List – a habitation / factory ship
So Much For Subtlety – a habitation / factory ship
All Through With This Niceness And Negotiation Stuff – a warship
Attitude Adjuster – a warship
Of Course I Still Love You – an ambassador ship
Funny, It Worked Last Time... – an ambassador ship
Names
Some humanoid or drone Culture citizens have long names, often with seven or more words. Some of these words specify the citizen's origin (place of birth or manufacture), some an occupation, and some may denote specific philosophical or political alignments (chosen later in life by the citizen themselves), or make other similarly personal statements. An example would be Diziet Sma, whose full name is Rasd-Coduresa Diziet Embless Sma da' Marenhide:
Rasd-Coduresa is the planetary system of her birth, and the specific object (planet, orbital, Dyson sphere, etc.). The -sa suffix is roughly equivalent to -er in English. By this convention, Earth humans would all be named Sun-Earthsa (or Sun-Earther).
Diziet is her given name. This is chosen by a parent, usually the mother.
Embless is her chosen name. Most Culture citizens choose this when they reach adulthood (according to The Player of Games this is known as "completing one's name"). As with all conventions in the Culture, it may be broken or ignored: some change their chosen name during their lives, some never take one.
Sma is her surname, usually taken from one's mother.
da' Marenhide is the house or estate she was raised within, the da or dam being similar to von in German. (The usual formation is dam; da is used in Sma's name because the house name begins with an M, eliding an awkward phoneme repetition.)
Iain Banks gave his own Culture name as "Sun-Earther Iain El-Bonko Banks of North Queensferry".
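The naming scheme above is regular enough to sketch in code. The example below is illustrative only: the components and the dam/da' elision rule are taken from the Diziet Sma breakdown above, and the second call shows a hypothetical fully formal rendering of Banks's own example.

```python
# Assemble a Culture full name from its parts, following the breakdown
# of Diziet Sma's name given above. Illustrative only.

def culture_full_name(origin: str, given: str, chosen: str,
                      surname: str, house: str) -> str:
    # origin is the birth system/object; "-sa" (roughly "-er") is appended.
    # "dam" is the usual house particle, elided to "da'" when the house
    # name begins with an M (as in "da' Marenhide").
    particle = "da'" if house[:1].upper() == "M" else "dam"
    return f"{origin}sa {given} {chosen} {surname} {particle} {house}"

print(culture_full_name("Rasd-Codure", "Diziet", "Embless", "Sma", "Marenhide"))
# Rasd-Coduresa Diziet Embless Sma da' Marenhide

# A hypothetical formal rendering of the name Banks gave informally as
# "Sun-Earther Iain El-Bonko Banks of North Queensferry":
print(culture_full_name("Sun-Earth", "Iain", "El-Bonko", "Banks", "North Queensferry"))
# Sun-Earthsa Iain El-Bonko Banks dam North Queensferry
```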
Death
The Culture has a relatively relaxed attitude towards death. Genetic manipulation and the continual benevolent surveillance of the Minds make natural or accidental death almost unknown. Advanced technology allows citizens to make backup copies of their personalities, allowing them to be resurrected in case of death. The form of that resurrection can be specified by the citizen, with personalities returning either in the same biological form, in an artificial form (see below), or even just within virtual reality. Some citizens choose to go into "storage" (a form of suspended animation) for long periods of time, out of boredom or curiosity about the future.
Attitudes individual citizens have towards death are varied (and have varied throughout the Culture's history). While many, if not most, citizens make some use of backup technology, many others do not, preferring instead to risk death without the possibility of recovery (for example when engaging in extreme sports). These citizens are sometimes called "disposables", and are described in Look to Windward. Taking into account such accidents, voluntary euthanasia for emotional reasons, or choices like sublimation, the average lifespan of humans is described in Excession as being around 350 to 400 years. Some citizens choose to forgo death altogether, although this is rarely done and is viewed as an eccentricity. Other options instead of death include conversion of an individual's consciousness into an AI, joining of a group mind (which can include biological and non-biological consciousnesses), or subliming (usually in association with a group mind).
Concerning the lifespan of drones and Minds, given the durability of Culture technology and the options of mindstate backups, it is reasonable to assume that they live as long as they choose. Even Minds, with their utmost complexity, are known to be backed up (and reactivated if they for example die in a risky mission, see GSV Lasting Damage). It is noted that even Minds themselves do not necessarily live forever either, often choosing to eventually sublime or even killing themselves (as does the double-Mind GSV Lasting Damage due to its choices in the Culture-Idiran war).
Science and technology
Anti-gravity and forcefields
The Culture (and other societies) have developed powerful anti-gravity abilities, closely related to their ability to manipulate forces themselves.
With this ability they can create action-at-a-distance, including forces capable of pushing, pulling, cutting, and even fine manipulation, and forcefields for protection, visual display or plain destructive ability. Such applications still retain restrictions on range and power: while forcefields of many cubic kilometres are possible (and in fact, orbitals are held together by forcefields), even in the chronologically later novels, such as Look to Windward, spaceships are still used for long-distance travel and drones for many remote activities.
With the control of a Mind, fields can be manipulated over vast distances. In Use of Weapons, a Culture warship uses its electromagnetic effectors to hack into a computer light years away.
Artificial intelligence
Artificial intelligences (and to a lesser degree, the non-sentient computers omnipresent in all material goods), form the backbone of the technological advances of the Culture. Not only are they the most advanced scientists and designers the Culture has, their lesser functions also oversee the vast (but usually hidden) production and maintenance capabilities of the society.
The Culture has achieved artificial intelligences where each Mind has thought processing capabilities many orders of magnitude beyond that of human beings, and data storage drives which, if written out on paper and stored in filing cabinets, would cover thousands of planets skyscraper high (as described by one Mind in Consider Phlebas). Yet it has managed to condense these entities to a volume of several dozen cubic metres (though much of the contents and the operating structure are continually in hyperspace). Minds also demonstrate reaction times and multitasking abilities orders of magnitude greater than any sentient being; armed engagements between Culture and equivalent technological civilisations sometimes occur in timeframes as short as microseconds, and standard Orbital Minds are capable of running all of the vital systems on the Orbital while simultaneously conversing with millions of the inhabitants and observing phenomena in the surrounding regions of space.
At the same time, it has achieved drone sentiences and capability of Special Circumstance proportions in forms that could fit easily within a human hand, and built extremely powerful (though not sentient) computers capable of fitting into tiny insect-like drones. Some utilitarian devices (such as spacesuits) are also provided with artificial sentience. These specific types of drones, like all other Culture AI, would also be considered citizens - though as described in the short story "Descendant", they may spend most of the time when their "body" is not in use in a form of remote-linked existence outside of it, or in a form of AI-level virtual reality.
Energy manipulation
A major feature of its post-scarcity society, the Culture is obviously able to gather, manipulate, transfer and store vast amounts of energy. While not explained in detail in the novels, this involves antimatter and the "energy grid", a postulated energy field dividing the universe from neighboring anti-matter universes, and providing practically limitless energy. Transmission or storage of such energy is not explained, though these capabilities must be powerful as well, with tiny drones capable of very powerful manipulatory fields and forces.
The Culture also uses various forms of energy manipulation as weapons, with "gridfire", a method of creating a dimensional rift to the energy grid, releasing astronomical amounts of energy into a region of non-hyperspace, being described as a sort of ultimate weapon more destructive than collapsed antimatter bombardment. One character in Consider Phlebas refers to gridfire as "the weaponry of the end of the universe". Gridfire resembles the zero-point energy used within many popular science fiction stories.
Matter displacement
The Culture (at least by the time of The Player of Games) has developed a form of teleportation capable of transporting both living and unliving matter instantaneously via wormholes. This technology has not rendered spacecraft obsolete – in Excession a barely apple-sized drone was displaced no further than a light-second at maximum range (mass being a limiting factor determining range), a tiny distance in galactic terms. The process also still has a very small chance of failing and killing living beings, but the chance is described as being so small (1 in 61 million) that it normally only becomes an issue when transporting a large number of people and is only regularly brought up due to the Culture's safety conscious nature.
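As a rough back-of-the-envelope illustration of why a failure chance of 1 in 61 million only becomes an issue for large groups (the failure rate is the figure quoted above; the group sizes are arbitrary examples):

```python
# Probability of at least one fatal failure when displacing a group,
# assuming independent displacements at the 1-in-61-million rate above.
p_fail = 1 / 61_000_000

for group in (1_000, 1_000_000, 61_000_000):
    p_any = 1 - (1 - p_fail) ** group
    print(f"{group:>10,} displacements: {p_any:.3%} chance of at least one failure")
# ~0.002% for a thousand, ~1.6% for a million, ~63% for 61 million
```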
Displacement is an integral part of Culture technology, being widely used for a range of applications from peaceful to belligerent. Displacing warheads into or around targets is one of the main forms of attack in space warfare in the Culture universe. The Player of Games mentions that drones can be displaced to catch a person falling from a cliff before they impact the ground, as well.
Brain–computer interfaces
Through "neural lace", a form of brain–computer interface that is implanted into the brains of young people and grows with them, the Culture has the capability to read and store the full sentience of any being, biological or artificial, and thus reactivate a stored being after its death. The neural lace also allows wireless communication with the Minds and databases. This also necessitates the capability to read thoughts, but as described in Look to Windward, doing this without permission is considered taboo.
Starships and warp drives
Starships are living spaces, vehicles and ambassadors of the Culture. A proper Culture starship (as defined by hyperspace capability and the presence of a Mind to inhabit it) may range from several hundreds of metres to hundreds of kilometres. The latter may be inhabited by billions of beings and are artificial worlds in their own right, including whole ecosystems, and are considered to be self-contained representations of all aspects of Culture life and capability.
The Culture (and most other space-faring species in its universe) use a form of Hyperspace-drive to achieve faster-than-light speeds. Banks has evolved a (self-confessedly) technobabble system of theoretical physics to describe the ships' acceleration and travel, using such concepts as "infraspace" and "ultraspace" and an "energy grid" between universes (from which the warp engines "push off" to achieve momentum). An "induced singularity" is used to access infra or ultra space from real space; once there, "engine fields" reach down to the Grid and gain power and traction from it as they travel at high speeds.
These hyperspace engines do not use reaction mass and hence do not need to be mounted on the surface of the ship. They are described as being very dense exotic matter, which only reveals its complexity under a powerful microscope. Acceleration and maximum speed depend on the ratio of the mass of the ship to its engine mass. As with any other matter aboard, ships can gradually manufacture extra engine volume or break it down as needed. In Excession one of the largest ships of the Culture redesigns itself to be mostly engine and reaches a speed of 233,000 times lightspeed. Within the range of the Culture's influence in the galaxy, most ships would still take years of travelling to reach the more remote spots.
Other than the engines used by larger Culture ships, there are a number of other propulsion methods such as gravitic drive at sublight speeds, with antimatter, fusion and other reaction engines occasionally seen with less advanced civilisations, or on Culture hobby craft.
Warp engines can be very small, with Culture drones barely larger than fist-size described as being thus equipped. There is also at least one (apparently non-sentient) species (the "Chuy-Hirtsi" animal), that possesses the innate capability of warp travel. In Consider Phlebas, it is being used as a military transport by the Idirans, but no further details are given.
Nanotechnology
The Culture has highly advanced nanotechnology, though descriptions of such technology in the books are limited. Many of the described uses are by or for Special Circumstances, but there are no indications that the use of nanotechnology is limited in any way. (In a passage in one of the books, there is a brief reference to the question of sentience when comparing the human brain with a "pico-level substrate".)
One of the primary clandestine uses of nanotechnology is information gathering. The Culture likes to be in the know, and as described in Matter "they tend to know everything." Aside from its vast network of sympathetic allies and wandering Culture citizens one of the primary ways that the Culture keeps track of important events is by the use of practically invisible nanobots capable of recording and transmitting their observations. This technique is described as being especially useful to track potentially dangerous people (such as ex-Special Circumstance agents). Via such nanotechnology, it is potentially possible for the Culture (or similarly advanced societies) to see everything happening on a given planet, orbital or any other habitat. The usage of such devices is limited by various treaties and agreements among the Involved.
In addition, EDust assassins are potent Culture terror weapons, composed entirely of nano machines called EDust, or "Everything Dust." They are capable of taking almost any shape or form, including swarms of insects or entire humans or aliens, and possess powerful weaponry capable of levelling entire buildings.
Living space
Much of the Culture's population lives on orbitals, vast artificial worlds that can accommodate billions of people. Others travel the galaxy in huge space ships such as General Systems Vehicles (GSVs) that can accommodate hundreds of millions of people. Almost no Culture citizens are described as living on planets, except when visiting other civilisations. The reason for this is partly because the Culture believes in containing its own expansion to self-constructed habitats, instead of colonising or conquering new planets. With the resources of the universe allowing permanent expansion (at least assuming non-exponential growth), this frees them from having to compete for living space.
The Culture, and other civilisations in Banks' universe, are described as living in these various, often constructed habitats:
Airspheres
These are vast, brown dwarf-sized bubbles of atmosphere enclosed by force fields, and (presumably) set up by an ancient advanced race at least one and a half billion years ago (see: Look to Windward). There is only minimal gravity within an airsphere. They are illuminated by moon-sized orbiting planetoids that emit enormous light beams.
Citizens of the Culture live there only very occasionally as guests, usually to study the complex ecosystem of the airspheres and the dominant life-forms: the "dirigible behemothaurs" and "gigalithine lenticular entities", which may be described as inscrutable, ancient intelligences looking similar to a cross between gigantic blimps and whales. The airspheres slowly migrate around the galaxy, taking anywhere from 50 to 100 million years to complete one circuit. In the novels no one knows who created the airspheres or why, but it is presumed that whoever did has long since sublimed but may maintain some obscure link with the behemothaurs and lenticular entities. Guests in the airspheres are not allowed to use any force-field technology, though no reason has been offered for this prohibition.
The airspheres resemble in some respects the orbit-sized ring of breathable atmosphere created by Larry Niven in The Integral Trees, but they are spherical rather than toroidal, require a force field to retain their integrity, and arose through artificial rather than natural processes.
Orbitals
One of the main types of habitats of the Culture, an orbital is a ring structure orbiting a star, a megastructure akin to a scaled-up Bishop ring. Unlike a Ringworld or a Dyson Sphere, an orbital does not enclose the star (being much too small). Like a ringworld, the orbital rotates to provide an analogue of gravity on the inner surface. A Culture orbital rotates about once every 24 hours and has a gravity-like effect about the same as that of Earth, making the diameter of the ring about , and ensuring that the inhabitants experience night and day. Orbitals feature prominently in many Culture stories.
Planets
Though many other civilisations in the Culture books live on planets, the Culture as currently developed has little direct connection to on-planet existence. Banks has written that he presumes this to be an inherent consequence of space colonisation, and a foundation of the liberal nature of the Culture. A small number of home worlds of the founding member-species of the Culture receive a mention in passing, and a few hundred human-habitable worlds were colonised (some of them terraformed) before the Culture elected to turn towards artificial habitats, preferring to keep the planets it encounters wild. Since then, the Culture has come to look down on terraforming as inelegant, ecologically problematic and possibly even immoral. Less than one percent of the population of the Culture lives on planets, and many find the very concept somewhat bizarre.
This attitude is not absolute though; in Consider Phlebas, some Minds suggest testing a new technology on a "spare planet" (knowing that it could be destroyed in an antimatter explosion if unsuccessful). One could assume – from Minds' usual ethics – that such a planet would have been lifeless to start with. It is also quite possible, even probable, that the suggestion was not made in complete seriousness.
Rings
Ringworld-like megastructures exist in the Culture universe; the texts refer to them simply as "Rings" (with a capital R). Unlike the smaller orbitals, which revolve around a star, these structures are massive and completely encircle a star. Banks does not describe these habitats in detail, but records one as having been destroyed (along with three Spheres) in the Idiran-Culture war. In Matter, the Morthanveld people possess ringworld-like structures made of innumerable various-sized tubes. Those structures, like Niven's Ringworld, encircle a star and are about the same size.
Rocks
These are asteroids and other non-planetary bodies hollowed out for habitation and usually spun for centrifugal artificial gravity. Rocks (with the exception of those used for secretive purposes) are described as having faster-than-light space drives, and thus can be considered a special form of spaceship. Like Orbitals, they are usually administered by one or more Minds.
Rocks do not play a large part in most of the Culture stories, though their use as storage for mothballed military ships (Pittance) and habitats (Phage Rock, one of the founding communities of the Culture) are both key plot points in Excession.
Shellworlds
Shellworlds are introduced in Matter, and consist of multilayered levels of concentric spheres in four dimensions held up by countless titanic interior towers. Their extra-dimensional characteristics render some products of Culture technology too dangerous to use and yet others ineffective, notably access to hyperspace. About 4000 were built millions of years ago as vast machines intended to cast a forcefield around the whole of the galaxy for unknown purposes; fewer than half of those remain at the time of Matter, many having been destroyed by a departed species known as the Iln. The species that developed this technology, known as the Veil or the Involucra, are now lost, and many of the remaining shellworlds have become inhabited, often by many different species throughout their varying levels. Many still hold deadly secret defence mechanisms, often leading to great danger for their new inhabitants, giving them one of their other nicknames: Slaughter Worlds.
Ships
Ships in the Culture are intelligent individuals, often of very large size, controlled by one or more Minds. The ship is considered by the Culture generally and the Mind itself to be the Mind's body (compare avatars). Some ships (GSVs, for example) are tens or even hundreds of kilometres in length and may have millions or even billions of residents who live on them full-time; together with Orbitals, such ships represent the main form of habitat for the Culture. Such large ships may temporarily contain smaller ships with their own populations, and/or manufacture such ships themselves.
In Use of Weapons, the protagonist Zakalwe is allowed to acclimatise himself to the Culture by wandering for days through the habitable levels of a ship (the GSV Size Isn't Everything, which is described as over long), eating and sleeping at the many locations which provide food and accommodation throughout the structure and enjoying the various forms of contact possible with the friendly and accommodating inhabitants.
Spheres
Dyson spheres also exist in the Culture universe but receive only passing mention as "Spheres". Three spheres are recorded as having been destroyed in the Idiran-Culture war.
Interaction with other civilisations
The Culture, living mostly on massive spaceships and in artificial habitats, and also feeling no need for conquest in the typical sense of the word, possesses no borders. Its sphere of influence is better defined by the (current) concentration of Culture ships and habitats as well as the measure of effect its example and its interventions have already had on the "local" population of any galactic sector. As the Culture is also a very graduated and constantly evolving society, its societal boundaries are also constantly in flux (though they tend to be continually expanding during the novels), peacefully "absorbing" societies and individuals.
While the Culture is one of the most advanced and most powerful of all galactic civilisations, it is but one of the "high-level Involved" (called "Optimae" by some less advanced civilisations), the most powerful non-sublimed civilisations which mentor or control the others.
An Involved society is a highly advanced group that has achieved galaxy-wide involvement with other cultures or societies. There are a few dozen Involved societies and hundreds or thousands of well-developed (interstellar) but insufficiently influential societies or cultures; there are also well-developed societies known as "galactically mature" which do not take a dynamic role in the galaxy as a whole. In the novels, the Culture might be considered the premier Involved society, or at least the most dynamic and energetic, especially given that the Culture itself is a growing multicultural fusion of Involved societies. The Involved are contrasted with the Sublimed, groups that have reached a high level of technical development and galactic influence but subsequently abandoned physical reality, ceasing to take serious interventionist interest in galactic civilisation. They are also contrasted with what some Culture people loosely refer to as "barbarians", societies of intelligent beings which lack the technical capacity to know about or take a serious role in their interstellar neighbourhood. There are also the elder civilisations, which are civilisations that reached the required level of technology for sublimation, but chose not to, and have retreated from the larger galactic meta-civilisation.
The Involved are also contrasted with hegemonising swarms (a term used in several of Banks' Culture novels). These are entities that exist to convert as much of the universe as possible into more of themselves; most typically these are technological in nature, resembling more sophisticated forms of grey goo, but the term can be applied to cultures that are sufficiently single-minded in their devotion to mass conquest, control, and colonisation. Both the Culture and the author (in his Notes on the Culture) find this behaviour quixotic and ridiculous. Most often, societies categorised as hegemonising swarms consist of species or groups newly arrived in the galactic community with highly expansionary and exploitative goals. The usage of the term "hegemonising swarm" in this context is considered derisive in the Culture and among other Involved and is used to indicate their low regard for those with these ambitions by comparing their behaviour to that of mindless self-replicating technology. The Culture's central moral dilemma regarding intervention in other societies can be construed as a conflict between the desire to help others and the desire to avoid becoming a hegemonising swarm themselves.
Foreign policy
Although they lead a comfortable life within the Culture, many of its citizens feel a need to be useful and to belong to a society that does not merely exist for their own sake but that also helps improve the lot of sentient beings throughout the galaxy. For that reason the Culture carries out "good works", covertly or overtly interfering in the development of lesser civilisations, with the main aim to gradually guide them towards less damaging paths. As Culture citizens see it these good works provide the Culture with a "moral right to exist".
A group within the Culture, known as Contact, is responsible for its interactions (diplomatic or otherwise) with other civilisations. Non-Contact citizens are apparently not prevented from travelling or interacting with other civilisations, though the effort and potential danger involved in doing so alone makes it much more commonly the case for Culture people simply to join Contact if they long to "see the world". Further within Contact, an intelligence organisation named Special Circumstances exists to deal with interventions which require more covert behaviour; the interventionist approach that the Culture takes to advancing other societies may often create resentment in the affected civilisations and thus requires a rather delicate touch (see: Look to Windward).
In Matter, it is described that there are a number of other galactic civilisations that come close to or potentially even surpass the Culture in power and sophistication. The Culture is very careful and considerate of these groupings, and while still trying to convince them of the Culture ideal, will be much less likely to openly interfere in their activities.
In Surface Detail, three more branches of Contact are described: Quietus, the Quietudinal Service, whose purview is dealing with those entities who have retired from biological existence into digital form and/or those who have died and been resurrected; Numina, which is described as having the charge of contact with races that have sublimed; and Restoria, a subset of Contact which focuses on containing and negating the threat of swarms of self-replicating creatures ("hegswarms").
Behaviour in war
While the Culture is normally pacifist, Contact historically acts as its military arm in times of war and Special Circumstances can be considered its secret service and its military intelligence. During war, most of the strategic and tactical decisions are taken by the Minds, with apparently only a small number of especially gifted humans, the "Referrers", being involved in the top-level decisions, though they are not shown outside Consider Phlebas. It is shown in Consider Phlebas that actual decisions to go to war (as opposed to purely defensive actions) are based on a vote of all Culture citizens, presumably after vigorous discussion within the whole society.
It is described in various novels that the Culture is extremely reluctant to go to war, though it may start to prepare for it long before its actual commencement. In the Idiran-Culture War (possibly one of the most hard-fought wars for the normally extremely superior Culture forces), various star systems, stellar regions and many orbital habitats were overrun by the Idirans before the Culture had converted enough of its forces to military footing. The Culture Minds had had enough foresight to evacuate almost all its affected citizens (apparently numbering in the many billions) in time before actual hostilities reached them. As shown in Player of Games, this is a standard Culture tactic, with its strong emphasis on protecting its citizens rather than sacrificing some of them for short-term goals.
War within the Culture is mostly fought by the Culture's sentient warships, the most powerful of these being war-converted GSVs, which are described as powerful enough to oppose whole enemy fleets. The Culture has little use for conventional ground forces (as it rarely occupies enemy territory); combat drones equipped with knife missiles do appear in Descendant and "terror weapons" (basically intelligent, nano-form assassins) are mentioned in Look to Windward, while infantry combat suits of great power (also usable as capable combat drones when without living occupants) are used in Matter.
Relevance to real-world politics
The inner workings of The Culture are not described in great detail, though it is shown that the society is populated by an empowered, educated and augmented citizenry in a direct democracy or a highly democratic and transparent system of self-governance. In comparisons to the real world, intended or not, the Culture could resemble various posited egalitarian societies, including those in the writings of Karl Marx (the end condition of communism after the withering away of the state), the anarchism of Bakunin and Fourier, among others, libertarian socialism, council communism and anarcho-communism. Other characteristics of The Culture that are recognisable in real-world politics include pacifism, post-capitalism, and transhumanism. Banks deliberately portrayed an imperfect utopia whose imperfection or weakness is related to its interaction with the 'other', that is, exterior civilisations and species that are sometimes warred with or mishandled through the Culture's Contact section, which cannot always control its intrigues and the individuals it either 'employs' or interacts with. This 'dark side' of The Culture also alludes to or echoes mistakes and tragedies in 20th-century Marxist–Leninist countries, although the Culture is generally portrayed as far more 'humane' and just.
Utopia
Comparisons are often made between the Culture and twentieth- and twenty-first-century Western civilisation and nation-states, particularly their interventions in less-developed societies. Such comparisons are frequently muddled by assumptions about the author's own politics.
Ben Collier has said that the Culture is a utopia carrying significantly greater moral legitimacy than the West's proto-democracies. While Culture interventions can seem similar at first to Western interventions, especially when considered alongside their democratising rhetoric, the argument is that the Culture operates completely without material need, and therefore without the possibility of baser motives. This is not to say that the Culture's motives are purely altruistic; a peaceful, enlightened universe full of good neighbours lacking ethnic, religious, and sexual chauvinisms is in the Culture's interest as well. Furthermore, the Culture's ideals, in many ways similar to those of the liberal perspective today, are realised internally to a much greater extent than in the West.
Criticism
Examples are the use of mercenaries to perform the work with which the Culture does not want to dirty its own hands, and even outright threats of invasion (the Culture has issued ultimatums to other civilisations before). Some commentators have also argued that those Special Circumstances agents tasked with civilising foreign cultures (and thus potentially also changing them into a blander, more Culture-like state) are also those most likely to regret these changes, with parallels drawn to real-world special forces trained to operate within the cultural mindsets of foreign nations.
The events of Use of Weapons are an example of just how dirty Special Circumstances will play in order to get its way, and the conspiracy at the heart of the plot of Excession demonstrates how at least some Minds are prepared to risk killing sentient beings when they conclude that these actions are beneficial for the long-term good. Special Circumstances represents a very small fraction of Contact, which itself is only a small fraction of the entire Culture, making it comparable again to the size and influence of modern intelligence agencies.
Issues raised
The Culture stories are largely about problems and paradoxes that confront liberal societies. The Culture itself is an "ideal-typical" liberal society; that is, as pure an example as one can reasonably imagine. It is highly egalitarian; the liberty of the individual is its most important value; and all actions and decisions are expected to be determined according to a standard of reasonability and sociability inculcated into all people through a progressive system of education. It is a society so beyond material scarcity that for almost all practical purposes its people can have and do what they want. If they do not like the behaviour or opinions of others, they can easily move to a more congenial Culture population centre (or Culture subgroup), and hence there is little need to enforce codes of behaviour.
Even the Culture has to compromise its ideals where diplomacy and its own security are concerned. Contact, the group that handles these issues, and Special Circumstances, its secret service division, can employ only those on whose talents and emotional stability it can rely, and may even reject self-aware drones built for its purposes that fail to meet its requirements. Hence these divisions are regarded as the Culture's elite and membership is widely regarded as a prize; yet also something that can be shameful as it contradicts many of the Culture's moral codes.
Within Contact and Special Circumstances, there are also inner circles that can take control in crises, somewhat contradictory to the ideal notions of democratic and open process the Culture espouses. Contact and Special Circumstances may suppress or delay the release of information, for example to avoid creating public pressure for actions they consider imprudent or to prevent other civilisations from exploiting certain situations.
In dealing with less powerful regressive civilisations, the Culture usually intervenes discreetly, for example by protecting and discreetly supporting the more liberal elements, or subverting illiberal institutions. For instance, in Use of Weapons, the Culture operates within a less advanced illiberal society through control of a business cartel which is known for its humanitarian and social development investments, as well as generic good Samaritanism. In Excession, a sub-group of Minds conspires to provoke a war with the extremely sadistic Affront, although the conspiracy is foiled by a GSV that is a deep cover Special Circumstances agent. Only one story, Consider Phlebas, pits the Culture against a highly illiberal society of approximately equal power: the aggressive, theocratic Idirans. Though they posed no immediate, direct threat to the Culture, the Culture declared war because it would have felt useless if it allowed the Idirans' ruthless expansion to continue. The Culture's decision was a value-judgement rather than a utilitarian calculation, and the "Peace Faction" within the Culture seceded. Later in the timeline of the Culture's universe, the Culture has reached a technological level at which most past civilisations have Sublimed, in other words disengaged from Galactic politics and from most physical interaction with other civilisations. The Culture continues to behave "like an idealistic adolescent".
As of 2008, three stories force the Culture to consider its approach to more powerful civilisations. In one incident during the Culture-Idiran War, they strive to avoid offending a civilisation so advanced that it has disengaged from Galactic politics, and note that this hyper-advanced society is not a threat to either the welfare or the values of the Culture. In Excession, an overwhelmingly more powerful individual from an extremely advanced civilisation is simply passing through on its way from one plane of the physical Reality to another, and there is no real interaction. In the third case it sets up teams to study a civilisation that is not threatening but is thought to have eliminated aggressors in the past.
List of books describing the Culture
Banks on the Culture
When asked in Wired magazine (June 1996) whether mankind's fate depends on having intelligent machines running things, as in the Culture, Banks replied:
In a 2002 interview with Science Fiction Weekly magazine, when asked:
Banks replied:
Notes
References
Bibliography
Further reading
Anarchist fiction
Communism in fiction
Artificial intelligence in fiction
Artificial wormholes in fiction
Cyborgs in fiction
Fiction about robots
Genetic engineering in fiction
Fiction about consciousness transfer
Nanotechnology in fiction
Fictional civilizations
Science fiction literature
Series of books
Space opera
Utopian fiction |
58899 | https://en.wikipedia.org/wiki/Direct%20sum%20of%20modules | Direct sum of modules | In abstract algebra, the direct sum is a construction which combines several modules into a new, larger module. The direct sum of modules is the smallest module which contains the given modules as submodules with no "unnecessary" constraints, making it an example of a coproduct. Contrast with the direct product, which is the dual notion.
The most familiar examples of this construction occur when considering vector spaces (modules over a field) and abelian groups (modules over the ring Z of integers). The construction may also be extended to cover Banach spaces and Hilbert spaces.
See the article decomposition of a module for a way to write a module as a direct sum of submodules.
Construction for vector spaces and abelian groups
We give the construction first in these two cases, under the assumption that we have only two objects. Then we generalize to an arbitrary family of arbitrary modules. The key elements of the general construction are more clearly identified by considering these two cases in depth.
Construction for two vector spaces
Suppose V and W are vector spaces over the field K. The cartesian product V × W can be given the structure of a vector space over K by defining the operations componentwise:
(v1, w1) + (v2, w2) = (v1 + v2, w1 + w2)
α (v, w) = (α v, α w)
for v, v1, v2 ∈ V, w, w1, w2 ∈ W, and α ∈ K.
The resulting vector space is called the direct sum of V and W and is usually denoted by a plus symbol inside a circle: V ⊕ W.
It is customary to write the elements of an ordered sum not as ordered pairs (v, w), but as a sum v + w.
The subspace V × {0} of V ⊕ W is isomorphic to V and is often identified with V; similarly for {0} × W and W. (See internal direct sum below.) With this identification, every element of V ⊕ W can be written in one and only one way as the sum of an element of V and an element of W. The dimension of V ⊕ W is equal to the sum of the dimensions of V and W. One elementary use is the reconstruction of a finite-dimensional vector space from any subspace W and its orthogonal complement: V = W ⊕ W⊥.
This construction readily generalizes to any finite number of vector spaces.
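As a simple illustration of the dimension statement (an example added here for concreteness, not taken from the text), the direct sum of the real coordinate spaces of dimensions 2 and 3 is a space of dimension 5:

\mathbb{R}^2 \oplus \mathbb{R}^3 \cong \mathbb{R}^5, \qquad \dim(V \oplus W) = \dim V + \dim W .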
Construction for two abelian groups
For abelian groups G and H which are written additively, the direct product of G and H is also called a direct sum. Thus the Cartesian product G × H is equipped with the structure of an abelian group by defining the operations componentwise:
(g1, h1) + (g2, h2) = (g1 + g2, h1 + h2)
for g1, g2 in G, and h1, h2 in H.
Integral multiples are similarly defined componentwise by
n(g, h) = (ng, nh)
for g in G, h in H, and n an integer. This parallels the extension of the scalar product of vector spaces to the direct sum above.
The resulting abelian group is called the direct sum of G and H and is usually denoted by a plus symbol inside a circle: G ⊕ H.
It is customary to write the elements of an ordered sum not as ordered pairs (g, h), but as a sum g + h.
The subgroup G × {0} of G ⊕ H is isomorphic to G and is often identified with G; similarly for {0} × H and H. (See internal direct sum below.) With this identification, it is true that every element of G ⊕ H can be written in one and only one way as the sum of an element of G and an element of H. The rank of G ⊕ H is equal to the sum of the ranks of G and H.
This construction readily generalises to any finite number of abelian groups.
Construction for an arbitrary family of modules
One should notice a clear similarity between the definitions of the direct sum of two vector spaces and of two abelian groups. In fact, each is a special case of the construction of the direct sum of two modules. Additionally, by modifying the definition one can accommodate the direct sum of an infinite family of modules. The precise definition is as follows.
Let R be a ring, and {Mi : i ∈ I} a family of left R-modules indexed by the set I. The direct sum of {Mi} is then defined to be the set of all sequences (xi)i∈I where xi ∈ Mi for every i and xi = 0 for cofinitely many indices i (that is, for all but finitely many i). (The direct product is analogous, but the entries are not required to vanish for cofinitely many indices.)
It can also be defined as functions α from I to the disjoint union of the modules Mi such that α(i) ∈ Mi for all i ∈ I and α(i) = 0 for cofinitely many indices i. These functions can equivalently be regarded as finitely supported sections of the fiber bundle over the index set I, with the fiber over i being Mi.
This set inherits the module structure via component-wise addition and scalar multiplication. Explicitly, two such sequences (or functions) α and β can be added by writing (α + β)(i) = α(i) + β(i) for all i (note that this is again zero for all but finitely many indices), and such a function can be multiplied with an element r from R by defining (rα)(i) = r·α(i) for all i. In this way, the direct sum becomes a left R-module, and it is denoted ⊕i∈I Mi.
It is customary to write the sequence as a sum Σi xi. Sometimes a primed summation Σ′i xi is used to indicate that cofinitely many of the terms are zero.
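In displayed form, the construction just described can be summarised as follows (a restatement of the definition above):

\bigoplus_{i \in I} M_i \;=\; \bigl\{ (x_i)_{i \in I} : x_i \in M_i,\ x_i = 0 \text{ for all but finitely many } i \bigr\},
\qquad (x_i) + (y_i) = (x_i + y_i), \qquad r\,(x_i) = (r x_i).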
Properties
The direct sum is a submodule of the direct product of the modules Mi. The direct product is the set of all functions α from I to the disjoint union of the modules Mi with α(i) ∈ Mi, but not necessarily vanishing for all but finitely many i. If the index set I is finite, then the direct sum and the direct product are equal.
Each of the modules Mi may be identified with the submodule of the direct sum consisting of those functions which vanish on all indices different from i. With these identifications, every element x of the direct sum can be written in one and only one way as a sum of finitely many elements from the modules Mi.
If the Mi are actually vector spaces, then the dimension of the direct sum is equal to the sum of the dimensions of the Mi. The same is true for the rank of abelian groups and the length of modules.
Every vector space over the field K is isomorphic to a direct sum of sufficiently many copies of K, so in a sense only these direct sums have to be considered. This is not true for modules over arbitrary rings.
The tensor product distributes over direct sums in the following sense: if N is some right R-module, then the direct sum of the tensor products of N with Mi (which are abelian groups) is naturally isomorphic to the tensor product of N with the direct sum of the Mi.
Direct sums are commutative and associative (up to isomorphism), meaning that it doesn't matter in which order one forms the direct sum.
The abelian group of R-linear homomorphisms from the direct sum to some left R-module L is naturally isomorphic to the direct product of the abelian groups of R-linear homomorphisms from Mi to L: Indeed, there is clearly a homomorphism τ from the left hand side to the right hand side, where τ(θ)(i) is the R-linear homomorphism sending x ∈ Mi to θ(x) (using the natural inclusion of Mi into the direct sum). The inverse of the homomorphism τ is defined by τ⁻¹(f)(α) = Σi∈I f(i)(α(i)) for any α in the direct sum of the modules Mi. The key point is that the definition of τ⁻¹ makes sense because α(i) is zero for all but finitely many i, and so the sum is finite. In particular, the dual vector space of a direct sum of vector spaces is isomorphic to the direct product of the duals of those spaces.
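The isomorphism described in this paragraph can be written in displayed form as

\operatorname{Hom}_R\Bigl(\bigoplus_{i \in I} M_i,\ L\Bigr) \;\cong\; \prod_{i \in I} \operatorname{Hom}_R\bigl(M_i, L\bigr).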
The finite direct sum of modules is a biproduct: If pk : A1 ⊕ ⋯ ⊕ An → Ak are the canonical projection mappings and ik : Ak → A1 ⊕ ⋯ ⊕ An are the inclusion mappings, then i1 ∘ p1 + ⋯ + in ∘ pn equals the identity morphism of A1 ⊕ ⋯ ⊕ An, and pl ∘ ik is the identity morphism of Ak in the case l = k, and is the zero map otherwise.
Internal direct sum
Suppose M is some R-module, and Mi is a submodule of M for every i in I. If every x in M can be written in one and only one way as a sum of finitely many elements of the Mi, then we say that M is the internal direct sum of the submodules Mi. In this case, M is naturally isomorphic to the (external) direct sum of the Mi as defined above.
A submodule N of M is a direct summand of M if there exists some other submodule N′ of M such that M is the internal direct sum of N and N′. In this case, N and N′ are complementary submodules.
Universal property
In the language of category theory, the direct sum is a coproduct and hence a colimit in the category of left R-modules, which means that it is characterized by the following universal property. For every i in I, consider the natural embedding ji : Mi → ⊕j∈I Mj,
which sends the elements of Mi to those functions which are zero for all arguments but i. If fi : Mi → M are arbitrary R-linear maps for every i, then there exists precisely one R-linear map f : ⊕i∈I Mi → M
such that f ∘ ji = fi for all i.
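Written out, the unique map f of the universal property acts by (where xi denotes an element of Mi and the sums are finite because only finitely many components are non-zero)

f\Bigl(\sum_{i} j_i(x_i)\Bigr) \;=\; \sum_{i} f_i(x_i), \qquad f \circ j_i = f_i \quad\text{for all } i \in I.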
Grothendieck group
The direct sum gives a collection of objects the structure of a commutative monoid, in that the addition of objects is defined, but not subtraction. In fact, subtraction can be defined, and every commutative monoid can be extended to an abelian group. This extension is known as the Grothendieck group. The extension is done by defining equivalence classes of pairs of objects, which allows certain pairs to be treated as inverses. The construction, detailed in the article on the Grothendieck group, is "universal", in that it has the universal property of being unique, and homomorphic to any other embedding of a commutative monoid in an abelian group.
Direct sum of modules with additional structure
If the modules we are considering carry some additional structure (for example, a norm or an inner product), then the direct sum of the modules can often be made to carry this additional structure, as well. In this case, we obtain the coproduct in the appropriate category of all objects carrying the additional structure. Two prominent examples occur for Banach spaces and Hilbert spaces.
In some classical texts, the phrase "direct sum of algebras over a field" is also introduced for denoting the algebraic structure that is presently more commonly called a direct product of algebras; that is, the Cartesian product of the underlying sets with the componentwise operations. This construction, however, does not provide a coproduct in the category of algebras, but a direct product (see note below and the remark on direct sums of rings).
Direct sum of algebras
A direct sum of algebras A and B is the direct sum A ⊕ B as vector spaces, with the product defined componentwise: (a1 + b1)(a2 + b2) = a1a2 + b1b2 for a1, a2 in A and b1, b2 in B.
Consider these classical examples:
R ⊕ R is ring isomorphic to the algebra of split-complex numbers, also used in interval analysis.
C ⊕ C is the algebra of tessarines introduced by James Cockle in 1848.
H ⊕ H, called the split-biquaternions, was introduced by William Kingdon Clifford in 1873.
Joseph Wedderburn exploited the concept of a direct sum of algebras in his classification of hypercomplex numbers. See his Lectures on Matrices (1934), page 151.
Wedderburn makes clear the distinction between a direct sum and a direct product of algebras: for the direct sum the field of scalars acts jointly on both parts, λ(x ⊕ y) = λx ⊕ λy, while for the direct product a scalar factor may be collected alternately with either part, but not both: λ(x, y) = (λx, y) = (x, λy).
Ian R. Porteous uses the three direct sums above, denoting them as rings of scalars in his analysis of Clifford Algebras and the Classical Groups (1995).
The construction described above, as well as Wedderburn's use of the terms direct sum and direct product, follow a different convention from the one in category theory. In categorical terms, Wedderburn's direct sum is a categorical product, whilst Wedderburn's direct product is a coproduct (or categorical sum), which (for commutative algebras) actually corresponds to the tensor product of algebras.
Direct sum of Banach spaces
The direct sum of two Banach spaces and is the direct sum of and considered as vector spaces, with the norm for all and
Generally, if is a collection of Banach spaces, where traverses the index set then the direct sum is a module consisting of all functions defined over such that for all and
The norm is given by the sum above. The direct sum with this norm is again a Banach space.
For example, if we take the index set and then the direct sum is the space which consists of all the sequences of reals with finite norm
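The norms referred to above are presumably the usual ℓ¹-type sum norms, which is consistent with the sequence-space example just given (the space names X, Y and Xi are generic placeholders here):

\|(x, y)\| = \|x\|_X + \|y\|_Y, \qquad \|(x_i)_{i \in I}\| = \sum_{i \in I} \|x_i\|_{X_i} < \infty, \qquad \|(x_n)_{n \in \mathbb{N}}\|_{\ell^1} = \sum_{n} |x_n| .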
A closed subspace of a Banach space is complemented if there is another closed subspace of such that is equal to the internal direct sum Note that not every closed subspace is complemented; e.g. is not complemented in
Direct sum of modules with bilinear forms
Let be a family indexed by of modules equipped with bilinear forms. The orthogonal direct sum is the module direct sum with bilinear form defined by
in which the summation makes sense even for infinite index sets because only finitely many of the terms are non-zero.
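Explicitly, the bilinear form referred to above is presumably the componentwise sum of the given forms:

B\bigl((x_i)_{i \in I}, (y_i)_{i \in I}\bigr) \;=\; \sum_{i \in I} B_i(x_i, y_i).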
Direct sum of Hilbert spaces
If finitely many Hilbert spaces are given, one can construct their orthogonal direct sum as above (since they are vector spaces), defining the inner product as:
The resulting direct sum is a Hilbert space which contains the given Hilbert spaces as mutually orthogonal subspaces.
If infinitely many Hilbert spaces for are given, we can carry out the same construction; notice that when defining the inner product, only finitely many summands will be non-zero. However, the result will only be an inner product space and it will not necessarily be complete. We then define the direct sum of the Hilbert spaces to be the completion of this inner product space.
Alternatively and equivalently, one can define the direct sum of the Hilbert spaces as the space of all functions α with domain such that is an element of for every and:
The inner product of two such functions α and β is then defined as:
This space is complete and we get a Hilbert space.
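The summability condition and inner product alluded to above are presumably the standard ℓ²-type ones:

\sum_{i \in I} \|\alpha(i)\|_{H_i}^2 < \infty, \qquad \langle \alpha, \beta \rangle \;=\; \sum_{i \in I} \langle \alpha(i), \beta(i) \rangle_{H_i}.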
For example, if we take the index set and then the direct sum is the space which consists of all the sequences of reals with finite norm. Comparing this with the example for Banach spaces, we see that the Banach space direct sum and the Hilbert space direct sum are not necessarily the same. But if there are only finitely many summands, then the Banach space direct sum is isomorphic to the Hilbert space direct sum, although the norm will be different.
Every Hilbert space is isomorphic to a direct sum of sufficiently many copies of the base field, which is either R or C. This is equivalent to the assertion that every Hilbert space has an orthonormal basis. More generally, every closed subspace of a Hilbert space is complemented because it admits an orthogonal complement. Conversely, the Lindenstrauss–Tzafriri theorem asserts that if every closed subspace of a Banach space is complemented, then the Banach space is isomorphic (topologically) to a Hilbert space.
See also
References
Linear algebra
Module theory |
58992 | https://en.wikipedia.org/wiki/Linear-feedback%20shift%20register | Linear-feedback shift register | In computing, a linear-feedback shift register (LFSR) is a shift register whose input bit is a linear function of its previous state.
The most commonly used linear function of single bits is exclusive-or (XOR). Thus, an LFSR is most often a shift register whose input bit is driven by the XOR of some bits of the overall shift register value.
The initial value of the LFSR is called the seed, and because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current (or previous) state. Likewise, because the register has a finite number of possible states, it must eventually enter a repeating cycle. However, an LFSR with a well-chosen feedback function can produce a sequence of bits that appears random and has a very long cycle.
Applications of LFSRs include generating pseudo-random numbers, pseudo-noise sequences, fast digital counters, and whitening sequences. Both hardware and software implementations of LFSRs are common.
The mathematics of a cyclic redundancy check, used to provide a quick check against transmission errors, are closely related to those of an LFSR. In general, the arithmetic behind LFSRs makes them very elegant objects to study and implement. One can produce relatively complex logic with simple building blocks. However, other methods that are less elegant but perform better should be considered as well.
Fibonacci LFSRs
The bit positions that affect the next state are called the taps. In the diagram the taps are [16,14,13,11]. The rightmost bit of the LFSR is called the output bit. The taps are XOR'd sequentially with the output bit and then fed back into the leftmost bit. The sequence of bits in the rightmost position is called the output stream.
A maximum-length LFSR produces an m-sequence (i.e., it cycles through all possible 2^m − 1 states within the shift register except the state where all bits are zero), unless it contains all zeros, in which case it will never change.
As an alternative to the XOR-based feedback in an LFSR, one can also use XNOR. This function is an affine map, not strictly a linear map, but it results in an equivalent polynomial counter whose state is the complement of the state of an LFSR. A state with all ones is illegal when using an XNOR feedback, in the same way as a state with all zeroes is illegal when using XOR. This state is considered illegal because the counter would remain "locked-up" in this state. This method can be advantageous in hardware LFSRs using flip-flops that start in a zero state, as it does not start in a lockup state, meaning that the register does not need to be seeded in order to begin operation.
The sequence of numbers generated by an LFSR or its XNOR counterpart can be considered a binary numeral system just as valid as Gray code or the natural binary code.
The arrangement of taps for feedback in an LFSR can be expressed in finite field arithmetic as a polynomial mod 2. This means that the coefficients of the polynomial must be 1s or 0s. This is called the feedback polynomial or reciprocal characteristic polynomial. For example, if the taps are at the 16th, 14th, 13th and 11th bits (as shown), the feedback polynomial is x^16 + x^14 + x^13 + x^11 + 1.
The "one" in the polynomial does not correspond to a tap – it corresponds to the input to the first bit (i.e. x^0, which is equivalent to 1). The powers of the terms represent the tapped bits, counting from the left. The first and last bits are always connected as an input and output tap respectively.
The LFSR is maximal-length if and only if the corresponding feedback polynomial is primitive. This means that the following conditions are necessary (but not sufficient):
The number of taps is even.
The set of taps is setwise co-prime; i.e., there must be no divisor other than 1 common to all taps.
Tables of primitive polynomials from which maximum-length LFSRs can be constructed are given below and in the references.
There can be more than one maximum-length tap sequence for a given LFSR length. Also, once one maximum-length tap sequence has been found, another automatically follows. If the tap sequence in an n-bit LFSR is [n, A, B, C, 0], where the 0 corresponds to the x^0 = 1 term, then the corresponding "mirror" sequence is [n, n − C, n − B, n − A, 0]. So each maximum-length tap sequence has such a mirrored counterpart. Both give a maximum-length sequence.
An example in C is below:
#include <stdint.h>
unsigned lfsr_fib(void)
{
uint16_t start_state = 0xACE1u; /* Any nonzero start state will work. */
uint16_t lfsr = start_state;
uint16_t bit; /* Must be 16-bit to allow bit<<15 later in the code */
unsigned period = 0;
do
{ /* taps: 16 14 13 11; feedback polynomial: x^16 + x^14 + x^13 + x^11 + 1 */
bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
lfsr = (lfsr >> 1) | (bit << 15);
++period;
}
while (lfsr != start_state);
return period;
}
If a fast parity or popcount operation is available, the feedback bit can be computed more efficiently as the dot product of the register with the characteristic polynomial:
bit = parity(lfsr & 0x002Du);, or equivalently
bit = popcnt(lfsr & 0x002Du) /* & 1u */;. (The & 1u turns the popcnt into a true parity function, but the bitshift later bit << 15 makes higher bits irrelevant.)
If a rotation operation is available, the new state can be computed as
lfsr = rotateright((lfsr & ~1u) | (bit & 1u), 1);, or equivalently
lfsr = rotateright(((bit ^ lfsr) & 1u) ^ lfsr, 1);
This LFSR configuration is also known as standard, many-to-one or external XOR gates. The alternative Galois configuration is described in the next section.
Example in Python
A sample Python implementation of a similar Fibonacci LFSR (16-bit, with taps at [16,15,13,4]) would be:
state = 1 << 15 | 1
while True:
    print(state & 1, end='')
    # taps 16, 15, 13, 4; feedback polynomial x^16 + x^15 + x^13 + x^4 + 1
    newbit = (state ^ (state >> 1) ^ (state >> 3) ^ (state >> 12)) & 1
    state = (state >> 1) | (newbit << 15)
Here a 16-bit register is used, and the XOR taps at the 4th, 13th, 15th and 16th bits establish a maximum-length sequence.
Galois LFSRs
Named after the French mathematician Évariste Galois, an LFSR in Galois configuration, which is also known as modular, internal XORs, or one-to-many LFSR, is an alternate structure that can generate the same output stream as a conventional LFSR (but offset in time). In the Galois configuration, when the system is clocked, bits that are not taps are shifted one position to the right unchanged. The taps, on the other hand, are XORed with the output bit before they are stored in the next position. The new output bit is the next input bit. The effect of this is that when the output bit is zero, all the bits in the register shift to the right unchanged, and the input bit becomes zero. When the output bit is one, the bits in the tap positions all flip (if they are 0, they become 1, and if they are 1, they become 0), and then the entire register is shifted to the right and the input bit becomes 1.
To generate the same output stream, the order of the taps is the counterpart (see above) of the order for the conventional LFSR, otherwise the stream will be in reverse. Note that the internal state of the LFSR is not necessarily the same. The Galois register shown has the same output stream as the Fibonacci register in the first section. A time offset exists between the streams, so a different startpoint will be needed to get the same output each cycle.
Galois LFSRs do not concatenate every tap to produce the new input (the XORing is done within the LFSR, and no XOR gates are run in serial, therefore the propagation times are reduced to that of one XOR rather than a whole chain), thus it is possible for each tap to be computed in parallel, increasing the speed of execution.
In a software implementation of an LFSR, the Galois form is more efficient, as the XOR operations can be implemented a word at a time: only the output bit must be examined individually.
Below is a C code example for the 16-bit maximal-period Galois LFSR example in the figure:
#include <stdint.h>
unsigned lfsr_galois(void)
{
uint16_t start_state = 0xACE1u; /* Any nonzero start state will work. */
uint16_t lfsr = start_state;
unsigned period = 0;
do
{
#ifndef LEFT
unsigned lsb = lfsr & 1u; /* Get LSB (i.e., the output bit). */
lfsr >>= 1; /* Shift register */
if (lsb) /* If the output bit is 1, */
lfsr ^= 0xB400u; /* apply toggle mask. */
#else
unsigned msb = (int16_t) lfsr < 0; /* Get MSB (i.e., the output bit). */
lfsr <<= 1; /* Shift register */
if (msb) /* If the output bit is 1, */
lfsr ^= 0x002Du; /* apply toggle mask. */
#endif
++period;
}
while (lfsr != start_state);
return period;
}
The branch if (lsb) lfsr ^= 0xB400u; can also be written as lfsr ^= (-lsb) & 0xB400u;, which may produce more efficient code on some compilers. In addition, the left-shifting variant may produce even better code, as the msb is the carry from the addition of lfsr to itself.
Non-binary Galois LFSR
Binary Galois LFSRs like the ones shown above can be generalized to any q-ary alphabet {0, 1, ..., q − 1} (e.g., for binary, q = 2, and the alphabet is simply {0, 1}). In this case, the exclusive-or component is generalized to addition modulo-q (note that XOR is addition modulo 2), and the feedback bit (output bit) is multiplied (modulo-q) by a q-ary value, which is constant for each specific tap point. Note that this is also a generalization of the binary case, where the feedback is multiplied by either 0 (no feedback, i.e., no tap) or 1 (feedback is present). Given an appropriate tap configuration, such LFSRs can be used to generate Galois fields for arbitrary prime values of q.
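As a rough illustration of this generalisation, the following C sketch performs one step of a q-ary Galois-style register by multiplying the state polynomial by x and reducing it modulo a feedback polynomial with digit coefficients. The modulus Q, the length N and the coefficient values here are illustrative assumptions only and are not claimed to yield a maximal-length sequence.

#define Q 3   /* q-ary digit alphabet {0, 1, 2}; illustrative choice */
#define N 4   /* register length; illustrative choice */

/* Coefficients c[k] such that x^N is identified with c[N-1]*x^(N-1) + ... + c[0]
   modulo the feedback polynomial; placeholder values, not a verified tap set. */
static const unsigned c[N] = { 2, 0, 1, 1 };
static unsigned digit[N]; /* current state: one q-ary digit per cell, seeded nonzero */

/* One Galois-style step: multiply the state polynomial by x, reduce it modulo
   the feedback polynomial, and return the digit that overflows (the output). */
unsigned qary_galois_step(void)
{
    unsigned out = digit[N - 1];        /* coefficient of x^(N-1) before shifting */
    for (int i = N - 1; i > 0; --i)     /* multiply by x: shift every digit up */
        digit[i] = digit[i - 1];
    digit[0] = 0;
    for (int k = 0; k < N; ++k)         /* reduce: add out * c[k] to digit k, mod Q */
        digit[k] = (digit[k] + out * c[k]) % Q;
    return out;
}

For Q = 2 this degenerates into the binary Galois LFSR above, with the multiplications by c[k] reducing to the conditional XOR of a toggle mask.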
Xorshift LFSRs
As shown by George Marsaglia and further analysed by Richard P. Brent, linear feedback shift registers can be implemented using XOR and Shift operations. This approach lends itself to fast execution in software because these operations typically map efficiently into modern processor instructions.
Below is a C code example for a 16-bit maximal-period Xorshift LFSR using the 7,9,13 triplet from John Metcalf:
#include <stdint.h>
unsigned lfsr_xorshift(void)
{
uint16_t start_state = 0xACE1u; /* Any nonzero start state will work. */
uint16_t lfsr = start_state;
unsigned period = 0;
do
{ // 7,9,13 triplet from http://www.retroprogramming.com/2017/07/xorshift-pseudorandom-numbers-in-z80.html
lfsr ^= lfsr >> 7;
lfsr ^= lfsr << 9;
lfsr ^= lfsr >> 13;
++period;
}
while (lfsr != start_state);
return period;
}
Matrix forms
Binary LFSRs of both Fibonacci and Galois configurations can be expressed as linear functions using matrices over the two-element field (see GF(2)). Using the companion matrix of the characteristic polynomial of the LFSR and denoting the seed as a column vector , the state of the register in Fibonacci configuration after k steps is given by
The matrix for the corresponding Galois form is:
For a suitable initialisation,
the top coefficient of the column vector :
gives the term of the original sequence.
These forms generalize naturally to arbitrary fields.
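One standard way to write such a companion matrix (conventions differ between sources; the Fibonacci and Galois orientations are transposes or permutations of one another) is, for a characteristic polynomial x^n + c_{n−1}x^{n−1} + ⋯ + c_1 x + c_0 over GF(2),

A \;=\;
\begin{pmatrix}
0 & 0 & \cdots & 0 & c_0 \\
1 & 0 & \cdots & 0 & c_1 \\
0 & 1 & \cdots & 0 & c_2 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & c_{n-1}
\end{pmatrix},
\qquad
s_k \;=\; A^{k} s_0 \pmod 2,

where s_0 is the seed written as a column vector and s_k is the state after k steps.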
Example polynomials for maximal LFSRs
The following table lists examples of maximal-length feedback polynomials (primitive polynomials) for shift-register lengths up to 24. The formalism for maximum-length LFSRs was developed by Solomon W. Golomb in his 1967 book. The number of different primitive polynomials grows exponentially with shift-register length and can be calculated exactly using Euler's totient function.
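The exact count of primitive polynomials of degree n over GF(2), and hence of distinct maximal-length tap sets of that length, is given by the standard formula

\frac{\varphi(2^n - 1)}{n}, \qquad \text{e.g. for } n = 16:\ \frac{\varphi(65535)}{16} = \frac{32768}{16} = 2048 .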
Xilinx published an extended list of taps for LFSR counters up to 168 bits. Tables of maximum-length polynomials are available from http://users.ece.cmu.edu/~koopman/lfsr/ and can be generated by the https://github.com/hayguen/mlpolygen project.
Output-stream properties
Ones and zeroes occur in "runs". The output stream 1110010, for example, consists of four runs of lengths 3, 2, 1, 1, in order. In one period of a maximal LFSR, 2^(n−1) runs occur (in the example above, the 3-bit LFSR has 4 runs). Exactly half of these runs are one bit long, a quarter are two bits long, up to a single run of zeroes n − 1 bits long, and a single run of ones n bits long. This distribution almost equals the statistical expectation value for a truly random sequence. However, the probability of finding exactly this distribution in a sample of a truly random sequence is rather low.
LFSR output streams are deterministic. If the present state and the positions of the XOR gates in the LFSR are known, the next state can be predicted. This is not possible with truly random events. With maximal-length LFSRs it is much easier to compute the next state, as there is only a limited, easily enumerated number of them for each length.
The output stream is reversible; an LFSR with mirrored taps will cycle through the output sequence in reverse order.
The value consisting of all zeros cannot appear. Thus an LFSR of length n cannot be used to generate all 2^n values.
Applications
LFSRs can be implemented in hardware, and this makes them useful in applications that require very fast generation of a pseudo-random sequence, such as direct-sequence spread spectrum radio. LFSRs have also been used for generating an approximation of white noise in various programmable sound generators.
Uses as counters
The repeating sequence of states of an LFSR allows it to be used as a clock divider or as a counter when a non-binary sequence is acceptable, as is often the case where computer index or framing locations need to be machine-readable. LFSR counters have simpler feedback logic than natural binary counters or Gray-code counters, and therefore can operate at higher clock rates. However, it is necessary to ensure that the LFSR never enters an all-zeros state, for example by presetting it at start-up to any other state in the sequence.
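For illustration, the following sketch uses a small Galois LFSR as a divide-by-15 counter; the 4-bit tap choice (feedback polynomial x^4 + x^3 + 1, toggle mask 0xC) is one maximal-length option, and the terminal-count convention is an arbitrary choice for this example.

#include <stdint.h>

/* Sketch: a 4-bit maximal-length Galois LFSR used as a divide-by-15 counter.
   Any nonzero 4-bit seed works; the flag goes high once every 15 calls. */
unsigned lfsr_divide_by_15(void)
{
    static uint8_t lfsr = 0x1u;                          /* nonzero seed, never all-zeros */
    unsigned lsb = lfsr & 1u;
    lfsr = (uint8_t)((lfsr >> 1) ^ (lsb ? 0xCu : 0u));   /* taps for x^4 + x^3 + 1 */
    return lfsr == 0x1u;                                 /* terminal count: seed reached again */
}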
The table of primitive polynomials shows how LFSRs can be arranged in Fibonacci or Galois form to give maximal periods. One can obtain any other period by adding to an LFSR that has a longer period some logic that shortens the sequence by skipping some states.
Uses in cryptography
LFSRs have long been used as pseudo-random number generators for use in stream ciphers, due to the ease of construction from simple electromechanical or electronic circuits, long periods, and very uniformly distributed output streams. However, an LFSR is a linear system, leading to fairly easy cryptanalysis. For example, given a stretch of known plaintext and corresponding ciphertext, an attacker can intercept and recover a stretch of LFSR output stream used in the system described, and from that stretch of the output stream can construct an LFSR of minimal size that simulates the intended receiver by using the Berlekamp-Massey algorithm. This LFSR can then be fed the intercepted stretch of output stream to recover the remaining plaintext.
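As a sketch of the first step of such an attack (the Berlekamp-Massey stage itself is not shown here), recovering the keystream segment from a known plaintext/ciphertext pair of a binary additive stream cipher is a plain XOR:

#include <stddef.h>
#include <stdint.h>

/* Sketch: recover a stretch of keystream from known plaintext and the matching
   ciphertext of a binary additive stream cipher; the recovered bits could then
   be fed to Berlekamp-Massey to reconstruct an equivalent LFSR. */
void recover_keystream(const uint8_t *plaintext, const uint8_t *ciphertext,
                       uint8_t *keystream, size_t len)
{
    for (size_t i = 0; i < len; ++i)
        keystream[i] = plaintext[i] ^ ciphertext[i];
}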
Three general methods are employed to reduce this problem in LFSR-based stream ciphers:
Non-linear combination of several bits from the LFSR state;
Non-linear combination of the output bits of two or more LFSRs (see also: shrinking generator); or using Evolutionary algorithm to introduce non-linearity.
Irregular clocking of the LFSR, as in the alternating step generator.
Important LFSR-based stream ciphers include A5/1 and A5/2, used in GSM cell phones, E0, used in Bluetooth, and the shrinking generator. The A5/2 cipher has been broken and both A5/1 and E0 have serious weaknesses.
The linear feedback shift register has a strong relationship to linear congruential generators.
Uses in circuit testing
LFSRs are used in circuit testing for test-pattern generation (for exhaustive testing, pseudo-random testing or pseudo-exhaustive testing) and for signature analysis.
Test-pattern generation
Complete LFSRs are commonly used as pattern generators for exhaustive testing, since they cover all possible inputs for an n-input circuit. Maximal-length LFSRs and weighted LFSRs are widely used as pseudo-random test-pattern generators for pseudo-random test applications.
Signature analysis
In built-in self-test (BIST) techniques, storing all the circuit outputs on chip is not possible, but the circuit output can be compressed to form a signature that will later be compared to the golden signature (of the good circuit) to detect faults. Since this compression is lossy, there is always a possibility that a faulty output also generates the same signature as the golden signature and the faults cannot be detected. This condition is called error masking or aliasing. BIST is accomplished with a multiple-input signature register (MISR or MSR), which is a type of LFSR. A standard LFSR has a single XOR or XNOR gate, where the input of the gate is connected to several "taps" and the output is connected to the input of the first flip-flop. A MISR has the same structure, but the input to every flip-flop is fed through an XOR/XNOR gate. For example, a 4-bit MISR has a 4-bit parallel output and a 4-bit parallel input. The input of the first flip-flop is XOR/XNORd with parallel input bit zero and the "taps". Every other flip-flop input is XOR/XNORd with the preceding flip-flop output and the corresponding parallel input bit. Consequently, the next state of the MISR depends on the last several states as opposed to just the current state. Therefore, a MISR will always generate the same golden signature given that the input sequence is the same every time.
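A minimal software model of one MISR update step, in the style of the Galois example above; the 16-bit width, the 0xB400 mask and the way the parallel input word is folded in are illustrative assumptions rather than a fixed standard:

#include <stdint.h>

/* Sketch: one update of a 16-bit multiple-input signature register (MISR),
   modelled as an ordinary Galois LFSR step with the circuit's parallel output
   word XORed into the register.  After all test responses have been clocked
   in, the final value is the signature compared against the golden signature. */
uint16_t misr_step(uint16_t signature, uint16_t parallel_in)
{
    unsigned lsb = signature & 1u;
    signature >>= 1;
    if (lsb)
        signature ^= 0xB400u;      /* same toggle mask as the Galois example */
    return signature ^ parallel_in;
}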
Recent applications are proposing set-reset flip-flops as "taps" of the LFSR. This allows the BIST system to optimise storage, since set-reset flip-flops can save the initial seed to generate the whole stream of bits from the LFSR. Nevertheless, this requires changes in the architecture of BIST, and is an option for specific applications.
Uses in digital broadcasting and communications
Scrambling
To prevent short repeating sequences (e.g., runs of 0s or 1s) from forming spectral lines that may complicate symbol tracking at the receiver or interfere with other transmissions, the data bit sequence is combined with the output of a linear-feedback register before modulation and transmission. This scrambling is removed at the receiver after demodulation. When the LFSR runs at the same bit rate as the transmitted symbol stream, this technique is referred to as scrambling. When the LFSR runs considerably faster than the symbol stream, the LFSR-generated bit sequence is called chipping code. The chipping code is combined with the data using exclusive or before transmitting using binary phase-shift keying or a similar modulation method. The resulting signal has a higher bandwidth than the data, and therefore this is a method of spread-spectrum communication. When used only for the spread-spectrum property, this technique is called direct-sequence spread spectrum; when used to distinguish several signals transmitted in the same channel at the same time and frequency, it is called code-division multiple access.
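A sketch of additive (synchronous) scrambling using the 16-bit Fibonacci register from earlier as the sequence generator; the seed value is an arbitrary shared choice, and descrambling is the identical operation performed with an identically seeded register:

#include <stdint.h>

static uint16_t scrambler_lfsr = 0xACE1u;   /* shared, pre-agreed nonzero seed */

/* Sketch: XOR one data bit with the next bit of the LFSR sequence
   (taps 16, 14, 13, 11, as in the Fibonacci example).  Running the same
   function at the receiver, with the same seed, removes the scrambling. */
unsigned scramble_bit(unsigned data_bit)
{
    unsigned seq = scrambler_lfsr & 1u;     /* next bit of the scrambling sequence */
    unsigned fb  = ((scrambler_lfsr >> 0) ^ (scrambler_lfsr >> 2) ^
                    (scrambler_lfsr >> 3) ^ (scrambler_lfsr >> 5)) & 1u;
    scrambler_lfsr = (uint16_t)((scrambler_lfsr >> 1) | (fb << 15));
    return (data_bit ^ seq) & 1u;
}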
Neither scheme should be confused with encryption or encipherment; scrambling and spreading with LFSRs do not protect the information from eavesdropping. They are instead used to produce equivalent streams that possess convenient engineering properties to allow robust and efficient modulation and demodulation.
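As a rough illustration, the Python sketch below implements an additive (synchronous) scrambler: each data bit is XORed with one bit from a free-running LFSR, and applying the same operation with an identically seeded LFSR at the receiver restores the data. The 7-bit register and tap mask (corresponding to x^7 + x^4 + 1, the generator polynomial also used by the IEEE 802.11 scrambler) are illustrative; the shift-register construction here is a generic Galois form, not the exact circuit of any particular standard.

```python
def lfsr_bits(seed: int, taps: int, width: int):
    """Yield a pseudo-random bit stream from a Galois LFSR."""
    state = seed & ((1 << width) - 1)
    while True:
        bit = state & 1
        yield bit
        state >>= 1
        if bit:
            state ^= taps

def scramble(bits, seed=0b1011101, taps=0b1001000, width=7):
    """Additive scrambling: XOR each data bit with one LFSR keystream bit.
    Running the same function over the scrambled bits descrambles them."""
    keystream = lfsr_bits(seed, taps, width)
    return [b ^ next(keystream) for b in bits]

data = [0] * 12 + [1] * 12            # long runs of 0s and 1s
scrambled = scramble(data)            # runs are broken up before transmission
assert scramble(scrambled) == data    # an identically seeded receiver recovers the data
```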
Digital broadcasting systems that use linear-feedback registers:
ATSC Standards (digital TV transmission system – North America)
DAB (Digital Audio Broadcasting system – for radio)
DVB-T (digital TV transmission system – Europe, Australia, parts of Asia)
NICAM (digital audio system for television)
Other digital communications systems using LFSRs:
INTELSAT business service (IBS)
Intermediate data rate (IDR)
HDMI 2.0
SDI (Serial Digital Interface transmission)
Data transfer over PSTN (according to the ITU-T V-series recommendations)
CDMA (Code Division Multiple Access) cellular telephony
100BASE-T2 "fast" Ethernet scrambles bits using an LFSR
1000BASE-T Ethernet, the most common form of Gigabit Ethernet, scrambles bits using an LFSR
PCI Express
SATA
Serial Attached SCSI (SAS/SPL)
USB 3.0
IEEE 802.11a scrambles bits using an LFSR
The Bluetooth Low Energy link layer uses an LFSR for data whitening
Satellite navigation systems such as GPS and GLONASS. All current systems use LFSR outputs to generate some or all of their ranging codes (as the chipping code for CDMA or DSSS) or to modulate the carrier without data (like GPS L2 CL ranging code). GLONASS also uses frequency-division multiple access combined with DSSS.
Other uses
LFSRs are also used in radio jamming systems to generate pseudo-random noise to raise the noise floor of a target communication system.
The German time signal DCF77, in addition to amplitude keying, employs phase-shift keying driven by a 9-stage LFSR to increase the accuracy of received time and the robustness of the data stream in the presence of noise.
See also
Pinwheel
Mersenne twister
Maximum length sequence
Analog feedback shift register
NLFSR, Non-Linear Feedback Shift Register
Ring counter
Pseudo-random binary sequence
Gold sequence
JPL sequence
Kasami sequence
References
Further reading
http://www.xilinx.com/support/documentation/application_notes/xapp052.pdf
https://web.archive.org/web/20161007061934/http://courses.cse.tamu.edu/csce680/walker/lfsr_table.pdf
http://users.ece.cmu.edu/~koopman/lfsr/index.html -- Tables of maximum length feedback polynomials for 2-64 bits.
https://github.com/hayguen/mlpolygen -- Code for generating maximal length feedback polynomials
External links
– LFSR theory and implementation, maximal length sequences, and comprehensive feedback tables for lengths from 7 to 16,777,215 (3 to 24 stages), and partial tables for lengths up to 4,294,967,295 (25 to 32 stages).
International Telecommunications Union Recommendation O.151 (August 1992)
Maximal Length LFSR table with length from 2 to 67.
Pseudo-Random Number Generation Routine for the MAX765x Microprocessor
http://www.ece.ualberta.ca/~elliott/ee552/studentAppNotes/1999f/Drivers_Ed/lfsr.html
http://www.quadibloc.com/crypto/co040801.htm
Simple explanation of LFSRs for Engineers
Feedback terms
General LFSR Theory
An implementation of LFSR in VHDL.
Simple VHDL coding for Galois and Fibonacci LFSR.
mlpolygen: A Maximal Length polynomial generator
LSFR and Intrinsic Generation of Randomness: Notes From NKS
Binary arithmetic
Digital registers
Cryptographic algorithms
Pseudorandom number generators
Articles with example C code |
59114 | https://en.wikipedia.org/wiki/Packet%20analyzer | Packet analyzer | A packet analyzer, also known as packet sniffer, protocol analyzer, or network analyzer, is a computer program or computer hardware such as a packet capture appliance, that can intercept and log traffic that passes over a computer network or part of a network. Packet capture is the process of intercepting and logging traffic. As data streams flow across the network, the analyzer captures each packet and, if needed, decodes the packet's raw data, showing the values of various fields in the packet, and analyzes its content according to the appropriate RFC or other specifications.
A packet analyzer used for intercepting traffic on wireless networks is known as a wireless analyzer or WiFi analyzer. While a packet analyzer can also be referred to as a network analyzer or protocol analyzer, these terms also have other meanings. "Protocol analyzer" can technically denote a broader, more general class that includes packet analyzers/sniffers; however, the terms are frequently used interchangeably.
Capabilities
On wired shared-medium networks, such as Ethernet, Token Ring, and FDDI, it may be possible to capture all traffic on the network from a single machine, depending on the network structure (hub or switch). On modern networks, traffic can be captured from a switch that supports port mirroring, which copies all packets passing through designated ports to a monitoring port. A network tap is an even more reliable solution than a monitoring port, since taps are less likely to drop packets under high traffic load.
On wireless LANs, traffic can be captured on one channel at a time, or by using multiple adapters, on several channels simultaneously.
On wired broadcast and wireless LANs, to capture unicast traffic between other machines, the network adapter capturing the traffic must be in promiscuous mode. On wireless LANs, even if the adapter is in promiscuous mode, packets not for the service set the adapter is configured for are usually ignored. To see those packets, the adapter must be in monitor mode. No special provisions are required to capture multicast traffic to a multicast group the packet analyzer is already monitoring, or broadcast traffic.
When traffic is captured, either the entire contents of packets or just the headers are recorded. Recording just headers reduces storage requirements, and avoids some privacy legal issues, yet often provides sufficient information to diagnose problems.
Captured information is decoded from raw digital form into a human-readable format that lets engineers review exchanged information. Protocol analyzers vary in their abilities to display and analyze data.
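As a rough, self-contained illustration of that decoding step, the Python sketch below unpacks the Ethernet II and IPv4 header fields of an already-captured frame. The function name and the synthetic frame are hypothetical; a real analyzer would obtain frames from a capture facility such as a raw socket or a capture library.

```python
import struct

def decode_ethernet_ipv4(frame: bytes) -> dict:
    """Decode the Ethernet II and (if present) IPv4 headers of a captured frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])   # dst MAC, src MAC, EtherType
    fields = {
        "dst_mac": ":".join(f"{b:02x}" for b in dst),
        "src_mac": ":".join(f"{b:02x}" for b in src),
        "ethertype": hex(ethertype),
    }
    if ethertype == 0x0800:                                      # IPv4 payload
        ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, s, d = \
            struct.unpack("!BBHHHBBH4s4s", frame[14:34])
        fields.update({
            "ip_version": ver_ihl >> 4,
            "header_len": (ver_ihl & 0x0F) * 4,
            "ttl": ttl,
            "protocol": proto,                                   # 6 = TCP, 17 = UDP
            "src_ip": ".".join(str(b) for b in s),
            "dst_ip": ".".join(str(b) for b in d),
        })
    return fields

# Synthetic 34-byte frame (broadcast Ethernet header + minimal IPv4 header), for demonstration only.
demo = bytes.fromhex(
    "ffffffffffff" "001122334455" "0800"
    "450000140001000040060000" "c0a80001" "c0a80002"
)
print(decode_ethernet_ipv4(demo))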
Some protocol analyzers can also generate traffic. These can act as protocol testers. Such testers generate protocol-correct traffic for functional testing, and may also have the ability to deliberately introduce errors to test the device under test's ability to handle errors.
Protocol analyzers can also be hardware-based, either in probe format or, as is increasingly common, combined with a disk array. These devices record packets or packet headers to a disk array.
Uses
Packet analyzers can:
Analyze network problems
Detect network intrusion attempts
Detect network misuse by internal and external users
Document regulatory compliance by logging all perimeter and endpoint traffic
Gain information for effecting a network intrusion
Identify data collection and sharing of software such as operating systems (for strengthening privacy, control and security)
Aid in gathering information to isolate exploited systems
Monitor WAN bandwidth utilization
Monitor network usage (including internal and external users and systems)
Monitor data in transit
Monitor WAN and endpoint security status
Gather and report network statistics
Identify suspect content in network traffic
Troubleshoot performance problems by monitoring network data from an application
Serve as the primary data source for day-to-day network monitoring and management
Spy on other network users and collect sensitive information such as login details or user cookies (depending on any content encryption methods that may be in use)
Reverse engineer proprietary protocols used over the network
Debug client/server communications
Debug network protocol implementations
Verify adds, moves, and changes
Verify internal control system effectiveness (firewalls, access control, Web filter, spam filter, proxy)
Packet capture can be used to fulfill a warrant from a law enforcement agency to wiretap all network traffic generated by an individual. Internet service providers and VoIP providers in the United States must comply with Communications Assistance for Law Enforcement Act regulations. Using packet capture and storage, telecommunications carriers can provide the legally required secure and separate access to targeted network traffic and can use the same device for internal security purposes. Collecting data from a carrier system without a warrant is illegal due to laws about interception. By using end-to-end encryption, communications can be kept confidential from telecommunication carriers and legal authorities.
Notable packet analyzers
Allegro Network Multimeter
Capsa Network Analyzer
Charles Web Debugging Proxy
Carnivore (software)
CommView
dSniff
EndaceProbe Analytics Platform by Endace
ettercap
Fiddler
Kismet
Lanmeter
Microsoft Network Monitor
NarusInsight
NetScout Systems nGenius Infinistream
ngrep, Network Grep
OmniPeek, Omnipliance by Savvius
SkyGrabber
The Sniffer
snoop
tcpdump
Observer Analyzer
Wireshark (formerly known as Ethereal)
Xplico Open source Network Forensic Analysis Tool
See also
Bus analyzer
Logic analyzer
Network detector
pcap
Signals intelligence
Traffic generation model
Notes
References
External links
Multi-Tap Network Packet Capture
WiFi Adapter for Packet analyzer
Network analyzers
Packets (information technology)
Wireless networking
Deep packet capture |
59458 | https://en.wikipedia.org/wiki/ElGamal%20encryption | ElGamal encryption | In cryptography, the ElGamal encryption system is an asymmetric key encryption algorithm for public-key cryptography which is based on the Diffie–Hellman key exchange. It was described by Taher Elgamal in 1985. ElGamal encryption is used in the free GNU Privacy Guard software, recent versions of PGP, and other cryptosystems. The Digital Signature Algorithm (DSA) is a variant of the ElGamal signature scheme, which should not be confused with ElGamal encryption.
ElGamal encryption can be defined over any cyclic group G, such as the multiplicative group of integers modulo n. Its security depends upon the difficulty of a certain problem in G related to computing discrete logarithms.
The algorithm
ElGamal encryption consists of three components: the key generator, the encryption algorithm, and the decryption algorithm.
Key generation
The first party, Alice, generates a key pair as follows:
Generate an efficient description of a cyclic group G of order q with generator g. Let e represent the unit element of G.
Choose an integer x randomly from {1, ..., q − 1}.
Compute h = g^x.
The public key consists of the values (G, q, g, h). Alice publishes this public key and retains x as her private key, which must be kept secret.
Encryption
A second party, Bob, encrypts a message to Alice under her public key as follows:
Map the message M to an element m of G using a reversible mapping function.
Choose an integer y randomly from {1, ..., q − 1}.
Compute s = h^y. This is called the shared secret.
Compute c1 = g^y.
Compute c2 = m · s.
Bob sends the ciphertext (c1, c2) to Alice.
Note that if one knows both the ciphertext (c1, c2) and the plaintext m, one can easily find the shared secret s, since s = c2 · m^(−1). Therefore, a new y and hence a new s is generated for every message to improve security. For this reason, y is also called an ephemeral key.
Decryption
Alice decrypts a ciphertext (c1, c2) with her private key x as follows:
Compute s = c1^x. Since c1 = g^y, c1^x = g^(xy) = h^y, and thus it is the same shared secret that was used by Bob in encryption.
Compute s^(−1), the inverse of s in the group G. This can be computed in one of several ways. If G is a subgroup of a multiplicative group of integers modulo p, where p is prime, the modular multiplicative inverse can be computed using the extended Euclidean algorithm. An alternative is to compute s^(−1) as c1^(q−x). This is the inverse of s because of Lagrange's theorem, since s · c1^(q−x) = g^(xy) · g^((q−x)y) = (g^q)^y = e^y = e.
Compute m = c2 · s^(−1). This calculation produces the original message m, because c2 = m · s; hence c2 · s^(−1) = (m · s) · s^(−1) = m · e = m.
Map m back to the plaintext message M.
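A minimal Python sketch of the key generation, encryption, and decryption steps above, using a deliberately tiny safe-prime group for illustration only. The specific parameters (p = 2579, g = 4) and function names are illustrative assumptions; real deployments use groups of 2048 bits or more together with a proper message-encoding scheme.

```python
import secrets

# Toy parameters: p is prime and q = (p - 1) // 2 is also prime, so the squares
# modulo p form a subgroup of prime order q generated by g. Illustration only.
p = 2579
q = (p - 1) // 2          # 1289, also prime
g = 4                     # a square mod p, hence a generator of the order-q subgroup

def keygen():
    x = secrets.randbelow(q - 1) + 1      # private key from {1, ..., q - 1}
    h = pow(g, x, p)                      # public key component h = g^x
    return x, h

def encrypt(h, m):
    """m must already be mapped into the subgroup (here: any square mod p)."""
    y = secrets.randbelow(q - 1) + 1      # ephemeral key, fresh for every message
    s = pow(h, y, p)                      # shared secret s = h^y
    return pow(g, y, p), (m * s) % p      # ciphertext (c1, c2)

def decrypt(x, c1, c2):
    s = pow(c1, x, p)                     # same shared secret: c1^x = g^(xy)
    return (c2 * pow(s, -1, p)) % p       # m = c2 * s^(-1)   (Python 3.8+ modular inverse)

x, h = keygen()
m = pow(3, 2, p)                          # 9, a square, hence in the subgroup
c1, c2 = encrypt(h, m)
assert decrypt(x, c1, c2) == m
```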
Practical use
Like most public key systems, the ElGamal cryptosystem is usually used as part of a hybrid cryptosystem, where the message itself is encrypted using a symmetric cryptosystem, and ElGamal is then used to encrypt only the symmetric key. This is because asymmetric cryptosystems like ElGamal are usually slower than symmetric ones for the same level of security, so it is faster to encrypt the message, which can be arbitrarily large, with a symmetric cipher, and then use ElGamal only to encrypt the symmetric key, which usually is quite small compared to the size of the message.
Security
The security of the ElGamal scheme depends on the properties of the underlying group G as well as any padding scheme used on the messages. If the computational Diffie–Hellman assumption (CDH) holds in the underlying cyclic group G, then the encryption function is one-way.
If the decisional Diffie–Hellman assumption (DDH) holds in G, then ElGamal achieves semantic security. Semantic security is not implied by the computational Diffie–Hellman assumption alone. See decisional Diffie–Hellman assumption for a discussion of groups where the assumption is believed to hold.
ElGamal encryption is unconditionally malleable, and therefore is not secure under chosen ciphertext attack. For example, given an encryption (c1, c2) of some (possibly unknown) message m, one can easily construct a valid encryption (c1, 2·c2) of the message 2m.
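Continuing the toy sketch from the decryption section above (and reusing its p, x, m, decrypt, and the ciphertext (c1, c2)), the following lines illustrate this malleability: scaling c2 by a group element t yields a valid encryption of t·m, without any knowledge of the private key.

```python
t = pow(5, 2, p)                        # any element of the subgroup (here 25)
forged = (t * c2) % p                   # attacker rescales the second ciphertext component
assert decrypt(x, c1, forged) == (t * m) % p
```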
To achieve chosen-ciphertext security, the scheme must be further modified, or an appropriate padding scheme must be used. Depending on the modification, the DDH assumption may or may not be necessary.
Other schemes related to ElGamal which achieve security against chosen ciphertext attacks have also been proposed. The Cramer–Shoup cryptosystem is secure under chosen ciphertext attack assuming DDH holds for G. Its proof does not use the random oracle model. Another proposed scheme is DHAES, whose proof requires an assumption that is weaker than the DDH assumption.
Efficiency
ElGamal encryption is probabilistic, meaning that a single plaintext can be encrypted to many possible ciphertexts, with the consequence that a general ElGamal encryption produces a 1:2 expansion in size from plaintext to ciphertext.
Encryption under ElGamal requires two exponentiations; however, these exponentiations are independent of the message and can be computed ahead of time if needed. Decryption requires one exponentiation and one computation of a group inverse, which can, however, be easily combined into just one exponentiation.
See also
Taher Elgamal, designer of this and other cryptosystems
ElGamal signature scheme
Homomorphic encryption
Further reading
References
Public-key encryption schemes |
59524 | https://en.wikipedia.org/wiki/Next-Generation%20Secure%20Computing%20Base | Next-Generation Secure Computing Base | The Next-Generation Secure Computing Base (NGSCB; codenamed Palladium and also known as Trusted Windows) is a software architecture designed by Microsoft which aimed to provide users of the Windows operating system with better privacy, security, and system integrity. NGSCB was the result of years of research and development within Microsoft to create a secure computing solution that equaled the security of closed platforms such as set-top boxes while simultaneously preserving the backward compatibility, flexibility, and openness of the Windows operating system. Microsoft's primary stated objective with NGSCB was to "protect software from software."
Part of the Trustworthy Computing initiative when unveiled in 2002, NGSCB was to be integrated with Windows Vista, then known as "Longhorn." NGSCB relied on hardware designed by the Trusted Computing Group to produce a parallel operation environment hosted by a new hypervisor (referred to as a sort of kernel in documentation) called the "Nexus" that existed alongside Windows and provided new applications with features such as hardware-based process isolation, data encryption based on integrity measurements, authentication of a local or remote machine or software configuration, and encrypted paths for user authentication and graphics output. NGSCB would facilitate the creation and distribution of digital rights management (DRM) policies pertaining to the use of information.
NGSCB was subject to much controversy during its development, with critics contending that it would impose restrictions on users, enforce vendor lock-in, and undermine fair use rights and open-source software. It was first demonstrated by Microsoft at WinHEC 2003 before undergoing a revision in 2004 that would enable earlier applications to benefit from its functionality. Reports indicated in 2005 that Microsoft would change its plans with NGSCB so that it could ship Windows Vista by its self-imposed deadline year, 2006; instead, Microsoft would ship only part of the architecture, BitLocker, which can optionally use the Trusted Platform Module to validate the integrity of boot and system files prior to operating system startup. Development of NGSCB spanned approximately a decade before its cancellation, the lengthiest development period of a major feature intended for Windows Vista.
NGSCB differed from technologies Microsoft billed as "pillars of Windows Vista"—Windows Presentation Foundation, Windows Communication Foundation, and WinFS—during its development in that it was not built with the .NET Framework and did not focus on managed code software development. NGSCB has yet to fully materialize; however, aspects of it are available in features such as BitLocker of Windows Vista, Measured Boot of Windows 8, Certificate Attestation of Windows 8.1, and Device Guard of Windows 10.
History
Early development
Development of NGSCB began in 1997 after Peter Biddle conceived of new ways to protect content on personal computers. Biddle enlisted assistance from members of the Microsoft Research division, and other core contributors eventually included Blair Dillaway, Brian LaMacchia, Bryan Willman, Butler Lampson, John DeTreville, John Manferdelli, Marcus Peinado, and Paul England. Adam Barr, a former Microsoft employee who worked to secure the remote boot feature during development of Windows 2000, was approached by Biddle and colleagues during his tenure with an initiative tentatively known as "Trusted Windows," which aimed to protect DVD content from being copied. To this end, Lampson proposed the use of a hypervisor to execute a limited operating system dedicated to DVD playback alongside Windows 2000. Patents for a DRM operating system were later filed in 1999 by DeTreville, England, and Lampson; Lampson noted that these patents were for NGSCB. Biddle and colleagues realized by 1999 that NGSCB was more applicable to privacy and security than content protection, and the project was formally given the green light by Microsoft in October 2001.
During WinHEC 1999, Biddle discussed intent to create a "trusted" architecture for Windows to leverage new hardware to promote confidence and security while preserving backward compatibility with previous software. On October 11, 1999, the Trusted Computing Platform Alliance, a consortium of various technology companies including Compaq, Hewlett-Packard, IBM, Intel, and Microsoft was formed in an effort to promote personal computing confidence and security. The TCPA released detailed specifications for a trusted computing platform with focus on features such as code validation and encryption based on integrity measurements, hardware-based key storage, and machine authentication; these features required a new hardware component designed by the TCPA called the "Trusted Platform Module" (referred to as a "Security Support Component", "Security CoProcessor", or "Security Support Processor" in early NGSCB documentation).
At WinHEC 2000, Microsoft released a technical presentation on the topics of protection of privacy, security, and intellectual property titled "Privacy, Security, and Content in Windows Platforms", which focused on turning Windows into a "platform of trust" for computer security, user content, and user privacy. Notable in the presentation is the contention that "there is no difference between privacy protection, computer security, and content protection"—"assurances of trust must be universally true". Microsoft reiterated these claims at WinHEC 2001. NGSCB was intended to protect all forms of content, unlike traditional rights management schemes, which focus only on the protection of audio tracks or movies rather than the users they have the potential to protect; this made NGSCB, in Biddle's words, "egalitarian".
As "Palladium"
Microsoft held its first design review for the NGSCB in April 2002, with approximately 37 companies under a non-disclosure agreement. NGSCB was publicly unveiled under its codename "Palladium" in a June 2002 article by Steven Levy for Newsweek that focused on its design, feature set, and origin. Levy briefly described potential features: access control, authentication, authorization, DRM, encryption, as well as protection from junk mail and malware, with example policies being email accessible only to an intended recipient and Microsoft Word documents readable for only a week after their creation; Microsoft later released a guide clarifying these assertions as being hyperbolic; namely, that NGSCB would not intrinsically enforce content protection, or protect against junk mail or malware. Instead, it would provide a platform on which developers could build new solutions that did not exist, by isolating applications and storing secrets for them. Microsoft was not sure whether to "expose the feature in the Control Panel or present it as a separate utility," but NGSCB would be an opt-in solution—disabled by default.
Microsoft PressPass later interviewed John Manferdelli, who restated and expanded on many of the key points discussed in the article by Newsweek. Manferdelli described it as an evolutionary platform for Windows in July, articulating how "'Palladium' will not require DRM, and DRM will not require 'Palladium'." Microsoft sought a group program manager in August to assist in leading the development of several Microsoft technologies including NGSCB. Paul Otellini announced Intel's support for NGSCB with a set of chipset, platform, and processor technologies codenamed "LaGrande" at Intel Developer Forum 2002, which would provide an NGSCB hardware foundation and preserve backward compatibility with previous software.
As NGSCB
NGSCB was known as "Palladium" until January 24, 2003 when Microsoft announced it had been renamed as "Next-Generation Secure Computing Base." Project manager Mario Juarez stated this name was chosen to avoid legal action from an unnamed company which had acquired the rights to the "Palladium" name, as well as to reflect Microsoft's commitment to NGSCB in the upcoming decade. Juarez acknowledged the previous name was controversial, but denied it was changed by Microsoft to dodge criticism.
The Trusted Computing Platform Alliance was superseded by the Trusted Computing Group in April 2003. A principal goal of the new consortium was to produce a TPM specification compatible with NGSCB; the previous specification, TPM 1.1, did not meet its requirements. TPM 1.2 was designed for compliance with NGSCB and introduced many features for such platforms. The first TPM 1.2 specification, Revision 62, was released in 2003.
Biddle emphasized in June 2003 that hardware vendors and software developers were vital to NGSCB. Microsoft publicly demonstrated NGSCB for the first time at WinHEC 2003, where it protected data in memory from an attacker; prevented access to—and alerted the user of—an application that had been changed; and prevented a remote administration tool from capturing an instant messaging conversation. Despite Microsoft's desire to demonstrate NGSCB on hardware, software emulation was required, as few hardware components were available. Biddle reiterated that NGSCB was a set of evolutionary enhancements to Windows, basing this assessment on preserved backward compatibility and employed concepts in use before its development, but said the capabilities and scenarios it would enable would be revolutionary. Microsoft also revealed its multi-year roadmap for NGSCB, with the next major development milestone scheduled for the Professional Developers Conference, indicating that subsequent versions would ship concurrently with pre-release builds of Windows Vista; however, news reports suggested that NGSCB would not be integrated with Windows Vista when released, but it would instead be made available as separate software for the operating system.
Microsoft also announced details related to adoption and deployment of NGSCB at WinHEC 2003, stating that it would create a new value proposition for customers without significantly increasing the cost of computers; NGSCB adoption during the year of its introductory release was not anticipated and immediate support for servers was not expected. On the last day of the conference, Biddle said NGSCB needed to provide users with a way to differentiate between secured and unsecured windows—that a secure window should be "noticeably different" to help protect users from spoofing attacks; Nvidia was the earliest to announce this feature. WinHEC 2003 represented an important development milestone for NGSCB. Microsoft dedicated several hours to presentations and released many technical whitepapers, and companies including Atmel, Comodo Group, Fujitsu, and SafeNet produced preliminary hardware for the demonstration. Microsoft also demonstrated NGSCB at several U.S. campuses in California and in New York in June 2003.
NGSCB was among the topics discussed during Microsoft's PDC 2003 with a pre-beta software development kit, known as the Developer Preview, being distributed to attendees. The Developer Preview was the first time that Microsoft made NGSCB code available to the developer community and was offered by the company as an educational opportunity for NGSCB software development. With this release, Microsoft stated that it was primarily focused on supporting business and enterprise applications and scenarios with the first version of the NGSCB scheduled to ship with Windows Vista, adding that it intended to address consumers with a subsequent version of the technology, but did not provide an estimated time of delivery for this version. At the conference, Jim Allchin said that Microsoft was continuing to work with hardware vendors so that they would be able to support the technology, and Bill Gates expected a new generation of central processing units to offer full support. Following PDC 2003, NGSCB was demonstrated again on prototype hardware during the annual RSA Security conference in November.
Microsoft announced at WinHEC 2004 that it would revise NGSCB in response to feedback from customers and independent software vendors who did not desire to rewrite their existing programs in order to benefit from its functionality; the revision would also provide more direct support for Windows with protected environments for the operating system, its components, and applications, instead of being an environment unto itself for new applications only. The NGSCB secure input feature would also undergo a significant revision based on cost assessments, hardware requirements, and usability issues of the previous implementation. There were subsequent reports that Microsoft would cease developing NGSCB; Microsoft denied these reports and reaffirmed its commitment to delivery. Additional reports published later that year suggested that Microsoft would make further changes based on feedback from the industry.
Microsoft's lack of continual updates on NGSCB progress in 2005 caused industry insiders to speculate that NGSCB had been cancelled. At the Microsoft Management Summit event, Steve Ballmer said that the company would build on the security foundation it had started with the NGSCB to create a new set of virtualization technologies for Windows, which later became Hyper-V. Reports during WinHEC 2005 indicated that Microsoft had scaled back its plans for NGSCB, so that it could ship Windows Vista—which had already been beset by numerous delays and even a "development reset"—within a reasonable timeframe; instead of isolating components, NGSCB would offer "Secure Startup" ("BitLocker Drive Encryption") to encrypt disk volumes and validate both pre-boot firmware and operating system components. Microsoft intended to deliver other aspects of NGSCB later. Jim Allchin stated NGSCB would "marry hardware and software to gain better security", which was instrumental in the development of BitLocker.
Architecture and technical details
A complete Microsoft-based Trusted Computing-enabled system will consist not only of software components developed by Microsoft but also of hardware components developed by the Trusted Computing Group. The majority of features introduced by NGSCB are heavily reliant on specialized hardware and so will not operate on PCs predating 2004.
In current Trusted Computing specifications, there are two hardware components: the Trusted Platform Module (TPM), which will provide secure storage of cryptographic keys and a secure cryptographic co-processor, and a curtained memory feature in the Central Processing Unit (CPU). In NGSCB, there are two software components, the Nexus, a security kernel that is part of the Operating System which provides a secure environment (Nexus mode) for trusted code to run in, and Nexus Computing Agents (NCAs), trusted modules which run in Nexus mode within NGSCB-enabled applications.
Secure storage and attestation
At the time of manufacture, a cryptographic key is generated and stored within the TPM. This key is never transmitted to any other component, and the TPM is designed in such a way that it is extremely difficult to retrieve the stored key by reverse engineering or any other method, even to the owner. Applications can pass data encrypted with this key to be decrypted by the TPM, but the TPM will only do so under certain strict conditions. Specifically, decrypted data will only ever be passed to authenticated, trusted applications, and will only ever be stored in curtained memory, making it inaccessible to other applications and the Operating System. Although the TPM can only store a single cryptographic key securely, secure storage of arbitrary data is by extension possible by encrypting the data such that it may only be decrypted using the securely stored key.
The TPM is also able to produce a cryptographic signature based on its hidden key. This signature may be verified by the user or by any third party, and so can therefore be used to provide remote attestation that the computer is in a secure state.
Curtained memory
NGSCB also relies on a curtained memory feature provided by the CPU. Data within curtained memory can only be accessed by the application to which it belongs, and not by any other application or the Operating System. The attestation features of the TPM (Trusted Platform Module) can be used to confirm to a trusted application that it is genuinely running in curtained memory; it is therefore very difficult for anyone, including the owner, to trick a trusted application into running outside of curtained memory. This in turn makes reverse engineering of a trusted application extremely difficult.
Applications
NGSCB-enabled applications are to be split into two distinct parts, the NCA, a trusted module with access to a limited Application Programming Interface (API), and an untrusted portion, which has access to the full Windows API. Any code which deals with NGSCB functions must be located within the NCA.
The reason for this split is that the Windows API has developed over many years and is as a result extremely complex and difficult to audit for security bugs. To maximize security, trusted code is required to use a smaller, carefully audited API. Where security is not paramount, the full API is available.
Uses and scenarios
NGSCB enables new categories of applications and scenarios. Examples of uses cited by Microsoft include decentralized access control policies; digital rights management services for consumers, content providers, and enterprises; protected instant messaging conversations and online transactions; and more secure forms of machine health compliance, network authentication, and remote access. NGSCB-secured virtual private network access was one of the earliest scenarios envisaged by Microsoft. NGSCB can also strengthen software update mechanisms such as those belonging to antivirus software or Windows Update.
An early NGSCB privacy scenario conceived of by Microsoft is the "wine purchase scenario," where a user can safely conduct a transaction with an online merchant without divulging personally identifiable information during the transaction. With the release of the NGSCB Developer Preview during PDC 2003, Microsoft emphasized the following enterprise applications and scenarios: document signing, secured data viewing, secured instant messaging, and secured plug-ins for emailing.
WinHEC 2004 scenarios
During WinHEC 2004, Microsoft revealed two features based on its revision of NGSCB, Cornerstone and Code Integrity Rooting:
Cornerstone would protect a user's login and authentication information by securely transmitting it to NGSCB-protected Windows components for validation, finalizing the user authentication process by releasing access to the SYSKEY if validation was successful. It was intended to protect data on laptops that had been lost or stolen to prevent hackers or thieves from accessing it even if they had performed a software-based attack or booted into an alternative operating system.
Code Integrity Rooting would validate boot and system files prior to the startup of Microsoft Windows. If validation of these components failed, the SYSKEY would not be released.
BitLocker is the combination of these features; "Cornerstone" was the codename of BitLocker, and BitLocker validates pre-boot firmware and operating system components before boot, which protects SYSKEY from unauthorized access; an unsuccessful validation prohibits access to a protected system.
Reception
Reaction to NGSCB after its unveiling by Newsweek was largely negative. While its security features were praised, critics contended that NGSCB could be used to impose restrictions on users; lock-out competing software vendors; and undermine fair use rights and open source software such as Linux. Microsoft's characterization of NGSCB as a security technology was subject to criticism as its origin focused on DRM. NGSCB's announcement occurred only a few years after Microsoft was accused of anticompetitive practices during the United States v. Microsoft Corporation antitrust case, a detail which called the company's intentions for the technology into question—NGSCB was regarded as an effort by the company to maintain its dominance in the personal computing industry. The notion of a "Trusted Windows" architecture—one that implied Windows itself was untrustworthy—would also be a source of contention within the company itself.
After NGSCB's unveiling, Microsoft drew frequent comparisons to Big Brother, an oppressive dictator of a totalitarian state in George Orwell's dystopian novel Nineteen Eighty-Four. The Electronic Privacy Information Center legislative counsel, Chris Hoofnagle, described Microsoft's characterization of the NGSCB as "Orwellian." Big Brother Awards bestowed Microsoft with an award because of NGSCB. Bill Gates addressed these comments at a homeland security conference by stating that NGSCB "can make our country more secure and prevent the nightmare vision of George Orwell at the same time." Steven Levy—the author who unveiled the existence of the NGSCB—claimed in a 2004 front-page article for Newsweek that NGSCB could eventually lead to an "information infrastructure that encourages censorship, surveillance, and suppression of the creative impulse where anonymity is outlawed and every penny spent is accounted for." However, Microsoft outlined a scenario enabled by NGSCB that allows a user to conduct a transaction without divulging personally identifiable information.
Ross Anderson of Cambridge University was among the most vocal critics of NGSCB and of Trusted Computing. Anderson alleged that the technologies were designed to satisfy federal agency requirements; enable content providers and other third-parties to remotely monitor or delete data in users' machines; use certificate revocation lists to ensure that only content deemed "legitimate" could be copied; and use unique identifiers to revoke or validate files; he compared this to the attempts by the Soviet Union to "register and control all typewriters and fax machines." Anderson also claimed that the TPM could control the execution of applications on a user's machine and, because of this, bestowed to it a derisive "Fritz Chip" name in reference to United States Senator Ernest "Fritz" Hollings, who had recently proposed DRM legislation such as the Consumer Broadband and Digital Television Promotion Act for consumer electronic devices. Anderson's report was referenced extensively in the news media and appeared in publications such as BBC News, The New York Times, and The Register. David Safford of IBM Research stated that Anderson presented several technical errors within his report, namely that the proposed capabilities did not exist within any specification and that many were beyond the scope of trusted platform design. Anderson later alleged that BitLocker was designed to facilitate DRM and to lock out competing software on an encrypted system, and, in spite of his allegation that NGSCB was designed for federal agencies, advocated for Microsoft to add a backdoor to BitLocker. Similar sentiments were expressed by Richard Stallman, founder of the GNU Project and Free Software Foundation, who alleged that Trusted Computing technologies were designed to enforce DRM and to prevent users from running unlicensed software. In 2015, Stallman stated that "the TPM has proved a total failure" for DRM and that "there are reasons to think that it will not be feasible to use them for DRM."
After the release of Anderson's report, Microsoft stated in an NGSCB FAQ that "enhancements to Windows under the NGSCB architecture have no mechanism for filtering content, nor do they provide a mechanism for proactively searching the Internet for 'illegal' content [...] Microsoft is firmly opposed to putting 'policing functions' into nexus-aware PCs and does not intend to do so" and that the idea was in direct opposition with the design goals set forth for NGSCB, which was "built on the premise that no policy will be imposed that is not approved by the user." Concerns about the NGSCB TPM were also raised in that it would use what are essentially unique machine identifiers, which drew comparisons to the Intel Pentium III processor serial number, a unique hardware identification number of the 1990s viewed as a risk to end-user privacy. NGSCB, however, mandates that disclosure or use of the keys provided by the TPM be based solely on user discretion; in contrast, Intel's Pentium III included a unique serial number that could potentially be revealed to any application. NGSCB, also unlike Intel's Pentium III, would provide optional features to allow users to indirectly identify themselves to external requestors.
In response to concerns that NGSCB would take control away from users for the sake of content providers, Bill Gates stated that the latter should "provide their content in easily accessible forms or else it ends up encouraging piracy." Bryan Willman, Marcus Peinado, Paul England, and Peter Biddle—four NGSCB engineers—realized early during the development of NGSCB that DRM would ultimately fail in its efforts to prevent piracy. In 2002, the group released a paper titled "The Darknet and the Future of Content Distribution" that outlined how content protection mechanisms are demonstrably futile. The paper's premise circulated within Microsoft during the late 1990s and was a source of controversy within Microsoft; Biddle stated that the company almost terminated his employment as a result of the paper's release. A 2003 report published by Harvard University researchers suggested that NGSCB and similar technologies could facilitate the secure distribution of copyrighted content across peer-to-peer networks.
Not all assessments were negative. Paul Thurrott praised NGSCB, stating that it was "Microsoft's Trustworthy Computing initiative made real" and that it would "form the basis of next-generation computer systems." Scott Bekker of Redmond Magazine stated that NGSCB was misunderstood because of its controversy and that it appeared to be a "promising, user-controlled defense against privacy intrusions and security violations." In February 2004, In-Stat/MDR, publisher of the Microprocessor Report, bestowed NGSCB with its Best Technology award. Malcom Crompton, Australian Privacy Commissioner, stated that "NGSCB has great privacy enhancing potential [...] Microsoft has recognised there is a privacy issue [...] we should all work with them, give them the benefit of the doubt and urge them to do the right thing." When Microsoft announced at WinHEC 2004 that it would be revising NGSCB so that previous applications would not have to be rewritten, Martin Reynolds of Gartner praised the company for this decision as it would create a "more sophisticated" version of NGSCB that would simplify development. David Wilson, writing for South China Morning Post, defended NGSCB by saying that "attacking the latest Microsoft monster is an international blood sport" and that "even if Microsoft had a new technology capable of ending Third World hunger and First World obesity, digital seers would still lambaste it because they view Bill Gates as a grey incarnation of Satan." Microsoft noted that negative reaction to NGSCB gradually waned after events such as the USENIX Annual Technical Conference in 2003, and several Fortune 500 companies also expressed interest in it.
When reports in 2005 indicated that Microsoft would scale back its plans and incorporate only BitLocker with Windows Vista, concerns pertaining to the erosion of user rights, vendor lock-in, and other potential abuses remained. In 2008, Biddle stated that negative perception was the most significant contributing factor responsible for the cessation of NGSCB's development.
Vulnerability
In an article in 2003, D. Boneh and D. Brumley indicated that projects like NGSCB may be vulnerable to timing attacks.
See also
Microsoft Pluton
Secure Boot
Intel LaGrande
Trusted Computing
Trusted Platform Module
Intel Management Engine
References
External links
Microsoft's NGSCB home page (Archived on 2006-07-05)
Trusted Computing Group home page
System Integrity Team blog — team blog for NGSCB technologies (Archived on 2008-10-21)
Security WMI Providers Reference on MSDN, including BitLocker Drive Encryption and Trusted Platform Module (both components of NGSCB)
TPM Base Services on MSDN
Development Considerations for Nexus Computing Agents
Cryptographic software
Discontinued Windows components
Disk encryption
Microsoft criticisms and controversies
Microsoft initiatives
Microsoft Windows security technology
Trusted computing
Windows Vista |
59644 | https://en.wikipedia.org/wiki/Digital%20signature | Digital signature | A digital signature is a mathematical scheme for verifying the authenticity of digital messages or documents. A valid digital signature, where the prerequisites are satisfied, gives a recipient very strong reason to believe that the message was created by a known sender (authenticity), and that the message was not altered in transit (integrity).
Digital signatures are a standard element of most cryptographic protocol suites, and are commonly used for software distribution, financial transactions, contract management software, and in other cases where it is important to detect forgery or tampering.
Digital signatures are often used to implement electronic signatures, which includes any electronic data that carries the intent of a signature, but not all electronic signatures use digital signatures. Electronic signatures have legal significance in some countries, including Canada, South Africa, the United States, Algeria, Turkey, India, Brazil, Indonesia, Mexico, Saudi Arabia, Uruguay, Switzerland, Chile and the countries of the European Union.
Digital signatures employ asymmetric cryptography. In many instances, they provide a layer of validation and security to messages sent through a non-secure channel: Properly implemented, a digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects, but properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes, in the sense used here, are cryptographically based, and must be implemented properly to be effective. They can also provide non-repudiation, meaning that the signer cannot successfully claim they did not sign a message, while also claiming their private key remains secret. Further, some non-repudiation schemes offer a timestamp for the digital signature, so that even if the private key is exposed, the signature is valid. Digitally signed messages may be anything representable as a bitstring: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol.
Definition
A digital signature scheme typically consists of three algorithms:
A key generation algorithm that selects a private key uniformly at random from a set of possible private keys. The algorithm outputs the private key and a corresponding public key.
A signing algorithm that, given a message and a private key, produces a signature.
A signature verifying algorithm that, given the message, public key and signature, either accepts or rejects the message's claim to authenticity.
Two main properties are required. First, the authenticity of a signature generated from a fixed message and fixed private key can be verified by using the corresponding public key. Secondly, it should be computationally infeasible to generate a valid signature for a party without knowing that party's private key.
A digital signature is an authentication mechanism that enables the creator of the message to attach a code that acts as a signature.
The Digital Signature Algorithm (DSA), developed by the National Institute of Standards and Technology, is one of many examples of a signing algorithm.
In the following discussion, 1n refers to a unary number.
Formally, a digital signature scheme is a triple of probabilistic polynomial time algorithms, (G, S, V), satisfying:
G (key-generator) generates a public key (pk), and a corresponding private key (sk), on input 1n, where n is the security parameter.
S (signing) returns a tag, t, on the inputs: the private key (sk), and a string (x).
V (verifying) outputs accepted or rejected on the inputs: the public key (pk), a string (x), and a tag (t).
For correctness, S and V must satisfy
Pr [ (pk, sk) ← G(1n), V( pk, x, S(sk, x) ) = accepted ] = 1.
A digital signature scheme is secure if for every non-uniform probabilistic polynomial time adversary, A
Pr [ (pk, sk) ← G(1n), (x, t) ← AS(sk, · )(pk, 1n), x ∉ Q, V(pk, x, t) = accepted] < negl(n),
where AS(sk, · ) denotes that A has access to the oracle, S(sk, · ), Q denotes the set of the queries on S made by A, which knows the public key, pk, and the security parameter, n, and x ∉ Q denotes that the adversary may not directly query the string, x, on S.
History
In 1976, Whitfield Diffie and Martin Hellman first described the notion of a digital signature scheme, although they only conjectured that such schemes existed based on functions that are trapdoor one-way permutations. Soon afterwards, Ronald Rivest, Adi Shamir, and Len Adleman invented the RSA algorithm, which could be used to produce primitive digital signatures (although only as a proof-of-concept – "plain" RSA signatures are not secure). The first widely marketed software package to offer digital signature was Lotus Notes 1.0, released in 1989, which used the RSA algorithm.
Other digital signature schemes were soon developed after RSA, the earliest being Lamport signatures, Merkle signatures (also known as "Merkle trees" or simply "Hash trees"), and Rabin signatures.
In 1988, Shafi Goldwasser, Silvio Micali, and Ronald Rivest became the first to rigorously define the security requirements of digital signature schemes. They described a hierarchy of attack models for signature schemes, and also presented the GMR signature scheme, the first that could be proved to prevent even an existential forgery against a chosen message attack, which is the currently accepted security definition for signature schemes. The first such scheme which is not built on trapdoor functions but rather on a family of function with a much weaker required property of one-way permutation was presented by Moni Naor and Moti Yung.
Method
One digital signature scheme (of many) is based on RSA. To create signature keys, generate an RSA key pair containing a modulus, N, that is the product of two random secret distinct large primes, along with integers, e and d, such that e d ≡ 1 (mod φ(N)), where φ is the Euler's totient function. The signer's public key consists of N and e, and the signer's secret key contains d.
To sign a message, m, the signer computes a signature, σ, such that σ ≡ m^d (mod N). To verify, the receiver checks that σ^e ≡ m (mod N).
Several early signature schemes were of a similar type: they involve the use of a trapdoor permutation, such as the RSA function, or, in the case of the Rabin signature scheme, computing square roots modulo a composite N. A trapdoor permutation family is a family of permutations, specified by a parameter, that is easy to compute in the forward direction, but is difficult to compute in the reverse direction without already knowing the private key ("trapdoor"). Trapdoor permutations can be used for digital signature schemes, where computing the reverse direction with the secret key is required for signing, and computing the forward direction is used to verify signatures.
Used directly, this type of signature scheme is vulnerable to key-only existential forgery attack. To create a forgery, the attacker picks a random signature σ and uses the verification procedure to determine the message, m, corresponding to that signature. In practice, however, this type of signature is not used directly; rather, the message to be signed is first hashed to produce a short digest, which is then padded to a larger width comparable to N and signed with the reverse trapdoor function. This forgery attack, then, only produces the padded hash function output that corresponds to σ, but not a message that leads to that value, which does not lead to an attack. In the random oracle model, hash-then-sign (an idealized version of that practice where hash and padding combined have close to N possible outputs), this form of signature is existentially unforgeable, even against a chosen-plaintext attack.
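A minimal Python sketch of the hash-then-sign practice described above, using small textbook RSA parameters (p = 61, q = 53, e = 17) purely for illustration. There is no real padding scheme here (such as RSASSA-PSS), and the hash is reduced modulo N only because the toy modulus is far smaller than a SHA-256 digest.

```python
import hashlib

# Toy "hash-then-sign" RSA, for illustration only; never use such small primes in practice.
p, q = 61, 53
N = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17
d = pow(e, -1, phi)            # modular inverse, so that e*d ≡ 1 (mod phi)

def digest(message: bytes) -> int:
    # Reduce the hash modulo N only because N is tiny in this toy example.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    return pow(digest(message), d, N)                 # sigma = H(m)^d mod N

def verify(message: bytes, sigma: int) -> bool:
    return pow(sigma, e, N) == digest(message)        # check sigma^e ≡ H(m) (mod N)

sigma = sign(b"attack at dawn")
assert verify(b"attack at dawn", sigma)               # a valid signature verifies
```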
There are several reasons to sign such a hash (or message digest) instead of the whole document.
For efficiency The signature will be much shorter and thus save time since hashing is generally much faster than signing in practice.
For compatibility Messages are typically bit strings, but some signature schemes operate on other domains (such as, in the case of RSA, numbers modulo a composite number N). A hash function can be used to convert an arbitrary input into the proper format.
For integrity Without the hash function, the text "to be signed" may have to be split (separated) in blocks small enough for the signature scheme to act on them directly. However, the receiver of the signed blocks is not able to recognize if all the blocks are present and in the appropriate order.
Notions of security
In their foundational paper, Goldwasser, Micali, and Rivest lay out a hierarchy of attack models against digital signatures:
In a key-only attack, the attacker is only given the public verification key.
In a known message attack, the attacker is given valid signatures for a variety of messages known by the attacker but not chosen by the attacker.
In an adaptive chosen message attack, the attacker first learns signatures on arbitrary messages of the attacker's choice.
They also describe a hierarchy of attack results:
A total break results in the recovery of the signing key.
A universal forgery attack results in the ability to forge signatures for any message.
A selective forgery attack results in a signature on a message of the adversary's choice.
An existential forgery merely results in some valid message/signature pair not already known to the adversary.
The strongest notion of security, therefore, is security against existential forgery under an adaptive chosen message attack.
Applications
As organizations move away from paper documents with ink signatures or authenticity stamps, digital signatures can provide added assurance of the provenance, identity, and status of an electronic document, as well as acknowledging informed consent and approval by a signatory. The United States Government Printing Office (GPO) publishes electronic versions of the budget, public and private laws, and congressional bills with digital signatures. Universities including Penn State, University of Chicago, and Stanford are publishing electronic student transcripts with digital signatures.
Below are some common reasons for applying a digital signature to communications:
Authentication
Although messages may often include information about the entity sending a message, that information may not be accurate. Digital signatures can be used to authenticate the identity of the source of messages. When ownership of a digital signature secret key is bound to a specific user, a valid signature shows that the message was sent by that user. The importance of high confidence in sender authenticity is especially obvious in a financial context. For example, suppose a bank's branch office sends instructions to the central office requesting a change in the balance of an account. If the central office is not convinced that such a message is truly sent from an authorized source, acting on such a request could be a grave mistake.
Integrity
In many scenarios, the sender and receiver of a message may have a need for confidence that the message has not been altered during transmission. Although encryption hides the contents of a message, it may be possible to change an encrypted message without understanding it. (Some encryption algorithms, called nonmalleable, prevent this, but others do not.) However, if a message is digitally signed, any change in the message after signature invalidates the signature. Furthermore, there is no efficient way to modify a message and its signature to produce a new message with a valid signature, because this is still considered to be computationally infeasible by most cryptographic hash functions (see collision resistance).
Non-repudiation
Non-repudiation, or more specifically non-repudiation of origin, is an important aspect of digital signatures. By this property, an entity that has signed some information cannot at a later time deny having signed it. Similarly, access to the public key only does not enable a fraudulent party to fake a valid signature.
Note that these authentication, non-repudiation, and related properties rely on the secret key not having been revoked prior to its usage. Public revocation of a key-pair is a required ability, else leaked secret keys would continue to implicate the claimed owner of the key-pair. Checking revocation status requires an "online" check; e.g., checking a certificate revocation list or via the Online Certificate Status Protocol. Very roughly this is analogous to a vendor who receives credit-cards first checking online with the credit-card issuer to find if a given card has been reported lost or stolen. Of course, with stolen key pairs, the theft is often discovered only after the secret key's use, e.g., to sign a bogus certificate for espionage purposes.
Additional security precautions
Putting the private key on a smart card
All public key / private key cryptosystems depend entirely on keeping the private key secret. A private key can be stored on a user's computer, and protected by a local password, but this has two disadvantages:
the user can only sign documents on that particular computer
the security of the private key depends entirely on the security of the computer
A more secure alternative is to store the private key on a smart card. Many smart cards are designed to be tamper-resistant (although some designs have been broken, notably by Ross Anderson and his students). In a typical digital signature implementation, the hash calculated from the document is sent to the smart card, whose CPU signs the hash using the stored private key of the user, and then returns the signed hash. Typically, a user must activate their smart card by entering a personal identification number or PIN code (thus providing two-factor authentication). It can be arranged that the private key never leaves the smart card, although this is not always implemented. If the smart card is stolen, the thief will still need the PIN code to generate a digital signature. This reduces the security of the scheme to that of the PIN system, although it still requires an attacker to possess the card. A mitigating factor is that private keys, if generated and stored on smart cards, are usually regarded as difficult to copy, and are assumed to exist in exactly one copy. Thus, the loss of the smart card may be detected by the owner and the corresponding certificate can be immediately revoked. Private keys that are protected by software only may be easier to copy, and such compromises are far more difficult to detect.
Using smart card readers with a separate keyboard
Entering a PIN code to activate the smart card commonly requires a numeric keypad. Some card readers have their own numeric keypad. This is safer than using a card reader integrated into a PC, and then entering the PIN using that computer's keyboard. Readers with a numeric keypad are meant to circumvent the eavesdropping threat where the computer might be running a keystroke logger, potentially compromising the PIN code. Specialized card readers are also less vulnerable to tampering with their software or hardware and are often EAL3 certified.
Other smart card designs
Smart card design is an active field, and there are smart card schemes which are intended to avoid these particular problems, despite having few security proofs so far.
Using digital signatures only with trusted applications
One of the main differences between a digital signature and a written signature is that the user does not "see" what they sign. The user application presents a hash code to be signed by the digital signing algorithm using the private key. An attacker who gains control of the user's PC can possibly replace the user application with a foreign substitute, in effect replacing the user's own communications with those of the attacker. This could allow a malicious application to trick a user into signing any document by displaying the user's original document on-screen, but presenting the attacker's own documents to the signing application.
To protect against this scenario, an authentication system can be set up between the user's application (word processor, email client, etc.) and the signing application. The general idea is to provide some means for both the user application and signing application to verify each other's integrity. For example, the signing application may require all requests to come from digitally signed binaries.
Using a network attached hardware security module
One of the main differences between a cloud based digital signature service and a locally provided one is risk. Many risk averse companies, including governments, financial and medical institutions, and payment processors require more secure standards, like FIPS 140-2 level 3 and FIPS 201 certification, to ensure the signature is validated and secure.
WYSIWYS
Technically speaking, a digital signature applies to a string of bits, whereas humans and applications "believe" that they sign the semantic interpretation of those bits. In order to be semantically interpreted, the bit string must be transformed into a form that is meaningful for humans and applications, and this is done through a combination of hardware and software based processes on a computer system. The problem is that the semantic interpretation of bits can change as a function of the processes used to transform the bits into semantic content. It is relatively easy to change the interpretation of a digital document by implementing changes on the computer system where the document is being processed. From a semantic perspective this creates uncertainty about what exactly has been signed. WYSIWYS (What You See Is What You Sign) means that the semantic interpretation of a signed message cannot be changed. In particular this also means that a message cannot contain hidden information that the signer is unaware of, and that can be revealed after the signature has been applied. WYSIWYS is a requirement for the validity of digital signatures, but this requirement is difficult to guarantee because of the increasing complexity of modern computer systems. The term WYSIWYS was coined by Peter Landrock and Torben Pedersen to describe some of the principles in delivering secure and legally binding digital signatures for Pan-European projects.
Digital signatures versus ink on paper signatures
An ink signature could be replicated from one document to another by copying the image manually or digitally, but producing credible signature copies that can resist some scrutiny requires significant manual or technical skill, and producing ink signature copies that resist professional scrutiny is very difficult.
Digital signatures cryptographically bind an electronic identity to an electronic document and the digital signature cannot be copied to another document. Paper contracts sometimes have the ink signature block on the last page, and the previous pages may be replaced after a signature is applied. Digital signatures can be applied to an entire document, such that the digital signature on the last page will indicate tampering if any data on any of the pages has been altered, but this can also be achieved by signing with ink and numbering all pages of the contract.
Some digital signature algorithms
RSA
DSA
ECDSA
EdDSA
RSA with SHA
ECDSA with SHA
ElGamal signature scheme as the predecessor to DSA, and variants Schnorr signature and Pointcheval–Stern signature algorithm
Rabin signature algorithm
Pairing-based schemes such as BLS
NTRUSign is an example of a digital signature scheme based on hard lattice problems
Undeniable signatures
Aggregate signature – a signature scheme that supports aggregation: Given n signatures on n messages from n users, it is possible to aggregate all these signatures into a single signature whose size is constant in the number of users. This single signature will convince the verifier that the n users did indeed sign the n original messages. A scheme by Mihir Bellare and Gregory Neven may be used with Bitcoin.
Signatures with efficient protocols – are signature schemes that facilitate efficient cryptographic protocols such as zero-knowledge proofs or secure computation.
The current state of use – legal and practical
Most digital signature schemes share the following goals regardless of cryptographic theory or legal provision:
Quality algorithms: Some public-key algorithms are known to be insecure, as practical attacks against them have been discovered.
Quality implementations: An implementation of a good algorithm (or protocol) with mistake(s) will not work.
Users (and their software) must carry out the signature protocol properly.
The private key must remain private: If the private key becomes known to any other party, that party can produce perfect digital signatures of anything.
The public key owner must be verifiable: A public key associated with Bob actually came from Bob. This is commonly done using a public key infrastructure (PKI) and the public key↔user association is attested by the operator of the PKI (called a certificate authority). For 'open' PKIs in which anyone can request such an attestation (universally embodied in a cryptographically protected public key certificate), the possibility of mistaken attestation is non-trivial. Commercial PKI operators have suffered several publicly known problems. Such mistakes could lead to falsely signed, and thus wrongly attributed, documents. 'Closed' PKI systems are more expensive, but less easily subverted in this way.
Only if all of these conditions are met will a digital signature actually be any evidence of who sent the message, and therefore of their assent to its contents. Legal enactment cannot change this reality of the existing engineering possibilities, though some such enactments have not reflected this actuality.
Legislatures, being importuned by businesses expecting to profit from operating a PKI, or by the technological avant-garde advocating new solutions to old problems, have enacted statutes and/or regulations in many jurisdictions authorizing, endorsing, encouraging, or permitting digital signatures and providing for (or limiting) their legal effect. The first appears to have been in Utah in the United States, followed closely by the states Massachusetts and California. Other countries have also passed statutes or issued regulations in this area as well and the UN has had an active model law project for some time. These enactments (or proposed enactments) vary from place to place, have typically embodied expectations at variance (optimistically or pessimistically) with the state of the underlying cryptographic engineering, and have had the net effect of confusing potential users and specifiers, nearly all of whom are not cryptographically knowledgeable.
Adoption of technical standards for digital signatures has lagged behind much of the legislation, delaying a more or less unified engineering position on interoperability, algorithm choice, key lengths, and other aspects of what the engineering is attempting to provide.
Industry standards
Some industries have established common interoperability standards for the use of digital signatures between members of the industry and with regulators. These include the Automotive Network Exchange for the automobile industry and the SAFE-BioPharma Association for the healthcare industry.
Using separate key pairs for signing and encryption
In several countries, a digital signature has a status somewhat like that of a traditional pen and paper signature, as in the 1999 EU digital signature directive and 2014 EU follow-on legislation. Generally, these provisions mean that anything digitally signed legally binds the signer of the document to the terms therein. For that reason, it is often thought best to use separate key pairs for encrypting and signing. Using the encryption key pair, a person can engage in an encrypted conversation (e.g., regarding a real estate transaction), but the encryption does not legally sign every message he or she sends. Only when both parties come to an agreement do they sign a contract with their signing keys, and only then are they legally bound by the terms of a specific document. After signing, the document can be sent over the encrypted link. If a signing key is lost or compromised, it can be revoked to mitigate any future transactions. If an encryption key is lost, a backup or key escrow should be utilized to continue viewing encrypted content. Signing keys should never be backed up or escrowed unless the backup destination is securely encrypted.
See also
21 CFR 11
X.509
Advanced electronic signature
Blind signature
Detached signature
Digital certificate
Digital signature in Estonia
Electronic lab notebook
Electronic signature
Electronic signatures and law
eSign (India)
GNU Privacy Guard
Public key infrastructure
Public key fingerprint
Server-based signatures
Probabilistic signature scheme
Notes
References
Further reading
J. Katz and Y. Lindell, "Introduction to Modern Cryptography" (Chapman & Hall/CRC Press, 2007)
Lorna Brazell, Electronic Signatures and Identities Law and Regulation (2nd edn, London: Sweet & Maxwell, 2008)
Dennis Campbell, editor, E-Commerce and the Law of Digital Signatures (Oceana Publications, 2005).
M. H. M Schellenkens, Electronic Signatures Authentication Technology from a Legal Perspective, (TMC Asser Press, 2004).
Jeremiah S. Buckley, John P. Kromer, Margo H. K. Tank, and R. David Whitaker, The Law of Electronic Signatures (3rd Edition, West Publishing, 2010).
Digital Evidence and Electronic Signature Law Review Free open source |
59652 | https://en.wikipedia.org/wiki/Merkle%E2%80%93Hellman%20knapsack%20cryptosystem | Merkle–Hellman knapsack cryptosystem | The Merkle–Hellman knapsack cryptosystem was one of the earliest public key cryptosystems. It was published by Ralph Merkle and Martin Hellman in 1978. A polynomial time attack was published by Adi Shamir in 1984. As a result, the cryptosystem is now considered insecure.
History
The concept of public key cryptography was introduced by Whitfield Diffie and Martin Hellman in 1976. At that time they proposed only the general concept of a "trapdoor function", a function that is computationally infeasible to calculate without some secret "trapdoor" information, but they had not yet found a practical example of such a function. Several specific public-key cryptosystems were then proposed by other researchers over the next few years, such as RSA in 1977 and Merkle-Hellman in 1978.
Description
Merkle–Hellman is a public key cryptosystem, meaning that two keys are used, a public key for encryption and a private key for decryption. It is based on the subset sum problem (a special case of the knapsack problem). The problem is as follows: given a set of integers A and an integer c, find a subset of A which sums to c. In general, this problem is known to be NP-complete. However, if A is superincreasing, meaning that each element of the set is greater than the sum of all the numbers in the set smaller than it, the problem is "easy" and solvable in polynomial time with a simple greedy algorithm.
In Merkle–Hellman, decrypting a message requires solving an apparently "hard" knapsack problem. The private key contains a superincreasing list of numbers w, and the public key contains a non-superincreasing list of numbers b, which is actually a "disguised" version of w. The private key also contains some "trapdoor" information that can be used to transform a hard knapsack problem using b into an easy knapsack problem using w.
Unlike some other public key cryptosystems such as RSA, the two keys in Merkle-Hellman are not interchangeable; the private key cannot be used for encryption. Thus Merkle-Hellman is not directly usable for authentication by cryptographic signing, although Shamir published a variant that can be used for signing.
Key generation
1. Choose a block size n. Integers up to n bits in length can be encrypted with this key.
2. Choose a random superincreasing sequence of n positive integers
w = (w1, w2, ..., wn).
The superincreasing requirement means that wk > w1 + w2 + ... + w(k−1), for 2 ≤ k ≤ n.
3. Choose a random integer q such that
q > w1 + w2 + ... + wn.
4. Choose a random integer r such that gcd(r, q) = 1 (that is, r and q are coprime).
5. Calculate the sequence
b = (b1, b2, ..., bn), where bi = r wi mod q.
The public key is b and the private key is (w, q, r).
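For illustration, these key-generation steps can be sketched in a few lines of Python; the function name and the use of the random module are assumptions made for the example, and since the scheme is broken, such code is suitable for study only.

import math
import random

def generate_keys(n=8):
    # Step 2: build a random superincreasing sequence w of n positive integers.
    w = []
    total = 0
    for _ in range(n):
        term = total + random.randint(1, 10)   # strictly larger than the sum so far
        w.append(term)
        total += term
    # Step 3: choose q larger than the sum of all elements of w.
    q = total + random.randint(1, 10)
    # Step 4: choose r coprime to q.
    r = random.randrange(2, q)
    while math.gcd(r, q) != 1:
        r = random.randrange(2, q)
    # Step 5: disguise w as b, where b_i = r * w_i mod q.
    b = [(r * wi) % q for wi in w]
    return b, (w, q, r)                        # public key, private key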
Encryption
Let m be an n-bit message consisting of bits m1m2...mn, with m1 the highest-order bit. Select each bi for which mi is nonzero, and add them together. Equivalently, calculate
c = m1 b1 + m2 b2 + ... + mn bn.
The ciphertext is c.
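Encryption is thus a simple dot product of the message bits with the public sequence, as in the following Python sketch (the bit-extraction convention, with m1 taken as the highest-order bit, follows the description above):

def encrypt(message, b):
    n = len(b)
    bits = [(message >> (n - 1 - i)) & 1 for i in range(n)]   # m1 ... mn, high bit first
    return sum(mi * bi for mi, bi in zip(bits, b))

# With the example public key used later in the article:
b = [295, 592, 301, 14, 28, 353, 120, 236]
print(encrypt(0b01100001, b))   # 1129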
Decryption
To decrypt a ciphertext c, we must find the subset of b which sums to c. We do this by transforming the problem into one of finding a subset of w. That problem can be solved in polynomial time since w is superincreasing.
1. Calculate the modular inverse r′ of r modulo q using the Extended Euclidean algorithm. The inverse will exist since r is coprime to q.
The computation of r′ is independent of the message, and can be done just once when the private key is generated.
2. Calculate c′ = c r′ mod q.
3. Solve the subset sum problem for c′ using the superincreasing sequence w, by the simple greedy algorithm described below. Let X be the resulting list of indexes of the elements of w which sum to c′. (That is, c′ is the sum of wi over all i in X.)
4. Construct the message m with a 1 in each bit position i in X and a 0 in all other bit positions.
Solving the subset sum problem
This simple greedy algorithm finds the subset of a superincreasing sequence w which sums to a given value s, in polynomial time (a code sketch follows the steps):
1. Initialize X to an empty list.
2. Find the largest element in w which is less than or equal to s, say wk.
3. Subtract: s := s − wk.
4. Append k to the list X.
5. If s is greater than zero, return to step 2.
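Putting the decryption steps and this greedy procedure together gives the following minimal Python sketch; the function names are illustrative, and pow(r, -1, q) relies on Python 3.8 or later for the modular inverse.

def solve_superincreasing(w, s):
    # Greedy subset-sum for a superincreasing sequence; returns 1-based indexes.
    indexes = []
    for k in range(len(w) - 1, -1, -1):   # scan from the largest element downwards
        if w[k] <= s:
            s -= w[k]
            indexes.append(k + 1)
    if s != 0:
        raise ValueError("no solution")
    return indexes

def decrypt(c, private_key):
    w, q, r = private_key
    r_inv = pow(r, -1, q)                 # modular inverse of r mod q (Python 3.8+)
    c_prime = (c * r_inv) % q
    n = len(w)
    message = 0
    for i in solve_superincreasing(w, c_prime):
        message |= 1 << (n - i)           # set bit m_i, with m1 the highest-order bit
    return message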
Example
Key generation
Create a key to encrypt 8-bit numbers by creating a random superincreasing sequence of 8 values:
w = (2, 7, 11, 21, 42, 89, 180, 354).
The sum of these is 706, so select a larger value for q:
q = 881.
Choose r to be coprime to q:
r = 588.
Construct the public key by multiplying each element in w by r modulo q:
Hence b = (295, 592, 301, 14, 28, 353, 120, 236).
Encryption
Let the 8-bit message be m = 01100001 in binary (97 in decimal). We multiply each bit by the corresponding number in b and add the results:
0 * 295
+ 1 * 592
+ 1 * 301
+ 0 * 14
+ 0 * 28
+ 0 * 353
+ 0 * 120
+ 1 * 236
= 1129
The ciphertext is 1129.
Decryption
To decrypt 1129, first use the Extended Euclidean Algorithm to find the modular inverse of r mod q:
r′ = 442 (since 588 · 442 mod 881 = 1).
Compute c′ = 1129 · 442 mod 881 = 372.
Use the greedy algorithm to decompose 372 into a sum of wi values:
372 = 354 + 11 + 7.
Thus the list of indexes is X = {8, 3, 2} (since w8 = 354, w3 = 11 and w2 = 7). The message can now be computed as
m = 01100001 in binary, i.e. 97.
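The arithmetic above can be checked with a few lines of Python:

q, r = 881, 588
r_inv = pow(r, -1, q)         # the modular inverse the Extended Euclidean Algorithm would give
print(r_inv)                  # 442
print(1129 * r_inv % q)       # 372
print(int("01100001", 2))     # 97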
Cryptanalysis
In 1984 Adi Shamir published an attack on the Merkle-Hellman cryptosystem which can decrypt encrypted messages in polynomial time without using the private key. The attack analyzes the public key and searches for a pair of numbers u and m such that u bi mod m yields a superincreasing sequence. The pair (u, m) found by the attack may not be equal to (r′, q) in the private key, but like that pair it can be used to transform a hard knapsack problem using b into an easy problem using a superincreasing sequence. The attack operates solely on the public key; no access to encrypted messages is necessary.
References
Public-key encryption schemes
Broken cryptography algorithms |
59735 | https://en.wikipedia.org/wiki/Free%20group | Free group | In mathematics, the free group FS over a given set S consists of all words that can be built from members of S, considering two words to be different unless their equality follows from the group axioms (e.g. st = suu−1t, but s ≠ t−1 for s,t,u ∈ S). The members of S are called generators of FS, and the number of generators is the rank of the free group.
An arbitrary group G is called free if it is isomorphic to FS for some subset S of G, that is, if there is a subset S of G such that every element of G can be written in exactly one way as a product of finitely many elements of S and their inverses (disregarding trivial variations such as st = suu−1t).
A related but different notion is a free abelian group; both notions are particular instances of a free object from universal algebra. As such, free groups are defined by their universal property.
History
Free groups first arose in the study of hyperbolic geometry, as examples of Fuchsian groups (discrete groups acting by isometries on the hyperbolic plane). In an 1882 paper, Walther von Dyck pointed out that these groups have the simplest possible presentations. The algebraic study of free groups was initiated by Jakob Nielsen in 1924, who gave them their name and established many of their basic properties. Max Dehn realized the connection with topology, and obtained the first proof of the full Nielsen–Schreier theorem. Otto Schreier published an algebraic proof of this result in 1927, and Kurt Reidemeister included a comprehensive treatment of free groups in his 1932 book on combinatorial topology. Later on in the 1930s, Wilhelm Magnus discovered the connection between the lower central series of free groups and free Lie algebras.
Examples
The group (Z,+) of integers is free of rank 1; a generating set is S = {1}. The integers are also a free abelian group, although all free groups of rank at least 2 are non-abelian. A free group on a two-element set S occurs in the proof of the Banach–Tarski paradox and is described there.
On the other hand, any nontrivial finite group cannot be free, since the elements of a free generating set of a free group have infinite order.
In algebraic topology, the fundamental group of a bouquet of k circles (a set of k loops having only one point in common) is the free group on a set of k elements.
Construction
The free group FS with free generating set S can be constructed as follows. S is a set of symbols, and we suppose for every s in S there is a corresponding "inverse" symbol, s−1, in a set S−1. Let T = S ∪ S−1, and define a word in S to be any written product of elements of T. That is, a word in S is an element of the monoid generated by T. The empty word is the word with no symbols at all. For example, if S = {a, b, c}, then T = {a, a−1, b, b−1, c, c−1}, and
acc−1ba−1
is a word in S.
If an element of S lies immediately next to its inverse, the word may be simplified by omitting the c, c−1 pair:
acc−1ba−1 → aba−1.
A word that cannot be simplified further is called reduced.
The free group FS is defined to be the group of all reduced words in S, with concatenation of words (followed by reduction if necessary) as group operation. The identity is the empty word.
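To make this concrete, the following Python sketch represents a word as a list of letters, writing an inverse letter as, for example, 'a-1'; this representation is an assumption made purely for the illustration. Reduction cancels adjacent inverse pairs, and the group operation is concatenation followed by reduction.

def inverse(letter):
    # 'a' <-> 'a-1'
    return letter[:-2] if letter.endswith('-1') else letter + '-1'

def reduce_word(word):
    # Cancel adjacent pairs x, x^-1 using a stack; the result is the reduced word.
    reduced = []
    for letter in word:
        if reduced and reduced[-1] == inverse(letter):
            reduced.pop()
        else:
            reduced.append(letter)
    return reduced

def multiply(u, v):
    # Group operation of the free group: concatenate, then reduce.
    return reduce_word(u + v)

print(reduce_word(['a', 'c', 'c-1', 'b', 'a-1']))   # ['a', 'b', 'a-1']
print(multiply(['a', 'b'], ['b-1', 'a-1']))         # [], the empty word, i.e. the identity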
A word is called cyclically reduced if its first and last letter are not inverse to each other. Every word is conjugate to a cyclically reduced word, and a cyclically reduced conjugate of a cyclically reduced word is a cyclic permutation of the letters in the word. For instance b−1abcb is not cyclically reduced, but is conjugate to abc, which is cyclically reduced. The only cyclically reduced conjugates of abc are abc, bca, and cab.
Universal property
The free group FS is the universal group generated by the set S. This can be formalized by the following universal property: given any function f from S to a group G, there exists a unique homomorphism φ: FS → G making the following diagram commute (where the unnamed mapping denotes the inclusion from S into FS):
That is, homomorphisms FS → G are in one-to-one correspondence with functions S → G. For a non-free group, the presence of relations would restrict the possible images of the generators under a homomorphism.
To see how this relates to the constructive definition, think of the mapping from S to FS as sending each symbol to a word consisting of that symbol. To construct φ for the given f, first note that φ sends the empty word to the identity of G and it has to agree with f on the elements of S. For the remaining words (consisting of more than one symbol), φ can be uniquely extended, since it is a homomorphism, i.e., φ(ab) = φ(a) φ(b).
The above property characterizes free groups up to isomorphism, and is sometimes used as an alternative definition. It is known as the universal property of free groups, and the generating set S is called a basis for FS. The basis for a free group is not uniquely determined.
Being characterized by a universal property is the standard feature of free objects in universal algebra. In the language of category theory, the construction of the free group (similar to most constructions of free objects) is a functor from the category of sets to the category of groups. This functor is left adjoint to the forgetful functor from groups to sets.
Facts and theorems
Some properties of free groups follow readily from the definition:
Any group G is the homomorphic image of some free group F(S). Let S be a set of generators of G. The natural map f: F(S) → G is an epimorphism, which proves the claim. Equivalently, G is isomorphic to a quotient group of some free group F(S). The kernel of f is a set of relations in the presentation of G. If S can be chosen to be finite here, then G is called finitely generated.
If S has more than one element, then F(S) is not abelian, and in fact the center of F(S) is trivial (that is, consists only of the identity element).
Two free groups F(S) and F(T) are isomorphic if and only if S and T have the same cardinality. This cardinality is called the rank of the free group F. Thus for every cardinal number k, there is, up to isomorphism, exactly one free group of rank k.
A free group of finite rank n > 1 has an exponential growth rate of order 2n − 1.
A few other related results are:
The Nielsen–Schreier theorem: Every subgroup of a free group is free.
A free group of rank k clearly has subgroups of every rank less than k. Less obviously, a (nonabelian!) free group of rank at least 2 has subgroups of all countable ranks.
The commutator subgroup of a free group of rank k > 1 has infinite rank; for example for F(a,b), it is freely generated by the commutators [am, bn] for non-zero m and n.
The free group in two elements is SQ universal; the above follows as any SQ universal group has subgroups of all countable ranks.
Any group that acts on a tree, freely and preserving the orientation, is a free group of countable rank (given by 1 minus the Euler characteristic of the quotient graph).
The Cayley graph of a free group of finite rank, with respect to a free generating set, is a tree on which the group acts freely, preserving the orientation.
The groupoid approach to these results, given in the work by P.J. Higgins below, is kind of extracted from an approach using covering spaces. It allows more powerful results, for example on Grushko's theorem, and a normal form for the fundamental groupoid of a graph of groups. In this approach there is considerable use of free groupoids on a directed graph.
Grushko's theorem has the consequence that if a subset B of a free group F on n elements generates F and has n elements, then B generates F freely.
Free abelian group
The free abelian group on a set S is defined via its universal property in the analogous way, with obvious modifications:
Consider a pair (F, φ), where F is an abelian group and φ: S → F is a function. F is said to be the free abelian group on S with respect to φ if for any abelian group G and any function ψ: S → G, there exists a unique homomorphism f: F → G such that
f(φ(s)) = ψ(s), for all s in S.
The free abelian group on S can be explicitly identified as the free group F(S) modulo the subgroup generated by its commutators, [F(S), F(S)], i.e.
its abelianisation. In other words, the free abelian group on S is the set of words that are distinguished only up to the order of letters. The rank of a free group can therefore also be defined as the rank of its abelianisation as a free abelian group.
Tarski's problems
Around 1945, Alfred Tarski asked whether the free groups on two or more generators have the same first-order theory, and whether this theory is decidable. Zlil Sela answered the first question by showing that any two nonabelian free groups have the same first-order theory, and Olga Kharlampovich and Alexei Myasnikov answered both questions, showing that this theory is decidable.
A similar unsolved (as of 2011) question in free probability theory asks whether the von Neumann group algebras of any two non-abelian finitely generated free groups are isomorphic.
See also
Generating set of a group
Presentation of a group
Nielsen transformation, a factorization of elements of the automorphism group of a free group
Normal form for free groups and free product of groups
Free product
Notes
References
W. Magnus, A. Karrass and D. Solitar, "Combinatorial Group Theory", Dover (1976).
P.J. Higgins, 1971, "Categories and Groupoids", van Nostrand, {New York}. Reprints in Theory and Applications of Categories, 7 (2005) pp 1–195.
Serre, Jean-Pierre, Trees, Springer (2003) (English translation of "arbres, amalgames, SL2", 3rd edition, astérisque 46 (1983))
P.J. Higgins, The fundamental groupoid of a graph of groups, Journal of the London Mathematical Society (2) 13 (1976), no. 1, 145–149.
.
.
Geometric group theory
Combinatorial group theory
Free algebraic structures
Properties of groups |
59865 | https://en.wikipedia.org/wiki/Secure%20cryptoprocessor | Secure cryptoprocessor | A secure cryptoprocessor is a dedicated computer-on-a-chip or microprocessor for carrying out cryptographic operations, embedded in a packaging with multiple physical security measures, which give it a degree of tamper resistance. Unlike cryptographic processors that output decrypted data onto a bus in a secure environment, a secure cryptoprocessor does not output decrypted data or decrypted program instructions in an environment where security cannot always be maintained.
The purpose of a secure cryptoprocessor is to act as the keystone of a security subsystem, eliminating the need to protect the rest of the subsystem with physical security measures.
Examples
A hardware security module (HSM) contains one or more secure cryptoprocessor chips. These devices are high grade secure cryptoprocessors used with enterprise servers. A hardware security module can have multiple levels of physical security with a single-chip cryptoprocessor as its most secure component. The cryptoprocessor does not reveal keys or executable instructions on a bus, except in encrypted form, and zeroes keys upon attempts at probing or scanning. The crypto chip(s) may also be potted in the hardware security module with other processors and memory chips that store and process encrypted data. Any attempt to remove the potting will cause the keys in the crypto chip to be zeroed. A hardware security module may also be part of a computer (for example an ATM) that operates inside a locked safe to deter theft, substitution, and tampering.
Modern smartcards are probably the most widely deployed form of secure cryptoprocessor, although more complex and versatile secure cryptoprocessors are widely deployed in systems such as Automated teller machines, TV set-top boxes, military applications, and high-security portable communication equipment. Some secure cryptoprocessors can even run general-purpose operating systems such as Linux inside their security boundary. Cryptoprocessors input program instructions in encrypted form, decrypt the instructions to plain instructions which are then executed within the same cryptoprocessor chip where the decrypted instructions are inaccessibly stored. By never revealing the decrypted program instructions, the cryptoprocessor prevents tampering of programs by technicians who may have legitimate access to the sub-system data bus. This is known as bus encryption. Data processed by a cryptoprocessor is also frequently encrypted.
The Trusted Platform Module (TPM) is an implementation of a secure cryptoprocessor that brings the notion of trusted computing to ordinary PCs by enabling a secure environment. Present TPM implementations focus on providing a tamper-proof boot environment, and persistent and volatile storage encryption.
Security chips for embedded systems are also available that provide the same level of physical protection for keys and other secret material as a smartcard processor or TPM but in a smaller, less complex and less expensive package. They are often referred to as cryptographic authentication devices and are used to authenticate peripherals, accessories and/or consumables. Like TPMs, they are usually turnkey integrated circuits intended to be embedded in a system, usually soldered to a PC board.
Features
Security measures used in secure cryptoprocessors:
Tamper-detecting and tamper-evident containment.
Conductive shield layers in the chip that prevent reading of internal signals.
Controlled execution to prevent timing delays from revealing any secret information.
Automatic zeroization of secrets in the event of tampering.
Chain of trust boot-loader which authenticates the operating system before loading it.
Chain of trust operating system which authenticates application software before loading it.
Hardware-based capability registers, implementing a one-way privilege separation model.
Degree of security
Secure cryptoprocessors, while useful, are not invulnerable to attack, particularly for well-equipped and determined opponents (e.g. a government intelligence agency) who are willing to expend enough resources on the project.
One attack on a secure cryptoprocessor targeted the IBM 4758. A team at the University of Cambridge reported the successful extraction of secret information from an IBM 4758, using a combination of mathematics, and special-purpose codebreaking hardware. However, this attack was not practical in real-world systems because it required the attacker to have full access to all API functions of the device. Normal and recommended practices use the integral access control system to split authority so that no one person could mount the attack.
While the vulnerability they exploited was a flaw in the software loaded on the 4758, and not the architecture of the 4758 itself, their attack serves as a reminder that a security system is only as secure as its weakest link: the strong link of the 4758 hardware was rendered useless by flaws in the design and specification of the software loaded on it.
Smartcards are significantly more vulnerable, as they are more open to physical attack. Additionally, hardware backdoors can undermine security in smartcards and other cryptoprocessors unless investment is made in anti-backdoor design methods.
In the case of full disk encryption applications, especially when implemented without a boot PIN, a cryptoprocessor would not be secure against a cold boot attack if data remanence could be exploited to dump memory contents after the operating system has retrieved the cryptographic keys from its TPM.
However, if all of the sensitive data is stored only in cryptoprocessor memory and not in external storage, and the cryptoprocessor is designed to be unable to reveal keys or decrypted or unencrypted data on chip bonding pads or solder bumps, then such protected data would be accessible only by probing the cryptoprocessor chip after removing any packaging and metal shielding layers from the cryptoprocessor chip. This would require both physical possession of the device as well as skills and equipment beyond that of most technical personnel.
Other attack methods involve carefully analyzing the timing of various operations that might vary depending on the secret value or mapping the current consumption versus time to identify differences in the way that '0' bits are handled internally vs. '1' bits. Or the attacker may apply temperature extremes, excessively high or low clock frequencies or supply voltage that exceeds the specifications in order to induce a fault. The internal design of the cryptoprocessor can be tailored to prevent these attacks.
Some secure cryptoprocessors contain dual processor cores and generate inaccessible encryption keys when needed so that even if the circuitry is reverse engineered, it will not reveal any keys that are necessary to securely decrypt software booted from encrypted flash memory or communicated between cores.
The first single-chip cryptoprocessor design was for copy protection of personal computer software (see US Patent 4,168,396, Sept 18, 1979) and was inspired by Bill Gates's Open Letter to Hobbyists.
History
The hardware security module (HSM), a type of secure cryptoprocessor, was invented by Egyptian-American engineer Mohamed M. Atalla, in 1972. He invented a high security module dubbed the "Atalla Box" which encrypted PIN and ATM messages, and protected offline devices with an un-guessable PIN-generating key. In 1972, he filed a patent for the device. He founded Atalla Corporation (now Utimaco Atalla) that year, and commercialized the "Atalla Box" the following year, officially as the Identikey system. It was a card reader and customer identification system, consisting of a card reader console, two customer PIN pads, intelligent controller and built-in electronic interface package. It allowed the customer to type in a secret code, which is transformed by the device, using a microprocessor, into another code for the teller. During a transaction, the customer's account number was read by the card reader. It was a success, and led to the wide use of high security modules.
Fearful that Atalla would dominate the market, banks and credit card companies began working on an international standard in the 1970s. The IBM 3624, launched in the late 1970s, adopted a similar PIN verification process to the earlier Atalla system. Atalla was an early competitor to IBM in the banking security market.
At the National Association of Mutual Savings Banks (NAMSB) conference in January 1976, Atalla unveiled an upgrade to its Identikey system, called the Interchange Identikey. It added the capabilities of processing online transactions and dealing with network security. Designed with the focus of taking bank transactions online, the Identikey system was extended to shared-facility operations. It was consistent and compatible with various switching networks, and was capable of resetting itself electronically to any one of 64,000 irreversible nonlinear algorithms as directed by card data information. The Interchange Identikey device was released in March 1976. Later in 1979, Atalla introduced the first network security processor (NSP). Atalla's HSM products protect 250 million card transactions every day as of 2013, and secure the majority of the world's ATM transactions as of 2014.
See also
Computer security
Crypto-shredding
FIPS 140-2
Hardware acceleration
SSL/TLS accelerator
Hardware security modules
Security engineering
Smart card
Trusted Computing
Trusted Platform Module
References
Further reading
Ross Anderson, Mike Bond, Jolyon Clulow and Sergei Skorobogatov, Cryptographic Processors — A Survey, April 2005 (PDF). This is not a survey of cryptographic processors; it is a survey of relevant security issues.
Robert M. Best, US Patent 4,278,837, July 14, 1981
R. Elbaz, et al., Hardware Engines for Bus Encryption — A Survey, 2005 (PDF).
David Lie, Execute Only Memory, .
Extracting a 3DES key from an IBM 4758
J. D. Tygar and Bennet Yee, A System for Using Physically Secure Coprocessors, Dyad
Cryptographic hardware
Cryptanalytic devices
Arab inventions
Egyptian inventions |
59868 | https://en.wikipedia.org/wiki/Interpreter%20%28computing%29 | Interpreter (computing) | In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them previously to have been compiled into a machine language program. An interpreter generally uses one of the following strategies for program execution:
Parse the source code and perform its behavior directly;
Translate source code into some efficient intermediate representation or object code and immediately execute that;
Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter Virtual Machine.
Early versions of the Lisp programming language and minicomputer and microcomputer BASIC dialects are examples of the first type. Perl, Raku, Python, MATLAB, and Ruby are examples of the second, while UCSD Pascal is an example of the third type. Source programs are compiled ahead of time and stored as machine independent code, which is then linked at run-time and executed by an interpreter and/or compiler (for JIT systems). Some systems, such as Smalltalk and contemporary versions of BASIC and Java, may also combine strategies two and three. Interpreters of various types have also been constructed for many languages traditionally associated with compilation, such as Algol, Fortran, Cobol, C and C++.
While interpretation and compilation are the two main means by which programming languages are implemented, they are not mutually exclusive, as most interpreting systems also perform some translation work, just like compilers. The terms "interpreted language" or "compiled language" signify that the canonical implementation of that language is an interpreter or a compiler, respectively. A high-level language is ideally an abstraction independent of particular implementations.
History
Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time (e.g. a shortage of program storage space, or no native support for floating point numbers). Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed. The first interpreted high-level language was Lisp. Lisp was first implemented in 1958 by Steve Russell on an IBM 704 computer. Russell had read John McCarthy's paper, and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code. The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions".
General operation
An interpreter usually consists of a set of known commands it can execute, and a list of these commands in the order a programmer wishes to execute them. Each command (also known as an instruction) contains the data the programmer wants to mutate, and information on how to mutate the data. For example, an interpreter might read ADD Wikipedia_Users, 5 and interpret it as a request to add five to the Wikipedia_Users variable.
Interpreters have a wide variety of instructions which are specialized to perform different tasks, but instructions for basic mathematical operations, branching, and memory management are common to most interpreters, making most of them Turing complete. Many interpreters are also closely integrated with a garbage collector and debugger.
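As a toy illustration of such a command loop, the following Python sketch interprets a small list of made-up instructions; the instruction set and the variable store are invented for the example.

def run(program):
    variables = {}
    for op, name, value in program:
        if op == "SET":
            variables[name] = value
        elif op == "ADD":                        # e.g. ADD Wikipedia_Users, 5
            variables[name] = variables.get(name, 0) + value
        elif op == "PRINT":
            print(name, "=", variables.get(name, 0))
        else:
            raise ValueError("unknown instruction: " + op)
    return variables

run([
    ("SET", "Wikipedia_Users", 10),
    ("ADD", "Wikipedia_Users", 5),
    ("PRINT", "Wikipedia_Users", None),
])                                               # prints: Wikipedia_Users = 15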
Compilers versus interpreters
Programs written in a high-level language are either directly executed by some kind of interpreter or converted into machine code by a compiler (and assembler and linker) for the CPU to execute.
While compilers (and assemblers) generally produce machine code directly executable by computer hardware, they can often (optionally) produce an intermediate form called object code. This is basically the same machine specific code but augmented with a symbol table with names and tags to make executable blocks (or modules) identifiable and relocatable. Compiled programs will typically use building blocks (functions) kept in a library of such object code modules. A linker is used to combine (pre-made) library files with the object file(s) of the application to form a single executable file. The object files that are used to generate an executable file are thus often produced at different times, and sometimes even by different languages (capable of generating the same object format).
A simple interpreter written in a low-level language (e.g. assembly) may have similar machine code blocks implementing functions of the high-level language stored, and executed when a function's entry in a look up table points to that code. However, an interpreter written in a high-level language typically uses another approach, such as generating and then walking a parse tree, or by generating and executing intermediate software-defined instructions, or both.
Thus, both compilers and interpreters generally turn source code (text files) into tokens, both may (or may not) generate a parse tree, and both may generate immediate instructions (for a stack machine, quadruple code, or by other means). The basic difference is that a compiler system, including a (built in or separate) linker, generates a stand-alone machine code program, while an interpreter system instead performs the actions described by the high-level program.
A compiler can thus make almost all the conversions from source code semantics to the machine level once and for all (i.e. until the program has to be changed) while an interpreter has to do some of this conversion work every time a statement or function is executed. However, in an efficient interpreter, much of the translation work (including analysis of types, and similar) is factored out and done only the first time a program, module, function, or even statement, is run, thus quite akin to how a compiler works. However, a compiled program still runs much faster, under most circumstances, in part because compilers are designed to optimize code, and may be given ample time for this. This is especially true for simpler high-level languages without (many) dynamic data structures, checks, or type checking.
In traditional compilation, the executable output of the linkers (.exe files or .dll files or a library, see picture) is typically relocatable when run under a general operating system, much like the object code modules are but with the difference that this relocation is done dynamically at run time, i.e. when the program is loaded for execution. On the other hand, compiled and linked programs for small embedded systems are typically statically allocated, often hard coded in a NOR flash memory, as there is often no secondary storage and no operating system in this sense.
Historically, most interpreter systems have had a self-contained editor built in. This is becoming more common also for compilers (then often called an IDE), although some programmers prefer to use an editor of their choice and run the compiler, linker and other tools manually. Historically, compilers predate interpreters because hardware at that time could not support both the interpreter and interpreted code and the typical batch environment of the time limited the advantages of interpretation.
Development cycle
During the software development cycle, programmers make frequent changes to source code. When using a compiler, each time a change is made to the source code, they must wait for the compiler to translate the altered source files and link all of the binary code files together before the program can be executed. The larger the program, the longer the wait. By contrast, a programmer using an interpreter does a lot less waiting, as the interpreter usually just needs to translate the code being worked on to an intermediate representation (or not translate it at all), thus requiring much less time before the changes can be tested. Effects are evident upon saving the source code and reloading the program. Compiled code is generally less readily debugged as editing, compiling, and linking are sequential processes that have to be conducted in the proper sequence with a proper set of commands. For this reason, many compilers also have an executive aid, known as a Make file and program. The Make file lists compiler and linker command lines and program source code files, but might take a simple command line menu input (e.g. "Make 3") which selects the third group (set) of instructions then issues the commands to the compiler, and linker feeding the specified source code files.
Distribution
A compiler converts source code into binary instruction for a specific processor's architecture, thus making it less portable. This conversion is made just once, on the developer's environment, and after that the same binary can be distributed to the user's machines where it can be executed without further translation. A cross compiler can generate binary code for the user machine even if it has a different processor than the machine where the code is compiled.
An interpreted program can be distributed as source code. It needs to be translated in each final machine, which takes more time but makes the program distribution independent of the machine's architecture. However, the portability of interpreted source code is dependent on the target machine actually having a suitable interpreter. If the interpreter needs to be supplied along with the source, the overall installation process is more complex than delivery of a monolithic executable, since the interpreter itself is part of what needs to be installed.
The fact that interpreted code can easily be read and copied by humans can be of concern from the point of view of copyright. However, various systems of encryption and obfuscation exist. Delivery of intermediate code, such as bytecode, has a similar effect to obfuscation, but bytecode could be decoded with a decompiler or disassembler.
Efficiency
The main disadvantage of interpreters is that an interpreted program typically runs slower than if it had been compiled. The difference in speeds could be tiny or great; often an order of magnitude and sometimes more. It generally takes longer to run a program under an interpreter than to run the compiled code but it can take less time to interpret it than the total time required to compile and run it. This is especially important when prototyping and testing code when an edit-interpret-debug cycle can often be much shorter than an edit-compile-run-debug cycle.
Interpreting code is slower than running the compiled code because the interpreter must analyze each statement in the program each time it is executed and then perform the desired action, whereas the compiled code just performs the action within a fixed context determined by the compilation. This run-time analysis is known as "interpretive overhead". Access to variables is also slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at compile time.
There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler. Some systems (such as some Lisps) allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed. Many interpreters do not execute the source code as it stands but convert it into some more compact internal form. Many BASIC interpreters replace keywords with single byte tokens which can be used to find the instruction in a jump table. A few interpreters, such as the PBASIC interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where commands tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a variable-length code requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation.
An interpreter might well use the same lexical analyzer and parser as the compiler and then interpret the resulting abstract syntax tree; a toy interpreter for such syntax trees is sketched below.
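The following is a minimal Python sketch of such a tree-walking evaluator for arithmetic expressions; the representation of syntax-tree nodes as nested tuples is an assumption made for the illustration.

def evaluate(node):
    # A node is either a number (a leaf) or a tuple (operator, left subtree, right subtree).
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    if op == "+":
        return evaluate(left) + evaluate(right)
    if op == "-":
        return evaluate(left) - evaluate(right)
    if op == "*":
        return evaluate(left) * evaluate(right)
    if op == "/":
        return evaluate(left) / evaluate(right)
    raise ValueError("unknown operator: " + op)

tree = ("*", ("+", 1, 2), 3)   # the syntax tree for the expression (1 + 2) * 3
print(evaluate(tree))          # 9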
Regression
Interpretation cannot be used as the sole method of execution: even though an interpreter can itself be interpreted and so on, a directly executed program is needed somewhere at the bottom of the stack because the code being interpreted is not, by definition, the same as the machine code that the CPU can execute.
Variations
Bytecode interpreters
There is a spectrum of possibilities between interpreting and compiling, depending on the amount of analysis performed before the program is executed. For example, Emacs Lisp is compiled to bytecode, which is a highly compressed and optimized representation of the Lisp source, but is not machine code (and therefore not tied to any particular hardware). This "compiled" code is then interpreted by a bytecode interpreter (itself written in C). The compiled code in this case is machine code for a virtual machine, which is implemented not in hardware, but in the bytecode interpreter. Such compiling interpreters are sometimes also called compreters. In a bytecode interpreter each instruction starts with a byte, and therefore bytecode interpreters have up to 256 instructions, although not all may be used. Some bytecodes may take multiple bytes, and may be arbitrarily complicated.
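For illustration, a bare-bones bytecode dispatch loop might look like the following Python sketch; the three opcodes and the stack-machine layout are invented for the example and do not correspond to any real bytecode format.

PUSH, ADD, PRINT = 0, 1, 2                 # invented one-byte opcodes

def execute(bytecode):
    stack = []
    pc = 0
    while pc < len(bytecode):
        opcode = bytecode[pc]
        if opcode == PUSH:                 # the next byte is an immediate operand
            stack.append(bytecode[pc + 1])
            pc += 2
        elif opcode == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif opcode == PRINT:
            print(stack.pop())
            pc += 1
        else:
            raise ValueError("bad opcode: " + str(opcode))

execute(bytes([PUSH, 2, PUSH, 3, ADD, PRINT]))   # prints 5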
Control tables, which do not necessarily ever need to pass through a compiling phase, dictate appropriate algorithmic control flow via customized interpreters in similar fashion to bytecode interpreters.
Threaded code interpreters
Threaded code interpreters are similar to bytecode interpreters but instead of bytes they use pointers. Each "instruction" is a word that points to a function or an instruction sequence, possibly followed by a parameter. The threaded code interpreter either loops fetching instructions and calling the functions they point to, or fetches the first instruction and jumps to it, and every instruction sequence ends with a fetch and jump to the next instruction. Unlike bytecode there is no effective limit on the number of different instructions other than available memory and address space. The classic example of threaded code is the Forth code used in Open Firmware systems: the source language is compiled into "F code" (a bytecode), which is then interpreted by a virtual machine.
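A rough Python analogue of this structure, using a list of function references in place of machine-level pointers (which loses the performance benefit but shows the shape of threaded code), might be:

def push(vm, operand):
    vm["stack"].append(operand)

def add(vm, _):
    b, a = vm["stack"].pop(), vm["stack"].pop()
    vm["stack"].append(a + b)

def emit(vm, _):
    print(vm["stack"].pop())

def run_threaded(code):
    vm = {"stack": []}
    for word, parameter in code:   # each "instruction" is a reference to a function
        word(vm, parameter)

run_threaded([(push, 2), (push, 3), (add, None), (emit, None)])   # prints 5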
Abstract syntax tree interpreters
In the spectrum between interpreting and compiling, another approach is to transform the source code into an optimized abstract syntax tree (AST), then execute the program following this tree structure, or use it to generate native code just-in-time. In this approach, each sentence needs to be parsed just once. As an advantage over bytecode, the AST keeps the global program structure and relations between statements (which is lost in a bytecode representation), and when compressed provides a more compact representation. Thus, using AST has been proposed as a better intermediate format for just-in-time compilers than bytecode. Also, it allows the system to perform better analysis during runtime.
However, for interpreters, an AST causes more overhead than a bytecode interpreter, because of nodes related to syntax performing no useful work, of a less sequential representation (requiring traversal of more pointers) and of overhead visiting the tree.
Just-in-time compilation
Further blurring the distinction between interpreters, bytecode interpreters and compilation is just-in-time (JIT) compilation, a technique in which the intermediate representation is compiled to native machine code at runtime. This confers the efficiency of running native code, at the cost of startup time and increased memory use when the bytecode or AST is first compiled. The earliest published JIT compiler is generally attributed to work on LISP by John McCarthy in 1960. Adaptive optimization is a complementary technique in which the interpreter profiles the running program and compiles its most frequently executed parts into native code. The latter technique is a few decades old, appearing in languages such as Smalltalk in the 1980s.
Just-in-time compilation has gained mainstream attention amongst language implementers in recent years, with Java, the .NET Framework, most modern JavaScript implementations, and Matlab now including JIT compilers.
Template Interpreter
Making the distinction between compilers and interpreters yet again even more vague is a special interpreter design known as a template interpreter. Rather than implement the execution of code by virtue of a large switch statement containing every possible bytecode, while operating on a software stack or a tree walk, a template interpreter maintains a large array of bytecode (or any efficient intermediate representation) mapped directly to corresponding native machine instructions that can be executed on the host hardware as key-value pairs, known as a "template". When the particular code segment is executed the interpreter simply loads the opcode mapping in the template and directly runs it on the hardware. Due to its design, the template interpreter very strongly resembles a just-in-time compiler rather than a traditional interpreter, however it is technically not a JIT because it merely translates code from the language into native calls one opcode at a time rather than creating optimized sequences of CPU executable instructions from the entire code segment. Due to the interpreter's simple design of passing calls directly to the hardware rather than implementing them directly, it is much faster than every other type, even bytecode interpreters, and to an extent less prone to bugs, but as a tradeoff is more difficult to maintain due to the interpreter having to support translation to multiple different architectures instead of a platform-independent virtual machine/stack. To date, the only template interpreter implementation of a language to exist is the interpreter within the HotSpot/OpenJDK Java Virtual Machine reference implementation.
Self-interpreter
A self-interpreter is a programming language interpreter written in a programming language which can interpret itself; an example is a BASIC interpreter written in BASIC. Self-interpreters are related to self-hosting compilers.
If no compiler exists for the language to be interpreted, creating a self-interpreter requires the implementation of the language in a host language (which may be another programming language or assembler). By having a first interpreter such as this, the system is bootstrapped and new versions of the interpreter can be developed in the language itself. It was in this way that Donald Knuth developed the TANGLE interpreter for the language WEB of the industrial standard TeX typesetting system.
Defining a computer language is usually done in relation to an abstract machine (so-called operational semantics) or as a mathematical function (denotational semantics). A language may also be defined by an interpreter in which the semantics of the host language is given. The definition of a language by a self-interpreter is not well-founded (it cannot define a language), but a self-interpreter tells a reader about the expressiveness and elegance of a language. It also enables the interpreter to interpret its source code, the first step towards reflective interpreting.
An important design dimension in the implementation of a self-interpreter is whether a feature of the interpreted language is implemented with the same feature in the interpreter's host language. An example is whether a closure in a Lisp-like language is implemented using closures in the interpreter language or implemented "manually" with a data structure explicitly storing the environment. The more features implemented by the same feature in the host language, the less control the programmer of the interpreter has; a different behavior for dealing with number overflows cannot be realized if the arithmetic operations are delegated to corresponding operations in the host language.
Some languages such as Lisp and Prolog have elegant self-interpreters. Much research on self-interpreters (particularly reflective interpreters) has been conducted in the Scheme programming language, a dialect of Lisp. In general, however, any Turing-complete language allows writing of its own interpreter. Lisp is such a language, because Lisp programs are lists of symbols and other lists. XSLT is such a language, because XSLT programs are written in XML. A sub-domain of metaprogramming is the writing of domain-specific languages (DSLs).
Clive Gifford introduced a measure of the quality of a self-interpreter, the eigenratio: the limit, as N goes to infinity, of the ratio between the computer time spent running a stack of N self-interpreters and the time spent running a stack of N − 1 self-interpreters. This value does not depend on the program being run.
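Written as a formula, with T(N) denoting the time to run a fixed program under a stack of N self-interpreters, the definition reads:

\text{eigenratio} = \lim_{N \to \infty} \frac{T(N)}{T(N-1)}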
The book Structure and Interpretation of Computer Programs presents examples of meta-circular interpretation for Scheme and its dialects. Other examples of languages with a self-interpreter are Forth and Pascal.
Microcode
Microcode is a very commonly used technique "that imposes an interpreter between the hardware and the architectural level of a computer". As such, the microcode is a layer of hardware-level instructions that implement higher-level machine code instructions or internal state machine sequencing in many digital processing elements. Microcode is used in general-purpose central processing units, as well as in more specialized processors such as microcontrollers, digital signal processors, channel controllers, disk controllers, network interface controllers, network processors, graphics processing units, and in other hardware.
Microcode typically resides in special high-speed memory and translates machine instructions, state machine data or other input into sequences of detailed circuit-level operations. It separates the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also facilitates the building of complex multi-step instructions, while reducing the complexity of computer circuits. Writing microcode is often called microprogramming and the microcode in a particular processor implementation is sometimes called a microprogram.
More extensive microcoding allows small and simple microarchitectures to emulate more powerful architectures with wider word length, more execution units and so on, which is a relatively simple way to achieve software compatibility between different products in a processor family.
Computer processor
Even a non-microcoded computer processor can itself be considered to be a parsing immediate-execution interpreter that is written in a general-purpose hardware description language such as VHDL to create a system that parses machine code instructions and immediately executes them.
Applications
Interpreters are frequently used to execute command languages and glue languages, since each operator executed in a command language is usually an invocation of a complex routine such as an editor or compiler.
Self-modifying code can easily be implemented in an interpreted language. This relates to the origins of interpretation in Lisp and artificial intelligence research.
Virtualization. Machine code intended for a hardware architecture can be run using a virtual machine. This is often used when the intended architecture is unavailable, or among other uses, for running multiple copies.
Sandboxing: While some types of sandboxes rely on operating system protections, an interpreter or virtual machine is often used. The actual hardware architecture and the originally intended hardware architecture may or may not be the same. This may seem pointless, except that sandboxes are not compelled to actually execute all the instructions in the source code they are processing. In particular, a sandbox can refuse to execute code that violates any security constraints it is operating under.
Emulators for running computer software written for obsolete and unavailable hardware on more modern equipment.
See also
BASIC interpreter
Command-line interpreter
Compiled language
Dynamic compilation
Homoiconicity
Meta-circular evaluator
Partial evaluation
References
External links
IBM Card Interpreters page at Columbia University
Theoretical Foundations For Practical 'Totally Functional Programming' (Chapter 7 especially) Doctoral dissertation tackling the problem of formalising what an interpreter is
Short animation explaining the key conceptual difference between interpreters and compilers
Programming language implementation
59957 | https://en.wikipedia.org/wiki/Smart%20card | Smart card | A smart card, chip card, or integrated circuit card (ICC or IC card) is a physical electronic authorization device, used to control access to a resource. It is typically a plastic credit card-sized card with an embedded integrated circuit (IC) chip. Many smart cards include a pattern of metal contacts to electrically connect to the internal chip. Others are contactless, and some are both. Smart cards can provide personal identification, authentication, data storage, and application processing. Applications include identification, financial, mobile phones (SIM), public transit, computer security, schools, and healthcare. Smart cards may provide strong security authentication for single sign-on (SSO) within organizations. Numerous nations have deployed smart cards throughout their populations.
The universal integrated circuit card, or SIM card, is also a type of smart card. Annually, 10.5 billion smart card IC chips are manufactured, including 5.44 billion SIM card IC chips.
History
The basis for the smart card is the silicon integrated circuit (IC) chip. It was invented by Robert Noyce at Fairchild Semiconductor in 1959, and was made possible by Mohamed M. Atalla's silicon surface passivation process (1957) and Jean Hoerni's planar process (1959). The invention of the silicon integrated circuit led to the idea of incorporating it onto a plastic card in the late 1960s. Smart cards have since used MOS integrated circuit chips, along with MOS memory technologies such as flash memory and EEPROM (electrically erasable programmable read-only memory).
Invention
The idea of incorporating an integrated circuit chip onto a plastic card was first introduced by two German engineers in the late 1960s, Helmut Gröttrup and Jürgen Dethloff. In February 1967, Gröttrup filed the patent DE1574074 in West Germany for a tamper-proof identification switch based on a semiconductor device. Its primary use was intended to provide individual copy-protected keys for releasing the tapping process at unmanned gas stations. In September 1968, Helmut Gröttrup, together with Dethloff as an investor, filed further patents for this identification switch, first in Austria and in 1969 as subsequent applications in the United States, Great Britain, West Germany and other countries.
Independently, Kunitaka Arimura of the Arimura Technology Institute in Japan developed a similar idea of incorporating an integrated circuit onto a plastic card, and filed a smart card patent in March 1970. The following year, Paul Castrucci of IBM filed an American patent titled "Information Card" in May 1971.
In 1974 Roland Moreno patented a secured memory card later dubbed the "smart card". In 1976, Jürgen Dethloff introduced the known element (called "the secret") to identify the gate user, as described in US Patent 4,105,156.
In 1977, Michel Ugon from Honeywell Bull invented the first microprocessor smart card with two chips: one microprocessor and one memory, and in 1978, he patented the self-programmable one-chip microcomputer (SPOM) that defines the necessary architecture to program the chip. Three years later, Motorola used this patent in its "CP8". At that time, Bull had 1,200 patents related to smart cards. In 2001, Bull sold its CP8 division together with its patents to Schlumberger, who subsequently combined its own internal smart card department and CP8 to create Axalto. In 2006, Axalto and Gemplus, at the time the world's top two smart-card manufacturers, merged and became Gemalto. In 2008, Dexa Systems spun off from Schlumberger and acquired Enterprise Security Services business, which included the smart-card solutions division responsible for deploying the first large-scale smart-card management systems based on public key infrastructure (PKI).
The first mass use of the cards was as a telephone card for payment in French payphones, starting in 1983.
Carte bleue
After the Télécarte, microchips were integrated into all French Carte Bleue debit cards in 1992. Customers inserted the card into the merchant's point-of-sale (POS) terminal, then typed the personal identification number (PIN), before the transaction was accepted. Only very limited transactions (such as paying small highway tolls) are processed without a PIN.
Smart-card-based "electronic purse" systems store funds on the card, so that readers do not need network connectivity. They entered European service in the mid-1990s. They have been common in Germany (Geldkarte), Austria (Quick Wertkarte), Belgium (Proton), France (Moneo), the Netherlands (Chipknip Chipper (decommissioned in 2015)), Switzerland ("Cash"), Norway ("Mondex"), Spain ("Monedero 4B"), Sweden ("Cash", decommissioned in 2004), Finland ("Avant"), UK ("Mondex"), Denmark ("Danmønt") and Portugal ("Porta-moedas Multibanco").
Private electronic purse systems have also been deployed, such as one used by the United States Marine Corps (USMC) at Parris Island, allowing small payments at the cafeteria.
Since the 1990s, smart cards have been the subscriber identity modules (SIMs) used in GSM mobile-phone equipment. Mobile phones are widely used across the world, so smart cards have become very common.
EMV
Europay MasterCard Visa (EMV)-compliant cards and equipment are widespread with the deployment led by European countries. The United States started later deploying the EMV technology in 2014, with the deployment still in progress in 2019. Typically, a country's national payment association, in coordination with MasterCard International, Visa International, American Express and Japan Credit Bureau (JCB), jointly plan and implement EMV systems.
Historically, in 1993 several international payment companies agreed to develop smart-card specifications for debit and credit cards. The original brands were MasterCard, Visa, and Europay. The first version of the EMV system was released in 1994. In 1998 the specifications became stable.
EMVCo maintains these specifications. EMVco's purpose is to assure the various financial institutions and retailers that the specifications retain backward compatibility with the 1998 version. EMVco upgraded the specifications in 2000 and 2004.
EMV-compliant cards were first accepted in Malaysia in 2005 and later in the United States in 2014. MasterCard was the first company that was allowed to use the technology in the United States. The United States felt pushed to use the technology because of the increase in identity theft. The credit card information stolen from Target in late 2013 was one of the largest indicators that American credit card information was not safe. Target made the decision on April 30, 2014 that it would try to implement the smart chip technology in order to protect itself from future credit card identity theft.
Before 2014, the consensus in America was that there were enough security measures to avoid credit card theft and that the smart chip was not necessary. The cost of the smart chip technology was significant, which was why most of the corporations did not want to pay for it in the United States. The debate ended when Target sent out a notice stating that unauthorized access to magnetic stripe data had cost it over 300 million dollars; this, along with the increasing cost of online credit theft, was enough for the United States to invest in the technology. The adoption of EMV cards increased significantly in 2015, when the credit card companies' liability shift took effect in October.
Development of contactless systems
Contactless smart cards do not require physical contact between a card and reader. They are becoming more popular for payment and ticketing. Typical uses include mass transit and motorway tolls. Visa and MasterCard implemented a version deployed in 2004–2006 in the U.S., with Visa's current offering called Visa Contactless. Most contactless fare collection systems are incompatible, though the MIFARE Standard card from NXP Semiconductors has a considerable market share in the US and Europe.
The use of "contactless" smart cards in transport has also grown through the use of low-cost chips such as NXP's MIFARE Ultralight and paper/card/PET media rather than PVC. This has reduced media cost so the cards can be used for low-cost tickets and short-term transport passes (typically up to one year). The cost is typically 10% that of a PVC smart card with larger memory. They are distributed through vending machines, ticket offices and agents. Use of paper/PET is less harmful to the environment than traditional PVC cards.
Smart cards are also being introduced for identification and entitlement by regional, national, and international organizations. These uses include citizen cards, drivers’ licenses, and patient cards. In Malaysia, the compulsory national ID MyKad enables eight applications and has 18 million users. Contactless smart cards are part of ICAO biometric passports to enhance security for international travel.
Complex smart cards
Complex Cards are smart cards that conform to the ISO/IEC 7810 standard and include components in addition to those found in traditional single-chip smart cards. Complex Cards were invented by Cyril Lalo and Philippe Guillaud in 1999 when they designed a chip smart card with additional components, building upon the initial concept, patented by Alain Bernard, of using audio frequencies to transmit data. The first Complex Card prototype was developed collaboratively by Cyril Lalo and Philippe Guillaud, who were working at AudioSmartCard at the time, and Henri Boccia and Philippe Patrice, who were working at Gemplus. It was ISO 7810-compliant and included a battery, a piezoelectric buzzer and a button, and delivered audio functions, all within a 0.84 mm thick card.
The Complex Card pilot, developed by AudioSmartCard, was launched in 2002 by Crédit Lyonnais, a French financial institution. This pilot featured acoustic tones as a means of authentication. Although Complex Cards were developed since the inception of the smart card industry, they only reached maturity after 2010.
Complex Cards can accommodate various peripherals including:
One or more buttons,
A digital keyboard,
An alphabetic keyboard,
A touch keyboard,
A small display, for a dynamic Card Security Code (CSC) for instance,
A larger digital display, for an OTP, a balance, or a QR code,
An alphanumeric display,
A fingerprint sensor,
An LED,
A buzzer or speaker.
While first generation Complex Cards were battery powered, the second generation is battery-free and receives power through the usual card connector and/or induction.
Sound, generated by a buzzer, was the preferred means of communication for the first projects involving Complex Cards. Later, as displays improved, visual communication became the norm and is now present in almost all Complex Cards.
Functionalities
Complex Cards support all communication protocols present on regular smart cards: contact, thanks to a contact pad as defined in the ISO/IEC 7816 standard; contactless, following the ISO/IEC 14443 standard; and magstripe.
Developers of Complex Cards target several needs when developing them:
One Time Password,
Provide account information,
Provide computation capabilities,
Provide a means of transaction security,
Provide a means of user authentication.
One time password
A Complex Card can be used to compute a cryptographic value, such as a one-time password (OTP). The one-time password is generated by a cryptoprocessor encapsulated in the card. To implement this function, the cryptoprocessor must be initialized with a seed value, which allows the OTPs produced by each card to be identified. A hash of the seed value has to be stored securely within the card to prevent unauthorized prediction of the generated OTPs.
One-time password generation is based either on incremental values (event-based) or on a real-time clock (time-based). Clock-based one-time password generation requires the Complex Card to be equipped with a real-time clock and a quartz crystal.
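A common way to realize such card-generated one-time passwords is an HMAC-based construction along the lines of HOTP (event-based) and TOTP (time-based). The Java sketch below only illustrates the idea; it is not the code of any particular card, and the seed, counter values and 6-digit output length are assumptions made for the example. In a real Complex Card the equivalent computation runs inside the card's secure cryptoprocessor.

```java
import java.nio.ByteBuffer;
import java.time.Instant;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch of event-based and time-based one-time password generation.
public class OtpSketch {

    static int otp(byte[] seed, long counter) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(seed, "HmacSHA1"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
        int offset = hash[hash.length - 1] & 0x0F;           // dynamic truncation
        int binary = ((hash[offset] & 0x7F) << 24)
                   | ((hash[offset + 1] & 0xFF) << 16)
                   | ((hash[offset + 2] & 0xFF) << 8)
                   |  (hash[offset + 3] & 0xFF);
        return binary % 1_000_000;                            // 6-digit code
    }

    public static void main(String[] args) throws Exception {
        byte[] seed = "per-card-secret-seed".getBytes();      // placeholder seed

        // Event-based: the counter is incremented on every use (e.g. button press).
        long eventCounter = 42;
        System.out.printf("event-based OTP: %06d%n", otp(seed, eventCounter));

        // Time-based: the counter is derived from the real-time clock (30 s steps).
        long timeCounter = Instant.now().getEpochSecond() / 30;
        System.out.printf("time-based  OTP: %06d%n", otp(seed, timeCounter));
    }
}
```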
Complex Cards used to generate One Time Password have been developed for:
Standard Chartered, Singapore,
Bank of America, USA,
Erste Bank, Croatia,
Verisign, USA,
RSA Security.
Account information
A Complex Card with buttons can display the balance of one or multiple account(s) linked to the card. Typically, either one button is used to display the balance in the case of a single account card or, in the case of a card linked to multiple accounts, a combination of buttons is used to select a specific account's balance.
For additional security, features such as requiring the user to enter an identification or a security value such as a PIN can be added to a Complex Card.
Complex Cards used to provide account information have been developed for:
Getin Bank, Poland,
TEB, Turkey.
The latest generation of battery free, button free, Complex Cards can display a balance or other kind of information without requiring any input from the card holder. The information is updated during the use of the card. For instance, in a transit card, key information such as the monetary value balance, the number of remaining trips or the expiry date of a transit pass can be displayed.
Transaction security
A Complex Card deployed as a payment card can be equipped with the capability to provide transaction security. Typically, online payments are made secure thanks to the Card Security Code (CSC), also known as the card verification code (CVC2) or card verification value (CVV2). The card security code (CSC) is a 3- or 4-digit number printed on a credit or debit card, used as a security feature for card-not-present (CNP) payment card transactions to reduce the incidence of fraud.
The Card Security Code (CSC) is to be given to the merchant by the cardholder in order to complete a card-not-present transaction. The CSC is transmitted along with other transaction data and verified by the card issuer. The Payment Card Industry Data Security Standard (PCI DSS) prohibits the storage of the CSC by the merchant or any stakeholder in the payment chain. Although designed to be a security feature, the static CSC is susceptible to fraud as it can easily be memorized by a shop attendant, who could then use it for fraudulent online transactions or sale on the dark web.
This vulnerability has led the industry to develop a Dynamic Card Security Code (DCSC) that can be changed at certain time intervals, or after each contact or contactless EMV transaction. This Dynamic CSC brings significantly better security than a static CSC.
The first generation of Dynamic CSC cards, developed by NagraID Security, required a battery, a quartz crystal and a real-time clock (RTC) embedded within the card to power the computation of a new Dynamic CSC after expiration of the programmed period.
The second generation of Dynamic CSC cards, developed by Ellipse World, Inc., does not require any battery, quartz crystal, or RTC to compute and display the new dynamic code. Instead, the card obtains its power either through the usual card connector or by induction during every EMV transaction with the point-of-sale (POS) terminal or automated teller machine (ATM), and uses it to compute a new DCSC.
The Dynamic CSC, also called dynamic cryptogram, is marketed by several companies, under different brand names:
MotionCode, first developed by NagraID Security, a company later acquired by Idemia,
DCV, the solution offered by Thales,
EVC (Ellipse Verification Code) by Ellipse, a Los Angeles, USA based company.
The advantage of the Dynamic Card Security Code (DCSC) is that new information is transmitted with the payment transactions, thus making it useless for a potential fraudster to memorize or store it. A transaction with a Dynamic Card Security Code is carried out exactly the same way, with the same processes and use of parameters as a transaction with a static code in a card-not-present transaction. Upgrading to a DCSC allows cardholders and merchants to continue their payment habits and processes undisturbed.
User authentication
Complex Cards can be equipped with biometric sensors allowing for stronger user authentication. In the typical use case, fingerprint sensors are integrated into a payment card to bring a higher level of user authentication than a PIN.
In order to implement user authentication using a fingerprint enabled smart card, the user has to authenticate himself/herself to the card by means of the fingerprint before starting a payment transaction.
Several companies offer cards with fingerprint sensors:
Thales: Biometric card,
Idemia: F.Code, originally developed by NagraID Security,
Idex Biometrics,
NXP Semiconductors,
…
Components
Complex Cards can incorporate a wide variety of components. The choice of components drives functionality, influences cost, power supply needs, and manufacturing complexity.
Buttons
Depending on Complex Card types, buttons have been added to allow an easy interaction between the user and the card. Typically, these buttons are used to:
Select an action, such as which account's balance to display, or the unit (e.g. currency or number of trips) in which the information is displayed,
Enter numeric data via the addition of a digital keypad,
Enter text data via the addition of an alphanumeric keyboard.
While separate keys were used on early prototypes, capacitive keyboards are now the most popular solution, thanks to technology developments by AudioSmartCard International SA.
Interaction with a capacitive keyboard requires constant power; therefore a battery and a mechanical button are required to activate the card.
Buzzer
The first Complex Cards were equipped with a buzzer that made it possible to broadcast sound. This feature was generally used over the phone to send identification data such as an identifier and One-Time Passwords (OTPs). Technologies used for sound transmission include DTMF (Dual-tone multi-frequency signaling) or FSK (Frequency-shift keying).
Companies that offered cards with buzzers include:
AudioSmartCard,
nCryptone,
Prosodie,
Société d'exploitation du jeton sécurisé – SEJS.
Display
Displaying data is an essential part of Complex Card functionalities. Depending on the information that needs to be shown, displays can be digital or alphanumeric and of varying lengths. Displays can be located either on the front or back of the card. A front display is the most common solution for showing information such as a One-Time Password or an electronic purse balance. A rear display is more often used for showing a Dynamic Card Security Code (DCSC).
Displays can be made using two technologies:
Liquid-crystal displays (LCD): LCDs are easily available from a wide variety of suppliers, and they are able to display either digits or alphabetical data. However, to be fitted in a complex smart card, LCDs need to have a certain degree of flexibility. Also, LCDs need to be powered to keep information displayed.
Bistable displays, also known as ferroelectric liquid crystal displays, are increasingly used as they only require power to refresh the displayed information. The displayed data remains visible without the need for any power supply. Bistable displays are also available in a variety of specifications, displaying digits or pixels. Bistable displays are available from E Ink Corporation, among others.
Cryptoprocessor
If a Complex Card is dedicated to cryptographic computations such as generating a one-time password, it may require a secure cryptoprocessor.
Power supply
As Complex Cards contain more components than traditional smart cards, their power consumption must be carefully monitored.
First generation Complex Cards require a power supply even in standby mode. As such, product designers generally included a battery in their design. Incorporating a battery creates an additional burden in terms of complexity, cost, space and flexibility in an already dense design. Including a battery in a Complex Card increases the complexity of the manufacturing process as a battery cannot be hot laminated.
Second generation Complex Cards feature a battery-free design. These cards harvest the necessary power from external sources; for example when the card interacts in a contact or contactless fashion with a payment system or an NFC-enabled smartphone. The use of a bistable display in the card design ensures that the screen remains legible even when the Complex Card is unconnected to the power source.
Manufacturing
Complex Card manufacturing methods are inherited from the smart card industry and from the electronics mounting industry. Because Complex Cards incorporate several components while having to remain within 0.8 mm thickness, stay flexible, and comply with the ISO/IEC 7810, ISO/IEC 7811 and ISO/IEC 7816 standards, their manufacture is more complex than that of standard smart cards.
One of the most popular manufacturing processes in the smart card industry is lamination. This process involves laminating an inlay between two card faces. The inlay contains the needed electronic components with an antenna printed on an inert support.
Battery-powered Complex Cards typically require a cold-lamination manufacturing process. This process increases the manufacturing lead time and the overall cost of such a Complex Card.
Second generation, battery-free Complex Cards can be manufactured with the existing hot lamination process. This automated process, inherited from traditional smart card manufacturing, enables the production of Complex Cards in large quantities while keeping costs under control, a necessity for the evolution from a niche to a mass market.
Card life cycle
As with standard smart cards, Complex Cards go through a lifecycle comprising the following steps:
Manufacturing,
Personalization,
User enrollment, if needed by the application,
Provisioning,
Active life,
Cancellation,
Recycling / destruction.
Because Complex Cards offer more functionality than standard smart cards, and due to their complexity, their personalization can take longer or require more inputs. Having Complex Cards that can be personalized by the same machines and the same processes as regular smart cards allows them to be integrated more easily into existing manufacturing chains and applications.
First generation, battery-operated Complex Cards require specific recycling processes, mandated by different regulatory bodies. Additionally, keeping battery-operated Complex Cards in inventory for extended periods of time may reduce their performance due to battery ageing.
Second-generation battery-free technology ensures operation during the entire lifetime of the card and eliminates self-discharge, providing extended shelf life, and is more eco-friendly.
History and major players
Since the inception of smart cards, innovators have been trying to add extra features. As technologies have matured and have been industrialized, several smart card industry players have been involved in Complex Cards.
The Complex Card concept began in 1999 when Cyril Lalo and Philippe Guillaud, its inventors, first designed a smart card with additional components. The first prototype was developed collaboratively by Cyril Lalo, who was the CEO of AudioSmartCard at the time, and Henri Boccia and Philippe Patrice, from Gemplus. The prototype included a button and audio functions on a 0.84 mm thick ISO 7810-compliant card.
Since then, Complex Cards have been mass-deployed primarily by NagraID Security.
AudioSmartCard
AudioSmartCard International SA was instrumental in developing the first Complex Card that included a battery, a piezoelectric buzzer, a button, and audio functions all on a 0.84mm thick, ISO 7810-compatible card.
AudioSmartCard was founded in 1993 and specialized in the development and marketing of acoustic tokens incorporating security features. These acoustic tokens exchanged data in the form of sounds transmitted over a phone line. In 1999, AudioSmartCard transitioned to a new leadership under Cyril Lalo and Philippe Guillaud, who also became major shareholders. They made AudioSmartCard evolve towards the smart card world. In 2003 Prosodie, a subsidiary of Capgemini, joined the shareholders of AudioSmartCard.
AudioSmartCard was renamed nCryptone, in 2004.
CardLab Innovation
CardLab Innovation, incorporated in 2006 in Herlev, Denmark, specializes in Complex Cards that include a switch, a biometric reader, an RFID jammer, and one or more magstripes. The company works with manufacturing partners in China and Thailand and owns a card lamination factory in Thailand.
Coin
Coin was a US-based startup founded in 2012 by Kanishk Parashar. It developed a Complex Card capable of storing the data of several credit and debit cards. The card prototype was equipped with a display and a button that enabled the user to switch between different cards. In 2015, the original Coin card concept evolved into Coin 2.0 adding contactless communication to its original magstripe emulation.
Coin was acquired by Fitbit in May 2016 and all Coin activities were discontinued in February 2017.
Ellipse World, Inc.
Ellipse World, Inc. was founded in 2017 by Cyril Lalo and Sébastien Pochic, both recognized experts in Complex Card technology. Ellipse World, Inc. specializes in battery-free Complex Card technology.
The Ellipse patented technologies enable smart card manufacturers to use their existing dual interface payment card manufacturing process and supply chain to build battery-free, second generation Complex Cards with display capabilities. Thanks to this ease of integration, smart card vendors are able to address banking, transit and prepaid cards markets.
EMue Technologies
EMue Technologies, headquartered in Melbourne, Australia, designed and developed authentication solutions for the financial services industry from 2009 to 2015. The company's flagship product, developed in collaboration with Cyril Lalo and Philippe Guillaud, was the eMue Card, a Visa CodeSure credit card with an embedded keypad, a display and a microprocessor.
Feitian Technologies
Feitian Technologies, a China-based company created in 1998, provides cyber security products and solutions. The company offers security solutions based on smart cards as well as other authentication devices. These include Complex Cards, that incorporate a display, a keypad or a fingerprint sensor.
Fingerprint Cards
Fingerprint Cards AB (or Fingerprints) is a Swedish company specializing in biometric solutions. The company sells biometric sensors and has recently introduced payment cards incorporating a fingerprint sensor such as the Zwipe card, a biometric dual-interface payment card using an integrated sensor from Fingerprints.
Giesecke+Devrient
Giesecke & Devrient, also known as G+D, is a German company headquartered in Munich that provides banknotes, security printing, smart cards and cash handling systems. Its smart card portfolio includes display cards, OTP cards, as well as cards displaying a Dynamic CSC.
Gemalto
Gemalto, a division of Thales Group, is a major player in the secure transaction industry.
The company’s Complex Card portfolio includes cards with a display or a fingerprint sensor. These cards may display an OTP or a Dynamic CSC.
Idemia
Idemia is the product of the 2017 merger of Oberthur Technologies and Morpho. The combined company has positioned itself as a global provider of financial cards, SIM cards, biometric devices as well as public and private identity solutions. Due to Oberthur’s acquisition of NagraID Security in 2014, Idemia’s Complex Card offerings include the F.CODE biometric payment card that includes a fingerprint sensor, and its battery-powered Motion Code card that displays a Dynamic CSC.
Idex
Idex Biometrics ASA, incorporated in Norway, specializes in fingerprint identification technologies for personal authentication. The company offers fingerprint sensors and modules that are ready to be embedded into cards.
Innovative Card Technologies
Founded in 2002, by Alan Finkelstein, Innovative Card Technologies developed and commercialized enhancements for the smart card market. The company acquired the display card assets of nCryptone in 2006. Innovative Card Technologies has ceased its activities.
NagraID
Nagra ID, now known as NID, was a wholly-owned subsidiary of the Kudelski Group until 2014. NID can trace its history with Complex Cards back to 2003 when it collaborated on development with nCryptone. Nagra ID was instrumental in developing the cold lamination process for Complex Cards manufacturing.
Nagra ID manufactures Complex Cards that can include a battery, buttons, displays or other electronic components.
NagraID Security
Nagra ID Security began in 2008 as a spinoff of Nagra ID to focus on Complex Card development and manufacturing. The company was owned by Kudelski Group (50%), Cyril Lalo (25%) and Philippe Guillaud (25%).
NagraID Security quickly became a leading player in the adoption of Complex Cards due, in large part, to its development of MotionCode cards, which featured a small display for a dynamic Card Security Code (CVV2).
NagraID Security was the first Complex Cards manufacturer to develop a mass market for payment display cards. Their customers included:
ABSA, South Africa,
Banco Bicentenario, Venezuela,
Banco MontePaschi, Belgium,
Erste Bank, Croatia,
Getin Bank, Poland,
Standard Chartered Bank, Singapore.
NagraID Security also delivered One-Time Password cards to companies including:
Bank of America,
HID Security,
Paypal,
RSA Security,
Verisign.
In 2014, NagraID Security was sold to Oberthur Technologies (now Idemia).
nCryptone
nCryptone emerged in 2004 from the renaming of AudioSmartCard. nCryptone was headed by Cyril Lalo and Philippe Guillaud and developed technologies around authentication servers and devices.
nCryptone display card assets were acquired by Innovative Card Technologies in 2006.
Oberthur Technologies, now Idemia
Oberthur Technologies, now Idemia, is one of the major players in the secure transactions industry. It acquired the business of NagraID Security in 2014. Oberthur then merged with Morpho and the combined entity was renamed Idemia in 2017.
Major references in the Complex Cards business include:
BPCE Group, France,
Orange Bank, France,
Société Générale, France.
Plastc
Set up in 2009, Plastc announced a single card that could digitally hold the data of up to 20 credit or debit cards. The company succeeded in raising US$9 million through preorders but failed to deliver any product. Plastc was then acquired in 2017 by Edge Mobile Payments, a Santa Cruz-based Fintech company. The Plastc project continues as the Edge card, a dynamic payment card that consolidates several payment cards in one device. The card is equipped with a battery and an ePaper screen and can store data from up to 50 credit, debit, loyalty and gift cards.
Stratos
Stratos was created in 2012 in Ann Arbor, Michigan, USA. In 2015, Stratos developed the Stratos Bluetooth Connected Card, which was designed to integrate up to three credit and debit cards in a single card format and featured a smartphone app used to manage the card. Thanks to its lithium-ion thin-film battery, the Stratos card was equipped with LEDs and communicated in contactless mode and over Bluetooth Low Energy.
In 2017 Stratos was acquired by CardLab Innovation, a company headquartered in Herlev, Denmark.
Swyp
SWYP was the brand name of a card developed by Qvivr, a company incorporated in 2014 in Fremont, California. SWYP was introduced in 2015 and dubbed the world’s first smart wallet. SWYP was a metal card with the ability to combine over 25 credit, debit, gift and loyalty cards. The card worked in conjunction with a smartphone app used to manage the cards. The Swyp card included a battery, a button and a matrix display that showed which card was in use. The company registered users in its beta testing program, but the product never shipped on a commercial scale.
Qvivr raised US$5 million in January 2017 and went out of business in November 2017.
Businesses
Complex Cards have been adopted by numerous financial institutions worldwide. They may include different functionalities such as payment cards (credit, debit, prepaid), One-Time Password, mass-transit, and dynamic Card Security Code (CVV2).
Complex Card technology is used by numerous financial institutions including:
ABSA, South Africa,
Banca MontePaschi Belgio,
Bank of America, USA,
BPCE Group, France,
Carpatica Bank, Romania,
Credit Europe Bank, Romania,
Erste&Steiermärkische Bank, Croatia,
Getin Bank, Poland,
Newcastle Banking Society, UK,
Orange Bank, France,
Paypal, USA,
Sinopac, Taiwan,
Société Générale, France,
Standard Chartered Bank, Singapore,
Symantec,
TEB, Turkey.
Design
A smart card may have the following generic characteristics:
Dimensions similar to those of a credit card. ID-1 of the ISO/IEC 7810 standard defines cards as nominally 85.60 by 53.98 millimetres. Another popular size is ID-000, which is nominally 25 by 15 millimetres (commonly used in SIM cards). Both are 0.76 millimetres thick.
Contains a tamper-resistant security system (for example a secure cryptoprocessor and a secure file system) and provides security services (e.g., protects in-memory information).
Managed by an administration system, which securely interchanges information and configuration settings with the card, controlling card blacklisting and application-data updates.
Communicates with external services through card-reading devices, such as ticket readers, ATMs, DIP readers, etc.
Smart cards are typically made of plastic, generally polyvinyl chloride, but sometimes polyethylene-terephthalate-based polyesters, acrylonitrile butadiene styrene or polycarbonate.
Since April 2009, a Japanese company has manufactured reusable financial smart cards made from paper.
Internal structure
Data structures
As mentioned above, data on a smart card may be stored in a file system (FS). In smart card file systems, the root directory is called the "master file" ("MF"), subdirectories are called "dedicated files" ("DF"), and ordinary files are called "elementary files" ("EF").
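As a rough illustration of this hierarchy (the identifiers and contents below are placeholders rather than those of a real card, although 0x3F00 is the conventional identifier of the master file), a host-side model could look like the following Java sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Rough illustration of the smart card file hierarchy: a master file (MF) at the
// root, dedicated files (DF) as directories, and elementary files (EF) as leaves.
public class SmartCardFileSystemSketch {

    static class ElementaryFile {
        final int id;          // 2-byte file identifier
        final byte[] data;
        ElementaryFile(int id, byte[] data) { this.id = id; this.data = data; }
    }

    static class DedicatedFile {
        final int id;
        final List<DedicatedFile> subdirectories = new ArrayList<>();
        final List<ElementaryFile> files = new ArrayList<>();
        DedicatedFile(int id) { this.id = id; }
    }

    public static void main(String[] args) {
        DedicatedFile mf = new DedicatedFile(0x3F00);            // master file
        DedicatedFile application = new DedicatedFile(0x7F10);   // a dedicated file
        application.files.add(new ElementaryFile(0x6F07, new byte[] { 0x01, 0x02 }));
        mf.subdirectories.add(application);
        System.out.println("DFs under MF: " + mf.subdirectories.size());
    }
}
```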
Logical layout
The file system mentioned above is stored on an EEPROM (storage or memory) within the smartcard. In addition to the EEPROM, other components may be present, depending upon the kind of smartcard. Most smartcards have one of three logical layouts:
EEPROM only.
EEPROM, ROM, RAM, and microprocessor.
EEPROM, ROM, RAM, microprocessor, and crypto-module.
In cards with microprocessors, the microprocessor sits inline between the reader and the other components. The operating system that runs on the microprocessor mediates the reader's access to those components in order to prevent unauthorized access.
Physical interfaces
Contact smart cards
Contact smart cards have a contact area of approximately 1 square centimetre, comprising several gold-plated contact pads. These pads provide electrical connectivity when inserted into a reader, which is used as a communications medium between the smart card and a host (e.g., a computer, a point of sale terminal) or a mobile telephone. Cards do not contain batteries; power is supplied by the card reader.
The ISO/IEC 7810 and ISO/IEC 7816 series of standards define:
physical shape and characteristics,
electrical connector positions and shapes,
electrical characteristics,
communications protocols, including commands sent to and responses from the card,
basic functionality.
Because the chips in financial cards are the same as those used in subscriber identity modules (SIMs) in mobile phones, programmed differently and embedded in a different piece of PVC, chip manufacturers are building to the more demanding GSM/3G standards. So, for example, although the EMV standard allows a chip card to draw 50 mA from its terminal, cards are normally well below the telephone industry's 6 mA limit. This allows smaller and cheaper financial card terminals.
Communication protocols for contact smart cards include T=0 (character-level transmission protocol, defined in ISO/IEC 7816-3) and T=1 (block-level transmission protocol, defined in ISO/IEC 7816-3).
Contactless smart cards
Contactless smart cards communicate with readers under protocols defined in the ISO/IEC 14443 standard. They support data rates of 106–848 kbit/s. These cards require only proximity to an antenna to communicate.
Like smart cards with contacts, contactless cards do not have an internal power source. Instead, they use a loop antenna coil to capture some of the incident radio-frequency interrogation signal, rectify it, and use it to power the card's electronics. Contactless smart media can be made with PVC, paper/card and PET finish to meet different performance, cost and durability requirements.
APDU transmission by a contactless interface is defined in ISO/IEC 14443-4.
Hybrids
Hybrid cards implement contactless and contact interfaces on a single card with unconnected chips, each including its own dedicated modules, storage and processing.
Dual-interface
Dual-interface cards implement contactless and contact interfaces on a single chip with some shared storage and processing. An example is Porto's multi-application transport card, called Andante, which uses a chip with both contact and contactless (ISO/IEC 14443 Type B) interfaces. Numerous payment cards worldwide are based on hybrid card technology allowing them to communicate in contactless as well as contact modes.
USB
The CCID (Chip Card Interface Device) is a USB protocol that allows a smart card to be interfaced to a computer using a card reader which has a standard USB interface. This allows the smart card to be used as a security token for authentication and data encryption, such as with BitLocker. A typical CCID is a USB dongle and may contain a SIM.
Logical interfaces
Reader side
Different smart cards implement one or more reader-side protocols. Common protocols here include CT-API and PC/SC.
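On hosts with a PC/SC stack, this reader-side access is exposed in Java through the javax.smartcardio API. The sketch below is illustrative only: the choice of the first listed reader, the wildcard protocol, and the application identifier in the SELECT command are assumptions made for the example, not values from any particular card.

```java
import java.util.List;
import javax.smartcardio.Card;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CardTerminal;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;
import javax.smartcardio.TerminalFactory;

// Illustrative sketch: talking to a card through a PC/SC reader from Java.
public class ReaderSideSketch {
    public static void main(String[] args) throws Exception {
        TerminalFactory factory = TerminalFactory.getDefault();
        List<CardTerminal> terminals = factory.terminals().list();
        CardTerminal terminal = terminals.get(0);        // first attached reader

        Card card = terminal.connect("*");               // T=0 or T=1, whichever fits
        CardChannel channel = card.getBasicChannel();

        byte[] aid = { (byte) 0xA0, 0x00, 0x00, 0x00, 0x00, 0x01 }; // placeholder AID
        ResponseAPDU response =
            channel.transmit(new CommandAPDU(0x00, 0xA4, 0x04, 0x00, aid)); // SELECT by AID

        System.out.printf("Status word: %04X%n", response.getSW());
        card.disconnect(false);
    }
}
```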
Application side
Smartcard operating systems may provide application programming interfaces (APIs) so that developers can write programs ("applications") to run on the smartcard. Some such APIs, such as Java Card, allow programs to be uploaded to the card without replacing the card's entire operating system.
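For illustration, a minimal Java Card applet might look like the sketch below. It is compiled against the Java Card framework (javacard.framework) and installed onto a card rather than run on a desktop JVM; the instruction byte and response value are arbitrary choices made for this example, not part of any standard.

```java
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

// Illustrative sketch of a minimal Java Card applet.
public class HelloApplet extends Applet {

    private static final byte INS_HELLO = (byte) 0x10;   // arbitrary instruction byte

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new HelloApplet().register();   // register the applet instance with the runtime
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;                     // nothing special to do on SELECT
        }
        byte[] buffer = apdu.getBuffer();
        if (buffer[ISO7816.OFFSET_INS] == INS_HELLO) {
            buffer[0] = (byte) 0x42;                      // one byte of response data
            apdu.setOutgoingAndSend((short) 0, (short) 1);
        } else {
            ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}
```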
Applications
Financial
Smart cards serve as credit or ATM cards, fuel cards, mobile phone SIMs, authorization cards for pay television, household utility pre-payment cards, high-security identification and access badges, and public transport and public phone payment cards.
Smart cards may also be used as electronic wallets. The smart card chip can be "loaded" with funds to pay parking meters, vending machines or merchants. Cryptographic protocols protect the exchange of money between the smart card and the machine. No connection to a bank is needed. The holder of the card may use it even if not the owner. Examples are Proton, Geldkarte, Chipknip and Moneo. The German Geldkarte is also used to validate customer age at vending machines for cigarettes.
These are the best known payment cards (classic plastic card):
Visa: Visa Contactless, Quick VSDC, "qVSDC", Visa Wave, MSD, payWave
Mastercard: PayPass Magstripe, PayPass MChip
American Express: ExpressPay
Discover: Zip
Unionpay: QuickPass
Roll-outs started in 2005 in the U.S.; Asia and Europe followed in 2006. Contactless (non-PIN) transactions cover a payment range of ~$5–50. There is an ISO/IEC 14443 PayPass implementation. Some, but not all, PayPass implementations conform to EMV.
Non-EMV cards work like magnetic stripe cards. This is common in the U.S. (PayPass Magstripe and Visa MSD). The cards do not hold or maintain the account balance. All payment passes without a PIN, usually in off-line mode. The security of such a transaction is no greater than with a magnetic stripe card transaction.
EMV cards can have either contact or contactless interfaces. They work as if they were a normal EMV card with a contact interface. Via the contactless interface they work somewhat differently, in that the card commands enabled improved features such as lower power and shorter transaction times. EMV standards include provisions for contact and contactless communications. Typically modern payment cards are based on hybrid card technology and support both contact and contactless communication modes.
SIM
The subscriber identity modules used in mobile-phone systems are reduced-size smart cards, using otherwise identical technologies.
Identification
Smart-cards can authenticate identity. Sometimes they employ a public key infrastructure (PKI). The card stores an encrypted digital certificate issued from the PKI provider along with other relevant information. Examples include the U.S. Department of Defense (DoD) Common Access Card (CAC), and other cards used by other governments for their citizens. If they include biometric identification data, cards can provide superior two- or three-factor authentication.
Smart cards are not always privacy-enhancing, because the subject may carry incriminating information on the card. Contactless smart cards that can be read from within a wallet or even a garment simplify authentication; however, criminals may access data from these cards.
Cryptographic smart cards are often used for single sign-on. Most advanced smart cards include specialized cryptographic hardware that uses algorithms such as RSA and Digital Signature Algorithm (DSA). Today's cryptographic smart cards generate key pairs on board, to avoid the risk from having more than one copy of the key (since by design there usually isn't a way to extract private keys from a smart card). Such smart cards are mainly used for digital signatures and secure identification.
The most common way to access cryptographic smart card functions on a computer is to use a vendor-provided PKCS#11 library. On Microsoft Windows the Cryptographic Service Provider (CSP) API is also supported.
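As an illustration of the PKCS#11 route, a Java application can reach a card-held key through the SunPKCS11 provider, which wraps the vendor's PKCS#11 library. The configuration file path, PIN and key alias in the sketch below are placeholders, not real values, and the Java 9+ configuration style is assumed.

```java
import java.security.KeyStore;
import java.security.PrivateKey;
import java.security.Provider;
import java.security.Security;
import java.security.Signature;

// Illustrative sketch: signing with a smart card's on-board key via PKCS#11.
public class Pkcs11SignSketch {
    public static void main(String[] args) throws Exception {
        Provider provider = Security.getProvider("SunPKCS11")
                                    .configure("/etc/pkcs11.cfg"); // points at the vendor library
        Security.addProvider(provider);

        KeyStore keyStore = KeyStore.getInstance("PKCS11", provider);
        keyStore.load(null, "123456".toCharArray());               // card PIN (placeholder)

        PrivateKey key = (PrivateKey) keyStore.getKey("signing-key", null);
        Signature signature = Signature.getInstance("SHA256withRSA", provider);
        signature.initSign(key);                                   // private key never leaves the card
        signature.update("message to sign".getBytes());
        System.out.println("signature length: " + signature.sign().length);
    }
}
```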
The most widely used cryptographic algorithms in smart cards (excluding the GSM so-called "crypto algorithm") are Triple DES and RSA. The key set is usually loaded (DES) or generated (RSA) on the card at the personalization stage.
Some of these smart cards are also made to support the National Institute of Standards and Technology (NIST) standard for Personal Identity Verification, FIPS 201.
Turkey implemented the first smart card driver's license system in 1987. Turkey had a high level of road accidents and decided to develop and use digital tachograph devices on heavy vehicles, instead of the existing mechanical ones, to reduce speed violations. Since 1987, the professional driver's licenses in Turkey have been issued as smart cards. A professional driver is required to insert his driver's license into a digital tachograph before starting to drive. The tachograph unit records speed violations for each driver and gives a printed report. The driving hours for each driver are also being monitored and reported. In 1990 the European Union conducted a feasibility study through BEVAC Consulting Engineers, titled "Feasibility study with respect to a European electronic drivers license (based on a smart-card) on behalf of Directorate General VII". In this study, chapter seven describes Turkey's experience.
Argentina's Mendoza province began using smart card driver's licenses in 1995. Mendoza also had a high level of road accidents, driving offenses, and a poor record of recovering fines. Smart licenses hold up-to-date records of driving offenses and unpaid fines. They also store personal information, license type and number, and a photograph. Emergency medical information such as blood type, allergies, and biometrics (fingerprints) can be stored on the chip if the card holder wishes. The Argentina government anticipates that this system will help to collect more than $10 million per year in fines.
In 1999 Gujarat was the first Indian state to introduce a smart card license system. As of 2005, it has issued 5 million smart card driving licenses to its people.
In 2002, the Estonian government started to issue smart cards named ID Kaart as primary identification for citizens to replace the usual passport in domestic and EU use.
As of 2010 about 1 million smart cards have been issued (total population is about 1.3 million) and they are widely used in internet banking, buying public transport tickets, authorization on various websites etc.
By the start of 2009, the entire population of Belgium was issued eID cards that are used for identification. These cards contain two certificates: one for authentication and one for signature. This signature is legally enforceable. More and more services in Belgium use eID for authorization.
Spain started issuing national ID cards (DNI) in the form of smart cards in 2006 and gradually replaced all the older ones with smart cards. The idea was that many or most bureaucratic acts could be done online but it was a failure because the Administration did not adapt and still mostly requires paper documents and personal presence.
On August 14, 2012, the ID cards in Pakistan were replaced by the Smart Card, a third-generation chip-based identity document produced according to international standards and requirements. The card has over 36 physical security features and has the latest encryption codes. This smart card replaced the NICOP (the ID card for overseas Pakistanis).
Smart cards may identify emergency responders and their skills. Cards like these allow first responders to bypass organizational paperwork and focus more time on the emergency resolution. In 2004, the Smart Card Alliance expressed the need "to enhance security, increase government efficiency, reduce identity fraud, and protect personal privacy by establishing a mandatory, Government-wide standard for secure and reliable forms of identification". Emergency response personnel can carry these cards to be positively identified in emergency situations. WidePoint Corporation, a smart card provider to FEMA, produces cards that contain additional personal information, such as medical records and skill sets.
In 2007, the Open Mobile Alliance (OMA) proposed a new standard defining V1.0 of the Smart Card Web Server (SCWS), an HTTP server embedded in a SIM card intended for a smartphone user. The non-profit trade association SIMalliance has been promoting the development and adoption of SCWS. SIMalliance states that SCWS offers end-users a familiar, OS-independent, browser-based interface to secure, personal SIM data. As of mid-2010, SIMalliance had not reported widespread industry acceptance of SCWS. The OMA has been maintaining the standard, approving V1.1 of the standard in May 2009, and V1.2 was expected to be approved in October 2012.
Smart cards are also used to identify user accounts on arcade machines.
Public transit
Smart cards, used as transit passes, and integrated ticketing are used by many public transit operators. Card users may also make small purchases using the cards. Some operators offer points for usage, exchanged at retailers or for other benefits. Examples include Singapore's CEPAS, Malaysia's Touch n Go, Ontario's Presto card, Hong Kong's Octopus card, London's Oyster card, Ireland's Leap card, Brussels' MoBIB, Québec's OPUS card, San Francisco's Clipper card, Auckland's AT Hop, Brisbane's go card, Perth's SmartRider, Sydney's Opal card and Victoria's myki. However, these present a privacy risk because they allow the mass transit operator (and the government) to track an individual's movement. In Finland, for example, the Data Protection Ombudsman prohibited the transport operator Helsinki Metropolitan Area Council (YTV) from collecting such information, despite YTV's argument that the card owner has the right to a list of trips paid with the card. Earlier, such information was used in the investigation of the Myyrmanni bombing.
The UK's Department for Transport mandated smart cards to administer travel entitlements for elderly and disabled residents. These schemes let residents use the cards for more than just bus passes. They can also be used for taxi and other concessionary transport. One example is the "Smartcare go" scheme provided by Ecebs. The UK systems use the ITSO Ltd specification. Other schemes in the UK include period travel passes, carnets of tickets or day passes and stored value which can be used to pay for journeys. Other concessions for school pupils, students and job seekers are also supported. These are mostly based on the ITSO Ltd specification.
Many smart transport schemes include the use of low cost smart tickets for simple journeys, day passes and visitor passes. Examples include Glasgow SPT subway. These smart tickets are made of paper or PET which is thinner than a PVC smart card e.g. Confidex smart media. The smart tickets can be supplied pre-printed and over-printed or printed on demand.
In Sweden, as of 2018–2019, smart cards have started to be phased out and replaced by smartphone apps. The phone apps cost less, at least for the transit operators, who do not need any electronic equipment (the riders provide it). Riders are able to buy tickets anywhere and do not need to load money onto smart cards. The smart cards remain in use for the foreseeable future (as of 2019).
Video games
In Japanese amusement arcades, contactless smart cards (usually referred to as "IC cards") are used by game manufacturers as a method for players to access in-game features (both online, like Konami e-Amusement and Sega ALL.Net, and offline) and as a memory support to save game progress. Depending on the game, machines can use a game-specific card or a "universal" one usable on multiple machines from the same manufacturer/publisher. Among the most widely used are Banapassport by Bandai Namco, e-Amusement Pass by Konami, Aime by Sega and Nesica by Taito.
In 2018, in an effort to make arcade game IC cards more user friendly, Konami, Bandai Namco and Sega agreed on a unified system of cards named Amusement IC. Thanks to this agreement, the three companies now use a unified card reader in their arcade cabinets, so that players are able to use their card, whether a Banapassport, an e-Amusement Pass or an Aime, with hardware and ID services of all three manufacturers. A common logo for Amusement IC cards has been created and is now displayed on compatible cards from all three companies. In January 2019, Taito announced that its Nesica card was also joining the Amusement IC agreement with the other three companies.
Computer security
Smart cards can be used as a security token.
Mozilla's Firefox web browser can use smart cards to store certificates for use in secure web browsing.
Some disk encryption systems, such as VeraCrypt and Microsoft's BitLocker, can use smart cards to securely hold encryption keys, and also to add another layer of encryption to critical parts of the secured disk.
GnuPG, the well-known encryption suite, also supports storing keys on a smart card.
Smart cards are also used for single sign-on to log on to computers.
Schools
Smart cards are being provided to students at some schools and colleges. Uses include:
Tracking student attendance
As an electronic purse, to pay for items at canteens, vending machines, laundry facilities, etc.
Tracking and monitoring food choices at the canteen, to help the student maintain a healthy diet
Tracking loans from the school library
Access control for admittance to restricted buildings, dormitories, and other facilities. This requirement may be enforced at all times (such as for a laboratory containing valuable equipment), or just during after-hours periods (such as for an academic building that is open during class times, but restricted to authorized personnel at night), depending on security needs.
Access to transportation services
Healthcare
Smart health cards can improve the security and privacy of patient information, provide a secure carrier for portable medical records, reduce health care fraud, support new processes for portable medical records, provide secure access to emergency medical information, enable compliance with government initiatives (e.g., organ donation) and mandates, and provide the platform to implement other applications as needed by the health care organization.
Other uses
Smart cards are widely used to encrypt digital television streams. VideoGuard is a specific example of how smart card security worked.
Multiple-use systems
The Malaysian government promotes MyKad as a single system for all smart-card applications. MyKad started as identity cards carried by all citizens and resident non-citizens. Available applications now include identity, travel documents, driver's license, health information, an electronic wallet, ATM bank card, public toll-road and transit payments, and public key encryption infrastructure. The personal information inside the MyKad card can be read using special APDU commands.
Security
Smart cards have been advertised as suitable for personal identification tasks, because they are engineered to be tamper resistant. The chip usually implements some cryptographic algorithm. There are, however, several methods for recovering some of the algorithm's internal state.
Differential power analysis involves measuring the electric current drawn by the chip during certain encryption or decryption operations, while timing attacks measure precisely how long those operations take. Either technique can be used to deduce the on-chip private key used by public-key algorithms such as RSA. Some implementations of symmetric ciphers can be vulnerable to timing or power attacks as well.
Smart cards can be physically disassembled by using acid, abrasives, solvents, or some other technique to obtain unrestricted access to the on-board microprocessor. Although such techniques may involve a risk of permanent damage to the chip, they permit much more detailed information (e.g., photomicrographs of encryption hardware) to be extracted.
Benefits
The benefits of smart cards are directly related to the volume of information and applications that are programmed for use on a card. A single contact/contactless smart card can be programmed with multiple banking credentials, medical entitlement, driver's license/public transport entitlement, loyalty programs and club memberships, to name just a few. Multi-factor and proximity authentication can be, and have been, embedded into smart cards to increase the security of all services on the card. For example, a smart card can be programmed to only allow a contactless transaction if it is also within range of another device like a uniquely paired mobile phone. This can significantly increase the security of the smart card.
Governments and regional authorities save money because of improved security, better data and reduced processing costs. These savings help reduce public budgets or enhance public services. There are many examples in the UK, many using a common open LASSeO specification.
Individuals have better security and more convenience with using smart cards that perform multiple services. For example, they only need to replace one card if their wallet is lost or stolen. The data storage on a card can reduce duplication, and even provide emergency medical information.
Advantages
The first main advantage of smart cards is their flexibility. A single smart card can simultaneously serve as an ID, a credit card, a stored-value cash card, and a repository of personal information such as telephone numbers or medical history. The card can be easily replaced if lost, and the requirement for a PIN (or other form of security) provides additional protection against unauthorised access to the information by others. At the first attempt to use it illegally, the card would be deactivated by the card reader itself.
The second main advantage is security. Smart cards can act as electronic key rings, giving the bearer the ability to access information and physical places without the need for online connections. They are encryption devices, so the user can encrypt and decrypt information without relying on unknown, and therefore potentially untrustworthy, appliances such as ATMs. Smart cards are very flexible in providing authentication at different levels for the bearer and the counterpart. Finally, with the information about the user that smart cards can provide to other parties, they are useful devices for customizing products and services.
Other general benefits of smart cards are:
Portability
Increasing data storage capacity
Reliability that is virtually unaffected by electrical and magnetic fields.
Smart cards and electronic commerce
Smart cards can be used in electronic commerce, over the Internet, though the business model used in current electronic commerce applications still cannot exploit the full potential of the electronic medium. An advantage of smart cards for electronic commerce is their use in customizing services. For example, in order for a service supplier to deliver a customized service, the user may otherwise need to provide each supplier with their profile, a tedious and time-consuming activity. A smart card can contain a non-encrypted profile of the bearer, so that the user can get customized services even without previous contact with the supplier.
Disadvantages
The plastic or paper card in which the chip is embedded is fairly flexible. The larger the chip, the higher the probability that normal use could damage it. Cards are often carried in wallets or pockets, a harsh environment for a chip and antenna in contactless cards. PVC cards can crack or break if bent/flexed excessively. However, for large banking systems, failure-management costs can be more than offset by fraud reduction.
The production, use and disposal of PVC plastic is known to be more harmful to the environment than that of other plastics. Alternative materials, including chlorine-free plastics and paper, are available for some smart card applications.
If the account holder's computer hosts malware, the smart card security model may be broken. Malware can override the communication (both input via keyboard and output via application screen) between the user and the application. Man-in-the-browser malware (e.g., the Trojan Silentbanker) could modify a transaction, unnoticed by the user. Banks like Fortis and Belfius in Belgium and Rabobank ("random reader") in the Netherlands combine a smart card with an unconnected card reader to avoid this problem. The customer enters a challenge received from the bank's website, a PIN and the transaction amount into the reader. The reader returns an 8-digit signature. This signature is manually entered into the personal computer and verified by the bank, preventing point-of-sale-malware from changing the transaction amount.
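The exact algorithms used by these unconnected readers are proprietary and differ per bank, but the data flow can be sketched as follows: an 8-digit response is derived from the challenge and the transaction amount using a secret that, in a real deployment, never leaves the smart card. In this illustrative Java sketch, HMAC-SHA-256 stands in for the card's actual MAC function; the challenge, amount and secret are made-up values.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;

    public class ReaderSketch {
        // Derive an 8-digit response from the bank's challenge and the amount
        static String sign(byte[] cardSecret, String challenge, String amount) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(cardSecret, "HmacSHA256"));
            byte[] tag = mac.doFinal((challenge + ":" + amount).getBytes(StandardCharsets.UTF_8));

            // Truncate the MAC to a non-negative integer and keep eight decimal digits
            long value = 0;
            for (int i = 0; i < 8; i++) {
                value = (value << 8) | (tag[i] & 0xFF);
            }
            value = (value & Long.MAX_VALUE) % 100_000_000L;
            return String.format("%08d", value);
        }

        public static void main(String[] args) throws Exception {
            byte[] secret = "demo-card-secret".getBytes(StandardCharsets.UTF_8);
            System.out.println(sign(secret, "39474152", "250.00"));
        }
    }

Because the bank performs the same computation on its side, point-of-sale malware that silently alters the amount produces a signature mismatch and the transaction is rejected.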
Smart cards have also been the targets of security attacks. These attacks range from physical invasion of the card's electronics, to non-invasive attacks that exploit weaknesses in the card's software or hardware. The usual goal is to expose private encryption keys and then read and manipulate secure data such as funds. Once an attacker develops a non-invasive attack for a particular smart card model, he or she is typically able to perform the attack on other cards of that model in seconds, often using equipment that can be disguised as a normal smart card reader. While manufacturers may develop new card models with additional information security, it may be costly or inconvenient for users to upgrade vulnerable systems. Tamper-evident and audit features in a smart card system help manage the risks of compromised cards.
Another problem is the lack of standards for functionality and security. To address this problem, the Berlin Group launched the ERIDANE Project to propose "a new functional and security framework for smart-card based Point of Interaction (POI) equipment".
See also
Java Card
Keycard lock
List of smart cards
MULTOS
Open Smart Card Development Platform
Payment Card Industry Data Security Standard
Proximity card
Radio-frequency identification
SNAPI
Smart card application protocol data unit (APDU)
Smart card management system
References
Further reading
External links
Banking technology
German inventions
ISO standards
Ubiquitous computing
Authentication methods |
60408 | https://en.wikipedia.org/wiki/HCL%20Domino | HCL Domino | HCL Notes (formerly IBM Notes and Lotus Notes; see Branding below) and HCL Domino (formerly IBM Domino and Lotus Domino) are the client and server, respectively, of a collaborative client-server software platform formerly sold by IBM, now by HCL Technologies.
HCL Notes provides business collaboration functions, such as email, calendars, to-do lists, contact management, discussion forums, file sharing, microblogging, instant messaging, blogs, and user directories. It can also be used with other HCL Domino applications and databases. IBM Notes 9 Social Edition removed integration with the office software package IBM Lotus Symphony, which had been integrated with the Lotus Notes client in versions 8.x.
Lotus Development Corporation originally developed "Lotus Notes" in 1989. IBM bought Lotus in 1995 and it became known as the Lotus Development division of IBM. As late as 2015, it formed part of the IBM Software and Systems Group under the name "IBM Collaboration Solutions".
HCL acquired the products in July 2019.
HCL Notes is a desktop workflow application, commonly used in corporate environments for email and to create discussion groups, websites, document libraries, custom applications and business workflows.
On December 6, 2018, IBM announced that it was selling a number of software products to HCL Technologies for $1.8bn, including IBM Notes, Domino, Commerce, Portal, Connections, BigFix, Unica and AppScan. Their location within HCL Technologies' umbrella is named HCL Software. This acquisition was completed in July 2019.
Design
HCL Domino is a client-server cross-platform application runtime environment.
HCL Domino provides email, calendars, instant messaging (with additional HCL software voice- and video-conferencing and web-collaboration), discussions/forums, blogs, and an inbuilt personnel/user directory. In addition to these standard applications, an organization may use the Domino Designer development environment and other tools to develop additional integrated applications such as request approval / workflow and document management.
The HCL Domino product consists of several components:
HCL Notes client application (since version 8, this is based on Eclipse)
HCL Notes client, either:
a rich client
a web client, HCL iNotes
a mobile email client, HCL Notes Traveler
HCL Verse client, either:
a web email client, Verse on Premises (VOP)
a mobile email client, Verse Mobile (for iOS and Android)
HCL Domino server
HCL Domino Administration Client
HCL Domino Designer (Eclipse-based integrated development environment) for creating client-server applications that run within the Notes framework
HCL Domino competes with products from companies such as Microsoft, Google and Zimbra. Because of its application development abilities, HCL Domino is often compared to products like Microsoft SharePoint. The database in HCL Domino can be replicated between servers and between server and client, thereby giving clients offline capabilities.
HCL Domino, a business application as well as a messaging server, is compatible with both HCL Notes and web-browsers. HCL Notes (and since IBM Domino 9, the HCAA) may be used to access any HCL Domino application, such as discussion forums, document libraries, and numerous other applications. HCL Notes resembles a web-browser in that it may run any compatible application that the user has permission for.
HCL Domino provides applications that can be used to:
access, store and present information through a user interface
enforce security
replicate, that is, allow many different servers to contain the same information and have many users work with that data
The standard storage mechanism in HCL Domino is a NoSQL document-database format, the "Notes Storage Facility" (.nsf). The .nsf file will normally contain both an application design and its associated data. HCL Domino can also access relational databases, either through an additional server called HCL Enterprise Integrator for Domino, through ODBC calls or through the use of XPages.
As HCL Domino is an application runtime environment, email and calendars operate as applications within HCL Notes, which HCL provides with the product. A Domino application-developer can change or completely replace that application. HCL has released the base templates as open source as well.
Programmers can develop applications for HCL Domino in a variety of development languages including:
the Java programming language either directly or through XPages
LotusScript, a language resembling Visual Basic
the JavaScript programming language via the Domino AppDev Pack
The client supports a formula language as well as JavaScript. Software developers can build applications to run either within the HCL Notes application runtime environment or through a web server for use in a web browser, although the interface would need to be developed separately unless XPages is used.
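As a rough sketch of the server-oriented Java model, the following standalone program uses the lotus.domino classes shipped with Notes/Domino to open an application file and read an item from its first document; the file path, view name and item name are hypothetical, and the calls assume a locally installed Notes runtime.

    import lotus.domino.*;

    public class ReadFirstDocument {
        public static void main(String[] args) throws NotesException {
            NotesThread.sinitThread();            // initialize the Notes runtime for this thread
            try {
                Session session = NotesFactory.createSession();            // local session
                Database db = session.getDatabase("", "apps/orders.nsf");   // hypothetical .nsf file
                View view = db.getView("Open Orders");                      // hypothetical view
                Document doc = view.getFirstDocument();
                if (doc != null) {
                    // Items are read by name; "Customer" is a hypothetical field
                    System.out.println(doc.getItemValueString("Customer"));
                }
            } finally {
                NotesThread.stermThread();        // release the Notes runtime
            }
        }
    }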
Use
HCL Notes can be used for email, as a calendar, PIM, instant messaging, Web browsing, and other applications. Notes can access both local- and server-based applications and data.
HCL Notes can function as an IMAP and POP email client with non-Domino mail servers. The system can retrieve recipient addresses from any LDAP server, including Active Directory, and includes a web browser, although it can be configured by a Domino Developer to launch a different web browser instead.
Features include group calendars and schedules, SMTP/MIME-based email, NNTP-based news support, and automatic HTML conversion of all documents by the Domino HTTP task.
HCL Notes can be used with HCL Sametime instant-messaging to allow users to see other users online and chat with one or more of them at the same time. Beginning with Release 6.5, this function has been freely available. Presence awareness is available in email and other HCL Domino applications for users in organizations that use both HCL Notes and HCL Sametime.
Since version 7, Notes has provided a Web services interface. Domino can be a Web server for HTML files; authentication of access to Domino databases or HTML files uses the HCL Domino user directory and external systems such as Microsoft Active Directory.
A design client, HCL Domino Designer, can allow the development of database applications consisting of forms (which allow users to create documents) and views (which display selected document fields in columns).
In addition to its role as a groupware system (email, calendaring, shared documents and discussions), HCL Notes and Domino can also construct "workflow"-type applications, particularly those which require approval processes and routing of data.
Since Release 5, server clustering has had the ability to provide geographic redundancy for servers.
Notes System Diagnostic (NSD) gathers information about the running of a Notes workstation or of a Domino server.
On October 10, 2018, IBM released IBM Domino v10.0 and IBM Notes 10.0. In December 2019, HCL released HCL Domino v11 and HCL Notes v11.
Overview
Client/server
HCL Notes and Domino is a NoSQL client/server database environment. The server software is called HCL Domino and the client software is HCL Notes. HCL Domino software can run on Windows, Unix, AIX, and IBM mid-range systems and can scale to tens of thousands of users per server. Different versions of the HCL Domino server are supported on different levels of server operating systems. Usually the latest server operating system is only officially supported by a version of HCL Domino that is released at about the same time as that OS.
HCL Domino has security capabilities on a variety of levels. Authorizations can be granular, ranging from the field level in specific records up to ten different parameters that can be set at the database level, with intermediate options in between. Users can also grant other users access to their personal calendar and email at more generic levels such as reader, editor, editor with delete, or manage my calendar. All of the security in HCL Notes and Domino is independent of the server OS or Active Directory. Optionally, the HCL Notes client can be configured to have the user use their Active Directory identity.
Data replication
The first release of Lotus Notes included a generalized replication facility. The generalized nature of this feature set it apart from predecessors like Usenet and continued to differentiate Lotus Notes.
HCL Domino servers and Notes clients identify NSF files by their Replica IDs, and keep replicated files synchronized by bi-directionally exchanging data, metadata, and application logic and design. There are options available to define what meta-data replicates, or specifically exclude certain meta data from replicating. Replication between two servers, or between a client and a server, can occur over a network or a point-to-point modem connection. Replication between servers may occur at intervals according to a defined schedule, in near-real-time when triggered by data changes in server clusters, or when triggered by an administrator or program.
Creation of a local replica of an NSF file on the hard disk of an HCL Notes client enables the user to fully use HCL Notes and Domino databases while working off-line. The client synchronizes any changes when client and server next connect. Local replicas are also sometimes maintained for use while connected to the network in order to reduce network latency. Replication between an HCL Notes client and Domino server can run automatically according to a schedule, or manually in response to a user or programmatic request. Since Notes 6, local replicas maintain all security features programmed into the applications. Earlier releases of Notes did not always do so. Early releases also did not offer a way to encrypt NSF files, raising concerns that local replicas might expose too much confidential data on laptops or insecure home office computers, but more recent releases offer encryption, which is now the default setting for newly created local replicas.
Security
Lotus Notes was the first widely adopted software product to use public key cryptography for client–server and server–server authentication and for encryption of data. Until US laws regulating encryption were changed in 2000, IBM and Lotus were prohibited from exporting versions of Notes that supported symmetric encryption keys that were longer than 40 bits. In 1997, Lotus negotiated an agreement with the NSA that allowed export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a "workload reduction factor" for the NSA. This strengthened the protection for users of Notes outside the US against private-sector industrial espionage, but not against spying by the US government. This implementation was widely announced, but, with some justification, many people considered it to be a backdoor. Some governments objected to being put at a disadvantage to the NSA, and as a result Lotus continued to support the 40-bit version for export to those countries.
HCL Notes and Domino also uses a code-signature framework that controls the security context, runtime, and rights of custom code developed and introduced into the environment. Notes 5 introduced an execution control list (ECL) at the client level. The ECL allows or denies the execution of custom code based on the signature attached to it, preventing code from untrusted (and possibly malignant) sources from running. Notes and Domino 6 allowed client ECLs to be managed centrally by server administrators through the implementation of policies. Since release 4.5, the code signatures listed in properly configured ECLs prevent code from being executed by external sources, to avoid virus propagation through Notes/Domino environments. Administrators can centrally control whether each mailbox user can add exceptions to, and thus override, the ECL.
Database security
Access control lists (ACLs) control a user's or server's level of access to that database. Only a user with Manager access can create or modify the ACL. Default entries in the ACL can be set when the Manager creates the database.
Roles, rather than user id, can determine access level.
Programming
HCL Notes and Domino is a cross-platform, distributed document-oriented NoSQL database and messaging framework and rapid application development environment that includes pre-built applications like email, calendar, etc. This sets it apart from its major commercial competitors, such as Microsoft Exchange or Novell GroupWise, which are purpose-built applications for mail and calendaring that offer APIs for extensibility.
HCL Domino databases are built using the Domino Designer client, available only for Microsoft Windows; standard user clients are available for Windows, Linux, and macOS. A key feature of Notes is that many replicas of the same database can exist at the same time on different servers and clients, across dissimilar platforms; the same storage architecture is used for both client and server replicas. Originally, replication in Notes happened at document (i.e., record) level. With the release of Notes 4 in 1996, replication was changed so that it now occurs at field level.
A database is a Notes Storage Facility (.nsf) file, containing basic units of storage known as a "note". Every note has a UniqueID that is shared by all its replicas. Every replica also has a UniqueID that uniquely identifies it within any cluster of servers, a domain of servers, or even across domains belonging to many organizations that are all hosting replicas of the same database. Each note also stores its creation and modification dates, and one or more Items.
There are several classes of notes, including design notes and document notes. Design notes are created and modified with the Domino Designer client, and represent programmable elements, such as the GUI layout of forms for displaying and editing data, or formulas and scripts for manipulating data. Document notes represent user data, and are created and modified with the Lotus Notes client, via a web browser, via mail routing and delivery, or via programmed code.
Document notes can have parent-child relationships, but Notes should not be considered a hierarchical database in the classic sense of information management systems. Notes databases are also not relational, although there is a SQL driver that can be used with Notes, and it does have some features that can be used to develop applications that mimic relational features. Notes does not support atomic transactions, and its file locking is rudimentary. Notes is a document-oriented database (document-based, schema-less, loosely structured) with support for rich content and powerful indexing facilities. This structure closely mimics paper-based work flows that Notes is typically used to automate.
Items represent the content of a note. Every item has a name, a type, and may have some flags set. A note can have more than one item with the same name. Item types include Number, Number List, Text, Text List, Date-Time, Date-Time List, and Rich Text. Flags are used for managing attributes associated with the item, such as read or write security. Items in design notes represent the programmed elements of a database. For example, the layout of an entry form is stored in the rich text Body item within a form design note. This means that the design of the database can replicate to users' desktops just like the data itself, making it extremely easy to deploy updated applications.
Items in document notes represent user-entered or computed data. An item named "Form" in a document note can be used to bind a document to a form design note, which directs the Notes client to merge the content of the document note items with the GUI information and code represented in the given form design note for display and editing purposes. However, other methods can be used to override this binding of a document to a form note. The resulting loose binding of documents to design information is one of the cornerstones of the power of Notes. Traditional database developers used to working with rigidly enforced schemas, on the other hand, may consider the power of this feature to be a double-edged sword.
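A minimal sketch of this loose binding, again using the Java API with hypothetical form and item names, is shown below; the document note is associated with a form design note simply by the value of its "Form" item.

    import lotus.domino.*;

    public class CreateDocument {
        public static void main(String[] args) throws NotesException {
            NotesThread.sinitThread();
            try {
                Session session = NotesFactory.createSession();
                Database db = session.getDatabase("", "apps/orders.nsf");  // hypothetical path
                Document doc = db.createDocument();

                // The "Form" item binds this document note to a form design note;
                // "OrderForm" and the other item names are illustrative only
                doc.replaceItemValue("Form", "OrderForm");
                doc.replaceItemValue("Customer", "Acme Corp");
                doc.replaceItemValue("Quantity", 12);
                doc.save(true, false);   // force the save; do not turn a conflict into a response
            } finally {
                NotesThread.stermThread();
            }
        }
    }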
HCL Notes application development uses several programming languages. Formula and LotusScript are the two original ones. LotusScript is similar to, and may even be considered a specialized implementation of, Visual Basic, but with the addition of many native classes that model the Notes environment, whereas Formula is similar to Lotus 1-2-3 formula language but is unique to Notes.
Java was integrated into IBM Notes beginning with Release 4.5. With Release 5, Java support was greatly enhanced and expanded, and JavaScript was added. While LotusScript remains a primary tool in developing applications for the Lotus Notes client, Java and JavaScript are the primary tools for server-based processing, developing applications for browser access, and allowing browsers to emulate the functionality of the IBM Notes client. With XPages, the IBM Notes client can now natively process Java and JavaScript code, although applications development usually requires at least some code specific to only IBM Notes or only a browser.
As of version 6, Lotus established an XML programming interface in addition to the options already available. The Domino XML Language (DXL) provides XML representations of all data and design resources in the Notes model, allowing any XML processing tool to create and modify IBM Notes and Domino data.
Since Release 8.5, XPages were also integrated into IBM Notes.
External to the Notes application, HCL provides toolkits in C, C++, and Java to connect to the Domino database and perform a wide variety of tasks. The C toolkit is the most mature, and the C++ toolkit is an objectized version of the C toolkit, lacking many functions the C toolkit provides. The Java toolkit is the least mature of the three and can be used for basic application needs.
Database
IBM Notes includes a database management system but Notes files are different from relational or object databases because they are document-centric. Document-oriented databases such as Notes allow multiple values in items (fields), do not require a schema, come with built-in document-level access control, and store rich text data. IBM Domino 7 to 8.5.x supports the use of IBM DB2 database as an alternative store for IBM Notes databases. This NSFDB2 feature, however, is now in maintenance mode with no further development planned. An IBM Notes database can be mapped to a relational database using tools like DECS, LEI, JDBCSql for Domino or NotesSQL.
It could be argued that HCL Notes and Domino is a multi-value database system like PICK, or that it is an object system like Zope, but it is in fact unique. Whereas the temptation for relational database programmers is to normalize databases, Notes databases must be denormalized. RDBMS developers often find it difficult to conceptualize the difference. It may be useful to think of a Notes document (a 'note') as analogous to an XML document natively stored in a database (although with limitations on the data types and structures available).
Since Lotus Notes 8.5, IBM began to use the term "application" rather than "database" for these files because, as mentioned above, they are not really databases in the conventional sense.
The benefits of this data structure are:
No need to define size of fields, or datatype;
Attributes (Notes fields) that are null take up no space in a database;
Built-in full text searching.
Configuration
The HCL Domino server and the Notes client store their configuration in their own databases / application files (*.nsf). No relevant configuration settings are saved in the Windows Registry if the operating system is Windows. Some other configuration options (primarily the startup configuration) are stored in the notes.ini file (there are currently over 2000 known options available).
Use as an email client
HCL Notes is commonly deployed as an end-user email client in larger organizations, with HCL claiming a cumulative 145 million licenses sold to date.
When an organization employs an HCL Domino server, it usually also deploys the supplied Notes client for accessing the Notes application for email and calendaring but also to use document management and workflow applications. As Notes is a runtime environment, and the email and calendaring functions in Notes are simply an application provided by HCL, the administrators are free to develop alternate email and calendaring applications. It is also possible to alter, amend or extend the HCL supplied email and calendaring application.
The HCL Domino server also supports POP3 and IMAP mail clients, and through an extension product (HCL mail support for Microsoft Outlook) supports native access for Microsoft Outlook clients.
HCL also provides iNotes (in Notes 6.5 renamed to "Domino Web Access" but in version 8.0 reverted to iNotes), to allow the use of email and calendaring features through web browsers on Windows, Mac and Linux, such as Internet Explorer and Firefox. There are several spam filtering programs available (including IBM Lotus Protector), and a rules engine allowing user-defined mail processing to be performed by the server.
Comparison with other email clients
Notes was designed as a collaborative application platform where email was just one of numerous applications that ran in the Notes client software. The Notes client was also designed to run on multiple platforms including Windows, OS/2, classic Mac OS, SCO Open Desktop UNIX, and Linux. These two factors have resulted in the user interface containing some differences from applications that only run on Windows. Furthermore, these differences have often remained in the product to retain backward compatibility with earlier releases, instead of conforming to updated Windows UI standards. The following are some of these differences.
Properties dialog boxes for formatting text, hyperlinks and other rich-text information can remain open after a user makes changes to selected text. This provides flexibility to select new text and apply other formatting without closing the dialog box, selecting new text and opening a new format dialog box. Almost all other Windows applications require the user to close the dialog box, select new text, then open a new dialog box for formatting/changes.
Properties dialog boxes also automatically recognize the type of text selected and display appropriate selections (for instance, a hyperlink properties box).
Users can format tables as tabbed interfaces as part of form design (for applications) or within mail messages (or in rich-text fields in applications). This provides users the ability to provide tab-style organization to documents, similar to popular tab navigation in most web portals, etc.
End-users can readily insert links to Notes applications, Notes views or other Notes documents into Notes documents.
Deleting a document (or email) will delete it from every folder in which it appears, since the folders simply contain links to the same back-end document. Some other email clients only delete the email from the current folder; if the email appears in other folders it is left alone, requiring the user to hunt through multiple folders in order to completely delete a message. In Notes, clicking on "Remove from Folder" will remove the document only from that folder leaving all other instances intact.
The All Documents and Sent "views" differ from other collections of documents known as "folders" and exhibit different behaviors. Specifically, mail cannot be dragged out of them and thereby removed from those views; the email can only be "copied" from them. This is because these are views, and their membership indexes are maintained according to characteristics of the documents contained in them, rather than based on user interaction as is the case for a folder. This technical difference can be baffling to users in environments where no training is given. All Documents contains all of the documents in a mailbox, no matter which folder they are in. The only way to remove something from All Documents is to delete it outright.
Lotus Notes 7 and older versions had more differences, which were removed from subsequent releases:
Users select a "New Memo" to send an email, rather than "New Mail" or "New Message". (Notes 8 calls the command "New Message")
To select multiple documents in a Notes view, one drags the mouse next to the documents to select them, rather than using Ctrl + single click. (Notes 8 uses standard keypress conventions.)
The searching function offers a "phrase search", rather than the more common "or search", and Notes requires users to spell out boolean conditions in search-strings. As a result, users must search for "delete AND folder" in order to find help text that contains the phrase "delete a folder". Searching for "delete folder" does not yield the desired result. (Notes 8 uses or-search conventions.)
Lotus Notes 8.0 (released in 2007) became the first version to employ a dedicated user-experience team, resulting in changes to the IBM Notes client experience in the new primary user interface. This new interface runs in the open source Eclipse Framework, a project started by IBM, opening up more application development opportunities through the use of Eclipse plug-ins. The new interface provides many new user interface features and the ability to include user-selected applications/applets in small panes in the interface. Lotus Notes 8.0 also included a new email interface and design to match the new Eclipse-based interface. Eclipse is a Java framework and allows IBM to port Notes to other platforms rapidly. An issue with Eclipse, and therefore Notes 8.0, was application start-up and user-interaction speed. Lotus Notes 8.5 sped up the application, and the increase in the general specification of PCs means this is less of an issue.
IBM Notes 9 continued the evolution of the user interface to more closely align with modern application interfaces found in many commercial packaged or web-based software. Currently, the software still does not have an auto-correct option, or even the ability, to reverse accidental use of caps lock.
Domino is now running on the Eclipse platform and offers many new development environments and tools such as XPages.
For lower-spec PCs, a version of the old interface is still provided; because it is the old interface, many of the new features are not available and the email user interface reverts to the Notes 7.x style.
This new user experience builds on Notes 6.5 (released in 2003), which upgraded the email client, previously regarded by many as the product's Achilles heel. Features added at that time included:
drag and drop of folders
replication of unread marks between servers
follow-up flags
reply and forward indicators on emails
ability to edit an attachment and save the changes back to the email
Reception
Publications such as The Guardian in 2006 have criticized earlier versions of Lotus Notes for having an "unintuitive [user] interface" and cite widespread dissatisfaction with the usability of the client software. The Guardian indicated that Notes has not necessarily suffered as a result of this dissatisfaction due to the fact that "the people who choose [enterprise software] tend not to be the ones who use it."
Earlier versions of Lotus Notes have also been criticized for violating an important usability best practice that suggests a consistent UI is often better than a custom alternative. Software written for a particular operating system should follow that particular OS's user interface style guide, and not following those style guides can confuse users. A notable example is the F5 keyboard shortcut, which is used to refresh window contents in Microsoft Windows. Pressing F5 in Lotus Notes before release 8.0 caused it to lock the screen. Since this was a major point of criticism, it was changed in release 8.0. Old versions did not support proportional scrollbars (which give the user an idea of how long the document is, relative to the portion being viewed); proportional scroll bars were only introduced in Notes 8.
Older versions of Lotus Notes also suffered from similar user interaction choices, many of which were also corrected in subsequent releases. One example that was corrected in Release 8.5: In earlier versions the out-of-office agent needed to be manually enabled when leaving and disabled when coming back, even if start and end date have been set. As of Release 8.5 the out-of-office notification now automatically shuts off without a need for a manual disable.
Unlike some other e-mail client software programs, IBM Notes developers made a choice to not allow individual users to determine whether a return receipt is sent when they open an e-mail; rather, that option is configured at the server level. IBM developers believe "Allowing individual cancellation of return receipt violates the intent of a return receipt function within an organization". So, depending on system settings, users will have no choice in return receipts going back to spammers or other senders of unwanted e-mail. This has led tech sites to publish ways to get around this feature of Notes. For IBM Notes 9.0 and IBM iNotes 9.0, the IBM Domino server's .INI file can now contain an entry to control return receipt in a manner that's more aligned with community expectations (IBM Notes 9 Product Documentation).
When Notes crashes, some processes may continue running and prevent the application from being restarted until they are killed.
Related software
Related IBM Lotus products
Over the 30-year history of IBM Notes, Lotus Development Corporation and later IBM have developed many other software products that are based on, or integrated with IBM Notes. The most prominent of these is the IBM Lotus Domino server software, which was originally known as the Lotus Notes Server and gained a separate name with the release of version 4.5. The server platform also became the foundation for products such as IBM Lotus Quickr for Domino, for document management, and IBM Sametime for instant messaging, audio and video communication, and web conferencing, and with Release 8.5, IBM Connections.
In early releases of IBM Notes, there was considerable emphasis on client-side integration with the IBM Lotus SmartSuite environment. With Microsoft's increasing predominance in office productivity software, the desktop integration focus switched for a time to Microsoft Office. With the release of version 8.0 in 2007, based on the Eclipse framework, IBM again added integration with its own office-productivity suite, the OpenOffice.org-derived IBM Lotus Symphony. IBM Lotus Expeditor is a framework for developing Eclipse-based applications.
Other IBM products and technologies have also been built to integrate with IBM Notes. For mobile-device synchronization, this previously included the client-side IBM Lotus Easysync Pro product (no longer in development) and IBM Notes Traveler, a newer no-charge server-side add-on for mail, calendar and contact sync. More recent additions to IBM's portfolio are the two IBM Lotus Protector products for mail security and encryption, which have been built to integrate with IBM Notes.
Related software from other vendors
With a long market history and large installed base, Notes and Domino have spawned a large third-party software ecosystem. Such products can be divided into four broad, and somewhat overlapping classes:
Notes and Domino applications are software programs written in the form of one or more Notes databases, and often supplied as NTF templates. This type of software typically is focused on providing business benefit from Notes' core collaboration, workflow and messaging capabilities. Examples include customer relationship management (CRM), human resources, and project tracking systems. Some applications of this sort may offer a browser interface in addition to Notes client access. The code within these programs typically uses the same languages available to an in-house Domino developer: Notes formula language, LotusScript, Java and JavaScript.
Notes and Domino add-ons, tools and extensions are generally executable programs written in C, C++ or another compiled language that are designed specifically to integrate with Notes and Domino. This class of software may include both client- and server-side executable components. In some cases, Notes databases may be used for configuration and reporting. Since the advent of the Eclipse-based Notes 8 Standard client, client-side add-ons may also include Eclipse plug-ins and XML-based widgets. The typical role for this type of software is to support or extend core Notes functionality. Examples include spam and anti-virus products, server administration and monitoring tools, messaging and storage management products, policy-based tools, data synchronization tools and developer tools.
Notes and Domino-aware add-ins and agents are also executable programs, but they are designed to extend the reach of a general networked software product to Notes and Domino data. This class includes server and client backup software, anti-spam and anti-virus products, and e-discovery and archiving systems. It also includes add-ins to integrate Notes with third-party offerings such as Cisco WebEx conferencing service or the Salesforce.com CRM platform.
History
Notes has a history spanning more than 30 years. Its chief inspiration was PLATO Notes, created by David R. Woolley at the University of Illinois in 1973. In today's terminology, PLATO Notes supported user-created discussion groups, and it was part of the foundation for an online community which thrived for more than 20 years on the PLATO system. Ray Ozzie worked with PLATO while attending the University of Illinois in the 1970s. When PC network technology began to emerge, Ozzie made a deal with Mitch Kapor, the founder of Lotus Development Corporation, that resulted in the formation of Iris Associates in 1984 to develop products that would combine the capabilities of PCs with the collaborative tools pioneered in PLATO. The agreement put control of product development under Ozzie and Iris, and sales and marketing under Lotus. In 1994, after the release and marketplace success of Notes R3, Lotus purchased Iris. In 1995 IBM purchased Lotus.
In 2008, IBM released XPages technology, based on JavaServer Faces. This allows Domino applications to be better surfaced to browser clients, though the UX and business logic must be completely rewritten. Previously, Domino applications could be accessed through browsers, but required extensive web specific modifications to get full functionality in browsers. XPages also gave the application new capabilities that are not possible with the classic Notes client. The IBM Domino 9 Social Edition included the Notes Browser Plugin, which would surface Notes applications through a minified version of the rich desktop client contained in a browser tab.
Branding
Prior to release 4.5, the Lotus Notes branding encompassed both the client and server applications. In 1996, Lotus released an HTTP server add-on for the Notes 4 server called "Domino". This add-on allowed Notes documents to be rendered as web pages in real time. Later that year, the Domino web server was integrated into release 4.5 of the core Notes server and the entire server program was re-branded, taking on the name "Domino". Only the client program officially retained the "Lotus Notes" name.
In November 2012, IBM announced it would be dropping the Lotus brand and moving forward with the IBM brand only to identify products, including Notes and Domino. On October 9, 2018, IBM announced the availability of the latest version of the client and server software.
In 2019, Domino and Notes became enterprise software products managed under HCL Software.
Release history
IBM donated parts of the IBM Notes and Domino code to OpenOffice.org on September 12, 2007 and since 2008 has been regularly donating code to OpenNTF.org.
21st century
Despite repeated predictions of the decline or impending demise of IBM Notes and Domino, such as Forbes magazine's 1998 "The decline and fall of Lotus", the installed base of Lotus Notes has increased from an estimated 42 million seats in September 1998 to approximately 140 million cumulative licenses sold through 2008. Once IBM Workplace was discontinued in 2006, speculation about dropping Notes was rendered moot. Moreover, IBM introduced iNotes for iPhone two years later.
IBM contributed some of the code it had developed for the integration of the OpenOffice.org suite into Notes 8 to the project. IBM also packaged its version of OpenOffice.org for free distribution as IBM Lotus Symphony.
IBM Notes and Domino 9 Social Edition shipped on March 21, 2013. Changes include significantly updated user interface, near-parity of IBM Notes and IBM iNotes functionality, the IBM Notes Browser Plugin, new XPages controls added to IBM Domino, refreshed IBM Domino Designer user interface, added support for To Dos on Android mobile devices, and additional server functionality as detailed in the Announcement Letter.
In late 2016, IBM announced that there would not be a Notes 9.0.2 release, but 9.0.1 would be supported until at least 2021. In the same presentation IBM also stated that their internal users had been migrated away from Notes and onto the IBM Verse client.
On October 25, 2017, IBM announced a plan to deliver a Domino V10 family update sometime in 2018, with the new version to be built in partnership with HCL Technologies. IBM's development and support team responsible for these products moved to HCL; however, marketing and sales continued to be IBM-led, with product strategy shared between IBM and HCL. As part of the announcement, IBM indicated that there was no formal end to product support planned.
On October 9, 2018, IBM announced IBM Domino 10.0 and IBM Notes 10.0 in Frankfurt, Germany, and made them available to download on October 10, 2018.
Prior to the 2018 sale to HCL, IBM made announcements that it would continue to invest heavily in research and development on the IBM Notes and Domino product line.
See also
List of IBM products
IBM Collaboration Solutions (formerly Lotus) Software division
Comparison of email clients
IBM Lotus Domino Web Access
Comparison of feed aggregators
List of applications with iCalendar support
Lotus Multi-Byte Character Set (LMBCS)
NotesPeek
References
Lotus Notes
Notes
1989 software
Proprietary database management systems
Document-oriented databases
NoSQL
Proprietary commercial software for Linux
Email systems
Divested IBM products
2019 mergers and acquisitions |
60980 | https://en.wikipedia.org/wiki/Digital%20Visual%20Interface | Digital Visual Interface | Digital Visual Interface (DVI) is a video display interface developed by the Digital Display Working Group (DDWG). The digital interface is used to connect a video source, such as a video display controller, to a display device, such as a computer monitor. It was developed with the intention of creating an industry standard for the transfer of digital video content.
This interface is designed to transmit uncompressed digital video and can be configured to support multiple modes such as DVI-A (analog only), DVI-D (digital only) or DVI-I (digital and analog). Featuring support for analog connections, the DVI specification is compatible with the VGA interface. This compatibility, along with other advantages, led to its widespread acceptance over competing digital display standards Plug and Display (P&D) and Digital Flat Panel (DFP). Although DVI is predominantly associated with computers, it is sometimes used in other consumer electronics such as television sets and DVD players.
Technical overview
DVI's digital video transmission format is based on PanelLink, a serial format developed by Silicon Image that utilizes a high-speed serial link called transition minimized differential signaling (TMDS). Like modern analog VGA connectors, the DVI connector includes pins for the display data channel (DDC). A newer version of DDC called DDC2 allows the graphics adapter to read the monitor's extended display identification data (EDID). If a display supports both analog and digital signals in one DVI-I input, each input method can host a distinct EDID. Since the DDC can only support one EDID, this can be a problem if both the digital and analog inputs in the DVI-I port detect activity. It is up to the display to choose which EDID to send.
When a source and display are connected, the source first queries the display's capabilities by reading the monitor EDID block over an I²C link. The EDID block contains the display's identification, color characteristics (such as gamma value), and table of supported video modes. The table can designate a preferred mode or native resolution. Each mode is a set of CRT timing values that define the duration and frequency of the horizontal/vertical sync, the positioning of the active display area, the horizontal resolution, vertical resolution, and refresh rate.
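As a sketch of what a source does with the EDID data it reads over the DDC, the following snippet checks the fixed 8-byte header of the 128-byte base block and decodes the three-letter manufacturer ID stored in bytes 8 and 9; the sample bytes are placeholders rather than data from a real monitor.

    public class EdidHeader {
        // The EDID base block begins with a fixed 8-byte header
        private static final int[] HEADER = {0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00};

        static String manufacturerId(int[] edid) {
            // Bytes 8-9 hold three 5-bit letters, big-endian, with 'A' encoded as 1
            int raw = (edid[8] << 8) | edid[9];
            char c1 = (char) ('A' - 1 + ((raw >> 10) & 0x1F));
            char c2 = (char) ('A' - 1 + ((raw >> 5) & 0x1F));
            char c3 = (char) ('A' - 1 + (raw & 0x1F));
            return "" + c1 + c2 + c3;
        }

        public static void main(String[] args) {
            // Placeholder start of an EDID block; bytes 0x04 0x21 decode to the dummy ID "AAA"
            int[] edid = {0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x04, 0x21};
            for (int i = 0; i < HEADER.length; i++) {
                if (edid[i] != HEADER[i]) throw new IllegalStateException("not an EDID base block");
            }
            System.out.println("Manufacturer: " + manufacturerId(edid));
        }
    }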
For backward compatibility with displays using analog VGA signals, some of the contacts in the DVI connector carry the analog VGA signals. To ensure a basic level of interoperability, DVI compliant devices are required to support one baseline video mode, "low pixel format" (640 × 480 at 60 Hz). Digitally encoded video pixel data is transported using multiple TMDS links. At the electrical level, these links are highly resistant to electrical noise and other forms of analog distortion.
A single link DVI connection consists of four TMDS links; each link transmits data from the source to the device over one twisted pair. Three of the links represent the RGB components (red, green, and blue) of the video signal for a total of 24 bits per pixel. The fourth link carries the pixel clock. The binary data is encoded using 8b/10b encoding. DVI does not use packetization, but rather transmits the pixel data as if it were a rasterized analog video signal. As such, the complete frame is drawn during each vertical refresh period. The full active area of each frame is always transmitted without compression. Video modes typically use horizontal and vertical refresh timings that are compatible with CRT displays, though this is not a requirement. In single-link mode, the maximum pixel clock frequency is 165 MHz that supports a maximum resolution of 2.75 megapixels (including blanking interval) at 60 Hz refresh. For practical purposes, this allows a maximum 16:10 screen resolution of 1920 × 1200 at 60 Hz.
To support higher-resolution display devices, the DVI specification contains a provision for dual link. Dual-link DVI doubles the number of TMDS pairs, effectively doubling the video bandwidth. As a result, higher resolutions up to 2560 × 1600 are supported at 60 Hz.
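The arithmetic behind these limits can be illustrated with a short calculation; the 2080 × 1235 figure used below is the commonly cited CVT-RB total frame size (active pixels plus blanking) for 1920 × 1200 at 60 Hz, and is an assumption of this sketch rather than a number taken from the DVI specification.

    public class DviBandwidth {
        public static void main(String[] args) {
            double pixelClockMax = 165e6;          // single-link maximum pixel clock, Hz

            // Pixel budget per frame at 60 Hz, including the blanking interval
            double pixelsPerFrame = pixelClockMax / 60.0;
            System.out.printf("Pixels per frame at 60 Hz: %.2f million%n", pixelsPerFrame / 1e6);

            // 1920 x 1200 at 60 Hz with CVT-RB blanking: 2080 x 1235 total pixels per frame
            double requiredClock = 2080.0 * 1235.0 * 60.0;
            System.out.printf("1920x1200@60 CVT-RB needs %.1f MHz (limit 165 MHz)%n", requiredClock / 1e6);

            // Raw and payload data rates across the three TMDS data channels
            double rawRate = pixelClockMax * 10 * 3;       // ten bits per symbol per channel
            double payloadRate = pixelClockMax * 8 * 3;    // eight data bits per symbol
            System.out.printf("Single link: %.2f Gbit/s raw, %.2f Gbit/s payload%n",
                    rawRate / 1e9, payloadRate / 1e9);
            System.out.printf("Dual link:   %.2f Gbit/s raw, %.2f Gbit/s payload%n",
                    2 * rawRate / 1e9, 2 * payloadRate / 1e9);
        }
    }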
Cable length
The maximum length recommended for DVI cables is not included in the specification, since it is dependent on the pixel clock frequency. In general, shorter cables will work for display resolutions up to 1920 × 1200, while longer cables can be used with display resolutions of 1280 × 1024 or lower. For greater distances, the use of a DVI booster (a signal repeater which may use an external power supply) is recommended to help mitigate signal degradation.
Connector
The DVI connector on a device is given one of three names, depending on which signals it implements:
DVI-I (integrated, combines digital and analog in the same connector; digital may be single or dual link)
DVI-D (digital only, single link or dual link)
DVI-A (analog only)
Most DVI connector types—the exception is DVI-A—have pins that pass digital video signals. These come in two varieties: single link and dual link. Single link DVI employs a single 165 MHz transmitter that supports resolutions up to 1920 × 1200 at 60 Hz. Dual link DVI adds six pins, at the center of the connector, for a second transmitter increasing the bandwidth and supporting resolutions up to 2560 × 1600 at 60 Hz. A connector with these additional pins is sometimes referred to as DVI-DL (dual link). Dual link should not be confused with dual display (also known as dual head), which is a configuration consisting of a single computer connected to two monitors, sometimes using a DMS-59 connector for two single link DVI connections.
In addition to digital, some DVI connectors also have pins that pass an analog signal, which can be used to connect an analog monitor. The analog pins are the four that surround the flat blade on a DVI-I or DVI-A connector. A VGA monitor, for example, can be connected to a video source with DVI-I through the use of a passive adapter. Since the analog pins are directly compatible with VGA signaling, passive adapters are simple and cheap to produce, providing a cost-effective solution to support VGA on DVI. The long flat pin on a DVI-I connector is wider than the same pin on a DVI-D connector, so even if the four analog pins were manually removed, it still wouldn't be possible to connect a male DVI-I to a female DVI-D. It is possible, however, to join a male DVI-D connector with a female DVI-I connector.
DVI is the only widespread video standard that includes analog and digital transmission in the same connector. Competing standards are exclusively digital: these include a system using low-voltage differential signaling (LVDS), known by its proprietary names FPD-Link (flat-panel display) and FLATLINK; and its successors, the LVDS Display Interface (LDI) and OpenLDI.
Some DVD players, HDTV sets, and video projectors have DVI connectors that transmit an encrypted signal for copy protection using the High-bandwidth Digital Content Protection (HDCP) protocol. Computers can be connected to HDTV sets over DVI, but the graphics card must support HDCP to play content protected by digital rights management (DRM).
Specifications
Digital
Minimum clock frequency: 25.175 MHz
Single link maximum data rate including 8b/10b overhead is 4.95 Gbit/s @ 165 MHz. With the 8b/10b overhead subtracted, the maximum data rate is 3.96 Gbit/s.
Dual link maximum data rate is twice that of single link. Including 8b/10b overhead, the maximum data rate is 9.90 Gbit/s @ 165 MHz. With the 8b/10b overhead subtracted, the maximum data rate is 7.92 Gbit/s.
Pixels per clock cycle:
1 (single link at 24 bits or less per pixel, and dual link at between 25 and 48 bits inclusively per pixel) or
2 (dual link at 24 bits or less per pixel)
Bits per pixel:
24 bits per pixel support is mandatory in all resolutions supported.
Less than 24 bits per pixel is optional.
Up to 48 bits per pixel are supported in dual link DVI, and is optional. If a mode greater than 24 bits per pixel is desired, the least significant bits are sent on the second link.
Example display modes (single link):
SXGA (1280 × 1024) @ 85 Hz with GTF blanking (159 MHz)
HDTV (1920 × 1080) @ 60 Hz with CVT-RB blanking (139 MHz)
UXGA (1600 × 1200) @ 60 Hz with GTF blanking (161 MHz)
WUXGA (1920 × 1200) @ 60 Hz with CVT-RB blanking (154 MHz)
WQXGA (2560 × 1600) @ 30 Hz with CVT-RB blanking (132 MHz)
Example display modes (dual link):
QXGA (2048 × 1536) @ 72 Hz with CVT blanking (2 × 163 MHz)
HDTV (1920 × 1080) @ 120 Hz with CVT-RB blanking (2 × 143 MHz)
WUXGA (1920 × 1200) @ 120 Hz with CVT-RB blanking (2 × 154 MHz)
WQXGA (2560 × 1600) @ 60 Hz with CVT-RB blanking (2 × 135 MHz)
WQUXGA (3840 × 2400) @ 30 Hz with CVT-RB blanking (2 × 146 MHz)
Generalized Timing Formula (GTF) is a VESA standard which can easily be calculated with the Linux gtf utility. Coordinated Video Timings-Reduced Blanking (CVT-RB) is a VESA standard which offers reduced horizontal and vertical blanking for non-CRT based displays.
Digital data encoding
One of the purposes of DVI stream encoding is to provide a DC-balanced output link that reduces decoding errors. This goal is achieved by using 10-bit symbols for characters of 8 bits or fewer and using the extra bits for the DC balancing.
Like other ways of transmitting video, there are two different regions: the active region, where pixel data is sent, and the control region, where synchronization signals are sent. The active region is encoded using transition-minimized differential signaling, while the control region is encoded with a fixed 8b/10b encoding. As the two schemes yield different 10-bit symbols, a receiver can fully differentiate between active and control regions.
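The transition-minimizing choice made for each active-region byte can be sketched as follows; this shows only the first stage of TMDS encoding, which produces a 9-bit intermediate value, while the second stage (the DC balancing, which may invert the byte and appends the tenth bit) is omitted for brevity.

    public class TmdsStageOne {
        // First stage of TMDS encoding: chain the input bits through XOR or XNOR,
        // whichever yields fewer transitions, and record the choice in bit 8
        static int minimizeTransitions(int d) {
            int ones = Integer.bitCount(d & 0xFF);
            boolean useXnor = ones > 4 || (ones == 4 && (d & 1) == 0);

            int q = d & 1;                               // q_m[0] = D[0]
            for (int i = 1; i < 8; i++) {
                int di = (d >> i) & 1;
                int prev = (q >> (i - 1)) & 1;
                int bit = useXnor ? ~(prev ^ di) & 1 : (prev ^ di);
                q |= bit << i;
            }
            if (!useXnor) {
                q |= 1 << 8;                             // q_m[8] = 1 marks the XOR case
            }
            return q;                                    // 9-bit intermediate symbol
        }

        public static void main(String[] args) {
            System.out.printf("0x%03X%n", minimizeTransitions(0xA5));
        }
    }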
When DVI was designed, most computer monitors were still of the cathode ray tube type that require analog video synchronization signals. The timing of the digital synchronization signals matches the equivalent analog ones, making the process of transforming DVI to and from an analog signal a process that does not require extra (high-speed) memory, expensive at the time.
HDCP is an extra layer that transforms the 10-bit symbols before sending through the link. Only after correct authorization can the receiver undo the HDCP encryption. Control regions are not encrypted in order to let the receiver know when the active region starts.
Clock and data relationship
The DVI data channel operates at a bit-rate that is 10 times the frequency of the clock signal. In other words, in each DVI clock period there is a 10-bit symbol per channel. The set of three 10-bit symbols represents one complete pixel in single link mode and can represent either one or two complete pixels as a set of six 10-bit symbols in dual link mode.
DVI links provide differential pairs for data and for the clock. The specification document allows the data and the clock to not be aligned. However, as the ratio between clock and bit rate is fixed at 1:10, the unknown alignment is kept over time. The receiver must recover the bits on the stream using any of the techniques of clock/data recovery and find then the correct symbol boundary. The DVI specification allows the input clock to vary between 25 MHz and 165 MHz. This 1:6.6 ratio can make pixel recovery difficult, as phase-locked loops, if used, need to work over a large frequency range. One benefit of DVI over other links is that it is relatively straightforward to transform the signal from the digital domain into the analog domain using a video DAC, as both clock and synchronization signals are sent over the link. Fixed frequency links, like DisplayPort, need to reconstruct the clock from the data sent over the link.
Display power management
The DVI specification includes signaling for reducing power consumption. Similar to the analog VESA display power management signaling (DPMS) standard, a connected device can turn a monitor off when the connected device is powered down, or programmatically if the display controller of the device supports it. Devices with this capability can also attain Energy Star certification.
Analog
The analog section of the DVI specification document is brief and points to other specifications like VESA VSIS for electrical characteristics and GTFS for timing information. The idea of the analog link is to keep compatibility with the previous VGA cables and connectors. HSync, Vsync and three video channels are available in both VGA and DVI connectors and are electrically compatible. Auxiliary links like DDC are also available. A passive adapter can be used in order to carry the analog signals between the two connectors.
DVI and HDMI compatibility
HDMI is a newer digital audio/video interface developed and promoted by the consumer electronics industry. DVI and HDMI have the same electrical specifications for their TMDS and VESA/DDC links. However HDMI and DVI differ in several key ways.
HDMI lacks VGA compatibility and does not include analog signals.
DVI is limited to the RGB color model while HDMI also supports YCbCr 4:4:4 and YCbCr 4:2:2 color spaces which are generally not used for computer graphics.
In addition to digital video, HDMI supports the transport of packets used for digital audio.
HDMI sources differentiate between legacy DVI displays and HDMI-capable displays by reading the display's EDID block.
To promote interoperability between DVI-D and HDMI devices, HDMI source components and displays support DVI-D signalling. For example, an HDMI display can be driven by a DVI-D source because HDMI and DVI-D both define an overlapping minimum set of supported resolutions and frame buffer formats.
Some DVI-D sources use non-standard extensions to output HDMI signals including audio (e.g. ATI 3000-series and NVIDIA GTX 200-series). Some multimedia displays use a DVI to HDMI adapter to input the HDMI signal with audio. Exact capabilities vary by video card specifications.
In the reverse scenario, a DVI display that lacks optional support for HDCP might be unable to display protected content even though it is otherwise compatible with the HDMI source. Features specific to HDMI such as remote control, audio transport, xvYCC and deep color are not usable in devices that support only DVI signals. HDCP compatibility between source and destination devices is subject to manufacturer specifications for each device.
Proposed successors
IEEE 1394 is proposed by the High-Definition Audio-Video Network Alliance (HANA Alliance) for all cabling needs, including video, over coax or 1394 cable as a combined data stream. However, this interface does not have enough throughput to handle uncompressed HD video, making it unsuitable for applications that require it, such as video games and interactive program guides.
High-Definition Multimedia Interface (HDMI), a forward-compatible standard that also includes digital audio transmission
Unified Display Interface (UDI) was proposed by Intel to replace both DVI and HDMI, but was deprecated in favor of DisplayPort.
DisplayPort (a license-free standard proposed by VESA to succeed DVI that has optional DRM mechanisms) / Mini DisplayPort
Thunderbolt: an interface that has the same form factor as Mini DisplayPort (in version 1 and 2) or USB-C (in version 3 and 4) but combines PCI Express (PCIe) and DisplayPort (DP) into one serial signal, permitting the connection of PCIe devices in addition to video displays. It provides DC power as well.
In December 2010, Intel, AMD, and several computer and display manufacturers announced they would stop supporting DVI-I, VGA and LVDS-technologies from 2013/2015, and instead speed up adoption of DisplayPort and HDMI. They also stated: "Legacy interfaces such as VGA, DVI and LVDS have not kept pace, and newer standards such as DisplayPort and HDMI clearly provide the best connectivity options moving forward. In our opinion, DisplayPort 1.2 is the future interface for PC monitors, along with HDMI 1.4a for TV connectivity".
See also
DMS-59 - a single DVI sized connector providing two single link DVI or VGA channels
List of video connectors
DiiVA
Lightning (connector)
References
Further reading
Computer connectors
Computer display standards
Computer-related introductions in 1999
Digital display connectors
Film and video technology
High-definition television
American inventions
Television technology
Television transmission standards
Video signal |
61118 | https://en.wikipedia.org/wiki/Load%20balancing%20%28computing%29 | Load balancing (computing) | In computing, load balancing refers to the process of distributing a set of tasks over a set of resources (computing units), with the aim of making their overall processing more efficient. Load balancing can optimize the response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle.
Load balancing is the subject of research in the field of parallel computers. Two main approaches exist: static algorithms, which do not take into account the state of the different machines, and dynamic algorithms, which are usually more general and more efficient, but require exchanges of information between the different computing units, at the risk of a loss of efficiency.
Problem overview
A load balancing algorithm always tries to answer a specific problem. Among other things, the nature of the tasks, the algorithmic complexity, the hardware architecture on which the algorithms will run, and the required error tolerance must be taken into account. Therefore, a compromise must be found to best meet application-specific requirements.
Nature of tasks
The efficiency of load balancing algorithms critically depends on the nature of the tasks. Therefore, the more information about the tasks is available at the time of decision making, the greater the potential for optimization.
Size of tasks
Perfect knowledge of the execution time of each of the tasks makes it possible to reach an optimal load distribution (see the prefix sum algorithm below). Unfortunately, this is an idealized case: knowing the exact execution time of each task is an extremely rare situation.
For this reason, there are several techniques to get an idea of the different execution times. First of all, in the fortunate scenario of having tasks of relatively homogeneous size, it is possible to consider that each of them will require approximately the average execution time. If, on the other hand, the execution time is very irregular, more sophisticated techniques must be used. One technique is to add some metadata to each task. Depending on the previous execution time for similar metadata, it is possible to make inferences for a future task based on statistics.
Dependencies
In some cases, tasks depend on each other. These interdependencies can be illustrated by a directed acyclic graph. Intuitively, some tasks cannot begin until others are completed.
Assuming that the required time for each of the tasks is known in advance, an optimal execution order must lead to the minimization of the total execution time. This is an NP-hard problem and can therefore be difficult to solve exactly. There are algorithms, such as job schedulers, that calculate approximately optimal task distributions using metaheuristic methods.
Segregation of tasks
Another feature of the tasks critical for the design of a load balancing algorithm is their ability to be broken down into subtasks during execution. The "Tree-Shaped Computation" algorithm presented later takes great advantage of this specificity.
Static and dynamic algorithms
Static
A load balancing algorithm is "static" when it does not take into account the state of the system for the distribution of tasks. Here, the system state includes measures such as the load level (and sometimes even overload) of certain processors. Instead, assumptions about the overall system are made beforehand, such as the arrival times and resource requirements of incoming tasks. In addition, the number of processors, their respective power and communication speeds are known. Therefore, static load balancing aims to associate a known set of tasks with the available processors in order to minimize a certain performance function. The trick lies in the design of this performance function.
Static load balancing techniques are commonly centralized around a router, or Master, which distributes the loads and optimizes the performance function. This minimization can take into account information related to the tasks to be distributed, and derive an expected execution time.
The advantage of static algorithms is that they are easy to set up and extremely efficient in the case of fairly regular tasks (such as processing HTTP requests from a website). However, there is still some statistical variance in the assignment of tasks which can lead to overloading of some computing units.
Dynamic
Unlike static load distribution algorithms, dynamic algorithms take into account the current load of each of the computing units (also called nodes) in the system. In this approach, tasks can be moved dynamically from an overloaded node to an underloaded node in order to receive faster processing. While these algorithms are much more complicated to design, they can produce excellent results, in particular, when the execution time varies greatly from one task to another.
Dynamic load balancing architecture can be more modular since it is not mandatory to have a specific node dedicated to the distribution of work. When tasks are uniquely assigned to a processor according to its state at a given moment, this is called unique assignment. If, on the other hand, the tasks can be permanently redistributed according to the state of the system and its evolution, this is called dynamic assignment. Obviously, a load balancing algorithm that requires too much communication in order to reach its decisions runs the risk of slowing down the resolution of the overall problem.
Hardware architecture
Heterogeneous machines
Parallel computing infrastructures are often composed of units of different computing power, which should be taken into account for the load distribution.
For example, lower-powered units may receive requests that require a smaller amount of computation, or, in the case of homogeneous or unknown request sizes, receive fewer requests than larger units.
Shared and distributed memory
Parallel computers are often divided into two broad categories: those where all processors share a single common memory on which they read and write in parallel (PRAM model), and those where each computing unit has its own memory (distributed memory model), and where information is exchanged by messages.
For shared-memory computers, managing write conflicts greatly slows down the speed of individual execution of each computing unit. However, they can work perfectly well in parallel. Conversely, in the case of message exchange, each of the processors can work at full speed. On the other hand, when it comes to collective message exchange, all processors are forced to wait for the slowest processors to start the communication phase.
In reality, few systems fall into exactly one of the categories. In general, the processors each have an internal memory to store the data needed for the next calculations, and are organized in successive clusters. Often, these processing elements are then coordinated through distributed memory and message passing. Therefore, the load balancing algorithm should be uniquely adapted to a parallel architecture. Otherwise, there is a risk that the efficiency of parallel problem solving will be greatly reduced.
Hierarchy
Adapting to the hardware structures seen above, there are two main categories of load balancing algorithms. In the first, tasks are assigned by a "master" and executed by "workers" who keep the master informed of the progress of their work; the master can then take charge of assigning or reassigning the workload in the case of a dynamic algorithm. The literature refers to this as "Master-Worker" architecture. In the second, control can be distributed between the different nodes. The load balancing algorithm is then executed on each of them and the responsibility for assigning tasks (as well as re-assigning and splitting as appropriate) is shared. The latter category assumes a dynamic load balancing algorithm.
Since the design of each load balancing algorithm is unique, the previous distinction must be qualified. Thus, it is also possible to have an intermediate strategy, with, for example, "master" nodes for each sub-cluster, which are themselves subject to a global "master". There are also multi-level organizations, with an alternation between master-slave and distributed control strategies. The latter strategies quickly become complex and are rarely encountered. Designers prefer algorithms that are easier to control.
Adaptation to larger architectures (scalability)
In the context of algorithms that run over the very long term (servers, cloud...), the computer architecture evolves over time. However, it is preferable not to have to design a new algorithm each time.
An extremely important parameter of a load balancing algorithm is therefore its ability to adapt to a scalable hardware architecture. This is called the scalability of the algorithm. An algorithm is called scalable for an input parameter when its performance remains relatively independent of the size of that parameter.
When the algorithm is capable of adapting to a varying number of computing units, but the number of computing units must be fixed before execution, it is called moldable. If, on the other hand, the algorithm is capable of dealing with a fluctuating amount of processors during its execution, the algorithm is said to be malleable. Most load balancing algorithms are at least moldable.
Fault tolerance
Especially in large-scale computing clusters, it is not tolerable to execute a parallel algorithm which cannot withstand failure of one single component. Therefore, fault tolerant algorithms are being developed which can detect outages of processors and recover the computation.
Approaches
Static distribution with full knowledge of the tasks: prefix sum
If the tasks are independent of each other, their respective execution times are known in advance, and the tasks can be subdivided, there is a simple and optimal algorithm.
By dividing the tasks in such a way as to give the same amount of computation to each processor, all that remains to be done is to group the results together. Using a prefix sum algorithm, this division can be calculated in logarithmic time with respect to the number of processors.
If, however, the tasks cannot be subdivided (i.e., they are atomic), although optimizing task assignment is a difficult problem, it is still possible to approximate a relatively fair distribution of tasks, provided that the size of each of them is much smaller than the total computation performed by each of the nodes.
Most of the time, the execution time of a task is unknown and only rough approximations are available. This algorithm, although particularly efficient, is not viable for these scenarios.
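The following Python sketch illustrates the prefix-sum idea described above under simplifying assumptions: the per-task costs are known, the tasks are atomic, and the goal is to cut the task list into contiguous chunks of roughly equal total work. The function and variable names are illustrative and not taken from any particular library.

import itertools, bisect

def partition(costs, p):
    # Split a list of task costs into p contiguous chunks of roughly
    # equal total work, using a prefix sum of the costs.
    prefix = list(itertools.accumulate(costs))   # prefix[i] = total cost of tasks 0..i
    total = prefix[-1]
    cuts = [bisect.bisect_left(prefix, total * (k + 1) / p) for k in range(p - 1)]
    bounds = [0] + [c + 1 for c in cuts] + [len(costs)]
    return [list(range(bounds[k], bounds[k + 1])) for k in range(p)]

print(partition([4, 1, 1, 2, 8, 2, 2, 4], 3))    # three chunks of cost 8 each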
Static load distribution without prior knowledge
Even if the execution time is not known in advance at all, static load distribution is always possible.
Round-robin scheduling
In a round-robin algorithm, the first request is sent to the first server, then the next to the second, and so on down to the last. Then it is started again, assigning the next request to the first server, and so on.
This algorithm can be weighted such that the most powerful units receive the largest number of requests and receive them first.
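A minimal Python sketch of such a weighted round-robin scheduler is shown below; the server names and weights are hypothetical, and the naive weight expansion used here sends bursts of consecutive requests to the same server, which smoother interleaving schemes avoid.

import itertools

servers = ["s1", "s2", "s3"]           # hypothetical backend names
weights = {"s1": 3, "s2": 2, "s3": 1}  # more powerful units receive more requests

# Expand each server according to its weight, then cycle forever.
schedule = itertools.cycle([s for s in servers for _ in range(weights[s])])

def next_server():
    return next(schedule)

print([next_server() for _ in range(8)])   # s1 s1 s1 s2 s2 s3 s1 s1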
Randomized static
Randomized static load balancing is simply a matter of randomly assigning tasks to the different servers. This method works quite well. If, on the other hand, the number of tasks is known in advance, it is even more efficient to calculate a random permutation in advance. This avoids communication costs for each assignment. There is no longer a need for a distribution master because every processor knows what task is assigned to it. Even if the number of tasks is unknown, it is still possible to avoid communication with a pseudo-random assignment generation known to all processors.
The performance of this strategy (measured in total execution time for a given fixed set of tasks) decreases with the maximum size of the tasks.
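A minimal sketch of this idea in Python: every processor that shares the same seed computes the same pseudo-random permutation and therefore the same assignment, so no distribution master and no communication are needed. The task and processor counts are arbitrary placeholders.

import random

tasks = list(range(12))        # 12 tasks, known in advance
processors = 4

rng = random.Random(42)        # a seed agreed upon by all processors
perm = rng.sample(tasks, len(tasks))             # shared pseudo-random permutation
assignment = {t: i % processors for i, t in enumerate(perm)}
print(assignment)              # every processor computes the same mapping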
Others
Of course, there are other methods of assignment as well:
Less work: assign more tasks to the servers that are performing less work (the method can also be weighted).
Hash: allocates queries according to a hash table.
Power of Two Choices: pick two servers at random and choose the better of the two options.
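The "power of two choices" method above can be sketched in a few lines of Python; the load values are hypothetical connection counts, and a real balancer would obtain them from its monitoring or health-checking subsystem.

import random

def pick_server(loads):
    # Power of two choices: sample two servers at random and send the
    # request to the less loaded of the two.
    a, b = random.sample(range(len(loads)), 2)
    return a if loads[a] <= loads[b] else b

loads = [17, 3, 9, 12]          # hypothetical current connection counts
chosen = pick_server(loads)
loads[chosen] += 1              # record the new connection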
Master-Worker Scheme
Master-Worker schemes are among the simplest dynamic load balancing algorithms. A master distributes the workload to all workers (also sometimes referred to as "slaves"). Initially, all workers are idle and report this to the master. The master answers worker requests and distributes the tasks to them. When it has no more tasks to give, it informs the workers so that they stop asking for tasks.
The advantage of this system is that it distributes the burden very fairly. In fact, if one does not take into account the time needed for the assignment, the execution time would be comparable to the prefix sum seen above.
The problem with this algorithm is that it has difficulty adapting to a large number of processors because of the high volume of communication required. This lack of scalability makes it quickly inoperable in very large servers or very large parallel computers. The master acts as a bottleneck.
However, the quality of the algorithm can be greatly improved by replacing the master by a task list which can be used by different processors. Although this algorithm is a little more difficult to implement, it promises much better scalability, although still insufficient for very large computing centers.
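A minimal single-machine sketch of this task-list variant, using a shared queue and worker threads in Python; a real system would distribute the queue and the workers across machines, which this toy example does not attempt.

import queue, threading

tasks = queue.Queue()
for item in range(20):
    tasks.put(item)

def process(item):
    pass                               # placeholder for the real work

def worker():
    while True:
        try:
            item = tasks.get_nowait()  # idle workers pull the next task themselves
        except queue.Empty:
            return                     # no tasks left: stop asking
        process(item)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()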
Non-hierarchical architecture, without knowledge of the system: work stealing
Another technique to overcome scalability problems when the time needed for task completion is unknown is work stealing.
The approach consists of assigning to each processor a certain number of tasks in a random or predefined manner, then allowing inactive processors to "steal" work from active or overloaded processors. Several implementations of this concept exist, defined by a task division model and by the rules determining the exchange between processors. While this technique can be particularly effective, it is difficult to implement because it is necessary to ensure that communication does not become the primary occupation of the processors instead of solving the problem.
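One common formulation keeps a double-ended queue of tasks per processor: owners take work from one end, and idle processors steal from the other. The following single-process Python sketch shows only the selection logic and omits the synchronization that real multi-processor implementations require.

import collections, random

# One double-ended queue of tasks per processor; owners pop from one end,
# thieves steal from the other.
deques = [collections.deque(range(p * 10, p * 10 + 10)) for p in range(4)]

def get_task(p):
    if deques[p]:
        return deques[p].pop()                   # work on your own tasks first
    victims = [v for v in range(len(deques)) if v != p and deques[v]]
    if victims:
        return deques[random.choice(victims)].popleft()   # steal from the other end
    return None                                  # nothing left anywhere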
In the case of atomic tasks, two main strategies can be distinguished, those where the processors with low load offer their computing capacity to those with the highest load, and those where the most loaded units wish to lighten the workload assigned to them. It has been shown that when the network is heavily loaded, it is more efficient for the least loaded units to offer their availability and when the network is lightly loaded, it is the overloaded processors that require support from the most inactive ones. This rule of thumb limits the number of exchanged messages.
In the case where one starts from a single large task that cannot be divided beyond an atomic level, there is a very efficient algorithm "Tree-Shaped computation", where the parent task is distributed in a work tree.
Principle
Initially, all processors have an empty task except one, which works sequentially on it. Idle processors issue requests randomly to other processors (not necessarily active). If the queried processor is able to subdivide the task it is working on, it does so by sending part of its work to the node making the request. Otherwise, it returns an empty task. This induces a tree structure. It is then necessary to send a termination signal to the parent processor when the subtask is completed, so that it in turn sends the message to its parent until it reaches the root of the tree. When the first processor, i.e. the root, has finished, a global termination message can be broadcast. At the end, it is necessary to assemble the results by going back up the tree.
Efficiency
The efficiency of such an algorithm is close to the prefix sum when the job cutting and communication time is not too high compared to the work to be done. To avoid too high communication costs, it is possible to imagine a list of jobs on shared memory. Therefore, a request is simply reading from a certain position on this shared memory at the request of the master processor.
Use cases
In addition to efficient problem solving through parallel computations, load balancing algorithms are widely used in HTTP request management where a site with a large audience must be able to handle a large number of requests per second.
Internet-based services
One of the most commonly used applications of load balancing is to provide a single Internet service from multiple servers, sometimes known as a server farm. Commonly load-balanced systems include popular web sites, large Internet Relay Chat networks, high-bandwidth File Transfer Protocol (FTP) sites, Network News Transfer Protocol (NNTP) servers, Domain Name System (DNS) servers, and databases.
Round-robin DNS
Round-robin DNS is an alternate method of load balancing that does not require a dedicated software or hardware node. In this technique, multiple IP addresses are associated with a single domain name; clients are given IP addresses in a round-robin fashion. IP addresses are assigned to clients with a short expiration time, so the client is more likely to use a different IP the next time they access the Internet service being requested.
DNS delegation
Another, more effective technique for load-balancing using DNS is to delegate www.example.org as a sub-domain whose zone is served by each of the same servers that are serving the website. This technique works particularly well where individual servers are spread geographically on the Internet. For example:
one.example.org A 192.0.2.1
two.example.org A 203.0.113.2
www.example.org NS one.example.org
www.example.org NS two.example.org
However, the zone file for www.example.org on each server is different, such that each server resolves its own IP address as the A-record. On server one the zone file for www.example.org reports:
@ in a 192.0.2.1
On server two the same zone file contains:
@ in a 203.0.113.2
This way, when a server is down, its DNS will not respond and the web service does not receive any traffic. If the line to one server is congested, the unreliability of DNS ensures less HTTP traffic reaches that server. Furthermore, the quickest DNS response to the resolver is nearly always the one from the network's closest server, ensuring geo-sensitive load-balancing. A short TTL on the A-record helps to ensure traffic is quickly diverted when a server goes down. Consideration must be given to the possibility that this technique may cause individual clients to switch between individual servers in mid-session.
Client-side random load balancing
Another approach to load balancing is to deliver a list of server IPs to the client, and then have the client randomly select an IP from the list on each connection. This essentially relies on all clients generating similar loads, and on the law of large numbers, to achieve a reasonably flat load distribution across servers. It has been claimed that client-side random load balancing tends to provide better load distribution than round-robin DNS; this has been attributed to caching issues with round-robin DNS, which in the case of large DNS caching servers tend to skew the distribution, while client-side random selection remains unaffected regardless of DNS caching.
With this approach, the method of delivering the list of IPs to the client can vary and may be implemented as a DNS list (delivered to all the clients without any round-robin) or by hardcoding the list into the client. If a "smart client" is used, which detects that a randomly selected server is down and connects again at random, it also provides fault tolerance.
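A minimal sketch of such a smart client in Python, reusing the documentation IP addresses from the DNS example above; the retry count and timeout are arbitrary choices.

import random, socket

SERVER_IPS = ["192.0.2.1", "203.0.113.2"]    # list delivered to the client

def connect(port=80, attempts=3):
    # Pick a random server; if it is unreachable, pick again ("smart client").
    for _ in range(attempts):
        ip = random.choice(SERVER_IPS)
        try:
            return socket.create_connection((ip, port), timeout=2)
        except OSError:
            continue                          # server down or unreachable, retry elsewhere
    raise ConnectionError("no server reachable")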
Server-side load balancers
For Internet services, a server-side load balancer is usually a software program that is listening on the port where external clients connect to access services. The load balancer forwards requests to one of the "backend" servers, which usually replies to the load balancer. This allows the load balancer to reply to the client without the client ever knowing about the internal separation of functions. It also prevents clients from contacting back-end servers directly, which may have security benefits by hiding the structure of the internal network and preventing attacks on the kernel's network stack or unrelated services running on other ports.
Some load balancers provide a mechanism for doing something special in the event that all backend servers are unavailable. This might include forwarding to a backup load balancer or displaying a message regarding the outage.
It is also important that the load balancer itself does not become a single point of failure. Usually, load balancers are implemented in high-availability pairs which may also replicate session persistence data if required by the specific application. Certain applications are programmed with immunity to this problem, by offsetting the load balancing point over differential sharing platforms beyond the defined network. The sequential algorithms paired to these functions are defined by flexible parameters unique to the specific database.
Scheduling algorithms
Numerous scheduling algorithms, also called load-balancing methods, are used by load balancers to determine which back-end server to send a request to.
Simple algorithms include random choice, round robin, or least connections. More sophisticated load balancers may take additional factors into account, such as a server's reported load, least response times, up/down status (determined by a monitoring poll of some kind), number of active connections, geographic location, capabilities, or how much traffic it has recently been assigned.
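As an illustration, a weighted least-connections choice can be expressed in a few lines of Python; the server names, connection counts, and weights are hypothetical.

servers = {"a": {"active": 12, "weight": 2},
           "b": {"active": 5,  "weight": 1},
           "c": {"active": 9,  "weight": 2}}

def least_connections(servers):
    # The fewest active connections per unit of weight wins.
    return min(servers, key=lambda s: servers[s]["active"] / servers[s]["weight"])

print(least_connections(servers))   # "c": 9/2 = 4.5 is the lowest ratio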
Persistence
An important issue when operating a load-balanced service is how to handle information that must be kept across the multiple requests in a user's session. If this information is stored locally on one backend server, then subsequent requests going to different backend servers would not be able to find it. This might be cached information that can be recomputed, in which case load-balancing a request to a different backend server just introduces a performance issue.
Ideally, the cluster of servers behind the load balancer should not be session-aware, so that if a client connects to any backend server at any time the user experience is unaffected. This is usually achieved with a shared database or an in-memory session database like Memcached.
One basic solution to the session data issue is to send all requests in a user session consistently to the same backend server. This is known as "persistence" or "stickiness". A significant downside to this technique is its lack of automatic failover: if a backend server goes down, its per-session information becomes inaccessible, and any sessions depending on it are lost. The same problem is usually relevant to central database servers; even if web servers are "stateless" and not "sticky", the central database is (see below).
Assignment to a particular server might be based on a username, client IP address, or be random. Because of changes of the client's perceived address resulting from DHCP, network address translation, and web proxies this method may be unreliable. Random assignments must be remembered by the load balancer, which creates a burden on storage. If the load balancer is replaced or fails, this information may be lost, and assignments may need to be deleted after a timeout period or during periods of high load to avoid exceeding the space available for the assignment table. The random assignment method also requires that clients maintain some state, which can be a problem, for example when a web browser has disabled storage of cookies. Sophisticated load balancers use multiple persistence techniques to avoid some of the shortcomings of any one method.
Another solution is to keep the per-session data in a database. This is generally bad for performance because it increases the load on the database: the database is best used to store information less transient than per-session data. To prevent a database from becoming a single point of failure, and to improve scalability, the database is often replicated across multiple machines, and load balancing is used to spread the query load across those replicas. Microsoft's ASP.net State Server technology is an example of a session database. All servers in a web farm store their session data on State Server and any server in the farm can retrieve the data.
In the very common case where the client is a web browser, a simple but efficient approach is to store the per-session data in the browser itself. One way to achieve this is to use a browser cookie, suitably time-stamped and encrypted. Another is URL rewriting. Storing session data on the client is generally the preferred solution: then the load balancer is free to pick any backend server to handle a request. However, this method of state-data handling is poorly suited to some complex business logic scenarios, where session state payload is big and recomputing it with every request on a server is not feasible. URL rewriting has major security issues, because the end-user can easily alter the submitted URL and thus change session streams.
Yet another solution to storing persistent data is to associate a name with each block of data, and use a distributed hash table to pseudo-randomly assign that name to one of the available servers, and then store that block of data in the assigned server.
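The following Python sketch shows the simplest deterministic version of this idea, mapping a data-block name to a server with a hash; the server names are placeholders, and production systems typically use consistent hashing or a real distributed hash table so that adding or removing a server relocates only a small fraction of the names.

import hashlib

servers = ["store-1", "store-2", "store-3"]   # hypothetical storage nodes

def assign(name):
    # Pseudo-randomly but deterministically map a data-block name to a server.
    digest = hashlib.sha256(name.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(servers)
    return servers[index]

print(assign("session:alice"))    # always the same server for the same name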
Load balancer features
Hardware and software load balancers may have a variety of special features. The fundamental feature of a load balancer is to be able to distribute incoming requests over a number of backend servers in the cluster according to a scheduling algorithm. Most of the following features are vendor specific:
Asymmetric load
A ratio can be manually assigned to cause some backend servers to get a greater share of the workload than others. This is sometimes used as a crude way to account for some servers having more capacity than others and may not always work as desired.
Priority activation
When the number of available servers drops below a certain number, or load gets too high, standby servers can be brought online.
TLS Offload and Acceleration
TLS (or its predecessor SSL) acceleration is a technique of offloading cryptographic protocol calculations onto specialized hardware. Depending on the workload, processing the encryption and authentication requirements of a TLS request can become a major part of the demand on the Web Server's CPU; as the demand increases, users will see slower response times, as the TLS overhead is distributed among Web servers. To remove this demand on Web servers, a balancer can terminate TLS connections, passing HTTPS requests as HTTP requests to the Web servers. If the balancer itself is not overloaded, this does not noticeably degrade the performance perceived by end users. The downside of this approach is that all of the TLS processing is concentrated on a single device (the balancer) which can become a new bottleneck. Some load balancer appliances include specialized hardware to process TLS. Instead of upgrading the load balancer, which is quite expensive dedicated hardware, it may be cheaper to forgo TLS offload and add a few web servers. Also, some server vendors such as Oracle/Sun now incorporate cryptographic acceleration hardware into their CPUs such as the T2000. F5 Networks incorporates a dedicated TLS acceleration hardware card in their local traffic manager (LTM) which is used for encrypting and decrypting TLS traffic. One clear benefit to TLS offloading in the balancer is that it enables it to do balancing or content switching based on data in the HTTPS request.
Distributed Denial of Service (DDoS) attack protection
Load balancers can provide features such as SYN cookies and delayed-binding (the back-end servers don't see the client until it finishes its TCP handshake) to mitigate SYN flood attacks and generally offload work from the servers to a more efficient platform.
HTTP compression
HTTP compression reduces the amount of data to be transferred for HTTP objects by utilising gzip compression available in all modern web browsers. The larger the response and the further away the client is, the more this feature can improve response times. The trade-off is that this feature puts additional CPU demand on the load balancer and could be done by web servers instead.
TCP offload
Different vendors use different terms for this, but the idea is that normally each HTTP request from each client is a different TCP connection. This feature utilises HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the back-end servers.
TCP buffering
The load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the web server to free a thread for other tasks faster than it would if it had to send the entire response to the client directly.
Direct Server Return
An option for asymmetrical load distribution, where request and reply have different network paths.
Health checking
The balancer polls servers for application layer health and removes failed servers from the pool.
HTTP caching
The balancer stores static content so that some requests can be handled without contacting the servers.
Content filtering
Some balancers can arbitrarily modify traffic on the way through.
HTTP security
Some balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so that end users cannot manipulate them.
Priority queuing
Also known as rate shaping, the ability to give different priority to different traffic.
Content-aware switching
Most load balancers can send requests to different servers based on the URL being requested, assuming the request is not encrypted (HTTP) or if it is encrypted (via HTTPS) that the HTTPS request is terminated (decrypted) at the load balancer.
Client authentication
Authenticate users against a variety of authentication sources before allowing them access to a website.
Programmatic traffic manipulation
At least one balancer allows the use of a scripting language to allow custom balancing methods, arbitrary traffic manipulations, and more.
Firewall
Firewalls can prevent direct connections to backend servers, for network security reasons.
Intrusion prevention system
Intrusion prevention systems offer application layer security in addition to network/transport layer offered by firewall security.
Telecommunications
Load balancing can be useful in applications with redundant communications links. For example, a company may have multiple Internet connections ensuring network access if one of the connections fails. A failover arrangement would mean that one link is designated for normal use, while the second link is used only if the primary link fails.
Using load balancing, both links can be in use all the time. A device or program monitors the availability of all links and selects the path for sending packets. The use of multiple links simultaneously increases the available bandwidth.
Shortest Path Bridging
TRILL (TRansparent Interconnection of Lots of Links) allows an Ethernet network to have an arbitrary topology, and enables per-flow pair-wise load splitting by way of Dijkstra's algorithm, without configuration or user intervention. The catalyst for TRILL was an event at Beth Israel Deaconess Medical Center which began on 13 November 2002. The concept of Rbridges [sic] was first proposed to the Institute of Electrical and Electronics Engineers in 2004, which in 2005 rejected what came to be known as TRILL, and in the years 2006 through 2012 devised an incompatible variation known as Shortest Path Bridging.
The IEEE approved the IEEE 802.1aq standard May 2012, also known as Shortest Path Bridging (SPB). SPB allows all links to be active through multiple equal cost paths, provides faster convergence times to reduce down time, and simplifies the use of load balancing in mesh network topologies (partially connected and/or fully connected) by allowing traffic to load share across all paths of a network. SPB is designed to virtually eliminate human error during configuration and preserves the plug-and-play nature that established Ethernet as the de facto protocol at Layer 2.
Routing
Many telecommunications companies have multiple routes through their networks or to external networks. They use sophisticated load balancing to shift traffic from one path to another to avoid network congestion on any particular link, and sometimes to minimize the cost of transit across external networks or improve network reliability.
Another way of using load balancing is in network monitoring activities. Load balancers can be used to split huge data flows into several sub-flows and use several network analyzers, each reading a part of the original data. This is very useful for monitoring fast networks like 10GbE or STM64, where complex processing of the data may not be possible at wire speed.
Data center networks
Load balancing is widely used in data center networks to distribute traffic across many existing paths between any two servers. It allows more efficient use of network bandwidth and reduces provisioning costs. In general, load balancing in datacenter networks can be classified as either static or dynamic.
Static load balancing distributes traffic by computing a hash of the source and destination addresses and port numbers of traffic flows and using it to determine how flows are assigned to one of the existing paths. Dynamic load balancing assigns traffic flows to paths by monitoring bandwidth use on different paths. Dynamic assignment can also be proactive or reactive. In the former case, the assignment is fixed once made, while in the latter the network logic keeps monitoring available paths and shifts flows across them as network utilization changes (with arrival of new flows or completion of existing ones). A comprehensive overview of load balancing in datacenter networks has been made available.
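A hedged sketch of the static, hash-based flow assignment in Python; the path names are placeholders, and real switches compute a comparable hash over the packet's five-tuple in hardware.

import hashlib

paths = ["path-A", "path-B", "path-C", "path-D"]   # hypothetical equal-cost paths

def pick_path(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    # Static load balancing: hash the flow identifiers so that all packets of
    # a flow follow the same path, while different flows spread across paths.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.md5(key).digest()
    return paths[int.from_bytes(digest[:4], "big") % len(paths)]

print(pick_path("10.0.0.1", "10.0.1.7", 49152, 443))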
Failovers
Load balancing is often used to implement failover—the continuation of a service after the failure of one or more of its components. The components are monitored continually (e.g., web servers may be monitored by fetching known pages), and when one becomes unresponsive, the load balancer is informed and no longer sends traffic to it. When a component comes back online, the load balancer starts rerouting traffic to it. For this to work, there must be at least one component in excess of the service's capacity (N+1 redundancy). This can be much less expensive and more flexible than failover approaches where each single live component is paired with a single backup component that takes over in the event of a failure (dual modular redundancy). Some RAID systems can also utilize hot spare for a similar effect.
See also
Affinity mask
Application Delivery Controller
Autoscaling
Cloud computing
Cloud load balancing
Common Address Redundancy Protocol
Edge computing
Network Load Balancing
SRV record
References
External links
Server routing for load balancing with full auto failure recovery
Network management
Servers (computing)
Routing
Balancing technology |
61419 | https://en.wikipedia.org/wiki/Tokenization%20%28data%20security%29 | Tokenization (data security) | Tokenization, when applied to data security, is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token, that has no extrinsic or exploitable meaning or value. The token is a reference (i.e. identifier) that maps back to the sensitive data through a tokenization system. The mapping from original data to a token uses methods that render tokens infeasible to reverse in the absence of the tokenization system, for example using tokens created from random numbers. The tokenization system must be secured and validated using security best practices applicable to sensitive data protection, secure storage, audit, authentication and authorization. The tokenization system provides data processing applications with the authority and interfaces to request tokens, or detokenize back to sensitive data.
The security and risk reduction benefits of tokenization require that the tokenization system is logically isolated and segmented from data processing systems and applications that previously processed or stored sensitive data replaced by tokens. Only the tokenization system can tokenize data to create tokens, or detokenize back to redeem sensitive data under strict security controls. The token generation method must be proven to have the property that there is no feasible means through direct attack, cryptanalysis, side channel analysis, token mapping table exposure or brute force techniques to reverse tokens back to live data.
Replacing live data with tokens in systems is intended to minimize exposure of sensitive data to those applications, stores, people and processes, reducing risk of compromise or accidental exposure and unauthorized access to sensitive data. Applications can operate using tokens instead of live data, with the exception of a small number of trusted applications explicitly permitted to detokenize when strictly necessary for an approved business purpose. Tokenization systems may be operated in-house within a secure isolated segment of the data center, or as a service from a secure service provider.
Tokenization may be used to safeguard sensitive data involving, for example, bank accounts, financial statements, medical records, criminal records, driver's licenses, loan applications, stock trades, voter registrations, and other types of personally identifiable information (PII). Tokenization is often used in credit card processing. The PCI Council defines tokenization as "a process by which the primary account number (PAN) is replaced with a surrogate value called a token. De-tokenization is the reverse process of redeeming a token for its associated PAN value. The security of an individual token relies predominantly on the infeasibility of determining the original PAN knowing only the surrogate value". The choice of tokenization as an alternative to other techniques such as encryption will depend on varying regulatory requirements, interpretation, and acceptance by respective auditing or assessment entities. This is in addition to any technical, architectural or operational constraint that tokenization imposes in practical use.
Concepts and origins
The concept of tokenization, as adopted by the industry today, has existed since the first currency systems emerged centuries ago as a means to reduce risk in handling high value financial instruments by replacing them with surrogate equivalents. In the physical world, coin tokens have a long history of use replacing the financial instrument of minted coins and banknotes. In more recent history, subway tokens and casino chips found adoption for their respective systems to replace physical currency and cash handling risks such as theft. Exonumia, and scrip are terms synonymous with such tokens.
In the digital world, similar substitution techniques have been used since the 1970s as a means to isolate real data elements from exposure to other data systems. In databases for example, surrogate key values have been used since 1976 to isolate data associated with the internal mechanisms of databases and their external equivalents for a variety of uses in data processing. More recently, these concepts have been extended to consider this isolation tactic to provide a security mechanism for the purposes of data protection.
In the payment card industry, tokenization is one means of protecting sensitive cardholder data in order to comply with industry standards and government regulations.
In 2001, TrustCommerce created the concept of Tokenization to protect sensitive payment data for a client, Classmates.com. It engaged Rob Caulfield, founder of TrustCommerce, because the risk of storing card holder data was too great if the systems were ever hacked. TrustCommerce developed TC Citadel®, with which customers could reference a token in place of card holder data and TrustCommerce would process a payment on the merchant's behalf. This billing application allowed clients to process recurring payments without the need to store cardholder payment information. Tokenization replaces the Primary Account Number (PAN) with randomly generated tokens. If intercepted, the data contains no cardholder information, rendering it useless to hackers. The PAN cannot be retrieved, even if the token and the systems it resides on are compromised, nor can the token be reverse engineered to arrive at the PAN.
Tokenization was applied to payment card data by Shift4 Corporation and released to the public during an industry Security Summit in Las Vegas, Nevada in 2005. The technology is meant to prevent the theft of the credit card information in storage. Shift4 defines tokenization as: “The concept of using a non-decryptable piece of data to represent, by reference, sensitive or secret data. In payment card industry (PCI) context, tokens are used to reference cardholder data that is managed in a tokenization system, application or off-site secure facility.”
To protect data over its full lifecycle, tokenization is often combined with end-to-end encryption to secure data in transit to the tokenization system or service, with a token replacing the original data on return. For example, to avoid the risks of malware stealing data from low-trust systems such as point of sale (POS) systems, as in the Target breach of 2013, cardholder data encryption must take place prior to card data entering the POS and not after. Encryption takes place within the confines of a security hardened and validated card reading device and data remains encrypted until received by the processing host, an approach pioneered by Heartland Payment Systems as a means to secure payment data from advanced threats, now widely adopted by industry payment processing companies and technology companies. The PCI Council has also specified end-to-end encryption (certified point-to-point encryption—P2PE) for various service implementations in various PCI Council Point-to-point Encryption documents.
Difference from encryption
Tokenization and “classic” encryption effectively protect data if implemented properly, and a computer security system may use both. While similar in certain regards, tokenization and classic encryption differ in a few key aspects. Both are cryptographic data security methods and they essentially have the same function, however they do so with differing processes and have different effects on the data they are protecting.
Tokenization is a non-mathematical approach that replaces sensitive data with non-sensitive substitutes without altering the type or length of data. This is an important distinction from encryption because changes in data length and type can render information unreadable in intermediate systems such as databases. Tokenized data can still be processed by legacy systems which makes tokenization more flexible than classic encryption.
Another difference is that tokens require significantly less computational resources to process. With tokenization, specific data is kept fully or partially visible for processing and analytics while sensitive information is kept hidden. This allows tokenized data to be processed more quickly and reduces the strain on system resources. This can be a key advantage in systems that rely on high performance.
Types of tokens
There are many ways that tokens can be classified however there is currently no unified classification. Tokens can be: single or multi-use, cryptographic or non-cryptographic, reversible or irreversible, authenticable or non-authenticable, and various combinations thereof.
In the context of payments, the difference between high and low value tokens plays a significant role.
High-value tokens (HVTs)
HVTs serve as surrogates for actual PANs in payment transactions and are used as an instrument for completing a payment transaction. In order to function, they must look like actual PANs. Multiple HVTs can map back to a single PAN and a single physical credit card without the owner being aware of it.
Additionally, HVTs can be limited to certain networks and/or merchants whereas PANs cannot.
HVTs can also be bound to specific devices so that anomalies between token use, physical devices, and geographic locations can be flagged as potentially fraudulent.
Low-value tokens (LVTs) or security tokens
LVTs also act as surrogates for actual PANs in payment transactions, however they serve a different purpose. LVTs cannot be used by themselves to complete a payment transaction. In order for an LVT to function, it must be possible to match it back to the actual PAN it represents, albeit only in a tightly controlled fashion. Using tokens to protect PANs becomes ineffectual if a tokenization system is breached, therefore securing the tokenization system itself is extremely important.
System operations, limitations and evolution
First generation tokenization systems use a database to map from live data to surrogate substitute tokens and back. This requires the storage, management, and continuous backup for every new transaction added to the token database to avoid data loss. Another problem is ensuring consistency across data centers, requiring continuous synchronization of token databases. Significant consistency, availability and performance trade-offs, per the CAP theorem, are unavoidable with this approach. This overhead adds complexity to real-time transaction processing to avoid data loss and to assure data integrity across data centers, and also limits scale. Storing all sensitive data in one service creates an attractive target for attack and compromise, and introduces privacy and legal risk in the aggregation of data (see Internet privacy), particularly in the EU.
Another limitation of tokenization technologies is measuring the level of security for a given solution through independent validation. With the lack of standards, the latter is critical to establish the strength of tokenization offered when tokens are used for regulatory compliance. The PCI Council recommends independent vetting and validation of any claims of security and compliance: "Merchants considering the use of tokenization should perform a thorough evaluation and risk analysis to identify and document the unique characteristics of their particular implementation, including all interactions with payment card data and the particular tokenization systems and processes".
The method of generating tokens may also have limitations from a security perspective. With concerns about security and attacks to random number generators, which are a common choice for the generation of tokens and token mapping tables, scrutiny must be applied to ensure proven and validated methods are used versus arbitrary design. Random number generators have limitations in terms of speed, entropy, seeding and bias, and security properties must be carefully analysed and measured to avoid predictability and compromise.
With tokenization's increasing adoption, new tokenization technology approaches have emerged to remove such operational risks and complexities and to enable increased scale suited to emerging big data use cases and high performance transaction processing, especially in financial services and banking. Stateless tokenization enables random mapping of live data elements to surrogate values without needing a database while retaining the isolation properties of tokenization.
In November 2014, American Express released its token service, which meets the EMV tokenization standard.
Application to alternative payment systems
Building an alternate payments system requires a number of entities working together in order to deliver near field communication (NFC) or other technology based payment services to the end users. One of the issues is the interoperability between the players and to resolve this issue the role of trusted service manager (TSM) is proposed to establish a technical link between mobile network operators (MNO) and providers of services, so that these entities can work together. Tokenization can play a role in mediating such services.
Tokenization as a security strategy lies in the ability to replace a real card number with a surrogate (target removal) and the subsequent limitations placed on the surrogate card number (risk reduction). If the surrogate value can be used in an unlimited fashion or even in a broadly applicable manner, the token value gains as much value as the real credit card number. In these cases, the token may be secured by a second dynamic token that is unique for each transaction and also associated to a specific payment card. Example of dynamic, transaction-specific tokens include cryptograms used in the EMV specification.
Application to PCI DSS standards
The Payment Card Industry Data Security Standard, an industry-wide set of guidelines that must be met by any organization that stores, processes, or transmits cardholder data, mandates that credit card data must be protected when stored. Tokenization, as applied to payment card data, is often implemented to meet this mandate, replacing credit card and ACH numbers in some systems with a random value or string of characters. Tokens can be formatted in a variety of ways. Some token service providers or tokenization products generate the surrogate values in such a way as to match the format of the original sensitive data. In the case of payment card data, a token might be the same length as a Primary Account Number (bank card number) and contain elements of the original data such as the last four digits of the card number. When a payment card authorization request is made to verify the legitimacy of a transaction, a token might be returned to the merchant instead of the card number, along with the authorization code for the transaction. The token is stored in the receiving system while the actual cardholder data is mapped to the token in a secure tokenization system. Storage of tokens and payment card data must comply with current PCI standards, including the use of strong cryptography.
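As a toy illustration only, not a secure or standards-compliant implementation, the following Python sketch generates a random surrogate of the same length as the card number, keeps the last four digits, and records the mapping in an in-memory stand-in for the secure token vault.

import secrets

def tokenize(pan, vault):
    # Replace a card number with a random token of the same length that keeps
    # the last four digits, and remember the mapping in the vault.
    surrogate = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4))
    token = surrogate + pan[-4:]
    vault[token] = pan      # only the tokenization system holds this mapping;
                            # a real system would also check for collisions and
                            # often makes tokens fail card-number validity checks
    return token

vault = {}
token = tokenize("4111111111111111", vault)
print(token, vault[token])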
Standards (ANSI, the PCI Council, Visa, and EMV)
Tokenization is currently in standards definition in ANSI X9 as X9.119 Part 2. X9 is responsible for the industry standards for financial cryptography and data protection including payment card PIN management, credit and debit card encryption and related technologies and processes. The PCI Council has also stated support for tokenization in reducing risk in data breaches, when combined with other technologies such as Point-to-Point Encryption (P2PE) and assessments of compliance to PCI DSS guidelines. Visa Inc. released Visa Tokenization Best Practices for tokenization uses in credit and debit card handling applications and services. In March 2014, EMVCo LLC released its first payment tokenization specification for EMV. NIST standardized the FF1 and FF3 Format-Preserving Encryption algorithms in its Special Publication 800-38G.
Risk reduction
Tokenization can render it more difficult for attackers to gain access to sensitive data outside of the tokenization system or service. Implementation of tokenization may simplify the requirements of the PCI DSS, as systems that no longer store or process sensitive data may have a reduction of applicable controls required by the PCI DSS guidelines.
As a security best practice, independent assessment and validation of any technologies used for data protection, including tokenization, must be in place to establish the security and strength of the method and implementation before any claims of privacy compliance, regulatory compliance, and data security can be made. This validation is particularly important in tokenization, as the tokens are shared externally in general use and thus exposed in high-risk, low-trust environments. The infeasibility of reversing a token or set of tokens to live sensitive data must be established using industry-accepted measurements and proofs by appropriate experts independent of the service or solution provider.
See also
Adaptive Redaction
PAN truncation
Format preserving encryption
References
External links
Cloud vs Payment - Introduction to tokenization via cloud payments.
Cryptography |
61571 | https://en.wikipedia.org/wiki/Ross%20J.%20Anderson | Ross J. Anderson | Ross John Anderson, FRS, FREng (born 15 September 1956) is a researcher, author, and industry consultant in security engineering. He is Professor of Security Engineering at the Department of Computer Science and Technology, University of Cambridge where he is part of the University's security group.
Education
Anderson was educated at the High School of Glasgow. In 1978, he graduated with a Bachelor of Arts in mathematics and natural science from Trinity College, Cambridge, and subsequently received a qualification in computer engineering. Anderson worked in the avionics and banking industry before moving back to the University of Cambridge in 1992, to work on his doctorate under the supervision of Roger Needham and start his career as an academic researcher. He received his PhD in 1995, and became a lecturer in the same year.
Research
Anderson's research interests are in security, cryptology, dependability and technology policy. In cryptography, he designed with Eli Biham the BEAR, LION and Tiger cryptographic primitives, and co-wrote with Biham and Lars Knudsen the block cipher Serpent, one of the finalists in the Advanced Encryption Standard (AES) competition. He has also discovered weaknesses in the FISH cipher and designed the stream cipher Pike.
Anderson has always campaigned for computer security to be studied in a wider social context. Many of his writings emphasise the human, social, and political dimension of security. On online voting, for example, he writes "When you move from voting in person to voting at home (whether by post, by phone or over the internet) it vastly expands the scope for vote buying and coercion", making the point that it's not just a question of whether the encryption can be cracked.
In 1998, Anderson founded the Foundation for Information Policy Research, a think tank and lobbying group on information-technology policy.
Anderson is also a founder of the UK-Crypto mailing list and the economics of security research domain.
He is well-known among Cambridge academics as an outspoken defender of academic freedoms, intellectual property and other matters of university politics. He is engaged in the "Campaign for Cambridge Freedoms" and has been an elected member of Cambridge University Council since 2002. In January 2004, the student newspaper Varsity declared Anderson to be Cambridge University's "most powerful person".
In 2002, he became an outspoken critic of trusted computing proposals, in particular Microsoft's Palladium operating system vision.
Anderson's TCPA FAQ has been characterised by IBM TC researcher David R. Safford as "full of technical errors" and of "presenting speculation as fact."
For years Anderson has been arguing that by their nature large databases will never be free of abuse by breaches of security. He has said that if a large system is designed for ease of access it becomes insecure; if made watertight it becomes impossible to use. This is sometimes known as Anderson's Rule.
Anderson is the author of Security Engineering, published by Wiley in 2001. He was the founder and editor of Computer and Communications Security Reviews.
After the vast global surveillance disclosures leaked by Edward Snowden beginning in June 2013, Anderson suggested that one way to begin stamping out the British state's unaccountable involvement in the NSA spying scandal would be to end the domestic secret services entirely. Anderson: "Were I a legislator, I would simply abolish MI5". Anderson notes that the only way this kind of systemic data collection was made possible was through the business models of private industry. The value of information-driven web companies such as Facebook and Google is built around their ability to gather vast tracts of data. It was something the intelligence agencies would have struggled with alone.
Anderson is a critic of smart meters, writing that there are various privacy and energy security concerns.
Awards and honours
Anderson was elected a Fellow of the Royal Society (FRS) in 2009. His nomination reads:
Anderson was also elected a Fellow of the Royal Academy of Engineering (FREng) in 2009. He is a fellow of Churchill College, Cambridge.
References
British technology writers
Modern cryptographers
Fellows of the Institute of Physics
Fellows of Churchill College, Cambridge
Computer security academics
Copyright scholars
Alumni of Trinity College, Cambridge
Members of the University of Cambridge Computer Laboratory
Living people
Fellows of the Royal Society
1956 births
People from Sandy, Bedfordshire |
62317 | https://en.wikipedia.org/wiki/Speex | Speex | Speex is an audio compression codec specifically tuned for the reproduction of human speech and also a free software speech codec that may be used on VoIP applications and podcasts. It is based on the CELP speech coding algorithm. Speex claims to be free of any patent restrictions and is licensed under the revised (3-clause) BSD license. It may be used with the Ogg container format or directly transmitted over UDP/RTP. It may also be used with the FLV container format.
The Speex designers see their project as complementary to the Vorbis general-purpose audio compression project.
Speex is a lossy format, i.e. quality is permanently degraded to reduce file size.
The Speex project was created on February 13, 2002. The first development versions of Speex were released under LGPL license, but as of version 1.0 beta 1, Speex is released under Xiph's version of the (revised) BSD license. Speex 1.0 was announced on March 24, 2003, after a year of development. The last stable version of Speex encoder and decoder is 1.2.0.
Xiph.Org now considers Speex obsolete; its successor is the more modern Opus codec, which uses the SILK format under license from Microsoft and surpasses its performance in most areas except at the lowest sample rates.
Description
Speex is targeted at voice over IP (VoIP) and file-based compression. The design goals have been to make a codec that would be optimized for high quality speech and low bit rate. To achieve this the codec uses multiple bit rates, and supports ultra-wideband (32 kHz sampling rate), wideband (16 kHz sampling rate) and narrowband (telephone quality, 8 kHz sampling rate). Since Speex was designed for VoIP instead of cell phone use, the codec must be robust to lost packets, but not to corrupted ones. All this led to the choice of code excited linear prediction (CELP) as the encoding technique to use for Speex. One of the main reasons is that CELP has long proven that it could do the job and scale well to both low bit rates (as evidenced by DoD CELP @ 4.8 kbit/s) and high bit rates (as with G.728 @ 16 kbit/s).
The main characteristics can be summarized as follows:
Free software/open-source, patent and royalty-free.
Integration of narrowband and wideband in the same bit-stream.
Wide range of bit rates available (from 2 kbit/s to 44 kbit/s).
Dynamic bit rate switching and variable bit-rate (VBR).
Voice activity detection (VAD, integrated with VBR) (not working from version 1.2).
Variable complexity.
Ultra-wideband mode at 32 kHz (up to 48 kHz).
Intensity stereo encoding option.
Features
Sampling rate Speex is mainly designed for three different sampling rates: 8 kHz (the same sampling rate to transmit telephone calls), 16 kHz, and 32 kHz. These are respectively referred to as narrowband, wideband and ultra-wideband.
Quality Speex encoding is controlled most of the time by a quality parameter that ranges from 0 to 10. In constant bit-rate (CBR) operation, the quality parameter is an integer, while for variable bit-rate (VBR), the parameter is a real (floating point) number.
Complexity (variable) With Speex, it is possible to vary the complexity allowed for the encoder. This is done by controlling how the search is performed with an integer ranging from 1 to 10, in a way similar to the -1 to -9 options of the gzip compression utilities. For normal use, the noise level at complexity 1 is between 1 and 2 dB higher than at complexity 10, but the CPU requirements for complexity 10 are about five times higher than for complexity 1. In practice, the best trade-off is between complexity 2 and 4, though higher settings are often useful when encoding non-speech sounds like DTMF tones, or if encoding is not in real-time.
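As a rough sketch of how these settings are applied in practice, the fragment below configures a narrowband encoder through the standard libspeex C API and compresses a single frame. The function name and the chosen quality/complexity values are illustrative assumptions; error handling is omitted, and in a real application the encoder state would be created once and reused across frames.

#include <speex/speex.h>

/* Sketch: configure a narrowband (8 kHz) Speex encoder and compress one
 * frame of 16-bit PCM. Returns the number of compressed bytes produced. */
int encode_one_frame(spx_int16_t *pcm_frame, char *out, int out_size) {
    void *enc = speex_encoder_init(&speex_nb_mode);  /* narrowband mode */

    int quality = 8;       /* quality parameter, 0..10 (an integer in CBR) */
    int complexity = 3;    /* encoder search effort, 1..10                 */
    speex_encoder_ctl(enc, SPEEX_SET_QUALITY, &quality);
    speex_encoder_ctl(enc, SPEEX_SET_COMPLEXITY, &complexity);

    /* pcm_frame must hold exactly one frame of samples (160 at 8 kHz). */
    SpeexBits bits;
    speex_bits_init(&bits);
    speex_encode_int(enc, pcm_frame, &bits);
    int written = speex_bits_write(&bits, out, out_size);

    speex_bits_destroy(&bits);
    speex_encoder_destroy(enc);
    return written;
}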
Variable bit-rate (VBR) Variable bit-rate (VBR) allows a codec to change its bit rate dynamically to adapt to the "difficulty" of the audio being encoded. In the example of Speex, sounds like vowels and high-energy transients require a higher bit rate to achieve good quality, while fricatives (e.g. s and f sounds) can be coded adequately with fewer bits. For this reason, VBR can achieve lower bit rate for the same quality, or a better quality for a certain bit rate. Despite its advantages, VBR has three main drawbacks: first, by only specifying quality, there is no guarantee about the final average bit-rate. Second, for some real-time applications like voice over IP (VoIP), what counts is the maximum bit-rate, which must be low enough for the communication channel. Third, encryption of VBR-encoded speech may not ensure complete privacy, as phrases can still be identified, at least in a controlled setting with a small dictionary of phrases, by analysing the pattern of variation of the bit rate.
Average bit-rate (ABR) Average bit-rate solves one of the problems of VBR, as it dynamically adjusts VBR quality in order to meet a specific target bit-rate. Because the quality/bit-rate is adjusted in real-time (open-loop), the global quality will be slightly lower than that obtained by encoding in VBR with exactly the right quality setting to meet the target average bitrate.
Voice Activity Detection (VAD) When enabled, voice activity detection detects whether the audio being encoded is speech or silence/background noise. VAD is always implicitly activated when encoding in VBR, so the option is only useful in non-VBR operation. In this case, Speex detects non-speech periods and encodes them with just enough bits to reproduce the background noise. This is called "comfort noise generation" (CNG). The last version in which VAD worked correctly is 1.1.12; since version 1.2 it has been replaced with a simple any-activity detection.
Discontinuous transmission (DTX) Discontinuous transmission is an addition to VAD/VBR operation which allows ceasing transmitting completely when the background noise is stationary. In a file, 5 bits are used for each missing frame (corresponding to 250 bit/s).
Perceptual enhancement Perceptual enhancement is a part of the decoder which, when turned on, tries to reduce (the perception of) the noise produced by the coding/decoding process. In most cases, perceptual enhancement makes the sound further from the original objectively (signal-to-noise ratio), but in the end it still sounds better (subjective improvement).
Algorithmic delay Every codec introduces a delay in the transmission. For Speex, this delay is equal to the frame size, plus some amount of "look-ahead" required to process each frame. In narrowband operation (8 kHz), the delay is 30 ms, while for wideband (16 kHz), the delay is 34 ms. These values do not account for the CPU time it takes to encode or decode the frames.
Applications
There are a large base of applications supporting the Speex codec. Examples include:
Streaming applications like teleconference (e.g. TeamSpeak, Mumble)
VoIP systems (e.g. Asterisk)
Videogames (e.g. Xbox Live, Civilization 4, DropMix vocal tracks, ...)
Audio processing applications.
Most of these are based on the DirectShow filter or OpenACM codec (e.g. Microsoft NetMeeting) on Microsoft Windows, or Xiph.Org's reference implementation, libspeex, on Linux (e.g. Ekiga). There are also plugins for many audio players. See the plugin and software page on the speex.org site for more details.
The media type for Speex is audio/ogg while contained by Ogg, and audio/speex (previously audio/x-speex) when transported through RTP or without container.
The United States Army's Land Warrior system, designed by General Dynamics, also uses Speex for VoIP on an EPLRS radio designed by Raytheon.
The Ear Bible is a single-ear headphone with a built-in Speex player with 1 GB of flash memory, preloaded with a recording of the New American Standard Bible.
ASL Safety & Security's Linux based VIPA OS software which is used in long line public address systems and voice alarm systems at major international air transport hubs and rail networks.
The Rockbox project uses Speex for its voice interface. It can also play Speex files on supported players, such as the Apple iPod or the iRiver H10.
The Vernier LabQuest handheld data acquisition device for science education uses Speex for voice annotations created by students and teachers using either the built-in or an external microphone.
The Google Mobile App for iPhone currently incorporates Speex. It has also been suggested that the new Google voice search iPhone app is using Speex to transmit voice to Google servers for interpretation.
Adobe Flash Player supports Speex starting with Flash Player 10.0.12.36, released in October 2008. Because of some bugs in Flash Player, the first recommended version for Speex support is 10.0.22.87 and later. Speex in Flash Player can be used for both kinds of communication, through Flash Media Server or P2P. Speex can be decoded or converted to any format, unlike Nellymoser audio, which was the only speech format in previous versions of Flash Player. Speex can also be used in the Flash Video container format (.flv), starting with version 10 of the Video File Format Specification (published in November 2008).
The JavaSonics ListenUp voice recorder uses Speex to compress voice messages that are recorded in a browser and then uploaded to a web server. Primary applications are language training, transcription and social networking.
Speex is used as the voice compression algorithm in the Siri voice assistant on the iPhone 4S. Since speech recognition occurs on Apple's servers, the Speex codec is used to minimize network bandwidth.
See also
Comparison of audio coding formats
Opus (audio format) - successor of Speex
Sources
This article uses material from the Speex Codec Manual which is copyright © Jean-Marc Valin and licensed under the terms of the GFDL.
References
External links
– RTP Payload Format for the Speex Codec
Official Speex homepage
Plugin & software page
JSpeex is a port of Speex to the Java platform
NSpeex is a port of Speex to the .NET platform and Silverlight based on JSpeex
CSpeex is a port of Speex to the .NET platform based on JSpeex
– Ogg Media Types
http://dirac.epucfe.eu/projets/wakka.php?wiki=P12AB10 - Speex Encoder Player (César MBUMBA)
Speech codecs
Free audio codecs
Xiph.Org projects
GNU Project software
Open formats |
62437 | https://en.wikipedia.org/wiki/SCADA | SCADA | Supervisory control and data acquisition (SCADA) is a control system architecture comprising computers, networked data communications and graphical user interfaces for high-level supervision of machines and processes. It also covers sensors and other devices, such as programmable logic controllers, which interface with process plant or machinery.
Explanation
The operator interfaces which enable monitoring and the issuing of process commands, like controller set point changes, are handled through the SCADA computer system. The subordinated operations, e.g. the real-time control logic or controller calculations, are performed by networked modules connected to the field sensors and actuators.
The SCADA concept was developed to be a universal means of remote-access to a variety of local control modules, which could be from different manufacturers and allowing access through standard automation protocols. In practice, large SCADA systems have grown to become very similar to distributed control systems in function, while using multiple means of interfacing with the plant. They can control large-scale processes that can include multiple sites, and work over large distances as well as small distance. It is one of the most commonly-used types of industrial control systems, in spite of concerns about SCADA systems being vulnerable to cyberwarfare/cyberterrorism attacks.
Control operations
The key attribute of a SCADA system is its ability to perform a supervisory operation over a variety of other proprietary devices.
The accompanying diagram is a general model which shows functional manufacturing levels using computerised control.
Referring to the diagram,
Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves.
Level 1 contains the industrialised input/output (I/O) modules, and their associated distributed electronic processors.
Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens.
Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and targets.
Level 4 is the production scheduling level.
Level 1 contains the programmable logic controllers (PLCs) or remote terminal units (RTUs).
Level 2 contains the SCADA software and computing platform. Data acquisition begins at the RTU or PLC level and includes instrumentation readings and equipment status reports that are communicated to level 2 SCADA as required. Data is then compiled and formatted in such a way that a control room operator using the HMI (human-machine interface) can make supervisory decisions to adjust or override normal RTU (PLC) controls. Data may also be fed to a historian, often built on a commodity database management system, to allow trending and other analytical auditing.
SCADA systems typically use a tag database, which contains data elements called tags or points, which relate to specific instrumentation or actuators within the process system. Data is accumulated against these unique process control equipment tag references.
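A sketch of what a single record in such a tag database might hold is shown below; all field names are illustrative assumptions, as commercial products differ widely.

#include <time.h>

/* Illustrative SCADA tag (point) record. Field names are hypothetical. */
typedef enum { POINT_ANALOG, POINT_DIGITAL } point_type_t;

typedef struct {
    char         tag_name[32];  /* unique reference, e.g. "FT-101.PV"      */
    point_type_t type;          /* analogue measurement or digital status  */
    double       value;         /* last value reported by the RTU or PLC   */
    time_t       timestamp;     /* time of the last update                 */
    int          quality_ok;    /* communications/measurement quality flag */
    double       lo_limit;      /* alarm limits, used for analogue points  */
    double       hi_limit;
} scada_tag_t;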
Examples of use
Both large and small systems can be built using the SCADA concept. These systems can range from just tens to thousands of control loops, depending on the application. Example processes include industrial, infrastructure, and facility-based processes, as described below:
Industrial processes include manufacturing, process control, power generation, fabrication, and refining, and may run in continuous, batch, repetitive, or discrete modes.
Infrastructure processes may be public or private, and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electric power transmission and distribution, and wind farms.
Facility processes, including buildings, airports, ships, and space stations. They monitor and control heating, ventilation, and air conditioning systems (HVAC), access, and energy consumption.
However, SCADA systems may have security vulnerabilities, so the systems should be evaluated to identify risks and solutions implemented to mitigate those risks.
System components
A SCADA system usually consists of the following main elements:
Supervisory computers
This is the core of the SCADA system, gathering data on the process and sending control commands to the field connected devices. It refers to the computer and software responsible for communicating with the field connection controllers, which are RTUs and PLCs, and includes the HMI software running on operator workstations. In smaller SCADA systems, the supervisory computer may be composed of a single PC, in which case the HMI is a part of this computer. In larger SCADA systems, the master station may include several HMIs hosted on client computers, multiple servers for data acquisition, distributed software applications, and disaster recovery sites. To increase the integrity of the system the multiple servers will often be configured in a dual-redundant or hot-standby formation providing continuous control and monitoring in the event of a server malfunction or breakdown.
Remote terminal units
Remote terminal units (RTUs) connect to sensors and actuators in the process, and are networked to the supervisory computer system. RTUs have embedded control capabilities and often conform to the IEC 61131-3 standard for programming and support automation via ladder logic, a function block diagram or a variety of other languages. Remote locations often have little or no local infrastructure, so it is not uncommon to find RTUs running off a small solar power system, using radio, GSM or satellite for communications, and being ruggedised to survive from −20 °C to +70 °C or even −40 °C to +85 °C without external heating or cooling equipment.
Programmable logic controllers
Also known as PLCs, these are connected to sensors and actuators in the process, and are networked to the supervisory system. In factory automation, PLCs typically have a high speed connection to the SCADA system. In remote applications, such as a large water treatment plant, PLCs may connect directly to SCADA over a wireless link, or more commonly, utilise an RTU for the communications management. PLCs are specifically designed for control and were the founding platform for the IEC 61131-3 programming languages. For economical reasons, PLCs are often used for remote sites where there is a large I/O count, rather than utilising an RTU alone.
Communication infrastructure
This connects the supervisory computer system to the RTUs and PLCs, and may use industry standard or manufacturer proprietary protocols.
Both RTUs and PLCs operate autonomously on the near-real-time control of the process, using the last command given from the supervisory system. Failure of the communications network does not necessarily stop the plant process controls, and on resumption of communications, the operator can continue with monitoring and control. Some critical systems will have dual redundant data highways, often cabled via diverse routes.
Human-machine interface
The human-machine interface (HMI) is the operator window of the supervisory system. It presents plant information to the operating personnel graphically in the form of mimic diagrams, which are a schematic representation of the plant being controlled, and alarm and event logging pages. The HMI is linked to the SCADA supervisory computer to provide live data to drive the mimic diagrams, alarm displays and trending graphs. In many installations the HMI is the graphical user interface for the operator, collects all data from external devices, creates reports, performs alarming, sends notifications, etc.
Mimic diagrams consist of line graphics and schematic symbols to represent process elements, or may consist of digital photographs of the process equipment overlain with animated symbols.
Supervisory operation of the plant is by means of the HMI, with operators issuing commands using mouse pointers, keyboards and touch screens. For example, a symbol of a pump can show the operator that the pump is running, and a flow meter symbol can show how much fluid it is pumping through the pipe. The operator can switch the pump off from the mimic by a mouse click or screen touch. The HMI will show the flow rate of the fluid in the pipe decrease in real time.
The HMI package for a SCADA system typically includes a drawing program that the operators or system maintenance personnel use to change the way these points are represented in the interface. These representations can be as simple as an on-screen traffic light, which represents the state of an actual traffic light in the field, or as complex as a multi-projector display representing the position of all of the elevators in a skyscraper or all of the trains on a railway.
A "historian", is a software service within the HMI which accumulates time-stamped data, events, and alarms in a database which can be queried or used to populate graphic trends in the HMI. The historian is a client that requests data from a data acquisition server.
Alarm handling
An important part of most SCADA implementations is alarm handling. The system monitors whether certain alarm conditions are satisfied, to determine when an alarm event has occurred. Once an alarm event has been detected, one or more actions are taken (such as the activation of one or more alarm indicators, and perhaps the generation of email or text messages so that management or remote SCADA operators are informed). In many cases, a SCADA operator may have to acknowledge the alarm event; this may deactivate some alarm indicators, whereas other indicators remain active until the alarm conditions are cleared.
Alarm conditions can be explicit—for example, an alarm point is a digital status point that has either the value NORMAL or ALARM that is calculated by a formula based on the values in other analogue and digital points—or implicit: the SCADA system might automatically monitor whether the value in an analogue point lies outside high and low- limit values associated with that point.
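A minimal sketch of the implicit case, comparing an analogue value against its configured high and low limits, might look as follows (the names and enum values are illustrative):

typedef enum { ALARM_NORMAL, ALARM_LOW, ALARM_HIGH } alarm_state_t;

/* Implicit alarm check: flag an analogue point whose value lies outside
 * the high and low limits associated with that point. */
alarm_state_t check_analog_alarm(double value, double lo_limit, double hi_limit) {
    if (value < lo_limit) return ALARM_LOW;
    if (value > hi_limit) return ALARM_HIGH;
    return ALARM_NORMAL;
}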
Examples of alarm indicators include a siren, a pop-up box on a screen, or a coloured or flashing area on a screen (that might act in a similar way to the "fuel tank empty" light in a car); in each case, the role of the alarm indicator is to draw the operator's attention to the part of the system 'in alarm' so that appropriate action can be taken.
PLC/RTU programming
"Smart" RTUs, or standard PLCs, are capable of autonomously executing simple logic processes without involving the supervisory computer. They employ standardized control programming languages such as IEC 61131-3 (a suite of five programming languages including function block, ladder, structured text, sequential function charts and instruction list), which is frequently used to create programs that run on these RTUs and PLCs. Unlike a procedural language such as C or FORTRAN, IEC 61131-3 has minimal training requirements by virtue of resembling historic physical control arrays. This allows SCADA system engineers to perform both the design and implementation of a program to be executed on an RTU or PLC.
A programmable automation controller (PAC) is a compact controller that combines the features and capabilities of a PC-based control system with that of a typical PLC. PACs are deployed in SCADA systems to provide RTU and PLC functions. In many electrical substation SCADA applications, "distributed RTUs" use information processors or station computers to communicate with digital protective relays, PACs, and other devices for I/O, and communicate with the SCADA master in lieu of a traditional RTU.
PLC commercial integration
Since about 1998, virtually all major PLC manufacturers have offered integrated HMI/SCADA systems, many of them using open and non-proprietary communications protocols. Numerous specialized third-party HMI/SCADA packages, offering built-in compatibility with most major PLCs, have also entered the market, allowing mechanical engineers, electrical engineers and technicians to configure HMIs themselves, without the need for a custom-made program written by a software programmer.
The remote terminal unit (RTU) connects to physical equipment. Typically, an RTU converts the electrical signals from the equipment to digital values. By converting digital values back into electrical signals and sending them out to the equipment, the RTU can also control that equipment.
Communication infrastructure and methods
SCADA systems have traditionally used combinations of radio and direct wired connections, although SONET/SDH is also frequently used for large systems such as railways and power stations. The remote management or monitoring function of a SCADA system is often referred to as telemetry. Some users want SCADA data to travel over their pre-established corporate networks or to share the network with other applications. The legacy of the early low-bandwidth protocols remains, though.
SCADA protocols are designed to be very compact. Many are designed to send information only when the master station polls the RTU. Typical legacy SCADA protocols include Modbus RTU, RP-570, Profibus and Conitel. These communication protocols, with the exception of Modbus (Modbus has been made open by Schneider Electric), are all SCADA-vendor specific but are widely adopted and used. Standard protocols are IEC 60870-5-101 or 104, IEC 61850 and DNP3. These communication protocols are standardized and recognized by all major SCADA vendors. Many of these protocols now contain extensions to operate over TCP/IP. Although the use of conventional networking specifications, such as TCP/IP, blurs the line between traditional and industrial networking, they each fulfill fundamentally differing requirements. Network simulation can be used in conjunction with SCADA simulators to perform various 'what-if' analyses.
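To illustrate how compact such a poll can be, the sketch below assembles the eight-byte Modbus RTU "read holding registers" request (function code 0x03) that a master might send when polling an RTU, including the standard Modbus CRC-16. Serial transmission and response parsing are omitted, and the helper names are illustrative.

#include <stdint.h>
#include <stddef.h>

/* Modbus RTU CRC-16 (polynomial 0xA001, initial value 0xFFFF). */
static uint16_t modbus_crc16(const uint8_t *data, size_t len) {
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 1) ? (uint16_t)((crc >> 1) ^ 0xA001) : (uint16_t)(crc >> 1);
    }
    return crc;
}

/* Build an 8-byte "read holding registers" request: the master polls one
 * slave for `count` registers starting at `start_addr`. */
size_t build_read_request(uint8_t slave, uint16_t start_addr, uint16_t count,
                          uint8_t frame[8]) {
    frame[0] = slave;
    frame[1] = 0x03;                        /* function: read holding registers */
    frame[2] = (uint8_t)(start_addr >> 8);
    frame[3] = (uint8_t)(start_addr & 0xFF);
    frame[4] = (uint8_t)(count >> 8);
    frame[5] = (uint8_t)(count & 0xFF);
    uint16_t crc = modbus_crc16(frame, 6);
    frame[6] = (uint8_t)(crc & 0xFF);       /* CRC is sent low byte first */
    frame[7] = (uint8_t)(crc >> 8);
    return 8;
}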
With increasing security demands (such as North American Electric Reliability Corporation (NERC) and critical infrastructure protection (CIP) in the US), there is increasing use of satellite-based communication. This has the key advantages that the infrastructure can be self-contained (not using circuits from the public telephone system), can have built-in encryption, and can be engineered to the availability and reliability required by the SCADA system operator. Earlier experiences using consumer-grade VSAT were poor. Modern carrier-class systems provide the quality of service required for SCADA.
RTUs and other automatic controller devices were developed before the advent of industry-wide standards for interoperability. The result is that developers and their management created a multitude of control protocols. Among the larger vendors, there was also the incentive to create their own protocol to "lock in" their customer base. A list of automation protocols has been compiled separately.
An example of efforts by vendor groups to standardize automation protocols is the OPC-UA (formerly "OLE for process control" now Open Platform Communications Unified Architecture).
Architecture development
SCADA systems have evolved through four generations as follows:
First generation: "Monolithic"
Early SCADA system computing was done by large minicomputers. Common network services did not exist at the time SCADA was developed. Thus SCADA systems were independent systems with no connectivity to other systems. The communication protocols used were strictly proprietary at that time. The first-generation SCADA system redundancy was achieved using a back-up mainframe system connected to all the Remote Terminal Unit sites and was used in the event of failure of the primary mainframe system. Some first generation SCADA systems were developed as "turn key" operations that ran on minicomputers such as the PDP-11 series.
Second generation: "Distributed"
SCADA information and command processing were distributed across multiple stations which were connected through a LAN. Information was shared in near real time. Each station was responsible for a particular task, which reduced the cost as compared to First Generation SCADA. The network protocols used were still not standardized. Since these protocols were proprietary, very few people beyond the developers knew enough to determine how secure a SCADA installation was. Security of the SCADA installation was usually overlooked.
Third generation: "Networked"
Similar to a distributed architecture, any complex SCADA can be reduced to the simplest components and connected through communication protocols. In the case of a networked design, the system may be spread across more than one LAN network called a process control network (PCN) and separated geographically. Several distributed architecture SCADAs running in parallel, with a single supervisor and historian, could be considered a network architecture. This allows for a more cost-effective solution in very large scale systems.
Fourth generation: "Web-based"
The growth of the internet has led SCADA systems to implement web technologies, allowing users to view data, exchange information and control processes from anywhere in the world through a web socket connection. The early 2000s saw the proliferation of web SCADA systems. Web SCADA systems use internet browsers such as Google Chrome and Mozilla Firefox as the graphical user interface (GUI) for the operator's HMI. This simplifies the client-side installation and enables users to access the system from various platforms with web browsers, such as servers, personal computers, laptops, tablets and mobile phones.
Security issues
SCADA systems that tie together decentralized facilities such as power, oil, gas pipelines, water distribution and wastewater collection systems were designed to be open, robust, and easily operated and repaired, but not necessarily secure. The move from proprietary technologies to more standardized and open solutions together with the increased number of connections between SCADA systems, office networks and the Internet has made them more vulnerable to types of network attacks that are relatively common in computer security. For example, United States Computer Emergency Readiness Team (US-CERT) released a vulnerability advisory warning that unauthenticated users could download sensitive configuration information including password hashes from an Inductive Automation Ignition system utilizing a standard attack type leveraging access to the Tomcat Embedded Web server. Security researcher Jerry Brown submitted a similar advisory regarding a buffer overflow vulnerability in a Wonderware InBatchClient ActiveX control. Both vendors made updates available prior to public vulnerability release. Mitigation recommendations were standard patching practices and requiring VPN access for secure connectivity. Consequently, the security of some SCADA-based systems has come into question as they are seen as potentially vulnerable to cyber attacks.
In particular, security researchers are concerned about
the lack of concern about security and authentication in the design, deployment and operation of some existing SCADA networks
the belief that SCADA systems have the benefit of security through obscurity through the use of specialized protocols and proprietary interfaces
the belief that SCADA networks are secure because they are physically secured
the belief that SCADA networks are secure because they are disconnected from the Internet
SCADA systems are used to control and monitor physical processes, examples of which are transmission of electricity, transportation of gas and oil in pipelines, water distribution, traffic lights, and other systems used as the basis of modern society. The security of these SCADA systems is important because compromise or destruction of these systems would impact multiple areas of society far removed from the original compromise. For example, a blackout caused by a compromised electrical SCADA system would cause financial losses to all the customers that received electricity from that source. How security will affect legacy SCADA and new deployments remains to be seen.
There are many threat vectors to a modern SCADA system. One is the threat of unauthorized access to the control software, whether it is human access or changes induced intentionally or accidentally by virus infections and other software threats residing on the control host machine. Another is the threat of packet access to the network segments hosting SCADA devices. In many cases, the control protocol lacks any form of cryptographic security, allowing an attacker to control a SCADA device by sending commands over a network. In many cases SCADA users have assumed that having a VPN offered sufficient protection, unaware that security can be trivially bypassed with physical access to SCADA-related network jacks and switches. Industrial control vendors suggest approaching SCADA security like Information Security with a defense in depth strategy that leverages common IT practices.
The reliable function of SCADA systems in our modern infrastructure may be crucial to public health and safety. As such, attacks on these systems may directly or indirectly threaten public health and safety. Such an attack has already occurred, carried out on Maroochy Shire Council's sewage control system in Queensland, Australia. Shortly after a contractor installed a SCADA system in January 2000, system components began to function erratically. Pumps did not run when needed and alarms were not reported. More critically, sewage flooded a nearby park and contaminated an open surface-water drainage ditch and flowed 500 meters to a tidal canal. The SCADA system was directing sewage valves to open when the design protocol should have kept them closed. Initially this was believed to be a system bug. Monitoring of the system logs revealed the malfunctions were the result of cyber attacks. Investigators reported 46 separate instances of malicious outside interference before the culprit was identified. The attacks were made by a disgruntled ex-employee of the company that had installed the SCADA system. The ex-employee was hoping to be hired by the utility full-time to maintain the system.
In April 2008, the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack issued a Critical Infrastructures Report which discussed the extreme vulnerability of SCADA systems to an electromagnetic pulse (EMP) event. After testing and analysis, the Commission concluded: "SCADA systems are vulnerable to an EMP event. The large numbers and widespread reliance on such systems by all of the Nation’s critical infrastructures represent a systemic threat to their continued operation following an EMP event. Additionally, the necessity to reboot, repair, or replace large numbers of geographically widely dispersed systems will considerably impede the Nation’s recovery from such an assault."
Many vendors of SCADA and control products have begun to address the risks posed by unauthorized access by developing lines of specialized industrial firewall and VPN solutions for TCP/IP-based SCADA networks as well as external SCADA monitoring and recording equipment.
The International Society of Automation (ISA) started formalizing SCADA security requirements in 2007 with a working group, WG4. WG4 "deals specifically with unique technical requirements, measurements, and other features required to evaluate and assure security resilience and performance of industrial automation and control systems devices".
The increased interest in SCADA vulnerabilities has resulted in vulnerability researchers discovering vulnerabilities in commercial SCADA software and more general offensive SCADA techniques presented to the general security community. In electric and gas utility SCADA systems, the vulnerability of the large installed base of wired and wireless serial communications links is addressed in some cases by applying bump-in-the-wire devices that employ authentication and Advanced Encryption Standard encryption rather than replacing all existing nodes.
In June 2010, anti-virus security company VirusBlokAda reported the first detection of malware that attacks SCADA systems (Siemens' WinCC/PCS 7 systems) running on Windows operating systems. The malware is called Stuxnet and uses four zero-day attacks to install a rootkit which in turn logs into the SCADA's database and steals design and control files. The malware is also capable of changing the control system and hiding those changes. The malware was found on 14 systems, the majority of which were located in Iran.
In October 2013 National Geographic released a docudrama titled American Blackout which dealt with an imagined large-scale cyber attack on SCADA and the United States' electrical grid.
See also
DNP3
IEC 60870
EPICS
References
External links
UK SCADA security guidelines
BBC NEWS | Technology | Spies 'infiltrate US power grid'
Articles containing video clips
Automation
Control engineering
Telemetry
Electric power |
62661 | https://en.wikipedia.org/wiki/Defensive%20programming | Defensive programming | Defensive programming is a form of defensive design intended to ensure the continuing function of a piece of software under unforeseen circumstances. Defensive programming practices are often used where high availability, safety, or security is needed.
Defensive programming is an approach to improve software and source code, in terms of:
General quality – reducing the number of software bugs and problems.
Making the source code comprehensible – the source code should be readable and understandable so it is approved in a code audit.
Making the software behave in a predictable manner despite unexpected inputs or user actions.
Overly defensive programming, however, may safeguard against errors that will never be encountered, thus incurring run-time and maintenance costs. There is also a risk that the code traps or suppresses too many exceptions, potentially resulting in unnoticed, incorrect results.
Secure programming
Secure programming is the subset of defensive programming concerned with computer security. Security is the concern, not necessarily safety or availability (the software may be allowed to fail in certain ways). As with all kinds of defensive programming, avoiding bugs is a primary objective; however, the motivation is not as much to reduce the likelihood of failure in normal operation (as if safety were the concern), but to reduce the attack surface – the programmer must assume that the software might be misused actively to reveal bugs, and that bugs could be exploited maliciously. Consider, for example, the following C function, which copies untrusted input into a fixed-size buffer:
#include <string.h>

int risky_programming(char *input) {
char str[1000];
// ...
strcpy(str, input); // Copy input.
// ...
}
The function will result in undefined behavior when the input is over 1000 characters. Some novice programmers may not feel that this is a problem, supposing that no user will enter such a long input. This particular bug demonstrates a vulnerability which enables buffer overflow exploits. Here is a solution to this example:
#include <string.h>

int secure_programming(char *input) {
char str[1000+1]; // One more for the null character.
// ...
// Copy input without exceeding the length of the destination.
strncpy(str, input, sizeof(str));
// If strlen(input) >= sizeof(str) then strncpy won't null terminate.
// We counter this by always setting the last character in the buffer to NUL,
// effectively cropping the string to the maximum length we can handle.
// One can also decide to explicitly abort the program if strlen(input) is
// too long.
str[sizeof(str) - 1] = '\0';
// ...
}
Offensive programming
Offensive programming is a category of defensive programming, with the added emphasis that certain errors should not be handled defensively. In this practice, only errors from outside the program's control are to be handled (such as user input); the software itself, as well as data from within the program's line of defense, are to be trusted in this methodology.
Trusting internal data validity
Overly defensive programming
const char* trafficlight_colorname(enum traffic_light_color c) {
switch (c) {
case TRAFFICLIGHT_RED: return "red";
case TRAFFICLIGHT_YELLOW: return "yellow";
case TRAFFICLIGHT_GREEN: return "green";
}
return "black"; // To be handled as a dead traffic light.
// Warning: This last 'return' statement will be dropped by an optimizing
// compiler if all possible values of 'traffic_light_color' are listed in
// the previous 'switch' statement...
}
Offensive programming
const char* trafficlight_colorname(enum traffic_light_color c) {
switch (c) {
case TRAFFICLIGHT_RED: return "red";
case TRAFFICLIGHT_YELLOW: return "yellow";
case TRAFFICLIGHT_GREEN: return "green";
}
assert(0); // Assert that this section is unreachable.
// Warning: This 'assert' function call will be dropped by an optimizing
// compiler if all possible values of 'traffic_light_color' are listed in
// the previous 'switch' statement...
}
Trusting software components
Overly defensive programming
if (is_legacy_compatible(user_config)) {
// Strategy: Don't trust that the new code behaves the same
old_code(user_config);
} else {
// Fallback: Don't trust that the new code handles the same cases
if (new_code(user_config) != OK) {
old_code(user_config);
}
}
Offensive programming
// Expect that the new code has no new bugs
if (new_code(user_config) != OK) {
// Loudly report and abruptly terminate program to get proper attention
report_error("Something went very wrong");
exit(-1);
}
Techniques
Here are some defensive programming techniques:
Intelligent source code reuse
If existing code is tested and known to work, reusing it may reduce the chance of bugs being introduced.
However, reusing code is not always good practice. Reuse of existing code, especially when widely distributed, can allow for exploits to be created that target a wider audience than would otherwise be possible, and brings with it all the security issues and vulnerabilities of the reused code.
When considering using existing source code, a quick review of the modules (sub-sections such as classes or functions) will help eliminate potential vulnerabilities, or at least make the developer aware of them, and ensure that the code is suitable to use in the project.
Legacy problems
Before reusing old source code, libraries, APIs, configurations and so forth, it must be considered if the old work is valid for reuse, or if it is likely to be prone to legacy problems.
Legacy problems are problems inherent when old designs are expected to work with today's requirements, especially when the old designs were not developed or tested with those requirements in mind.
Many software products have experienced problems with old legacy source code; for example:
Legacy code may not have been designed under a defensive programming initiative, and might therefore be of much lower quality than newly designed source code.
Legacy code may have been written and tested under conditions which no longer apply. The old quality assurance tests may have no validity any more.
Example 1: legacy code may have been designed for ASCII input but now the input is UTF-8.
Example 2: legacy code may have been compiled and tested on 32-bit architectures, but when compiled on 64-bit architectures, new arithmetic problems may occur (e.g., invalid signedness tests, invalid type casts, etc.).
Example 3: legacy code may have been targeted for offline machines, but becomes vulnerable once network connectivity is added.
Legacy code is not written with new problems in mind. For example, source code written in 1990 is likely to be prone to many code injection vulnerabilities, because most such problems were not widely understood at that time.
Notable examples of the legacy problem:
BIND 9, presented by Paul Vixie and David Conrad as "BINDv9 is a complete rewrite", "Security was a key consideration in design", naming security, robustness, scalability and new protocols as key concerns for rewriting old legacy code.
Microsoft Windows suffered from "the" Windows Metafile vulnerability and other exploits related to the WMF format. Microsoft Security Response Center describes the WMF-features as "Around 1990, WMF support was added... This was a different time in the security landscape... were all completely trusted", not being developed under the security initiatives at Microsoft.
Oracle is combating legacy problems, such as old source code written without addressing concerns of SQL injection and privilege escalation, resulting in many security vulnerabilities which have taken time to fix and also generated incomplete fixes. This has given rise to heavy criticism from security experts such as David Litchfield, Alexander Kornbrust, Cesar Cerrudo. An additional criticism is that default installations (largely a legacy from old versions) are not aligned with their own security recommendations, such as Oracle Database Security Checklist, which is hard to amend as many applications require the less secure legacy settings to function correctly.
Canonicalization
Malicious users are likely to invent new kinds of representations of incorrect data. For example, if a program attempts to reject accessing the file "/etc/passwd", a cracker might pass another variant of this file name, like "/etc/./passwd". Canonicalization libraries can be employed to avoid bugs due to non-canonical input.
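A minimal sketch of this idea in C, assuming a POSIX environment, uses realpath() to resolve the requested path to its canonical form before the access decision; the deny rule and function name are illustrative, and the check fails closed if the path cannot be resolved.

#include <stdlib.h>
#include <string.h>
#include <limits.h>

/* Canonicalize the requested path so that "/etc/./passwd" and "/etc/passwd"
 * are treated as the same file before the access decision is made. */
int is_access_allowed(const char *requested) {
    char resolved[PATH_MAX];
    if (realpath(requested, resolved) == NULL)
        return 0;                            /* fail closed if unresolvable */
    return strcmp(resolved, "/etc/passwd") != 0;
}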
Low tolerance against "potential" bugs
Assume that code constructs that appear to be problem prone (similar to known vulnerabilities, etc.) are bugs and potential security flaws. The basic rule of thumb is: "I'm not aware of all types of security exploits. I must protect against those I do know of and then I must be proactive!".
Other Tips to Secure Your Code
One of the most common problems is unchecked use of constant-size or pre-allocated structures for dynamic-size data such as inputs to the program (the buffer overflow problem). This is especially common for string data in C. C library functions like gets should never be used, since the maximum size of the input buffer is not passed as an argument. C library functions like scanf can be used safely, but require the programmer to take care with the selection of safe format strings, by sanitizing them before use.
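A small sketch of bounded input, using fgets with an explicit buffer size in place of gets (the helper name is illustrative):

#include <stdio.h>
#include <string.h>

/* Read one line into buf, never writing more than size bytes. */
int read_line(char *buf, size_t size) {
    if (fgets(buf, (int)size, stdin) == NULL)
        return -1;                        /* EOF or read error */
    buf[strcspn(buf, "\n")] = '\0';       /* strip the trailing newline, if any */
    return 0;
}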
Encrypt/authenticate all important data transmitted over networks. Do not attempt to implement your own encryption scheme; use a proven one instead. Message checking with CRC or similar technology will also help secure data sent over a network.
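As a sketch of the message-checking side of this advice, a bitwise CRC-32 (the IEEE polynomial in its reflected form) is shown below; note that a CRC detects accidental corruption only and is not a substitute for cryptographic authentication such as a MAC.

#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 (IEEE 802.3 polynomial 0xEDB88320, reflected form). */
uint32_t crc32_ieee(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}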
The 3 Rules of Data Security
* All data is important until proven otherwise.
* All data is tainted until proven otherwise.
* All code is insecure until proven otherwise.
You cannot prove the security of any code in userland; this is more commonly known as "never trust the client".
These three rules about data security describe how to handle any data, internally or externally sourced:
All data is important until proven otherwise - means that all data must be verified as garbage before being destroyed.
All data is tainted until proven otherwise - means that all data must be handled in a way that does not expose the rest of the runtime environment without verifying integrity.
All code is insecure until proven otherwise - while a slight misnomer, this is a good reminder never to assume code is secure, as bugs or undefined behavior may expose the project or system to attacks such as common SQL injection attacks.
More Information
If data is to be checked for correctness, verify that it is correct, not that it is incorrect.
Design by contract
Assertions (also called assertive programming)
Prefer exceptions to return codes
Generally speaking, it is preferable to throw exception messages that enforce part of your API contract and guide the developer, instead of returning error code values that do not point to where the exception occurred or what the program stack looked like. Better logging and exception handling will increase the robustness and security of your software, while minimizing developer stress.
See also
Computer security
Immunity-aware programming
References
External links
CERT Secure Coding Standards
Programming paradigms
Programming principles |
63852 | https://en.wikipedia.org/wiki/Chosen-plaintext%20attack | Chosen-plaintext attack | A chosen-plaintext attack (CPA) is an attack model for cryptanalysis which presumes that the attacker can obtain the ciphertexts for arbitrary plaintexts. The goal of the attack is to gain information that reduces the security of the encryption scheme.
Modern ciphers aim to provide semantic security, also known as ciphertext indistinguishability under chosen-plaintext attack, and they are therefore, by design, generally immune to chosen-plaintext attacks if correctly implemented.
Introduction
In a chosen-plaintext attack the adversary can (possibly adaptively) ask for the ciphertexts of arbitrary plaintext messages. This is formalized by allowing the adversary to interact with an encryption oracle, viewed as a black box. The attacker’s goal is to reveal all or a part of the secret encryption key.
It may seem infeasible in practice that an attacker could obtain ciphertexts for given plaintexts. However, modern cryptography is implemented in software or hardware and is used for a diverse range of applications; for many cases, a chosen-plaintext attack is often very feasible (see also In practice). Chosen-plaintext attacks become extremely important in the context of public key cryptography where the encryption key is public and so attackers can encrypt any plaintext they choose.
Different forms
There are two forms of chosen-plaintext attacks:
Batch chosen-plaintext attack, where the adversary chooses all of the plaintexts before seeing any of the corresponding ciphertexts. This is often the meaning intended by "chosen-plaintext attack" when this is not qualified.
Adaptive chosen-plaintext attack (CPA2), where the adversary can request the ciphertexts of additional plaintexts after seeing the ciphertexts for some plaintexts.
General method of an attack
A general batch chosen-plaintext attack is carried out as follows :
The attacker may choose n plaintexts. (This parameter n is specified as part of the attack model; it may or may not be bounded.)
The attacker then sends these n plaintexts to the encryption oracle.
The encryption oracle will then encrypt the attacker's plaintexts and send them back to the attacker.
The attacker receives n ciphertexts back from the oracle, in such a way that the attacker knows which ciphertext corresponds to each plaintext.
Based on the plaintext–ciphertext pairs, the attacker can attempt to extract the key used by the oracle to encode the plaintexts. Since the attacker in this type of attack is free to craft the plaintext to match his needs, the attack complexity may be reduced.
Consider the following extension of the above situation. After the last step,
The adversary outputs two plaintexts m0 and m1.
A bit b is chosen uniformly at random.
The adversary receives the encryption of mb, attempts to "guess" which plaintext it received, and outputs a bit b'.
A cipher has indistinguishable encryptions under a chosen-plaintext attack if, after running the above experiment with n = 1, the adversary cannot guess correctly (b' = b) with probability non-negligibly better than 1/2.
Examples
The following examples demonstrate how some ciphers that meet other security definitions may be broken with a chosen-plaintext attack.
Caesar cipher
The following attack on the Caesar cipher allows full recovery of the secret key:
Suppose the adversary sends the message: Attack at dawn,
and the oracle returns Nggnpx ng qnja.
The adversary can then work through to recover the key in the same way one would decrypt a Caesar cipher. The adversary could deduce the substitutions a → n, t → g and so on. This would lead the adversary to determine that 13 was the key used in the Caesar cipher.
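A minimal sketch of this recovery step in C, deriving the shift from a single chosen plaintext letter and the corresponding ciphertext letter (lowercase letters assumed; the function name is illustrative):

#include <stdio.h>

/* Recover a Caesar shift from one plaintext/ciphertext letter pair. */
int caesar_key(char plain, char cipher) {
    return ((cipher - plain) + 26) % 26;
}

int main(void) {
    /* 'a' was observed to encrypt to 'n', so the key is 13. */
    printf("key = %d\n", caesar_key('a', 'n'));
    return 0;
}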
With more intricate or complex encryption methodologies, the key-recovery step becomes more resource-intensive; however, the core concept remains largely the same.
One-time pads
The following attack on a one-time pad allows full recovery of the secret key. Suppose the message length and key length are equal to n.
The adversary sends a string consisting of zeroes to the oracle.
The oracle returns the bitwise exclusive-or of the key with the string of zeroes.
The string returned by the oracle is the secret key.
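A sketch of why this works: with the exclusive-or combiner, encrypting an all-zero plaintext returns the keystream itself (the names below are illustrative).

#include <stdint.h>
#include <stddef.h>

/* One-time pad encryption: ciphertext = plaintext XOR key. If plain[] is
 * all zeroes, the ciphertext handed back by the oracle equals the key. */
void otp_encrypt(const uint8_t *plain, const uint8_t *key,
                 uint8_t *cipher, size_t n) {
    for (size_t i = 0; i < n; i++)
        cipher[i] = plain[i] ^ key[i];
}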
While the one-time pad is used as an example of an information-theoretically secure cryptosystem, this security only holds under security definitions weaker than CPA security. This is because under the formal definition of CPA security the encryption oracle has no state. This vulnerability may not be applicable to all practical implementations – the one-time pad can still be made secure if key reuse is avoided (hence the name "one-time" pad).
In practice
In World War II, US Navy cryptanalysts discovered that Japan was planning to attack a location referred to as "AF". They believed that "AF" might be Midway Island, because other locations in the Hawaiian Islands had codewords that began with "A". To prove their hypothesis that "AF" corresponded to Midway Island, they asked the US forces at Midway to send a plaintext message about low supplies. The Japanese intercepted the message and immediately reported to their superiors that "AF" was low on water, confirming the Navy's hypothesis and allowing it to position its forces to win the battle.
Also during World War II, Allied codebreakers at Bletchley Park would sometimes ask the Royal Air Force to lay mines at a position that didn't have any abbreviations or alternatives in the German naval system's grid reference. The hope was that the Germans, seeing the mines, would use an Enigma machine to encrypt a warning message about the mines and an "all clear" message after they were removed, giving the Allies enough information about the message to break the German naval Enigma. This process of planting a known plaintext was called gardening. Allied codebreakers also helped craft messages sent by double agent Juan Pujol García, whose encrypted radio reports were received in Madrid, manually decrypted, and then re-encrypted with an Enigma machine for transmission to Berlin. This helped the codebreakers decrypt the code used on the second leg, having supplied the original text.
Today, chosen-plaintext attacks (CPAs) are often used to break symmetric ciphers. To be considered CPA-secure, a symmetric cipher must not be vulnerable to chosen-plaintext attacks. Thus, it is important for symmetric cipher implementers to understand how an attacker would attempt to break their cipher and to make relevant improvements.
For some chosen-plaintext attacks, only a small part of the plaintext may need to be chosen by the attacker; such attacks are known as plaintext injection attacks.
Relation to other attacks
A chosen-plaintext attack is more powerful than a known-plaintext attack, because the attacker can directly target specific terms or patterns without having to wait for these to appear naturally, allowing faster gathering of data relevant to cryptanalysis. Therefore, any cipher that prevents chosen-plaintext attacks is also secure against known-plaintext and ciphertext-only attacks.
However, a chosen-plaintext attack is less powerful than a chosen-ciphertext attack, where the attacker can obtain the plaintexts of arbitrary ciphertexts. A CCA attacker can sometimes break a CPA-secure system. For example, the ElGamal cipher is secure against chosen-plaintext attacks, but vulnerable to chosen-ciphertext attacks because it is unconditionally malleable.
References |
63860 | https://en.wikipedia.org/wiki/Picture%20archiving%20and%20communication%20system | Picture archiving and communication system | A picture archiving and communication system (PACS) is a medical imaging technology which provides economical storage and convenient access to images from multiple modalities (source machine types). Electronic images and reports are transmitted digitally via PACS; this eliminates the need to manually file, retrieve, or transport film jackets, the folders used to store and protect X-ray film. The universal format for PACS image storage and transfer is DICOM (Digital Imaging and Communications in Medicine). Non-image data, such as scanned documents, may be incorporated using consumer industry standard formats like PDF (Portable Document Format), once encapsulated in DICOM. A PACS consists of four major components: The imaging modalities such as X-ray plain film (PF), computed tomography (CT) and magnetic resonance imaging (MRI), a secured network for the transmission of patient information, workstations for interpreting and reviewing images, and archives for the storage and retrieval of images and reports. Combined with available and emerging web technology, PACS has the ability to deliver timely and efficient access to images, interpretations, and related data. PACS reduces the physical and time barriers associated with traditional film-based image retrieval, distribution, and display.
Types of images
Most PACS handle images from various medical imaging instruments, including ultrasound (US), magnetic resonance (MR), nuclear medicine imaging, positron emission tomography (PET), computed tomography (CT), endoscopy (ES), mammography (MG), digital radiography (DR), phosphor plate radiography, histopathology, ophthalmology, etc. Additional types of image formats are always being added. Clinical areas beyond radiology, such as cardiology, oncology, gastroenterology, and even the laboratory, are creating medical images that can be incorporated into PACS (see DICOM application areas).
Uses
PACS has four main uses:
Hard copy replacement: PACS replaces hard-copy based means of managing medical images, such as film archives. With the decreasing price of digital storage, PACS provide a growing cost and space advantage over film archives in addition to the instant access to prior images at the same institution. Digital copies are referred to as Soft-copy.
Remote access: It expands on the possibilities of conventional systems by providing capabilities of off-site viewing and reporting (distance education, telediagnosis). It enables practitioners in different physical locations to access the same information simultaneously for teleradiology.
Electronic image integration platform: PACS provides the electronic platform for radiology images interfacing with other medical automation systems such as Hospital Information System (HIS), Electronic Medical Record (EMR), Practice Management Software, and Radiology Information System (RIS).
Radiology Workflow Management: PACS is used by radiology personnel to manage the workflow of patient exams.
PACS is offered by virtually all the major medical imaging equipment manufacturers, medical IT companies and many independent software companies. Basic PACS software can be found free on the Internet.
Architecture
The architecture is the physical implementation of required functionality, or what one sees from the outside. There are different views, depending on the user. A radiologist typically sees a viewing station, a technologist a QA workstation, while a PACS administrator might spend most of their time in the climate-controlled computer room. The composite view is rather different for the various vendors.
Typically a PACS consists of a multitude of devices. The first step in a typical PACS is the modality. Modalities are typically computed tomography (CT), ultrasound, nuclear medicine, positron emission tomography (PET), and magnetic resonance imaging (MRI). Depending on the facility's workflow, most modalities send images to a quality assurance (QA) workstation, sometimes called a PACS gateway. The QA workstation is a checkpoint to make sure patient demographics and other important attributes of a study are correct. If the study information is correct, the images are passed to the archive for storage. The central storage device (archive) stores images and, in some cases, reports, measurements, and other information that resides with the images. The next step in the PACS workflow is the reading workstation. The reading workstation is where the radiologist reviews the patient's study and formulates a diagnosis. Normally tied to the reading workstation is a reporting package that assists the radiologist with dictating the final report. Reporting software is optional, and there are various ways in which doctors prefer to dictate their reports. Ancillary to the workflow mentioned, there is normally CD/DVD authoring software used to burn patient studies for distribution to patients or referring physicians. This is the typical workflow in most imaging centers and hospitals. Note that this section does not cover integration with a radiology information system, hospital information system, and other such front-end systems that relate to the PACS workflow.
More and more PACS include web-based interfaces that use the Internet or a wide area network (WAN) as their means of communication, usually via VPN (virtual private network) or SSL (Secure Sockets Layer). The client-side software may use ActiveX, JavaScript and/or a Java applet. More robust PACS clients are full applications, which can use the full resources of the computer they are executing on and are unaffected by frequent unattended web browser and Java updates. As the need for distribution of images and reports becomes more widespread, there is a push for PACS to support part 18 of the DICOM standard, Web Access to DICOM Objects (WADO), which defines a standard way to expose images and reports over the web in a truly portable manner. Without stepping outside the focus of the PACS architecture, WADO becomes the solution for cross-platform capability and can increase the distribution of images and reports to referring physicians and patients.
PACS image backup is a critical, but sometimes overlooked, part of the PACS Architecture (see below). HIPAA requires that backup copies of patient images be made in case of image loss from the PACS. There are several methods of backing up the images, but they typically involve automatically sending copies of the images to a separate computer for storage, preferably off-site.
Querying (C-FIND) and Image (Instance) Retrieval (C-MOVE and C-GET)
The communication with the PACS server is done through DICOM messages that are similar to DICOM image "headers", but with different attributes. A query (C-FIND) is performed as follows:
The client establishes the network connection to the PACS server.
The client prepares a C-FIND request message which is a list of DICOM attributes.
The client fills in the C-FIND request message with the keys that should be matched. E.g. to query for a patient ID, the patient ID attribute is filled with the patient's ID.
The client creates empty (zero length) attributes for all the attributes it wishes to receive from the server. E.g. if the client wishes to receive an ID that it can use to receive images (see image retrieval) it should include a zero-length SOPInstanceUID (0008,0018) attribute in the C-FIND request messages.
The C-FIND request message is sent to the server.
The server sends back to the client a list of C-FIND response messages, each of which is also a list of DICOM attributes, populated with values for each match.
The client extracts the attributes that are of interest from the response messages objects.
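As an illustration of the steps above, here is a minimal sketch using the open-source pydicom and pynetdicom Python libraries. The server address, port, AE title, and query values are assumptions for the example, and exact class and method names may differ between library versions.

from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

ae = AE(ae_title='CLIENT_AE')                        # our Application Entity
ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

identifier = Dataset()                               # the C-FIND request attribute list
identifier.QueryRetrieveLevel = 'STUDY'
identifier.PatientID = '12345'                       # key to match (step 3)
identifier.StudyInstanceUID = ''                     # zero-length: ask the server to return this (step 4)

assoc = ae.associate('pacs.example.org', 104)        # step 1: connect to the PACS server
if assoc.is_established:
    # step 5: send the C-FIND request; steps 6-7: iterate over the response messages
    responses = assoc.send_c_find(identifier,
                                  PatientRootQueryRetrieveInformationModelFind)
    for status, response in responses:
        if status and status.Status in (0xFF00, 0xFF01) and response is not None:
            print(response.StudyInstanceUID)         # attribute of interest
    assoc.release()

The returned StudyInstanceUID values can then be used in a subsequent C-MOVE or C-GET request, as described below.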
Images (and other composite instances like Presentation States and Structured Reports) are then retrieved from a PACS server through either a C-MOVE or C-GET request, using the DICOM network protocol. Retrieval can be performed at the Study, Series or Image (instance) level. The C-MOVE request specifies where the retrieved instances should be sent (using separate C-STORE messages on one or more separate connections) with an identifier known as the destination Application Entity Title (AE Title). For a C-MOVE to work, the server must be configured with mapping of the AE Title to a TCP/IP address and port, and as a consequence the server must know in advance all the AE Titles that it will ever be requested to send images to. A C-GET, on the other hand, performs the C-STORE operations on the same connection as the request, and hence does not require that the "server" know the "client" TCP/IP address and port, and hence also works more easily through firewalls and with network address translation, environments in which the incoming TCP C-STORE connections required for C-MOVE may not get through. The difference between C-MOVE and C-GET is somewhat analogous to the difference between active and passive FTP. C-MOVE is most commonly used within enterprises and facilities, whereas C-GET is more practical between enterprises.
In addition to the traditional DICOM network services, particularly for cross-enterprise use, DICOM (and IHE) define other retrieval mechanisms, including WADO, WADO-WS and most recently WADO-RS.
Image archival and backup
Digital medical images are typically stored locally on a PACS for retrieval. It is important (and required in the United States by the Security Rule's Administrative Safeguards section of HIPAA) that facilities have a means of recovering images in the event of an error or disaster.
While each facility is different, the goal in image back-up is to make it automatic and as easy to administer as possible. The hope is that the copies won't be needed; however, disaster recovery and business continuity planning dictates that plans should include maintaining copies of data even when an entire site is temporarily or permanently lost.
Ideally, copies of images should be maintained in several locations, including off-site to provide disaster recovery capabilities. In general, PACS data is no different than other business critical data and should be protected with multiple copies at multiple locations. As PACS data can be considered protected health information (PHI), regulations may apply, most notably HIPAA and HIPAA Hi-Tech requirements.
Images may be stored both locally and remotely on off-line media such as disk, tape or optical media. The use of storage systems, using modern data protection technologies has become increasingly common, particularly for larger organizations with greater capacity and performance requirements. Storage systems may be configured and attached to the PACS server in various ways, either as Direct-Attached Storage (DAS), Network-attached storage (NAS), or via a Storage Area Network (SAN). However the storage is attached, enterprise storage systems commonly utilize RAID and other technologies to provide high availability and fault tolerance to protect against failures. In the event that it is necessary to reconstruct a PACS partially or completely, some means of rapidly transferring data back to the PACS is required, preferably while the PACS continues to operate.
Modern data storage replication technologies may be applied to PACS information, including the creation of local copies via point-in-time copy for locally protected copies, along with complete copies of data on separate repositories including disk and tape based systems. Remote copies of data should be created, either by physically moving tapes off-site, or copying data to remote storage systems. Whenever HIPAA protected data is moved, it should be encrypted, which includes sending via physical tape or replication technologies over WAN to a secondary location.
Other options for creating copies of PACS data include removable media (hard drives, DVDs or other media that can hold many patients' images) that is physically transferred off-site. HIPAA HITECH mandates encryption of stored data in many instances or other security mechanisms to avoid penalties for failure to comply.
The back-up infrastructure may also be capable of supporting the migration of images to a new PACS. Due to the high volume of images that need to be archived, many radiology centers are migrating their systems to a cloud-based PACS.
Integration
A full PACS should provide a single point of access for images and their associated data. That is, it should support all digital modalities, in all departments, throughout the enterprise.
However, until PACS penetration is complete, individual islands of digital imaging not yet connected to a central PACS may exist. These may take the form of a localized, modality-specific network of modalities, workstations and storage (a so-called "mini-PACS"), or may consist of a small cluster of modalities directly connected to reading workstations without long term storage or management. Such systems are also often not connected to the departmental information system. Historically, Ultrasound, Nuclear Medicine and Cardiology Cath Labs are often departments that adopt such an approach.
More recently, Full Field digital mammography (FFDM) has taken a similar approach, largely because of the large image size, highly specialized reading workflow and display requirements, and intervention by regulators. The rapid deployment of FFDM in the US following the DMIST study has led to the integration of Digital Mammography and PACS becoming more commonplace.
All PACS, whether they span the entire enterprise or are localized within a department, should also interface with existing hospital information systems: Hospital information system (HIS) and Radiology Information System (RIS).
Several kinds of data flow into the PACS as inputs to subsequent procedures, and back to the HIS as results corresponding to those inputs:
In: Patient identification and orders for examination. These data are sent from the HIS to the RIS via an integration interface, in most hospitals using the HL7 protocol. Patient ID and orders are sent to the modality (CT, MR, etc.) via the DICOM protocol (worklist). Images are created after scanning and then forwarded to the PACS server. The diagnosis report is created by the physician/radiologist based on the images retrieved from the PACS server and is then saved to the RIS.
Out: The diagnosis report and the images created accordingly. The diagnosis report is usually sent back to the HIS via HL7, and the images are sent back to the HIS via DICOM if a DICOM viewer is integrated with the HIS (in most cases, the clinical physician is notified that a diagnosis report is available and then queries the images from the PACS server).
Interfacing between multiple systems provides a more consistent and more reliable dataset:
Less risk of entering an incorrect patient ID for a study – modalities that support DICOM worklists can retrieve identifying patient information (patient name, patient number, accession number) for upcoming cases and present that to the technologist, preventing data entry errors during acquisition. Once the acquisition is complete, the PACS can compare the embedded image data with a list of scheduled studies from RIS, and can flag a warning if the image data does not match a scheduled study.
Data saved in the PACS can be tagged with unique patient identifiers (such as a social security number or NHS number) obtained from the HIS, providing a robust method of merging datasets from multiple hospitals, even where the different centers use different ID systems internally.
An interface can also improve workflow patterns:
When a study has been reported by a radiologist the PACS can mark it as read. This avoids needless double-reading. The report can be attached to the images and be viewable via a single interface.
Improved use of online storage and nearline storage in the image archive. The PACS can obtain lists of appointments and admissions in advance, allowing images to be pre-fetched from off-line storage or near-line storage onto online disk storage.
Recognition of the importance of integration has led a number of suppliers to develop fully integrated RIS/PACS. These may offer a number of advanced features:
Dictation of reports can be integrated into a single system. Integrated speech-to-text voice recognition software may be used to create and upload a report to the patient's chart within minutes of the patient's scan, or the reporting physician may dictate their findings into a phone system or voice recorder. That recording may be automatically sent to a transcript writer's workstation for typing, but it can also be made available for access by physicians, avoiding typing delays for urgent results, or retained in case of typing error.
Provides a single tool for quality control and audit purposes. Rejected images can be tagged, allowing later analysis (as may be required under radiation protection legislation). Workloads and turn-around time can be reported automatically for management purposes.
Acceptance testing
The PACS installation process is complicated, requiring time, resources, planning, and testing. Installation is not complete until the acceptance test is passed. Acceptance testing of a new installation is a vital step to assure user compliance, functionality, and especially clinical safety. Take, for example, the Therac-25, a radiation therapy machine involved in accidents in which patients were given massive overdoses of radiation due to unverified software control.
The acceptance test determines whether the PACS is ready for clinical use and marks the warranty timeline while serving as a payment milestone. The test process varies in time requirements depending on facility size, but a contractual 30-day time limit is not unusual. It requires detailed planning and development of testing criteria prior to writing the contract. It is a joint process requiring defined test protocols and benchmarks.
Testing uncovers deficiencies. A study determined that the most frequently cited deficiencies were the most costly components. Failures ranked from most-to-least common are: Workstation; HIS/RIS/ACS broker interfaces; RIS; Computer Monitors; Web-based image distribution system; Modality interfaces; Archive devices; Maintenance; Training; Network; DICOM; Teleradiology; Security; Film digitizer.
History
One of the first basic PACS was created in 1972 by Dr Richard J. Steckel.
The principles of PACS were first discussed at meetings of radiologists in 1982. Various people are credited with the coinage of the term PACS. Cardiovascular radiologist Dr Andre Duerinckx reported in 1983 that he had first used the term in 1981. Dr Samuel Dwyer, though, credits Dr Judith M. Prewitt for introducing the term.
Dr Harold Glass, a medical physicist working in London in the early 1990s, secured UK Government funding and managed over many years the project that made Hammersmith Hospital in London the first filmless hospital in the United Kingdom. Dr Glass died a few months after the project went live but is credited with being one of the pioneers of PACS.
The first large-scale PACS installation was in 1982 at the University of Kansas, Kansas City. This first installation became more of a teaching experience of what not to do rather than what to do in a PACS installation.
Regulatory concerns
In the US PACS are classified as Medical Devices, and hence if for sale are regulated by the USFDA. In general they are subject to Class 2 controls and hence require a 510(k), though individual PACS components may be subject to less stringent general controls. Some specific applications, such as the use for primary mammography interpretation, are additionally regulated within the scope of the Mammography Quality Standards Act.
The Society for Imaging Informatics in Medicine (SIIM) is the worldwide professional and trade organization that provides an annual meeting and a peer-reviewed journal to promote research and education about PACS and related digital topics.
See also
DICOM
Electronic Health Record (EHR)
Electronic Medical Record (EMR)
Enterprise Imaging
Medical device
Medical image sharing
Medical imaging
Medical software
Radiographer
Radiology
Radiology Information System
Teleradiology
Vendor Neutral Archive (VNA)
Visible Light Imaging
X-ray
References
Citations
Sources
External links
PACS History Web Site
USC IPILab Research Article on Backup
Medical imaging
Electronic health records |
63941 | https://en.wikipedia.org/wiki/Session%20key | Session key | A session key is a single-use symmetric key used for encrypting all messages in one communication session. A closely related term is content encryption key (CEK), traffic encryption key (TEK), or multicast key, which refers to any key used for encrypting messages, as opposed to keys used for other purposes, such as encrypting other keys (a key encryption key, or KEK).
Session keys can introduce complications into a system, yet they solve some real problems. There are two primary reasons to use session keys:
Several cryptanalytic attacks become easier the more material encrypted with a specific key is available. By limiting the amount of data processed using a particular key, those attacks are rendered harder to perform.
Asymmetric encryption is too slow for many purposes, and all secret-key algorithms require that the key be securely distributed. By using an asymmetric algorithm to encrypt the secret key for another, faster, symmetric algorithm, it is possible to improve overall performance considerably. This is the process used by PGP and GPG (a minimal illustrative sketch of this hybrid approach follows below).
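The sketch below is a minimal illustration only, using the third-party Python cryptography package; the message contents, key sizes, and variable names are assumptions chosen for the example rather than a prescribed implementation. It shows the hybrid pattern described above: a fresh random session key encrypts the bulk message with a fast symmetric algorithm, while the slower asymmetric algorithm protects only that session key.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-term asymmetric key pair (generated once).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: create a fresh random session key for this one session.
session_key = AESGCM.generate_key(bit_length=256)

# Encrypt the bulk message quickly with the symmetric session key.
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual message", None)

# Protect the session key itself with the slower asymmetric algorithm.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key, then decrypt the message.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"the actual message"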
Like all cryptographic keys, session keys must be chosen so that they cannot be predicted by an attacker, usually requiring them to be chosen randomly. Failure to choose session keys (or any key) properly is a major (and too common in actual practice) design flaw in any crypto system.
See also
Ephemeral key
Random number generator
List of cryptographic key types
One-time pad
Perfect forward secrecy
References
Key management |
63973 | https://en.wikipedia.org/wiki/Wi-Fi | Wi-Fi | Wi-Fi () is a family of wireless network protocols, based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access, allowing nearby digital devices to exchange data by radio waves. These are the most widely used computer networks in the world, used globally in home and small office networks to link desktop and laptop computers, tablet computers, smartphones, smart TVs, printers, and smart speakers together and to a wireless router to connect them to the Internet, and in wireless access points in public places like coffee shops, hotels, libraries and airports to provide the public Internet access for mobile devices.
Wi-Fi is a trademark of the non-profit Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing. The Wi-Fi Alliance consists of more than 800 companies from around the world, and over 3.05 billion Wi-Fi-enabled devices are shipped globally each year.
Wi-Fi uses multiple parts of the IEEE 802 protocol family and is designed to interwork seamlessly with its wired sibling, Ethernet. Compatible devices can network through wireless access points to each other as well as to wired devices and the Internet. The different versions of Wi-Fi are specified by various IEEE 802.11 protocol standards, with the different radio technologies determining the radio bands, maximum ranges, and speeds that may be achieved. Wi-Fi most commonly uses the UHF and SHF radio bands; these bands are subdivided into multiple channels. Channels can be shared between networks, but only one transmitter can locally transmit on a channel at any moment in time.
Wi-Fi's wavebands have relatively high absorption and work best for line-of-sight use. Many common obstructions such as walls, pillars, and home appliances may greatly reduce range, but this also helps minimize interference between different networks in crowded environments. An access point (or hotspot) typically has a limited indoor range, while some modern access points claim a considerably longer range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres, achieved by using many overlapping access points with roaming permitted between them. Over time the speed and spectral efficiency of Wi-Fi have increased; some versions of Wi-Fi, running on suitable hardware at close range, can achieve speeds of 9.6 Gbit/s (gigabits per second).
History
A 1985 ruling by the U.S. Federal Communications Commission released parts of the ISM bands for unlicensed use for communications. These frequency bands include the same 2.4 GHz bands used by equipment such as microwave ovens and are thus subject to interference.
A Prototype Test Bed for a wireless local area network was developed in 1992 by researchers from the Radiophysics Division of CSIRO in Australia.
About the same time in The Netherlands in 1991, the NCR Corporation with AT&T Corporation invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. NCR's Vic Hayes, who held the chair of IEEE 802.11 for 10 years, along with Bell Labs Engineer Bruce Tuch, approached IEEE to create a standard and were involved in designing the initial 802.11b and 802.11a standards within the IEEE. They have both been subsequently inducted into the Wi-Fi NOW Hall of Fame.
The first version of the 802.11 protocol was released in 1997, and provided up to 2 Mbit/s link speeds. This was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds, and this proved popular.
In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
The major commercial breakthrough came with Apple Inc. adopting Wi-Fi for their iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort. This was in collaboration with the same group that helped create the standard Vic Hayes, Bruce Tuch, Cees Links, Rich McGinn, and others from Lucent.
Wi-Fi uses a large number of patents held by many different organizations. In April 2009, 14 technology companies agreed to pay Australia's CSIRO $1 billion for infringements on CSIRO patents. Australia claims Wi-Fi as an Australian invention, a claim that was at the time the subject of some controversy. CSIRO won a further $220 million settlement for Wi-Fi patent infringements in 2012, with global firms in the United States required to pay CSIRO licensing rights estimated at an additional $1 billion in royalties. In 2016, the CSIRO wireless local area network (WLAN) Prototype Test Bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects, held in the National Museum of Australia.
Etymology and terminology
The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'." Phil Belanger, a founding member of the Wi-Fi Alliance, has stated that the term Wi-Fi was chosen from a list of ten potential names invented by Interbrand.
The name Wi-Fi has no further meaning, and was never officially a shortened form of "Wireless Fidelity". Nevertheless, the Wi-Fi Alliance used the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created, and the Wi-Fi Alliance was also called the "Wireless Fidelity Alliance Inc" in some publications. The name is often written as WiFi, Wifi, or wifi, but these are not approved by the Wi-Fi Alliance. IEEE is a separate, but related, organization and their website has stated "WiFi is a short name for Wireless Fidelity".
Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability.
Non-Wi-Fi technologies intended for fixed points, such as Motorola Canopy, are usually described as fixed wireless. Alternative wireless technologies include mobile phone standards, such as 2G, 3G, 4G, 5G and LTE.
To connect to a Wi-Fi LAN, a computer must be equipped with a wireless network interface controller. The combination of a computer and an interface controller is called a station. Stations are identified by one or more MAC addresses.
Wi-Fi nodes often operate in infrastructure mode where all communications go through a base station. Ad hoc mode refers to devices talking directly to each other without the need to first talk to an access point.
A service set is the set of all the devices associated with a particular Wi-Fi network. Devices in a service set need not be on the same wavebands or channels. A service set can be local, independent, extended, or mesh or a combination.
Each service set has an associated identifier, the 32-byte Service Set Identifier (SSID), which identifies the particular network. The SSID is configured within the devices that are considered part of the network.
A Basic Service Set (BSS) is a group of stations that all share the same wireless channel, SSID, and other wireless settings that have wirelessly connected (usually to the same access point). Each BSS is identified by a MAC address which is called the BSSID.
Certification
The IEEE does not test equipment for compliance with its standards. The non-profit Wi-Fi Alliance was formed in 1999 to fill this void—to establish and enforce standards for interoperability and backward compatibility, and to promote wireless local-area-network technology. The Wi-Fi Alliance includes more than 800 companies, among them 3Com (now owned by HPE/Hewlett-Packard Enterprise), Aironet (now owned by Cisco), Harris Semiconductor (now owned by Intersil), Lucent (now owned by Nokia), Nokia and Symbol Technologies (now owned by Zebra Technologies). The Wi-Fi Alliance restricts the use of the Wi-Fi brand to technologies based on the IEEE 802.11 standards. This includes wireless local area network (WLAN) connections, device-to-device connectivity (such as Wi-Fi Peer to Peer, also known as Wi-Fi Direct), personal area network (PAN), local area network (LAN), and even some limited wide area network (WAN) connections. Manufacturers with membership in the Wi-Fi Alliance, whose products pass the certification process, gain the right to mark those products with the Wi-Fi logo.
Specifically, the certification process requires conformance to the IEEE 802.11 radio standards, the WPA and WPA2 security standards, and the EAP authentication standard. Certification may optionally include tests of IEEE 802.11 draft standards, interaction with cellular-phone technology in converged devices, and features relating to security set-up, multimedia, and power-saving.
Not every Wi-Fi device is submitted for certification. The lack of Wi-Fi certification does not necessarily imply that a device is incompatible with other Wi-Fi devices. The Wi-Fi Alliance may or may not sanction derivative terms, such as Super Wi-Fi, coined by the US Federal Communications Commission (FCC) to describe proposed networking in the UHF TV band in the US.
Versions and generations
Equipment frequently supports multiple versions of Wi-Fi. To communicate, devices must use a common Wi-Fi version. The versions differ between the radio wavebands they operate on, the radio bandwidth they occupy, the maximum data rates they can support and other details. Some versions permit the use of multiple antennas, which permits greater speeds as well as reduced interference.
Historically, the equipment has simply listed the versions of Wi-Fi using the name of the IEEE standard that it supports. In 2018, the Wi-Fi Alliance introduced simplified Wi-Fi generational numbering to indicate equipment that supports Wi-Fi 4 (802.11n), Wi-Fi 5 (802.11ac) and Wi-Fi 6 (802.11ax). These generations have a high degree of backward compatibility with previous versions. The alliance has stated that the generational level 4, 5, or 6 can be indicated in the user interface when connected, along with the signal strength.
The list of most important versions of Wi-Fi is: 802.11a, 802.11b, 802.11g, 802.11n (Wi-Fi 4), 802.11h, 802.11i, 802.11-2007, 802.11-2012, 802.11ac (Wi-Fi 5), 802.11ad, 802.11af, 802.11-2016, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax (Wi-Fi 6), 802.11ay.
Uses
Internet
Wi-Fi technology may be used to provide local network and Internet access to devices that are within Wi-Fi range of one or more routers that are connected to the Internet. The coverage of one or more interconnected access points (hotspots) can extend from an area as small as a few rooms to as large as many square kilometres (miles). Coverage in the larger area may require a group of access points with overlapping coverage. For example, public outdoor Wi-Fi technology has been used successfully in wireless mesh networks in London. An international example is Fon.
Wi-Fi provides services in private homes and businesses, as well as in public spaces. Wi-Fi hotspots may be set up either free of charge or commercially, often using a captive portal webpage for access. Organizations, enthusiasts, authorities and businesses, such as airports, hotels, and restaurants, often provide free or paid-use hotspots to attract customers and to provide services that promote business in selected areas. Routers that incorporate a digital subscriber line modem or a cable modem and a Wi-Fi access point are frequently set up in homes and other buildings to provide Internet access and internetworking for the structure.
Similarly, battery-powered routers may include a cellular Internet radio modem and a Wi-Fi access point. When subscribed to a cellular data carrier, they allow nearby Wi-Fi stations to access the Internet over 2G, 3G, or 4G networks using the tethering technique. Many smartphones have a built-in capability of this sort, including those based on Android, BlackBerry, Bada, iOS, Windows Phone, and Symbian, though carriers often disable the feature, or charge a separate fee to enable it, especially for customers with unlimited data plans. "Internet packs" provide standalone facilities of this type as well, without the use of a smartphone; examples include the MiFi- and WiBro-branded devices. Some laptops that have a cellular modem card can also act as mobile Internet Wi-Fi access points.
Many traditional university campuses in the developed world provide at least partial Wi-Fi coverage. Carnegie Mellon University built the first campus-wide wireless Internet network, called Wireless Andrew, at its Pittsburgh campus in 1993 before Wi-Fi branding originated. By February 1997, the CMU Wi-Fi zone was fully operational. Many universities collaborate in providing Wi-Fi access to students and staff through the Eduroam international authentication infrastructure.
City-wide
In the early 2000s, many cities around the world announced plans to construct citywide Wi-Fi networks. There are many successful examples; in 2004, Mysore (Mysuru) became India's first Wi-Fi-enabled city. A company called WiFiyNet has set up hotspots in Mysore, covering the whole city and a few nearby villages.
In 2005, St. Cloud, Florida and Sunnyvale, California, became the first cities in the United States to offer citywide free Wi-Fi (from MetroFi). Minneapolis has generated $1.2 million in profit annually for its provider.
In May 2010, the then London mayor Boris Johnson pledged to have London-wide Wi-Fi by 2012. Several boroughs including Westminster and Islington already had extensive outdoor Wi-Fi coverage at that point.
New York City announced a city-wide campaign to convert old phone booths into digitized "kiosks" in 2014. The project, titled LinkNYC, has created a network of kiosks which serve as public WiFi hotspots, high-definition screens and landlines. Installation of the screens began in late 2015. The city government plans to implement more than seven thousand kiosks over time, eventually making LinkNYC the largest and fastest public, government-operated Wi-Fi network in the world. The UK has planned a similar project across major cities of the country, with the project's first implementation in the Camden borough of London.
Officials in South Korea's capital Seoul are moving to provide free Internet access at more than 10,000 locations around the city, including outdoor public spaces, major streets, and densely populated residential areas. Seoul will grant leases to KT, LG Telecom, and SK Telecom. The companies will invest $44 million in the project, which was to be completed in 2015.
Geolocation
Wi-Fi positioning systems use the positions of Wi-Fi hotspots to identify a device's location.
Motion detection
Wi-Fi sensing is used in applications such as motion detection and gesture recognition.
Operational principles
Wi-Fi stations communicate by sending each other data packets: blocks of data individually sent and delivered over radio. As with all radio, this is done by the modulation and demodulation of carrier waves. Different versions of Wi-Fi use different techniques: 802.11b uses DSSS on a single carrier, whereas 802.11a, Wi-Fi 4, 5, and 6 use multiple carriers on slightly different frequencies within the channel (OFDM).
As with other IEEE 802 LANs, stations come programmed with a globally unique 48-bit MAC address (often printed on the equipment) so that each Wi-Fi station has a unique address. The MAC addresses are used to specify both the destination and the source of each data packet. Wi-Fi establishes link-level connections, which can be defined using both the destination and source addresses. On the reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored. A network interface normally does not accept packets addressed to other Wi-Fi stations.
Due to the ubiquity of Wi-Fi and the ever-decreasing cost of the hardware needed to support it, many manufacturers now build Wi-Fi interfaces directly into PC motherboards, eliminating the need for installation of a separate wireless network card.
Channels are used half duplex and can be time-shared by multiple networks. When communication happens on the same channel, any information sent by one computer is locally received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it. The use of the same channel also means that the data bandwidth is shared, such that, for example, available data bandwidth to each device is halved when two stations are actively transmitting.
A scheme known as carrier sense multiple access with collision avoidance (CSMA/CA) governs the way stations share channels. With CSMA/CA stations attempt to avoid collisions by beginning transmission only after the channel is sensed to be "idle", but then transmit their packet data in its entirety. However, for geometric reasons, it cannot completely prevent collisions. A collision happens when a station receives multiple signals on a channel at the same time. This corrupts the transmitted data and can require stations to re-transmit. The lost data and re-transmission reduces throughput, in some cases severely.
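The listen-before-transmit behaviour with random backoff can be illustrated by a small, heavily simplified slotted simulation in Python. It is a toy model only: real 802.11 adds inter-frame spacing, acknowledgements, multi-slot frame airtime, and optional RTS/CTS; the parameters below are assumptions for the example.

import random

def simulate_csma_ca(num_stations=4, slots=10_000, cw_min=15, cw_max=1023, p_new_frame=0.05):
    # Each station with a pending frame holds a backoff counter and transmits when it
    # reaches zero. Simultaneous transmissions collide, and colliding stations double
    # their contention window (binary exponential backoff).
    backoff = [None] * num_stations   # None = no frame queued
    cw = [cw_min] * num_stations
    delivered = collisions = 0

    for _ in range(slots):
        # Idle stations may generate a new frame and draw an initial backoff.
        for s in range(num_stations):
            if backoff[s] is None and random.random() < p_new_frame:
                backoff[s] = random.randint(0, cw[s])

        ready = [s for s in range(num_stations) if backoff[s] == 0]
        if len(ready) == 1:                       # exactly one transmitter: success
            delivered += 1
            backoff[ready[0]], cw[ready[0]] = None, cw_min
        elif len(ready) > 1:                      # collision: retry with larger windows
            collisions += 1
            for s in ready:
                cw[s] = min(2 * cw[s] + 1, cw_max)
                backoff[s] = random.randint(0, cw[s])

        # Backoff counters only count down while the channel is sensed idle.
        if not ready:
            for s in range(num_stations):
                if backoff[s]:
                    backoff[s] -= 1

    return delivered, collisions

print(simulate_csma_ca())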
Waveband
The 802.11 standard provides several distinct radio frequency ranges for use in Wi-Fi communications: 900 MHz, 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz, 5.9 GHz and 60 GHz bands. Each range is divided into a multitude of channels. In the standards, channels are numbered at 5 MHz spacing within a band (except in the 60 GHz band, where they are 2.16 GHz apart), and the number refers to the centre frequency of the channel. Although channels are numbered at 5 MHz spacing, transmitters generally occupy at least 20 MHz, and standards allow for channels to be bonded together to form wider channels for higher throughput.
Countries apply their own regulations to the allowable channels, allowed users, and maximum power levels within these frequency ranges. The term "ISM band" is also often applied improperly to these ranges, because some do not know the difference between Part 15 and Part 18 of the FCC rules.
802.11b/g/n can use the 2.4 GHz Part 15 band, operating in the United States under Part 15 Rules and Regulations. In this frequency band equipment may occasionally suffer interference from microwave ovens, cordless telephones, USB 3.0 hubs, and Bluetooth devices.
Spectrum assignments and operational limitations are not consistent worldwide: Australia and Europe allow for an additional two channels (12, 13) beyond the 11 permitted in the United States for the 2.4 GHz band, while Japan has three more (12–14). In the US and other countries, 802.11a and 802.11g devices may be operated without a licence, as allowed in Part 15 of the FCC Rules and Regulations.
802.11a/h/j/n/ac/ax can use the 5 GHz U-NII band, which, for much of the world, offers at least 23 non-overlapping 20 MHz channels, unlike the 2.4 GHz frequency band, where adjacent channels are spaced only 5 MHz apart and overlap. In general, lower frequencies have better range but less capacity. The 5 GHz bands are absorbed to a greater degree by common building materials than the 2.4 GHz bands and usually give a shorter range.
As 802.11 specifications evolved to support higher throughput, the protocols have become much more efficient in their use of bandwidth. Additionally, they have gained the ability to aggregate (or 'bond') channels together to gain still more throughput where the bandwidth is available. 802.11n allows for double radio spectrum/bandwidth (40 MHz- 8 channels) compared to 802.11a or 802.11g (20 MHz). 802.11n can also be set to limit itself to 20 MHz bandwidth to prevent interference in dense communities. In the 5 GHz band, 20 MHz, 40 MHz, 80 MHz, and 160 MHz bandwidth signals are permitted with some restrictions, giving much faster connections.
Communication stack
Wi-Fi is part of the IEEE 802 protocol family. The data is organized into 802.11 frames that are very similar to Ethernet frames at the data link layer, but with extra address fields. MAC addresses are used as network addresses for routing over the LAN.
Wi-Fi's MAC and physical layer (PHY) specifications are defined by IEEE 802.11 for modulating and receiving one or more carrier waves to transmit the data in the infrared, and 2.4, 3.6, 5, or 60 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had many subsequent amendments. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand. While each amendment is officially revoked when it is incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote capabilities of their products. As a result, in the market place, each revision tends to become its own standard.
In addition to 802.11, the IEEE 802 protocol family has specific provisions for Wi-Fi. These are required because Ethernet's cable-based media are not usually shared, whereas with wireless all transmissions are received by all stations within range that employ that radio channel. While Ethernet has essentially negligible error rates, wireless communication media are subject to significant interference. Accurate transmission is therefore not guaranteed, so delivery is a best-effort delivery mechanism. Because of this, for Wi-Fi, the Logical Link Control (LLC) specified by IEEE 802.2 employs Wi-Fi's media access control (MAC) protocols to manage retries without relying on higher levels of the protocol stack.
For internetworking purposes, Wi-Fi is usually layered as a link layer (equivalent to the physical and data link layers of the OSI model) below the internet layer of the Internet Protocol. This means that nodes have an associated internet address and, with suitable connectivity, this allows full Internet access.
Modes
Infrastructure
In infrastructure mode, which is the most common mode used, all communications go through a base station. For communications within the network, this introduces an extra use of the airwaves but has the advantage that any two stations that can communicate with the base station can also communicate through the base station, which enormously simplifies the protocols.
Ad hoc and Wi-Fi direct
Wi-Fi also allows communications directly from one computer to another without an access point intermediary. This is called ad hoc Wi-Fi transmission. Different types of ad hoc networks exist. In the simplest case network nodes must talk directly to each other. In more complex protocols nodes may forward packets, and nodes keep track of how to reach other nodes, even if they move around.
Ad hoc mode was first described by Chai Keong Toh in his 1996 patent on wireless ad hoc routing, implemented on Lucent WaveLAN 802.11a hardware on IBM ThinkPads in a multi-node scenario spanning a region of over a mile. The success was recorded in Mobile Computing magazine (1999) and later published formally in IEEE Transactions on Wireless Communications (2002) and ACM SIGMETRICS Performance Evaluation Review (2001).
This wireless ad hoc network mode has proven popular with multiplayer handheld game consoles, such as the Nintendo DS, PlayStation Portable, digital cameras, and other consumer electronics devices. Some devices can also share their Internet connection using ad hoc, becoming hotspots or "virtual routers".
Similarly, the Wi-Fi Alliance promotes the specification Wi-Fi Direct for file transfers and media sharing through a new discovery- and security-methodology. Wi-Fi Direct launched in October 2010.
Another mode of direct communication over Wi-Fi is Tunneled Direct-Link Setup (TDLS), which enables two devices on the same Wi-Fi network to communicate directly, instead of via the access point.
Multiple access points
An Extended Service Set may be formed by deploying multiple access points that are configured with the same SSID and security settings. Wi-Fi client devices typically connect to the access point that can provide the strongest signal within that service set.
Increasing the number of Wi-Fi access points for a network provides redundancy, better range, support for fast roaming, and increased overall network-capacity by using more channels or by defining smaller cells. Except for the smallest implementations (such as home or small office networks), Wi-Fi implementations have moved toward "thin" access points, with more of the network intelligence housed in a centralized network appliance, relegating individual access points to the role of "dumb" transceivers. Outdoor applications may use mesh topologies.
Performance
Wi-Fi operational range depends on factors such as the frequency band, radio power output, receiver sensitivity, antenna gain, and antenna type as well as the modulation technique. Also, the propagation characteristics of the signals can have a big impact.
At longer distances, and with greater signal absorption, speed is usually reduced.
Transmitter power
Compared to cell phones and similar technology, Wi-Fi transmitters are low-power devices. In general, the maximum amount of power that a Wi-Fi device can transmit is limited by local regulations, such as FCC Part 15 in the US. Equivalent isotropically radiated power (EIRP) in the European Union is limited to 20 dBm (100 mW).
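For reference, dBm is simply a logarithmic expression of power relative to one milliwatt; the short sketch below (illustrative only) makes the 20 dBm = 100 mW equivalence explicit.

import math

def dbm_to_mw(dbm):
    # Convert a power level in dBm to milliwatts.
    return 10 ** (dbm / 10)

def mw_to_dbm(mw):
    # Convert a power level in milliwatts to dBm.
    return 10 * math.log10(mw)

print(dbm_to_mw(20))   # 100.0 mW, the EU EIRP limit mentioned above
print(mw_to_dbm(100))  # 20.0 dBm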
To reach requirements for wireless LAN applications, Wi-Fi has higher power consumption compared to some other standards designed to support wireless personal area network (PAN) applications. For example, Bluetooth provides a much shorter propagation range between 1 and 100 metres (1 and 100 yards) and so in general has a lower power consumption. Other low-power technologies such as ZigBee have fairly long range, but much lower data rate. The high power consumption of Wi-Fi makes battery life in some mobile devices a concern.
Antenna
An access point compliant with either 802.11b or 802.11g, using the stock omnidirectional antenna, might have a relatively modest range. The same radio with an external semi-parabolic antenna (15 dB gain) and a similarly equipped receiver at the far end might have a range of over 20 miles.
Higher gain rating (dBi) indicates further deviation (generally toward the horizontal) from a theoretical, perfect isotropic radiator, and therefore the antenna can project or accept a usable signal further in particular directions, as compared to a similar output power on a more isotropic antenna. For example, an 8 dBi antenna used with a 100 mW driver has a similar horizontal range to a 6 dBi antenna being driven at 500 mW. Note that this assumes that radiation in the vertical is lost; this may not be the case in some situations, especially in large buildings or within a waveguide. In the above example, a directional waveguide could cause the low-power 6 dBi antenna to project much further in a single direction than the 8 dBi antenna, which is not in a waveguide, even if they are both driven at 100 mW.
On wireless routers with detachable antennas, it is possible to improve range by fitting upgraded antennas that provide a higher gain in particular directions. Outdoor ranges can be improved to many kilometres (miles) through the use of high gain directional antennas at the router and remote device(s).
MIMO (multiple-input and multiple-output)
Wi-Fi 4 and higher standards allow devices to have multiple antennas on transmitters and receivers. Multiple antennas enable the equipment to exploit multipath propagation on the same frequency bands giving much faster speeds and greater range.
Wi-Fi 4 can more than double the range over previous standards.
The Wi-Fi 5 standard uses the 5 GHz band exclusively, and is capable of multi-station WLAN throughput of at least 1 gigabit per second, and single-station throughput of at least 500 Mbit/s. As of the first quarter of 2016, the Wi-Fi Alliance certifies devices compliant with the 802.11ac standard as "Wi-Fi CERTIFIED ac". This standard uses several signal processing techniques, such as multi-user MIMO, 4×4 spatial multiplexing streams, and wide channel bandwidth (160 MHz), to achieve its gigabit throughput. According to a study by IHS Technology, 70% of all access point sales revenue in the first quarter of 2016 came from 802.11ac devices.
Radio propagation
With Wi-Fi signals line-of-sight usually works best, but signals can transmit, absorb, reflect, refract, diffract and up and down fade through and around structures, both man-made and natural. Wi-Fi signals are very strongly affected by metallic structures (including rebar in concrete, low-e coatings in glazing) and water (such as found in vegetation.)
Due to the complex nature of radio propagation at typical Wi-Fi frequencies, particularly around trees and buildings, algorithms can only approximately predict Wi-Fi signal strength for any given area in relation to a transmitter. This effect does not apply equally to long-range Wi-Fi, since longer links typically operate from towers that transmit above the surrounding foliage.
Mobile use of Wi-Fi over wider ranges is limited, for instance, to uses such as in an automobile moving from one hotspot to another. Other wireless technologies are more suitable for communicating with moving vehicles.
Distance records
Distance records (using non-standard devices) include one set in June 2007 by Ermanno Pietrosemoli and EsLaRed of Venezuela, who transferred about 3 MB of data between the mountain-tops of El Águila and Platillon. The Swedish Space Agency transferred data to an overhead stratospheric balloon, using 6 watt amplifiers.
Interference
Wi-Fi connections can be blocked or the Internet speed lowered by having other devices in the same area. Wi-Fi protocols are designed to share the wavebands reasonably fairly, and this often works with little to no disruption. To minimize collisions with Wi-Fi and non-Wi-Fi devices, Wi-Fi employs Carrier-sense multiple access with collision avoidance (CSMA/CA), where transmitters listen before transmitting and delay transmission of packets if they detect that other devices are active on the channel, or if noise is detected from adjacent channels or non-Wi-Fi sources. Nevertheless, Wi-Fi networks are still susceptible to the hidden node and exposed node problem.
A standard speed Wi-Fi signal occupies five channels in the 2.4 GHz band. Interference can be caused by overlapping channels. Any two channel numbers that differ by five or more, such as 2 and 7, do not overlap (no adjacent-channel interference). The oft-repeated adage that channels 1, 6, and 11 are the only non-overlapping channels is, therefore, not accurate. Channels 1, 6, and 11 are the only group of three non-overlapping channels in North America. However, whether the overlap is significant depends on physical spacing. Channels that are four apart interfere a negligible amount—much less than reusing channels (which causes co-channel interference)—if transmitters are at least a few metres apart. In Europe and Japan where channel 13 is available, using Channels 1, 5, 9, and 13 for 802.11g and 802.11n is recommended.
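The arithmetic behind these statements is straightforward: 2.4 GHz channel centres are 5 MHz apart while a classic 802.11b signal is about 22 MHz wide, so channels whose numbers differ by five or more cannot overlap. The following is a small illustrative sketch (the 22 MHz width is the DSSS case; 20 MHz OFDM channels behave similarly).

def channel_centre_mhz(channel):
    # Centre frequency of 2.4 GHz Wi-Fi channels 1-13 (channel 14 is a special case).
    return 2407 + 5 * channel

def channels_overlap(a, b, width_mhz=22):
    # Two channels overlap if their centres are closer together than one channel width.
    return abs(channel_centre_mhz(a) - channel_centre_mhz(b)) < width_mhz

print(channels_overlap(1, 6))   # False: 25 MHz apart, no adjacent-channel interference
print(channels_overlap(2, 7))   # False
print(channels_overlap(1, 5))   # True: only 20 MHz apart, slight overlap of 22 MHz signals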
However, many 2.4 GHz 802.11b and 802.11g access-points default to the same channel on initial startup, contributing to congestion on certain channels. Wi-Fi pollution, or an excessive number of access points in the area, can prevent access and interfere with other devices' use of other access points as well as with decreased signal-to-noise ratio (SNR) between access points. These issues can become a problem in high-density areas, such as large apartment complexes or office buildings with many Wi-Fi access points.
Other devices use the 2.4 GHz band: microwave ovens, ISM band devices, security cameras, ZigBee devices, Bluetooth devices, video senders, cordless phones, baby monitors, and, in some countries, amateur radio, all of which can cause significant additional interference. It is also an issue when municipalities or other large entities (such as universities) seek to provide large area coverage. On some 5 GHz bands interference from radar systems can occur in some places. For base stations that support those bands they employ Dynamic Frequency Selection which listens for radar, and if it is found, it will not permit a network on that band.
These bands can be used by low power transmitters without a licence, and with few restrictions. However, while unintended interference is common, users that have been found to cause deliberate interference (particularly for attempting to locally monopolize these bands for commercial purposes) have been issued large fines.
Throughput
Various layer-2 variants of IEEE 802.11 have different characteristics. Across all flavours of 802.11, maximum achievable throughputs are quoted either from measurements under ideal conditions or from the layer-2 data rates. These figures, however, do not apply to typical deployments, in which data are transferred between two endpoints of which at least one is typically connected to a wired infrastructure and the other is connected to that infrastructure via a wireless link.
This means that data frames typically pass over an 802.11 (WLAN) medium and are converted to 802.3 (Ethernet) frames, or vice versa.
Due to the difference in the frame (header) lengths of these two media, the packet size of an application determines the speed of the data transfer. This means that an application that uses small packets (e.g. VoIP) creates a data flow with high overhead traffic (low goodput).
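As a rough, illustrative calculation of this per-packet overhead effect (the byte counts below are simplified placeholders, not exact 802.11 or 802.3 header sizes):

def goodput_fraction(payload_bytes, per_packet_overhead_bytes):
    """Fraction of transmitted bytes that is useful application payload.
    The overhead stands in for headers, preambles and acknowledgements."""
    return payload_bytes / (payload_bytes + per_packet_overhead_bytes)

if __name__ == "__main__":
    overhead = 90   # assumed combined per-frame overhead, for illustration only
    for payload in (100, 500, 1500):   # VoIP-sized versus bulk-transfer-sized packets
        print(f"{payload:5d}-byte payload -> {goodput_fraction(payload, overhead):.0%} goodput")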
Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e. the data rate) and the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices.
The same references apply to the attached throughput graphs, which show measurements of UDP throughput. Each represents an average throughput over 25 measurements (the error bars are there, but barely visible due to the small variation), for a specific packet size (small or large) and a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. This text and the measurements do not cover packet errors, but information about this can be found at the above references. The table below shows the maximum achievable (application-specific) UDP throughput in the same scenarios (same references again) with various WLAN (802.11) flavours. The measurement hosts were 25 metres apart; loss is again ignored.
Hardware
Wi-Fi allows wireless deployment of local area networks (LANs). Also, spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs. However, building walls of certain materials, such as stone with high metal content, can block Wi-Fi signals.
A Wi-Fi device is a short-range wireless device. Wi-Fi devices are fabricated on RF CMOS integrated circuit (RF circuit) chips.
Since the early 2000s, manufacturers have been building wireless network adapters into most laptops. The price of chipsets for Wi-Fi continues to drop, making it an economical networking option included in ever more devices.
Different competitive brands of access points and client network-interfaces can inter-operate at a basic level of service. Products designated as "Wi-Fi Certified" by the Wi-Fi Alliance are backward compatible. Unlike mobile phones, any standard Wi-Fi device works anywhere in the world.
Access point
A wireless access point (WAP) connects a group of wireless devices to an adjacent wired LAN. An access point resembles a network hub, relaying data between connected wireless devices in addition to a (usually) single connected wired device, most often an Ethernet hub or switch, allowing wireless devices to communicate with other wired devices.
Wireless adapter
Wireless adapters allow devices to connect to a wireless network. These adapters connect to devices using various external or internal interconnects such as PCI, miniPCI, USB, ExpressCard, Cardbus, and PC Card. As of 2010, most newer laptop computers come equipped with built-in internal adapters.
Router
Wireless routers integrate a wireless access point, Ethernet switch, and internal router firmware application that provides IP routing, NAT, and DNS forwarding through an integrated WAN interface. A wireless router allows wired and wireless Ethernet LAN devices to connect to a (usually) single WAN device such as a cable modem, DSL modem, or optical modem. A wireless router allows all three devices, mainly the access point and router, to be configured through one central utility. This utility is usually an integrated web server that is accessible to wired and wireless LAN clients and often optionally to WAN clients. This utility may also be an application that is run on a computer, as is the case with Apple's AirPort, which is managed with the AirPort Utility on macOS and iOS.
Bridge
Wireless network bridges can act to connect two networks to form a single network at the data-link layer over Wi-Fi. The main standard is the wireless distribution system (WDS).
Wireless bridging can connect a wired network to a wireless network. A bridge differs from an access point: an access point typically connects wireless devices to one wired network. Two wireless bridge devices may be used to connect two wired networks over a wireless link, which is useful in situations where a wired connection may be unavailable, such as between two separate homes, or for devices that have wired but no wireless networking capability, such as consumer entertainment devices. Alternatively, a wireless bridge can be used to let a device that supports a wired connection operate at a faster wireless networking standard than is supported by the device's own wireless connectivity feature (external dongle or inbuilt); for example, it can enable Wireless-N speeds, up to the maximum supported speed on the wired Ethernet ports of both the bridge and the connected devices (including the wireless access point), for a device that only supports Wireless-G.
A dual-band wireless bridge can also be used to enable 5 GHz wireless network operation on a device that only supports 2.4 GHz wireless and has a wired Ethernet port.
Repeater
Wireless range extenders or wireless repeaters can extend the range of an existing wireless network. Strategically placed range extenders can elongate a signal area or allow the signal area to reach around barriers such as those in L-shaped corridors. Wireless devices connected through repeaters suffer increased latency for each hop, and there may be a reduction in the maximum available data throughput. In addition, multiple users on a network employing wireless range extenders consume the available bandwidth faster than a single user migrating around a network employing extenders. For this reason, wireless range extenders work best in networks supporting low traffic throughput requirements, such as a single user with a Wi-Fi-equipped tablet migrating around the combined extended and non-extended portions of the total connected network. Also, a wireless device connected to any of the repeaters in the chain has data throughput limited by the "weakest link" in the chain between the connection origin and connection end. Networks using wireless extenders are more prone to degradation from interference from neighbouring access points that border portions of the extended network and that happen to occupy the same channel as the extended network.
Embedded systems
The security standard Wi-Fi Protected Setup allows embedded devices with a limited graphical user interface to connect to the Internet with ease. Wi-Fi Protected Setup has two configurations: the push-button configuration and the PIN configuration. Such embedded devices, typically low-power, battery-operated systems, form part of the Internet of Things. Several Wi-Fi manufacturers design chips and modules for embedded Wi-Fi, such as GainSpan.
In recent years, embedded Wi-Fi modules have become available that incorporate a real-time operating system and provide a simple means of wirelessly enabling any device that can communicate via a serial port. This allows the design of simple monitoring devices. An example is a portable ECG device monitoring a patient at home. This Wi-Fi-enabled device can communicate via the Internet.
These Wi-Fi modules are designed by OEMs so that implementers need only minimal Wi-Fi knowledge to provide Wi-Fi connectivity for their products.
In June 2014, Texas Instruments introduced the first ARM Cortex-M4 microcontroller with an onboard dedicated Wi-Fi MCU, the SimpleLink CC3200. It makes embedded systems with Wi-Fi connectivity possible to build as single-chip devices, which reduces their cost and minimum size, making it more practical to build wireless-networked controllers into inexpensive ordinary objects.
Network security
The main issue with wireless network security is its simplified access to the network compared to traditional wired networks such as Ethernet. With wired networking, one must either gain access to a building (physically connecting into the internal network), or break through an external firewall. To access Wi-Fi, one must merely be within the range of the Wi-Fi network. Most business networks protect sensitive data and systems by attempting to disallow external access. Enabling wireless connectivity reduces security if the network uses inadequate or no encryption.
An attacker who has gained access to a Wi-Fi network router can initiate a DNS spoofing attack against any other user of the network by forging a response before the queried DNS server has a chance to reply.
Securing methods
A common measure to deter unauthorized users involves hiding the access point's name by disabling the SSID broadcast. While effective against the casual user, it is ineffective as a security method because the SSID is broadcast in the clear in response to a client SSID query. Another method is to only allow computers with known MAC addresses to join the network, but determined eavesdroppers may be able to join the network by spoofing an authorized address.
Wired Equivalent Privacy (WEP) encryption was designed to protect against casual snooping, but it is no longer considered secure. Tools such as AirSnort or Aircrack-ng can quickly recover WEP encryption keys. Because of WEP's weakness, the Wi-Fi Alliance approved Wi-Fi Protected Access (WPA), which uses TKIP. WPA was specifically designed to work with older equipment, usually through a firmware upgrade. Though more secure than WEP, WPA has known vulnerabilities.
The more secure WPA2 using Advanced Encryption Standard was introduced in 2004 and is supported by most new Wi-Fi devices. WPA2 is fully compatible with WPA. In 2017, a flaw in the WPA2 protocol was discovered, allowing a key replay attack, known as KRACK.
A flaw in a feature added to Wi-Fi in 2007, called Wi-Fi Protected Setup (WPS), let WPA and WPA2 security be bypassed, and effectively broken in many situations. The only remedy as of late 2011 was to turn off Wi-Fi Protected Setup, which is not always possible.
Virtual Private Networks can be used to improve the confidentiality of data carried through Wi-Fi networks, especially public Wi-Fi networks.
A URI using the WIFI scheme can specify the SSID, encryption type, password/passphrase, and whether the SSID is hidden, so users can follow links from QR codes, for instance, to join networks without having to manually enter the data. A MECARD-like format is supported by Android and iOS 11+.
Common format: WIFI:S:<SSID>;T:<WEP|WPA|blank>;P:<PASSWORD>;H:<true|false|blank>;
Sample WIFI:S:MySSID;T:WPA;P:MyPassW0rd;;
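A minimal Python sketch that builds a string in this format; the helper function is hypothetical, and escaping of special characters (required by the full MECARD-like syntax) is only sketched.

def wifi_uri(ssid, auth="WPA", password=None, hidden=False):
    """Build a WIFI: configuration string as used in Wi-Fi QR codes.
    Special characters in the SSID or password are escaped with a backslash."""
    def esc(value):
        for ch in ('\\', ';', ',', ':', '"'):
            value = value.replace(ch, '\\' + ch)
        return value

    parts = [f"WIFI:S:{esc(ssid)}", f"T:{auth}"]
    if password:
        parts.append(f"P:{esc(password)}")
    if hidden:
        parts.append("H:true")
    return ";".join(parts) + ";;"

if __name__ == "__main__":
    print(wifi_uri("MySSID", "WPA", "MyPassW0rd"))
    # -> WIFI:S:MySSID;T:WPA;P:MyPassW0rd;;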
Data security risks
The older wireless-encryption standard, Wired Equivalent Privacy (WEP), has been shown to be easily breakable even when correctly configured. Wi-Fi Protected Access (WPA and WPA2) encryption, which became available in devices in 2003, aimed to solve this problem. Wi-Fi access points typically default to an encryption-free (open) mode. Novice users benefit from a zero-configuration device that works out of the box, but this default does not enable any wireless security, providing open wireless access to a LAN. To turn security on requires the user to configure the device, usually via a software graphical user interface (GUI). On unencrypted Wi-Fi networks, connected devices can monitor and record data (including personal information). Such networks can only be secured by using other means of protection, such as a VPN, or by using Hypertext Transfer Protocol over Transport Layer Security (HTTPS).
Wi-Fi Protected Access encryption (WPA2) is considered secure, provided a strong passphrase is used. In 2018, WPA3 was announced as a replacement for WPA2, increasing security; it rolled out on 26 June of that year.
Piggybacking
Piggybacking refers to access to a wireless Internet connection by bringing one's computer within the range of another's wireless connection, and using that service without the subscriber's explicit permission or knowledge.
During the early popular adoption of 802.11, providing open access points for anyone within range to use was encouraged to cultivate wireless community networks, particularly since people on average use only a fraction of their downstream bandwidth at any given time.
Recreational logging and mapping of other people's access points have become known as wardriving. Indeed, many access points are intentionally installed without security turned on so that they can be used as a free service. Providing access to one's Internet connection in this fashion may breach the Terms of Service or contract with the ISP. These activities do not result in sanctions in most jurisdictions; however, legislation and case law differ considerably across the world. A proposal to leave graffiti describing available services was called warchalking.
Piggybacking often occurs unintentionally – a technically unfamiliar user might not change the default "unsecured" settings to their access point and operating systems can be configured to connect automatically to any available wireless network. A user who happens to start up a laptop in the vicinity of an access point may find the computer has joined the network without any visible indication. Moreover, a user intending to join one network may instead end up on another one if the latter has a stronger signal. In combination with automatic discovery of other network resources (see DHCP and Zeroconf) this could lead wireless users to send sensitive data to the wrong middle-man when seeking a destination (see man-in-the-middle attack). For example, a user could inadvertently use an unsecured network to log into a website, thereby making the login credentials available to anyone listening, if the website uses an insecure protocol such as plain HTTP without TLS.
On an unsecured access point, an unauthorized user can obtain security information (factory preset passphrase and/or Wi-Fi Protected Setup PIN) from a label on a wireless access point and use this information (or connect by the Wi-Fi Protected Setup pushbutton method) to commit unauthorized and/or unlawful activities.
Societal aspects
Wireless internet access has become much more embedded in society. It has thus changed how society functions in many ways.
Influence on developing countries
Over half the world's population does not have access to the internet, most prominently in rural areas of developing nations. Technology that has been implemented in more developed nations is often costly and energy-inefficient. This has led developing nations to use more low-tech networks, frequently implementing renewable power sources that can be maintained solely through solar power, creating networks that are resistant to disruptions such as power outages. For instance, in 2007 a 450 km (280 mile) network between Cabo Pantoja and Iquitos in Peru was erected in which all equipment is powered only by solar panels. These long-range Wi-Fi networks have two main uses: to offer internet access to populations in isolated villages, and to provide healthcare to isolated communities. In the case of the aforementioned example, it connects the central hospital in Iquitos to 15 medical outposts that are intended for remote diagnosis.
Work habits
Access to Wi-Fi in public spaces such as cafes or parks allows people, in particular freelancers, to work remotely. While the accessibility of Wi-Fi is the strongest factor when choosing a place to work (75% of people would choose a place that provides Wi-Fi over one that does not), other factors influence the choice of a specific hotspot. These include the availability of other resources, such as books, the location of the workplace, and the social aspect of meeting other people in the same place. Moreover, the increase in people working from public places results in more customers for local businesses, thus providing an economic stimulus to the area.
Additionally, the same study noted that wireless connections provide more freedom of movement while working. Whether working at home or at the office, a wireless connection allows people to move between different rooms or areas. In some offices (notably Cisco's offices in New York), employees do not have assigned desks but can work from any office by connecting their laptops to a Wi-Fi hotspot.
Housing
The internet has become an integral part of daily life. 81.9% of American households have internet access. Additionally, 89% of American households with broadband connect via wireless technologies, and 72.9% of American households have Wi-Fi.
Wi-Fi networks have also affected how the interiors of homes and hotels are arranged. For instance, architects have described that their clients no longer want only one room as their home office, but would like to work near the fireplace or have the possibility of working in different rooms. This contradicts architects' pre-existing ideas of the use of the rooms they designed. Additionally, some hotels have noted that guests prefer to stay in certain rooms because they receive a stronger Wi-Fi signal there.
Health concerns
The World Health Organization (WHO) says, "no health effects are expected from exposure to RF fields from base stations and wireless networks", but notes that it promotes research into effects from other RF sources. In 2011, the WHO's International Agency for Research on Cancer (IARC) classified radiofrequency electromagnetic fields as "possibly carcinogenic to humans" (Group 2B, a category used when "a causal association is considered credible, but when chance, bias or confounding cannot be ruled out with reasonable confidence"); this classification was based on risks associated with wireless phone use rather than Wi-Fi networks.
The United Kingdom's Health Protection Agency reported in 2007 that exposure to Wi-Fi for a year results in the "same amount of radiation from a 20-minute mobile phone call".
A review of studies involving 725 people who claimed electromagnetic hypersensitivity, "...suggests that 'electromagnetic hypersensitivity' is unrelated to the presence of an EMF, although more research into this phenomenon is required."
Alternatives
Several other wireless technologies provide alternatives to Wi-Fi for different use cases:
Bluetooth, a short-distance network
Bluetooth Low Energy, a low-power variant of Bluetooth
Zigbee, a low-power, low data rate, short-distance communication protocol
Cellular networks, used by smartphones
WiMax, for providing long range wireless internet connectivity
LoRa, for long range wireless with low data rate
Some alternatives are "no new wires", re-using existing cable:
G.hn, which uses existing home wiring, such as phone and power lines
Several wired technologies for computer networking, which provide viable alternatives to Wi-Fi:
Ethernet over twisted pair
See also
Gi-Fi—a term used by some trade press to refer to faster versions of the IEEE 802.11 standards
HiperLAN
Indoor positioning system
Li-Fi
List of WLAN channels
Operating system Wi-Fi support
Power-line communication
San Francisco Digital Inclusion Strategy
WiGig
Wireless Broadband Alliance
Wi-Fi Direct
Hotspot (Wi-Fi)
Bluetooth
Notes
References
Further reading
Australian inventions
Computer-related introductions in 1999
Networking standards
Wireless communication systems |
64019 | https://en.wikipedia.org/wiki/Riding%20the%20Bullet | Riding the Bullet | Riding the Bullet is a horror novella by American writer Stephen King. It marked King's debut on the Internet. Simon & Schuster, with technology by SoftLock, first published Riding the Bullet in 2000 as the world's first mass-market e-book, available for download at $2.50. That year, the novella was nominated for the Bram Stoker Award for Superior Achievement in Long Fiction and the International Horror Guild Award for Best Long Form. In 2002, the novella was included in King's collection Everything's Eventual.
Publication
During the first 24 hours, over 400,000 copies of Riding the Bullet were downloaded, jamming SoftLock's server. Some Stephen King fans waited hours for the download.
With over 500,000 downloads, Stephen King seemed to pave the way for the future of publishing. The actual number of readers was unclear because the encryption caused countless computers to crash.
The total financial gross of the electronic publication remains uncertain. Initially offered at $2.50 by SoftLock and then Simon & Schuster, it was later available free for download from Amazon and Barnes & Noble.
In 2009, Lonely Road Books announced the impending release of Riding the Bullet: The Deluxe Special Edition Double, by Stephen King and Mick Garris, as an oversized slipcased hardcover bound in the flip book or tête-bêche format (like an Ace Double) featuring the novella Riding the Bullet, the original script for the eponymous 2004 film by Mick Garris, and artwork by Alan M. Clark and Bernie Wrightson. The book was available in three editions:
Collector's Gift Edition: limited to 3000 slipcased copies (not signed)
Limited Edition of 500 copies (signed by Mick Garris and the artist)
Lettered Edition of 52 copies (signed by Stephen King)
Plot summary
Alan Parker is a student at the University of Maine who is trying to find himself. He gets a call from a neighbor in his hometown of Lewiston, telling him that his mother has been taken to the hospital after having a stroke. Lacking a functioning car, Parker decides to hitchhike the 120 miles (200 km) south to visit his mother.
His first ride is with a hippie in a VW bus who is a horrible driver. While toking on a joint, the driver almost has a head-on collision, loses control, and hits a tree. Alan walks away from the wreck and continues hitching. His second ride is with an old man who continually tugs at his crotch in a car that stinks of urine. Eventually frightened and glad to escape the vehicle, Alan starts walking, thumbing his next ride. Coming upon a graveyard, he begins to explore it and notices a headstone for a stranger named George Staub (in German, Staub means dust), which reads: "Well Begun, Too Soon Done". Sure enough, the next car to pick him up is driven by George Staub, complete with black stitches around his neck where his head had been sewn back on after being severed, and wearing a button saying, "I Rode The Bullet At Thrill Village, Laconia."
During the ride, George talks to Alan about the amusement park ride he was too scared to ride as a kid: The Bullet in Thrill Village, Laconia, New Hampshire. George tells Alan that before they reach the lights of town, Alan must choose who goes on the death ride with George: Alan or his mother. In a moment of fright, Alan saves himself and tells him: "Take her. Take my mother."
George shoves Alan out of the car. Alan reappears alone at the graveyard, wearing the "I Rode the Bullet at Thrill Village" button. He eventually reaches the hospital, where he learns that despite his guilt and the impending feeling that his mother is dead or will die any moment, she is fine.
Alan takes the button and treasures it as a good (or bad) luck charm. His mother returns to work. Alan graduates and takes care of his mother for several years, until she suffers another stroke.
One day, Alan loses the button and receives a phone call; he knows what the call is about. He finds the button underneath his mother's bed and, after a final moment of sadness, guilt, and meditation, decides to carry on. His mother's "ride" is over.
Film
A movie adaptation of the story, starring Jonathan Jackson, Barbara Hershey and David Arquette, was released in 2004.
Reception
F&SF (The Magazine of Fantasy & Science Fiction) reviewer Charles de Lint praised the novella as "a terrific story, highlighting King's gift for characterization and his sheer narrative drive."
In contrast, The New York Times Christopher Lehmann-Haupt, who read the book in both available online formats (computer download and an e-book supplied by the publisher, neither of which permitted a user to print out a copy), was more critical. He disliked reading digital content on a backlit monitor ("I was also restlessly aware of the unusual effort it was taking to read onscreen") and the book's content ("after getting off to such a strong start, Mr. King writes himself into a corner that makes Alan's scary adventure seem something of a shaggy dog story"). He concludes: "reading 'Riding the Bullet,' I sorely missed the solidity of good old print on paper. And who knows, maybe old-fashioned print would have made Mr. King's story seem a little more substantial?"
See also
Stephen King short fiction bibliography
References
External links
2000 American novels
Novellas by Stephen King
Horror short stories
American novels adapted into films
Novels first published online |
66181 | https://en.wikipedia.org/wiki/Role-based%20access%20control | Role-based access control | In computer systems security, role-based access control (RBAC) or role-based security is an approach to restricting system access to authorized users. It is an approach to implement mandatory access control (MAC) or discretionary access control (DAC).
Role-based access control (RBAC) is a policy-neutral access-control mechanism defined around roles and privileges. The components of RBAC such as role-permissions, user-role and role-role relationships make it simple to perform user assignments. A study by NIST has demonstrated that RBAC addresses many needs of commercial and government organizations. RBAC can be used to facilitate administration of security in large organizations with hundreds of users and thousands of permissions. Although RBAC is different from MAC and DAC access control frameworks, it can enforce these policies without any complication.
Design
Within an organization, roles are created for various job functions. The permissions to perform certain operations are assigned to specific roles. Members or staff (or other system users) are assigned particular roles, and through those role assignments acquire the permissions needed to perform particular system functions. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning appropriate roles to the user's account; this simplifies common operations, such as adding a user, or changing a user's department.
Role based access control interference is a relatively new issue in security applications, where multiple user accounts with dynamic access levels may lead to encryption key instability, allowing an outside user to exploit the weakness for unauthorized access. Key sharing applications within dynamic virtualized environments have shown some success in addressing this problem.
Three primary rules are defined for RBAC:
Role assignment: A subject can exercise a permission only if the subject has selected or been assigned a role.
Role authorization: A subject's active role must be authorized for the subject. With rule 1 above, this rule ensures that users can take on only roles for which they are authorized.
Permission authorization: A subject can exercise a permission only if the permission is authorized for the subject's active role. With rules 1 and 2, this rule ensures that users can exercise only permissions for which they are authorized.
Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles subsume permissions owned by sub-roles.
With the concepts of role hierarchy and constraints, one can control RBAC to create or simulate lattice-based access control (LBAC). Thus RBAC can be considered to be a superset of LBAC.
When defining an RBAC model, the following conventions are useful:
S = Subject = A person or automated agent
R = Role = Job function or title which defines an authority level
P = Permissions = An approval of a mode of access to a resource
SE = Session = A mapping involving S, R and/or P
SA = Subject Assignment
PA = Permission Assignment
RH = Partially ordered Role Hierarchy. RH can also be written: ≥ (The notation: x ≥ y means that x inherits the permissions of y.)
A subject can have multiple roles.
A role can have multiple subjects.
A role can have many permissions.
A permission can be assigned to many roles.
An operation can be assigned to many permissions.
A permission can be assigned to many operations.
A constraint places a restrictive rule on the potential inheritance of permissions from opposing roles, thus it can be used to achieve appropriate separation of duties. For example, the same person should not be allowed to both create a login account and to authorize the account creation.
Thus, using set theory notation:
PA ⊆ P × R is a many-to-many permission-to-role assignment relation.
SA ⊆ S × R is a many-to-many subject-to-role assignment relation.
A subject may have multiple simultaneous sessions with/in different roles.
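A minimal Python sketch of the model above, using the SA and PA relations and a simple role hierarchy RH. The role names, permissions and hierarchy are invented for illustration; a real system would add sessions, constraints and persistence.

SA = {                       # subject-to-role assignment (many-to-many)
    "alice": {"teller"},
    "carol": {"supervisor"},
}
PA = {                       # permission-to-role assignment (many-to-many)
    "teller": {"open_account", "post_transaction"},
    "supervisor": {"approve_overdraft"},
}
RH = {                       # role hierarchy: senior role -> junior roles whose permissions it inherits
    "supervisor": {"teller"},
}

def permissions_of(role):
    """Permissions granted to a role, including those inherited via RH."""
    perms = set(PA.get(role, set()))
    for junior in RH.get(role, set()):
        perms |= permissions_of(junior)
    return perms

def check_access(subject, active_role, permission):
    """Rules 1 and 2: the active role must be assigned to the subject.
    Rule 3: the permission must be authorized for that (possibly inheriting) role."""
    if active_role not in SA.get(subject, set()):
        return False
    return permission in permissions_of(active_role)

if __name__ == "__main__":
    print(check_access("alice", "teller", "post_transaction"))       # True
    print(check_access("alice", "supervisor", "approve_overdraft"))  # False: role not assigned
    print(check_access("carol", "supervisor", "post_transaction"))   # True: inherited from teller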
Standardized levels
The NIST/ANSI/INCITS RBAC standard (2004) recognizes three levels of RBAC:
core RBAC
hierarchical RBAC, which adds support for inheritance between roles
constrained RBAC, which adds separation of duties
Relation to other models
RBAC is a flexible access control technology whose flexibility allows it to implement DAC or MAC. DAC with groups (e.g., as implemented in POSIX file systems) can emulate RBAC. MAC can simulate RBAC if the role graph is restricted to a tree rather than a partially ordered set.
Prior to the development of RBAC, the Bell-LaPadula (BLP) model was synonymous with MAC and file system permissions were synonymous with DAC. These were considered to be the only known models for access control: if a model was not BLP, it was considered to be a DAC model, and vice versa. Research in the late 1990s demonstrated that RBAC falls in neither category. Unlike context-based access control (CBAC), RBAC does not look at the message context (such as a connection's source). RBAC has also been criticized for leading to role explosion, a problem in large enterprise systems which require access control of finer granularity than what RBAC can provide as roles are inherently assigned to operations and data types. In resemblance to CBAC, an Entity-Relationship Based Access Control (ERBAC, although the same acronym is also used for modified RBAC systems, such as Extended Role-Based Access Control) system is able to secure instances of data by considering their association to the executing subject.
Comparing to ACL
Access control lists (ACLs) are used in traditional discretionary access-control systems to affect low-level data-objects. RBAC differs from ACL in assigning permissions to operations which change the direct-relations between several entities (see: ACLg below). For example, an ACL could be used for granting or denying write access to a particular system file, but it wouldn't dictate how that file could be changed. In an RBAC-based system, an operation might be to 'create a credit account' transaction in a financial application or to 'populate a blood sugar level test' record in a medical application. A Role is thus a sequence of operations within a larger activity. RBAC has been shown to be particularly well suited to separation of duties (SoD) requirements, which ensure that two or more people must be involved in authorizing critical operations. Necessary and sufficient conditions for safety of SoD in RBAC have been analyzed. An underlying principle of SoD is that no individual should be able to effect a breach of security through dual privilege. By extension, no person may hold a role that exercises audit, control or review authority over another, concurrently held role.
Then again, a "minimal RBAC Model", RBACm, can be compared with an ACL mechanism, ACLg, where only groups are permitted as entries in the ACL. Barkley (1997) showed that RBACm and ACLg are equivalent.
In modern SQL implementations, like ACL of the CakePHP framework, ACLs also manage groups and inheritance in a hierarchy of groups. Under this aspect, specific "modern ACL" implementations can be compared with specific "modern RBAC" implementations, better than "old (file system) implementations".
For data interchange, and for "high level comparisons", ACL data can be translated to XACML.
Attribute-based access control
Attribute-based access control or ABAC is a model which evolves from RBAC to consider additional attributes in addition to roles and groups. In ABAC, it is possible to use attributes of:
the user e.g. citizenship, clearance,
the resource e.g. classification, department, owner,
the action, and
the context e.g. time, location, IP.
ABAC is policy-based in the sense that it uses policies rather than static permissions to define what is allowed or what is not allowed.
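As an illustration of this policy-based evaluation, here is a small Python sketch in which policies are plain functions over attribute dictionaries. The attribute names and the single example policy are invented for illustration only.

def abac_allows(policies, user, resource, action, context):
    """Return True if any policy grants the request; attributes are plain dicts."""
    return any(policy(user, resource, action, context) for policy in policies)

# Example policy: staff may read documents of their own department during office hours.
def same_department_read(user, resource, action, context):
    return (action == "read"
            and user.get("department") == resource.get("department")
            and 9 <= context.get("hour", 0) < 17)

if __name__ == "__main__":
    user = {"id": "alice", "department": "finance", "clearance": "internal"}
    doc = {"id": "q3-report", "department": "finance", "classification": "internal"}
    print(abac_allows([same_department_read], user, doc, "read", {"hour": 10}))  # True
    print(abac_allows([same_department_read], user, doc, "read", {"hour": 22}))  # False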
Use and availability
The use of RBAC to manage user privileges (computer permissions) within a single system or application is widely accepted as a best practice. A 2010 report prepared for NIST by the Research Triangle Institute analyzed the economic value of RBAC for enterprises, and estimated benefits per employee from reduced employee downtime, more efficient provisioning, and more efficient access control policy administration.
In an organization with a heterogeneous IT infrastructure and requirements that span dozens or hundreds of systems and applications, using RBAC to manage sufficient roles and assign adequate role memberships becomes extremely complex without hierarchical creation of roles and privilege assignments. Newer systems extend the older NIST RBAC model to address the limitations of RBAC for enterprise-wide deployments. The NIST model was adopted as a standard by INCITS as ANSI/INCITS 359-2004. A discussion of some of the design choices for the NIST model has also been published.
See also
References
Further reading
External links
FAQ on RBAC models and standards
Role Based Access Controls at NIST
XACML core and hierarchical role based access control profile
Institute for Cyber Security at the University of Texas San Antonio
Practical experiences in implementing RBAC
Computer security models
Access control |
66245 | https://en.wikipedia.org/wiki/DSC | DSC | DSC may refer to:
Academia
D.Sc., Doctor of Science
Dalton State College, Georgia
Daytona State College, Florida
Deep Springs College, California
District Selection Committee
Dixie State University, Utah
Doctor of Surgical Chiropody, superseded in the 1960s by Doctor of Podiatric Medicine
Science and technology
.dsc, filename extension for files with description of source package in Debian
DECT Standard Cipher, an encryption algorithm used by wireless telephone systems
Differential scanning calorimetry, or the differential scanning calorimeter
Digital Selective Calling in marine telecommunications
Digital setting circles on telescopes
Digital Signal Controller, a hybrid microcontroller and digital signal processor
Digital Still Camera, used in automatic numbering of files of certain manufacturers within the Design rule for Camera File system (DCF) standard
Document Structuring Conventions in PostScript programming
Distributed source coding, a technique regarding the compression of multiple correlated information sources that do not communicate with each other.
Dye-sensitized solar cell
Dynamic stability control, computerized technology that improves a vehicle's stability
Dynamic susceptibility contrast, a type of perfusion MRI
Desired State Configuration, a feature of Windows PowerShell that enables the deployment and management of configuration data for systems
Display Stream Compression, a technique part of the VESA standard, used to lower the bandwidth used by a video signal
Dice similarity coefficient
A subarctic climate in Köppen climate classification
Government, politics and the military
Defence Security Corps, Indian military agency
Defense Security Command, South Korean military agency
Distinguished Service Cross (Australia), Australian military award
Distinguished Service Cross (United Kingdom), British naval award
Distinguished Service Cross (United States), American military award
United States District Court for the District of South Carolina
Directory of Social Change, British charity
Media, sports and entertainment
Daily Source Code, a podcast by Adam Curry
Dave, Shelly, and Chainsaw, a long-running morning radio show in the San Diego, California area
Dresdner SC, a German multisport club playing in Dresden, Saxony
Dubai Sports City, a multi-venue sports complex in Dubai, United Arab Emirates
Star Trek: Discovery, an American science fiction television series and part of the Star Trek franchise officially abbreviated to "DSC".
U.S. Postal Service Pro Cycling Team (UCI code: DSC), a United States-based professional road bicycle racing team
Dutch Swing College Band
Other
Down Syndrome Centre, a registered charity in Ireland
Democratic Socialists of Canada, Canadian political organization |
66255 | https://en.wikipedia.org/wiki/Bzip2 | Bzip2 | bzip2 is a free and open-source file compression program that uses the Burrows–Wheeler algorithm. It only compresses single files and is not a file archiver. It was developed by Julian Seward, and maintained by Mark Wielaard and Micah Snyder.
History
Seward made the first public release of bzip2, version 0.15, in July 1996. The compressor's stability and popularity grew over the next several years, and Seward released version 1.0 in late 2000. Following a nine-year hiatus of updates for the project since 2010, on 4 June 2019 Federico Mena accepted maintainership of the bzip2 project. Since June 2021, the maintainer is Micah Snyder.
Implementation
Bzip2 uses several layers of compression techniques stacked on top of each other, which occur in the following order during compression and the reverse order during decompression:
Run-length encoding (RLE) of initial data.
Burrows–Wheeler transform (BWT), or block sorting.
Move-to-front (MTF) transform.
Run-length encoding (RLE) of MTF result.
Huffman coding.
Selection between multiple Huffman tables.
Unary base-1 encoding of Huffman table selection.
Delta encoding (Δ) of Huffman-code bit lengths.
Sparse bit array showing which symbols are used.
Any sequence of 4 to 255 consecutive duplicate symbols is replaced by the first 4 symbols and a repeat length between 0 and 251. Thus the sequence AAAAAAABBBBCCCD is replaced with AAAA\3BBBB\0CCCD, where \3 and \0 represent byte values 3 and 0 respectively. Runs of symbols are always transformed after 4 consecutive symbols, even if the run-length is set to zero, to keep the transformation reversible.
In the worst case, it can cause an expansion of 1.25, and in the best case, a reduction to <0.02. While the specification theoretically allows for runs of length 256–259 to be encoded, the reference encoder will not produce such output.
The author of bzip2 has stated that the RLE step was a historical mistake and was only intended to protect the original BWT implementation from pathological cases.
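For illustration, a minimal Python sketch of this first-stage run-length encoding (four literal symbols followed by a repeat count of 0-251). It is written for clarity rather than speed and is not taken from the reference implementation.

def initial_rle(data: bytes) -> bytes:
    """bzip2-style stage-1 RLE: runs of 4-255 identical bytes become the
    first four bytes followed by a count byte in the range 0-251."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        if run >= 4:
            out += data[i:i + 4]          # the four literal copies
            out.append(run - 4)           # repeat length 0..251
        else:
            out += data[i:i + run]
        i += run
    return bytes(out)

if __name__ == "__main__":
    print(initial_rle(b"AAAAAAABBBBCCCD"))   # b'AAAA\x03BBBB\x00CCCD'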
The Burrows–Wheeler transform is the reversible block-sort that is at the core of bzip2. The block is entirely self-contained, with input and output buffers remaining of the same size; in bzip2, the operating limit for this stage is 900 kB. For the block-sort, a (notional) matrix is created, in which row i contains the whole of the buffer, rotated to start from the i-th symbol. Following rotation, the rows of the matrix are sorted into alphabetic (numerical) order. A 24-bit pointer is stored marking the starting position for when the block is untransformed. In practice, it is not necessary to construct the full matrix; rather, the sort is performed using pointers for each position in the buffer. The output buffer is the last column of the matrix; this contains the whole buffer, but reordered so that it is likely to contain large runs of identical symbols.
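An intentionally naive Python sketch of this block-sort. It builds the notional rotation matrix explicitly, which would be impractical for real 900 kB blocks, but it shows the sorted rotations, the last column, and the stored starting pointer.

def bwt(block: bytes):
    """Naive Burrows-Wheeler transform: returns (last column, original row index).
    Real implementations sort rotation pointers instead of building the matrix."""
    rotations = sorted(block[i:] + block[:i] for i in range(len(block)))
    orig_ptr = rotations.index(block)        # pointer stored so the block can be untransformed
    last_column = bytes(rot[-1] for rot in rotations)
    return last_column, orig_ptr

if __name__ == "__main__":
    out, ptr = bwt(b"banana")
    print(out, ptr)    # b'nnbaaa' 3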
The move-to-front transform again does not alter the size of the processed block. Each of the symbols in use in the document is placed in an array. When a symbol is processed, it is replaced by its location (index) in the array and that symbol is shuffled to the front of the array. The effect is that immediately recurring symbols are replaced by zero symbols (long runs of any arbitrary symbol thus become runs of zero symbols), while other symbols are remapped according to their local frequency.
Much "natural" data contains identical symbols that recur within a limited range (text is a good example). As the MTF transform assigns low values to symbols that reappear frequently, this results in a data stream containing many symbols in the low integer range, many of them being identical (different recurring input symbols can actually map to the same output symbol). Such data can be very efficiently encoded by any legacy compression method.
Long strings of zeros in the output of the move-to-front transform (which come from repeated symbols in the output of the BWT) are replaced by a sequence of two special codes, RUNA and RUNB, which represent the run-length as a binary number. Actual zeros are never encoded in the output; a lone zero becomes RUNA. (This step in fact is done at the same time as MTF is; whenever MTF would produce zero, it instead increases a counter to then encode with RUNA and RUNB.)
The sequence 0, 0, 0, 0, 0, 1 would be represented as RUNA, RUNB, 1; RUNA, RUNB represents the value 5 as described below. The run-length code is terminated by reaching another normal symbol. This RLE process is more flexible than the initial RLE step, as it is able to encode arbitrarily long integers (in practice, this is usually limited by the block size, so that this step does not encode a run longer than the block itself). The run-length is encoded in this fashion: assigning place values of 1 to the first bit, 2 to the second, 4 to the third, etc. in the sequence, multiply each place value in a RUNB spot by 2, and add all the resulting place values (for RUNA and RUNB values alike) together. This is similar to base-2 bijective numeration. Thus, the sequence RUNA, RUNB results in the value (1 + 2 × 2) = 5. As a more complicated example:
RUNA RUNB RUNA RUNA RUNB   (ABAAB)
   1    2    4    8   16   place values
   1    4    4    8   32   contributions (RUNB doubles its place value), summing to 49
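A small Python check of this arithmetic (RUNA contributes its place value once, RUNB twice; place values are 1, 2, 4, 8, ...):

def run_length(symbols):
    """Decode a RUNA/RUNB sequence into the run length it represents."""
    total = 0
    for position, symbol in enumerate(symbols):
        place_value = 1 << position               # 1, 2, 4, 8, ...
        total += place_value * (2 if symbol == "RUNB" else 1)
    return total

if __name__ == "__main__":
    print(run_length(["RUNA", "RUNB"]))                          # 5
    print(run_length(["RUNA", "RUNB", "RUNA", "RUNA", "RUNB"]))  # 49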
This process replaces fixed-length symbols in the range 0–258 with variable-length codes based on the frequency of use. More frequently used codes end up shorter (2–3 bits), whilst rare codes can be allocated up to 20 bits. The codes are selected carefully so that no sequence of bits can be confused for a different code.
The end-of-stream code is particularly interesting. If there are n different bytes (symbols) used in the uncompressed data, then the Huffman code will consist of two RLE codes (RUNA and RUNB), n − 1 symbol codes and one end-of-stream code. Because of the combined result of the MTF and RLE encodings in the previous two steps, there is never any need to explicitly reference the first symbol in the MTF table (would be zero in the ordinary MTF), thus saving one symbol for the end-of-stream marker (and explaining why only n − 1 symbols are coded in the Huffman tree). In the extreme case where only one symbol is used in the uncompressed data, there will be no symbol codes at all in the Huffman tree, and the entire block will consist of RUNA and RUNB (implicitly repeating the single byte) and an end-of-stream marker with value 2.
0: RUNA,
1: RUNB,
2–257: byte values 0–255,
258: end of stream, finish processing (could be as low as 2).
Several identically sized Huffman tables can be used with a block if the gain from using them is greater than the cost of including the extra table. At least 2 and up to 6 tables can be present, with the most appropriate table being reselected before every 50 symbols processed. This has the advantage of having very responsive Huffman dynamics without having to continuously supply new tables, as would be required in DEFLATE. Run-length encoding in the previous step is designed to take care of codes that have an inverse probability of use higher than the shortest code Huffman code in use.
If multiple Huffman tables are in use, the selection of each table (numbered 0 to 5) is done from a list by a zero-terminated bit run between 1 and 6 bits in length. The selection is into a MTF list of the tables. Using this feature results in a maximal expansion of around 1.015, but generally less. This expansion is likely to be greatly over-shadowed by the advantage of selecting more appropriate Huffman tables, and the common-case of continuing to use the same Huffman table is represented as a single bit. Rather than unary encoding, effectively this is an extreme form of a Huffman tree, where each code has half the probability of the previous code.
Huffman-code bit lengths are required to reconstruct each of the used canonical Huffman tables. Each bit length is stored as an encoded difference against the previous-code bit length. A zero bit (0) means that the previous bit length should be duplicated for the current code, whilst a one bit (1) means that a further bit should be read and the bit length incremented or decremented based on that value. In the common case a single bit is used per symbol per table and the worst case—going from length 1 to length 20—would require approximately 37 bits. As a result of the earlier MTF encoding, code lengths would start at 2–3 bits long (very frequently used codes) and gradually increase, meaning that the delta format is fairly efficient, requiring around 300 bits (38 bytes) per full Huffman table.
A bitmap is used to show which symbols are used inside the block and should be included in the Huffman trees. Binary data is likely to use all 256 symbols representable by a byte, whereas textual data may only use a small subset of available values, perhaps covering the ASCII range between 32 and 126. Storing 256 zero bits would be inefficient if they were mostly unused. A sparse method is used: the 256 symbols are divided up into 16 ranges, and only if symbols are used within that block is a 16-bit array included. The presence of each of these 16 ranges is indicated by an additional 16-bit bit array at the front. The total bitmap uses between 32 and 272 bits of storage (4–34 bytes). For contrast, the DEFLATE algorithm would show the absence of symbols by encoding the symbols as having a zero bit length with run-length encoding and additional Huffman coding.
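An illustrative Python sketch of this two-level map: a 16-bit map of which 16-symbol ranges are present, followed by one 16-bit map per present range. The exact bit ordering used in the on-disk format is glossed over here; the layout below is for illustration only.

def sparse_symbol_bitmap(used_symbols):
    """Build a two-level 'symbols used' map in the spirit of bzip2.
    Returns (range_map, per_range_maps): range_map has a bit set for each
    16-symbol range that contains at least one used symbol, and each entry
    of per_range_maps is the 16-bit map for one present range."""
    used = set(used_symbols)
    range_map = 0
    per_range_maps = []
    for rng in range(16):
        bits = 0
        for offset in range(16):
            if 16 * rng + offset in used:
                bits |= 1 << (15 - offset)   # illustrative bit order: lowest symbol first
        if bits:
            range_map |= 1 << (15 - rng)
            per_range_maps.append(bits)
    return range_map, per_range_maps

if __name__ == "__main__":
    # Text using only 'A'..'Z' (byte values 65..90) touches just two of the 16 ranges.
    rm, maps = sparse_symbol_bitmap(range(65, 91))
    print(f"{rm:016b}", [f"{m:016b}" for m in maps])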
File format
No formal specification for bzip2 exists, although an informal specification has been reverse engineered from the reference implementation.
As an overview, a .bz2 stream consists of a 4-byte header, followed by zero or more compressed blocks, immediately followed by an end-of-stream marker containing a 32-bit CRC for the plaintext whole stream processed. The compressed blocks are bit-aligned and no padding occurs.
.magic:16 = 'BZ' signature/magic number
.version:8 = 'h' for Bzip2 ('H'uffman coding), '0' for Bzip1 (deprecated)
.hundred_k_blocksize:8 = '1'..'9' block-size 100 kB-900 kB (uncompressed)
.compressed_magic:48 = 0x314159265359 (BCD (pi))
.crc:32 = checksum for this block
.randomised:1 = 0=>normal, 1=>randomised (deprecated)
.origPtr:24 = starting pointer into BWT for after untransform
.huffman_used_map:16 = bitmap, of ranges of 16 bytes, present/not present
.huffman_used_bitmaps:0..256 = bitmap, of symbols used, present/not present (multiples of 16)
.huffman_groups:3 = 2..6 number of different Huffman tables in use
.selectors_used:15 = number of times that the Huffman tables are swapped (each 50 symbols)
*.selector_list:1..6 = zero-terminated bit runs (0..62) of MTF'ed Huffman table (*selectors_used)
.start_huffman_length:5 = 0..20 starting bit length for Huffman deltas
*.delta_bit_length:1..40 = 0=>next symbol; 1=>alter length
{ 1=>decrement length; 0=>increment length } (*(symbols+2)*groups)
.contents:2..∞ = Huffman encoded data stream until end of block (max. 7372800 bit)
.eos_magic:48 = 0x177245385090 (BCD sqrt(pi))
.crc:32 = checksum for whole stream
.padding:0..7 = align to whole byte
Because of the first-stage RLE compression (see above), the maximum length of plaintext that a single 900 kB bzip2 block can contain is around 46 MB (45,899,236 bytes). This can occur if the whole plaintext consists entirely of repeated values (the resulting .bz2 file in this case is 46 bytes long). An even smaller file of 40 bytes can be achieved by using an input containing entirely values of 251, an apparent compression ratio of 1147480.9:1.
The compressed blocks in bzip2 can be independently decompressed, without having to process earlier blocks. This means that bzip2 files can be decompressed in parallel, making it a good format for use in big data applications with cluster computing frameworks like Hadoop and Apache Spark.
Efficiency
bzip2 compresses most files more effectively than the older LZW (.Z) and Deflate (.zip and .gz) compression algorithms, but is considerably slower. LZMA is generally more space-efficient than bzip2 at the expense of even slower compression speed, while having much faster decompression.
bzip2 compresses data in blocks of size between 100 and 900 kB and uses the Burrows–Wheeler transform to convert frequently-recurring character sequences into strings of identical letters. It then applies move-to-front transform and Huffman coding. bzip2's ancestor bzip used arithmetic coding instead of Huffman. The change was made because of a software patent restriction.
bzip2 performance is asymmetric, as decompression is relatively fast. Motivated by the large CPU time required for compression, a modified version was created in 2003 called pbzip2 that supported multi-threading, giving almost linear speed improvements on multi-CPU and multi-core computers. This functionality has not been incorporated into the main project.
Like gzip, bzip2 is only a data compressor. It is not an archiver like tar or ZIP; the program itself has no facilities for multiple files, encryption or archive-splitting, but, in the UNIX tradition, relies instead on separate external utilities such as tar and GnuPG for these tasks.
The grep-based bzgrep tool allows directly searching through compressed text without needing to uncompress the contents first.
See also
Comparison of archive formats
Comparison of file archivers
List of archive formats
List of file archivers
rzip
References
External links
The bzip2 Command - by The Linux Information Project (LINFO)
bzip2 for Windows
Graphical bzip2 for Windows(WBZip2)
MacBzip2 (for Classic Mac OS; under Mac OS X, the standard bzip2 is available at the command line)
Feature comparison and benchmarks for different kinds of parallel bzip2 implementations available
4 Parallel bzip2 Implementations at The Data Compression News Blog
The original bzip compressor - may be restricted by patents
1996 software
Archive formats
Cross-platform software
Free data compression software
Lossless compression algorithms
Unix archivers and compression-related utilities |
66505 | https://en.wikipedia.org/wiki/Secrecy | Secrecy | Secrecy is the practice of hiding information from certain individuals or groups who do not have the "need to know", perhaps while sharing it with other individuals. That which is kept hidden is known as the secret.
Secrecy is often controversial, depending on the content or nature of the secret, the group or people keeping the secret, and the motivation for secrecy.
Secrecy by government entities is often decried as excessive or in promotion of poor operation; excessive revelation of information on individuals can conflict with virtues of privacy and confidentiality. It is often contrasted with social transparency.
Secrecy can exist in a number of different ways: encoding or encryption (where mathematical and technical strategies are used to hide messages), true secrecy (where restrictions are put upon those who take part of the message, such as through government security classification) and obfuscation, where secrets are hidden in plain sight behind complex idiosyncratic language (jargon) or steganography.
Another classification, proposed by Claude Shannon in 1948, identifies three systems of secrecy within communication:
concealment systems, including such methods as invisible ink, concealing a message in an innocent text, or in a fake covering cryptogram, or other methods in which the existence of the message is concealed from the enemy
privacy systems, for example, voice inversion, in which special equipment is required to recover the message
"true" secrecy systems where the meaning of the message is concealed by the cypher, code, etc., although its existence is not hidden, and the enemy is assumed to have any special equipment necessary to intercept and record the transmitted signal
Sociology
Animals conceal the location of their den or nest from predators. Squirrels bury nuts, hiding them, and they try to remember their locations later.
Humans attempt to consciously conceal aspects of themselves from others due to shame, or from fear of violence, rejection, harassment, loss of acceptance, or loss of employment. Humans may also attempt to conceal aspects of their own self which they are not capable of incorporating psychologically into their conscious being. Families sometimes maintain "family secrets", obliging family members never to discuss disagreeable issues concerning the family with outsiders or sometimes even within the family. Many "family secrets" are maintained by using a mutually agreed-upon construct (an official family story) when speaking with outside members. Agreement to maintain the secret is often coerced through "shaming" and reference to family honor. The information may even be something as trivial as a recipe.
Secrets are sometimes kept to provide the pleasure of surprise. This includes keeping secret about a surprise party, not telling spoilers of a story, and avoiding exposure of a magic trick.
Keeping one's strategy secret is important in many aspects of game theory.
In anthropology secret sharing is one way for people to establish traditional relations with other people. A commonly used narrative that describes this kind of behavior is Joseph Conrad's short story "The Secret Sharer".
Government
Governments often attempt to conceal information from other governments and the public. These state secrets can include weapon designs, military plans, diplomatic negotiation tactics, and secrets obtained illicitly from others ("intelligence"). Most nations have some form of Official Secrets Act (the Espionage Act in the U.S.) and classify material according to the level of protection needed (hence the term "classified information"). An individual needs a security clearance for access and other protection methods, such as keeping documents in a safe, are stipulated.
Few people dispute the desirability of keeping Critical Nuclear Weapon Design Information secret, but many believe government secrecy to be excessive and too often employed for political purposes. Many countries have laws that attempt to limit government secrecy, such as the U.S. Freedom of Information Act and sunshine laws. Government officials sometimes leak information they are supposed to keep secret. (For a recent (2005) example, see Plame affair.)
Secrecy in elections is a growing issue, particularly secrecy of vote counts on computerized vote counting machines. While voting, citizens are acting in a unique sovereign or "owner" capacity (instead of being a subject of the laws, as is true outside of elections) in selecting their government servants. It is argued that secrecy is impermissible as against the public in the area of elections where the government gets all of its power and taxing authority. In any event, permissible secrecy varies significantly with the context involved.
Corporations
Organizations, ranging from multi-national for profit corporations to nonprofit charities, keep secrets for competitive advantage, to meet legal requirements, or, in some cases, to conceal nefarious behavior. New products under development, unique manufacturing techniques, or simply lists of customers are types of information protected by trade secret laws.
Research on corporate secrecy has studied the factors supporting secret organizations. In particular, scholars in economics and management have paid attention to the way firms participating in cartels work together to maintain secrecy and conceal their activities from antitrust authorities. The diversity of the participants (in terms of age and size of the firms) influences their ability to coordinate to avoid being detected.
The patent system encourages inventors to publish information in exchange for a limited time monopoly on its use, though patent applications are initially secret. Secret societies use secrecy as a way to attract members by creating a sense of importance.
Shell companies may be used to launder money from criminal activity, to finance terrorism, or to evade taxes. Registers of beneficial ownership aim at fighting corporate secrecy in that sense.
Other laws require organizations to keep certain information secret, such as medical records (HIPAA in the U.S.), or financial reports that are under preparation (to limit insider trading). Europe has particularly strict laws about database privacy.
In many countries, neoliberal reforms of government have included expanding the outsourcing of government tasks and functions to private businesses with the aim of improving efficiency and effectiveness in government administration. However, among the criticisms of these reforms is the claim that the pervasive use of "Commercial-in-confidence" (or secrecy) clauses in contracts between government and private providers further limits public accountability of governments and prevents proper public scrutiny of the performance and probity of the private companies. Concerns have been raised that 'commercial-in-confidence' is open to abuse because it can be deliberately used to hide corporate or government maladministration and even corruption.
Computing
Preservation of secrets is one of the goals of information security. Techniques used include physical security and cryptography. The latter depends on the secrecy of cryptographic keys. Many believe that security technology can be more effective if it itself is not kept secret.
Information hiding is a design principle in software engineering. It is considered easier to verify software reliability if one can be sure that different parts of the program can only access (and therefore depend on) a known limited amount of information.
Military
Military secrecy is the concealing of information about martial affairs that is purposely not made available to the general public and hence to any enemy, in order to gain an advantage or to not reveal a weakness, to avoid embarrassment, or to help in propaganda efforts. Most military secrets are tactical in nature, such as the strengths and weaknesses of weapon systems, tactics, training methods, plans, and the number and location of specific weapons. Some secrets involve information in broader areas, such as secure communications, cryptography, intelligence operations, and cooperation with third parties.
Views
Excessive secrecy is often cited as a source of much human conflict. One may have to lie in order to keep a secret, which might lead to psychological repercussions. The alternative, declining to answer when asked something, may suggest the answer and may therefore not always be suitable for keeping a secret. Also, the other person may insist that one answer the question.
Nearly 2,500 years ago, Sophocles wrote, "Do nothing secretly; for Time sees and hears all things, and discloses all." Gautama Siddhartha, the Buddha, once said, "Three things cannot long stay hidden: the sun, the moon and the truth". The Bible addresses the subject in Numbers 32:23: "Be sure your sin will find you out."
See also
Ambiguity
Banking secrecy
Classified information
Concealment device
Confidentiality
Conspiracy theory
Covert operation
Cover-up
Deception
Don't ask, don't tell
Espionage
Freedom of information legislation
Media transparency
Need to know
Open secret
Secrecy (sociology)
Secret passage
Secret sharing
Self-concealment
Somebody Else's Problem
Smuggling
State Secrets Privilege
Sub rosa
WikiLeaks
References
Further reading
External links
An Open Source Collection of Readings on Secrecy
Secrecy News from the Federation of American Scientists
Classified information |
66535 | https://en.wikipedia.org/wiki/Alternation%20of%20generations | Alternation of generations | Alternation of generations (also known as metagenesis or heterogenesis) is the type of life cycle that occurs in those plants and algae in the Archaeplastida and the Heterokontophyta that have distinct haploid sexual and diploid asexual stages. In these groups, a multicellular haploid gametophyte with n chromosomes alternates with a multicellular diploid sporophyte with 2n chromosomes, made up of n pairs. A mature sporophyte produces haploid spores by meiosis, a process which reduces the number of chromosomes to half, from 2n to n.
The haploid spores germinate and grow into a haploid gametophyte. At maturity, the gametophyte produces gametes by mitosis, which does not alter the number of chromosomes. Two gametes (originating from different organisms of the same species or from the same organism) fuse to produce a diploid zygote, which develops into a diploid sporophyte. This cycle, from gametophyte to sporophyte (or equally from sporophyte to gametophyte), is the way in which all land plants and many algae undergo sexual reproduction.
The relationship between the sporophyte and gametophyte varies among different groups of plants. In those algae which have alternation of generations, the sporophyte and gametophyte are separate independent organisms, which may or may not have a similar appearance. In liverworts, mosses and hornworts, the sporophyte is less well developed than the gametophyte and is largely dependent on it. Although moss and hornwort sporophytes can photosynthesise, they require additional photosynthate from the gametophyte to sustain growth and spore development and depend on it for supply of water, mineral nutrients and nitrogen. By contrast, in all modern vascular plants the gametophyte is less well developed than the sporophyte, although their Devonian ancestors had gametophytes and sporophytes of approximately equivalent complexity. In ferns the gametophyte is a small flattened autotrophic prothallus on which the young sporophyte is briefly dependent for its nutrition. In flowering plants, the reduction of the gametophyte is much more extreme; it consists of just a few cells which grow entirely inside the sporophyte.
Animals develop differently. They produce haploid gametes. No haploid spores capable of dividing are produced, so generally there is no multicellular haploid phase. (Some insects have a sex-determining system whereby haploid males are produced from unfertilized eggs; however, females produced from fertilized eggs are diploid.)
Life cycles of plants and algae with alternating haploid and diploid multicellular stages are referred to as diplohaplontic (the equivalent terms haplodiplontic, diplobiontic and dibiontic are also in use, as is describing such an organism as having a diphasic ontogeny). Life cycles, such as those of animals, in which there is only a diploid multicellular stage are referred to as diplontic. Life cycles in which there is only a haploid multicellular stage are referred to as haplontic.
Definition
Alternation of generations is defined as the alternation of multicellular diploid and haploid forms in the organism's life cycle, regardless of whether these forms are free-living. In some species, such as the alga Ulva lactuca, the diploid and haploid forms are indeed both free-living independent organisms, essentially identical in appearance and therefore said to be isomorphic. The free-swimming, haploid gametes form a diploid zygote which germinates into a multicellular diploid sporophyte. The sporophyte produces free-swimming haploid spores by meiosis that germinate into haploid gametophytes.
However, in some other groups, either the sporophyte or the gametophyte is very much reduced and is incapable of free living. For example, in all bryophytes the gametophyte generation is dominant and the sporophyte is dependent on it. By contrast, in all modern vascular land plants the gametophytes are strongly reduced, although the fossil evidence indicates that they were derived from isomorphic ancestors. In seed plants, the female gametophyte develops totally within the sporophyte, which protects and nurtures it and the embryonic sporophyte that it produces. The pollen grains, which are the male gametophytes, are reduced to only a few cells (just three cells in many cases). Here the notion of two generations is less obvious; as Bateman & Dimichele say "[s]porophyte and gametophyte effectively function as a single organism". The alternative term 'alternation of phases' may then be more appropriate.
History
Debates about alternation of generations in the early twentieth century can be confusing because various ways of classifying "generations" co-exist (sexual vs. asexual, gametophyte vs. sporophyte, haploid vs. diploid, etc.).
Initially, Chamisso and Steenstrup described the succession of differently organized generations (sexual and asexual) in animals as "alternation of generations", while studying the development of tunicates, cnidarians and trematode animals. This phenomenon is also known as heterogamy. Presently, the term "alternation of generations" is almost exclusively associated with the life cycles of plants, specifically with the alternation of haploid gametophytes and diploid sporophytes.
Wilhelm Hofmeister demonstrated the morphological alternation of generations in plants, between a spore-bearing generation (sporophyte) and a gamete-bearing generation (gametophyte). By that time, a debate emerged focusing on the origin of the asexual generation of land plants (i.e., the sporophyte) and is conventionally characterized as a conflict between theories of antithetic (Čelakovský, 1874) and homologous (Pringsheim, 1876) alternation of generations. Čelakovský coined the words sporophyte and gametophyte.
Eduard Strasburger (1874) discovered the alternation between diploid and haploid nuclear phases, also called cytological alternation of nuclear phases. Although most often coinciding, morphological alternation and nuclear phases alternation are sometimes independent of one another, e.g., in many red algae, the same nuclear phase may correspond to two diverse morphological generations. In some ferns which lost sexual reproduction, there is no change in nuclear phase, but the alternation of generations is maintained.
Alternation of generations in plants
Fundamental elements
The diagram above shows the fundamental elements of the alternation of generations in plants. The many variations found in different groups of plants are described by use of these concepts later in the article. Starting from the right of the diagram, the processes involved are as follows:
Two single-celled haploid gametes, each containing n unpaired chromosomes, fuse to form a single-celled diploid zygote, which now contains n pairs of chromosomes, i.e. 2n chromosomes in total.
The single-celled diploid zygote germinates, dividing by the normal process (mitosis), which maintains the number of chromosomes at 2n. The result is a multi-cellular diploid organism, called the sporophyte (because at maturity it produces spores).
When it reaches maturity, the sporophyte produces one or more sporangia (singular: sporangium) which are the organs that produce diploid spore mother cells (sporocytes). These divide by a special process (meiosis) that reduces the number of chromosomes by a half. This initially results in four single-celled haploid spores, each containing n unpaired chromosomes.
The single-celled haploid spore germinates, dividing by the normal process (mitosis), which maintains the number of chromosomes at n. The result is a multi-cellular haploid organism, called the gametophyte (because it produces gametes at maturity).
When it reaches maturity, the gametophyte produces one or more gametangia (singular: gametangium) which are the organs that produce haploid gametes. At least one kind of gamete possesses some mechanism for reaching another gamete in order to fuse with it.
The 'alternation of generations' in the life cycle is thus between a diploid (2n) generation of sporophytes and a haploid (n) generation of gametophytes.
The situation is quite different from that in animals, where the fundamental process is that a diploid (2n) individual produces haploid (n) gametes by meiosis. In animals, spores (i.e. haploid cells which are able to undergo mitosis) are not produced, so there is no asexual multi-cellular generation. Some insects have haploid males that develop from unfertilized eggs, but the females are all diploid.
Variations
The diagram shown above is a good representation of the life cycle of some multi-cellular algae (e.g. the genus Cladophora) which have sporophytes and gametophytes of almost identical appearance and which do not have different kinds of spores or gametes.
However, there are many possible variations on the fundamental elements of a life cycle which has alternation of generations. Each variation may occur separately or in combination, resulting in a bewildering variety of life cycles. The terms used by botanists in describing these life cycles can be equally bewildering. As Bateman and Dimichele say "[...] the alternation of generations has become a terminological morass; often, one term represents several concepts or one concept is represented by several terms."
Possible variations are:
Relative importance of the sporophyte and the gametophyte.
Equal (homomorphy or isomorphy). Filamentous algae of the genus Cladophora, which are predominantly found in fresh water, have diploid sporophytes and haploid gametophytes which are externally indistinguishable. No living land plant has equally dominant sporophytes and gametophytes, although some theories of the evolution of alternation of generations suggest that ancestral land plants did.
Unequal (heteromorphy or anisomorphy).
Dominant gametophyte (gametophytic). In liverworts, mosses and hornworts, the dominant form is the haploid gametophyte. The diploid sporophyte is not capable of an independent existence, gaining most of its nutrition from the parent gametophyte, and having no chlorophyll when mature.
Dominant sporophyte (sporophytic). In ferns, both the sporophyte and the gametophyte are capable of living independently, but the dominant form is the diploid sporophyte. The haploid gametophyte is much smaller and simpler in structure. In seed plants, the gametophyte is even more reduced (at the minimum to only three cells), gaining all its nutrition from the sporophyte. The extreme reduction in the size of the gametophyte and its retention within the sporophyte means that when applied to seed plants the term 'alternation of generations' is somewhat misleading: "[s]porophyte and gametophyte effectively function as a single organism". Some authors have preferred the term 'alternation of phases'.
Differentiation of the gametes.
Both gametes the same (isogamy). Like other species of Cladophora, C. callicoma has flagellated gametes which are identical in appearance and ability to move.
Gametes of two distinct sizes (anisogamy).
Both of similar motility. Species of Ulva, the sea lettuce, have gametes which all have two flagella and so are motile. However they are of two sizes: larger 'female' gametes and smaller 'male' gametes.
One large and sessile, one small and motile (oogamy). The larger sessile megagametes are eggs (ova), and smaller motile microgametes are sperm (spermatozoa, spermatozoids). The degree of motility of the sperm may be very limited (as in the case of flowering plants) but all are able to move towards the sessile eggs. When (as is almost always the case) the sperm and eggs are produced in different kinds of gametangia, the sperm-producing ones are called antheridia (singular antheridium) and the egg-producing ones archegonia (singular archegonium).
Antheridia and archegonia occur on the same gametophyte, which is then called monoicous. (Many sources, including those concerned with bryophytes, use the term 'monoecious' for this situation and 'dioecious' for the opposite. Here 'monoecious' and 'dioecious' are used only for sporophytes.) The liverwort Pellia epiphylla has the gametophyte as the dominant generation. It is monoicous: the small reddish sperm-producing antheridia are scattered along the midrib while the egg-producing archegonia grow nearer the tips of divisions of the plant.
Antheridia and archegonia occur on different gametophytes, which are then called dioicous. The moss Mnium hornum has the gametophyte as the dominant generation. It is dioicous: male plants produce only antheridia in terminal rosettes, female plants produce only archegonia in the form of stalked capsules. Seed plant gametophytes are also dioicous. However, the parent sporophyte may be monoecious, producing both male and female gametophytes or dioecious, producing gametophytes of one gender only. Seed plant gametophytes are extremely reduced in size; the archegonium consists only of a small number of cells, and the entire male gametophyte may be represented by only two cells.
Differentiation of the spores.
All spores the same size (homospory or isospory). Horsetails (species of Equisetum) have spores which are all of the same size.
Spores of two distinct sizes (heterospory or anisospory): larger megaspores and smaller microspores. When the two kinds of spore are produced in different kinds of sporangia, these are called megasporangia and microsporangia. A megaspore often (but not always) develops at the expense of the other three cells resulting from meiosis, which abort.
Megasporangia and microsporangia occur on the same sporophyte, which is then called monoecious. Most flowering plants fall into this category. Thus the flower of a lily contains six stamens (the microsporangia) which produce microspores which develop into pollen grains (the microgametophytes), and three fused carpels which produce integumented megasporangia (ovules) each of which produces a megaspore which develops inside the megasporangium to produce the megagametophyte. In other plants, such as hazel, some flowers have only stamens, others only carpels, but the same plant (i.e. sporophyte) has both kinds of flower and so is monoecious.
Megasporangia and microsporangia occur on different sporophytes, which are then called dioecious. An individual tree of the European holly (Ilex aquifolium) produces either 'male' flowers which have only functional stamens (microsporangia) producing microspores which develop into pollen grains (microgametophytes) or 'female' flowers which have only functional carpels producing integumented megasporangia (ovules) that contain a megaspore that develops into a multicellular megagametophyte.
There are some correlations between these variations, but they are just that, correlations, and not absolute. For example, in flowering plants, microspores ultimately produce microgametes (sperm) and megaspores ultimately produce megagametes (eggs). However, in ferns and their allies there are groups with undifferentiated spores but differentiated gametophytes. For example, the fern Ceratopteris thalictroides has spores of only one kind, which vary continuously in size. Smaller spores tend to germinate into gametophytes which produce only sperm-producing antheridia.
A complex life cycle
The diagram shows the alternation of generations in a species which is heteromorphic, sporophytic, oogametic, dioicous, heterosporic and dioecious. A seed plant example might be a willow tree (most species of the genus Salix are dioecious). Starting in the centre of the diagram, the processes involved are:
An immobile egg, contained in the archegonium, fuses with a mobile sperm, released from an antheridium. The resulting zygote is either 'male' or 'female'.
A 'male' zygote develops by mitosis into a microsporophyte, which at maturity produces one or more microsporangia. Microspores develop within the microsporangium by meiosis. In a willow (like all seed plants) the zygote first develops into an embryo microsporophyte within the ovule (a megasporangium enclosed in one or more protective layers of tissue known as integument). At maturity, these structures become the seed. Later the seed is shed, germinates and grows into a mature tree. A 'male' willow tree (a microsporophyte) produces flowers with only stamens, the anthers of which are the microsporangia.
Microspores germinate producing microgametophytes; at maturity one or more antheridia are produced. Sperm develop within the antheridia. In a willow, microspores are not liberated from the anther (the microsporangium), but develop into pollen grains (microgametophytes) within it. The whole pollen grain is moved (e.g. by an insect or by the wind) to an ovule (megagametophyte), where a sperm is produced which moves down a pollen tube to reach the egg.
A 'female' zygote develops by mitosis into a megasporophyte, which at maturity produces one or more megasporangia. Megaspores develop within the megasporangium; typically one of the four spores produced by meiosis gains bulk at the expense of the remaining three, which disappear. 'Female' willow trees (megasporophytes) produce flowers with only carpels (modified leaves that bear the megasporangia).
Megaspores germinate producing megagametophytes; at maturity one or more archegonia are produced. Eggs develop within the archegonia. The carpels of a willow produce ovules, megasporangia enclosed in integuments. Within each ovule, a megaspore develops by mitosis into a megagametophyte. An archegonium develops within the megagametophyte and produces an egg. The whole of the gametophytic 'generation' remains within the protection of the sporophyte except for pollen grains (which have been reduced to just three cells contained within the microspore wall).
Life cycles of different plant groups
The term "plants" is taken here to mean the Archaeplastida, i.e. the glaucophytes, red and green algae and land plants.
Alternation of generations occurs in almost all multicellular red and green algae, both freshwater forms (such as Cladophora) and seaweeds (such as Ulva). In most, the generations are homomorphic (isomorphic) and free-living. Some species of red algae have a complex triphasic alternation of generations, in which there is a gametophyte phase and two distinct sporophyte phases. For further information, see Red algae: Reproduction.
Land plants all have heteromorphic (anisomorphic) alternation of generations, in which the sporophyte and gametophyte are distinctly different. All bryophytes, i.e. liverworts, mosses and hornworts, have the gametophyte generation as the most conspicuous. As an illustration, consider a monoicous moss. Antheridia and archegonia develop on the mature plant (the gametophyte). In the presence of water, the biflagellate sperm from the antheridia swim to the archegonia and fertilisation occurs, leading to the production of a diploid sporophyte. The sporophyte grows up from the archegonium. Its body comprises a long stalk topped by a capsule within which spore-producing cells undergo meiosis to form haploid spores. Most mosses rely on the wind to disperse these spores, although Splachnum sphaericum is entomophilous, recruiting insects to disperse its spores. For further information, see Liverwort: Life cycle, Moss: Life cycle, Hornwort: Life cycle.
In ferns and their allies, including clubmosses and horsetails, the conspicuous plant observed in the field is the diploid sporophyte. The haploid spores develop in sori on the underside of the fronds and are dispersed by the wind (or in some cases, by floating on water). If conditions are right, a spore will germinate and grow into a rather inconspicuous plant body called a prothallus. The haploid prothallus does not resemble the sporophyte, and as such ferns and their allies have a heteromorphic alternation of generations. The prothallus is short-lived, but carries out sexual reproduction, producing the diploid zygote that then grows out of the prothallus as the sporophyte. For further information, see Fern: Life cycle.
In the spermatophytes, the seed plants, the sporophyte is the dominant multicellular phase; the gametophytes are strongly reduced in size and very different in morphology. The entire gametophyte generation, with the sole exception of pollen grains (microgametophytes), is contained within the sporophyte. The life cycle of a dioecious flowering plant (angiosperm), the willow, has been outlined in some detail in an earlier section (A complex life cycle). The life cycle of a gymnosperm is similar. However, flowering plants have in addition a phenomenon called 'double fertilization'. Two sperm nuclei from a pollen grain (the microgametophyte), rather than a single sperm, enter the archegonium of the megagametophyte; one fuses with the egg nucleus to form the zygote, the other fuses with two other nuclei of the gametophyte to form 'endosperm', which nourishes the developing embryo. For further information, see Double fertilization.
Evolution of the dominant diploid phase
It has been proposed that the basis for the emergence of the diploid phase of the life cycle (sporophyte) as the dominant phase (e.g. as in vascular plants) is that diploidy allows masking of the expression of deleterious mutations through genetic complementation. Thus if one of the parental genomes in the diploid cells contained mutations leading to defects in one or more gene products, these deficiencies could be compensated for by the other parental genome (which nevertheless may have its own defects in other genes). As the diploid phase was becoming predominant, the masking effect likely allowed genome size, and hence information content, to increase without the constraint of having to improve accuracy of DNA replication. The opportunity to increase information content at low cost was advantageous because it permitted new adaptations to be encoded. This view has been challenged, with evidence showing that selection is no more effective in the haploid than in the diploid phases of the lifecycle of mosses and angiosperms.
Similar processes in other organisms
Rhizaria
Some organisms currently classified in the clade Rhizaria and thus not plants in the sense used here, exhibit alternation of generations. Most Foraminifera undergo a heteromorphic alternation of generations between haploid gamont and diploid agamont forms. The single-celled haploid organism is typically much larger than the diploid organism.
Fungi
Fungal mycelia are typically haploid. When mycelia of different mating types meet, they produce two multinucleate ball-shaped cells, which join via a "mating bridge". Nuclei move from one mycelium into the other, forming a heterokaryon (meaning "different nuclei"). This process is called plasmogamy. Actual fusion to form diploid nuclei is called karyogamy, and may not occur until sporangia are formed. Karyogamy produces a diploid zygote, which is a short-lived sporophyte that soon undergoes meiosis to form haploid spores. When the spores germinate, they develop into new mycelia.
Slime moulds
The life cycle of slime moulds is very similar to that of fungi. Haploid spores germinate to form swarm cells or myxamoebae. These fuse in a process referred to as plasmogamy and karyogamy to form a diploid zygote. The zygote develops into a plasmodium, and the mature plasmodium produces, depending on the species, one to many fruiting bodies containing haploid spores.
Animals
Alternation between a multicellular diploid and a multicellular haploid generation is never encountered in animals. In some animals, there is an alternation between parthenogenic and sexually reproductive phases (heterogamy). Both phases are diploid. This has sometimes been called "alternation of generations", but is quite different. In some other animals, such as hymenopterans, males are haploid and females diploid, but this is always the case rather than there being an alternation between distinct generations.
See also
Evolutionary origin of the alternation of phases
Notes and references
Bibliography
Plant reproduction
Reproduction |
68056 | https://en.wikipedia.org/wiki/DIVX | DIVX | DIVX (Digital Video Express) is a discontinued digital video format, an unsuccessful attempt to create an alternative to video rental in the United States.
Format
DIVX was a rental format variation on the DVD player in which a customer would buy a DIVX disc (similar to a DVD) for approximately US$4.50, which was watchable for up to 48 hours from its initial viewing. After this period, the disc could be viewed by paying a continuation fee to play it for two more days. Viewers who wanted to watch a disc an unlimited number of times could convert the disc to a "DIVX silver" disc for an additional fee. "DIVX gold" discs that could be played an unlimited number of times on any DIVX player were announced at the time of DIVX's introduction, but no DIVX gold titles were ever released.
Each DIVX disc was marked with a unique barcode in the burst cutting area that could be read by the player, and used to track the discs. The status of the discs was monitored through an account over a phone line. DIVX player owners had to set up an account with DIVX to which additional viewing fees could be charged. The player would call an account server over the phone line to charge for viewing fees similar to the way DirecTV and Dish Network satellite systems handle pay-per-view.
In addition to the normal Content Scramble System (CSS) encryption, DIVX discs used Triple DES encryption and an alternative channel modulation coding scheme, which prevented them from being read in standard DVD players. Most of the discs would be manufactured by United Kingdom-based Nimbus CD International.
DIVX players manufactured by Zenith Electronics (who would go bankrupt shortly before the launch of the format), Thomson Consumer Electronics (RCA and ProScan), and Matsushita Electric (Panasonic) started to become available in mid-1998. These players differed from regular DVD players with the addition of a security IC chip (powered by ARM RISC and manufactured by VLSI) that controlled the encode/decode of the digital content. Mail systems were included on some players as well. Because of widespread studio support, manufacturers anticipated that demand for the units would be high. Initially, the players were approximately twice as expensive as standard DVD players, but price reductions occurred within months of release.
History
Development and launch
DIVX was introduced on September 8, 1997 (after previously being made under the code name Zoom TV), with the format under development since 1995. The format was a partnership between Circuit City and entertainment law firm Ziffren, Brittenham, Branca & Fischer, with the former company investing $100 million into the latter firm. One advertiser attempted to sign with the company, but was unable to do so, which spurred a lawsuit between the two.
The product made a quiet showing at the Consumer Electronics Show in Las Vegas in early January 1998, but won the attention of 20th Century Fox, which on February 20, 1998, signed a deal to release its titles on the format. After multiple delays, the initial trial of the DIVX format was run in the San Francisco, California and Richmond, Virginia areas starting on June 8, 1998. Initially, only a single Zenith player was available, starting at $499, along with no more than 50 (but more than 19) titles. Very few players were sold during this trial period, with The Good Guys chain alleging that it had sold fewer than ten. A nationwide rollout began three months later, on September 21, again with only one Zenith player and 150 titles available in 190 stores in the western U.S.
At the format's launch, DIVX was sold primarily through the Circuit City, Good Guys, and Ultimate Electronics retailers. The format was promoted to consumers as an alternative to traditional video rental schemes with the promise of "No returns, no late fees." Though consumers could just discard a DIVX disc after the initial viewing period, several DIVX retailers maintained DIVX recycling bins on their premises. On September 22, 1998, a fourth retailer, Canadian Future Shop, signed a contract with DIVX to stock the format, although only in 23 stores in the U.S. only. Thomson's player, after multiple delays, arrived on October 3, 1998, followed by Panasonic's on December 10. The format made its overall national debut on October 12, 1998. A marketing push began that November for the 1998 holiday season, with more than $1 million going into the campaign. The fortunes of the format would seemingly turn for the better in mid-December 1998, when a shortage of DVD players occurred. In total, 87,000 players were sold during the final quarter of 1998, with 535,000 discs across 300 titles being sold, although fewer than 17,000 accounts for DIVX were created.
Opposition
Almost immediately after the format's reveal, a movement on the Internet was initiated against DIVX, particularly in home theater forums by existing owners of the then-still nascent DVD format. Broader groups of consumers had environmental concerns with the format, since under the advertised "no returns" model a disc would be discarded as waste once the initial user was done with it, rather than being reused as they were under the traditional rental model. Both companies that created the DVD format (Sony and Toshiba) also denounced DIVX, as did major studio distributor Warner Home Video (who was the first major American studio to distribute DVD) and the DVD Forum (a consortium of developers on the format who standardized DVDs). Titles in the DIVX catalog were released primarily in pan and scan format with limited special features, usually only a trailer (although a few widescreen titles did arrive on the format in early December 1998). This caused many home theater enthusiasts to become concerned that the success of DIVX would significantly diminish the release of films on the DVD format in the films' original aspect ratios and with supplementary material. Some early demos were also noted to have unique instances of artifacting on the discs that were not present on standard DVDs. Many people in various technology and entertainment communities were afraid that there would be DIVX-exclusive releases, and that the then-fledgling DVD format would suffer as a result. DreamWorks, 20th Century Fox, and Paramount Pictures, for instance, initially released their films exclusively on the DIVX format (something that DIVX did not originally intend to happen), while Disney released on both formats. DIVX featured stronger encryption technology than DVD (Triple DES instead of CSS), which many studios stated was a contributing factor in the decision to support DIVX. Others cited the higher price of DIVX-compatible DVD players and rental costs as their reason for opposing the format, with one declaring that DIVX was "holding my VCR hostage". One online poll surveyed 786 people on the format, of whom nearly 97% disapproved of the format's concept, and another poll in December 1998 reflected 86% disapproval even if the format were free – a testament to the fierce online backlash the format received. As early as December 1997, news outlets were already calling the format a failure for Circuit City.
In addition to the hostile Internet response, competitors such as Hollywood Video ran advertisements touting the benefits of "Open DVD" over DIVX, with one ad in the Los Angeles Times depicting a hand holding a telephone line with the caption: "Don't let anyone feed you the line." The terminology "Open DVD" had been used by DVD supporters, and later by Sony itself, in response to DIVX's labeling of DVD as "Basic DVD" and DIVX/DVD players as "DIVX-enhanced". Other retailers, such as Best Buy, also had their concerns, most of them citing possible customer confusion and the cumbersomeness of stocking two formats. Pay-per-view companies were also concerned about the format intruding on their business sector, namely its offering of single-use rentals of films to consumers.
However, early concerns about alleged or feared constant usage of the phone line proved to be somewhat exaggerated, as a player only needed to verify its usage twice a month. Despite this, informational-freedom advocates were concerned that the players' "dial-home" ability could be used to spy on people's viewing habits, and there were also copyright and privacy concerns about the licensing of the media, with some alleging that it violated fair-use laws entirely.
Allegations of anti-competitive vaporware, as well as concerns within the software industry prompted David Dranove of Northwestern University and Neil Gandal of Tel Aviv University and University of California, Berkeley, to conduct an empirical study designed to measure the effect of the DIVX announcement on the DVD market. This study suggests that the DIVX announcement slowed the adoption of DVD technology. According to Dranove and Gandal, the study suggests that the "general antitrust concern about vaporware seems justified".
Demise
Right after the launch of the format, Circuit City announced that, despite a 4.1% gain in net profit, the huge expense of launching the format (among other issues) had massively undercut that profit. As early as September 1998, Circuit City was looking for partners to share its losses from the format's launch. Retailers such as Blockbuster Video did not carry the format at all. Not helping the format's defenders was the suspicious activity of pro-DIVX websites, with one shutting down as quickly as it had opened.
DIVX and Thomson teamed up in January 1999 to create another format made for high-definition video using existing DVD technology, predating the development of both Blu-ray and HD DVD by many years. The market share for DIVX players was 23% in January 1999, and by that March, around 419 titles were available in the DIVX format. However, sales for the format quickly fell off after the 1998 holiday season, with all three third-party retailers pulling out of DIVX sales by that point. In May, studio support for DIVX began to be phased out, with Paramount refusing to convert their titles to "Silver" discs (and then later stopping DIVX releases entirely), and Disney increasing their DVD activity. By the format's first anniversary, the future of the format was very grim, with only five DIVX-compatible players (and no DIVX-compatible computer drives), 478 titles, and only Circuit City selling DIVX discs.
The format was discontinued on June 16, 1999, because of the costs of introducing the format, as well as its very limited acceptance by the general public and retailers. At the end of the format's life, Circuit City announced a $114 million after-tax loss, and Variety estimated the total loss on the scheme was around $337 million. Over the next two years, the DIVX system was phased out. Customers could still view all their DIVX discs and were given a $100 refund for every player that was purchased before June 16, 1999. All discs that were unsold at the end of the summer of 1999 were destroyed. The program officially cut off access to accounts on July 7, 2001. The player's Security Module, which had an internal Real-Time Clock, ceased to allow DIVX functions after 30 days without a connection to the central system. Unsold players were liquidated in online auctions, but not before being modified to remove the DIVX Security Module. As a result, certain player models demonstrated lockups when DIVX menus were accessed.
The company website announcing the discontinuation on June 16, 1999, stated: "All DIVX-featured DVD players are fully functional DVD players and will continue to operate as such. All DIVX discs, including those previously purchased by consumers and those remaining in retailer inventories, can be viewed on registered players anytime between now and June 30, 2001. Subsequent viewings also will be available during that period. Discs can no longer be upgraded to unlimited viewing, known as DIVX Silver. Customers who have converted discs to DIVX Silver can continue viewing the discs until June 30, 2001, or can receive a full refund of the conversion price at their request". This meant that no DIVX discs could be played after June 30, 2001, rendering the medium worthless.
DIVX appeared as a "dishonorable mention" alongside PC World's list of "25 Worst Tech Products of All Time" in 2006.
List of films available on DIVX
101 Dalmatians (1996)
The Abyss
Air Bud: Golden Receiver
Affliction (1997)
Alice in Wonderland (1951)
Alien Resurrection
Antz
Apollo 13
Armageddon (1998)
Army of Darkness
At First Sight (1999)
A Thousand Acres
Babe
BASEketball (1998)
Beloved (1998)
Blown Away (1994)
The Blues Brothers
Born on the Fourth of July
Brassed Off
Brazil (1985)
The Breakfast Club
Brubaker
Bulworth
The Boxer (1997)
A Bug's Life
The Chamber (1996)
Chairman of the Board
Chasing Amy
A Civil Action
Con Air
Conan the Destroyer
Cop Land
Courage Under Fire
Crimson Tide (film)
The Crow (1994)
The Crow: City of Angels
Dante's Peak
Daylight (1996)
The Day of the Jackal
Death Becomes Her
Deep Impact
Deep Rising
Dirty Work (1998)
Disturbing Behavior
Dragnet (1987)
The Edge
Ed Wood
The Eiger Sanction
Enemy of the State
The End of Violence
Escape from L.A.
Ever After
Evita (1996)
Father of the Bride Part II
Fled
The Flintstones
For Richer or Poorer
The Full Monty
Gang Related
George of the Jungle
GoldenEye
The Ghost and the Darkness
G.I. Jane
Good Will Hunting
Hackers
Half-Baked
Halloween: H20
Hard Rain
Highlander: The Final Dimension
Holy Man
Hope Floats
The Horse Whisperer
Houseguest
The Hunt for Red October
The Impostors
Invasion of the Body Snatchers (1978)
The Jackal (1997)
Jane Austen's Mafia!
Judge Dredd
Kissing a Fool
Kiss the Girls (1997)
Liar Liar
A Life Less Ordinary
The Madness of King George
Mafia!
The Man in the Iron Mask (1998)
Mercury Rising
Mr. Magoo
Mrs. Doubtfire
MouseHunt
Mulholland Falls
Nothing to Lose (1997)
The Object of My Affection
One True Thing
Oscar and Lucinda
Patriot Games
Patch Adams
The Peacemaker (1997)
Phantoms
Pulp Fiction
Rapid Fire (1992)
Rising Sun
The River (1984)
The Rock
RocketMan
Rollerball (1975)
Ronin
Scream (1996)
Scream 2
The Shadow (1994)
Six Days Seven Nights
Sling Blade
Slums of Beverly Hills
Small Soldiers
Sneakers (1992)
Speed 2: Cruise Control
Spy Hard
Street Fighter (1994)
Star Trek: First Contact
Star Trek Generations
Strange Days
Supercop
That Thing You Do!
There's Something About Mary
The Thin Red Line (1998)
The Thing (1982)
Tomorrow Never Dies
Twelve Monkeys
Twilight (1998)
Ulee's Gold
Waking Ned Devine
A Walk in the Clouds
Welcome to Sarajevo
Wing Commander (canceled)
The X-Files
Young Frankenstein
See also
Planned obsolescence
Digital rights management
Flexplay (another disposable DVD format)
DVD-D (another disposable DVD format)
References
External links
Divx Owners Association, Archived on April 17, 2008.
Official Website, Archived on May 8, 1999.
Circuit City
Audiovisual introductions in 1997
Discontinued media formats
Digital rights management
Video storage
DVD
Audiovisual ephemera |
68057 | https://en.wikipedia.org/wiki/Ron%20Rivest | Ron Rivest | Ronald Linn Rivest (; born May 6, 1947) is a cryptographer and an Institute Professor at MIT. He is a member of MIT's Department of Electrical Engineering and Computer Science (EECS) and a member of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). His work has spanned the fields of algorithms and combinatorics, cryptography, machine learning, and election integrity.
Rivest is one of the inventors of the RSA algorithm (along with Adi Shamir and Len Adleman). He is the inventor of the symmetric key encryption algorithms RC2, RC4, RC5, and co-inventor of RC6. The "RC" stands for "Rivest Cipher", or alternatively, "Ron's Code". (RC3 was broken at RSA Security during development, and RC1 was never published.) He also authored the MD2, MD4, MD5 and MD6 cryptographic hash functions.
Education
Rivest earned a Bachelor's degree in Mathematics from Yale University in 1969, and a Ph.D. degree in Computer Science from Stanford University in 1974 for research supervised by Robert W. Floyd.
Career and research
At MIT, Rivest is a member of the Theory of Computation Group, and founder of MIT CSAIL's Cryptography and Information Security Group.
He is a co-author of Introduction to Algorithms (also known as CLRS), a standard textbook on algorithms, with Thomas H. Cormen, Charles E. Leiserson and Clifford Stein. Other contributions to the field of algorithms include the paper, "Time Bounds for Selection", which gives a worst-case linear-time algorithm.
In 2006, he published his invention of the ThreeBallot voting system, a voting system that incorporates the ability for the voter to discern that their vote was counted while still protecting their voter privacy. Most importantly, this system does not rely on cryptography at all. Stating "Our democracy is too important", he simultaneously placed ThreeBallot in the public domain. He was a member of the Election Assistance Commission's Technical Guidelines Development Committee, tasked with assisting the EAC in drafting the Voluntary Voting System Guidelines.
Rivest frequently collaborates with other researchers in combinatorics, for example working with David A. Klarner to find an upper bound on the number of polyominoes of a given order and working with Jean Vuillemin to prove the deterministic form of the Aanderaa–Rosenberg conjecture.
He was also a founder of RSA Data Security (now merged with Security Dynamics to form RSA Security), Verisign, and Peppercoin. Rivest has research interests in algorithms, cryptography and voting. His former doctoral students include Avrim Blum, Burt Kaliski, Anna Lysyanskaya, Ron Pinter, Robert Schapire, Alan Sherman, and Mona Singh.
Publications
His publications include:
Honors and awards
Rivest is a member of the National Academy of Engineering and the National Academy of Sciences, and is a Fellow of the Association for Computing Machinery, the International Association for Cryptologic Research, and the American Academy of Arts and Sciences. Together with Adi Shamir and Len Adleman, he has been awarded the 2000 IEEE Koji Kobayashi Computers and Communications Award and the Secure Computing Lifetime Achievement Award. He also shared with them the Turing Award. Rivest has received an honorary degree (the "laurea honoris causa") from the Sapienza University of Rome. In 2005, he received the MITX Lifetime Achievement Award. In 2007, Rivest was named a Marconi Fellow, and on May 29, 2008, he gave the Chesley lecture at Carleton College. He was named an Institute Professor at MIT in June 2015.
References
External links
List of Ron Rivest's patents on IPEXL
Home page of Ronald L. Rivest
Official site of RSA Security Inc.
Ron Rivest election research papers
American computer scientists
American cryptographers
1947 births
Living people
Computer security academics
Public-key cryptographers
Election technology people
International Association for Cryptologic Research fellows
Members of the United States National Academy of Sciences
Members of the United States National Academy of Engineering
Turing Award laureates
MIT School of Engineering faculty
Scientists from Schenectady, New York
Fellows of the Association for Computing Machinery
Yale University alumni
Timothy Dwight College alumni
Stanford University alumni
People from Arlington, Massachusetts
20th-century American engineers
21st-century American engineers
20th-century American scientists
21st-century American scientists
Mathematicians from New York (state) |
69509 | https://en.wikipedia.org/wiki/Vigen%C3%A8re%20cipher | Vigenère cipher | The Vigenère cipher () is a method of encrypting alphabetic text by using a series of interwoven Caesar ciphers, based on the letters of a keyword. It employs a form of polyalphabetic substitution.
First described by Giovan Battista Bellaso in 1553, the cipher is easy to understand and implement, but it resisted all attempts to break it until 1863, three centuries later. This earned it the description le chiffrage indéchiffrable (French for 'the indecipherable cipher'). Many people have tried to implement encryption schemes that are essentially Vigenère ciphers. In 1863, Friedrich Kasiski was the first to publish a general method of deciphering Vigenère ciphers.
In the 19th century the scheme was misattributed to Blaise de Vigenère (1523–1596), and so acquired its present name.
History
The very first well-documented description of a polyalphabetic cipher was by Leon Battista Alberti around 1467 and used a metal cipher disk to switch between cipher alphabets. Alberti's system only switched alphabets after several words, and switches were indicated by writing the letter of the corresponding alphabet in the ciphertext. Later, Johannes Trithemius, in his work Polygraphiae (which was completed in manuscript form in 1508 but first published in 1518), invented the tabula recta, a critical component of the Vigenère cipher. The Trithemius cipher, however, provided a progressive, rather rigid and predictable system for switching between cipher alphabets.
In 1586 Blaise de Vigenère published a type of polyalphabetic cipher called an autokey cipher – because its key is based on the original plaintext – before the court of Henry III of France. The cipher now known as the Vigenère cipher, however, is that originally described by Giovan Battista Bellaso in his 1553 book La cifra del Sig. Giovan Battista Bellaso. He built upon the tabula recta of Trithemius but added a repeating "countersign" (a key) to switch cipher alphabets every letter. Whereas Alberti and Trithemius used a fixed pattern of substitutions, Bellaso's scheme meant the pattern of substitutions could be easily changed, simply by selecting a new key. Keys were typically single words or short phrases, known to both parties in advance, or transmitted "out of band" along with the message. Bellaso's method thus required strong security for only the key. As it is relatively easy to secure a short key phrase, such as by a previous private conversation, Bellaso's system was considerably more secure.
In the 19th century, the invention of Bellaso's cipher was misattributed to Vigenère. David Kahn, in his book, The Codebreakers lamented this misattribution, saying that history had "ignored this important contribution and instead named a regressive and elementary cipher for him [Vigenère] though he had nothing to do with it".
The Vigenère cipher gained a reputation for being exceptionally strong. Noted author and mathematician Charles Lutwidge Dodgson (Lewis Carroll) called the Vigenère cipher unbreakable in his 1868 piece "The Alphabet Cipher" in a children's magazine. In 1917, Scientific American described the Vigenère cipher as "impossible of translation". That reputation was not deserved. Charles Babbage is known to have broken a variant of the cipher as early as 1854 but did not publish his work. Kasiski entirely broke the cipher and published the technique in the 19th century, but even in the 16th century, some skilled cryptanalysts could occasionally break the cipher.
The Vigenère cipher is simple enough to be a field cipher if it is used in conjunction with cipher disks. The Confederate States of America, for example, used a brass cipher disk to implement the Vigenère cipher during the American Civil War. The Confederacy's messages were far from secret, and the Union regularly cracked its messages. Throughout the war, the Confederate leadership primarily relied upon three key phrases: "Manchester Bluff", "Complete Victory" and, as the war came to a close, "Come Retribution".
A Vigenère cipher with a completely random (and non-reusable) key which is as long as the message becomes a one-time pad, a theoretically unbreakable cipher. Gilbert Vernam tried to repair the broken cipher (creating the Vernam–Vigenère cipher in 1918), but the technology he used was so cumbersome as to be impracticable.
Description
In a Caesar cipher, each letter of the alphabet is shifted along some number of places. For example, in a Caesar cipher of shift 3, a would become D, b would become E, y would become B and so on. The Vigenère cipher has several Caesar ciphers in sequence with different shift values.
To encrypt, a table of alphabets can be used, termed a tabula recta, Vigenère square or Vigenère table. It has the alphabet written out 26 times in different rows, each alphabet shifted cyclically to the left compared to the previous alphabet, corresponding to the 26 possible Caesar ciphers. At different points in the encryption process, the cipher uses a different alphabet from one of the rows. The alphabet used at each point depends on a repeating keyword.
For example, suppose that the plaintext to be encrypted is
attackatdawn.
The person sending the message chooses a keyword and repeats it until it matches the length of the plaintext, for example, the keyword "LEMON":
LEMONLEMONLE
Each row starts with a key letter. The rest of the row holds the letters A to Z (in shifted order). Although there are 26 key rows shown, a code will use only as many keys (different alphabets) as there are unique letters in the key string, here just 5 keys: {L, E, M, O, N}. For successive letters of the message, successive letters of the key string are taken and each message letter is enciphered by using its corresponding key row: the next letter of the key is chosen, and that row is followed along to find the column heading that matches the message character. The letter at the intersection of [key-row, msg-col] is the enciphered letter.
For example, the first letter of the plaintext, a, is paired with L, the first letter of the key. Therefore, row L and column A of the Vigenère square are used, namely L. Similarly, for the second letter of the plaintext, the second letter of the key is used. The letter at row E and column T is X. The rest of the plaintext is enciphered in a similar fashion:
Plaintext: attackatdawn
Key: LEMONLEMONLE
Ciphertext: LXFOPVEFRNHR
Decryption is performed by going to the row in the table corresponding to the key, finding the position of the ciphertext letter in that row and then using the column's label as the plaintext. For example, in row L (from LEMON), the ciphertext L appears in column A, so a is the first plaintext letter. Next, in row E (from LEMON), the ciphertext X is located in column T. Thus t is the second plaintext letter.
Algebraic description
Vigenère can also be described algebraically. If the letters A–Z are taken to be the numbers 0–25 (A = 0, B = 1, etc.), and addition is performed modulo 26, Vigenère encryption E using the key K can be written as
C_i = E_K(M_i) = (M_i + K_i) mod 26
and decryption D using the key K as
M_i = D_K(C_i) = (C_i − K_i) mod 26,
in which M = M_1 … M_n is the message, C = C_1 … C_n is the ciphertext and K = K_1 … K_n is the key obtained by repeating the keyword ⌈n/m⌉ times, in which m is the keyword length.
Thus, by using the previous example, to encrypt a with key letter L the calculation C_1 = (0 + 11) mod 26 = 11 would result in L.
Therefore, to decrypt X with key letter E, the calculation M_2 = (23 − 4) mod 26 = 19 would result in t.
In general, if Σ is the alphabet of length ℓ, and m is the length of the key, Vigenère encryption and decryption can be written:
C_i = E_K(M_i) = (M_i + K_(i mod m)) mod ℓ
M_i = D_K(C_i) = (C_i − K_(i mod m)) mod ℓ
Here M_i denotes the offset of the i-th character of the plaintext M in the alphabet Σ. For example, by taking the 26 English characters as the alphabet Σ, the offset of A is 0, the offset of B is 1, etc. C_i and K_i are defined similarly.
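The algebraic description translates directly into code. The following is a minimal Python sketch (the function names are illustrative, not from any particular library) that implements the two formulas for the 26-letter English alphabet and reproduces the attackatdawn/LEMON example above:

def vigenere_encrypt(plaintext, keyword):
    # C_i = (M_i + K_(i mod m)) mod 26, with A = 0, ..., Z = 25
    key = keyword.upper()
    out = []
    for i, ch in enumerate(plaintext.upper()):
        m = ord(ch) - ord('A')                 # offset of the plaintext letter
        k = ord(key[i % len(key)]) - ord('A')  # offset of the key letter
        out.append(chr((m + k) % 26 + ord('A')))
    return ''.join(out)

def vigenere_decrypt(ciphertext, keyword):
    # M_i = (C_i - K_(i mod m)) mod 26
    key = keyword.upper()
    out = []
    for i, ch in enumerate(ciphertext.upper()):
        c = ord(ch) - ord('A')
        k = ord(key[i % len(key)]) - ord('A')
        out.append(chr((c - k) % 26 + ord('A')).lower())
    return ''.join(out)

assert vigenere_encrypt("attackatdawn", "LEMON") == "LXFOPVEFRNHR"
assert vigenere_decrypt("LXFOPVEFRNHR", "LEMON") == "attackatdawn"

Both functions assume the text contains only letters; a fuller implementation would need a policy for spaces, punctuation and case.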
Cryptanalysis
The idea behind the Vigenère cipher, like all other polyalphabetic ciphers, is to disguise the plaintext letter frequency to interfere with a straightforward application of frequency analysis. For instance, if P is the most frequent letter in a ciphertext whose plaintext is in English, one might suspect that P corresponds to e since e is the most frequently used letter in English. However, by using the Vigenère cipher, e can be enciphered as different ciphertext letters at different points in the message, which defeats simple frequency analysis.
The primary weakness of the Vigenère cipher is the repeating nature of its key. If a cryptanalyst correctly guesses the key's length n, the cipher text can be treated as n interleaved Caesar ciphers, which can easily be broken individually. The key length may be discovered by brute force testing each possible value of n, or Kasiski examination and the Friedman test can help to determine the key length (see below: and ).
Kasiski examination
In 1863, Friedrich Kasiski was the first to publish a successful general attack on the Vigenère cipher. Earlier attacks relied on knowledge of the plaintext or the use of a recognizable word as a key. Kasiski's method had no such dependencies. Although Kasiski was the first to publish an account of the attack, it is clear that others had been aware of it. In 1854, Charles Babbage was goaded into breaking the Vigenère cipher when John Hall Brock Thwaites submitted a "new" cipher to the Journal of the Society of the Arts. When Babbage showed that Thwaites' cipher was essentially just another recreation of the Vigenère cipher, Thwaites presented a challenge to Babbage: given an original text (from Shakespeare's The Tempest : Act 1, Scene 2) and its enciphered version, he was to find the key words that Thwaites had used to encipher the original text. Babbage soon found the key words: "two" and "combined". Babbage then enciphered the same passage from Shakespeare using different key words and challenged Thwaites to find Babbage's key words. Babbage never explained the method that he used. Studies of Babbage's notes reveal that he had used the method later published by Kasiski and suggest that he had been using the method as early as 1846.
The Kasiski examination, also called the Kasiski test, takes advantage of the fact that repeated words are, by chance, sometimes encrypted using the same key letters, leading to repeated groups in the ciphertext. For example, consider the following encryption using the keyword ABCD:
Key: ABCDABCDABCDABCDABCDABCDABCD
Plaintext: cryptoisshortforcryptography
Ciphertext: CSASTPKVSIQUTGQUCSASTPIUAQJB
There is an easily noticed repetition in the ciphertext, and so the Kasiski test will be effective.
The distance between the repetitions of CSASTP is 16. If it is assumed that the repeated segments represent the same plaintext segments, that implies that the key is 16, 8, 4, 2, or 1 characters long. (All factors of the distance are possible key lengths; a key of length one is just a simple Caesar cipher, and its cryptanalysis is much easier.) Since key lengths 2 and 1 are unrealistically short, one needs to try only lengths 16, 8 or 4. Longer messages make the test more accurate because they usually contain more repeated ciphertext segments. The following ciphertext has two segments that are repeated:
Ciphertext: VHVSSPQUCEMRVBVBBBVHVSURQGIBDUGRNICJQUCERVUAXSSR
The distance between the repetitions of VHVS is 18. If it is assumed that the repeated segments represent the same plaintext segments, that implies that the key is 18, 9, 6, 3, 2 or 1 character long. The distance between the repetitions of QUCE is 30 characters. That means that the key length could be 30, 15, 10, 6, 5, 3, 2 or 1 character long. By taking the intersection of those sets, one could safely conclude that the most likely key length is 6 since 3, 2, and 1 are unrealistically short.
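The repeated-segment search and the factoring step can be automated. The sketch below is a rough Python illustration (the function name and the trigram length are arbitrary choices): it records distances between repeated segments and counts the factors of those distances; the factors that recur most often are candidate key lengths.

```python
from collections import Counter

def kasiski_candidates(ciphertext: str, seg_len: int = 3, max_key_len: int = 20) -> Counter:
    """Count factors of distances between repeated segments; frequent factors are candidate key lengths."""
    first_seen = {}
    factor_counts = Counter()
    for i in range(len(ciphertext) - seg_len + 1):
        seg = ciphertext[i:i + seg_len]
        if seg in first_seen:
            distance = i - first_seen[seg]
            for f in range(2, max_key_len + 1):
                if distance % f == 0:
                    factor_counts[f] += 1
        else:
            first_seen[seg] = i
    return factor_counts

ct = "VHVSSPQUCEMRVBVBBBVHVSURQGIBDUGRNICJQUCERVUAXSSR"
print(kasiski_candidates(ct).most_common())  # factors 2, 3 and 6 dominate; 6 is the plausible key length
```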
Friedman test
The Friedman test (sometimes known as the kappa test) was invented during the 1920s by William F. Friedman, who used the index of coincidence, which measures the unevenness of the cipher letter frequencies, to break the cipher. By knowing the probability $\kappa_p$ that any two randomly chosen source-language letters are the same (around 0.067 for monocase English) and the probability $\kappa_r$ of a coincidence for a uniform random selection from the alphabet ($1/26 \approx 0.0385$ for English), the key length can be estimated as:
$\text{key length} \approx \dfrac{\kappa_p - \kappa_r}{\kappa_o - \kappa_r}$
from the observed coincidence rate
$\kappa_o = \dfrac{\sum_{i=1}^{c} n_i (n_i - 1)}{N(N-1)}$
in which c is the size of the alphabet (26 for English), N is the length of the text and $n_1$ to $n_c$ are the observed ciphertext letter frequencies, as integers.
That is, however, only an approximation; its accuracy increases with the length of the text. It would, in practice, be necessary to try various key lengths that are close to the estimate. A better approach for repeating-key ciphers is to copy the ciphertext into rows of a matrix with as many columns as an assumed key length and then to compute the average index of coincidence with each column considered separately. When that is done for each possible key length, the highest average I.C. then corresponds to the most-likely key length. Such tests may be supplemented by information from the Kasiski examination.
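Both the single Friedman estimate and the column-wise index-of-coincidence search can be sketched in a few lines of Python (illustrative only; the constants 0.067 and 0.0385 are those quoted above, and the function names are invented for the example):

```python
def index_of_coincidence(text: str) -> float:
    """kappa_o: probability that two randomly chosen letters of the text are equal."""
    n = len(text)
    freqs = [text.count(chr(ord('A') + i)) for i in range(26)]
    return sum(f * (f - 1) for f in freqs) / (n * (n - 1))

def friedman_estimate(ciphertext: str, kp: float = 0.067, kr: float = 0.0385) -> float:
    """Single-number key-length estimate (kp - kr) / (kappa_o - kr)."""
    return (kp - kr) / (index_of_coincidence(ciphertext) - kr)

def best_key_length(ciphertext: str, max_len: int = 20) -> int:
    """Key length whose columns give the highest average index of coincidence."""
    def avg_ic(length: int) -> float:
        cols = [ciphertext[i::length] for i in range(length)]
        return sum(index_of_coincidence(c) for c in cols if len(c) > 1) / length
    return max(range(1, max_len + 1), key=avg_ic)
```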
Frequency analysis
Once the length of the key is known, the ciphertext can be rewritten into that many columns, with each column corresponding to a single letter of the key. Each column consists of plaintext that has been encrypted by a single Caesar cipher. The Caesar key (shift) is just the letter of the Vigenère key that was used for that column. Using methods similar to those used to break the Caesar cipher, the letters in the ciphertext can be discovered.
An improvement to the Kasiski examination, known as Kerckhoffs' method, matches each column's letter frequencies to shifted plaintext frequencies to discover the key letter (Caesar shift) for that column. Once every letter in the key is known, all the cryptanalyst has to do is to decrypt the ciphertext and reveal the plaintext. Kerckhoffs' method is not applicable if the Vigenère table has been scrambled, rather than using normal alphabetic sequences, but Kasiski examination and coincidence tests can still be used to determine key length.
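A minimal sketch of the column-by-column attack (Python, illustrative only): each column is scored against approximate English letter frequencies for every possible Caesar shift, and the best-scoring shift gives that column's key letter. The frequency table below is an assumed approximation, not taken from the article.

```python
# Approximate relative frequencies of A-Z in English text (assumed values).
ENGLISH_FREQ = [
    0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
    0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
    0.063, 0.091, 0.028, 0.010, 0.024, 0.002, 0.020, 0.001,
]

def best_shift(column: str) -> int:
    """Return the Caesar shift (key letter) that best matches English frequencies."""
    def score(shift: int) -> float:
        counts = [0] * 26
        for ch in column:
            counts[(ord(ch) - ord('A') - shift) % 26] += 1  # undo the assumed shift
        return sum(c * f for c, f in zip(counts, ENGLISH_FREQ))
    return max(range(26), key=score)

def recover_key(ciphertext: str, key_length: int) -> str:
    columns = [ciphertext[i::key_length] for i in range(key_length)]
    return ''.join(chr(best_shift(col) + ord('A')) for col in columns)
```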
Key elimination
The Vigenère cipher, with normal alphabets, essentially uses modulo arithmetic, which is commutative. Therefore, if the key length is known (or guessed), subtracting the cipher text from itself, offset by the key length, will produce the plain text subtracted from itself, also offset by the key length. If any "probable word" in the plain text is known or can be guessed, its self-subtraction can be recognized, which allows recovery of the key by subtracting the known plaintext from the cipher text. Key elimination is especially useful against short messages. For example using LION as the key below:
Then subtract the ciphertext from itself with a shift of the key length 4 for LION.
Which is nearly equivalent to subtracting the plaintext from itself by the same shift.
Which is algebraically represented, for $i > \eta$ (where $\eta$ is the key length), as $C_i - C_{i-\eta} \equiv M_i - M_{i-\eta} \pmod{26}$, since the repeating key cancels out.
In this example, the probable plaintext word brownfox is known. Its self-subtraction with a shift of 4 gives the result omaz, which corresponds with the 9th through 12th letters in the result of the larger examples above. The known section and its location are thereby verified.
Subtract brow from that range of the ciphertext.
This produces the final result, the reveal of the key LION.
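A rough Python sketch of key elimination under these assumptions (illustrative only; the function names are invented for the example): the ciphertext is subtracted from itself at an offset equal to the assumed key length, the same is done to a probable word, matching patterns locate the word, and the key then falls out by subtracting the known plaintext from the ciphertext.

```python
def self_difference(text: str, offset: int) -> list[int]:
    """(text[i] - text[i+offset]) mod 26; for Vigenère ciphertext with key length == offset,
    this equals the same difference taken over the plaintext, so the key drops out."""
    vals = [ord(c) - ord('A') for c in text.upper()]
    return [(vals[i] - vals[i + offset]) % 26 for i in range(len(vals) - offset)]

# Self-subtraction of the probable word brownfox with a shift of 4 gives "omaz".
assert ''.join(chr(d + ord('A')) for d in self_difference("brownfox", 4)) == "OMAZ"

def find_probable_word(ciphertext: str, word: str, key_length: int) -> list[int]:
    """Positions at which the word's self-difference pattern appears in the ciphertext's."""
    ct_diff = self_difference(ciphertext, key_length)
    w_diff = self_difference(word, key_length)
    return [i for i in range(len(ct_diff) - len(w_diff) + 1)
            if ct_diff[i:i + len(w_diff)] == w_diff]

def key_from_crib(ciphertext: str, word: str, position: int) -> str:
    """Subtract the known plaintext from the ciphertext at its position to recover key letters."""
    return ''.join(chr((ord(c) - ord(p)) % 26 + ord('A'))
                   for c, p in zip(ciphertext[position:].upper(), word.upper()))
```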
Variants
Running key
The running key variant of the Vigenère cipher was also considered unbreakable at one time. For the key, this version uses a block of text as long as the plaintext. Since the key is as long as the message, the Friedman and Kasiski tests no longer work, as the key is not repeated.
If multiple keys are used, the effective key length is the least common multiple of the lengths of the individual keys. For example, using the two keys GO and CAT, whose lengths are 2 and 3, one obtains an effective key length of 6 (the least common multiple of 2 and 3). This can be understood as the point where both keys line up.
Encrypting twice, first with the key GO and then with the key CAT is the same as encrypting once with a key produced by encrypting one key with the other.
This is demonstrated by encrypting attackatdawn with IOZQGH, which produces the same ciphertext as encrypting it first with GO and then with CAT.
If key lengths are relatively prime, the effective key length grows exponentially as the individual key lengths are increased. For example, while the effective length of keys 10, 12, and 15 characters is only 60, that of keys of 8, 11, and 15 characters is 1320. If this effective key length is longer than the ciphertext, it achieves the same immunity to the Friedman and Kasiski tests as the running key variant.
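Using the same letter-to-number conventions as the sketch above, the combined key can be computed by encrypting one key with the other, each repeated to the least common multiple of their lengths. The function below is illustrative only:

```python
from math import lcm

def combine_keys(k1: str, k2: str) -> str:
    """Repeat both keys to lcm of their lengths and add them letter-wise (mod 26)."""
    n = lcm(len(k1), len(k2))
    return ''.join(
        chr((ord(k1[i % len(k1)]) + ord(k2[i % len(k2)]) - 2 * ord('A')) % 26 + ord('A'))
        for i in range(n))

assert combine_keys("GO", "CAT") == "IOZQGH"
```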
If one uses a key that is truly random, is at least as long as the encrypted message, and is used only once, the Vigenère cipher is theoretically unbreakable. However, in that case, the key, not the cipher, provides cryptographic strength, and such systems are properly referred to collectively as one-time pad systems, irrespective of the ciphers employed.
Variant Beaufort
A simple variant is to encrypt by using the Vigenère decryption method and to decrypt by using Vigenère encryption. That method is sometimes referred to as "Variant Beaufort". It is different from the Beaufort cipher, created by Francis Beaufort, which is similar to Vigenère but uses a slightly modified enciphering mechanism and tableau. The Beaufort cipher is a reciprocal cipher.
Gronsfeld cipher
Despite the Vigenère cipher's apparent strength, it never became widely used throughout Europe. The Gronsfeld cipher is a variant created by Count Gronsfeld (Josse Maximilaan van Gronsveld né van Bronckhorst); it is identical to the Vigenère cipher except that it uses just 10 different cipher alphabets, corresponding to the digits 0 to 9. A Gronsfeld key of 0123 is the same as a Vigenère key of ABCD. The Gronsfeld cipher is strengthened because its key is not a word, but it is weakened because it has just 10 cipher alphabets. It is Gronsfeld's cipher that became widely used throughout Germany and Europe, despite its weaknesses.
Vigenèreʼs autokey cipher
Vigenère actually invented a stronger cipher, an autokey cipher. The name "Vigenère cipher" became associated with a simpler polyalphabetic cipher instead. In fact, the two ciphers were often confused, and both were sometimes called le chiffre indéchiffrable. Babbage actually broke the much-stronger autokey cipher, but Kasiski is generally credited with the first published solution to the fixed-key polyalphabetic ciphers.
See also
Roger Frontenac (Nostradamus quatrain decryptor, 1950)
A Simple Vigenere Cipher for Excel VBA: Provides VBA code to use the Excel worksheet for Vigenere cipher encryption and decryption.
References
Citations
Sources
Notes
External links
Articles
History of the cipher from Cryptologia
Basic Cryptanalysis at H2G2
"Lecture Notes on Classical Cryptology" including an explanation and derivation of the Friedman Test
Classical ciphers
Stream ciphers
de:Polyalphabetische Substitution#Vigenère-Verschlüsselung |
69794 | https://en.wikipedia.org/wiki/Cryptogram | Cryptogram | A cryptogram is a type of puzzle that consists of a short piece of encrypted text. Generally the cipher used to encrypt the text is simple enough that the cryptogram can be solved by hand. Substitution ciphers where each letter is replaced by a different letter or number are frequently used. To solve the puzzle, one must recover the original lettering. Though once used in more serious applications, they are now mainly printed for entertainment in newspapers and magazines.
Other types of classical ciphers are sometimes used to create cryptograms. An example is the book cipher where a book or article is used to encrypt a message.
History of cryptograms
The ciphers used in cryptograms were not originally created for entertainment purposes, but for real encryption of military or personal secrets.
The first use of the cryptogram for entertainment purposes occurred during the Middle Ages by monks who had spare time for intellectual games. A manuscript found at Bamberg states that Irish visitors to the court of Merfyn Frych ap Gwriad (died 844), king of Gwynedd in Wales were given a cryptogram which could only be solved by transposing the letters from Latin into Greek. Around the thirteenth century, the English monk Roger Bacon wrote a book in which he listed seven cipher methods, and stated that "a man is crazy who writes a secret in any other way than one which will conceal it from the vulgar." In the 19th century Edgar Allan Poe helped to popularize cryptograms with many newspaper and magazine articles.
Well-known examples of cryptograms in contemporary culture are the syndicated newspaper puzzles Cryptoquip and Cryptoquote, from King Features.
In a public challenge, writer J.M. Appel announced on September 28, 2014, that the table of contents page of his short story collection, Scouting for the Reaper, also doubled as a cryptogram, and he pledged an award for the first to solve it.
Solving a cryptogram
Cryptograms based on substitution ciphers can often be solved by frequency analysis and by recognizing letter patterns in words, such as one letter words, which, in English, can only be "i" or "a" (and sometimes "o"). Double letters, apostrophes, and the fact that no letter can substitute for itself in the cipher also offer clues to the solution. Occasionally, cryptogram puzzle makers will start the solver off with a few letters.
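As a small illustration of the letter-pattern idea, the following Python sketch (illustrative only) reduces each word to its repetition pattern so that a word list can be filtered for plaintext candidates that could map onto a given cipher word:

```python
def letter_pattern(word: str) -> tuple[int, ...]:
    """Map a word to its letter-repetition pattern, e.g. 'people' -> (0, 1, 2, 0, 3, 1)."""
    seen = {}
    return tuple(seen.setdefault(ch, len(seen)) for ch in word.lower())

def candidates(cipher_word: str, dictionary: list[str]) -> list[str]:
    """Dictionary words sharing the cipher word's pattern are possible plaintexts."""
    target = letter_pattern(cipher_word)
    return [w for w in dictionary if letter_pattern(w) == target]

print(candidates("XJJX", ["noon", "look", "that", "sees"]))  # ['noon', 'sees']
```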
Other crypto puzzles
While the Cryptogram has remained popular, over time other puzzles similar to it have emerged. One of these is the Cryptoquote, which is a famous quote encrypted in the same way as a Cryptogram. A more recent version, with a biblical twist, is CodedWord. This puzzle makes the solution available only online where it provides a short exegesis on the biblical text. Yet a third is the Cryptoquiz. This puzzle starts off at the top with a category (unencrypted). For example, "Flowers" might be used. Below this is a list of encrypted words which are related to the stated category. The person must then solve for the entire list to finish the puzzle. Yet another type involves using numbers as they relate to texting to solve the puzzle.
The Zodiac Killer sent four cryptograms to police while he was still active. Despite much research and many investigations, only two of these have been translated, which was of no help in identifying the serial killer.
See also
List of famous ciphertexts
Musical cryptogram
American Cryptogram Association
References
History of cryptography
Word puzzles |
70247 | https://en.wikipedia.org/wiki/Bureau%20of%20Industry%20and%20Security | Bureau of Industry and Security | The Bureau of Industry and Security (BIS) is an agency of the United States Department of Commerce that deals with issues involving national security and high technology. A principal goal for the bureau is helping stop the proliferation of weapons of mass destruction, while furthering the growth of United States exports. The Bureau is led by the Under Secretary of Commerce for Industry and Security.
The mission of the BIS is to advance U.S. national security, foreign policy, and economic interests. BIS's activities include regulating the export of sensitive goods and dual-use technologies in an effective and efficient manner; enforcing export control, anti-boycott, and public safety laws; cooperating with and assisting other countries on export control and strategic trade issues; assisting U.S. industry to comply with international arms control agreements; monitoring the viability of the U.S. defense–industrial base; and promoting federal initiatives and public-private partnerships to protect the nation's critical infrastructures.
Items on the Commerce Control List (CCL) – which includes many sensitive goods and technologies like encryption software – require a permit from the Department of Commerce before they can be exported. To determine whether an export permit is required, an Export Control Classification Number (ECCN) is used.
Organization
The Bureau of Industry and Security, a component of the United States Department of Commerce, is organized by the United States Secretary of Commerce as follows:
Under Secretary of Commerce for Industry and Security
Deputy Under Secretary of Commerce for Industry and Security
Assistant Secretary of Commerce for Export Administration
Office of National Security and Technology Transfer Controls
Office of Nonproliferation and Treaty Compliance
Office of Strategic Industries and Economic Security
Office of Exporter Services
Office of Technology Evaluation
Assistant Secretary of Commerce for Export Enforcement
Office of Export Enforcement
Office of Enforcement Analysis
Office of Antiboycott Compliance
Guiding principles of the Bureau of Industry and Security
The main focus of the bureau is the security of the United States, which includes its national security, economic security, cyber security, and homeland security. For example, in the area of dual-use export controls, BIS administers and enforces such controls to stem the proliferation of weapons of mass destruction and the means of delivering them, to halt the spread of weapons to terrorists or countries of concern, and to further U.S. foreign policy objectives. Where there is credible evidence suggesting that the export of a dual-use item threatens U.S. security, the Bureau is empowered to prevent export of the item.
In addition to national security, BIS's function is to ensure the health of the U.S. economy and the competitiveness of U.S. industry. BIS promotes a strong defense–industrial base that can develop and provide technologies that will enable the United States to maintain its military superiority. BIS takes care to ensure that its regulations do not impose unreasonable restrictions on the legitimate international commercial activity that is necessary for the health of U.S. industry.
Private sector collaboration
BIS works with the private sector, including the aerospace, microprocessor, defense and other high-tech industries, which today controls a greater share of critical U.S. resources than in the past. Because the health of U.S. industry is dependent on U.S. security, BIS has formed a symbiotic relationship between industry and security, which is reflected in the formulation, application, and enforcement of BIS rules and policies.
Shifting global priorities
BIS activities and regulations also seek to adapt to changing global conditions and challenges. The political, economic, technological, and security environment that exists today is substantially different than that of only a decade ago. Laws, regulations, or practices that do not take into account these new global realities—and that do not have sufficient flexibility to allow for adaptation in response to future changes—ultimately harm national security by imposing costs and burdens on U.S. industry without any corresponding benefit to U.S. security. In the area of exports, these significant geopolitical changes suggest that the U.S. control regime that in the past was primarily list-based must shift to a mix of list-based controls and controls that target specific end-uses and end-users of concern. BIS thinks about how new technologies can be utilized in designing better export controls and enforcing controls more effectively.
BIS strives to work cooperatively with state and local government officials, first responders, and federal executive departments and agencies, including the National Security Council, Department of Homeland Security, Department of State, Department of Defense, Department of Energy, Department of Justice, and the Intelligence Community. BIS consults with its oversight committees (the House Foreign Affairs Committee and the Senate Banking, Housing, and Urban Affairs Committee) and other appropriate Members of Congress and congressional staff on matters of mutual interest.
International cooperation
International cooperation is critical to BIS's activities. The mission of promoting security depends heavily upon international cooperation with the United States's principal trading partners and other countries of strategic importance, such as major transshipment hubs. BIS takes the viewpoint that when seeking to control the spread of dangerous goods and technologies, protecting critical infrastructures, and ensuring the existence of a strong defense industrial base, international cooperation is critical. With regard to export control laws in particular, effective enforcement is greatly enhanced by both international cooperation and an effort to harmonize the substance of U.S. laws with those of our principal trading partners. International cooperation, however, does not mean "settling on the lowest common denominator." Where consensus cannot be broadly obtained, the BIS will maintain its principles, often through cooperation among smaller groups of like-minded partners.
See also
Title 15 of the Code of Federal Regulations
Commodity Classification Automated Tracking System
References
External links
Bureau of Industry and Security
Bureau of Industry and Security in the Federal Register
Search BIS Screening List
Export and import control
United States Department of Commerce agencies
Federal law enforcement agencies of the United States |
70342 | https://en.wikipedia.org/wiki/Direct%20Connect%20%28protocol%29 | Direct Connect (protocol) | Direct Connect (DC) is a peer-to-peer file sharing protocol. Direct Connect clients connect to a central hub and can download files directly from one another. Advanced Direct Connect can be considered a successor protocol.
Hubs feature a list of clients or users connected to them. Users can search for files and download them from other clients, as well as chat with other users.
History
NeoModus was started as a company funded by the adware "Direct Connect" by Jon Hess in November, 1999 while he was in high school.
The first third-party client was called "DClite", which never fully supported the file sharing aspects of the protocol. Hess released a new version of Direct Connect, requiring a simple encryption key to initiate a connection, locking out third-party clients. The encryption key was cracked, and the author of DClite released a new version of DClite compatible with the new software from NeoModus. Some time after, DClite was rewritten as Open Direct Connect with the purpose of having an MDI user interface and using plug-ins for file sharing protocols (similar to MLDonkey). Open Direct Connect also did not have complete support for the full file sharing aspects of the protocol, but a port to Java, however, did. Later on, other clients such as DCTC (Direct Connect Text Client) and DC++ became popular.
The DCDev archive contains discussions of protocol changes for development of DC in the years 2003–2005.
Protocol
The Direct Connect protocol is a text-based computer protocol, in which commands and their information are sent in clear text, without encryption in original NeoModus software (encryption is available as a protocol extension). As clients connect to a central source of distribution (the hub) of information, the hub requires a substantial amount of upload bandwidth available.
There is no official specification of the protocol, meaning that every client and hub (besides the original NeoModus client and hub) has been forced to reverse engineer the information. As such, any protocol specification this article may reference is likely inaccurate and/or incomplete.
The client-server (as well as client-client, where one client acts as "server") aspect of the protocol stipulates that the server respond first when a connection is being made. For example, when a client connects to a hub's socket, the hub is first to respond to the client.
The protocol lacks a specified default character encoding for clients or hubs. The original client and hub use ASCII encoding instead of that of the operating system. This allows migration to UTF-8 encoding in newer software.
Port 411 is the default port for hubs, and 412 for client-to-client connections. If either of these ports are already in use, the port number is incremented until the number of a free port is found for use. For example, if 411, 412 and 413 are in use, then port 414 will be used.
Hub addresses are in the following form: dchub://example.com[:411], where 411 is an optional port.
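As a small illustration of these addressing rules, the following Python sketch (illustrative only; it is not taken from any Direct Connect implementation) parses a dchub:// address with the default port of 411 and shows the increment-on-conflict behaviour for choosing a local port:

```python
import socket
from urllib.parse import urlparse

def parse_hub_address(address: str) -> tuple[str, int]:
    """Parse 'dchub://example.com[:411]' into (host, port), defaulting to 411."""
    parsed = urlparse(address)
    return parsed.hostname, parsed.port or 411

def first_free_port(start: int = 411) -> int:
    """Try 411, 412, ... until an unused local port is found, as described above."""
    port = start
    while True:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("", port))
                return port
            except OSError:
                port += 1

print(parse_hub_address("dchub://example.com"))       # ('example.com', 411)
print(parse_hub_address("dchub://example.com:5000"))  # ('example.com', 5000)
```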
There is no global identification scheme; instead, users are identified with their nickname on a hub-to-hub basis.
An incoming request for a client-client connection cannot be linked with an actual connection.
A search result cannot be linked with a particular search.
The ability to kick or move (redirect) a user to another hub is supported by the protocol. If a user is kicked, the hub is not required to give that user a specific reason, and there is no restriction on where a user can be redirected to. However, if another client in power instructs the hub to kick, that client may send out a notification message before doing so. Redirecting a user must be accompanied by a reason. There is no HTTP referer equivalent.
Hubs may send out user commands to clients. These commands are only raw protocol commands and are used mostly for making a particular task simpler. For example, the hub cannot send a user command that will trigger the default browser to visit a website. It can, however, add the command "+rules" (where '+' indicates to the hub that it's a command - this may vary) to display the hub's rules.
The peer-to-peer part of the protocol is based on a concept of "slots" (similar to number of open positions for a job). These slots denote the number of people that are allowed to download from a user at any given time and are controlled by the client.
In client-to-client connections, the parties generate a random number to see who should be allowed to download first, and the client with the greater number wins.
Transporting downloads and connecting to the hub requires TCP, while active searches use UDP.
There are two kinds of modes a user can be in: either "active" or "passive" mode. Clients using active mode can download from anyone else on the network, while clients using passive mode users can only download from active users. In NeoModus Direct Connect, passive mode users receive other passive mode users' search results, but the user will not be able to download anything. In DC++, users will not receive those search results. In NeoModus Direct Connect, all users will be sent at most five search results per query. If a user has searched, DC++ will respond with ten search results when the user is in active mode and five when the user is in passive mode. Passive clients will be sent search results through the hub, while active clients will receive the results directly.
Protocol delimiters are "$", "|", and the space character. The protocol defines escape sequences for these (and a few other) characters, and most software uses them correctly in the login (Lock to Key) sequence. For some reason those escape sequences were ignored by the DC++ developers, who instead use HTML equivalents when these characters are to be viewed by the user.
Continued interest exists in features such as ratings and language packs. However, the authors of DC++ have been actively working on a complete replacement of the Direct Connect protocol called Advanced Direct Connect.
One example of an added feature to the protocol, in comparison with the original protocol, is the broadcasting of Tiger-Tree Hashing of shared files (TTH). The advantages of this include verifying that a file is downloaded correctly, and the ability to find files independently of their names.
Hublists
Direct Connect used for DDoS attacks
As the protocol allows hubs to redirect users to other hubs, malicious hubs have redirected users to places other than real Direct Connect hubs, effectively causing a Distributed Denial of Service attack. The hubs may alter the IP in client to client connections, pointing to a potential victim.
The CTM Exploit surfaced in 2006–2007, during which period the whole Direct Connect network suffered from DDoS attacks. The situation prompted developers to take security issues more seriously.
As of February 2009, an extension for clients was proposed in order for the attacked party to find out the hub sending the connecting users.
Direct Connect Network Foundation
The Direct Connect Network Foundation (DCNF) is a non-profit organization registered in Sweden that aims to improve the DC network by improving software, protocols and other services in the network.
Articles and papers
The DCNF maintains a list of articles, papers and more documentation that relate to DC.
See also
Comparison of Direct Connect software
References
External links
NMDC Protocol Wiki (Mirror)
NMDC Protocol Document
NMDC Protocol
Direct Connect network
File sharing networks |
70662 | https://en.wikipedia.org/wiki/Chaffing%20and%20winnowing | Chaffing and winnowing | Chaffing and winnowing is a cryptographic technique to achieve confidentiality without using encryption when sending data over an insecure channel. The name is derived from agriculture: after grain has been harvested and threshed, it remains mixed together with inedible fibrous chaff. The chaff and grain are then separated by winnowing, and the chaff is discarded. The cryptographic technique was conceived by Ron Rivest and published in an on-line article on 18 March 1998. Although it bears similarities to both traditional encryption and steganography, it cannot be classified under either category.
This technique allows the sender to deny responsibility for encrypting their message. When using chaffing and winnowing, the sender transmits the message unencrypted, in clear text. Although the sender and the receiver share a secret key, they use it only for authentication. However, a third party can make their communication confidential by simultaneously sending specially crafted messages through the same channel.
How it works
The sender (Alice) wants to send a message to the receiver (Bob). In the simplest setup, Alice enumerates the symbols (usually bits) in her message and sends out each in a separate packet. In general the method requires each symbol to arrive in-order and to be authenticated by the receiver. When implemented over networks that may change the order of packets, the sender places the symbol's serial number in the packet, the symbol itself (both unencrypted), and a message authentication code (MAC). Many MACs use a secret key Alice shares with Bob, but it is sufficient that the receiver has a method to authenticate the packets. Charles, who transmits Alice's packets to Bob, interleaves them with bogus packets (called "chaff") that carry matching serial numbers, arbitrary symbols, and a random number in place of the MAC. Charles does not need to know the key to do that (real MACs are long enough that it is extremely unlikely to generate a valid one by chance, unlike in the example). Bob uses the MAC to find the authentic messages and drops the "chaff" messages. This process is called "winnowing".
An eavesdropper located between Alice and Charles can easily read Alice's message. But an eavesdropper between Charles and Bob would have to tell which packets are bogus and which are real (i.e. to winnow, or "separate the wheat from the chaff"). That is infeasible if the MAC used is secure and Charles does not leak any information on packet authenticity (e.g. via timing).
If a fourth party (named Darth) joins the example and wants to send counterfeit messages impersonating Alice, he would need Alice to disclose her secret key. If Darth cannot force Alice to disclose an authentication key (the knowledge of which would enable him to forge messages from Alice), then her messages will remain confidential. Charles, on the other hand, is no target of Darth's at all, since Charles does not even possess any secret keys that could be disclosed.
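A minimal Python sketch of the scheme as described (illustrative only: the packet format, the 32-byte random chaff "MACs", and the shared-key handling are simplified assumptions, with HMAC-SHA256 standing in for the MAC):

```python
import hmac, hashlib, secrets

KEY = b"shared secret between Alice and Bob"  # used only for authentication

def mac(serial: int, bit: int, key: bytes = KEY) -> bytes:
    return hmac.new(key, f"{serial}:{bit}".encode(), hashlib.sha256).digest()

def alice_packets(bits):
    """Alice sends each bit in clear, with a serial number and a valid MAC."""
    return [(i, b, mac(i, b)) for i, b in enumerate(bits)]

def charles_add_chaff(packets):
    """Charles interleaves bogus packets: complementary bits with random 'MACs'."""
    chaffed = []
    for serial, bit, tag in packets:
        chaffed.append((serial, bit, tag))
        chaffed.append((serial, 1 - bit, secrets.token_bytes(32)))  # chaff
    return chaffed

def bob_winnow(packets):
    """Bob keeps only packets whose MAC verifies, then reassembles the message."""
    good = {s: b for s, b, t in packets if hmac.compare_digest(t, mac(s, b))}
    return [good[s] for s in sorted(good)]

message = [1, 0, 1, 1, 0]
assert bob_winnow(charles_add_chaff(alice_packets(message))) == message
```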
Variations
The simple variant of the chaffing and winnowing technique described above adds many bits of overhead per bit of original message. To make the transmission more efficient, Alice can process her message with an all-or-nothing transform and then send it out in much larger chunks. The chaff packets will have to be modified accordingly. Because the original message can be reconstructed only by knowing all of its chunks, Charles needs to send only enough chaff packets to make finding the correct combination of packets computationally infeasible.
Chaffing and winnowing lends itself especially well to use in packet-switched network environments such as the Internet, where each message (whose payload is typically small) is sent in a separate network packet. In another variant of the technique, Charles carefully interleaves packets coming from multiple senders. That eliminates the need for Charles to generate and inject bogus packets in the communication. However, the text of Alice's message cannot be well protected from other parties who are communicating via Charles at the same time. This variant also helps protect against information leakage and traffic analysis.
Implications for law enforcement
Ron Rivest suggests that laws related to cryptography, including export controls, would not apply to chaffing and winnowing because it does not employ any encryption at all.
The author of the paper proposes that the security implications of handing everyone's authentication keys to the government for law-enforcement purposes would be far too risky, since possession of the key would enable someone to masquerade and communicate as another entity, such as an airline controller. Furthermore, Ron Rivest contemplates the possibility of rogue law enforcement officials framing up innocent parties by introducing the chaff into their communications, concluding that drafting a law restricting chaffing and winnowing would be far too difficult.
Trivia
The term winnowing was suggested by Ronald Rivest's father. Before the publication of Rivest's paper in 1998 other people brought to his attention a 1965 novel, Rex Stout's The Doorbell Rang, which describes the same concept and was thus included in the paper's references.
See also
References
Cryptography |
71291 | https://en.wikipedia.org/wiki/PLI | PLI | PLI may refer to:
Pascual Liner Inc.
Performance-linked incentives
Perpetual Income & Growth Investment Trust (LSE: PLI)
Practising Law Institute
PL/I ("Programming Language One")
PLI (gene)
Pli: the Warwick Journal of Philosophy
Portland–Lewiston Interurban, Maine, U.S.
Pragmatic language impairment, an earlier term for Social Communication Disorder
Private line interface, part of ARPANET encryption devices
Program Language Interface, in Verilog
Verilog Procedural Interface or PLI 2
Politics
Independent Liberal Party (Nicaragua) or Partido Liberal Independiente
Italian Liberal Party or Partito Liberale Italiano
Italian Liberal Party (1997)
See also
Ply (disambiguation)
ISO 639:pli or Pali language
Pli selon pli (Fold by fold), a 1960 classical piece by French composer Pierre Boulez |
71630 | https://en.wikipedia.org/wiki/Unicity%20distance | Unicity distance | In cryptography, unicity distance is the length of an original ciphertext needed to break the cipher by reducing the number of possible spurious keys to zero in a brute force attack. That is, after trying every possible key, there should be just one decipherment that makes sense, i.e. expected amount of ciphertext needed to determine the key completely, assuming the underlying message has redundancy.
Claude Shannon defined the unicity distance in his 1949 paper "Communication Theory of Secrecy Systems".
Consider an attack on the ciphertext string "WNAIW" encrypted using a Vigenère cipher with a five letter key. Conceivably, this string could be deciphered into any other string—RIVER and WATER are both possibilities for certain keys. This is a general rule of cryptanalysis: with no additional information it is impossible to decode this message.
Of course, even in this case, only a certain number of five letter keys will result in English words. Trying all possible keys we will not only get RIVER and WATER, but SXOOS and KHDOP as well. The number of "working" keys will likely be very much smaller than the set of all possible keys. The problem is knowing which of these "working" keys is the right one; the rest are spurious.
Relation with key size and possible plaintexts
In general, given particular assumptions about the size of the key and the number of possible messages, there is an average ciphertext length where there is only one key (on average) that will generate a readable message. In the example above we see only upper case English characters, so if we assume that the plaintext has this form, then there are 26 possible letters for each position in the string. Likewise if we assume five-character upper case keys, there are $K = 26^5$ possible keys, of which the majority will not "work".
A tremendous number of possible messages, N, can be generated using even this limited set of characters: $N = 26^L$, where L is the length of the message. However, only a smaller set of them is readable plaintext due to the rules of the language, perhaps M of them, where M is likely to be very much smaller than N. Moreover, M has a one-to-one relationship with the number of keys that work, so given K possible keys, only $K \times (M/N)$ of them will "work". One of these is the correct key, the rest are spurious.
Since M/N gets arbitrarily small as the length L of the message increases, there is eventually some L that is large enough to make the number of spurious keys equal to zero. Roughly speaking, this is the L that makes KM/N=1. This L is the unicity distance.
Relation with key entropy and plaintext redundancy
The unicity distance can equivalently be defined as the minimum amount of ciphertext required to permit a computationally unlimited adversary to recover the unique encryption key.
The expected unicity distance can then be shown to be:
$U = \dfrac{H(k)}{D}$
where U is the unicity distance, H(k) is the entropy of the key space (e.g. 128 for $2^{128}$ equiprobable keys, rather less if the key is a memorized pass-phrase), and D is defined as the plaintext redundancy in bits per character.
Now an alphabet of 32 characters can carry 5 bits of information per character (as $32 = 2^5$). In general the number of bits of information per character is $\log_2(N)$, where N is the number of characters in the alphabet and $\log_2$ is the binary logarithm. So for English each character can convey $\log_2(26) \approx 4.7$ bits of information.
However the average amount of actual information carried per character in meaningful English text is only about 1.5 bits per character. So the plain text redundancy is D = 4.7 − 1.5 = 3.2.
Basically the bigger the unicity distance the better. For a one time pad of unlimited size, given the unbounded entropy of the key space, we have $U = \infty$, which is consistent with the one-time pad being unbreakable.
Unicity distance of substitution cipher
For a simple substitution cipher, the number of possible keys is $26! \approx 4.03 \times 10^{26} \approx 2^{88.4}$, the number of ways in which the alphabet can be permuted. Assuming all keys are equally likely, $H(k) = \log_2(26!) \approx 88.4$ bits. For English text $D = 3.2$, thus $U \approx 88.4 / 3.2 \approx 28$.
So given 28 characters of ciphertext it should be theoretically possible to work out an English plaintext and hence the key.
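The calculation is easy to reproduce. The following Python sketch (illustrative only) computes the unicity distance of a simple substitution cipher from the key-space entropy and the assumed redundancy of 3.2 bits per character:

```python
import math

def unicity_distance(key_entropy_bits: float, redundancy_bits_per_char: float) -> float:
    """U = H(k) / D, as defined above."""
    return key_entropy_bits / redundancy_bits_per_char

# Simple substitution cipher over a 26-letter alphabet: 26! equally likely keys.
H_k = math.log2(math.factorial(26))   # about 88.4 bits
D = math.log2(26) - 1.5               # about 4.7 - 1.5 = 3.2 bits of redundancy per character

print(round(H_k, 1), round(unicity_distance(H_k, D), 1))  # about 88.4 and 27.6 (roughly 28 characters)
```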
Practical application
Unicity distance is a useful theoretical measure, but it doesn't say much about the security of a block cipher when attacked by an adversary with real-world (limited) resources. Consider a block cipher with a unicity distance of three ciphertext blocks. Although there is clearly enough information for a computationally unbounded adversary to find the right key (simple exhaustive search), this may be computationally infeasible in practice.
The unicity distance can be increased by reducing the plaintext redundancy. One way to do this is to deploy data compression techniques prior to encryption, for example by removing redundant vowels while retaining readability. This is a good idea anyway, as it reduces the amount of data to be encrypted.
Ciphertexts greater than the unicity distance can be assumed to have only one meaningful decryption. Ciphertexts shorter than the unicity distance may have multiple plausible decryptions. Unicity distance is not a measure of how much ciphertext is required for cryptanalysis, but how much ciphertext is required for there to be only one reasonable solution for cryptanalysis.
References
External links
Bruce Schneier: How to Recognize Plaintext (Crypto-Gram Newsletter December 15, 1998)
Unicity Distance computed for common ciphers
Cryptography
Cryptographic attacks
Information theory |
71996 | https://en.wikipedia.org/wiki/BlackBerry | BlackBerry | BlackBerry was a brand of smartphones, tablets, and services originally designed and marketed by Canadian company BlackBerry Limited (formerly known as Research In Motion, or RIM). Beginning in 2016, BlackBerry Limited licensed third-party companies to design, manufacture, and market smartphones under the BlackBerry brand. The original licensors were BB Merah Putih for the Indonesian market, Optiemus Infracom for the South Asian market, and BlackBerry Mobile (a trade name of TCL Technology) for all other markets. In summer 2020, the Texas-based startup OnwardMobility signed a new licensing agreement with BlackBerry Limited to develop a new 5G BlackBerry smartphone. OnwardMobility was cooperating with BlackBerry Limited and FIH Mobile (a subsidiary of Foxconn) as they "sought to revitalize the iconic BlackBerry brand through an Android-based, next-gen Wi-Fi device." However, in a statement released on February 18, 2022, OnwardMobility said that not only would the development of the new BlackBerry device would not be moving forward but the company itself would be shutting down, as well.
BlackBerry was one of the most prominent smartphone brands in the world, specializing in secure communications and mobile productivity, and well known for the keyboards on most of its devices. At its peak in September 2013, there were 85 million BlackBerry subscribers worldwide. However, BlackBerry lost its dominant position in the market due to the success of the Android and iOS platforms; its numbers had fallen to 23 million in March 2016 and slipped even further to 11 million in May 2017.
Historically, BlackBerry devices used a proprietary operating system—known as BlackBerry OS—developed by BlackBerry Limited. In 2013, BlackBerry introduced BlackBerry 10, a major revamp of the platform based on the QNX operating system. BlackBerry 10 was meant to replace the aging BlackBerry OS platform with a new system that was more in line with the user experiences of Android and iOS platforms. The first BB10 powered device was the all-touch BlackBerry Z10, which was followed by other all-touch devices (Blackberry Z30, BlackBerry Leap) as well as more traditional keyboard-equipped models (BlackBerry Q10, BlackBerry Classic, BlackBerry Passport). In 2015, BlackBerry began releasing Android-based smartphones, beginning with the BlackBerry Priv slider and then the BlackBerry DTEK50.
On September 28, 2016, BlackBerry announced it would cease designing its own phones in favor of licensing to partners. TCL Communication became the global licensee of the brand, under the name "BlackBerry Mobile". Optiemus Infracom, under the name BlackBerry Mobile India, and BB Merah Putih also serve as licensees of the brand, serving the Indian and Indonesian markets, respectively.
In 2017, BlackBerry Mobile released the BlackBerry KeyOne—which was known for having a physical keyboard below its 4.5 inch touchscreen and a long battery life—and was the last device to be designed internally by BlackBerry. Also in 2017, BlackBerry Mobile, under their partner license agreements, released the BlackBerry Aurora, BlackBerry KeyOne L/E BLACK, and the BlackBerry Motion.
In June 2018, the BlackBerry Key2 was launched in international markets, and in India by licensee Optiemus Infracom. The Key2 sports a dual camera setup and incorporates features such as portrait mode and optical zoom. In August 2018, after the launch of the BlackBerry Key2, Optiemus Infracom announced the launch of the BlackBerry Evolve and Evolve X smartphones for the Indian market sold exclusively on Amazon India. The smartphones have been conceptualized, designed and manufactured in India.
As of 2019, BB Merah Putih's website has been repurposed, with BlackBerry Limited stating that only technical support will be offered for the Indonesian devices built by the company. Additionally, the operational status of Optiemus is unknown as of September 2020, as there have not been any updates posted regarding BlackBerry products in India since 2018.
History
Research in Motion (RIM), founded in Waterloo, Ontario, first developed the Inter@ctive Pager 900, announced on September 18, 1996. The Inter@ctive Pager 900 was a clamshell-type device that allowed two-way paging. After the success of the 900, the Inter@ctive Pager 800 was created for IBM, which bought US$10 million worth of them on February 4, 1998. The next device to be released was the Inter@ctive Pager 950, on August 26, 1998. The very first device to carry the BlackBerry name was the BlackBerry 850, an email pager, released January 19, 1999. Although identical in appearance to the 950, the 850 was the first device to integrate email and the name Inter@ctive Pager was no longer used to brand the device.
The first BlackBerry device, the 850, was introduced in 1999 as a two-way pager in Munich, Germany. The name BlackBerry was coined by the marketing company Lexicon Branding. The name was chosen due to the resemblance of the keyboard's buttons to that of the drupelets that compose the blackberry fruit.
The original BlackBerry devices, the RIM 850 and 857, used the DataTAC network. In 2002, the more commonly known convergent smartphone BlackBerry was released, which supports push email, mobile telephone, text messaging, Internet faxing, Web browsing and other wireless information services.
BlackBerry gained market share in the mobile industry by concentrating on email. BlackBerry began to offer email service on non-BlackBerry devices, such as the Palm Treo, through the proprietary BlackBerry Connect software.
The original BlackBerry device had a monochrome display while newer models installed color displays. All newer models have been optimized for "thumbing", the use of only the thumbs to type on a keyboard. The Storm 1 and Storm 2 include a SureType keypad for typing. Originally, system navigation was achieved with the use of a scroll wheel mounted on the right side of device models prior to the 8700. The trackwheel was replaced by the trackball with the introduction of the Pearl series, which allowed four-way scrolling. The trackball was replaced by the optical trackpad with the introduction of the Curve 8500 series. Models made to use iDEN networks, such as Nextel, SouthernLINC, NII Holdings, and Mike also incorporate a push-to-talk (PTT) feature, similar to a two-way radio.
On January 30, 2013, BlackBerry announced the release of the Z10 and Q10 smartphones. Both models consist of touch screens: the Z10 features an all-touch design and the Q10 combines a QWERTY keyboard with touchscreen features.
During the second financial quarter of 2013, BlackBerry sold 6.8 million handsets, but was eclipsed by the sales of competitor Nokia's Lumia model for the first time.
On August 12, 2013, BlackBerry announced the intention to sell the company due to their increasingly unfavourable financial position and competition in the mobile industry. Largely due to lower than expected sales on the Z10, BlackBerry announced on September 20, 2013, that 4,500 full- and part-time positions (an estimated 40% of its operating staff) have been terminated and its product line has been reduced from six to four models. On September 23, 2013, Fairfax Financial, which owns a 10% equity stake in BlackBerry, made an offer to acquire BlackBerry for $4.7 billion (at $9.00 per share). Following the announcement, BlackBerry announced an acceptance of the offer provisionally but it would continue to seek other offers until November 4, 2013.
On November 4, 2013, BlackBerry replaced Thorsten Heins with new interim CEO John S. Chen, the former CEO of Sybase. On November 8, the BlackBerry board rejected proposals from several technology companies for various BlackBerry assets on grounds that a break-up did not serve the interest of all stakeholders, which include employees, customers and suppliers in addition to shareholders, said the sources, who did not want to be identified as the discussions were confidential. On November 13, 2013, Chen released an open message: "We are committed to reclaiming our success."
In early July 2014, the TechCrunch online publication published an article titled "BlackBerry Is One Of The Hottest Stocks Of 2014, Seriously", following a 50 percent rise in the company's stock, an increase that was greater than peer companies such as Apple and Google; however, an analysis of BlackBerry's financial results showed that neither revenue or profit margin were improved, but, instead, costs were markedly reduced. During the same period, BlackBerry also introduced the new Passport handset—consisting of a square screen with "Full HD-class" (1,440 x 1,440) resolution and marketed to professional fields such as healthcare and architecture—promoted its Messenger app and released minor updates for the BB10 mobile operating system.
On December 17, 2014, the BlackBerry Classic was introduced; it is meant to be more in line with the former Bold series, incorporating navigation buttons similar to the previous BlackBerry OS devices. When it was discontinued in June 2016, it was the last BlackBerry with a keyboard that dominates the front of the phone in the classic style.
In September 2015, BlackBerry officially unveiled the BlackBerry Priv, a slider, with a German made camera lens with 18 megapixels, phablet that utilizes the Android operating system with additional security and productivity-oriented features inspired by the BlackBerry operating systems. However, BlackBerry COO Marty Beard told Bloomberg that "The company's never said that we would not build another BB10 device."
On July 26, 2016, the company hinted that another model with a physical keyboard was "coming shortly". The same day, BlackBerry unveiled a mid-range Android model with only an on-screen keyboard, the BlackBerry DTEK50, powered by the then latest version of Android, 6.0, Marshmallow. (The Priv could also be upgraded to 6.0) This device featured a 5.2-inch full high-definition display. BlackBerry chief security officer David Kleidermacher stressed data security during the launch, indicating that this model included built-in malware protection and encryption of all user information. Industry observers pointed out that the DTEK50 is a re-branded version of the Alcatel Idol 4 with additional security-oriented software customizations, manufactured and designed by TCL.
In September 2016, BlackBerry Limited agreed to a licensing partnership with an Indonesian company to set up a new joint venture company called BB Merah Putih to "source, distribute, and market BlackBerry handsets in Indonesia".
On October 25, 2016, BlackBerry released the BlackBerry DTEK60, the second device in the DTEK series, manufactured and designed by TCL. The device features a 5.5-inch Quad-HD touch screen display running on Qualcomm's Snapdragon 820 processor with support for Quick Charge 3.0, USB Type-C, and a fingerprint sensor.
In October 2016, it was announced that BlackBerry will be working with the Ford Motor Company of Canada to develop software for the car manufacturer's connected vehicles.
In February 2017, a $20m class action lawsuit against BlackBerry was announced by the former employees of the company.
In March 2017, BB Merah Putih announced the BlackBerry Aurora, an Indonesian-made and sold device, running an operating system based on Android 7.0 out of the box.
In March 2018, it was announced that BlackBerry would be working with Jaguar Land Rover to develop software for the car manufacturer's vehicles. In June 2018, BlackBerry in partnership with TCL Mobile and Optiemus Infracom launched the KEY2 at a global launch in New York. This is the third device to sport a keyboard while running Google's Android OS.
Intellectual property litigation
NTP Inc case
In 2000 NTP sent notice of its wireless email patents to a number of companies and offered to license the patents to them. NTP brought a patent-infringement lawsuit against one of the companies, Research In Motion, in the United States District Court for the Eastern District of Virginia. This court is well known for its strict adherence to timetables and deadlines, sometimes referred to as the "rocket docket", and is particularly efficient at trying patent cases.
The jury eventually found that the NTP patents were valid; furthermore, the jury established that RIM had infringed the patents in a "willful" manner, and the infringement had cost NTP US$33 million in damages (the greater of a reasonable royalty or lost profits). The judge, James R. Spencer, increased the damages to US$53 million as a punitive measure due to the willful nature of the infringement. He also instructed RIM to pay NTP's legal fees of US$4.5 million and issued an injunction ordering RIM to cease and desist infringing the patents—this decision would have resulted in the closure of BlackBerry's systems in the US. RIM appealed all of the findings of the court. The injunction and other remedies were stayed pending the outcome of the appeals.
In March 2005 during appeal, RIM and NTP tried to negotiate a settlement of their dispute; the settlement was to be for $450 million. Negotiations broke down due to other issues. On June 10, 2005, the matter returned to the courts. In early November 2005 the US Department of Justice filed a brief requesting that RIM's service be allowed to continue because of the large number of BlackBerry users in the US Federal Government.
In January 2006 the US Supreme Court refused to hear RIM's appeal of the holding of liability for patent infringement, and the matter was returned to a lower court. The prior granted injunction preventing all RIM sales in the US and use of the BlackBerry device might have been enforced by the presiding district court judge had the two parties been unable to reach a settlement.
On February 9, 2006, the US Department of Defense (DOD) filed a brief stating that an injunction shutting down the BlackBerry service while excluding government users was unworkable. The DOD also stated that the BlackBerry was crucial for national security given the large number of government users.
On February 9, 2006, RIM announced that it had developed software workarounds that would not infringe the NTP patents, and would implement those if the injunction was enforced.
On March 3, 2006, after a stern warning from Judge Spencer, RIM and NTP announced that they had settled their dispute. Under the terms of the settlement, RIM has agreed to pay NTP $612.5 million (USD) in a "full and final settlement of all claims." In a statement, RIM said that "all terms of the agreement have been finalized and the litigation against RIM has been dismissed by a court order this afternoon. The agreement eliminates the need for any further court proceedings or decisions relating to damages or injunctive relief." The settlement amount is believed low by some analysts, because of the absence of any future royalties on the technology in question.
On May 26, 2017, BlackBerry announced that it had reached an agreement with Qualcomm Incorporated resolving all amounts payable in connection with the interim arbitration decision announced on April 12, 2017. Following a joint stipulation by the parties, the arbitration panel has issued a final award providing for the payment by Qualcomm to BlackBerry of a total amount of U.S.$940,000,000 including interest and attorneys' fees, net of certain royalties due from BlackBerry for calendar 2016 and the first quarter of calendar 2017.
KIK
On November 24, 2010, Research In Motion (RIM) removed Kik Messenger from BlackBerry App World and limited the functionality of the software for its users. RIM also sued Kik Interactive for patent infringement and misuse of trademarks. In October 2013, the companies settled the lawsuit, with the terms undisclosed.
Facebook and Instagram
In 2018 it was reported that BlackBerry would be filing legal action against Facebook over perceived intellectual property infringements within both Facebook Messenger and WhatsApp as well as with Instagram.
BlackBerry retail stores
Many BlackBerry retail stores operate outside North America, such as in Thailand, Indonesia, United Arab Emirates, and Mexico. In December 2007 a BlackBerry Store opened in Farmington Hills, Michigan. The store offers BlackBerry device models from AT&T, T-Mobile, Verizon, and Sprint, the major U.S. carriers which offer smartphones. There were three prior attempts at opening BlackBerry stores in Toronto and London (UK), but they eventually folded. There are also BlackBerry Stores operated by Wireless Giant at airports in Atlanta, Boston, Charlotte, Minneapolis–St. Paul, Philadelphia, Houston, and Newark, but several have been slated for closing.
On September 23, 2015, Blackberry opened its first pop-up store in Frankfurt, Germany.
2005, 2007, 2009, 2011 and 2012 outages
At various stages of the company's history it suffered occasional service outages that have been referred to in the media as "embarrassing".
In 2005 the company suffered a relatively short-term outage reportedly among a small handful of North America carriers. The service was restored after several hours.
In 2007 the e-mail service suffered an outage which led for calls by some questioning the integrity towards BlackBerry's perceived centralized system.
In 2009 the company had an outage reportedly covering the whole of North America.
On October 10, 2011, at 10:00 UTC, a multi-day outage began in Europe, the Middle East and Africa, affecting millions of BlackBerry users. There was another outage the following day. By October 12, 2011, the BlackBerry Internet Service had also gone down in North America. Research In Motion attributed the service disruptions to data overload caused by switch failures in its two data centres in Waterloo, Canada, and Slough, England. The outage intensified calls by shareholders for a shake-up in the company's leadership.
Some estimates by BlackBerry are that the company lost between $50 million to $54 million due to global email service failure and outage in 2011.
Certification
BCESA (BlackBerry Certified Enterprise Sales Associate, BCESA40 in full) is a BlackBerry Certification for professional users of RIM (Research In Motion) BlackBerry wireless email devices. The Certification requires the user to pass several exams relating to the BlackBerry Devices, all its functions including Desktop software and providing technical support to Customers of BlackBerry Devices.
The BCESA, BlackBerry Certified Enterprise Sales Associate qualification, is the first of three levels of professional BlackBerry Certification.
BCTA (BlackBerry Certified Technical Associate)
BlackBerry Certified Support Associate T2
More information on certifications is on the BlackBerry.com website.
The BlackBerry Technical Certifications available are:
BlackBerry Certified Enterprise Server Consultant (BCESC)
BlackBerry Certified Server Support Technician (BCSST)
BlackBerry Certified Support Technician (BCSTR)
Products
Android devices:
BlackBerry Evolve X (2018)
BlackBerry Evolve (2018)
BlackBerry Key2 (2018)
BlackBerry Motion (2017)
BlackBerry Aurora (2017)
BlackBerry KeyOne (2017)
BlackBerry DTEK60 (2016)
BlackBerry DTEK50 (2016)
BlackBerry Priv (2015)
BlackBerry 10 devices:
BlackBerry Leap (2015)
BlackBerry Classic (2014)
BlackBerry Passport (2014)
BlackBerry Porsche Design P'9983 (2014)
BlackBerry Z3 (2014)
BlackBerry Z30 (2013)
BlackBerry Porsche Design P'9982 (2013)
BlackBerry Q10 (2013)
BlackBerry Z10 (2013)
BlackBerry Q5 (2013)
BlackBerry 7 devices:
BlackBerry Bold series (2011): BlackBerry Bold 9900/9930/9790
BlackBerry 9720 (2013)
BlackBerry Porsche Design (2012): BlackBerry Porsche Design P'9981
BlackBerry Torch series (2011): BlackBerry Torch 9810
BlackBerry Torch series (2011): BlackBerry Torch 9850/9860
BlackBerry Curve series (2011): BlackBerry 9350/9360/9370/9380
BlackBerry Curve 9320/9220 (2012)
BlackBerry 6 devices:
BlackBerry Torch series (2010): BlackBerry Torch 9800
BlackBerry Curve series (2010): BlackBerry Curve 9300/9330
BlackBerry Style 9670 (2010)
BlackBerry Pearl series (2010): BlackBerry Pearl 3G 9100/9105
BlackBerry Bold series (2010–2011): BlackBerry Bold 9780/9788
BlackBerry 5 devices:
BlackBerry Bold series (2008–2010): BlackBerry Bold 9000/9700/9650
BlackBerry Tour series (2009): BlackBerry Tour (9630)
BlackBerry Storm series (2009): BlackBerry Storm 2 (9520/9550)
BlackBerry Storm series (2008): BlackBerry Storm (9500/9530)
BlackBerry Curve series (2009–2010): BlackBerry Curve 8900 (8900/8910/8980)
BlackBerry Curve series (2009): BlackBerry Curve 8520/8530
Blackberry 4 devices:
BlackBerry 8800 series (2007): BlackBerry 8800/8820/8830
BlackBerry Pearl series (2006): BlackBerry Pearl 8100/8110/8120/8130
BlackBerry Pearl Flip series (2008): BlackBerry Pearl Flip 8220/8230
BlackBerry Curve series (2007): BlackBerry Curve 8300 (8300/8310/8320/8330/8350i)
Blackberry 3 devices:
Blackberry Java-based series: 5000, 6000
Blackberry 2 devices:
Blackberry phone series: 7100
Blackberry color series: 7200, 7500, 7700
Blackberry 1 devices:
Blackberry pager models: 850, 857, 950, 957
Hardware
Modern LTE-based phones such as the BlackBerry Z10 have a Qualcomm Snapdragon S4 Plus, a proprietary Qualcomm SoC based on the ARMv7-A architecture, featuring two 1.5 GHz Qualcomm Krait CPU cores and a 400 MHz Adreno 225 GPU. GSM-based BlackBerry phones incorporate an ARM 7, 9 or 11 processor. Some BlackBerry models (Torch 9850/9860, Torch 9810, and Bold 9900/9930) have a 1.2 GHz MSM8655 Snapdragon S2 SoC, 768 MB of system memory, and 8 GB of on-board storage. Entry-level models, such as the Curve 9360, feature a Marvell PXA940 clocked at 800 MHz.
Some previous BlackBerry devices, such as the Bold 9000, were equipped with Intel XScale 624 MHz processors. The Bold 9700 featured a newer version of the Bold 9000's processor but is clocked at the same speed. The Curve 8520 featured a 512 MHz processor, while BlackBerry 8000 series smartphones, such as the 8700 and the Pearl, are based on the 312 MHz ARM XScale ARMv5TE PXA900. An exception to this is the BlackBerry 8707 which is based on the 80 MHz Qualcomm 3250 chipset; this was due to the PXA900 chipset not supporting 3G networks. The 80 MHz processor in the BlackBerry 8707 meant the device was often slower to download and render web pages over 3G than the 8700 was over EDGE networks. Early BlackBerry devices, such as the BlackBerry 950, used Intel 80386-based processors.
BlackBerry's flagship phone at the time, the BlackBerry Z30, featured a 5-inch Super AMOLED display with 1280×720 resolution at 295 ppi and 24-bit color depth, and was powered by Qualcomm's dual-core 1.7 GHz MSM8960T Pro with quad-core graphics.
The first BlackBerry with an Android operating system was released in late November 2015, the 192 gram/6.77 ounce BlackBerry Priv. It launched with version 5.1.1 but was later upgraded to version 6.0 Android Marshmallow. It was first available in four countries but increased to 31 countries by February 28, 2016.
Employing a Qualcomm Snapdragon 808 (MSM8992) hexa-core 64-bit processor with 3 GB of RAM and an Adreno 418 GPU clocked at 600 MHz, the unit is equipped with a curved 5.4-inch (2560 x 1440) OLED display and a sliding QWERTY keyboard that is hidden when not in use; Google's voice recognition, which allows for dictating e-mails, is also available. The Priv retained the best BlackBerry 10 features. Its 3,410 mAh battery is said to provide 22.5 hours of mixed use. The 18-megapixel camera, with a Schneider-Kreuznach lens, can also record 4K video; a secondary selfie camera is also provided. Several important apps unique to the Priv were available from Google Play by mid-December.
Software
A new operating system, BlackBerry 10, was released for two new BlackBerry models (Z10 and Q10) on January 30, 2013. At BlackBerry World 2012, RIM CEO Thorsten Heins demonstrated some of the new features of the OS, including a camera able to rewind frame-by-frame over individual faces in an image, to allow the best of several shots to be selected and stitched seamlessly into an optimal composite; an intelligent, predictive, and adaptive keyboard; and a gesture-based user interface designed around the ideas of "peek" and "flow". Apps are available for BlackBerry 10 devices through the BlackBerry World storefront.
The previous operating system developed for older BlackBerry devices was BlackBerry OS, a proprietary multitasking environment developed by RIM. The operating system is designed for use of input devices such as the track wheel, track ball, and track pad. The OS provides support for Java MIDP 1.0 and WAP 1.2. Previous versions allowed wireless synchronisation with Microsoft Exchange Server email and calendar, as well as with Lotus Domino email. OS 5.0 provides a subset of MIDP 2.0, and allows complete wireless activation and synchronisation with Exchange email, calendar, tasks, notes and contacts, and adds support for Novell GroupWise and Lotus Notes. The BlackBerry Curve 9360, BlackBerry Torch 9810, Bold 9900/9930, Curve 9310/9320 and Torch 9850/9860 featured the 2011 BlackBerry OS 7. Apps are available for these devices through BlackBerry World (which before 2013 was called BlackBerry App World).
Third-party developers can write software using these APIs, and proprietary BlackBerry APIs as well. Any application that makes use of certain restricted functionality must be digitally signed so that it can be associated to a developer account at RIM. This signing procedure guarantees the authorship of an application but does not guarantee the quality or security of the code. RIM provides tools for developing applications and themes for BlackBerry. Applications and themes can be loaded onto BlackBerry devices through BlackBerry World, Over The Air (OTA) through the BlackBerry mobile browser, or through BlackBerry Desktop Manager.
BlackBerry devices, as well as the Android, iOS, and Windows Phone platforms, have the ability to use the proprietary BlackBerry Messenger (BBM) software for sending and receiving encrypted instant messages, voice notes, images and videos via BlackBerry PIN. As long as the phone has a data plan, these messages are free of charge. Features of BBM include groups, bar-code scanning, lists, shared calendars, BBM Music and integration with apps and games using the BBM social platform.
In April 2013, BlackBerry announced that it was shutting down its streaming music service, BBM Music, which was active for almost two years since its launch. BlackBerry Messenger Music closed on June 2, 2013.
In July 2014, BlackBerry revealed BlackBerry Assistant, a new feature for BlackBerry OS 10.3, and the BlackBerry Passport hardware. The feature is a digital personal assistant intended to help keep users "organized, informed and productive."
In December 2014, BlackBerry and NantHealth, a healthcare-focused data provider, launched a secure cancer genome browser, giving doctors the ability to access patients' genetic data on the BlackBerry Passport smartphone.
In January 2022, BlackBerry announced that they would discontinue their services on all BlackBerry phones not running on Android on January 4. According to BlackBerry, "As of this date, devices running these legacy services and software through either carrier or Wi-Fi connections will no longer reliably function, including for data, phone calls, SMS and 9-1-1 functionality".
Phones with BlackBerry email client
Several non-BlackBerry mobile phones have been released featuring the BlackBerry email client which connects to BlackBerry servers. Many of these phones have full QWERTY keyboards.
AT&T Tilt
HTC Advantage X7500
HTC TyTN
Motorola MPx220, some models
Nokia 6810
Nokia 6820
Nokia 9300
Nokia 9300i
Nokia 9500
Nokia Eseries phones, except models Nokia E66, Nokia E71
Qtek 9100
Qtek 9000
Samsung t719
Siemens SK65
Sony Ericsson P910
Sony Ericsson P990
Sony Ericsson M600i
Sony Ericsson P1
Third-party software
Third-party software available for use on BlackBerry devices includes full-featured database management systems, which can be used to support customer relationship management clients and other applications that must manage large volumes of potentially complex data.
In March 2011, RIM announced that an optional Android player, capable of running applications developed for the Android system, would be available for the BlackBerry PlayBook, RIM's first entry in the tablet market.
On August 24, 2011 Bloomberg News reported unofficial rumors that BlackBerry devices would be able to run Android applications when RIM brings QNX and the Android App Player to BlackBerry.
On October 20, 2011, RIM officially announced that Android applications could run, unmodified, on the BlackBerry tablet and the newest BlackBerry phones, using the newest version of its operating system.
Connectivity
BlackBerry smartphones can be integrated into an organization's email system through a software package called BlackBerry Enterprise Server (BES) through version 5, and BlackBerry Enterprise Service (BES) as of version 10. (There were no versions 6 through 9.) Versions of BES are available for Microsoft Exchange, Lotus Domino, Novell GroupWise and Google Apps. While individual users may be able to use a wireless provider's email services without having to install BES themselves, organizations with multiple users usually run BES on their own network. BlackBerry devices running BlackBerry OS 10 or later can also be managed directly by a Microsoft Exchange Server, using Exchange ActiveSync (EAS) policies, in the same way that an iOS or Android device can. (EAS supports fewer management controls than BES does.) Some third-party companies provide hosted BES solutions. Every BlackBerry has a unique ID called a BlackBerry PIN, which is used to identify the device to the BES. BlackBerry at one time provided a free BES software called BES Express (BESX).
The primary BES feature is to relay email from a corporate mailbox to a BlackBerry phone. The BES monitors the user's mailbox, relaying new messages to the phone via BlackBerry's Network Operations Center (NOC) and the user's wireless provider. This feature is known as push email, because all new emails, contacts, task entries, memopad entries, and calendar entries are pushed out to the BlackBerry device immediately (as opposed to the user synchronising the data manually or having the device poll the server at intervals).
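Push delivery differs from polling mainly in which side initiates the transfer. The following toy sketch contrasts the two models; the class and method names are hypothetical and do not correspond to any actual BlackBerry or BES API.

```python
class Mailbox:
    """Hypothetical mailbox that can either be polled or push to a listener."""

    def __init__(self):
        self._pending = []
        self._listener = None

    def deliver(self, message):
        """Server side: a new message arrives."""
        if self._listener:                 # push model: notify immediately
            self._listener(message)
        else:                              # poll model: queue until asked
            self._pending.append(message)

    def poll(self):
        """Client pulls on its own schedule (polling model)."""
        messages, self._pending = self._pending, []
        return messages

    def subscribe(self, callback):
        """Client registers for immediate delivery (push model)."""
        self._listener = callback


box = Mailbox()
box.subscribe(lambda m: print("pushed:", m))   # push: no client-side waiting
box.deliver("meeting at 3pm")

# A polling client would instead loop, e.g.:
#   while True: handle(box.poll()); time.sleep(60)
```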
BlackBerry also supports polling email through third-party applications. The messaging system built into the BlackBerry only understands how to receive messages from a BES or the BIS; these services handle the connections to the user's mail providers. Device storage also enables the mobile user to access all data off-line in areas without wireless service. When the user reconnects to wireless service, the BES sends the latest data.
A feature of the newer models of the BlackBerry is their ability to quickly track the user's current location through trilateration without the use of GPS, thus saving battery life and time. Trilateration can be used as a quick, less battery intensive way to provide location-aware applications with the co-ordinates of the user. However, the accuracy of BlackBerry trilateration is less than that of GPS due to a number of factors, including cell tower blockage by large buildings, mountains, or distance.
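As a loose illustration of the idea, the sketch below performs a simple 2-D trilateration from three known transmitter positions and estimated distances. The coordinates and distances are invented for illustration; a real implementation would also need to handle measurement error and geodetic coordinates.

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Estimate a 2-D position from three known points and their distances.

    Subtracting the circle equation of p1 from those of p2 and p3
    yields a linear system A x = b, solved directly.
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    A = 2 * np.array([p2 - p1, p3 - p1], dtype=float)
    b = np.array([
        r1**2 - r2**2 + p2.dot(p2) - p1.dot(p1),
        r1**2 - r3**2 + p3.dot(p3) - p1.dot(p1),
    ])
    return np.linalg.solve(A, b)

# Hypothetical tower positions (metres) and measured distances to the handset.
towers = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1500.0)]
distances = (721.1, 848.5, 984.9)   # roughly consistent with position (400, 600)
print(trilaterate(*towers, *distances))  # ~[400. 600.]
```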
BES also provides phones with TCP/IP connectivity accessed through a component called MDS (Mobile Data System) Connection Service. This allows custom application development using data streams on BlackBerry devices based on the Sun Microsystems Java ME platform.
In addition, BES provides network security, in the form of Triple DES or, more recently, AES encryption of all data (both email and MDS traffic) that travels between the BlackBerry phone and a BlackBerry Enterprise Server.
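The BES transport protocol itself is proprietary; purely as an illustration of the kind of symmetric AES protection described above, the following sketch encrypts a payload with AES-GCM using the third-party Python cryptography package. The key handling is invented for the example and is not BlackBerry's actual scheme.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative only: a random 256-bit session key shared by device and server.
key = AESGCM.generate_key(bit_length=256)

def protect(payload: bytes, key: bytes) -> bytes:
    """Encrypt and authenticate a payload; prepend the 96-bit nonce."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, payload, None)

def unprotect(blob: bytes, key: bytes) -> bytes:
    """Split off the nonce and decrypt/verify the ciphertext."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

blob = protect(b"New mail: 1 message", key)
assert unprotect(blob, key) == b"New mail: 1 message"
```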
Most providers offer flat monthly pricing via special Blackberry tariffs for unlimited data between BlackBerry units and BES. In addition to receiving email, organizations can make intranets or custom internal applications with unmetered traffic.
With more recent versions of the BlackBerry platform, the MDS is no longer a requirement for wireless data access. Starting with OS 3.8 or 4.0, BlackBerry phones can access the Internet (i.e. TCP/IP access) without an MDS – formerly only email and WAP access was possible without a BES/MDS. The BES/MDS is still required for secure email, data access, and applications that require WAP from carriers that do not allow WAP access.
The primary alternative to using BlackBerry Enterprise Server is to use the BlackBerry Internet Service (BIS). BlackBerry Internet Service is available in 91 countries internationally. BlackBerry Internet Service was developed primarily for the average consumer rather than for the business consumer. The service allows users to access POP3, IMAP, and Outlook Web App (not via Exchange ActiveSync) email accounts without connecting through a BlackBerry Enterprise Server (BES). BlackBerry Internet Service allows up to 10 email accounts to be accessed, including proprietary as well as public email accounts (such as Gmail, Outlook, Yahoo and AOL). BlackBerry Internet Service also supports the push capabilities of various other BlackBerry Applications. Various applications developed by RIM for BlackBerry utilise the push capabilities of BIS, such as the Instant Messaging clients (like Google Talk, Windows Live Messenger and Yahoo Messenger). The MMS, PIN, interactive gaming, mapping and trading applications require data plans like BIS (not just Wi-Fi) for use. The service is usually provisioned through a mobile phone service provider, though BlackBerry actually runs the service.
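BIS retrieves mail from the user's providers on their behalf. The sketch below shows, using Python's standard imaplib, the kind of IMAP mailbox check such a service might perform; the host, account, and credentials are placeholders.

```python
import email
import imaplib

def check_unseen(host: str, user: str, password: str) -> list[str]:
    """Return the subject lines of unseen messages in the INBOX."""
    subjects = []
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX", readonly=True)
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            message = email.message_from_bytes(msg_data[0][1])
            subjects.append(message["Subject"])
    return subjects

# Placeholder credentials; a hosted service would hold these per subscriber.
# print(check_unseen("imap.example.com", "user@example.com", "app-password"))
```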
BlackBerry PIN
The BlackBerry PIN (Personal Identification Number) is an eight-character hexadecimal identification number assigned to each BlackBerry device. PINs cannot be changed manually on the device (though BlackBerry technicians are able to reset or update a PIN server-side), and are locked to each specific BlackBerry. BlackBerry devices can message each other using the PIN directly or by using the BlackBerry Messenger application. BlackBerry PINs are tracked by BlackBerry Enterprise Servers and the BlackBerry Internet Service and are used to direct messages to a BlackBerry device. Emails and any other messages, such as those from the BlackBerry Push Service, are typically directed to a BlackBerry device's PIN. The message can then be routed by a RIM Network Operations Center, and sent to a carrier, which will deliver the message the last mile to the device. In September 2012 RIM announced that the BlackBerry PIN would be replaced by users' BlackBerry ID starting in 2013 with the launch of the BlackBerry 10 platform.
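Since a PIN is simply an eight-character hexadecimal string, a minimal format check (not an authenticity or registration check) might look like the following sketch.

```python
import re

# Eight hexadecimal characters, upper or lower case.
PIN_PATTERN = re.compile(r"[0-9A-Fa-f]{8}")

def is_valid_pin_format(pin: str) -> bool:
    """Check only that the string looks like an 8-character hex PIN."""
    return bool(PIN_PATTERN.fullmatch(pin))

print(is_valid_pin_format("2FF4A9B8"))  # True
print(is_valid_pin_format("HELLO123"))  # False (contains non-hex characters)
```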
Competition and financial results
The primary competitors of the BlackBerry are Android smartphones and the iPhone. BlackBerry has struggled to compete against both and its market share has plunged since 2011, leading to speculation that it will be unable to survive as an independent going concern. However, it has managed to maintain significant positions in some markets.
Despite market share loss, on a global basis the number of active BlackBerry subscribers increased substantially through the years. For example, for the fiscal period during which the Apple iPhone was first released, RIM reported a subscriber base of 10.5 million BlackBerry subscribers. At the end of 2008, when Android first hit the market, RIM reported that the number of BlackBerry subscribers had increased to 21 million. After the release of the Apple iPhone 5 in September 2012, RIM CEO Thorsten Heins announced that global subscribers had risen to 80 million, which sparked a 7% jump in the share price.
However, BlackBerry's global user base (meaning active accounts) then declined dramatically from its peak of 80 million in June 2012, dropping to 46 million users in September 2014. Its global market share also declined to less than 1 percent.
In 2011, BlackBerry accounted for 43% of smartphone shipments in Indonesia. By April 2014 this had fallen to 3%. The decline in the Indonesian market share mirrored a global trend for the company (0.6% in North America). BlackBerry lost market share in Indonesia despite the launch of the Z3 on May 13, 2014. The new device was given a worldwide launch in the city of Jakarta and came on the back of the news that Research In Motion (RIM) was to cut hardware production costs by outsourcing manufacturing to the Taiwan-based Foxconn Group. The retail price of 2,199,000 Indonesian rupiah ($189) failed to give BlackBerry the boost it needed in Indonesia. The company launched the device with a discounted offer to the first 1,000 purchasers, which resulted in a stampede in the capital in which several people were injured.
During the report of its third-quarter 2015 results on December 18, 2015, the company said that approximately 700,000 handsets had been sold, down from 1.9 million in the same quarter of 2014 and down from 800,000 in Q2 of 2015. The average sale price per unit was up from $240 to $315, however, and was expected to continue to increase with sales of the new Android-based Priv device, which was selling at a premium price ($800 in Canada, for example). In Q3 of 2015, BlackBerry had a net loss of US$89 million, or 17 cents per share, but only a $15 million net loss, or three cents per share, when excluding restructuring charges and other one-time items.
Revenue was up slightly from a year earlier, at $557 million U.S. vs. $548 million, primarily because of software sales. Chief executive officer John Chen said that he expects the company's software business to grow at (14 percent) or above the market. At the time, the company was not ready to provide sales figures for the Android-based Priv handset which had been released only weeks earlier, and in only four countries at that time, but Chen offered this comment to analysts: "Depending on how Priv does ... there is a chance we could achieve or get closer to break-even operating profitability for our overall device business in the (fourth) quarter".
Due to a continuing decline in BlackBerry users, in February 2016 the company cut 35 percent of the workforce at its headquarters in Waterloo, Ontario, Canada. By early 2016, BlackBerry's market share had dropped to 0.2%. In Q4 2016, reports indicated that BlackBerry sold only 207,900 units, equivalent to roughly 0.0% market share.
User base
The number of active BlackBerry users since 2003 globally:
Security agencies access
Research In Motion agreed to give the governments of the United Arab Emirates and Saudi Arabia access to private communications in 2010, and India in 2012. The Saudi and UAE governments had threatened to ban certain services because their law enforcement agencies could not decrypt messages between people of interest.
It was revealed as a part of the 2013 mass surveillance disclosures that the American and British intelligence agencies, the National Security Agency (NSA) and the Government Communications Headquarters (GCHQ) respectively, have access to the user data on BlackBerry devices. The agencies are able to read almost all smartphone information, including SMS, location, e-mails, and notes through BlackBerry Internet Service, which operates outside corporate networks, and which, in contrast to the data passing through internal BlackBerry services (BES), only compresses but does not encrypt data.
Documents stated that the NSA was able to access the BlackBerry e-mail system and that they could "see and read SMS traffic". There was a brief period in 2009 when the NSA was unable to access BlackBerry devices, after BlackBerry changed the way they compress their data. Access to the devices was re-established by GCHQ. GCHQ has a tool named SCRAPHEAP CHALLENGE, with the capability of "Perfect spoofing of emails from Blackberry targets".
In response to the revelations BlackBerry officials stated that "It is not for us to comment on media reports regarding alleged government surveillance of telecommunications traffic" and added that a "back door pipeline" to their platform had not been established and did not exist.
Similar access by the intelligence agencies to many other mobile devices exists, using similar techniques to hack into them.
The BlackBerry software includes support for the Dual EC DRBG CSPRNG algorithm, which, because it was probably backdoored by the NSA, the US National Institute of Standards and Technology "strongly recommends" no longer be used. BlackBerry Ltd. has, however, not issued an advisory to its customers, because it does not consider the probable backdoor a vulnerability. BlackBerry Ltd. also owns US patent 2007189527, which covers the technical design of the backdoor.
Usage
The (formerly) advanced encryption capabilities of the BlackBerry Smartphone made it eligible for use by government agencies and state forces. On January 4, 2022, Blackberry announced that older phones running Blackberry 10, 7.1 OS and earlier will no longer work.
Barack Obama
Former United States president Barack Obama became known for his dependence on a BlackBerry device for communication during his 2008 Presidential campaign. Despite the security issues, he insisted on using it even after inauguration. This was seen by some as akin to a "celebrity endorsement", which marketing experts have estimated to be worth between $25 million and $50 million. His usage of BlackBerry continued until around the end of his presidency.
Hillary Clinton
The Hillary Clinton email controversy is associated with Hillary Clinton continuing to use her BlackBerry after assuming the office of Secretary of State.
Use by government forces
An example is the West Yorkshire Police, where the devices allowed an increased presence of police officers on the streets and a reduction in public spending, since each officer could perform desk work directly from the mobile device; similar benefits were seen in several other areas and situations. The US federal government was slow to move away from the BlackBerry platform, with a State Department spokesperson saying in 2013 that BlackBerry devices were still the only mobile devices approved for U.S. missions abroad by the State Department. The high encryption standards that made BlackBerry smartphones and the PlayBook tablet unique have since been implemented in other devices, including most Apple devices released after the iPhone 4.
The Bangalore City Police is one of the few police departments in India along with the Pune Police and Kochi Police to use BlackBerry devices.
Use by transportation staff
In the United Kingdom, South West Trains and Northern Rail have issued BlackBerry devices to guards in order to improve the communication between control, guards and passengers.
In Canada, Toronto and many other municipalities have issued BlackBerry devices to most of their employees, including but not limited to transportation, technical, water and operations inspection staff and all management staff, in order to improve communication with contracted construction companies, to support winter maintenance operations, and to help organize multimillion-dollar contracts. The devices are the standard mobile device to receive e-mail redirected from GroupWise.
As part of its Internet of Things endeavours, the company announced plans to move into the shipping industry by adapting its smartphone devices to the communication needs of freight containers.
Other users
Eric Schmidt, Executive Chairman of Google from 2001 to 2011, is a longtime BlackBerry user. Although smartphones running Google's Android mobile operating system compete with BlackBerry, Schmidt said in a 2013 interview that he uses a BlackBerry because he prefers its keyboard.
The Italian criminal group known as the 'Ndrangheta was reported in February 2009 to have communicated overseas with the Gulf Cartel, a Mexican drug cartel, through the use of the BlackBerry Messenger, since the BBM Texts are "very difficult to intercept".
Kim Kardashian was also a BlackBerry user. In 2014, she reportedly said at the Code/Mobile conference that "BlackBerry has my heart and soul. I love it. I'll never get rid of it" and "I have anxiety that I will run out and I won't be able to have a BlackBerry. I'm afraid it will go extinct." Kardashian also admitted to keeping a supply of BlackBerry phones with her because she was loyal to the BlackBerry Bold model. She has since been spotted using an iPhone instead.
See also
BlackBerry Limited (formerly Research in Motion)
BlackBerry Mobile
Comparison of smartphones
Index of articles related to BlackBerry OS
List of BlackBerry products
QWERTY
Science and technology in Canada
T9 (predictive text)
References
Bibliography
Research In Motion Reports Fourth Quarter and Year-End Results For Fiscal 2005
Research In Motion Fourth Quarter and 2007 Fiscal Year End Results
External links
Computer-related introductions in 1999
BlackBerry Limited mobile phones
Canadian brands
C++ software
Goods manufactured in Canada
Information appliances
Personal digital assistants
Science and technology in Canada
Pager companies
Highlands County, Florida

Highlands County is a county located in the U.S. state of Florida. As of the 2010 census, the population was 98,786. Its county seat is Sebring.
Highlands County comprises the Sebring-Avon Park, FL Metropolitan Statistical Area.
History
Highlands County was created in 1921 along with Charlotte, Glades, and Hardee, when they were separated from DeSoto County. It was named for the terrain of the county. It boasted the fifth-oldest population in America in 2012.
Geography
According to the U.S. Census Bureau, the county has a total area of , of which is land and (8.1%) is water. In area, it is the 14th largest county in Florida. Highlands County is bounded on the east by the Kissimmee River. Lake Istokpoga, the largest lake in the county, is connected to the Kissimmee River by two canals; the Istokpoga canal, and the C41 (outflow) canal.
Adjacent counties
Osceola County, Florida - northeast
Okeechobee County, Florida - east
Glades County, Florida - south
Charlotte County, Florida - southwest
DeSoto County, Florida - west
Hardee County, Florida - west
Polk County, Florida - north
National protected area
Lake Wales Ridge National Wildlife Refuge (part)
Demographics
As of 2015, there were 99,491 people and 39,931 households living in the county. The population density was 97.2 people per square mile. The racial makeup of the county was 85.8% White, 10.4% Black or African American, 0.7% Native American, 1.5% Asian, 0.1% Pacific Islander, and 1.6% from two or more races. 18.2% of the population were Hispanic or Latino of any race. 51.3% of the entire population are female. The median household income was $35,560 with 20.1% of the population being below the poverty level from 2009 to 2013. The poverty line for Florida was $11,490 in 2013.
As of the census of 2000, there were 87,366 people, 37,471 households, and 25,780 families living in the county. The population density was 85.00 people per square mile (32.82/km2). There were 48,846 housing units at an average density of 47.5 per square mile (18.34/km2). The racial makeup of the county was 83.47% White, 9.33% Black or African American, 0.44% Native American, 1.05% Asian, 0.03% Pacific Islander, 4.14% from other races, and 1.53% from two or more races. 12.07% of the population were Hispanic or Latino of any race.
In 2000 there were 37,471 households, out of which 20.00% had children under the age of 18 living with them, 57.20% were married couples living together, 8.50% had a female householder with no husband present, and 31.20% were non-families. 26.30% of all households were made up of individuals, and 16.70% had someone living alone who was 65 years of age or older. The average household size was 2.30 and the average family size was 2.70.
In the county, the population was spread out, with 19.20% under the age of 18, 6.30% from 18 to 24, 19.30% from 25 to 44, 22.20% from 45 to 64, and 33.00% who were 65 years of age or older. The median age was 50 years. For every 100 females, there were 95.20 males. For every 100 females age 18 and over, there were 92.20 males.
The median income for a household in the county was $30,160, and the median income for a family was $35,647. Males had a median income of $26,811 versus $20,725 for females. The per capita income for the county was $17,222. About 10.20% of families and 15.20% of the population were below the poverty line, including 25.60% of those under age 18 and 7.40% of those age 65 or over.
Transportation
Highways
U.S. Route 27
State Road 17
U.S. Route 98
State Road 64
State Road 66
State Road 70
Sebring Parkway/Panther Parkway
Airports
Sebring Regional Airport (KSEF)
Avon Park Executive Airport (KAVO)
Rail
CSX Transportation (CSXT)
Amtrak (AMTK)
Government
Highlands County is governed by five elected county commissioners and an appointed county administrator. The administrator has executive powers to implement all decisions, ordinances, motions, and policies/procedures set forth by the board. The FY 2013–2014 adopted budget of the county is approximately $123 million, and the county employs over 350 people in 31 departments of the administration. Other organizations of the county include the Clerk of Courts with about 75 positions, the Sheriff's Office with about 340 positions, the County Appraiser's Office with about 30 positions, the Tax Collector's Office with about 40 positions, and the Elections Office with 5 positions. In all there are about 860 positions in Highlands County government.
Law Enforcement
The Highlands County Sheriff's Office is the primary law enforcement agency for the unincorporated areas of Highlands County; Paul Blackman is the sheriff. The City of Sebring and the Town of Lake Placid have their own respective police departments. The Avon Park Police Department closed in 2015, and the Sheriff's Office is now the primary law enforcement agency for Avon Park. All public safety agencies in Highlands County utilize a Motorola P25 trunked radio system that was initiated by Polk County; Highlands and Hardee counties have piggybacked onto the system. To date, Highlands County law enforcement is the only law enforcement on the entire system to use 24/7 ADP encryption.
Politics
Highlands County, like the relatively nearby southwest coast, is strongly Republican: the last Democrat to win a majority in the county was Harry Truman in 1948. Like North Florida, but unlike the southwest coast, George Wallace was able to outpoll the Democratic Party here in 1968, and since then only in 1992 and 1996 has the Republican candidate failed to win an absolute majority.
Economy
Top employers
The top private employers of Highlands County are as follows:
1. Advent Health Hospital (1500)
2. Walmart (796)
3. Agero (600)
4. Highlands Regional Medical Center (413)
5. Delray Plants (350)
6. Palms of Sebring (257)
7. Alan Jay Automotive Network (250)
8. Lake Placid Health Care (210)
9. Positive Medical Transport (150)
10. E-Stone USA (87)
Libraries
Highlands County is part of the Heartland Library Cooperative, which serves Highlands County and some of the surrounding counties in the Florida Heartland, including Glades, DeSoto, Hardee, and Okeechobee. Based in Sebring, the cooperative has seven branches within the Heartland region, three of which are in Highlands County: Avon Park, Lake Placid and Sebring.
Communities
Cities
Avon Park
Sebring
Town
Lake Placid
Unincorporated communities
Avon Park Lakes
Brighton
Cornwell
DeSoto City
Fort Basinger
Fort Kissimmee
Hicoria
Lorida
Placid Lakes
Spring Lake
Sun 'n Lake of Sebring
Sylvan Shores
Venus
See also
Florida Heartland
Lake Denton
National Register of Historic Places listings in Highlands County, Florida
References
External links
Government links/Constitutional offices
Highlands County Board of County Commissioners official website
Highlands County Supervisor of Elections
Highlands County Property Appraiser
Highlands County Tax Collector
Florida DOT Highlands county General Highway Map
Special districts
Highlands County Public Schools
South Florida Water Management District
Southwest Florida Water Management District
Heartland Library Cooperative
Judicial branch
Highlands County Clerk of Courts
Public Defender, 10th Judicial Circuit of Florida serving Hardee, Highlands, and Polk counties
Office of the State Attorney, 10th Judicial Circuit of Florida
Circuit and County Court for the 10th Judicial Circuit of Florida
Tourism links
Highlands County Convention and Visitors Bureau
Highlands Hammock State Park
Florida counties
1921 establishments in Florida
Populated places established in 1921
Micropolitan areas of Florida
Dynamic random-access memory

Dynamic random-access memory (dynamic RAM or DRAM) is a type of random-access semiconductor memory that stores each bit of data in a memory cell, usually consisting of a tiny capacitor and a transistor, both typically based on metal-oxide-semiconductor (MOS) technology. While most DRAM memory cell designs use a capacitor and transistor, some only use two transistors. In the designs where a capacitor is used, the capacitor can either be charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. The electric charge on the capacitors gradually leaks away; without intervention the data on the capacitor would soon be lost. To prevent this, DRAM requires an external memory refresh circuit which periodically rewrites the data in the capacitors, restoring them to their original charge. This refresh process is the defining characteristic of dynamic random-access memory, in contrast to static random-access memory (SRAM) which does not require data to be refreshed. Unlike flash memory, DRAM is volatile memory (vs. non-volatile memory), since it loses its data quickly when power is removed. However, DRAM does exhibit limited data remanence.
DRAM typically takes the form of an integrated circuit chip, which can consist of dozens to billions of DRAM memory cells. DRAM chips are widely used in digital electronics where low-cost and high-capacity computer memory is required. One of the largest applications for DRAM is the main memory (colloquially called the "RAM") in modern computers and graphics cards (where the "main memory" is called the graphics memory). It is also used in many portable devices and video game consoles. In contrast, SRAM, which is faster and more expensive than DRAM, is typically used where speed is of greater concern than cost and size, such as the cache memories in processors.
The need to refresh DRAM demands more complicated circuitry and timing than SRAM. This is offset by the structural simplicity of DRAM memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities with a simultaneous reduction in cost per bit. Refreshing the data consumes power and a variety of techniques are used to manage the overall power consumption.
DRAM had a 47% increase in the price-per-bit in 2017, the largest jump in 30 years since the 45% jump in 1988, while in recent years the price has been going down.
History
The cryptanalytic machine code-named "Aquarius" used at Bletchley Park during World War II incorporated a hard-wired dynamic memory. Paper tape was read and the characters on it "were remembered in a dynamic store. ... The store used a large bank of capacitors, which were either charged or not, a charged capacitor representing cross (1) and an uncharged capacitor dot (0). Since the charge gradually leaked away, a periodic pulse was applied to top up those still charged (hence the term 'dynamic')".
In 1964 Arnold Farber and Eugene Schlig, working for IBM, created a hard-wired memory cell, using a transistor gate and tunnel diode latch. They replaced the latch with two transistors and two resistors, a configuration that became known as the Farber-Schlig cell. That year they submitted an invention closure, but it was initially rejected. In 1965, Benjamin Agusta and his team at IBM created a 16-bit silicon memory chip based on the Farber-Schlig cell, with 80 transistors, 64 resistors, and 4 diodes. The Toshiba "Toscal" BC-1411 electronic calculator, which was introduced in November 1965, used a form of capacitive DRAM (180 bit) built from discrete bipolar memory cells.
The earliest forms of DRAM mentioned above used bipolar transistors. While it offered improved performance over magnetic-core memory, bipolar DRAM could not compete with the lower price of the then-dominant magnetic-core memory. Capacitors had also been used for earlier memory schemes, such as the drum of the Atanasoff–Berry Computer, the Williams tube and the Selectron tube.
In 1966, Dr. Robert Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory and was trying to create an alternative to SRAM which required six MOS transistors for each bit of data. While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of the single-transistor MOS DRAM memory cell. He filed a patent in 1967, and was granted U.S. patent number 3,387,286 in 1968. MOS memory offered higher performance, was cheaper, and consumed less power, than magnetic-core memory.
MOS DRAM chips were commercialized in 1969 by Advanced Memory Systems, Inc. of Sunnyvale, California. This 1,000-bit chip was sold to Honeywell, Raytheon, Wang Laboratories, and others.
The same year, Honeywell asked Intel to make a DRAM using a three-transistor cell that they had developed. This became the Intel 1102 in early 1970. However, the 1102 had many problems, prompting Intel to begin work on their own improved design, in secrecy to avoid conflict with Honeywell. This became the first commercially available DRAM, the Intel 1103, in October 1970, despite initial problems with low yield until the fifth revision of the masks. The 1103 was designed by Joel Karp and laid out by Pat Earhart. The masks were cut by Barbara Maness and Judy Garcia. MOS memory overtook magnetic-core memory as the dominant memory technology in the early 1970s.
The first DRAM with multiplexed row and column address lines was the Mostek MK4096 4 kbit DRAM designed by Robert Proebsting and introduced in 1973. This addressing scheme uses the same address pins to receive the low half and the high half of the address of the memory cell being referenced, switching between the two halves on alternating bus cycles. This was a radical advance, effectively halving the number of address lines required, which enabled it to fit into packages with fewer pins, a cost advantage that grew with every jump in memory size. The MK4096 proved to be a very robust design for customer applications. At the 16 kbit density, the cost advantage increased; the 16 kbit Mostek MK4116 DRAM, introduced in 1976, achieved greater than 75% worldwide DRAM market share. However, as density increased to 64 kbit in the early 1980s, Mostek and other US manufacturers were overtaken by Japanese DRAM manufacturers, which dominated the US and worldwide markets during the 1980s and 1990s.
Early in 1985, Gordon Moore decided to withdraw Intel from producing DRAM.
By 1986, all United States chip makers had stopped making DRAMs.
In 1985, when 64K DRAM memory chips were the most common memory chips used in computers, and when more than 60 percent of those chips were produced by Japanese companies, semiconductor makers in the United States accused Japanese companies of export dumping for the purpose of driving makers in the United States out of the commodity memory chip business.
Synchronous dynamic random-access memory (SDRAM) was developed by Samsung. The first commercial SDRAM chip was the Samsung KM48SL2000, which had a capacity of 16Mb, and was introduced in 1992. The first commercial DDR SDRAM (double data rate SDRAM) memory chip was Samsung's 64Mb DDR SDRAM chip, released in 1998.
Later, in 2001, Japanese DRAM makers accused Korean DRAM manufacturers of dumping.
In 2002, US computer makers made claims of DRAM price fixing.
Principles of operation
DRAM is usually arranged in a rectangular array of charge storage cells consisting of one capacitor and transistor per data bit. The figure to the right shows a simple example with a four-by-four cell matrix. Some DRAM matrices are many thousands of cells in height and width.
The long horizontal lines connecting each row are known as word-lines. Each column of cells is composed of two bit-lines, each connected to every other storage cell in the column (the illustration to the right does not include this important detail). They are generally known as the "+" and "−" bit lines.
A sense amplifier is essentially a pair of cross-connected inverters between the bit-lines. The first inverter is connected with input from the + bit-line and output to the − bit-line. The second inverter's input is from the − bit-line with output to the + bit-line. This results in positive feedback which stabilizes after one bit-line is fully at its highest voltage and the other bit-line is at the lowest possible voltage.
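A rough way to see this positive feedback is to iterate two idealized cross-coupled inverters whose outputs push each other toward opposite rails. The gain and step model in the sketch below are arbitrary, chosen for illustration rather than circuit accuracy.

```python
def settle(v_plus, v_minus, vdd=1.0, gain=5.0, steps=20):
    """Toy cross-coupled inverter pair: each bit-line is pushed further in the
    direction of its current advantage over the other until both saturate."""
    for _ in range(steps):
        new_plus = min(vdd, max(0.0, v_plus + gain * (v_plus - v_minus)))
        new_minus = min(vdd, max(0.0, v_minus + gain * (v_minus - v_plus)))
        v_plus, v_minus = new_plus, new_minus
    return v_plus, v_minus

# A 40 mV imbalance (0.54 V vs 0.50 V) resolves to full logic levels.
print(settle(0.54, 0.50))   # -> (1.0, 0.0)
```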
Operations to read a data bit from a DRAM storage cell
The sense amplifiers are disconnected.
The bit-lines are precharged to exactly equal voltages that are in between high and low logic levels (e.g., 0.5 V if the two levels are 0 and 1 V). The bit-lines are physically symmetrical to keep the capacitance equal, and therefore at this time their voltages are equal.
The precharge circuit is switched off. Because the bit-lines are relatively long, they have enough capacitance to maintain the precharged voltage for a brief time. This is an example of dynamic logic.
The desired row's word-line is then driven high to connect a cell's storage capacitor to its bit-line. This causes the transistor to conduct, transferring charge from the storage cell to the connected bit-line (if the stored value is 1) or from the connected bit-line to the storage cell (if the stored value is 0). Since the capacitance of the bit-line is typically much higher than the capacitance of the storage cell, the voltage on the bit-line decreases very slightly if the storage cell's capacitor is discharged and increases very slightly if the storage cell is charged (e.g., 0.45 and 0.54 V in the two cases). As the other bit-line holds 0.50 V, there is a small voltage difference between the two twisted bit-lines (a numerical sketch of this charge sharing appears after this list).
The sense amplifiers are now connected to the bit-lines pairs. Positive feedback then occurs from the cross-connected inverters, thereby amplifying the small voltage difference between the odd and even row bit-lines of a particular column until one bit line is fully at the lowest voltage and the other is at the maximum high voltage. Once this has happened, the row is "open" (the desired cell data is available).
All storage cells in the open row are sensed simultaneously, and the sense amplifier outputs latched. A column address then selects which latch bit to connect to the external data bus. Reads of different columns in the same row can be performed without a row opening delay because, for the open row, all data has already been sensed and latched.
While reading of columns in an open row is occurring, current is flowing back up the bit-lines from the output of the sense amplifiers and recharging the storage cells. This reinforces (i.e. "refreshes") the charge in the storage cell by increasing the voltage in the storage capacitor if it was charged to begin with, or by keeping it discharged if it was empty. Note that due to the length of the bit-lines there is a fairly long propagation delay for the charge to be transferred back to the cell's capacitor. This takes significant time past the end of sense amplification, and thus overlaps with one or more column reads.
When done with reading all the columns in the current open row, the word-line is switched off to disconnect the storage cell capacitors (the row is "closed") from the bit-lines. The sense amplifier is switched off, and the bit-lines are precharged again.
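The small voltage shifts mentioned in the steps above (roughly 0.54 V or 0.45 V around a 0.5 V precharge) follow from simple charge conservation between the cell capacitor and the much larger bit-line capacitance. The capacitance values in this sketch are assumed, chosen only to reproduce numbers of that order.

```python
def share_charge(v_cell, v_precharge=0.5, c_cell=25e-15, c_bitline=250e-15):
    """Bit-line voltage after the access transistor connects the storage
    capacitor to it (charge conservation between the two capacitors)."""
    total_charge = c_cell * v_cell + c_bitline * v_precharge
    return total_charge / (c_cell + c_bitline)

# Cell stores a 1 (1.0 V) or a 0 (0.0 V); bit-line precharged to 0.5 V.
print(round(share_charge(1.0), 3))  # ~0.545 V -> sensed as logic 1
print(round(share_charge(0.0), 3))  # ~0.455 V -> sensed as logic 0
```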
To write to memory
To store data, a row is opened and a given column's sense amplifier is temporarily forced to the desired high or low voltage state, thus causing the bit-line to charge or discharge the cell storage capacitor to the desired value. Due to the sense amplifier's positive feedback configuration, it will hold a bit-line at stable voltage even after the forcing voltage is removed. During a write to a particular cell, all the columns in a row are sensed simultaneously just as during reading, so although only a single column's storage-cell capacitor charge is changed, the entire row is refreshed (written back in), as illustrated in the figure to the right.
Refresh rate
Typically, manufacturers specify that each row must be refreshed every 64 ms or less, as defined by the JEDEC standard.
Some systems refresh every row in a burst of activity involving all rows every 64 ms. Other systems refresh one row at a time staggered throughout the 64 ms interval. For example, a system with 2^13 = 8,192 rows would require a staggered refresh rate of one row every 7.8 µs, which is 64 ms divided by 8,192 rows. A few real-time systems refresh a portion of memory at a time determined by an external timer function that governs the operation of the rest of a system, such as the vertical blanking interval that occurs every 10–20 ms in video equipment.
The row address of the row that will be refreshed next is maintained by external logic or a counter within the DRAM. A system that provides the row address (and the refresh command) does so to have greater control over when to refresh and which row to refresh. This is done to minimize conflicts with memory accesses, since such a system has both knowledge of the memory access patterns and the refresh requirements of the DRAM. When the row address is supplied by a counter within the DRAM, the system relinquishes control over which row is refreshed and only provides the refresh command. Some modern DRAMs are capable of self-refresh; no external logic is required to instruct the DRAM to refresh or to provide a row address.
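The staggered-refresh arithmetic described above is a one-line calculation; the sketch below reproduces the 7.8 µs figure and models a simple round-robin row counter like the one a DRAM maintains internally.

```python
REFRESH_WINDOW_S = 64e-3   # every row refreshed within 64 ms
ROWS = 2 ** 13             # 8,192 rows, as in the example above

interval = REFRESH_WINDOW_S / ROWS
print(f"one row every {interval * 1e6:.2f} us")   # ~7.81 us

# Round-robin refresh counter, as a DRAM's internal counter would behave.
row_to_refresh = 0

def next_refresh_row():
    """Return the next row to refresh and advance the counter."""
    global row_to_refresh
    row = row_to_refresh
    row_to_refresh = (row_to_refresh + 1) % ROWS
    return row
```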
Under some conditions, most of the data in DRAM can be recovered even if the DRAM has not been refreshed for several minutes.
Memory timing
Many parameters are required to fully describe the timing of DRAM operation. Here are some examples for two timing grades of asynchronous DRAM, from a data sheet published in 1998:
Thus, the generally quoted number is the /RAS access time. This is the time to read a random bit from a precharged DRAM array. The time to read additional bits from an open page is much less.
When such a RAM is accessed by clocked logic, the times are generally rounded up to the nearest clock cycle. For example, when accessed by a 100 MHz state machine (i.e. a 10 ns clock), the 50 ns DRAM can perform the first read in five clock cycles, and additional reads within the same page every two clock cycles. This was generally described as "5-2-2-2" timing, as bursts of four reads within a page were common.
When describing synchronous memory, timing is described by clock cycle counts separated by hyphens, expressed in multiples of the DRAM clock cycle time. Note that the DRAM clock rate is half of the data transfer rate when double data rate signaling is used. JEDEC standard PC3200 timing is specified with a 200 MHz clock, while premium-priced, high-performance PC3200 DDR DRAM DIMMs might be operated at tighter (lower-latency) timings.
Minimum random access time has improved from tRAC = 50 ns to about 22.5 ns, and even the premium 20 ns variety is only 2.5 times better than the typical case (~2.22 times better). CAS latency has improved even less, to 10 ns. However, DDR3 memory does achieve 32 times higher bandwidth; due to internal pipelining and wide data paths, it can output two words every 1.25 ns (1,600 Mword/s), while EDO DRAM can output one word per tPC = 20 ns (50 Mword/s).
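The bandwidth comparison in the preceding paragraph can be checked with a few lines of arithmetic using the figures quoted in the text.

```python
# EDO DRAM: one word per page-cycle time tPC = 20 ns.
edo_rate = 1 / 20e-9                 # 5.0e7 words/s  (50 Mword/s)

# DDR3 example from the text: two words every 1.25 ns.
ddr3_rate = 2 / 1.25e-9              # 1.6e9 words/s  (1,600 Mword/s)

print(ddr3_rate / edo_rate)          # 32.0 -> the "32 times higher bandwidth"
```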
Timing abbreviations
Memory cell design
Each bit of data in a DRAM is stored as a positive or negative electrical charge in a capacitive structure. The structure providing the capacitance, as well as the transistors that control access to it, is collectively referred to as a DRAM cell. They are the fundamental building block in DRAM arrays. Multiple DRAM memory cell variants exist, but the most commonly used variant in modern DRAMs is the one-transistor, one-capacitor (1T1C) cell. The transistor is used to admit current into the capacitor during writes, and to discharge the capacitor during reads. The access transistor is designed to maximize drive strength and minimize transistor-transistor leakage (Kenner, pg. 34).
The capacitor has two terminals, one of which is connected to its access transistor, and the other to either ground or VCC/2. In modern DRAMs, the latter case is more common, since it allows faster operation. In modern DRAMs, a voltage of +VCC/2 across the capacitor is required to store a logic one; and a voltage of -VCC/2 across the capacitor is required to store a logic zero. The electrical charge stored in the capacitor is measured in coulombs. For a logic one, the charge is Q = +(VCC/2)·C, where Q is the charge in coulombs and C is the capacitance in farads. A logic zero has a charge of Q = -(VCC/2)·C.
Reading or writing a logic one requires that the wordline be driven to a voltage greater than the sum of VCC and the access transistor's threshold voltage (VTH). This voltage is called VCC pumped (VCCP). The time required to discharge a capacitor thus depends on what logic value is stored in the capacitor. A capacitor containing logic one begins to discharge when the voltage at the access transistor's gate terminal is above VCCP. If the capacitor contains a logic zero, it begins to discharge when the gate terminal voltage is above VTH.
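Using the relation Q = ±(VCC/2)·C above, the charge stored in a cell is tiny; the capacitance and supply voltage below are assumed, representative values rather than figures from the text.

```python
C_CELL = 25e-15      # assumed cell capacitance, 25 fF
VCC = 1.5            # assumed supply voltage, volts

q_one = (+VCC / 2) * C_CELL   # charge representing logic 1
q_zero = (-VCC / 2) * C_CELL  # charge representing logic 0

print(f"logic 1: {q_one:.2e} C")    # ~1.88e-14 C
print(f"logic 0: {q_zero:.2e} C")   # ~-1.88e-14 C
```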
Capacitor design
Up until the mid-1980s, the capacitors in DRAM cells were co-planar with the access transistor (they were constructed on the surface of the substrate), and thus they were referred to as planar capacitors. The drive to increase both density and, to a lesser extent, performance, required denser designs. This was strongly motivated by economics, a major consideration for DRAM devices, especially commodity DRAMs. The minimization of DRAM cell area can produce a denser device and lower the cost per bit of storage. Starting in the mid-1980s, the capacitor was moved above or below the silicon substrate in order to meet these objectives. DRAM cells featuring capacitors above the substrate are referred to as stacked or folded plate capacitors. Those with capacitors buried beneath the substrate surface are referred to as trench capacitors. In the 2000s, manufacturers were sharply divided by the type of capacitor used in their DRAMs, and the relative cost and long-term scalability of both designs have been the subject of extensive debate. The majority of DRAMs, from major manufacturers such as Hynix, Micron Technology, and Samsung Electronics, use the stacked capacitor structure, whereas smaller manufacturers such as Nanya Technology use the trench capacitor structure (Jacob, pp. 355–357).
The capacitor in the stacked capacitor scheme is constructed above the surface of the substrate. The capacitor is constructed from an oxide-nitride-oxide (ONO) dielectric sandwiched between two layers of polysilicon plates (the top plate is shared by all DRAM cells in an IC), and its shape can be a rectangle, a cylinder, or some other more complex shape. There are two basic variations of the stacked capacitor, based on its location relative to the bitline: capacitor-over-bitline (COB) and capacitor-under-bitline (CUB). In the former variation, the capacitor is underneath the bitline, which is usually made of metal, and the bitline has a polysilicon contact that extends downwards to connect it to the access transistor's source terminal. In the latter variation, the capacitor is constructed above the bitline, which is almost always made of polysilicon, but is otherwise identical to the COB variation. The advantage the COB variant possesses is the ease of fabricating the contact between the bitline and the access transistor's source, as it is physically close to the substrate surface. However, this requires the active area to be laid out at a 45-degree angle when viewed from above, which makes it difficult to ensure that the capacitor contact does not touch the bitline. CUB cells avoid this, but suffer from difficulties in inserting contacts in between bitlines, since the size of features this close to the surface is at or near the minimum feature size of the process technology (Kenner, pp. 33–42).
The trench capacitor is constructed by etching a deep hole into the silicon substrate. The substrate volume surrounding the hole is then heavily doped to produce a buried n+ plate and to reduce resistance. A layer of oxide-nitride-oxide dielectric is grown or deposited, and finally the hole is filled by depositing doped polysilicon, which forms the top plate of the capacitor. The top of the capacitor is connected to the access transistor's drain terminal via a polysilicon strap (Kenner, pp. 42–44). A trench capacitor's depth-to-width ratio in DRAMs of the mid-2000s can exceed 50:1 (Jacob, p. 357).
Trench capacitors have numerous advantages. Since the capacitor is buried in the bulk of the substrate instead of lying on its surface, the area it occupies can be minimized to what is required to connect it to the access transistor's drain terminal without decreasing the capacitor's size, and thus capacitance (Jacob, pp. 356–357). Alternatively, the capacitance can be increased by etching a deeper hole without any increase to surface area (Kenner, pg. 44). Another advantage of the trench capacitor is that its structure is under the layers of metal interconnect, allowing them to be more easily made planar, which enables it to be integrated in a logic-optimized process technology, which has many levels of interconnect above the substrate. The fact that the capacitor is under the logic means that it is constructed before the transistors are. This allows high-temperature processes to fabricate the capacitors, which would otherwise degrade the logic transistors and their performance. This makes trench capacitors suitable for constructing embedded DRAM (eDRAM) (Jacob, p. 357). Disadvantages of trench capacitors are difficulties in reliably constructing the capacitor's structures within deep holes and in connecting the capacitor to the access transistor's drain terminal (Kenner, pg. 44).
Historical cell designs
First-generation DRAM ICs (those with capacities of 1 kbit), of which the first was the Intel 1103, used a three-transistor, one-capacitor (3T1C) DRAM cell. By the second generation, the requirement to increase density by fitting more bits in a given area, or the requirement to reduce cost by fitting the same number of bits in a smaller area, led to the almost universal adoption of the 1T1C DRAM cell, although a couple of devices with 4 and 16 kbit capacities continued to use the 3T1C cell for performance reasons (Kenner, p. 6). These performance advantages included, most significantly, the ability to read the state stored by the capacitor without discharging it, avoiding the need to write back what was read out (non-destructive read). A second performance advantage is that the 3T1C cell has separate transistors for reading and writing; the memory controller can exploit this feature to perform atomic read-modify-writes, where a value is read, modified, and then written back as a single, indivisible operation (Jacob, p. 459).
Proposed cell designs
The one-transistor, zero-capacitor (1T) DRAM cell has been a topic of research since the late 1990s. 1T DRAM is a different way of constructing the basic DRAM memory cell, distinct from the classic one-transistor/one-capacitor (1T/1C) DRAM cell, which is also sometimes referred to as "1T DRAM", particularly in comparison to the 3T and 4T DRAM which it replaced in the 1970s.
In 1T DRAM cells, the bit of data is still stored in a capacitive region controlled by a transistor, but this capacitance is no longer provided by a separate capacitor. 1T DRAM is a "capacitorless" bit cell design that stores data using the parasitic body capacitance that is inherent to silicon on insulator (SOI) transistors. Considered a nuisance in logic design, this floating body effect can be used for data storage. This gives 1T DRAM cells the greatest density as well as allowing easier integration with high-performance logic circuits since they are constructed with the same SOI process technologies.
Refreshing of cells remains necessary, but unlike with 1T1C DRAM, reads in 1T DRAM are non-destructive; the stored charge causes a detectable shift in the threshold voltage of the transistor. Performance-wise, access times are significantly better than capacitor-based DRAMs, but slightly worse than SRAM. There are several types of 1T DRAMs: the commercialized Z-RAM from Innovative Silicon, the TTRAM from Renesas and the A-RAM from the UGR/CNRS consortium.
Array structures
DRAM cells are laid out in a regular rectangular, grid-like pattern to facilitate their control and access via wordlines and bitlines. The physical layout of the DRAM cells in an array is typically designed so that two adjacent DRAM cells in a column share a single bitline contact to reduce their area. DRAM cell area is given as n F², where n is a number derived from the DRAM cell design, and F is the smallest feature size of a given process technology. This scheme permits comparison of DRAM size over different process technology generations, as DRAM cell area scales at linear or near-linear rates with respect to feature size. The typical area for modern DRAM cells varies between 6 and 8 F².
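As a rough illustration of the n F² convention, the following Python sketch converts cell factors and feature sizes into absolute cell areas; the particular values are examples chosen for illustration, not figures for any specific product.

def cell_area_um2(n, feature_size_nm):
    """Return DRAM cell area in square micrometres for a cell of n F^2."""
    f_um = feature_size_nm / 1000.0          # convert nanometres to micrometres
    return n * f_um ** 2

for feature_size_nm in (90, 65, 45):         # example process nodes
    for n in (6, 8):                         # common modern cell factors
        area = cell_area_um2(n, feature_size_nm)
        print(f"F = {feature_size_nm} nm, {n}F^2 cell: {area:.4f} um^2")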
The horizontal wire, the wordline, is connected to the gate terminal of every access transistor in its row. The vertical bitline is connected to the source terminal of the transistors in its column. The lengths of the wordlines and bitlines are limited. The wordline length is limited by the desired performance of the array, since the propagation time of the signal that must traverse the wordline is determined by the RC time constant. The bitline length is limited by its capacitance (which increases with length), which must be kept within a range for proper sensing (as DRAMs operate by sensing the charge of the capacitor released onto the bitline). Bitline length is also limited by the amount of operating current the DRAM can draw and by how power can be dissipated, since these two characteristics are largely determined by the charging and discharging of the bitline.
Bitline architecture
Sense amplifiers are required to read the state contained in the DRAM cells. When the access transistor is activated, the electrical charge in the capacitor is shared with the bitline. The bitline's capacitance is much greater than that of the capacitor (approximately ten times). Thus, the change in bitline voltage is minute. Sense amplifiers are required to resolve the voltage differential into the levels specified by the logic signaling system. Modern DRAMs use differential sense amplifiers, and are accompanied by requirements as to how the DRAM arrays are constructed. Differential sense amplifiers work by driving their outputs to opposing extremes based on the relative voltages on pairs of bitlines. The sense amplifiers function effectively and efficiently only if the capacitance and voltages of these bitline pairs are closely matched. Besides ensuring that the lengths of the bitlines and the number of DRAM cells attached to them are equal, two basic approaches to array design have emerged to provide for the requirements of the sense amplifiers: open and folded bitline arrays.
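The size of the bitline swing follows from conservation of charge: the change in bitline voltage is the voltage difference between the stored cell level and the bitline precharge level, scaled by C_cell / (C_cell + C_bitline). The Python sketch below evaluates this with illustrative, assumed component values; the numbers are not taken from the text.

def bitline_swing(v_diff, c_cell_fF, c_bitline_fF):
    """Bitline voltage change (in volts) after charge sharing with the cell."""
    return v_diff * c_cell_fF / (c_cell_fF + c_bitline_fF)

v_diff = 0.9        # assumed difference between cell voltage and precharge level, V
c_cell = 25.0       # assumed storage capacitance, fF
c_bitline = 250.0   # assumed bitline capacitance, about ten times the cell

dv = bitline_swing(v_diff, c_cell, c_bitline)
print(f"Bitline swing: {dv * 1000:.0f} mV")   # roughly 82 mV, hence the need for sense amplifiers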
Open bitline arrays
DRAM ICs from the first generation (1 kbit) up until the 64 kbit generation (and some 256 kbit generation devices) had open bitline array architectures. In these architectures, the bitlines are divided into multiple segments, and the differential sense amplifiers are placed in between bitline segments. Because the sense amplifiers are placed between bitline segments, routing their outputs outside the array requires an additional layer of interconnect placed above those used to construct the wordlines and bitlines.
The DRAM cells that are on the edges of the array do not have adjacent segments. Since the differential sense amplifiers require identical capacitance and bitline lengths from both segments, dummy bitline segments are provided. The advantage of the open bitline array is a smaller array area, although this advantage is slightly diminished by the dummy bitline segments. The disadvantage that caused the near disappearance of this architecture is the inherent vulnerability to noise, which affects the effectiveness of the differential sense amplifiers. Since each bitline segment does not have any spatial relationship to the other, it is likely that noise would affect only one of the two bitline segments.
Folded bitline arrays
The folded bitline array architecture routes bitlines in pairs throughout the array. The close proximity of the paired bitlines provide superior common-mode noise rejection characteristics over open bitline arrays. The folded bitline array architecture began appearing in DRAM ICs during the mid-1980s, beginning with the 256 kbit generation. This architecture is favored in modern DRAM ICs for its superior noise immunity.
This architecture is referred to as folded because, from the perspective of the circuit schematic, it is derived from the open array architecture. The folded array architecture appears to remove DRAM cells in alternate pairs (because two DRAM cells share a single bitline contact) from a column, then move the DRAM cells from an adjacent column into the voids.
The location where the bitline twists occupies additional area. To minimize area overhead, engineers select the simplest and most area-minimal twisting scheme that is able to reduce noise under the specified limit. As process technology improves to reduce minimum feature sizes, the signal-to-noise problem worsens, since coupling between adjacent metal wires is inversely proportional to their pitch. The array folding and bitline twisting schemes that are used must increase in complexity in order to maintain sufficient noise reduction. Schemes that provide desirable noise-immunity characteristics with minimal impact on area are the topic of current research (Kenner, p. 37).
Future array architectures
Advances in process technology could result in the open bitline array architecture being favored if it is able to offer better long-term area efficiency, since folded array architectures require increasingly complex folding schemes to keep pace with advances in process technology. The relationship between process technology, array architecture, and area efficiency is an active area of research.
Row and column redundancy
The first DRAM integrated circuits did not have any redundancy. An integrated circuit with a defective DRAM cell would be discarded. Beginning with the 64 kbit generation, DRAM arrays have included spare rows and columns to improve yields. Spare rows and columns provide tolerance of minor fabrication defects which have caused a small number of rows or columns to be inoperable. The defective rows and columns are physically disconnected from the rest of the array by triggering a programmable fuse or by cutting the wire with a laser. The spare rows or columns are substituted in by remapping logic in the row and column decoders (Jacob, pp. 358–361).
Error detection and correction
Electrical or magnetic interference inside a computer system can cause a single bit of DRAM to spontaneously flip to the opposite state. The majority of one-off ("soft") errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic ray secondaries, which may change the contents of one or more memory cells or interfere with the circuitry used to read/write them.
The problem can be mitigated by using redundant memory bits and additional circuitry that use these bits to detect and correct soft errors. In most cases, the detection and correction are performed by the memory controller; sometimes, the required logic is transparently implemented within DRAM chips or modules, enabling the ECC memory functionality for otherwise ECC-incapable systems. The extra memory bits are used to record parity and to enable missing data to be reconstructed by error-correcting code (ECC). Parity allows the detection of all single-bit errors (actually, any odd number of wrong bits). The most common error-correcting code, a SECDED Hamming code, allows a single-bit error to be corrected and, in the usual configuration, with an extra parity bit, double-bit errors to be detected.
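The principle of SECDED can be sketched with a Hamming(7,4) code extended by an overall parity bit: a single flipped bit is located and corrected, while two flipped bits are detected but not corrected. Real DRAM ECC typically protects 64-bit words with eight check bits, so the following Python sketch only illustrates the idea.

def encode(nibble):
    """Encode 4 data bits (list of 0/1) into an 8-bit SECDED codeword."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    word = [p1, p2, d1, p3, d2, d3, d4]   # Hamming(7,4) positions 1..7
    p0 = 0
    for b in word:
        p0 ^= b                # overall parity over positions 1..7
    return [p0] + word         # position 0 holds the overall parity bit

def decode(word):
    """Return (data_bits, status); status is 'ok', 'corrected' or 'double_error'."""
    w = list(word)
    p0, body = w[0], w[1:]
    s1 = body[0] ^ body[2] ^ body[4] ^ body[6]
    s2 = body[1] ^ body[2] ^ body[5] ^ body[6]
    s3 = body[3] ^ body[4] ^ body[5] ^ body[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    overall = p0
    for b in body:
        overall ^= b           # zero if overall parity still holds
    if syndrome and overall:               # single-bit error: locate and fix it
        body[syndrome - 1] ^= 1
        status = "corrected"
    elif syndrome and not overall:         # two bits flipped: detect, cannot fix
        status = "double_error"
    else:
        status = "ok"                      # no error (or error only in p0)
    return [body[2], body[4], body[5], body[6]], status

data = [1, 0, 1, 1]
codeword = encode(data)
codeword[4] ^= 1                           # inject a single-bit error
print(decode(codeword))                    # -> ([1, 0, 1, 1], 'corrected')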
Recent studies give widely varying error rates, differing by over seven orders of magnitude, ranging from roughly one bit error per hour per gigabyte of memory to one bit error per century per gigabyte of memory. The Schroeder et al. 2009 study reported a 32% chance that a given computer in their study would suffer from at least one correctable error per year, and provided evidence that most such errors are intermittent hard errors rather than soft errors. A 2010 study at the University of Rochester also gave evidence that a substantial fraction of memory errors are intermittent hard errors. Large-scale studies on non-ECC main memory in PCs and laptops suggest that undetected memory errors account for a substantial number of system failures: one study reported a 1-in-1700 chance per 1.5% of memory tested (extrapolating to an approximately 26% chance for total memory) that a computer would have a memory error every eight months.
Security
Data remanence
Although dynamic memory is only specified and guaranteed to retain its contents when supplied with power and refreshed every short period of time, the memory cell capacitors often retain their values for significantly longer, particularly at low temperatures. Under some conditions most of the data in DRAM can be recovered even if it has not been refreshed for several minutes.
This property can be used to circumvent security and recover data stored in the main memory that is assumed to be destroyed at power-down. This can be done by quickly rebooting the computer and reading out the contents of main memory, or by removing a computer's memory modules, cooling them to prolong data remanence, and then transferring them to a different computer to be read out. Such an attack was demonstrated to circumvent popular disk encryption systems, such as the open source TrueCrypt, Microsoft's BitLocker Drive Encryption, and Apple's FileVault. This type of attack against a computer is often called a cold boot attack.
Memory corruption
Dynamic memory, by definition, requires periodic refresh. Furthermore, reading dynamic memory is a destructive operation, requiring a recharge of the storage cells in the row that has been read. If these processes are imperfect, a read operation can cause soft errors. In particular, there is a risk that some charge can leak between nearby cells, causing the refresh or read of one row to cause a disturbance error in an adjacent or even nearby row. The awareness of disturbance errors dates back to the first commercially available DRAM in the early 1970s (the Intel 1103). Despite the mitigation techniques employed by manufacturers, commercial researchers proved in a 2014 analysis that commercially available DDR3 DRAM chips manufactured in 2012 and 2013 are susceptible to disturbance errors. The associated side effect that led to observed bit flips has been dubbed row hammer.
Packaging
Memory module
Dynamic RAM ICs are usually packaged in molded epoxy cases, with an internal lead frame for interconnections between the silicon die and the package leads. The original IBM PC design used ICs packaged in dual in-line packages, soldered directly to the main board or mounted in sockets. As memory density skyrocketed, the DIP package was no longer practical. For convenience in handling, several dynamic RAM integrated circuits may be mounted on a single memory module, allowing installation of 16-bit, 32-bit or 64-bit wide memory in a single unit, without the requirement for the installer to insert multiple individual integrated circuits. Memory modules may include additional devices for parity checking or error correction. Over the evolution of desktop computers, several standardized types of memory module have been developed. Laptop computers, game consoles, and specialized devices may have their own formats of memory modules not interchangeable with standard desktop parts for packaging or proprietary reasons.
Embedded
DRAM that is integrated into an integrated circuit designed in a logic-optimized process (such as an application-specific integrated circuit, microprocessor, or an entire system on a chip) is called embedded DRAM (eDRAM). Embedded DRAM requires DRAM cell designs that can be fabricated without preventing the fabrication of fast-switching transistors used in high-performance logic, and modification of the basic logic-optimized process technology to accommodate the process steps required to build DRAM cell structures.
Versions
Since the fundamental DRAM cell and array have maintained the same basic structure for many years, the types of DRAM are mainly distinguished by the many different interfaces for communicating with DRAM chips.
Asynchronous DRAM
The original DRAM, now known by the retronym "asynchronous DRAM", was the first type of DRAM in use. From its origins in the late 1960s, it was commonplace in computing up until around 1997, when it was mostly replaced by synchronous DRAM. Today, the manufacture of asynchronous RAM is relatively rare.
Principles of operation
An asynchronous DRAM chip has power connections, some number of address inputs (typically 12), and a few (typically one or four) bidirectional data lines. There are four active-low control signals:
RAS, the Row Address Strobe. The address inputs are captured on the falling edge of RAS, and select a row to open. The row is held open as long as RAS is low.
CAS, the Column Address Strobe. The address inputs are captured on the falling edge of CAS, and select a column from the currently open row to read or write.
WE, Write Enable. This signal determines whether a given falling edge of CAS is a read (if high) or write (if low). If low, the data inputs are also captured on the falling edge of CAS.
OE, Output Enable. This is an additional signal that controls output to the data I/O pins. The data pins are driven by the DRAM chip if RAS and CAS are low, WE is high, and OE is low. In many applications, OE can be permanently connected low (output always enabled), but switching OE can be useful when connecting multiple memory chips in parallel.
This interface provides direct control of internal timing. When RAS is driven low, a CAS cycle must not be attempted until the sense amplifiers have sensed the memory state, and RAS must not be returned high until the storage cells have been refreshed. When RAS is driven high, it must be held high long enough for precharging to complete.
Although the DRAM is asynchronous, the signals are typically generated by a clocked memory controller, which limits their timing to multiples of the controller's clock cycle.
RAS Only Refresh
Classic asynchronous DRAM is refreshed by opening each row in turn.
The refresh cycles are distributed across the entire refresh interval in such a way that all rows are refreshed within the required interval. To refresh one row of the memory array using RAS only refresh (ROR), the following steps must occur:
The row address of the row to be refreshed must be applied at the address input pins.
RAS must switch from high to low. CAS must remain high.
At the end of the required amount of time, RAS must return high.
This can be done by supplying a row address and pulsing RAS low; it is not necessary to perform any CAS cycles. An external counter is needed to iterate over the row addresses in turn.
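A minimal sketch of this sequence, written in Python against a stand-in DRAM interface that merely logs signal transitions, is given below; the method names and timing constants are illustrative assumptions rather than a real driver API.

NUM_ROWS = 8           # tiny array for demonstration; real parts have thousands of rows
T_RAS_NS = 60          # assumed minimum time RAS must stay low
T_RP_NS = 40           # assumed precharge time after RAS returns high

class LoggingDram:
    """Stand-in for a DRAM interface; records the sequence of operations."""
    def set_address(self, addr): print(f"address <= {addr}")
    def set_ras(self, low):      print(f"RAS <= {'low' if low else 'high'}")
    def wait_ns(self, ns):       print(f"wait {ns} ns")

def ras_only_refresh(dram):
    """Refresh every row once using only RAS cycles; CAS stays high throughout."""
    for row in range(NUM_ROWS):      # the external counter iterating row addresses
        dram.set_address(row)        # apply the row address
        dram.set_ras(low=True)       # falling edge of RAS latches the address
        dram.wait_ns(T_RAS_NS)       # sense amplifiers restore the row's contents
        dram.set_ras(low=False)      # RAS returns high
        dram.wait_ns(T_RP_NS)        # allow precharge before the next row

ras_only_refresh(LoggingDram())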
CAS before RAS refresh
For convenience, the counter was quickly incorporated into the DRAM chips themselves. If the CAS line is driven low before RAS (normally an illegal operation), then the DRAM ignores the address inputs and uses an internal counter to select the row to open. This is known as CAS-before-RAS (CBR) refresh. This became the standard form of refresh for asynchronous DRAM, and is the only form generally used with SDRAM.
Hidden refresh
Given support of CAS-before-RAS refresh, it is possible to deassert RAS while holding CAS low to maintain data output. If RAS is then asserted again, this performs a CBR refresh cycle while the DRAM outputs remain valid. Because data output is not interrupted, this is known as hidden refresh.
Page mode DRAM
Page mode DRAM is a minor modification to the first-generation DRAM IC interface which improved the performance of reads and writes to a row by avoiding the inefficiency of precharging and opening the same row repeatedly to access different columns. In page mode DRAM, after a row was opened by holding RAS low, the row could be kept open, and multiple reads or writes could be performed to any of the columns in the row. Each column access was initiated by asserting CAS and presenting a column address. For reads, after a delay (tCAC), valid data would appear on the data out pins, which were held at high-Z before the appearance of valid data. For writes, the write enable signal and write data would be presented along with the column address.
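The following Python sketch shows the corresponding command sequence for a page-mode read burst, again using a stand-in interface that only logs operations; the tCAC value and method names are assumptions for illustration.

class LoggingDram:
    """Stand-in for a DRAM interface; records the sequence of operations."""
    def set_address(self, addr): print(f"address <= {addr}")
    def set_ras(self, low):      print(f"RAS <= {'low' if low else 'high'}")
    def set_cas(self, low):      print(f"CAS <= {'low' if low else 'high'}")
    def wait_ns(self, ns):       print(f"wait {ns} ns")
    def read_data(self):         return 0xA5   # placeholder data word

def page_mode_read(dram, row, columns):
    """Read several columns from one open row without re-opening it each time."""
    dram.set_address(row)
    dram.set_ras(low=True)           # open the row and keep it open
    data = []
    for col in columns:
        dram.set_address(col)
        dram.set_cas(low=True)       # column address latched on the falling edge
        dram.wait_ns(25)             # assumed tCAC before data is valid
        data.append(dram.read_data())
        dram.set_cas(low=False)
    dram.set_ras(low=False)          # close the row and precharge
    return data

print(page_mode_read(LoggingDram(), row=42, columns=[0, 1, 2, 3]))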
Page mode DRAM was later improved with a small modification which further reduced latency. DRAMs with this improvement were called fast page mode DRAMs (FPM DRAMs). In page mode DRAM, CAS was asserted before the column address was supplied. In FPM DRAM, the column address could be supplied while CAS was still deasserted. The column address propagated through the column address data path, but did not output data on the data pins until CAS was asserted. Prior to CAS being asserted, the data out pins were held at high-Z. FPM DRAM reduced tCAC latency. Fast page mode DRAM was introduced in 1986 and was used with the Intel 80486.
Static column is a variant of fast page mode in which the column address does not need to be strobed in; rather, the address inputs may be changed with CAS held low, and the data output will be updated accordingly a few nanoseconds later.
Nibble mode is another variant in which four sequential locations within the row can be accessed with four consecutive pulses of CAS. The difference from normal page mode is that the address inputs are not used for the second through fourth CAS edges; they are generated internally starting with the address supplied for the first CAS edge.
Extended data out DRAM
Extended data out DRAM (EDO DRAM) was invented and patented in the 1990s by Micron Technology, which then licensed the technology to many other memory manufacturers. EDO RAM, sometimes referred to as Hyper Page Mode enabled DRAM, is similar to fast page mode DRAM with the additional feature that a new access cycle can be started while keeping the data output of the previous cycle active. This allows a certain amount of overlap in operation (pipelining), allowing somewhat improved performance. It is up to 30% faster than FPM DRAM, which it began to replace in 1995 when Intel introduced the 430FX chipset with EDO DRAM support. Irrespective of the performance gains, FPM and EDO SIMMs can be used interchangeably in many (but not all) applications.
To be precise, EDO DRAM begins data output on the falling edge of CAS but does not stop the output when CAS rises again. It holds the output valid (thus extending the data output time) until either RAS is deasserted or a new falling edge of CAS selects a different column address.
Single-cycle EDO has the ability to carry out a complete memory transaction in one clock cycle. Otherwise, each sequential RAM access within the same page takes two clock cycles instead of three, once the page has been selected. EDO's performance and capabilities created an opportunity to reduce the immense performance loss associated with a lack of L2 cache in low-cost, commodity PCs. This was also good for notebooks, which faced form-factor and battery-life constraints. Additionally, for systems with an L2 cache, the availability of EDO memory improved the average memory latency seen by applications over earlier FPM implementations.
Single-cycle EDO DRAM became very popular on video cards towards the end of the 1990s. It was very low cost, yet nearly as efficient for performance as the far more costly VRAM.
Burst EDO DRAM
An evolution of EDO DRAM, burst EDO DRAM (BEDO DRAM) could process four memory addresses in one burst, saving an additional three clocks over optimally designed EDO memory. This was done by adding an address counter on the chip to keep track of the next address. BEDO also added a pipeline stage, allowing the page-access cycle to be divided into two parts. During a memory-read operation, the first part accessed the data from the memory array to the output stage (second latch). The second part drove the data bus from this latch at the appropriate logic level. Since the data is already in the output buffer, quicker access time is achieved (up to 50% for large blocks of data) than with traditional EDO.
Although BEDO DRAM showed additional optimization over EDO, by the time it was available the market had made a significant investment towards synchronous DRAM (SDRAM). Even though BEDO RAM was superior to SDRAM in some ways, the latter technology quickly displaced BEDO.
Synchronous dynamic RAM
Synchronous dynamic RAM (SDRAM) significantly revises the asynchronous memory interface, adding a clock (and a clock enable) line. All other signals are received on the rising edge of the clock.
The RAS and CAS inputs no longer act as strobes, but are instead, along with WE, part of a 3-bit command controlled by a new active-low strobe, chip select (CS).
The OE line's function is extended to a per-byte "DQM" signal, which controls data input (writes) in addition to data output (reads). This allows DRAM chips to be wider than 8 bits while still supporting byte-granularity writes.
Many timing parameters remain under the control of the DRAM controller. For example, a minimum time must elapse between a row being activated and a read or write command. One important parameter must be programmed into the SDRAM chip itself, namely the CAS latency. This is the number of clock cycles allowed for internal operations between a read command and the first data word appearing on the data bus. The "Load mode register" command is used to transfer this value to the SDRAM chip. Other configurable parameters include the length of read and write bursts, i.e. the number of words transferred per read or write command.
The most significant change, and the primary reason that SDRAM has supplanted asynchronous RAM, is the support for multiple internal banks inside the DRAM chip. Using a few bits of "bank address" which accompany each command, a second bank can be activated and begin reading data while a read from the first bank is in progress. By alternating banks, an SDRAM device can keep the data bus continuously busy, in a way that asynchronous DRAM cannot.
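A simplified Python sketch of this interleaving is shown below; the command timeline and the latency values (activate-to-read delay, CAS latency, burst length) are illustrative assumptions, not parameters of any real device.

ACTIVATE_TO_READ = 2   # assumed cycles between ACTIVATE and READ (tRCD)
CAS_LATENCY = 3        # assumed cycles between READ and the first data word
BURST_LENGTH = 4       # assumed words per READ command

def schedule(requests):
    """Print a simplified command/data timeline for alternating-bank reads."""
    next_cmd_cycle = 0
    for bank, row, col in requests:
        act = next_cmd_cycle
        rd = act + ACTIVATE_TO_READ
        data = rd + CAS_LATENCY
        print(f"cycle {act:2d}: ACTIVATE bank {bank} row {row}")
        print(f"cycle {rd:2d}: READ     bank {bank} col {col}")
        print(f"cycle {data:2d}-{data + BURST_LENGTH - 1:2d}: data burst from bank {bank}")
        # The next bank's commands overlap with this bank's data burst.
        next_cmd_cycle += BURST_LENGTH

schedule([(0, 17, 4), (1, 23, 8)])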
Single data rate synchronous DRAM
Single data rate SDRAM (SDR SDRAM or SDR) is the original generation of SDRAM; it made a single transfer of data per clock cycle.
Double data rate synchronous DRAM
Double data rate SDRAM (DDR SDRAM or DDR) was a later development of SDRAM, used in PC memory beginning in 2000. Subsequent versions are numbered sequentially (DDR2, DDR3, etc.). DDR SDRAM internally performs double-width accesses at the clock rate, and uses a double data rate interface to transfer one half on each clock edge. DDR2 and DDR3 increased this factor to 4× and 8×, respectively, delivering 4-word and 8-word bursts over 2 and 4 clock cycles, respectively. The internal access rate is mostly unchanged (200 million accesses per second for DDR-400, DDR2-800 and DDR3-1600 memory), but each access transfers more data.
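The peak transfer rates implied by these figures can be computed directly; the sketch below assumes the usual 64-bit module data bus, which is an assumption not stated in the text.

INTERNAL_RATE = 200e6   # internal accesses per second (DDR-400, DDR2-800, DDR3-1600)
BUS_WIDTH_BITS = 64     # assumed module data-bus width

for name, prefetch in (("DDR-400", 2), ("DDR2-800", 4), ("DDR3-1600", 8)):
    transfers_per_s = INTERNAL_RATE * prefetch
    peak_bytes = transfers_per_s * BUS_WIDTH_BITS / 8
    print(f"{name}: {transfers_per_s / 1e6:.0f} MT/s, "
          f"peak {peak_bytes / 1e9:.1f} GB/s per 64-bit module")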
Direct Rambus DRAM
Direct Rambus DRAM (DRDRAM) was developed by Rambus. First supported on motherboards in 1999, it was intended to become an industry standard, but was outcompeted by DDR SDRAM, making it technically obsolete by 2003.
Reduced Latency DRAM
Reduced Latency DRAM (RLDRAM) is a high performance double data rate (DDR) SDRAM that combines fast, random access with high bandwidth, mainly intended for networking and caching applications.
Graphics RAM
Graphics RAMs are asynchronous and synchronous DRAMs designed for graphics-related tasks such as texture memory and framebuffers, found on video cards.
Video DRAM
Video DRAM (VRAM) is a dual-ported variant of DRAM that was once commonly used to store the frame-buffer in some graphics adaptors.
Window DRAM
Window DRAM (WRAM) is a variant of VRAM that was once used in graphics adaptors such as the Matrox Millennium and ATI 3D Rage Pro. WRAM was designed to perform better and cost less than VRAM. WRAM offered up to 25% greater bandwidth than VRAM and accelerated commonly used graphical operations such as text drawing and block fills.
Multibank DRAM
Multibank DRAM (MDRAM) is a type of specialized DRAM developed by MoSys. It is constructed from a number of small memory banks, which are operated in an interleaved fashion, providing bandwidths suitable for graphics cards at a lower cost than memories such as SRAM. MDRAM also allows operations to two banks in a single clock cycle, permitting multiple concurrent accesses to occur if the accesses were independent. MDRAM was primarily used in graphics cards, such as those featuring the Tseng Labs ET6x00 chipsets. Boards based upon this chipset often had unusual capacities because MDRAM could be implemented in such capacities more easily. A graphics card with 2.25 MB of MDRAM, for example, had enough memory to provide 24-bit color at a resolution of 1024×768—a very popular setting at the time.
Synchronous graphics RAM
Synchronous graphics RAM (SGRAM) is a specialized form of SDRAM for graphics adaptors. It adds functions such as bit masking (writing to a specified bit plane without affecting the others) and block write (filling a block of memory with a single colour). Unlike VRAM and WRAM, SGRAM is single-ported. However, it can open two memory pages at once, which simulates the dual-port nature of other video RAM technologies.
Graphics double data rate SDRAM
Graphics double data rate SDRAM is a type of specialized DDR SDRAM designed to be used as the main memory of graphics processing units (GPUs). GDDR SDRAM is distinct from commodity types of DDR SDRAM such as DDR3, although they share some core technologies. Their primary characteristics are higher clock frequencies for both the DRAM core and I/O interface, which provides greater memory bandwidth for GPUs. As of 2020, there are seven successive generations of GDDR: GDDR2, GDDR3, GDDR4, GDDR5, GDDR5X, GDDR6 and GDDR6X.
Pseudostatic RAM
Pseudostatic RAM (PSRAM or PSDRAM) is dynamic RAM with built-in refresh and address-control circuitry to make it behave similarly to static RAM (SRAM). It combines the high density of DRAM with the ease of use of true SRAM. PSRAM (made by Numonyx) is used in the Apple iPhone and other embedded systems such as XFlar Platform.
Some DRAM components have a "self-refresh mode". While this involves much of the same logic that is needed for pseudo-static operation, this mode is often equivalent to a standby mode. It is provided primarily to allow a system to suspend operation of its DRAM controller to save power without losing data stored in DRAM, rather than to allow operation without a separate DRAM controller as is the case with PSRAM.
An embedded variant of PSRAM was sold by MoSys under the name 1T-SRAM. It is a set of small DRAM banks with an SRAM cache in front to make it behave much like SRAM. It is used in Nintendo GameCube and Wii video game consoles.
See also
DRAM price fixing
Flash memory
List of device bit rates
Memory bank
Memory geometry
References
Further reading
External links
Logarithmic graph 1980–2003 showing size and cycle time.
Benefits of Chipkill-Correct ECC for PC Server Main Memory — A 1997 discussion of SDRAM reliability—some interesting information on "soft errors" from cosmic rays, especially with respect to error-correcting code schemes
Tezzaron Semiconductor Soft Error White Paper 1994 literature review of memory error rate measurements.
Ars Technica: RAM Guide
A detailed description of current DRAM technology.
Multi-port Cache DRAM — MP-RAM
Computer memory
Types of RAM
American inventions
20th-century inventions |
74784 | https://en.wikipedia.org/wiki/Super%20Audio%20CD | Super Audio CD | Super Audio CD (SACD) is a read-only optical disc format for audio storage introduced in 1999. It was developed jointly by Sony and Philips Electronics and intended to be the successor to the Compact Disc (CD) format.
The SACD format allows multiple audio channels (i.e. surround sound or multichannel sound). It also provides a higher bit rate and longer playing time than a conventional CD. An SACD is designed to be played on an SACD player. A hybrid SACD contains a Compact Disc Digital Audio (CDDA) layer and can also be played on a standard CD player.
History
The Super Audio CD format was introduced in 1999. Royal Philips and Crest Digital partnered in May 2002 to develop and install the first SACD hybrid disc production line in the United States, with a production capacity of up to three million discs per year. SACD did not achieve the level of growth that compact discs enjoyed in the 1980s, and was not accepted by the mainstream market.
By 2007, SACD had failed to make a significant impact in the marketplace; consumers were increasingly downloading low-resolution music files over the internet rather than buying music on physical disc formats. A small and niche market for SACD has remained, serving the audiophile community.
Content
By October 2009, record companies had published more than 6,000 SACD releases, slightly more than half of which were classical music. Jazz and popular music albums, mainly remastered previous releases, were the next two genres most represented.
Many popular artists have released some or all of their back catalog on SACD. Pink Floyd's album The Dark Side of the Moon (1973) sold over 800,000 copies by June 2004 in its SACD Surround Sound edition. The Who's rock opera Tommy (1969), and Roxy Music's Avalon (1982), were released on SACD to take advantage of the format's multi-channel capability. All three albums were remixed in 5.1 surround, and released as hybrid SACDs with a stereo mix on the standard CD layer.
Some popular artists have released new recordings on SACD. Sales figures for Sting's Sacred Love (2003) album reached number one on SACD sales charts in four European countries in June 2004.
Between 2007 and 2008, the rock band Genesis re-released all of their studio albums across three SACD box sets. Each album in these sets contains both new stereo and 5.1 mixes. The original stereo mixes were not included. The US and Canada versions of these sets use CDs rather than SACDs.
By August 2009, 443 labels had released one or more SACDs. Instead of depending on major label support, some orchestras and artists have released SACDs on their own. For instance, the Chicago Symphony Orchestra started the Chicago Resound label to support high-resolution SACD hybrid discs, and the London Symphony Orchestra established their own 'LSO Live' label.
Many SACD discs that were released between 2000 and 2005 are now out of print and available only on the used market. By 2009, the major record companies were no longer regularly releasing discs in the format, with new releases confined to the smaller labels.
Technology
SACD discs have the same physical dimensions as standard compact discs. The areal density of the disc is the same as that of a DVD. There are three types of disc:
Hybrid: Hybrid SACDs have a 4.7 GB SACD layer (the HD layer), as well as a CD (Red Book) audio layer readable by most conventional Compact Disc players.
Single-layer: A disc with one 4.7 GB SACD layer.
Dual-layer: A disc with two SACD layers, totaling 8.5 GB, and no CD layer. Dual-layer SACDs can store nearly twice as much data as a single-layer SACD.
Unlike hybrid discs, both single- and dual-layer SACDs are incompatible with conventional CD players and cannot be played on them.
A stereo SACD recording has an uncompressed rate of 5.6 Mbit/s, four times the rate for Red Book CD stereo audio.
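The arithmetic behind these rates follows directly from the sample rates and word sizes involved, as the short Python calculation below shows.

DSD_RATE_HZ = 2_822_400      # DSD sampling rate, 64 x 44,100 Hz
CD_RATE_HZ = 44_100          # Red Book CD sampling rate
CHANNELS = 2

dsd_bps = DSD_RATE_HZ * 1 * CHANNELS          # 1 bit per sample
cd_bps = CD_RATE_HZ * 16 * CHANNELS           # 16 bits per sample

print(f"Stereo DSD: {dsd_bps / 1e6:.3f} Mbit/s")   # about 5.645 Mbit/s
print(f"Stereo CD:  {cd_bps / 1e6:.3f} Mbit/s")    # about 1.411 Mbit/s
print(f"Ratio: {dsd_bps / cd_bps:.1f}x")           # 4.0x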
Commercial releases commonly include both surround sound (five full-range plus LFE multi-channel) and stereo (dual-channel) mixes on the SACD layer. Some reissues retain the mixes of earlier multi-channel formats (examples include the 1973 quadraphonic mix of Mike Oldfield's Tubular Bells and the 1957 three-channel stereo recording by the Chicago Symphony Orchestra of Mussorgsky's Pictures at an Exhibition, reissued on SACD in 2001 and 2004 respectively).
Disc reading
Objective lenses in conventional CD players have a longer working distance, or focal length, than lenses designed for SACD players. This means that when a hybrid SACD is placed into a conventional CD player, the infrared laser beam passes through the SACD layer and is reflected by the CD layer at the standard 1.2 mm distance, and the SACD layer is out of focus. When the same disc is placed into an SACD player, the red laser is reflected by the SACD layer (at 0.6 mm distance) before it can reach the CD layer. Conversely, if a conventional CD is placed into an SACD player, the laser will read the disc as a CD since there is no SACD layer.
Direct Stream Digital
SACD audio is stored in Direct Stream Digital (DSD) format using pulse-density modulation (PDM), where audio amplitude is determined by the varying proportion of 1s and 0s. This contrasts with compact disc and conventional computer audio systems using pulse-code modulation (PCM), where audio amplitude is determined by numbers encoded in the bit stream. Both modulations require neighboring samples in order to reconstruct the original waveform; the more samples that are taken into account, the lower the frequencies that can be encoded.
DSD is 1-bit, has a sampling rate of 2.8224 MHz, and makes use of noise shaping quantization techniques in order to push 1-bit quantization noise up to inaudible ultrasonic frequencies. This gives the format a greater dynamic range and wider frequency response than the CD. The SACD format is capable of delivering a dynamic range of 120 dB from 20 Hz to 20 kHz and an extended frequency response up to 100 kHz, although most available players list an upper limit of 70–90 kHz, and practical limits reduce this to 50 kHz. Because of the nature of sigma-delta converters, DSD and PCM cannot be directly compared. DSD's frequency response can be as high as 100 kHz, but frequencies that high compete with high levels of ultrasonic quantization noise. With appropriate low-pass filtering, a frequency response of 20 kHz can be achieved along with a dynamic range of nearly 120 dB, which is about the same dynamic range as PCM audio with a resolution of 20 bits.
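The idea of pulse-density modulation can be illustrated with a first-order delta-sigma modulator, as in the Python sketch below; actual DSD encoders use much higher-order noise shaping, so this is only a demonstration of the principle.

import math

def first_order_dsm(samples):
    """1-bit pulse-density modulation of a sequence of values in [-1.0, 1.0]."""
    integrator = 0.0
    bits = []
    for x in samples:
        feedback = 1.0 if bits and bits[-1] else -1.0   # previous output as +/-1
        integrator += x - feedback                      # accumulate the error
        bits.append(1 if integrator >= 0 else 0)        # 1-bit quantizer
    return bits

# Encode one cycle of a heavily oversampled sine wave.
n = 64
wave = [0.5 * math.sin(2 * math.pi * i / n) for i in range(n)]
stream = first_order_dsm(wave)
print("".join(str(b) for b in stream))
print("density of 1s:", sum(stream) / len(stream))   # tracks the average input level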
Direct Stream Transfer
To reduce the space and bandwidth requirements of DSD, a lossless data compression method called Direct Stream Transfer (DST) is used. DST compression is compulsory for multi-channel regions and optional for stereo regions. It typically compresses by a factor of between two and three, allowing a disc to contain 80 minutes of both 2-channel and 5.1-channel sound.
Direct Stream Transfer compression was standardized as an amendment to MPEG-4 Audio standard (ISO/IEC 14496-3:2001/Amd 6:2005 – Lossless coding of oversampled audio) in 2005. It contains the DSD and DST definitions as described in the Super Audio CD Specification. The MPEG-4 DST provides lossless coding of oversampled audio signals. Target applications of DST are archiving and storage of 1-bit oversampled audio signals and SA-CD.
A reference implementation of MPEG-4 DST was published as ISO/IEC 14496-5:2001/Amd.10:2007 in 2007.
Copy protection
SACD has several copy protection features at the physical level, which made the digital content of SACD discs difficult to copy until the jailbreak of the PlayStation 3. The content may be copyable without SACD quality by resorting to the analog hole, or ripping the conventional 700 MB layer on hybrid discs. Copy protection schemes include physical pit modulation and 80-bit encryption of the audio data, with a key encoded on a special area of the disc that is only readable by a licensed SACD device. The HD layer of an SACD disc cannot be played back on computer CD/DVD drives, and SACDs can only be manufactured at the disc replication facilities in Shizuoka and Salzburg.
Nonetheless, a PlayStation 3 with an SACD drive and appropriate firmware can use specialized software to extract a DSD copy of the HD stream.
Sound quality
Sound quality parameters achievable by the Red Book CD-DA and SACD formats compared with the limits of human hearing are as follows:
CD Dynamic range: 90 dB; 120 dB (with shaped dither); frequency range: 20 Hz – 20 kHz
SACD Dynamic range: 105 dB; frequency range: 20 Hz – 50 kHz
Human hearing Dynamic range: 120 dB; frequency range: 20 Hz – 20 kHz (young person); 20 Hz – 8 to 15 kHz (middle-aged adult)
In September 2007, the Audio Engineering Society published the results of a year-long trial, in which a range of subjects including professional recording engineers were asked to discern the difference between high-resolution audio sources (including SACD and DVD-Audio) and a compact disc audio (44.1 kHz/16 bit) conversion of the same source material under double-blind test conditions. Out of 554 trials, there were 276 correct answers, a 49.8% success rate corresponding almost exactly to the 50% that would have been expected by chance guessing alone. When the level of the signal was elevated by 14 dB or more, the test subjects were able to detect the higher noise floor of the CD-quality loop easily.
Following criticism that the original published results of the study were not sufficiently detailed, the AES published a list of the audio equipment and recordings used during the tests. Since the Meyer-Moran study in 2007, approximately 80 studies have been published on high-resolution audio, about half of which included blind tests. Joshua Reiss performed a meta-analysis on 20 of the published tests that included sufficient experimental detail and data. In a paper published in the July 2016 issue of the AES Journal, Reiss says that, although the individual tests had mixed results, and that the effect was "small and difficult to detect," the overall result was that trained listeners could distinguish between hi-resolution recordings and their CD equivalents under blind conditions: "Overall, there was a small but statistically significant ability to discriminate between standard quality audio (44.1 or 48 kHz, 16 bit) and high-resolution audio (beyond standard quality). When subjects were trained, the ability to discriminate was far more significant." Hiroshi Nittono pointed out that the results in Reiss's paper showed that the ability to distinguish high-resolution audio from CD-quality audio was "only slightly better than chance."
Contradictory results have been found when comparing DSD and high-resolution PCM formats. Double-blind listening tests in 2004 between DSD and 24-bit, 176.4 kHz PCM recordings reported that among test subjects no significant differences could be heard. DSD advocates and equipment manufacturers continue to assert an improvement in sound quality above PCM 24-bit 176.4 kHz. A 2003 study found that despite both formats' extended frequency responses, people could not distinguish audio with information above 21 kHz from audio without such high-frequency content. In a 2014 study, however, Marui et al. found that under double-blind conditions, listeners were able to distinguish between PCM (192 kHz/24 bits) and DSD (2.8 MHz) or DSD (5.6 MHz) recording formats, preferring the qualitative features of DSD, but could not discriminate between the two DSD formats.
Playback hardware
The Sony SCD-1 player was introduced concurrently with the SACD format in 1999, at a price of approximately US$5,000. It played two-channel SACDs and Red Book CDs only. Electronics manufacturers, including Onkyo, Denon, Marantz, Pioneer, and Yamaha, offer or offered SACD players. Sony has made in-car SACD players.
SACD players are not permitted to offer an output carrying an unencrypted stream of DSD.
The first two generations of Sony's PlayStation 3 game console were capable of reading SACD discs. Starting with the third generation (introduced October 2007), SACD playback was removed. All PlayStation 3 models, however, will play DSD Disc format. The PlayStation 3 was capable of converting multi-channel DSD to lossy 1.5 Mbit/s DTS for playback over S/PDIF using the 2.00 system software. The subsequent revision removed the feature.
Several brands have introduced (mostly high-end) Blu-ray Disc and Ultra HD Blu-ray players that can play SACD discs.
Unofficial playback of SACD disc images on a PC is possible through freeware audio player foobar2000 for Windows using an open source plug-in extension called SACDDecoder. Mac OS X music software Audirvana also supports playback of SACD disc images.
See also
High-resolution audio
Audio format
Audio storage
DualDisc
DVD-Audio
High Definition Compatible Digital
Extended Resolution Compact Disc
DSD-CD
Notes
References
Bibliography
Janssen, E.; Reefman, D. "Super-audio CD: an introduction". Signal Processing Magazine, IEEE Volume 20, Issue 4, July 2003, pp. 83–90.
External links
Super Audio Compact Disc: A Technical Proposal, Sony (archived PDF)
SA-CD.net Reviews of SACD releases and a discussion forum.
Audiovisual introductions in 1999
Compact disc
DVD
Audio storage |
75028 | https://en.wikipedia.org/wiki/Voice%20over%20IP | Voice over IP | Voice over Internet Protocol (VoIP), also called IP telephony, is a method and group of technologies for the delivery of voice communications and multimedia sessions over Internet Protocol (IP) networks, such as the Internet. The terms Internet telephony, broadband telephony, and broadband phone service specifically refer to the provisioning of communications services (voice, fax, SMS, voice-messaging) over the Internet, rather than via the public switched telephone network (PSTN), also known as plain old telephone service (POTS).
Overview
The steps and principles involved in originating VoIP telephone calls are similar to traditional digital telephony and involve signaling, channel setup, digitization of the analog voice signals, and encoding. Instead of being transmitted over a circuit-switched network, the digital information is packetized and transmission occurs as IP packets over a packet-switched network. They transport media streams using special media delivery protocols that encode audio and video with audio codecs and video codecs. Various codecs exist that optimize the media stream based on application requirements and network bandwidth; some implementations rely on narrowband and compressed speech, while others support high-fidelity stereo codecs.
The most widely used speech coding standards in VoIP are based on the linear predictive coding (LPC) and modified discrete cosine transform (MDCT) compression methods. Popular codecs include the MDCT-based AAC-LD (used in FaceTime), the LPC/MDCT-based Opus (used in WhatsApp), the LPC-based SILK (used in Skype), the μ-law and A-law versions of G.711, G.722, an open source voice codec known as iLBC, and G.729, a codec that uses only 8 kbit/s in each direction.
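Codec bit rate is only part of the bandwidth a call consumes, because each packet also carries RTP, UDP and IP headers. The Python sketch below estimates per-direction bandwidth for two of the codecs above, assuming a common 20 ms packetization interval and IPv4 headers; both are assumptions made for illustration.

IP_UDP_RTP_OVERHEAD_BYTES = 20 + 8 + 12    # IPv4 + UDP + RTP headers, excluding link layer
PACKET_INTERVAL_S = 0.020                  # one packet every 20 ms (assumed)

codecs = {
    "G.711": 64_000,    # codec payload bit rate, bit/s
    "G.729": 8_000,
}

for name, payload_bps in codecs.items():
    payload_bytes = payload_bps * PACKET_INTERVAL_S / 8
    packet_bytes = payload_bytes + IP_UDP_RTP_OVERHEAD_BYTES
    bandwidth_bps = packet_bytes * 8 / PACKET_INTERVAL_S
    print(f"{name}: {bandwidth_bps / 1000:.1f} kbit/s per direction on the wire")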
Early providers of voice-over-IP services used business models and offered technical solutions that mirrored the architecture of the legacy telephone network. Second-generation providers, such as Skype, built closed networks for private user bases, offering the benefit of free calls and convenience while potentially charging for access to other communication networks, such as the PSTN. This limited the freedom of users to mix-and-match third-party hardware and software. Third-generation providers, such as Google Talk, adopted the concept of federated VoIP. These solutions typically allow dynamic interconnection between users in any two domains of the Internet, when a user wishes to place a call.
In addition to VoIP phones, VoIP is also available on many personal computers and other Internet access devices. Calls and SMS text messages may be sent via Wi-Fi or the carrier's mobile data network. VoIP provides a framework for consolidation of all modern communications technologies using a single unified communications system.
Pronunciation
VoIP is variously pronounced as an initialism, V-O-I-P, or as an acronym. The full terms voice over Internet Protocol and voice over IP are also sometimes used.
Protocols
Voice over IP has been implemented with proprietary protocols and protocols based on open standards in applications such as VoIP phones, mobile applications, and web-based communications.
A variety of functions are needed to implement VoIP communication. Some protocols perform multiple functions, while others perform only a few and must be used in concert. These functions include:
Network and transport – Creating reliable transmission over unreliable protocols, which may involve acknowledging receipt of data and retransmitting data that was not received.
Session management – Creating and managing a session (sometimes glossed as simply a "call"), which is a connection between two or more peers that provides a context for further communication.
Signaling – Performing registration (advertising one's presence and contact information) and discovery (locating someone and obtaining their contact information), dialing (including reporting call progress), negotiating capabilities, and call control (such as hold, mute, transfer/forwarding, dialing DTMF keys during a call [e.g. to interact with an automated attendant or IVR], etc.).
Media description – Determining what type of media to send (audio, video, etc.), how to encode/decode it, and how to send/receive it (IP addresses, ports, etc.).
Media – Transferring the actual media in the call, such as audio, video, text messages, files, etc.
Quality of service – Providing out-of-band content or feedback about the media such as synchronization, statistics, etc.
Security – Implementing access control, verifying the identity of other participants (computers or people), and encrypting data to protect the privacy and integrity of the media contents and/or the control messages.
VoIP protocols include:
Session Initiation Protocol (SIP), connection management protocol developed by the IETF
H.323, one of the first VoIP call signaling and control protocols that found widespread implementation. Since the development of newer, less complex protocols such as MGCP and SIP, H.323 deployments are increasingly limited to carrying existing long-haul network traffic.
Media Gateway Control Protocol (MGCP), connection management for media gateways
H.248, control protocol for media gateways across a converged internetwork consisting of the traditional PSTN and modern packet networks
Real-time Transport Protocol (RTP), transport protocol for real-time audio and video data
Real-time Transport Control Protocol (RTCP), sister protocol for RTP providing stream statistics and status information
Secure Real-time Transport Protocol (SRTP), encrypted version of RTP
Session Description Protocol (SDP), a syntax for session initiation and announcement for multi-media communications and WebSocket transports.
Inter-Asterisk eXchange (IAX), protocol used between Asterisk PBX instances
Extensible Messaging and Presence Protocol (XMPP), instant messaging, presence information, and contact list maintenance
Jingle, for peer-to-peer session control in XMPP
Skype protocol, proprietary Internet telephony protocol suite based on peer-to-peer architecture
Adoption
Consumer market
Mass-market VoIP services use existing broadband Internet access, by which subscribers place and receive telephone calls in much the same manner as they would via the PSTN. Full-service VoIP phone companies provide inbound and outbound service with direct inbound dialing. Many offer unlimited domestic calling and sometimes international calls for a flat monthly subscription fee. Phone calls between subscribers of the same provider are usually free when flat-fee service is not available.
A VoIP phone is necessary to connect to a VoIP service provider. This can be implemented in several ways:
Dedicated VoIP phones connect directly to the IP network using technologies such as wired Ethernet or Wi-Fi. These are typically designed in the style of traditional digital business telephones.
An analog telephone adapter connects to the network and implements the electronics and firmware to operate a conventional analog telephone attached through a modular phone jack. Some residential Internet gateways and cable modems have this function built in.
Softphone application software installed on a networked computer that is equipped with a microphone and speaker, or headset. The application typically presents a dial pad and display field to the user to operate the application by mouse clicks or keyboard input.
PSTN and mobile network providers
It is increasingly common for telecommunications providers to use VoIP telephony over dedicated and public IP networks as a backhaul to connect switching centers and to interconnect with other telephony network providers; this is often referred to as IP backhaul.
Smartphones may have SIP clients built into the firmware or available as an application download.
Corporate use
Because of the bandwidth efficiency and low costs that VoIP technology can provide, businesses are migrating from traditional copper-wire telephone systems to VoIP systems to reduce their monthly phone costs. In 2008, 80% of all new Private branch exchange (PBX) lines installed internationally were VoIP. For example, in the United States, the Social Security Administration is converting its field offices of 63,000 workers from traditional phone installations to a VoIP infrastructure carried over its existing data network.
VoIP allows both voice and data communications to be run over a single network, which can significantly reduce infrastructure costs. The prices of extensions on VoIP are lower than for PBX and key systems. VoIP switches may run on commodity hardware, such as personal computers. Rather than closed architectures, these devices rely on standard interfaces. VoIP devices have simple, intuitive user interfaces, so users can often make simple system configuration changes. Dual-mode phones enable users to continue their conversations as they move between an outside cellular service and an internal Wi-Fi network, so that it is no longer necessary to carry both a desktop phone and a cell phone. Maintenance becomes simpler as there are fewer devices to oversee.
VoIP solutions aimed at businesses have evolved into unified communications services that treat all communications—phone calls, faxes, voice mail, e-mail, web conferences, and more—as discrete units that can all be delivered via any means and to any handset, including cellphones. Two kinds of service providers are operating in this space: one set is focused on VoIP for medium to large enterprises, while another is targeting the small-to-medium business (SMB) market.
Skype, which originally marketed itself as a service among friends, has begun to cater to businesses, providing free-of-charge connections between any users on the Skype network and connecting to and from ordinary PSTN telephones for a charge.
Delivery mechanisms
In general, the provision of VoIP telephony systems to organizational or individual users can be divided into two primary delivery methods: private or on-premises solutions, or externally hosted solutions delivered by third-party providers. On-premises delivery methods are more akin to the classic PBX deployment model for connecting an office to local PSTN networks.
While many use cases still remain for private or on-premises VoIP systems, the wider market has been gradually shifting toward Cloud or Hosted VoIP solutions. Hosted systems are also generally better suited to smaller or personal use VoIP deployments, where a private system may not be viable for these scenarios.
Hosted VoIP systems
Hosted or Cloud VoIP solutions involve a service provider or telecommunications carrier hosting the telephone system as a software solution within their own infrastructure.
Typically this will be one or more datacentres, with geographic relevance to the end-user(s) of the system. This infrastructure is external to the user of the system and is deployed and maintained by the service provider.
Endpoints, such as VoIP telephones or softphone applications (apps running on a computer or mobile device), will connect to the VoIP service remotely. These connections typically take place over public internet links, such as local fixed WAN breakout or mobile carrier service.
Private VoIP systems
In the case of a private VoIP system, the primary telephony system itself is located within the private infrastructure of the end-user organization. Usually, the system will be deployed on-premises at a site within the direct control of the organization. This can provide numerous benefits in terms of QoS control (see below), cost scalability, and ensuring privacy and security of communications traffic. However, the responsibility for ensuring that the VoIP system remains performant and resilient is predominantly vested in the end-user organization. This is not the case with a Hosted VoIP solution.
Private VoIP systems can be physical hardware PBX appliances, converged with other infrastructure, or they can be deployed as software applications. Generally, the latter two options will be in the form of a separate virtualized appliance. However, in some scenarios, these systems are deployed on bare metal infrastructure or IoT devices. With some solutions, such as 3CX, companies can attempt to blend the benefits of hosted and private on-premises systems by implementing their own private solution but within an external environment. Examples can include datacentre collocation services, public cloud, or private cloud locations.
For on-premises systems, local endpoints within the same location typically connect directly over the LAN. For remote and external endpoints, available connectivity options mirror those of Hosted or Cloud VoIP solutions.
However, VoIP traffic to and from the on-premises systems can often also be sent over secure private links. Examples include personal VPN, site-to-site VPN, private networks such as MPLS and SD-WAN, or via private SBCs (Session Border Controllers). While exceptions and private peering options do exist, it is generally uncommon for those private connectivity methods to be provided by Hosted or Cloud VoIP providers.
Quality of service
Communication on the IP network is perceived as less reliable than the circuit-switched public telephone network because it does not provide a network-based mechanism to ensure that data packets are not lost and are delivered in sequential order. It is a best-effort network without fundamental quality of service (QoS) guarantees. Voice, and all other data, travels in packets over IP networks with fixed maximum capacity. This system may be more prone to data loss in the presence of congestion than traditional circuit-switched systems; a circuit-switched system of insufficient capacity will refuse new connections while carrying the remainder without impairment, while the quality of real-time data such as telephone conversations on packet-switched networks degrades dramatically. Therefore, VoIP implementations may face problems with latency, packet loss, and jitter.
By default, network routers handle traffic on a first-come, first-served basis. Fixed delays cannot be controlled as they are caused by the physical distance the packets travel. They are especially problematic when satellite circuits are involved because of the long distance to a geostationary satellite and back; delays of 400–600 ms are typical. Latency can be minimized by marking voice packets as being delay-sensitive with QoS methods such as DiffServ.
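As an example of such marking, an application can request the DiffServ Expedited Forwarding code point on its media socket through a standard socket option, as in the Python sketch below; whether routers honour the marking depends on network policy, and the option shown applies to IPv4 on platforms that expose IP_TOS.

import socket

DSCP_EF = 46                 # Expedited Forwarding, commonly used for voice
TOS_EF = DSCP_EF << 2        # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Any datagram sent on this socket now carries the EF marking.
# The destination address and port here are placeholders for illustration.
sock.sendto(b"voice payload", ("192.0.2.10", 5004))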
Network routers on high-volume traffic links may introduce latency that exceeds permissible thresholds for VoIP. Excessive load on a link can cause congestion and associated queueing delays and packet loss. This signals a transport protocol like TCP to reduce its transmission rate to alleviate the congestion. But VoIP usually uses UDP, not TCP, because recovering from congestion through retransmission usually entails too much latency. So QoS mechanisms can avoid the undesirable loss of VoIP packets by immediately transmitting them ahead of any queued bulk traffic on the same link, even when the link is congested by bulk traffic.
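The effect of such a mechanism can be pictured with the simplified strict-priority scheduler below (a Python model for illustration, not a router implementation): any queued voice packet is always dequeued before queued bulk traffic.

from collections import deque

voice_queue = deque()
bulk_queue = deque()

def enqueue(packet, is_voice):
    (voice_queue if is_voice else bulk_queue).append(packet)

def next_packet_to_send():
    # Voice is served whenever it is waiting; bulk traffic only uses leftover capacity.
    if voice_queue:
        return voice_queue.popleft()
    if bulk_queue:
        return bulk_queue.popleft()
    return None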
VoIP endpoints usually have to wait for the completion of transmission of previous packets before new data may be sent. Although it is possible to preempt (abort) a less important packet in mid-transmission, this is not commonly done, especially on high-speed links where transmission times are short even for maximum-sized packets. An alternative to preemption on slower links, such as dialup and digital subscriber line (DSL), is to reduce the maximum transmission time by reducing the maximum transmission unit. But since every packet must contain protocol headers, this increases relative header overhead on every link traversed.
The receiver must resequence IP packets that arrive out of order and recover gracefully when packets arrive too late or not at all. Packet delay variation results from changes in queuing delay along a given network path due to competition from other users for the same transmission links. VoIP receivers accommodate this variation by storing incoming packets briefly in a playout buffer, deliberately increasing latency to improve the chance that each packet will be on hand when it is time for the voice engine to play it. The added delay is thus a compromise between excessive latency and excessive dropout, i.e. momentary audio interruptions.
Although jitter is a random variable, it is the sum of several other random variables that are at least somewhat independent: the individual queuing delays of the routers along the Internet path in question. Motivated by the central limit theorem, jitter can be modeled as a Gaussian random variable. This suggests continually estimating the mean delay and its standard deviation and setting the playout delay so that only packets delayed more than several standard deviations above the mean will arrive too late to be useful. In practice, the variance in latency of many Internet paths is dominated by a small number (often one) of relatively slow and congested bottleneck links. Most Internet backbone links are now so fast (e.g. 10 Gbit/s) that their delays are dominated by the transmission medium (e.g. optical fiber) and the routers driving them do not have enough buffering for queuing delays to be significant.
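A rough sketch of this playout rule, assuming exponentially smoothed estimates and an illustrative headroom of a few standard deviations (the smoothing factor and multiplier are arbitrary choices, not values from any standard):

ALPHA = 0.05      # weight given to each new delay sample (illustrative)
K = 4             # standard deviations of headroom (illustrative)

mean_delay = 0.0
var_delay = 0.0

def update_playout_delay(sample_delay_ms):
    """Feed one measured packet delay in ms; return the playout delay to use."""
    global mean_delay, var_delay
    deviation = sample_delay_ms - mean_delay
    mean_delay += ALPHA * deviation
    var_delay = (1 - ALPHA) * var_delay + ALPHA * deviation * deviation
    return mean_delay + K * var_delay ** 0.5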
A number of protocols have been defined to support the reporting of quality of service (QoS) and quality of experience (QoE) for VoIP calls. These include RTP Control Protocol (RTCP) extended reports, SIP RTCP summary reports, H.460.9 Annex B (for H.323), H.248.30 and MGCP extensions.
The RTCP extended report VoIP metrics block is generated by an IP phone or gateway during a live call and contains information on packet loss rate, packet discard rate (because of jitter), packet loss/discard burst metrics (burst length/density, gap length/density), network delay, end system delay, signal/noise/echo level, mean opinion scores (MOS) and R factors, and configuration information related to the jitter buffer. VoIP metrics reports are exchanged between IP endpoints on an occasional basis during a call, and an end-of-call message is sent via SIP RTCP summary report or one of the other signaling protocol extensions. VoIP metrics reports are intended to support real-time feedback related to QoS problems, the exchange of information between the endpoints for improved call quality calculation, and a variety of other applications.
DSL and ATM
DSL modems typically provide Ethernet connections to local equipment, but inside they may actually be Asynchronous Transfer Mode (ATM) modems. They use ATM Adaptation Layer 5 (AAL5) to segment each Ethernet packet into a series of 53-byte ATM cells for transmission, reassembling them back into Ethernet frames at the receiving end.
Using a separate virtual circuit identifier (VCI) for audio over IP has the potential to reduce latency on shared connections. ATM's potential for latency reduction is greatest on slow links because worst-case latency decreases with increasing link speed. A full-size (1500 byte) Ethernet frame takes 94 ms to transmit at 128 kbit/s but only 8 ms at 1.5 Mbit/s. If this is the bottleneck link, this latency is probably small enough to ensure good VoIP performance without MTU reductions or multiple ATM VCs. The latest generations of DSL, VDSL and VDSL2, carry Ethernet without intermediate ATM/AAL5 layers, and they generally support IEEE 802.1p priority tagging so that VoIP can be queued ahead of less time-critical traffic.
ATM has substantial header overhead: 5/53 = 9.4%, roughly twice the total header overhead of a 1500 byte Ethernet frame. This "ATM tax" is incurred by every DSL user whether or not they take advantage of multiple virtual circuits – and few can.
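The figures above can be verified with a short back-of-the-envelope calculation (a Python sketch, using only the numbers quoted in the text):

FRAME_BYTES = 1500

def serialization_delay_ms(link_bits_per_second, frame_bytes=FRAME_BYTES):
    return frame_bytes * 8 / link_bits_per_second * 1000

print(round(serialization_delay_ms(128_000)))     # about 94 ms at 128 kbit/s
print(round(serialization_delay_ms(1_500_000)))   # about 8 ms at 1.5 Mbit/s

# Each 53-byte ATM cell spends 5 bytes on its header (the "ATM tax"):
print(round(5 / 53 * 100, 1))                     # 9.4 percent header overhead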
Layer 2
Several protocols are used in the data link layer and physical layer for quality-of-service mechanisms that help VoIP applications work well even in the presence of network congestion. Some examples include:
IEEE 802.11e is an approved amendment to the IEEE 802.11 standard that defines a set of quality-of-service enhancements for wireless LAN applications through modifications to the Media Access Control (MAC) layer. The standard is considered of critical importance for delay-sensitive applications, such as voice over wireless IP.
IEEE 802.1p defines 8 different classes of service (including one dedicated to voice) for traffic on layer-2 wired Ethernet.
The ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 gigabit per second) Local area network (LAN) using existing home wiring (power lines, phone lines and coaxial cables). G.hn provides QoS by means of Contention-Free Transmission Opportunities (CFTXOPs) which are allocated to flows (such as a VoIP call) that require QoS and which have negotiated a contract with the network controllers.
Performance metrics
The quality of voice transmission is characterized by several metrics that may be monitored by network elements and by the user agent hardware or software. Such metrics include network packet loss, packet jitter, packet latency (delay), post-dial delay, and echo. The metrics are determined by VoIP performance testing and monitoring.
PSTN integration
A VoIP media gateway controller (aka Class 5 Softswitch) works in cooperation with a media gateway (aka IP Business Gateway) and connects the digital media stream, so as to complete the path for voice and data. Gateways include interfaces for connecting to standard PSTN networks. Ethernet interfaces are also included in the modern systems which are specially designed to link calls that are passed via VoIP.
E.164 is a global numbering standard for both the PSTN and public land mobile network (PLMN). Most VoIP implementations support E.164 to allow calls to be routed to and from VoIP subscribers and the PSTN/PLMN. VoIP implementations can also allow other identification techniques to be used. For example, Skype allows subscribers to choose Skype names (usernames) whereas SIP implementations can use Uniform Resource Identifier (URIs) similar to email addresses. Often VoIP implementations employ methods of translating non-E.164 identifiers to E.164 numbers and vice versa, such as the Skype-In service provided by Skype and the E.164 number to URI mapping (ENUM) service in IMS and SIP.
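The ENUM translation mentioned above maps an E.164 number into a DNS domain under e164.arpa, where NAPTR records can point at a SIP URI (RFC 6116). A minimal sketch of the number-to-domain step is shown below; the example number is purely illustrative.

def e164_to_enum_domain(e164_number):
    digits = [c for c in e164_number if c.isdigit()]   # strip "+", spaces, and dashes
    return ".".join(reversed(digits)) + ".e164.arpa"

print(e164_to_enum_domain("+1-555-123-4567"))
# -> 7.6.5.4.3.2.1.5.5.5.1.e164.arpa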
Echo can also be an issue for PSTN integration. Common causes of echo include impedance mismatches in analog circuitry and an acoustic path from the receive to transmit signal at the receiving end.
Number portability
Local number portability (LNP) and mobile number portability (MNP) also impact VoIP business. Number portability is a service that allows a subscriber to select a new telephone carrier without requiring a new number to be issued. Typically, it is the responsibility of the former carrier to "map" the old number to the undisclosed number assigned by the new carrier. This is achieved by maintaining a database of numbers. A dialed number is initially received by the original carrier and quickly rerouted to the new carrier. Multiple porting references must be maintained even if the subscriber returns to the original carrier. The FCC mandates carrier compliance with these consumer-protection stipulations. In November 2007, the Federal Communications Commission in the United States released an order extending number portability obligations to interconnected VoIP providers and carriers that support VoIP providers.
A voice call originating in the VoIP environment also faces least-cost routing (LCR) challenges to reach its destination if the number is routed to a mobile phone number on a traditional mobile carrier. LCR is based on checking the destination of each telephone call as it is made, and then sending the call via the network that will cost the customer the least. This rating is subject to some debate given the complexity of call routing created by number portability. With MNP in place, LCR providers can no longer rely on using the network root prefix to determine how to route a call. Instead, they must now determine the actual network of every number before routing the call.
Therefore, VoIP solutions also need to handle MNP when routing a voice call. In countries without a central database, like the UK, it may be necessary to query the mobile network about which home network a mobile phone number belongs to. As the popularity of VoIP increases in the enterprise markets because of LCR options, VoIP needs to provide a certain level of reliability when handling calls.
Emergency calls
A telephone connected to a land line has a direct relationship between a telephone number and a physical location, which is maintained by the telephone company and available to emergency responders via the national emergency response service centers in form of emergency subscriber lists. When an emergency call is received by a center the location is automatically determined from its databases and displayed on the operator console.
In IP telephony, no such direct link between location and communications end point exists. Even a provider having wired infrastructure, such as a DSL provider, may know only the approximate location of the device, based on the IP address allocated to the network router and the known service address. Some ISPs do not track the automatic assignment of IP addresses to customer equipment.
IP communication provides for device mobility. For example, a residential broadband connection may be used as a link to a virtual private network of a corporate entity, in which case the IP address being used for customer communications may belong to the enterprise, not the residential ISP. Such off-premises extensions may appear as part of an upstream IP PBX. On mobile devices, e.g., a 3G handset or USB wireless broadband adapter, the IP address has no relationship with any physical location known to the telephony service provider, since a mobile user could be anywhere in a region with network coverage, even roaming via another cellular company.
At the VoIP level, a phone or gateway may identify itself by its account credentials with a Session Initiation Protocol (SIP) registrar. In such cases, the Internet telephony service provider (ITSP) knows only that a particular user's equipment is active. Service providers often provide emergency response services by agreement with the user who registers a physical location and agrees that, if an emergency number is called from the IP device, emergency services are provided to that address only.
Such emergency services are provided by VoIP vendors in the United States by a system called Enhanced 911 (E911), based on the Wireless Communications and Public Safety Act. The VoIP E911 emergency-calling system associates a physical address with the calling party's telephone number. All VoIP providers that provide access to the public switched telephone network are required to implement E911, a service for which the subscriber may be charged. "VoIP providers may not allow customers to opt-out of 911 service." The VoIP E911 system is based on a static table lookup. Unlike in cellular phones, where the location of an E911 call can be traced using assisted GPS or other methods, the VoIP E911 information is accurate only if subscribers keep their emergency address information current.
Fax support
Sending faxes over VoIP networks is sometimes referred to as Fax over IP (FoIP). Transmission of fax documents was problematic in early VoIP implementations, as most voice digitization and compression codecs are optimized for the representation of the human voice and the proper timing of the modem signals cannot be guaranteed in a packet-based, connectionless network.
A standards-based solution for reliably delivering fax-over-IP is the T.38 protocol. The T.38 protocol is designed to compensate for the differences between traditional packet-less communications over analog lines and packet-based transmissions which are the basis for IP communications. The fax machine may be a standard device connected to an analog telephone adapter (ATA), or it may be a software application or dedicated network device operating via an Ethernet interface. Originally, T.38 was designed to use UDP or TCP transmission methods across an IP network.
Some newer high-end fax machines have built-in T.38 capabilities which are connected directly to a network switch or router. In T.38 each packet contains a portion of the data stream sent in the previous packet. Two successive packets have to be lost to actually lose data integrity.
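The redundancy idea can be illustrated with the simplified sketch below (Python; it shows the general principle only, not the actual T.38 packet format): each packet carries the new chunk plus a copy of the previous one, so a single lost packet can be recovered from its successor.

def build_packets(chunks):
    packets = []
    previous = b""
    for chunk in chunks:
        packets.append((chunk, previous))   # (new data, repeat of previous data)
        previous = chunk
    return packets

def reassemble(received):
    # 'received' holds one entry per sent packet, with None marking a lost packet.
    out = []
    for i, pkt in enumerate(received):
        if pkt is not None:
            out.append(pkt[0])
        elif i + 1 < len(received) and received[i + 1] is not None:
            out.append(received[i + 1][1])  # recover the lost chunk from the next packet
    return b"".join(out)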
Power requirements
Telephones for traditional residential analog service are usually connected directly to telephone company phone lines which provide direct current to power most basic analog handsets independently of locally available electrical power. Even with traditional analog service, however, phone service is susceptible to power failures when customers use telephone units that require mains power, such as cordless phones with a wireless handset and base station, or phones with other modern features such as built-in voicemail or phone books.
IP Phones and VoIP telephone adapters connect to routers or cable modems which typically depend on the availability of mains electricity or locally generated power. Some VoIP service providers use customer premises equipment (e.g., cable modems) with battery-backed power supplies to assure uninterrupted service for up to several hours in case of local power failures. Such battery-backed devices typically are designed for use with analog handsets. Some VoIP service providers implement services to route calls to other telephone services of the subscriber, such as a cellular phone, in the event that the customer's network device is inaccessible to terminate the call.
Security
Secure calls are possible using standardized protocols such as Secure Real-time Transport Protocol. Most of the facilities of creating a secure telephone connection over traditional phone lines, such as digitizing and digital transmission, are already in place with VoIP. It is necessary only to encrypt and authenticate the existing data stream. Automated software, such as a virtual PBX, may eliminate the need for personnel to greet and switch incoming calls.
The security concerns for VoIP telephone systems are similar to those of other Internet-connected devices. This means that hackers with knowledge of VoIP vulnerabilities can perform denial-of-service attacks, harvest customer data, record conversations, and compromise voicemail messages. Compromised VoIP user account or session credentials may enable an attacker to incur substantial charges from third-party services, such as long-distance or international calling.
The technical details of many VoIP protocols create challenges in routing VoIP traffic through firewalls and network address translators, used to interconnect to transit networks or the Internet. Private session border controllers are often employed to enable VoIP calls to and from protected networks. Other methods to traverse NAT devices involve assistive protocols such as STUN and Interactive Connectivity Establishment (ICE).
Standards for securing VoIP are available in the Secure Real-time Transport Protocol (SRTP) and the ZRTP protocol for analog telephony adapters, as well as for some softphones. IPsec is available to secure point-to-point VoIP at the transport level by using opportunistic encryption. Though many consumer VoIP solutions do not support encryption of the signaling path or the media, securing a VoIP phone is conceptually easier to implement than securing a traditional telephone circuit. A result of the lack of widespread support for encryption is that it is relatively easy to eavesdrop on VoIP calls when access to the data network is possible. Free open-source tools, such as Wireshark, facilitate capturing VoIP conversations.
Government and military organizations use various security measures to protect VoIP traffic, such as voice over secure IP (VoSIP), secure voice over IP (SVoIP), and secure voice over secure IP (SVoSIP). The distinction lies in whether encryption is applied in the telephone endpoint or in the network. Secure voice over secure IP may be implemented by encrypting the media with protocols such as SRTP and ZRTP. Secure voice over IP uses Type 1 encryption on a classified network, such as SIPRNet. Public Secure VoIP is also available with free GNU software and in many popular commercial VoIP programs via libraries, such as ZRTP.
Caller ID
Voice over IP protocols and equipment provide caller ID support that is compatible with the PSTN. Many VoIP service providers also allow callers to configure custom caller ID information.
Hearing aid compatibility
Wireline telephones which are manufactured in, imported to, or intended to be used in the US with Voice over IP service, on or after February 28, 2020, are required to meet the hearing aid compatibility requirements set forth by the Federal Communications Commission.
Operational cost
VoIP has drastically reduced the cost of communication by sharing network infrastructure between data and voice. A single broadband connection can carry multiple simultaneous telephone calls.
Regulatory and legal issues
As the popularity of VoIP grows, governments are becoming more interested in regulating VoIP in a manner similar to PSTN services.
Throughout the developing world, particularly in countries where regulation is weak or captured by the dominant operator, restrictions on the use of VoIP are often imposed, including in Panama, where VoIP is taxed, and in Guyana, where VoIP is prohibited. In Ethiopia, where the government is nationalizing telecommunication service, it is a criminal offense to offer services using VoIP. The country has installed firewalls to prevent international calls from being made using VoIP. These measures were taken after the popularity of VoIP reduced the income generated by the state-owned telecommunication company.
Canada
In Canada, the Canadian Radio-television and Telecommunications Commission regulates telephone service, including VoIP telephony service. VoIP services operating in Canada are required to provide 9-1-1 emergency service.
European Union
In the European Union, the treatment of VoIP service providers is a decision for each national telecommunications regulator, which must use competition law to define relevant national markets and then determine whether any service provider on those national markets has "significant market power" (and so should be subject to certain obligations). A general distinction is usually made between VoIP services that function over managed networks (via broadband connections) and VoIP services that function over unmanaged networks (essentially, the Internet).
The relevant EU Directive is not clearly drafted concerning obligations that can exist independently of market power (e.g., the obligation to offer access to emergency calls), and it is impossible to say definitively whether VoIP service providers of either type are bound by them. A review of the EU Directive is underway and should be complete by 2007.
Arab states of the GCC
Oman
In Oman, it is illegal to provide or use unauthorized VoIP services, to the extent that web sites of unlicensed VoIP providers have been blocked. Violations may be punished with fines of 50,000 Omani Rial (about 130,317 US dollars), a two-year prison sentence or both. In 2009, police raided 121 Internet cafes throughout the country and arrested 212 people for using or providing VoIP services.
Saudi Arabia
In September 2017, Saudi Arabia lifted the ban on VoIPs, in an attempt to reduce operational costs and spur digital entrepreneurship.
United Arab Emirates
In the United Arab Emirates (UAE), it is illegal to provide or use unauthorized VoIP services, to the extent that web sites of unlicensed VoIP providers have been blocked. However, some VoIP services, such as Skype, were allowed. In January 2018, internet service providers in the UAE blocked all VoIP apps, including Skype, permitting only two "government-approved" VoIP apps (C’ME and BOTIM) for a fixed rate of Dh52.50 a month for use on mobile devices, and Dh105 a month for use on a computer. In opposition, a petition on Change.org garnered over 5000 signatures, in response to which the website was blocked in the UAE.
On March 24, 2020, the United Arab Emirates loosened restriction on VoIP services earlier prohibited in the country, to ease communication during the COVID-19 pandemic. However, popular instant messaging applications like WhatsApp, Skype, and FaceTime remained blocked from being used for voice and video calls, constricting residents to use paid services from the country's state-owned telecom providers.
India
In India, it is legal to use VoIP, but it is illegal to have VoIP gateways inside India. This effectively means that people who have PCs can use them to make a VoIP call to any number, but if the remote side is a normal phone, the gateway that converts the VoIP call to a POTS call is not permitted by law to be inside India. Foreign-based VoIP server services are illegal to use in India.
In the interest of the Access Service Providers and International Long Distance Operators, the Internet telephony was permitted to the ISP with restrictions. Internet Telephony is considered to be a different service in its scope, nature, and kind from real-time voice as offered by other Access Service Providers and Long Distance Carriers. Hence the following type of Internet Telephony are permitted in India:
(a) PC to PC; within or outside India.
(b) PC / a device / adapter conforming to the standards of international agencies such as ITU or IETF, located in India, to PSTN/PLMN abroad.
(c) Any device / adapter conforming to the standards of international agencies such as ITU or IETF, connected to an ISP node with a static IP address, to a similar device / adapter; within or outside India.
(d) Except as described above, no other form of Internet telephony is permitted.
(e) In India, no separate numbering scheme is provided for Internet telephony. Presently the 10-digit numbering allocation based on E.164 is permitted for fixed telephony and GSM/CDMA wireless services. For Internet telephony, the numbering scheme shall conform only to the IP addressing scheme of the Internet Assigned Numbers Authority (IANA). Translation of an E.164 number / private number to an IP address allotted to any device and vice versa, by the ISP, to show compliance with the IANA numbering scheme is not permitted.
(f) The Internet Service Licensee is not permitted to have PSTN/PLMN connectivity. Voice communication to and from a telephone connected to PSTN/PLMN and following E.164 numbering is prohibited in India.
South Korea
In South Korea, only providers registered with the government are authorized to offer VoIP services. Unlike many VoIP providers, most of whom offer flat rates, Korean VoIP services are generally metered and charged at rates similar to terrestrial calling. Foreign VoIP providers encounter high barriers to government registration. This issue came to a head in 2006 when Internet service providers providing personal Internet services by contract to United States Forces Korea members residing on USFK bases threatened to block off access to VoIP services used by USFK members as an economical way to keep in contact with their families in the United States, on the grounds that the service members' VoIP providers were not registered. A compromise was reached between USFK and Korean telecommunications officials in January 2007, wherein USFK service members arriving in Korea before June 1, 2007, and subscribing to the ISP services provided on base may continue to use their US-based VoIP subscription, but later arrivals must use a Korean-based VoIP provider, which by contract will offer pricing similar to the flat rates offered by US VoIP providers.
United States
In the United States, the Federal Communications Commission requires all interconnected VoIP service providers to comply with requirements comparable to those for traditional telecommunications service providers. VoIP operators in the US are required to support local number portability; make service accessible to people with disabilities; pay regulatory fees, universal service contributions, and other mandated payments; and enable law enforcement authorities to conduct surveillance pursuant to the Communications Assistance for Law Enforcement Act (CALEA).
Operators of "Interconnected" VoIP (fully connected to the PSTN) are mandated to provide Enhanced 911 service without special request, provide for customer location updates, clearly disclose any limitations on their E-911 functionality to their consumers, obtain affirmative acknowledgements of these disclosures from all consumers, and 'may not allow their customers to “opt-out” of 911 service.' VoIP operators also receive the benefit of certain US telecommunications regulations, including an entitlement to interconnection and exchange of traffic with incumbent local exchange carriers via wholesale carriers. Providers of "nomadic" VoIP service—those who are unable to determine the location of their users—are exempt from state telecommunications regulation.
Another legal issue that the US Congress is debating concerns changes to the Foreign Intelligence Surveillance Act. The issue in question is calls between Americans and foreigners. The National Security Agency (NSA) is not authorized to tap Americans' conversations without a warrant—but the Internet, and specifically VoIP does not draw as clear a line to the location of a caller or a call's recipient as the traditional phone system does. As VoIP's low cost and flexibility convinces more and more organizations to adopt the technology, the surveillance for law enforcement agencies becomes more difficult. VoIP technology has also increased Federal security concerns because VoIP and similar technologies have made it more difficult for the government to determine where a target is physically located when communications are being intercepted, and that creates a whole set of new legal challenges.
History
The early developments of packet network designs by Paul Baran and other researchers were motivated by a desire for a higher degree of circuit redundancy and network availability in the face of infrastructure failures than was possible in the circuit-switched networks in telecommunications of the mid-twentieth century. Danny Cohen first demonstrated a form of packet voice in 1973 as part of a flight simulator application, which operated across the early ARPANET.
On the early ARPANET, real-time voice communication was not possible with uncompressed pulse-code modulation (PCM) digital speech packets, which had a bit rate of 64 kbit/s, much greater than the 2.4 kbit/s bandwidth of early modems. The solution to this problem was linear predictive coding (LPC), a speech coding data compression algorithm that was first proposed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. LPC was capable of speech compression down to 2.4 kbit/s, leading to the first successful real-time conversation over ARPANET in 1974, between Culler-Harrison Incorporated in Goleta, California, and MIT Lincoln Laboratory in Lexington, Massachusetts. LPC has since been the most widely used speech coding method. Code-excited linear prediction (CELP), a type of LPC algorithm, was developed by Manfred R. Schroeder and Bishnu S. Atal in 1985. LPC algorithms remain an audio coding standard in modern VoIP technology.
In the following time span of about two decades, various forms of packet telephony were developed and industry interest groups formed to support the new technologies. Following the termination of the ARPANET project, and expansion of the Internet for commercial traffic, IP telephony was tested and deemed infeasible for commercial use until the introduction of VocalChat in the early 1990s and then in Feb 1995 the official release of Internet Phone (or iPhone for short) commercial software by VocalTec, based on the Audio Transceiver patent by Lior Haramaty and Alon Cohen, and followed by other VoIP infrastructure components such as telephony gateways and switching servers. Soon after it became an established area of interest in commercial labs of the major IT concerns. By the late 1990s, the first softswitches became available, and new protocols, such as H.323, MGCP and the Session Initiation Protocol (SIP) gained widespread attention. In the early 2000s, the proliferation of high-bandwidth always-on Internet connections to residential dwellings and businesses, spawned an industry of Internet telephony service providers (ITSPs). The development of open-source telephony software, such as Asterisk PBX, fueled widespread interest and entrepreneurship in voice-over-IP services, applying new Internet technology paradigms, such as cloud services to telephony.
In 1999, a discrete cosine transform (DCT) audio data compression algorithm called the modified discrete cosine transform (MDCT) was adopted for the Siren codec, used in the G.722.1 wideband audio coding standard. The same year, the MDCT was adapted into the LD-MDCT speech coding algorithm, used for the AAC-LD format and intended for significantly improved audio quality in VoIP applications. MDCT has since been widely used in VoIP applications, such as the G.729.1 wideband codec introduced in 2006, Apple's FaceTime (using AAC-LD) introduced in 2010, the CELT codec introduced in 2011, the Opus codec introduced in 2012, and WhatsApp's voice calling feature introduced in 2015.
Milestones
1966: Linear predictive coding (LPC) proposed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT).
1973: Packet voice application by Danny Cohen.
1974: The Institute of Electrical and Electronics Engineers (IEEE) publishes a paper entitled "A Protocol for Packet Network Interconnection".
1974: Network Voice Protocol (NVP) tested over ARPANET in August 1974, carrying barely audible 16 kbit/s CVSD-encoded voice.
1974: The first successful real-time conversation over ARPANET achieved using 2.4 kbit/s LPC, between Culler-Harrison Incorporated in Goleta, California, and MIT Lincoln Laboratory in Lexington, Massachusetts.
1977: Danny Cohen and Jon Postel of the USC Information Sciences Institute, and Vint Cerf of the Defense Advanced Research Projects Agency (DARPA), agree to separate IP from TCP, and create UDP for carrying real-time traffic.
1981: IPv4 is described in RFC 791.
1985: The National Science Foundation commissions the creation of NSFNET.
1985: Code-excited linear prediction (CELP), a type of LPC algorithm, developed by Manfred R. Schroeder and Bishnu S. Atal.
1986: Proposals from various standards organizations for Voice over ATM, in addition to commercial packet voice products from companies such as StrataCom.
1991: Speak Freely, a voice-over-IP application, was released to the public domain.
1992: The Frame Relay Forum conducts development of standards for Voice over Frame Relay.
1992: InSoft Inc. announces and launches its desktop conferencing product Communique, which included VoIP and video. The company is credited with developing the first generation of commercial, US-based VoIP, Internet media streaming and real-time Internet telephony/collaborative software and standards that would provide the basis for the Real Time Streaming Protocol (RTSP) standard.
1993: Release of VocalChat, a commercial packet network PC voice communication software from VocalTec.
1994: MTALK, a freeware LAN VoIP application for Linux.
1995: VocalTec releases Internet Phone commercial Internet phone software.
Beginning in 1995, Intel, Microsoft and Radvision initiated standardization activities for VoIP communication systems.
1996:
ITU-T begins development of standards for the transmission and signaling of voice communications over Internet Protocol networks with the H.323 standard.
US telecommunication companies petition the US Congress to ban Internet phone technology.
G.729 speech codec introduced, using CELP (LPC) algorithm.
1997: Level 3 began development of its first softswitch, a term they coined in 1998.
1999:
The Session Initiation Protocol (SIP) specification RFC 2543 is released.
Mark Spencer of Digium develops the first open source private branch exchange (PBX) software (Asterisk).
A discrete cosine transform (DCT) variant called the modified discrete cosine transform (MDCT) is adopted for the Siren codec, used in the G.722.1 wideband audio coding standard.
The MDCT is adapted into the LD-MDCT algorithm, used in the AAC-LD standard.
2001: INOC-DBA, first inter-provider SIP network deployed; also first voice network to reach all seven continents.
2003: First released in August 2003, Skype was the creation of Niklas Zennström and Janus Friis, in cooperation with four Estonian developers. It quickly became a popular program that helped democratise VoIP.
2004: Commercial VoIP service providers proliferate.
2006: G.729.1 wideband codec introduced, using MDCT and CELP (LPC) algorithms.
2007: VoIP device manufacturers and sellers boom in Asia, specifically in the Philippines where many families of overseas workers reside.
2009: SILK codec introduced, using LPC algorithm, and used for voice calling in Skype.
2010: Apple introduces FaceTime, which uses the LD-MDCT-based AAC-LD codec.
2011:
Rise of WebRTC technology which allows VoIP directly in browsers.
CELT codec introduced, using MDCT algorithm.
2012: Opus codec introduced, using MDCT and LPC algorithms.
See also
Audio over IP
Communications Assistance For Law Enforcement Act
Comparison of audio network protocols
Comparison of VoIP software
Differentiated services
High bit rate audio video over Internet Protocol
Integrated services
Internet fax
IP Multimedia Subsystem
List of VoIP companies
Mobile VoIP
Network Voice Protocol
RTP audio video profile
SIP Trunking
UNIStim
Voice VPN
VoiceXML
VoIP recording
Notes
References
External links
Broadband
Videotelephony
Audio network protocols
Office equipment |
75625 | https://en.wikipedia.org/wiki/ASN.1 | ASN.1 | Abstract Syntax Notation One (ASN.1) is a standard interface description language for defining data structures that can be serialized and deserialized in a cross-platform way. It is broadly used in telecommunications and computer networking, and especially in cryptography.
Protocol developers define data structures in ASN.1 modules, which are generally a section of a broader standards document written in the ASN.1 language. The advantage is that the ASN.1 description of the data encoding is independent of a particular computer or programming language. Because ASN.1 is both human-readable and machine-readable, an ASN.1 compiler can compile modules into libraries of code, codecs, that decode or encode the data structures. Some ASN.1 compilers can produce code to encode or decode several encodings, e.g. packed, BER or XML.
ASN.1 is a joint standard of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) in ITU-T Study Group 17 and ISO/IEC, originally defined in 1984 as part of CCITT X.409:1984. In 1988, ASN.1 moved to its own standard, X.208, due to wide applicability. The substantially revised 1995 version is covered by the X.680 series. The latest revision of the X.680 series of recommendations is the 6.0 Edition, published in 2021.
Language support
ASN.1 is a data type declaration notation. It does not define how to manipulate a variable of such a type. Manipulation of variables is defined in other languages such as SDL (Specification and Description Language) for executable modeling or TTCN-3 (Testing and Test Control Notation) for conformance testing. Both these languages natively support ASN.1 declarations. It is possible to import an ASN.1 module and declare a variable of any of the ASN.1 types declared in the module.
Applications
ASN.1 is used to define a large number of protocols. Its most extensive uses continue to be telecommunications, cryptography, and biometrics.
Encodings
ASN.1 is closely associated with a set of encoding rules that specify how to represent a data structure as a series of bytes. The standard ASN.1 encoding rules include the Basic Encoding Rules (BER), the Canonical and Distinguished Encoding Rules (CER and DER), the Packed Encoding Rules (PER, aligned and unaligned), the XML Encoding Rules (XER), the Octet Encoding Rules (OER) and the JSON Encoding Rules (JER).
Encoding Control Notation
ASN.1 recommendations provide a number of predefined encoding rules. If none of the existing encoding rules are suitable, the Encoding Control Notation (ECN) provides a way for a user to define his or her own customized encoding rules.
Relation to Privacy-Enhanced Mail (PEM) Encoding
Privacy-Enhanced Mail (PEM) encoding is entirely unrelated to ASN.1 and its codecs; however, encoded ASN.1 data (which is often binary) is often PEM-encoded. This can aid with transport over media that are sensitive to textual encoding, such as SMTP relays, as well as with copying and pasting.
Example
This is an example ASN.1 module defining the messages (data structures) of a fictitious Foo Protocol:
FooProtocol DEFINITIONS ::= BEGIN
FooQuestion ::= SEQUENCE {
trackingNumber INTEGER,
question IA5String
}
FooAnswer ::= SEQUENCE {
questionNumber INTEGER,
answer BOOLEAN
}
END
This could be a specification published by creators of Foo Protocol. Conversation flows, transaction interchanges, and states are not defined in ASN.1, but are left to other notations and textual description of the protocol.
Assuming a message that complies with the Foo Protocol and that will be sent to the receiving party, this particular message (protocol data unit (PDU)) is:
myQuestion FooQuestion ::= {
trackingNumber 5,
question "Anybody there?"
}
ASN.1 supports constraints on values and sizes, and extensibility. The above specification can be changed to
FooProtocol DEFINITIONS ::= BEGIN
FooQuestion ::= SEQUENCE {
trackingNumber INTEGER(0..199),
question IA5String
}
FooAnswer ::= SEQUENCE {
questionNumber INTEGER(10..20),
answer BOOLEAN
}
FooHistory ::= SEQUENCE {
questions SEQUENCE(SIZE(0..10)) OF FooQuestion,
answers SEQUENCE(SIZE(1..10)) OF FooAnswer,
anArray SEQUENCE(SIZE(100)) OF INTEGER(0..1000),
...
}
END
This change constrains trackingNumbers to have a value between 0 and 199 inclusive, and questionNumbers to have a value between 10 and 20 inclusive. The size of the questions array can be between 0 and 10 elements, with the answers array between 1 and 10 elements. The anArray field is a fixed length 100 element array of integers that must be in the range 0 to 1000. The '...' extensibility marker means that the FooHistory message specification may have additional fields in future versions of the specification; systems compliant with one version should be able to receive and transmit transactions from a later version, though able to process only the fields specified in the earlier version. Good ASN.1 compilers will generate (in C, C++, Java, etc.) source code that will automatically check that transactions fall within these constraints. Transactions that violate the constraints should not be accepted from, or presented to, the application. Constraint management in this layer significantly simplifies protocol specification because the applications will be protected from constraint violations, reducing risk and cost.
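As an informal illustration of the kind of checking such a compiler might emit (a hand-written Python sketch, not actual tool output), a decoder for the constrained FooQuestion could reject out-of-range values like this:

def validate_foo_question(tracking_number, question):
    # Mirrors the constraints INTEGER(0..199) and IA5String from the module above.
    if not (0 <= tracking_number <= 199):
        raise ValueError("trackingNumber violates constraint INTEGER(0..199)")
    if not all(ord(c) < 128 for c in question):
        raise ValueError("question is not a valid IA5String")
    return True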
To send the myQuestion message through the network, the message is serialized (encoded) as a series of bytes using one of the encoding rules. The Foo protocol specification should explicitly name one set of encoding rules to use, so that users of the Foo protocol know which one they should use and expect.
Example encoded in DER
Below is the data structure shown above as FooQuestion encoded in DER format (all numbers are in hexadecimal):
30 13 02 01 05 16 0e 41 6e 79 62 6f 64 79 20 74 68 65 72 65 3f
DER is a type–length–value encoding, so the sequence above can be interpreted, with reference to the standard SEQUENCE, INTEGER, and IA5String types, as follows:
30 — type tag indicating SEQUENCE
13 — length in octets of value that follows
02 — type tag indicating INTEGER
01 — length in octets of value that follows
05 — value (5)
16 — type tag indicating IA5String
(IA5 means the full 7-bit ISO 646 set, including variants,
but is generally US-ASCII)
0e — length in octets of value that follows
41 6e 79 62 6f 64 79 20 74 68 65 72 65 3f — value ("Anybody there?")
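The same bytes can be reproduced by hand to make the type–length–value nesting explicit. The Python sketch below is a minimal hand encoder for this single value, not a general DER library:

def der_tlv(tag, value):
    # Short-form length only; sufficient here because every value is under 128 bytes.
    assert len(value) < 128
    return bytes([tag, len(value)]) + value

integer = der_tlv(0x02, bytes([5]))            # INTEGER 5
ia5string = der_tlv(0x16, b"Anybody there?")   # IA5String "Anybody there?"
sequence = der_tlv(0x30, integer + ia5string)  # SEQUENCE wrapping both fields

print(sequence.hex(" "))
# 30 13 02 01 05 16 0e 41 6e 79 62 6f 64 79 20 74 68 65 72 65 3f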
Example encoded in XER
Alternatively, it is possible to encode the same ASN.1 data structure with XML Encoding Rules (XER) to achieve greater human readability "over the wire". It would then appear as the following 108 octets (the octet count includes the spaces used for indentation):
<FooQuestion>
<trackingNumber>5</trackingNumber>
<question>Anybody there?</question>
</FooQuestion>
Example encoded in PER (unaligned)
Alternatively, if Packed Encoding Rules are employed, the following 122 bits (16 octets amount to 128 bits, but here only 122 bits carry information and the last 6 bits are merely padding) will be produced:
01 05 0e 83 bb ce 2d f9 3c a0 e9 a3 2f 2c af c0
In this format, type tags for the required elements are not encoded, so the data cannot be parsed without knowing the schema used to encode it. Additionally, the bytes for the value of the IA5String are packed using 7-bit units instead of 8-bit units, because the encoder knows that an IA5String character value requires only 7 bits. However, the length bytes are still encoded here, even the 01 preceding the first INTEGER value (a PER packer could omit it if it knew that the allowed value range fits in 8 bits, and it could even compact the single value byte 05 into fewer than 8 bits if it knew that the allowed values fit in a smaller range).
The last 6 bits in the encoded PER are padded with null bits in the 6 least significant bits of the last byte c0 : these extra bits may not be transmitted or used for encoding something else if this sequence is inserted as a part of a longer unaligned PER sequence.
This means that unaligned PER data is essentially an ordered stream of bits, and not an ordered stream of bytes like with aligned PER, and that it will be a bit more complex to decode by software on usual processors because it will require additional contextual bit-shifting and masking and not direct byte addressing (but the same remark would be true with modern processors and memory/storage units whose minimum addressable unit is larger than 1 octet). However modern processors and signal processors include hardware support for fast internal decoding of bit streams with automatic handling of computing units that are crossing the boundaries of addressable storage units (this is needed for efficient processing in data codecs for compression/decompression or with some encryption/decryption algorithms).
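The 7-bit packing of the IA5String can be reproduced with the short Python sketch below (it covers only this packing step, not a full PER encoder):

def pack_ia5_7bit(text):
    bits = "".join(format(ord(c), "07b") for c in text)
    bits += "0" * (-len(bits) % 8)                      # pad the final byte with zero bits
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(pack_ia5_7bit("Anybody there?").hex(" "))
# -> 83 bb ce 2d f9 3c a0 e9 a3 2f 2c af c0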
If alignment on octet boundaries was required, an aligned PER encoder would produce:
01 05 0e 41 6e 79 62 6f 64 79 20 74 68 65 72 65 3f
(in this case, each octet is padded individually with null bits on their unused most significant bits).
Tools
Most of the tools supporting ASN.1 do the following:
parse the ASN.1 files,
generate the equivalent declarations in a programming language (like C or C++),
generate the encoding and decoding functions based on the previous declarations.
A list of tools supporting ASN.1 can be found on the ITU-T Tool web page.
Online tools
ASN1 Web Tool (very limited)
ASN1 Playground (sandbox)
Comparison to similar schemes
ASN.1 is similar in purpose and use to protocol buffers and Apache Thrift, which are also interface description languages for cross-platform data serialization. Like those languages, it has a schema (in ASN.1, called a "module"), and a set of encodings, typically type–length–value encodings. Unlike them, ASN.1 does not provide a single and readily usable open-source implementation, and is published as a specification to be implemented by third-party vendors. However, ASN.1, defined in 1984, predates them by many years. It also includes a wider variety of basic data types, some of which are obsolete, and has more options for extensibility. A single ASN.1 message can include data from multiple modules defined in multiple standards, even standards defined years apart.
ASN.1 also includes built-in support for constraints on values and sizes. For instance, a module can specify an integer field that must be in the range 0 to 100. The length of a sequence of values (an array) can also be specified, either as a fixed length or a range of permitted lengths. Constraints can also be specified as logical combinations of sets of basic constraints.
Values used as constraints can either be literals used in the PDU specification, or ASN.1 values specified elsewhere in the schema file. Some ASN.1 tools will make these ASN.1 values available to programmers in the generated source code. Used as constants for the protocol being defined, developers can use these in the protocol's logic implementation. Thus all the PDUs and protocol constants can be defined in the schema, and all implementations of the protocol in any supported language draw upon those values. This avoids the need for developers to hand code protocol constants in their implementation's source code. This significantly aids protocol development; the protocol's constants can be altered in the ASN.1 schema and all implementations are updated simply by recompiling, promoting a rapid and low risk development cycle.
If the ASN.1 tools properly implement constraints checking in the generated source code, this acts to automatically validate protocol data during program operation. Generally ASN.1 tools will include constraints checking into the generated serialization / deserialization routines, raising errors or exceptions if out-of-bounds data is encountered. It is complex to implement all aspects of ASN.1 constraints in an ASN.1 compiler. Not all tools support the full range of possible constraints expressions. XML schema and JSON schema both support similar constraints concepts. Tool support for constraints varies. Microsoft's xsd.exe compiler ignores them.
ASN.1 is visually similar to Augmented Backus-Naur form (ABNF), which is used to define many Internet protocols like HTTP and SMTP. However, in practice they are quite different: ASN.1 defines a data structure, which can be encoded in various ways (e.g. JSON, XML, binary). ABNF, on the other hand, defines the encoding ("syntax") at the same time it defines the data structure ("semantics"). ABNF tends to be used more frequently for defining textual, human-readable protocols, and generally is not used to define type–length–value encodings.
Many programming languages define language-specific serialization formats. For instance, Python's "pickle" module and Ruby's "Marshal" module. These formats are generally language specific. They also don't require a schema, which makes them easier to use in ad hoc storage scenarios, but inappropriate for communications protocols.
JSON and XML similarly do not require a schema, making them easy to use. They are also both cross-platform standards that are broadly popular for communications protocols, particularly when combined with a JSON schema or XML schema.
Some ASN.1 tools are able to translate between ASN.1 and XML schema (XSD). The translation is standardised by the ITU. This makes it possible for a protocol to be defined in ASN.1, and also automatically in XSD. Thus it is possible (though perhaps ill-advised) to have in a project an XSD schema being compiled by ASN.1 tools producing source code that serializes objects to/from JSON wireformat. A more practical use is to permit other sub-projects to consume an XSD schema instead of an ASN.1 schema, perhaps suiting tools availability for the sub-projects language of choice, with XER used as the protocol wireformat.
For more detail, see Comparison of data serialization formats.
See also
X.690
Information Object Class (ASN.1)
Presentation layer
References
External links
A Layman's Guide to a Subset of ASN.1, BER, and DER A good introduction for beginners
ITU-T website - Introduction to ASN.1
A video introduction to ASN.1
ASN.1 Tutorial Tutorial on basic ASN.1 concepts
ASN.1 Tutorial Tutorial on ASN.1
An open-source ASN.1->C++ compiler; Includes some ASN.1 specs., An on-line ASN.1->C++ Compiler
ASN.1 decoder Allows decoding ASN.1 encoded messages into XML output.
ASN.1 syntax checker and encoder/decoder Checks the syntax of an ASN.1 schema and encodes/decodes messages.
ASN.1 encoder/decoder of 3GPP messages Encodes/decodes ASN.1 3GPP messages and allows easy editing of these messages.
Free books about ASN.1
List of ASN.1 tools at IvmaiAsn project
Overview of the Octet Encoding Rules (OER)
Overview of the JSON Encoding Rules (JER)
A Typescript node utility to parse and validate ASN.1 messages
Data modeling languages
Data serialization formats
ITU-T X Series Recommendations |
77904 | https://en.wikipedia.org/wiki/Television%20receive-only | Television receive-only | Television receive-only (TVRO) is a term used chiefly in North America and South America to refer to the reception of satellite television from FSS-type satellites, generally on C-band analog; free-to-air and unconnected to a commercial DBS provider. TVRO was the main means of consumer satellite reception in the United States and Canada until the mid-1990s with the arrival of direct-broadcast satellite television services such as PrimeStar, USSB, Bell Satellite TV, DirecTV, Dish Network, and Sky TV, which transmit Ku-band signals. While these services are at least theoretically based on open standards (DVB-S, MPEG-2, MPEG-4), the majority of services are encrypted and require proprietary decoder hardware. TVRO systems relied on feeds being transmitted unencrypted and using open standards, which heavily contrasts with DBS systems in the region.
The term is also used to refer to receiving digital television "backhaul" feeds from FSS-type satellites. Reception of free-to-air satellite signals, generally Ku band Digital Video Broadcasting, for home viewing is still common in Europe, India and Australia, although the TVRO nomenclature was never used there. Free-to-air satellite signals are also very common in the People's Republic of China, as many rural locations cannot receive cable television and solely rely on satellites to deliver television signals to individual homes.
"Big ugly dish"
The term "BUD" (big ugly dish) is a colloquialism for C-Band satellite dishes used by TVRO systems. BUDs range from 4 to 16 feet in diameter, with the most popular large size being 10 feet. The name comes from their perception as an eyesore.
History
TVRO systems were originally marketed in the late 1970s. On October 18, 1979, the FCC began allowing people to have home satellite earth stations without a federal government license. The dishes were nearly in diameter, were remote controlled, and could only pick up HBO signals from one of two satellites.
Originally, the dishes used for satellite TV reception were 12 to 16 feet in diameter and made of solid fiberglass with an embedded metal coating, with later models being 4 to 10 feet and made of wire mesh and solid steel or aluminum. Early dishes cost more than $5,000, and sometimes as much as $10,000. The wider the dish was, the better its ability to provide adequate channel reception. Programming sent from ground stations was relayed from 18 satellites in geostationary orbit located 22,300 miles above the Earth. The dish had to be pointed directly at the satellite, with nothing blocking the signal. Weaker signals required larger dishes.
The dishes worked by receiving a low-power C-Band (3.7–4.2 GHz) frequency-modulated analog signal directly from the original distribution satellite – the same signal received by cable television headends. Because analog channels took up an entire transponder on the satellite, and each satellite had a fixed number of transponders, dishes were usually equipped with a modified polar mount and actuator to sweep the dish across the horizon to receive channels from multiple satellites. Switching between horizontal and vertical polarization was accomplished by a small electric servo motor that moved a probe inside the feedhorn throat at the command of the receiver (commonly called a "polarotor" setup). Higher-end receivers did this transparently, switching polarization and moving the dish automatically as the user changed channels.
By Spring of 1984, 18 C-Band satellites were in use for United States domestic communications, owned by five different companies.
The retail price for satellite receivers soon dropped, with some dishes costing as little as $2,000 by mid-1984. Dishes pointing to one satellite were even cheaper. Once a user paid for a dish, it was possible to receive even premium movie channels, raw feeds of news broadcasts or television stations from other areas. People in areas without local broadcast stations, and people in areas without cable television, could obtain good-quality reception with no monthly fees. Two open questions existed about this practice: whether the Communications Act of 1934 applied as a case of "unauthorized reception" by TVRO consumers; and to what extent it was legal for a service provider to encrypt their signals in an effort to prevent their reception.
The Cable Communications Policy Act of 1984 clarified all of these matters, making the following legal:
Reception of unencrypted satellite signals by a consumer
Reception of encrypted satellite signals by a consumer, when they have received authorization to legally decrypt it
This created a framework for the wide deployment of encryption on analog satellite signals. It further created a framework (and implicit mandate to provide) subscription services to TVRO consumers to allow legal decryption of those signals. HBO and Cinemax became the first two services to announce intent to encrypt their satellite feeds late in 1984. Others were strongly considering doing so as well. Where cable providers could compete with TVRO subscription options, it was thought this would provide sufficient incentive for competition.
HBO and Cinemax began encrypting their west coast feeds services with VideoCipher II 12 hours a day early in 1985, then did the same with their east coast feeds by August. The two networks began scrambling full time on January 15, 1986, which in many contemporary news reports was called "S-Day". This met with much protest from owners of big-dish systems, most of which had no other option at the time for receiving such channels. As required by the Cable Communications Policy act of 1984, HBO allowed dish owners to subscribe directly to their service, although at a price ($12.95 per month) higher than what cable subscribers were paying. This sentiment, and a collapse in the sales of TVRO equipment in early 1986, led to the April 1986 attack on HBO's transponder on Galaxy 1. Dish sales went down from 600,000 in 1985 to 350,000 in 1986, but pay television services were seeing dishes as something positive since some people would never have cable service, and the industry was starting to recover as a result. Through 1986, other channels that began full time encryption included Showtime and The Movie Channel on May 27, and CNN and CNN Headline News on July 1. Scrambling would also lead to the development of pay-per-view, as demonstrated by the early adoption of encryption by Request Television, and Viewer's Choice. Channels scrambled (encrypted) with VideoCipher and VideoCipher II could be defeated, and there was a black market for illegal descramblers.
By the end of 1987, 16 channels had employed encryption, with another 7 planned in the first half of 1988. Packages that offered reduced rates for channels in bulk had begun to appear. At this time, the vast majority of analog satellite TV transponders still were not encrypted. On November 1, 1988, NBC began scrambling its C-band signal but left its Ku band signal unencrypted so that affiliates would not lose viewers who could not see their advertising. Most of the two million satellite dish users in the United States still used C-band. ABC and CBS were considering scrambling, though CBS was reluctant due to the number of people unable to receive local network affiliates.
The growth of dishes receiving Ku band signals in North America was limited by the Challenger disaster, since 75 satellites had been scheduled for launch before the Space Shuttle program was suspended. Only seven Ku band satellites were in use.
In addition to encryption, DBS services such as PrimeStar had been reducing the popularity of TVRO systems since the early 1990s. Signals from DBS satellites (operating in the more recent Ku band) are higher in both frequency and power (due to improvements in the solar panels and energy efficiency of modern satellites) and therefore require much smaller dishes than C-band, and the digital signals now used require far less signal strength at the receiver, resulting in a lower cost of entry. Each satellite also can carry up to 32 transponders in the Ku band, but only 24 in the C band, and several digital subchannels can be multiplexed (MCPC) or carried separately (SCPC) on a single transponder. General advances, such as HEMT, in noise reduction at microwave frequencies have also had an effect. However, a consequence of the higher frequency used for DBS services is rain fade, where viewers lose signal during a heavy downpour. C-band's immunity to rain fade is one of the major reasons the system is still used as the preferred method for television broadcasters to distribute their signal.
Popularity
TVRO systems were most popular in rural areas, beyond the broadcast range of most local television stations. The mountainous terrain of West Virginia, for example, makes reception of over-the-air television broadcasts (especially in the higher UHF frequencies) very difficult. From the late 1970s to the early 1990s DBS systems were not available, and cable television systems of the time only carried a few channels, resulting in a boom in sales of systems in the area, which led to the systems being termed the "West Virginia state flower". The term was regional, known mostly to those living in West Virginia and surrounding areas. Another reason was the large sizes of the dishes. The first satellite systems consisted of "BUDs" twelve to sixteen feet in diameter. They became much more popular in the mid-1980s when dish sizes decreased to about six to ten feet, but have always been a source of much consternation (even local zoning disputes) due to their perception as an eyesore. Neighborhoods with restrictive covenants usually still prohibit this size of dish, except where such restrictions are illegal. Support for systems dried up when strong encryption was introduced around 1994. Many long-disconnected dishes still occupy their original spots.
TVRO on ships
The term TVRO has been in use on ships since it was introduced in the 1980s. One early provider of equipment was SeaTel, whose first generation of stabilized satellite antennas, the TV-at-Sea 8885 system, was launched in 1985. Until this time ships had not been able to receive television signals from satellites, their rocking motion rendering reception impossible. The SeaTel antenna, however, was stabilized using electrically driven gyroscopes, making it possible to point at the satellite accurately enough – to within 2° – to receive a signal. The successful implementation of stabilised TVRO systems on ships immediately led to the development of maritime VSAT systems. The second generation of SeaTel TVRO systems came in 1994 with the 2494 antenna, which got its gyro signal from the ship rather than its own gyros, improving accuracy and reducing maintenance.
As of 2010, SeaTel continues to dominate the market for stabilized TVRO systems and has, according to the Comsys group, a market share of 75%. Other established providers of stabilised satellite antennas are Intellian, KNS, Orbit, EPAK and KVH.
Current uses
Most of the free analogue channels that BUDs were built to receive have been taken offline. Because of the number of systems in existence, their limited usefulness, and the fact that many people consider them an eyesore, used BUDs can be purchased for very little money. As of 2009, there are 23 C-band satellites and 38 Ku/Ka band satellites.
Over 150 channels were available to people who wanted to receive subscription channels on a C-band dish via Motorola's 4DTV equipment, through two vendors: Satellite Receivers Ltd (SRL) and Skyvision. The 4DTV subscription system shut down on August 16, 2016.
The dishes themselves can be modified to receive free-to-air and DBS signals. The stock LNBs fitted to typical BUDs will usually need to be replaced with one of a lower noise temperature to receive digital broadcasts. With a suitable replacement LNB (provided there is no warping of the reflector) a BUD can be used to receive free-to-air (FTA) and DBS signals. Several companies market LNBs, LNBFs, and adaptor collars for big-dish systems. For receiving FTA signals the replacement should be capable of dual C/Ku reception with linear polarization, for DBS it will need a high band Ku LNBF using circular polarization. Older mesh dishes with perforations larger than 5mm are inefficient at Ku frequencies, because the smaller wavelengths will pass through them. Solid fiberglass dishes usually contain metal mesh with large-diameter perforations as a reflector and are usually unsuitable for anything other than C band.
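As a rough illustration of the mesh-size point above, the free-space wavelength at each band can be computed directly. The following minimal Python sketch (the 4 GHz and 12 GHz downlink frequencies are assumed as representative values, not taken from the text) shows why holes of a few millimetres matter far more at Ku band than at C band:

# Free-space wavelength at representative C-band and Ku-band downlink
# frequencies, to illustrate why coarse mesh (holes larger than 5 mm) that
# works at C band reflects Ku-band signals poorly.
C = 299_792_458  # speed of light in m/s

for name, freq_ghz in [("C band (4 GHz)", 4.0), ("Ku band (12 GHz)", 12.0)]:
    wavelength_mm = C / (freq_ghz * 1e9) * 1000
    print(f"{name}: wavelength is about {wavelength_mm:.0f} mm")

# C band (4 GHz): about 75 mm  -> 5 mm holes are a small fraction of a wavelength
# Ku band (12 GHz): about 25 mm -> 5 mm holes are a much larger fraction,
# so more energy leaks through the mesh instead of being reflected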
Large dishes have higher antenna gain, which can be an advantage when used with DBS signals such as Dish Network and DirecTV, virtually eliminating rain fade. Restored dishes fitted with block upconverters can be used to transmit signals as well. BUDs can still be seen at antenna farms for these reasons, so that video and backhauls can be sent to and from the television network with which a station is affiliated, without interruption due to inclement weather. BUDs are also still useful for picking up weak signals at the edge of a satellite's broadcast "footprint" – the area at which a particular satellite is aimed. For this reason, BUDs are helpful in places like Alaska, or parts of the Caribbean.
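The gain advantage of a big dish can be put in rough numbers with the standard parabolic-antenna approximation. This is a sketch only; the 60% aperture efficiency, the 12 GHz frequency, and the two dish diameters are assumptions chosen for illustration:

import math

def dish_gain_dbi(diameter_m, freq_ghz, efficiency=0.6):
    # Approximate gain of a parabolic reflector: G = efficiency * (pi * D / wavelength)^2
    wavelength = 299_792_458 / (freq_ghz * 1e9)
    gain_linear = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain_linear)

# Compare a 3 m (roughly 10 ft) BUD with a typical 0.46 m DBS dish at 12 GHz.
print(f"3 m BUD:    {dish_gain_dbi(3.0, 12.0):.1f} dBi")
print(f"0.46 m DBS: {dish_gain_dbi(0.46, 12.0):.1f} dBi")
# The difference of roughly 16 dB is the extra margin that lets a BUD ride through rain fade.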
Modern equivalents
Large parabolic antennas similar to BUDs are still in production. New dishes differ in their construction and materials. New mesh dishes have much smaller perforations, and solid dishes are now made with steel instead of fiberglass. New systems usually include a universal LNB that is switched electronically between horizontal and vertical polarization, obviating the need for a failure-prone polar rotor. As a complete system they have a much lower noise temperature than old BUDs, and are generally better for digital Ku reception. Prices have also fallen dramatically: the first BUDs cost several thousand dollars, while as of 2014 an 8 ft mesh BUD could be bought on eBay or Amazon for as little as $200. Typical uses for these systems include receiving free-to-air and subscription services.
See also
Direct-broadcast satellite television
Polar mount
References
External links
rec.video.satellite.tvro FAQ
Part 1, Part 2, Part 3, Part 4
C/Ku Band Satellite Systems – Tuning, Tracking...
How to set up and align a BUD
North American seller of 8ft, 10ft, 12ft and 13.5ft mesh TVRO antennas
US satellite TV subscription provider for BUDs
Canadian satellite TV subscription provider for BUDs
Satellite Charts and Forum for C-Band Satellite users in North America
Television technology
Broadcast engineering
Radio frequency antenna types
Antennas (radio)
Satellite television
History of television
Television terminology |
78768 | https://en.wikipedia.org/wiki/Proxy%20server | Proxy server | In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource.
Instead of connecting directly to a server that can fulfill a requested resource, such as a file or web page, the client directs the request to the proxy server, which evaluates the request and performs the required network transactions. This serves as a method to simplify or control the complexity of the request, or provide additional benefits such as load balancing, privacy, or security. Proxies were devised to add structure and encapsulation to distributed systems. A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request to the resource server.
Types
A proxy server may reside on the user's local computer, or at any point between the user's computer and destination servers on the Internet. A proxy server that passes unmodified requests and responses is usually called a gateway or sometimes a tunneling proxy. A forward proxy is an Internet-facing proxy used to retrieve data from a wide range of sources (in most cases anywhere on the Internet). A reverse proxy is usually an internal-facing proxy used as a front-end to control and protect access to a server on a private network. A reverse proxy commonly also performs tasks such as load-balancing, authentication, decryption and caching.
Open proxies
An open proxy is a forwarding proxy server that is accessible by any Internet user. In 2008, network security expert Gordon Lyon estimated that "hundreds of thousands" of open proxies are operated on the Internet.
Anonymous proxy – This server reveals its identity as a proxy server but does not disclose the originating IP address of the client. Although this type of server can be discovered easily, it can be beneficial for some users as it hides the originating IP address.
Transparent proxy – This server not only identifies itself as a proxy server but with the support of HTTP header fields such as X-Forwarded-For, the originating IP address can be retrieved as well. The main benefit of using this type of server is its ability to cache a website for faster retrieval.
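As an illustration of the X-Forwarded-For mechanism mentioned above, the sketch below shows how an origin server might recover the client address from the header that a transparent proxy appends. It assumes a single well-behaved proxy and the usual comma-separated, client-first header format:

def originating_ip(headers, peer_ip):
    # headers: dict of HTTP header names (lower-cased) to values
    # peer_ip: the address of the directly connected peer (the proxy itself)
    xff = headers.get("x-forwarded-for")
    if not xff:
        return peer_ip  # no proxy header present; the peer is the client
    # X-Forwarded-For lists addresses client-first, one entry per proxy hop.
    return xff.split(",")[0].strip()

# Example: a transparent proxy at 203.0.113.7 forwarded a request from 198.51.100.23
print(originating_ip({"x-forwarded-for": "198.51.100.23"}, "203.0.113.7"))  # -> 198.51.100.23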
Reverse proxies
A reverse proxy (or surrogate) is a proxy server that appears to clients to be an ordinary server. Reverse proxies forward requests to one or more ordinary servers that handle the request. The response from the proxy server is returned as if it came directly from the original server, leaving the client with no knowledge of the original server. Reverse proxies are installed in the neighborhood of one or more web servers. All traffic coming from the Internet and with a destination of one of the neighborhood's web servers goes through the proxy server. The use of "reverse" originates in its counterpart "forward proxy", since the reverse proxy sits closer to the web server and serves only a restricted set of websites. There are several reasons for installing reverse proxy servers (a minimal sketch follows the list below):
Encryption/SSL acceleration: when secure websites are created, the Secure Sockets Layer (SSL) encryption is often not done by the web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware. Furthermore, a host can provide a single "SSL proxy" to provide SSL encryption for an arbitrary number of hosts, removing the need for a separate SSL server certificate for each host, with the downside that all hosts behind the SSL proxy have to share a common DNS name or IP address for SSL connections. This problem can partly be overcome by using the SubjectAltName feature of X.509 certificates.
Load balancing: the reverse proxy can distribute the load to several web servers, each web server serving its own application area. In such a case, the reverse proxy may need to rewrite the URLs in each web page (translation from externally known URLs to the internal locations).
Serve/cache static content: A reverse proxy can offload the web servers by caching static content like pictures and other static graphical content.
Compression: the proxy server can optimize and compress the content to speed up the load time.
Spoon feeding: reduces resource usage caused by slow clients on the web servers by caching the content the web server sent and slowly "spoon feeding" it to the client. This especially benefits dynamically generated pages.
Security: the proxy server is an additional layer of defense and can protect against some OS and web-server-specific attacks. However, it does not provide any protection from attacks against the web application or service itself, which is generally considered the larger threat.
Extranet publishing: a reverse proxy server facing the Internet can be used to communicate to a firewall server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewalls. If used in this way, security measures should be considered to protect the rest of your infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet.
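The minimal sketch below illustrates the reverse-proxy idea, including simple round-robin load balancing. It is an illustration only, not a production design: there is no streaming, caching, TLS or error handling, and the backend addresses are assumed for the example:

import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Two assumed backend web servers; requests alternate between them.
BACKENDS = itertools.cycle(["http://127.0.0.1:8081", "http://127.0.0.1:8082"])

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)                      # simple load balancing
        upstream = urllib.request.urlopen(backend + self.path)
        body = upstream.read()
        self.send_response(upstream.status)
        self.send_header("Content-Type", upstream.headers.get("Content-Type", "text/html"))
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # the client never sees the backend

if __name__ == "__main__":
    HTTPServer(("", 8080), ReverseProxy).serve_forever()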
Uses
Monitoring and filtering
Content-control software
A content-filtering web proxy server provides administrative control over the content that may be relayed in one or both directions through the proxy. It is commonly used in both commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to acceptable use policy.
Content filtering proxy servers will often support user authentication to control web access. It also usually produces logs, either to give detailed information about the URLs accessed by specific users or to monitor bandwidth usage statistics. It may also communicate to daemon-based and/or ICAP-based antivirus software to provide security against viruses and other malware by scanning incoming content in real time before it enters the network.
Many workplaces, schools, and colleges restrict web sites and online services that are accessible and available in their buildings. Governments also censor undesirable content. This is done either with a specialized proxy, called a content filter (both commercial and free products are available), or by using a cache-extension protocol such as ICAP, that allows plug-in extensions to an open caching architecture.
Websites commonly used by students to circumvent filters and access blocked content often include a proxy, from which the user can then access the websites that the filter is trying to block.
Requests may be filtered by several methods, such as URL or DNS blacklists, URL regex filtering, MIME filtering, or content keyword filtering. Blacklists are often provided and maintained by web-filtering companies, often grouped into categories (pornography, gambling, shopping, social networks, etc.).
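A content filter's first pass can be as simple as the check sketched below. This is a toy example: the category lists and the URL pattern are invented for illustration, and a real deployment would use a maintained commercial blacklist:

import re
from urllib.parse import urlparse

# Toy blacklist grouped into categories, as a commercial list would be.
BLACKLIST = {
    "gambling": {"casino.example", "bets.example"},
    "social": {"social.example"},
}
URL_PATTERNS = [re.compile(r"\.exe$", re.I)]   # simple URL regex filtering

def is_blocked(url):
    host = urlparse(url).hostname or ""
    for category, domains in BLACKLIST.items():
        if host in domains or any(host.endswith("." + d) for d in domains):
            return True, category
    if any(p.search(url) for p in URL_PATTERNS):
        return True, "pattern"
    return False, None

print(is_blocked("http://www.casino.example/play"))              # (True, 'gambling')
print(is_blocked("https://en.wikipedia.org/wiki/Proxy_server"))  # (False, None)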
Assuming the requested URL is acceptable, the content is then fetched by the proxy. At this point, a dynamic filter may be applied on the return path. For example, JPEG files could be blocked based on fleshtone matches, or language filters could dynamically detect unwanted language. If the content is rejected then an HTTP fetch error may be returned to the requester.
Most web filtering companies use an internet-wide crawling robot that assesses the likelihood that content is a certain type. The resultant database is then corrected by manual labor based on complaints or known flaws in the content-matching algorithms.
Some proxies scan outbound content, e.g., for data loss prevention; or scan content for malicious software.
Filtering of encrypted data
Web filtering proxies are not able to peer inside secure sockets HTTP transactions, assuming the chain-of-trust of SSL/TLS (Transport Layer Security) has not been tampered with. The SSL/TLS chain-of-trust relies on trusted root certificate authorities.
In a workplace setting where the client is managed by the organization, devices may be configured to trust a root certificate whose private key is known to the proxy. In such situations, proxy analysis of the contents of an SSL/TLS transaction becomes possible. The proxy is effectively operating a man-in-the-middle attack, allowed by the client's trust of a root certificate the proxy owns.
Bypassing filters and censorship
If the destination server filters content based on the origin of the request, the use of a proxy can circumvent this filter. For example, a server using IP-based geolocation to restrict its service to a certain country can be accessed using a proxy located in that country to access the service.
Web proxies are the most common means of bypassing government censorship, although no more than 3% of Internet users use any circumvention tools.
Some proxy service providers allow businesses access to their proxy network for rerouting traffic for business intelligence purposes.
In some cases, users can circumvent proxies which filter using blacklists using services designed to proxy information from a non-blacklisted location.
Logging and eavesdropping
Proxies can be installed in order to eavesdrop upon the data-flow between client machines and the web. All content sent or accessed – including passwords submitted and cookies used – can be captured and analyzed by the proxy operator. For this reason, passwords to online services (such as webmail and banking) should always be exchanged over a cryptographically secured connection, such as SSL.
By chaining the proxies which do not reveal data about the original requester, it is possible to obfuscate activities from the eyes of the user's destination. However, more traces will be left on the intermediate hops, which could be used or offered up to trace the user's activities. If the policies and administrators of these other proxies are unknown, the user may fall victim to a false sense of security just because those details are out of sight and mind.
In what is more of an inconvenience than a risk, proxy users may find themselves being blocked from certain Web sites, as numerous forums and Web sites block IP addresses from proxies known to have spammed or trolled the site. Proxy bouncing can be used to maintain privacy.
Improving performance
A caching proxy server accelerates service requests by retrieving the content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and costs, while significantly increasing performance. Most ISPs and large businesses have a caching proxy. Caching proxies were the first kind of proxy server. Web proxies are commonly used to cache web pages from a web server. Poorly implemented caching proxies can cause problems, such as an inability to use user authentication.
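The core of a caching proxy is little more than a keyed store consulted before any upstream fetch. The minimal in-memory sketch below ignores Cache-Control, expiry and validation, all of which a real caching proxy must honour:

import urllib.request

class CachingFetcher:
    # Toy caching-proxy core: serve repeat requests from memory.
    def __init__(self):
        self.cache = {}          # URL -> response body

    def fetch(self, url):
        if url in self.cache:
            return self.cache[url], "HIT"     # no upstream bandwidth used
        body = urllib.request.urlopen(url).read()
        self.cache[url] = body
        return body, "MISS"

fetcher = CachingFetcher()
# The first request goes upstream; the second is served locally.
_, status1 = fetcher.fetch("https://example.com/")
_, status2 = fetcher.fetch("https://example.com/")
print(status1, status2)   # MISS HIT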
A proxy that is designed to mitigate specific link-related issues or degradation is a Performance Enhancing Proxy (PEP). These are typically used to improve TCP performance in the presence of high round-trip times or high packet loss (such as wireless or mobile phone networks); or highly asymmetric links featuring very different upload and download rates. PEPs can make more efficient use of the network, for example, by merging TCP ACKs (acknowledgements) or compressing data sent at the application layer.
Translation
A translation proxy is a proxy server that is used to localize a website experience for different markets. Traffic from the global audience is routed through the translation proxy to the source website. As visitors browse the proxied site, requests go back to the source site where pages are rendered. The original language content in the response is replaced by the translated content as it passes back through the proxy. The translations used in a translation proxy can be either machine translation, human translation, or a combination of machine and human translation. Different translation proxy implementations have different capabilities. Some allow further customization of the source site for the local audiences such as excluding the source content or substituting the source content with the original local content.
Repairing errors
A proxy can be used to automatically repair errors in the proxied content. For instance, the BikiniProxy system instruments JavaScript code on the fly in order to detect and automatically repair errors happening in the browser. Another kind of repair that can be done by a proxy is to fix accessibility issues.
Accessing services anonymously
An anonymous proxy server (sometimes called a web proxy) generally attempts to anonymize web surfing. Anonymizers may be differentiated into several varieties. The destination server (the server that ultimately satisfies the web request) receives requests from the anonymizing proxy server and thus does not receive information about the end user's address. The requests are not anonymous to the anonymizing proxy server, however, and so a degree of trust is present between the proxy server and the user. Many proxy servers are funded through a continued advertising link to the user.
Access control: Some proxy servers implement a logon requirement. In large organizations, authorized users must log on to gain access to the web. The organization can thereby track usage to individuals. Some anonymizing proxy servers may forward data packets with header lines such as HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which may reveal the IP address of the client. Other anonymizing proxy servers, known as elite or high-anonymity proxies, make it appear that the proxy server is the client. A website could still suspect a proxy is being used if the client sends packets that include a cookie from a previous visit that did not use the high-anonymity proxy server. Clearing cookies, and possibly the cache, would solve this problem.
QA geotargeted advertising
Advertisers use proxy servers for validating, checking and quality assurance of geotargeted ads. A geotargeting ad server checks the request source IP address and uses a geo-IP database to determine the geographic source of requests. Using a proxy server that is physically located inside a specific country or a city gives advertisers the ability to test geotargeted ads.
Security
A proxy can keep the internal network structure of a company secret by using network address translation, which can help the security of the internal network. This makes requests from machines and users on the local network anonymous. Proxies can also be combined with firewalls.
An incorrectly configured proxy can provide access to a network otherwise isolated from the Internet.
Cross-domain resources
Proxies allow web sites to make web requests to externally hosted resources (e.g. images, music files, etc.) when cross-domain restrictions prohibit the web site from linking directly to the outside domains. Proxies also allow the browser to make web requests to externally hosted content on behalf of a website when cross-domain restrictions (in place to protect websites from the likes of data theft) prohibit the browser from directly accessing the outside domains.
Malicious usages
Secondary market brokers
Secondary market brokers use web proxy servers to buy large stocks of limited products such as limited sneakers or tickets.
Implementations of proxies
Web proxy servers
Web proxies forward HTTP requests. The request from the client is the same as a regular HTTP request except the full URL is passed, instead of just the path.
GET https://en.wikipedia.org/wiki/Proxy_server HTTP/1.1
Proxy-Authorization: Basic encoded-credentials
Accept: text/html
This request is sent to the proxy server, the proxy makes the request specified and returns the response.
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Some web proxies allow the HTTP CONNECT method to set up forwarding of arbitrary data through the connection; a common policy is to only forward port 443 to allow HTTPS traffic.
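From the client's side, the difference between plain proxying and CONNECT tunnelling can be seen with the Python standard library alone. This is a sketch; the proxy address is an assumption, and many proxies require authentication that is omitted here:

import http.client

PROXY_HOST, PROXY_PORT = "proxy.example", 3128   # assumed proxy address

# Plain HTTP through the proxy: the full URL goes in the request line,
# exactly as in the example request shown above.
conn = http.client.HTTPConnection(PROXY_HOST, PROXY_PORT)
conn.request("GET", "http://en.wikipedia.org/wiki/Proxy_server", headers={"Accept": "text/html"})
print(conn.getresponse().status)

# HTTPS through the proxy: first ask the proxy to open a tunnel with CONNECT,
# then speak TLS through it (a common policy only allows this to port 443).
tunnel = http.client.HTTPSConnection(PROXY_HOST, PROXY_PORT)
tunnel.set_tunnel("en.wikipedia.org", 443)
tunnel.request("GET", "/wiki/Proxy_server")
print(tunnel.getresponse().status)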
Examples of web proxy servers include Apache (with mod_proxy or Traffic Server), HAProxy, IIS configured as proxy (e.g., with Application Request Routing), Nginx, Privoxy, Squid, Varnish (reverse proxy only), WinGate, Ziproxy, Tinyproxy, RabbIT and Polipo.
For clients, the problem of complex or multiple proxy-servers is solved by a client-server Proxy auto-config protocol (PAC file).
SOCKS proxy
SOCKS also forwards arbitrary data after a connection phase, and is similar to HTTP CONNECT in web proxies.
Transparent proxy
Also known as an intercepting proxy, inline proxy, or forced proxy, a transparent proxy intercepts normal application layer communication without requiring any special client configuration. Clients need not be aware of the existence of the proxy. A transparent proxy is normally located between the client and the Internet, with the proxy performing some of the functions of a gateway or router.
RFC 2616 (Hypertext Transfer Protocol—HTTP/1.1) offers standard definitions:
"A 'transparent proxy' is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification". "A 'non-transparent proxy' is a proxy that modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering".
TCP Intercept is a traffic filtering security feature that protects TCP servers from TCP SYN flood attacks, which are a type of denial-of-service attack. TCP Intercept is available for IP traffic only.
In 2009 a security flaw in the way that transparent proxies operate was published by Robert Auger, and the Computer Emergency Response Team issued an advisory listing dozens of affected transparent and intercepting proxy servers.
Purpose
Intercepting proxies are commonly used in businesses to enforce acceptable use policy, and to ease administrative overheads since no client browser configuration is required. This second reason however is mitigated by features such as Active Directory group policy, or DHCP and automatic proxy detection.
Intercepting proxies are also commonly used by ISPs in some countries to save upstream bandwidth and improve customer response times by caching. This is more common in countries where bandwidth is more limited (e.g. island nations) or must be paid for.
Issues
The diversion/interception of a TCP connection creates several issues. First, the original destination IP and port must somehow be communicated to the proxy. This is not always possible (e.g., where the gateway and proxy reside on different hosts). There is a class of cross-site attacks that depend on certain behavior of intercepting proxies that do not check or have access to information about the original (intercepted) destination. This problem may be resolved by using an integrated packet-level and application level appliance or software which is then able to communicate this information between the packet handler and the proxy.
Intercepting also creates problems for HTTP authentication, especially connection-oriented authentication such as NTLM, as the client browser believes it is talking to a server rather than a proxy. This can cause problems where an intercepting proxy requires authentication, then the user connects to a site that also requires authentication.
Finally, intercepting connections can cause problems for HTTP caches, as some requests and responses become uncacheable by a shared cache.
Implementation methods
In integrated firewall/proxy servers where the router/firewall is on the same host as the proxy, communicating original destination information can be done by any method, for example Microsoft TMG or WinGate.
Interception can also be performed using Cisco's WCCP (Web Cache Communication Protocol). This proprietary protocol resides on the router and is configured from the cache, allowing the cache to determine what ports and traffic are sent to it via transparent redirection from the router. This redirection can occur in one of two ways: GRE tunneling (OSI Layer 3) or MAC rewrites (OSI Layer 2).
Once traffic reaches the proxy machine itself interception is commonly performed with NAT (Network Address Translation). Such setups are invisible to the client browser, but leave the proxy visible to the web server and other devices on the internet side of the proxy. Recent Linux and some BSD releases provide TPROXY (transparent proxy) which performs IP-level (OSI Layer 3) transparent interception and spoofing of outbound traffic, hiding the proxy IP address from other network devices.
Detection
Several methods may be used to detect the presence of an intercepting proxy server:
By comparing the client's external IP address to the address seen by an external web server, or sometimes by examining the HTTP headers received by a server. A number of sites have been created to address this issue, by reporting the user's IP address as seen by the site back to the user on a web page. Google also returns the IP address as seen by the page if the user searches for "IP".
By comparing the result of online IP checkers when accessed using HTTPS vs HTTP, as most intercepting proxies do not intercept SSL. If there is suspicion of SSL being intercepted, one can examine the certificate associated with any secure web site, the root certificate should indicate whether it was issued for the purpose of intercepting.
By comparing the sequence of network hops reported by a tool such as traceroute for a proxied protocol such as http (port 80) with that for a non-proxied protocol such as SMTP (port 25).
By attempting to make a connection to an IP address at which there is known to be no server. The proxy will accept the connection and then attempt to proxy it on. When the proxy finds no server to accept the connection it may return an error message or simply close the connection to the client. This difference in behavior is simple to detect (see the sketch after this list). For example, most web browsers will generate a browser created error page in the case where they cannot connect to an HTTP server but will return a different error in the case where the connection is accepted and then closed.
By serving the end-user specially programmed Adobe Flash SWF applications or Sun Java applets that send HTTP calls back to their server.
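The connection-probe method from the list above can be sketched in a few lines. The probe target is a TEST-NET address reserved for documentation, so no real server should answer; a direct connection should fail or time out, while an intercepting proxy will often accept the TCP handshake itself and only fail later:

import socket

def connection_accepted(host="192.0.2.1", port=80, timeout=3):
    # Probe an address where no server should be listening.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True        # something answered -- likely an intercepting proxy
    except OSError:
        return False           # refused or timed out -- no interception on this path

print("Intercepting proxy suspected:", connection_accepted())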
CGI proxy
A CGI web proxy accepts target URLs using a Web form in the user's browser window, processes the request, and returns the results to the user's browser. Consequently, it can be used on a device or network that does not allow "true" proxy settings to be changed. The first recorded CGI proxy, named "rover" at the time but renamed in 1998 to "CGIProxy", was developed by American computer scientist James Marshall in early 1996 for an article in "Unix Review" by Rich Morin.
The majority of CGI proxies are powered by one of CGIProxy (written in the Perl language), Glype (written in the PHP language), or PHProxy (written in the PHP language). As of April 2016, CGIProxy has received about 2 million downloads, Glype has received almost a million downloads, whilst PHProxy still receives hundreds of downloads per week. Despite waning in popularity due to VPNs and other privacy methods, there are still a few hundred CGI proxies online.
Some CGI proxies were set up for purposes such as making websites more accessible to disabled people, but have since been shut down due to excessive traffic, usually caused by a third party advertising the service as a means to bypass local filtering. Since many of these users don't care about the collateral damage they are causing, it became necessary for organizations to hide their proxies, disclosing the URLs only to those who take the trouble to contact the organization and demonstrate a genuine need.
Suffix proxy
A suffix proxy allows a user to access web content by appending the name of the proxy server to the URL of the requested content (e.g. "en.wikipedia.org.SuffixProxy.com"). Suffix proxy servers are easier to use than regular proxy servers but they do not offer high levels of anonymity and their primary use is for bypassing web filters. However, this is rarely used due to more advanced web filters.
Tor onion proxy software
Tor is a system intended to provide online anonymity. Tor client software routes Internet traffic through a worldwide volunteer network of servers in order to conceal a user's computer location or usage from someone conducting network surveillance or traffic analysis. Using Tor makes tracing Internet activity more difficult, and is intended to protect users' personal freedom and privacy.
"Onion routing" refers to the layered nature of the encryption service: The original data are encrypted and re-encrypted multiple times, then sent through successive Tor relays, each one of which decrypts a "layer" of encryption before passing the data on to the next relay and ultimately the destination. This reduces the possibility of the original data being unscrambled or understood in transit.
I2P anonymous proxy
The I2P anonymous network ('I2P') is a proxy network aiming at online anonymity. It implements garlic routing, which is an enhancement of Tor's onion routing. I2P is fully distributed and works by encrypting all communications in various layers and relaying them through a network of routers run by volunteers in various locations. By keeping the source of the information hidden, I2P offers censorship resistance. The goals of I2P are to protect users' personal freedom, privacy, and ability to conduct confidential business.
Each user of I2P runs an I2P router on their computer (node). The I2P router takes care of finding other peers and building anonymizing tunnels through them. I2P provides proxies for all protocols (HTTP, IRC, SOCKS, ...).
Comparison to network address translators
The proxy concept refers to a layer 7 application in the OSI reference model. Network address translation (NAT) is similar to a proxy but operates in layer 3.
In the client configuration of layer-3 NAT, configuring the gateway is sufficient. However, for the client configuration of a layer-7 proxy, the destination of the packets that the client generates must always be the proxy server (layer 7); the proxy server then reads each packet and finds out the true destination.
Because NAT operates at layer 3, it is less resource-intensive than the layer-7 proxy, but also less flexible. In comparing these two technologies, one may encounter the term 'transparent firewall'. A transparent firewall uses the advantages of a layer-7 proxy without the knowledge of the client. The client presumes that the gateway is a layer-3 NAT and has no idea about the inside of the packet, but through this method the layer-3 packets are sent to the layer-7 proxy for investigation.
DNS proxy
A DNS proxy server takes DNS queries from a (usually local) network and forwards them to an Internet Domain Name Server. It may also cache DNS records.
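The forward-and-cache behaviour can be sketched with a few lines of UDP handling. This is an illustration only: the upstream resolver address and listening port are assumptions, and a real DNS proxy must also honour record TTLs and handle truncation and TCP fallback:

import socket

UPSTREAM = ("8.8.8.8", 53)        # assumed upstream resolver
cache = {}                        # question bytes -> answer bytes (after the ID)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 5353))  # unprivileged port for the sketch

while True:
    query, client = server.recvfrom(512)
    key = query[12:]              # skip the 12-byte header; key on the question
    if key in cache:
        # Reuse the cached answer, but echo the client's transaction ID.
        server.sendto(query[:2] + cache[key], client)
        continue
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as upstream:
        upstream.settimeout(3)
        upstream.sendto(query, UPSTREAM)
        answer, _ = upstream.recvfrom(4096)
    cache[key] = answer[2:]       # store everything after the transaction ID
    server.sendto(answer, client)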
Proxifiers
Some client programs "SOCKS-ify" requests, which allows adaptation of any networked software to connect to external networks via certain types of proxy servers (mostly SOCKS).
Residential proxy
A residential proxy is an intermediary that uses a real IP address provided by an Internet Service Provider (ISP) with physical devices such as mobiles and computers of end users. Instead of connecting directly to a server, residential proxy users connect to the target through residential IP addresses, and the target then identifies them as organic internet users. No tracking tool can identify the reallocation of the user. A residential proxy can send any number of concurrent requests, and its IP addresses are directly related to a specific region. Unlike regular residential proxies, which hide the user's real IP address behind another IP address, rotating residential proxies, also known as backconnect proxies, conceal the user's real IP address behind a pool of proxies. These proxies switch between themselves with every session or at regular intervals.
See also
References
External links
Computer networking
Network performance
Internet architecture
Internet privacy
Computer security software |
79836 | https://en.wikipedia.org/wiki/Charles%20Wheatstone | Charles Wheatstone | Sir Charles Wheatstone FRS FRSE DCL LLD (6 February 1802 – 19 October 1875), was an English scientist and inventor of many scientific breakthroughs of the Victorian era, including the English concertina, the stereoscope (a device for displaying three-dimensional images), and the Playfair cipher (an encryption technique). However, Wheatstone is best known for his contributions in the development of the Wheatstone bridge, originally invented by Samuel Hunter Christie, which is used to measure an unknown electrical resistance, and as a major figure in the development of telegraphy.
Life
Charles Wheatstone was born in Barnwood, Gloucestershire. His father, W. Wheatstone, was a music-seller in the town, who moved to 128 Pall Mall, London, four years later, becoming a teacher of the flute. Charles, the second son, went to a village school, near Gloucester, and afterwards to several institutions in London. One of them was in Kennington, and kept by a Mrs. Castlemaine, who was astonished at his rapid progress. From another he ran away, but was captured at Windsor, not far from the theatre of his practical telegraph. As a boy he was very shy and sensitive, liking well to retire into an attic, without any other company than his own thoughts.
When he was about fourteen years old he was apprenticed to his uncle and namesake, a maker and seller of musical instruments at 436 Strand, London; but he showed little taste for handicraft or business, and loved better to study books. His father encouraged him in this, and finally took him out of the uncle's charge.
At the age of fifteen, Wheatstone translated French poetry, and wrote two songs, one of which was given to his uncle, who published it without knowing it as his nephew's composition. Some lines of his on the lyre became the motto of an engraving by Bartolozzi. He often visited an old book-stall in the vicinity of Pall Mall, which was then a dilapidated and unpaved thoroughfare. Most of his pocket-money was spent in purchasing the books which had taken his fancy, whether fairy tales, history, or science.
One day, to the surprise of the bookseller, he coveted a volume on the discoveries of Volta in electricity, but not having the price, he saved his pennies and secured the volume. It was written in French, and so he was obliged to save again, until he could buy a dictionary. Then he began to read the volume, and, with the help of his elder brother, William, to repeat the experiments described in it, with a home-made battery, in the scullery behind his father's house. In constructing the battery, the boy philosophers ran short of money to procure the requisite copper-plates. They had only a few copper coins left. A happy thought occurred to Charles, who was the leading spirit in these researches, 'We must use the pennies themselves,' said he, and the battery was soon complete.
At Christchurch, Marylebone, on 12 February 1847, Wheatstone was married to Emma West. She was the daughter of a Taunton tradesman, and of handsome appearance. She died in 1866, leaving a family of five young children to his care. His domestic life was quiet and uneventful.
Though silent and reserved in public, Wheatstone was a clear and voluble talker in private, if taken on his favourite studies, and his small but active person, his plain but intelligent countenance, was full of animation. Sir Henry Taylor tells us that he once observed Wheatstone at an evening party in Oxford earnestly holding forth to Lord Palmerston on the capabilities of his telegraph. 'You don't say so!' exclaimed the statesman. 'I must get you to tell that to the Lord Chancellor.' And so saying, he fastened the electrician on Lord Westbury, and effected his escape. A reminiscence of this interview may have prompted Palmerston to remark that a time was coming when a minister might be asked in Parliament if war had broken out in India, and would reply, 'Wait a minute; I'll just telegraph to the Governor-General, and let you know.'
Wheatstone was knighted in 1868, after his completion of the automatic telegraph. He had previously been made a Chevalier of the Legion of Honour. Some thirty-four distinctions and diplomas of home or foreign societies bore witness to his scientific reputation. Since 1836 he had been a Fellow of the Royal Society, and in 1859 he was elected a foreign member of the Royal Swedish Academy of Sciences, and in 1873 a Foreign Associate of the French Academy of Sciences. The same year he was awarded the Ampere Medal by the French Society for the Encouragement of National Industry. In 1875, he was created an honorary member of the Institution of Civil Engineers. He was a D.C.L. of Oxford and an LL.D. of Cambridge.
While on a visit to Paris during the autumn of 1875, and engaged in perfecting his receiving instrument for submarine cables, he caught a cold, which produced inflammation of the lungs, an illness from which he died in Paris, on 19 October 1875. A memorial service was held in the Anglican Chapel, Paris, and attended by a deputation of the Academy. His remains were taken to his home in Park Crescent, London, (marked by a blue plaque today) and buried in Kensal Green Cemetery.
Music instruments and acoustics
In September 1821, Wheatstone brought himself into public notice by exhibiting the 'Enchanted Lyre,' or 'Acoucryptophone,' at a music shop at Pall Mall and in the Adelaide Gallery. It consisted of a mimic lyre hung from the ceiling by a cord, and emitting the strains of several instruments – the piano, harp, and dulcimer. In reality it was a mere sounding box, and the cord was a steel rod that conveyed the vibrations of the music from the several instruments which were played out of sight and ear-shot. At this period Wheatstone made numerous experiments on sound and its transmission. Some of his results are preserved in Thomson's Annals of Philosophy for 1823.
He recognised that sound is propagated by waves or oscillations of the atmosphere, as light was then believed to be by undulations of the luminiferous ether. Water, and solid bodies, such as glass, or metal, or sonorous wood, convey the modulations with high velocity, and he conceived the plan of transmitting sound-signals, music, or speech to long distances by this means. He estimated that sound would travel through solid rods, and proposed to telegraph from London to Edinburgh in this way. He even called his arrangement a 'telephone.' (Robert Hooke, in his Micrographia, published in 1667, writes: 'I can assure the reader that I have, by the help of a distended wire, propagated the sound to a very considerable distance in an instant, or with as seemingly quick a motion as that of light.' Nor was it essential the wire should be straight; it might be bent into angles. This property is the basis of the mechanical or lover's telephone, said to have been known to the Chinese many centuries ago. Hooke also considered the possibility of finding a way to quicken our powers of hearing.)
A writer in the Repository of Arts for 1 September 1821, in referring to the 'Enchanted Lyre,' beholds the prospect of an opera being performed at the King's Theatre, and enjoyed at the Hanover Square Rooms, or even at the Horns Tavern, Kennington. The vibrations are to travel through underground conductors, like gas in pipes.
And if music be capable of being thus conducted,' he observes, 'perhaps the words of speech may be susceptible of the same means of propagation. The eloquence of counsel, the debates of Parliament, instead of being read the next day only, – But we shall lose ourselves in the pursuit of this curious subject.
Besides transmitting sounds to a distance, Wheatstone devised a simple instrument for augmenting feeble sounds, to which he gave the name of 'Microphone.' It consisted of two slender rods, which conveyed the mechanical vibrations to both ears, and is quite different from the electrical microphone of Professor Hughes.
In 1823, his uncle, the musical instrument maker, died, and Wheatstone, with his elder brother, William, took over the business. Charles had no great liking for the commercial part, but his ingenuity found a vent in making improvements on the existing instruments, and in devising philosophical toys. He also invented instruments of his own. One of the most famous was the Wheatstone concertina, a six-sided instrument with 64 keys providing simple chromatic fingerings. The English concertina became increasingly famous throughout his lifetime; however, it did not reach its peak of popularity until the early 20th century.
In 1827, Wheatstone introduced his 'kaleidophone', a device for rendering the vibrations of a sounding body apparent to the eye. It consists of a metal rod, carrying at its end a silvered bead, which reflects a 'spot' of light. As the rod vibrates the spot is seen to describe complicated figures in the air, like a spark whirled about in the darkness. His photometer was probably suggested by this appliance. It enables two lights to be compared by the relative brightness of their reflections in a silvered bead, which describes a narrow ellipse, so as to draw the spots into parallel lines.
In 1828, Wheatstone improved the German wind instrument, called the Mundharmonika, until it became the popular concertina, patented on 19 December 1829. The portable harmonium is another of his inventions, which gained a prize medal at the Great Exhibition of 1851. He also improved the speaking machine of De Kempelen, and endorsed the opinion of Sir David Brewster, that before the end of this century a singing and talking apparatus would be among the conquests of science.
In 1834, Wheatstone, who had won a name for himself, was appointed to the Chair of Experimental Physics in King's College London. His first course of lectures on sound was a complete failure, owing to his abhorrence of public speaking. In the rostrum he was tongue-tied and incapable, sometimes turning his back on the audience and mumbling to the diagrams on the wall. In the laboratory he felt himself at home, and ever after confined his duties mostly to demonstration.
Velocity of electricity
He achieved renown by a great experiment made in 1834 – the measurement of the velocity of electricity in a wire. He cut the wire at the middle, to form a gap which a spark might leap across, and connected its ends to the poles of a Leyden jar filled with electricity. Three sparks were thus produced, one at each end of the wire, and another at the middle. He mounted a tiny mirror on the works of a watch, so that it revolved at a high velocity, and observed the reflections of his three sparks in it. The points of the wire were so arranged that if the sparks were instantaneous, their reflections would appear in one straight line; but the middle one was seen to lag behind the others, because it was an instant later. The electricity had taken a certain time to travel from the ends of the wire to the middle. This time was found by measuring the amount of lag, and comparing it with the known velocity of the mirror. Having got the time, he had only to compare that with the length of half the wire, and he could find the velocity of electricity. His results gave a calculated velocity of 288,000 miles per second, i.e. faster than what we now know to be the speed of light (about 186,000 miles per second), but they were nonetheless an interesting approximation.
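The arithmetic of the experiment is straightforward once the mirror speed and the observed displacement of the middle spark are known. The figures in the sketch below are invented for illustration and are not Wheatstone's published measurements; they merely show how the lag converts into a velocity:

# Invented illustrative figures -- not Wheatstone's actual values.
mirror_rev_per_s = 800            # rotation rate of the small mirror
displacement_deg = 0.5            # angular offset of the middle spark's image
half_wire_miles = 0.25            # distance from an end spark to the middle spark

# A reflected image moves through twice the mirror's rotation angle,
# so the mirror itself turned displacement_deg / 2 during the lag.
lag_seconds = (displacement_deg / 2) / (360 * mirror_rev_per_s)
velocity = half_wire_miles / lag_seconds
print(f"lag of about {lag_seconds:.2e} s gives roughly {velocity:,.0f} miles per second")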
It was already appreciated by some scientists that the "velocity" of electricity was dependent on the properties of the conductor and its surroundings. Francis Ronalds had observed signal retardation in his buried electric telegraph cable (but not his airborne line) in 1816 and outlined its cause to be induction. Wheatstone witnessed these experiments as a youth, which were apparently a stimulus for his own research in telegraphy. Decades later, after the telegraph had been commercialised, Michael Faraday described how the velocity of an electric field in a submarine wire, coated with insulator and surrounded with water, is greatly reduced.
Wheatstone's device of the revolving mirror was afterwards employed by Léon Foucault and Hippolyte Fizeau to measure the velocity of light.
Spectroscopy
Wheatstone and others also contributed to early spectroscopy through the discovery and exploitation of spectral emission lines.
As John Munro wrote in 1891, "In 1835, at the Dublin meeting of the British Association, Wheatstone showed that when metals were volatilised in the electric spark, their light, examined through a prism, revealed certain rays which were characteristic of them. Thus the kind of metals which formed the sparking points could be determined by analysing the light of the spark. This suggestion has been of great service in spectrum analysis, and as applied by Robert Bunsen, Gustav Robert Kirchhoff, and others, has led to the discovery of several new elements, such as rubidium and thallium, as well as increasing our knowledge of the heavenly bodies."
Telegraph
Wheatstone abandoned his idea of transmitting intelligence by the mechanical vibration of rods, and took up the electric telegraph. In 1835 he lectured on the system of Baron Schilling, and declared that the means were already known by which an electric telegraph could be made of great service to the world. He made experiments with a plan of his own, and not only proposed to lay an experimental line across the Thames, but to establish it on the London and Birmingham Railway. Before these plans were carried out, however, he received a visit from Mr William Fothergill Cooke at his house in Conduit Street on 27 February 1837, which had an important influence on his future.
Cooperation with Cooke
Mr. Cooke was an officer in the Madras Army, who, being home on leave, was attending some lectures on anatomy at the University of Heidelberg, where, on 6 March 1836, he witnessed a demonstration with the telegraph of professor Georg Wilhelm Munke, and was so impressed with its importance, that he forsook his medical studies and devoted all his efforts to the work of introducing the telegraph. He returned to London soon after, and was able to exhibit a telegraph with three needles in January 1837. Feeling his want of scientific knowledge, he consulted Michael Faraday and Peter Mark Roget (then secretary of the Royal Society), the latter of whom sent him to Wheatstone.
At a second interview, Mr. Cooke told Wheatstone of his intention to bring out a working telegraph, and explained his method. Wheatstone, according to his own statement, remarked to Cooke that the method would not act, and produced his own experimental telegraph. Finally, Cooke proposed that they should enter into a partnership, but Wheatstone was at first reluctant to comply. He was a well-known man of science, and had meant to publish his results without seeking to make capital of them. Cooke, on the other hand, declared that his sole object was to make a fortune from the scheme. In May they agreed to join their forces, Wheatstone contributing the scientific, and Cooke the administrative talent. The deed of partnership was dated 19 November 1837. A joint patent was taken out for their inventions, including the five-needle telegraph of Wheatstone, and an alarm worked by a relay, in which the current, by dipping a needle into mercury, completed a local circuit, and released the detent of a clockwork.
The five-needle telegraph, which was mainly, if not entirely, due to Wheatstone, was similar to that of Schilling, and based on the principle enunciated by André-Marie Ampère – that is to say, the current was sent into the line by completing the circuit of the battery with a make and break key, and at the other end it passed through a coil of wire surrounding a magnetic needle free to turn round its centre. According as one pole of the battery or the other was applied to the line by means of the key, the current deflected the needle to one side or the other. There were five separate circuits actuating five different needles. The latter were pivoted in rows across the middle of a dial shaped like a diamond, and having the letters of the alphabet arranged upon it in such a way that a letter was literally pointed out by the current deflecting two of the needles towards it.
Early installations
An experimental line, with a sixth return wire, was run between the Euston terminus and Camden Town station of the London and North Western Railway on 25 July 1837. The actual distance was only one and a half miles (2.4 km), but spare wire had been inserted in the circuit to increase its length. It was late in the evening before the trial took place. Mr Cooke was in charge at Camden Town, while Mr Robert Stephenson and other gentlemen looked on; and Wheatstone sat at his instrument in a dingy little room, lit by a tallow candle, near the booking-office at Euston. Wheatstone sent the first message, to which Cooke replied, and 'never' said Wheatstone, 'did I feel such a tumultuous sensation before, as when, all alone in the still room, I heard the needles click, and as I spelled the words, I felt all the magnitude of the invention pronounced to be practicable beyond cavil or dispute.'
In spite of this trial, however, the directors of the railway treated the 'new-fangled' invention with indifference, and requested its removal. In July 1839 it was favoured by the Great Western Railway, and a line was erected from the Paddington station terminus to West Drayton railway station. Part of the wire was laid underground at first, but subsequently all of it was raised on posts along the line. Their circuit was eventually extended in 1841, and was publicly exhibited at Paddington as a marvel of science, which could transmit fifty signals a distance of 280,000 miles per minute (7,500 km/s). The price of admission was a shilling (£0.05), and in 1844 one fascinated observer recorded the following:
"It is perfect from the terminus of the Great Western as far as
Slough – that is, eighteen miles; the wires being in some places
underground in tubes, and in others high up in the air, which last,
he says, is by far the best plan. We asked if the weather did not
affect the wires, but he said not; a violent thunderstorm might
ring a bell, but no more. We were taken into a small room (we
being Mrs Drummond, Miss Philips, Harry Codrington and
myself – and afterwards the Milmans and Mr Rich) where were
several wooden cases containing different sorts of telegraphs.
In one sort every word was spelt, and as each letter was placed in turn
in a particular position, the machinery caused the electric fluid to run
down the line, where it made the letter show itself at Slough, by what
machinery he could not undertake to explain. After each word came a
sign from Slough, signifying "I understand", coming certainly in less
than one second from the end of the word......Another prints the messages
it brings, so that if no-one attended to the bell,....the message would not
be lost. This is effected by the electrical fluid causing a little hammer to strike the
letter which presents itself, the letter which is raised hits some manifold
writing paper (a new invention, black paper which, if pressed, leaves an
indelible black mark), by which means the impression is left on white paper
beneath. This was the most ingenious of all, and apparently Mr. Wheatstone's
favourite; he was very good-natured in explaining but
understands it so well himself that he cannot feel how little we
know about it, and goes too fast for such ignorant folk to follow
him in everything. Mrs Drummond told me he is wonderful for
the rapidity with which he thinks and his power of invention; he
invents so many things that he cannot put half his ideas into
execution, but leaves them to be picked up and used by others,
who get the credit of them."
Public attention and success
The public took to the new invention after the capture of the murderer John Tawell, who in 1845, had become the first person to be arrested as the result of telecommunications technology. In the same year, Wheatstone introduced two improved forms of the apparatus, namely, the 'single' and the 'double' needle instruments, in which the signals were made by the successive deflections of the needles. Of these, the single-needle instrument, requiring only one wire, is still in use.
The development of the telegraph may be gathered from two facts. In 1855, the death of the Emperor Nicholas at St. Petersburg, about one o'clock in the afternoon, was announced in the House of Lords a few hours later. The result of The Oaks of 1890 was received in New York fifteen seconds after the horses passed the winning-post.
Differences with Cooke
In 1841 a difference arose between Cooke and Wheatstone as to the share of each in the honour of inventing the telegraph. The question was submitted to the arbitration of the famous engineer, Marc Isambard Brunel, on behalf of Cooke, and Professor Daniell, of King's College, the inventor of the Daniell battery, on the part of Wheatstone. They awarded to Cooke the credit of having introduced the telegraph as a useful undertaking which promised to be of national importance, and to Wheatstone that of having by his researches prepared the public to receive it. They concluded with the words: 'It is to the united labours of two gentlemen so well qualified for mutual assistance that we must attribute the rapid progress which this important invention has made during five years since they have been associated.' The decision, however vague, pronounces the needle telegraph a joint production. If it had mainly been invented by Wheatstone, it was chiefly introduced by Cooke. Their respective shares in the undertaking might be compared to that of an author and his publisher, but for the fact that Cooke himself had a share in the actual work of invention.
Further work on telegraphs
From 1836–7 Wheatstone had thought a good deal about submarine telegraphs, and in 1840 he gave evidence before the Railway Committee of the House of Commons on the feasibility of the proposed line from Dover to Calais. He had even designed the machinery for making and laying the cable. In the autumn of 1844, with the assistance of Mr. J. D. Llewellyn, he submerged a length of insulated wire in Swansea Bay, and signalled through it from a boat to the Mumbles Lighthouse. Next year he suggested the use of gutta-percha for the coating of the intended wire across the English Channel.
In 1840 Wheatstone had patented an alphabetical telegraph, or, 'Wheatstone A B C instrument,' which moved with a step-by-step motion, and showed the letters of the message upon a dial. The same principle was used in his type-printing telegraph, patented in 1841. This was the first apparatus which printed a telegram in type. It was worked by two circuits, and as the type revolved a hammer, actuated by the current, pressed the required letter on the paper.
The introduction of the telegraph had so far advanced that, on 2 September 1845, the Electric Telegraph Company was registered, and Wheatstone, by his deed of partnership with Cooke, received a sum of £33,000 for the use of their joint inventions.
In 1859 Wheatstone was appointed by the Board of Trade to report on the subject of the Atlantic cables, and in 1864 he was one of the experts who advised the Atlantic Telegraph Company on the construction of the successful lines of 1865 and 1866.
In 1870 the electric telegraph lines of the United Kingdom, worked by different companies, were transferred to the Post Office, and placed under Government control.
Wheatstone further invented the automatic transmitter, in which the signals of the message are first punched out on a strip of paper, which is then passed through the sending-key, and controls the signal currents. By substituting a mechanism for the hand in sending the message, he was able to telegraph about 100 words a minute, or five times the ordinary rate. In the Postal Telegraph service this apparatus is employed for sending Press telegrams, and it has recently been so much improved, that messages are now sent from London to Bristol at a speed of 600 words a minute, and even of 400 words a minute between London and Aberdeen. On the night of 8 April 1886, when Mr. Gladstone introduced his Bill for Home Rule in Ireland, no fewer than 1,500,000 words were dispatched from the central station at St. Martin's-le-Grand by 100 Wheatstone transmitters. The plan of sending messages by a running strip of paper which actuates the key was originally patented by Bain in 1846; but Wheatstone, aided by Mr. Augustus Stroh, an accomplished mechanician, and an able experimenter, was the first to bring the idea into successful operation. This system is often referred to as the Wheatstone Perforator and is the forerunner of the stock market ticker tape.
Optics
Stereopsis was first described by Wheatstone in 1838. In 1840 he was awarded the Royal Medal of the Royal Society for his explanation of binocular vision, a research which led him to make stereoscopic drawings and construct the stereoscope. He showed that our impression of solidity is gained by the combination in the mind of two separate pictures of an object taken by both of our eyes from different points of view. Thus, in the stereoscope, an arrangement of lenses or mirrors, two photographs of the same object taken from different points are so combined as to make the object stand out with a solid aspect. Sir David Brewster improved the stereoscope by dispensing with the mirrors, and bringing it into its existing form with lenses.
The 'pseudoscope' (Wheatstone coined the term from the Greek ψευδής σκοπεῖν) was introduced in 1852, and is in some sort the reverse of the stereoscope, since it causes a solid object to seem hollow, and a nearer one to be farther off; thus, a bust appears to be a mask, and a tree growing outside of a window looks as if it were growing inside the room. Its purpose was to test his theory of stereo vision and for investigations into what would now be called experimental psychology.
Measuring time
In 1840, Wheatstone introduced his chronoscope, for measuring minute intervals of time, which was used in determining the speed of a bullet or the passage of a star. In this apparatus an electric current actuated an electro-magnet, which noted the instant of an occurrence by means of a pencil on a moving paper. It is said to have been capable of distinguishing 1/7300 part of a second (137 microseconds), and of measuring the time a body took to fall from a height of one inch (25 mm).
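As a rough check of these figures (a back-of-the-envelope calculation, not part of the original account), the quoted resolution and the free-fall time from one inch work out as:

```latex
\frac{1}{7300}\ \text{s} \approx 137\ \mu\text{s},
\qquad
t = \sqrt{\frac{2h}{g}} = \sqrt{\frac{2 \times 0.0254\ \text{m}}{9.81\ \text{m/s}^2}} \approx 0.072\ \text{s},
```

so such a fall spans several hundred of the instrument's smallest distinguishable intervals.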
On 26 November 1840, he exhibited his electro-magnetic clock in the library of the Royal Society, and propounded a plan for distributing the correct time from a standard clock to a number of local timepieces. The circuits of these were to be electrified by a key or contact-maker actuated by the arbour of the standard, and their hands corrected by electro-magnetism. The following January Alexander Bain took out a patent for an electro-magnetic clock, and he subsequently charged Wheatstone with appropriating his ideas. It appears that Bain worked as a mechanist to Wheatstone from August to December 1840, and he asserted that he had communicated the idea of an electric clock to Wheatstone during that period; but Wheatstone maintained that he had experimented in that direction during May. Bain further accused Wheatstone of stealing his idea of the electro-magnetic printing telegraph; but Wheatstone showed that the instrument was only a modification of his own electro-magnetic telegraph.
In 1840, Alexander Bain mentioned to the Mechanics Magazine editor his financial problems. The editor introduced him to Sir Charles Wheatstone. Bain demonstrated his models to Wheatstone, who, when asked for his opinion, said "Oh, I shouldn't bother to develop these things any further! There's no future in them." Three months later Wheatstone demonstrated an electric clock to the Royal Society, claiming it was his own invention. However, Bain had already applied for a patent for it. Wheatstone tried to block Bain's patents, but failed. When Wheatstone organised an Act of Parliament to set up the Electric Telegraph Company, the House of Lords summoned Bain to give evidence, and eventually compelled the company to pay Bain £10,000 and give him a job as manager, causing Wheatstone to resign.
Polar clock
One of Wheatstone's most ingenious devices was the 'Polar clock,' exhibited at the meeting of the British Association in 1848. It is based on the fact discovered by Sir David Brewster, that the light of the sky is polarised in a plane at an angle of ninety degrees from the position of the sun. It follows that by discovering that plane of polarisation, and measuring its azimuth with respect to the north, the position of the sun, although beneath the horizon, could be determined, and the apparent solar time obtained.
The clock consisted of a spyglass, having a Nicol (double-image) prism for an eyepiece, and a thin plate of selenite for an object-glass. When the tube was directed to the North Pole—that is, parallel to the Earth's axis—and the prism of the eyepiece turned until no colour was seen, the angle of turning, as shown by an index moving with the prism over a graduated limb, gave the hour of day. The device is of little service in a country where watches are reliable; but it formed part of the equipment of the 1875–1876 North Polar expedition commanded by Captain Nares.
Wheatstone bridge
In 1843 Wheatstone communicated an important paper to the Royal Society, entitled 'An Account of Several New Processes for Determining the Constants of a Voltaic Circuit.' It contained an exposition of the well known balance for measuring the electrical resistance of a conductor, which still goes by the name of Wheatstone's Bridge or balance, although it was first devised by Samuel Hunter Christie, of the Royal Military Academy, Woolwich, who published it in the Philosophical Transactions for 1833. The method was neglected until Wheatstone brought it into notice.
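In modern notation (the standard textbook statement rather than Wheatstone's or Christie's own formulation), if the bridge consists of two voltage dividers (R_1 in series with R_2, and R_3 in series with R_x) connected across the same source, the galvanometer joining their midpoints reads zero exactly when

```latex
\frac{R_1}{R_2} = \frac{R_3}{R_x}
\qquad\Longleftrightarrow\qquad
R_x = \frac{R_2\,R_3}{R_1},
```

so an unknown resistance can be computed from three known resistances once the bridge is balanced.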
His paper abounds with simple and practical formulae for the calculation of currents and resistances by the law of Ohm. He introduced a unit of resistance, namely, a foot of copper wire weighing one hundred grains (6.5 g), and showed how it might be applied to measure the length of wire by its resistance. He was awarded a medal for his paper by the Society. The same year he invented an apparatus which enabled the reading of a thermometer or a barometer to be registered at a distance by means of an electric contact made by the mercury. A sound telegraph, in which the signals were given by the strokes of a bell, was also patented by Cooke and Wheatstone in May of that year.
Cryptography
Wheatstone's remarkable ingenuity was also displayed in the invention of ciphers. He was responsible for the then unusual Playfair cipher, named after his friend Lord Playfair. It was used by the militaries of several nations through at least World War I, and is known to have been used during World War II by British intelligence services.
It was initially resistant to cryptanalysis, but methods were eventually developed to break it. He also became involved in the interpretation of cipher manuscripts in the British Museum. He devised a cryptograph or machine for turning a message into cipher which could only be interpreted by putting the cipher into a corresponding machine adjusted to decrypt it.
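The digraph-substitution rule that defines the Playfair cipher mentioned above can be sketched in a few lines of Python. This is a minimal illustration of the commonly published rules (a 5x5 key square with I and J merged, doubled letters and odd-length messages padded with X), not a reconstruction of Wheatstone's own notes or of his cryptograph machine:

```python
def playfair_square(key):
    """Build the 5x5 Playfair key square (I and J share one cell)."""
    seen, square = set(), []
    for ch in (key + "ABCDEFGHIKLMNOPQRSTUVWXYZ").upper():
        ch = "I" if ch == "J" else ch
        if ch.isalpha() and ch not in seen:
            seen.add(ch)
            square.append(ch)
    return square  # 25 letters in row-major order

def playfair_encrypt(plaintext, key):
    sq = playfair_square(key)
    pos = {ch: (i // 5, i % 5) for i, ch in enumerate(sq)}
    # Keep letters only, merge J into I, split doubled letters and pad with X.
    letters = ["I" if c == "J" else c for c in plaintext.upper() if c.isalpha()]
    pairs, i = [], 0
    while i < len(letters):
        a = letters[i]
        b = letters[i + 1] if i + 1 < len(letters) else "X"
        if a == b:
            b, i = "X", i + 1
        else:
            i += 2
        pairs.append((a, b))
    out = []
    for a, b in pairs:
        (ra, ca), (rb, cb) = pos[a], pos[b]
        if ra == rb:          # same row: replace each letter with the one to its right
            out += [sq[ra * 5 + (ca + 1) % 5], sq[rb * 5 + (cb + 1) % 5]]
        elif ca == cb:        # same column: replace each letter with the one below
            out += [sq[((ra + 1) % 5) * 5 + ca], sq[((rb + 1) % 5) * 5 + cb]]
        else:                 # rectangle rule: swap the two column indices
            out += [sq[ra * 5 + cb], sq[rb * 5 + ca]]
    return "".join(out)

print(playfair_encrypt("Hide the gold", "PLAYFAIR EXAMPLE"))
```

Decryption applies the inverse of the same three rules (shifting left or up instead of right or down).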
As an amateur mathematician, Wheatstone published a mathematical proof in 1854 (see Cube (algebra)).
Electrical generators
In 1840, Wheatstone brought out his magneto-electric machine for generating continuous currents.
On 4 February 1867, he published the principle of reaction in the dynamo-electric machine by a paper to the Royal Society; but Mr. C. W. Siemens had communicated the identical discovery ten days earlier, and both papers were read on the same day.
It afterwards appeared that Werner von Siemens, Samuel Alfred Varley, and Wheatstone had independently arrived at the principle within a few months of each other. Varley patented it on 24 December 1866; Siemens called attention to it on 17 January 1867; and Wheatstone exhibited it in action at the Royal Society on the above date.
Disputes over invention
Wheatstone was involved in various disputes with other scientists throughout his life regarding his role in different technologies and appeared at times to take more credit than he was due. As well as William Fothergill Cooke, Alexander Bain and David Brewster, mentioned above, these also included Francis Ronalds at the Kew Observatory. Wheatstone was erroneously believed by many to have created the atmospheric electricity observing apparatus that Ronalds invented and developed at the observatory in the 1840s and also to have installed the first automatic recording meteorological instruments there (see for example, Howarth, p158).
Personal life
Wheatstone married Emma West, spinster, a daughter of John Hooke West, deceased, at Christ Church, Marylebone, on 12 February 1847. The marriage was by licence.
See also
William Fothergill Cooke
Oliver Heaviside
References
Further reading
The Scientific Papers of Sir Charles Wheatstone (1879)
This article incorporates text from Heroes of the Telegraph by John Munro (1849–1930), published in 1891 and now in the public domain.
External links
Biographical material at Pandora Web Archive
Biographical sketch at Institute for Learning Technologies
Gravesite in Kensal Green, London
Charles Wheatstone at Cyber Philately
Charles Wheatstone at Open Library
English electrical engineers
English physicists
Optical physicists
English inventors
Concertina makers
People associated with electricity
Pre-computer cryptographers
Academics of King's College London
Fellows of the Royal Society
Members of the Royal Swedish Academy of Sciences
Recipients of the Pour le Mérite (civil class)
Recipients of the Copley Medal
People from Gloucester
1802 births
1875 deaths
British cryptographers
Royal Medal winners
Telegraph engineers and inventors
Chevaliers of the Légion d'honneur
Spectroscopists
Knights Bachelor
Dynamic DNS
Dynamic DNS (DDNS) is a method of automatically updating a name server in the Domain Name System (DNS), often in real time, with the active DDNS configuration of its configured hostnames, addresses or other information.
The term is used to describe two different concepts. The first is "dynamic DNS updating" which refers to systems that are used to update traditional DNS records without manual editing. These mechanisms are explained in RFC 2136, and use the TSIG mechanism to provide security. The second kind of dynamic DNS permits lightweight and immediate updates often using an update client, which do not use the RFC2136 standard for updating DNS records. These clients provide a persistent addressing method for devices that change their location, configuration or IP address frequently.
Background
In the initial stages of the Internet (ARPANET), addressing of hosts on the network was achieved by static translation tables that mapped hostnames to IP addresses. The tables were maintained manually in the form of the hosts file. The Domain Name System brought a method of distributing the same address information automatically online through recursive queries to remote databases configured for each network, or domain. Even this DNS facility still used static lookup tables at each participating node. IP addresses, once assigned to a particular host, rarely changed and the mechanism was initially sufficient. However, the rapid growth of the Internet and the proliferation of personal computers in the workplace and in homes created a substantial burden for administrators in keeping track of assigned IP addresses and managing their address space. The Dynamic Host Configuration Protocol (DHCP) allowed enterprises and Internet service providers (ISPs) to assign addresses to computers automatically as they powered up. In addition, this helped conserve the address space available, since not all devices might be actively used at all times and addresses could be assigned as needed. This feature required that DNS servers be kept current automatically as well. The first implementations of dynamic DNS fulfilled this purpose: host computers gained the feature to notify their respective DNS server of the address they had received from a DHCP server or through self-configuration. This protocol-based DNS update method was documented and standardized in IETF publication RFC 2136 in 1997 and has become a standard part of the DNS protocol (see also the nsupdate program).
The explosive growth and proliferation of the Internet into homes brought a growing shortage of available IP addresses. DHCP became an important tool for ISPs as well to manage their address spaces for connecting home and small-business end-users with a single IP address each by implementing network address translation (NAT) at the customer-premises router. The private network behind these routers uses address space set aside for these purposes (RFC 1918), masqueraded by the NAT device. This, however, broke the end-to-end principle of Internet architecture and methods were required to allow private networks, with frequently changing external IP addresses, to discover their public address and insert it into the Domain Name System in order to participate in Internet communications properly. Today, numerous providers, called Dynamic DNS service providers, offer such technology and services on the Internet.
Domain Name System
DNS is based on a distributed database that takes some time to update globally. When DNS was first introduced, the database was small and could be easily maintained by hand. As the system grew this task became difficult for any one site to handle, and a new management structure was introduced to spread out the updates among many domain name registrars. Registrars today offer end-user updating to their account information, typically using a web-based form, and the registrar then pushes out update information to other DNS servers.
Due to the distributed nature of the domain name system and its registrars, updates to the global DNS may take hours to propagate. Thus DNS is only suitable for services that do not change their IP address very often, as is the case for most large services like Wikipedia. Smaller services, however, are generally much more likely to move from host to host over shorter periods of time. Servers run over certain types of Internet service, cable modem connections in particular, are likely to change their IP address over very short periods of time, on the order of days or hours. Dynamic DNS is a system that addresses the problem of rapid updates.
Types
The term DDNS is used in two ways, which, while technically similar, have very different purposes and user populations. The first is standards-based DDNS, which uses an extension of the DNS protocol to ask for an update; this is often used for company laptops to register their address. The second is proprietary DDNS, usually a web-based protocol, normally a single HTTP fetch with username and password which then updates some DNS records (by some unspecified method); this is commonly used for a domestic computer to register itself by a publicly known name in order to be found by a wider group, for example as a games server or webcam.
End users of Internet access receive an allocation of IP addresses, often only a single address, from their Internet service provider. The assigned addresses may either be fixed (i.e. static), or may change from time to time, a situation called dynamic. Dynamic addresses are generally given only to residential customers and small businesses, as most enterprises specifically require static addresses.
Dynamic IP addresses present a problem if the customer wants to provide a service to other users on the Internet, such as a web service. As the IP address may change frequently, corresponding domain names must be quickly re-mapped in the DNS, to maintain accessibility using a well-known URL.
Many providers offer commercial or free Dynamic DNS service for this scenario. The automatic reconfiguration is generally implemented in the user's router or computer, which runs software to update the DDNS service. The communication between the user's equipment and the provider is not standardized, although a few standard web-based methods of updating have emerged over time.
Standards-based DDNS
The standardized method of dynamically updating domain name server records is prescribed by RFC 2136, commonly known as dynamic DNS update. The method described by RFC 2136 is a network protocol for use with managed DNS servers, and it includes a security mechanism. RFC 2136 supports all DNS record types, but it is often used only as an extension of the DHCP system, in which authorized DHCP servers register the client records in the DNS. This form of support for RFC 2136 is provided by a plethora of client and server software, including those that are components of most current operating systems. Support for RFC 2136 is also an integral part of many directory services, including LDAP and Windows' Active Directory domains.
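As an illustration of the RFC 2136 mechanism, the following sketch uses the Python dnspython library to send a TSIG-signed dynamic update. The zone name, host name, key name, key material and server address are placeholders, not values from any real deployment, and a production client would also handle error responses:

```python
import dns.update
import dns.query
import dns.tsigkeyring

# Placeholder TSIG key shared with the authoritative server (base64-encoded secret).
keyring = dns.tsigkeyring.from_text({"ddns-key.": "c2VjcmV0LXNoYXJlZC1rZXk="})

# Build an UPDATE message for the zone and replace the A record of one host.
update = dns.update.Update("example.com", keyring=keyring, keyname="ddns-key.")
update.replace("laptop42", 300, "A", "203.0.113.17")

# Send the signed update to the zone's primary name server and print the response code.
response = dns.query.tcp(update, "192.0.2.53", timeout=10)
print(response.rcode())
```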
Applications
In Microsoft Windows networks, dynamic DNS is an integral part of Active Directory, because domain controllers register their network service types in DNS so that other computers in the domain (or forest) can access them.
Increasing efforts to secure Internet communications today involve encryption of all dynamic updates via the public Internet, as public dynamic DNS services have increasingly been abused in security breaches. Standards-based methods, such as TSIG, have been developed to secure DNS updates, but are not widely used. Microsoft developed alternative technology (GSS-TSIG) based on Kerberos authentication.
Some free DNS server software systems, such as dnsmasq, support a dynamic update procedure that directly involves a built-in DHCP server. This server automatically updates or adds the DNS records as it assigns addresses, relieving the administrator of the task of specifically configuring dynamic updates.
DDNS for Internet access devices
Dynamic DNS providers offer a software client program that automates the discovery and registration of the client system's public IP addresses. The client program is executed on a computer or device in the private network. It connects to the DDNS provider's systems with a unique login name; the provider uses the name to link the discovered public IP address of the home network with a hostname in the domain name system. Depending on the provider, the hostname is registered within a domain owned by the provider, or within the customer's own domain name. These services can function by a number of mechanisms. Often they use an HTTP service request since even restrictive environments usually allow HTTP service. The provider might use RFC 2136 to update the DNS servers.
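A minimal update client of the kind described above might look like the following Python sketch. It follows the widely copied "dyndns2"-style convention of an authenticated GET to a /nic/update URL; the provider URL, hostname and credentials are placeholders, the public-IP lookup uses one of several public services, and any real provider's exact API may differ:

```python
import urllib.request

# Placeholder values -- substitute the account details of an actual DDNS provider.
PROVIDER_URL = "https://members.example-ddns.net/nic/update"
HOSTNAME = "myhome.example-ddns.net"
USERNAME, PASSWORD = "alice", "s3cret"

def public_ip():
    """Ask an external service for the network's public IPv4 address."""
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

def send_update(ip):
    """Send a dyndns2-style update request with HTTP basic authentication."""
    url = f"{PROVIDER_URL}?hostname={HOSTNAME}&myip={ip}"
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, url, USERNAME, PASSWORD)
    opener = urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(password_mgr))
    with opener.open(url) as resp:
        return resp.read().decode()  # typically a short status such as "good" or "nochg"

if __name__ == "__main__":
    print(send_update(public_ip()))
```

In practice this logic usually runs in the router or a background service, which re-sends the update only when the detected public address changes.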
Many home networking modem/routers include client applications in their firmware, compatible with a variety of DDNS providers.
DDNS for security appliance manufacturers
Dynamic DNS is an expected feature or even requirement for IP-based security appliances like DVRs and IP cameras. Many options are available for today's manufacturer, and these include the use of existing DDNS services or the use of custom services hosted by the manufacturer themselves.
In almost all cases, a simple HTTP based update API is used as it allows for easy integration of a DDNS client into a device's firmware. There are several pre-made tools that can help to ease the burden of server and client development, like MintDNS, cURL and Inadyn. Most web-based DDNS services use a standard user name and password security schema. This requires that a user first create an account at the DDNS server website and then configure their device to send updates to the DDNS server whenever an IP address change is detected.
Some device manufacturers go a step further by only allowing their DDNS service to be used by the devices they manufacture, and also eliminate the need for user names and passwords altogether. Generally this is accomplished by encrypting the device's MAC address using a cryptographic algorithm kept secret on both the DDNS server and within the device's firmware. The success or failure of the resulting decryption is used to accept or deny updates. Resources for the development of custom DDNS services are generally limited and involve a full software development cycle to design and field a secure and robust DDNS server.
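One way such a device-bound scheme could be realised is for the firmware to derive an update token from the device's MAC address and a secret shared with the server. The sketch below is purely hypothetical, uses a keyed hash (HMAC) rather than literal encryption of the MAC address, and does not describe the method of any particular manufacturer:

```python
import hmac
import hashlib

# Hypothetical shared secret baked into both the device firmware and the DDNS server.
FIRMWARE_SECRET = b"example-firmware-secret"

def device_token(mac: str, ip: str) -> str:
    """Token the appliance would attach to its update request."""
    message = f"{mac.lower()}|{ip}".encode()
    return hmac.new(FIRMWARE_SECRET, message, hashlib.sha256).hexdigest()

def server_accepts(mac: str, ip: str, token: str) -> bool:
    """Server-side check: recompute the token and compare in constant time."""
    return hmac.compare_digest(device_token(mac, ip), token)

# Example round trip with made-up values.
mac, ip = "00:11:22:33:44:55", "203.0.113.17"
token = device_token(mac, ip)
print(server_accepts(mac, ip, token))               # True
print(server_accepts(mac, "198.51.100.9", token))   # False
```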
See also
List of managed DNS providers
Comparison of DNS server software
Multicast DNS, an alternative mechanism for dynamic name resolution for use in internal networks
Domain Name System
Johnny Mnemonic (film)
Johnny Mnemonic is a 1995 cyberpunk film directed by Robert Longo in his directorial debut. The film stars Keanu Reeves and Dolph Lundgren, and is based on the story of the same name by William Gibson. Reeves plays the title character, a man with a cybernetic brain implant designed to store information. The film portrays Gibson's dystopian view of the future, with the world dominated by megacorporations and with strong East Asian influences.
The film was shot on location in Canada, with Toronto and Montreal filling in for the film's Newark and Beijing settings. A number of local sites, including Toronto's Union Station and Montreal's skyline and Jacques Cartier Bridge, feature prominently.
The film premiered in Japan on April 15, 1995, in a longer version (103 mins) that is closer to the director's cut, featuring a score by Mychael Danna and different editing. The film was released in the United States on May 26, 1995.
Plot
In 2021, society is driven by a virtual Internet, which has created a degenerate effect called "nerve attenuation syndrome" or NAS. Megacorporations control much of the world, intensifying the class hostility already created by NAS.
Johnny is a "mnemonic courier" who discreetly transports sensitive data for corporations in a storage device implanted in his brain, at the cost of his childhood memories. His current job is for a group of scientists in Beijing. Johnny initially balks when he learns the data exceeds his memory capacity even with compression, but agrees because the large fee will be enough to cover the cost of the operation to remove the device. Johnny warns that he must have the data extracted within a few days or suffer psychological damage. The scientists encrypt the data with three random images from a television feed and start sending these images to the receiver in Newark, New Jersey, but they are attacked and killed by the Yakuza, led by Shinji (Akiyama), before the images can be fully transmitted. Johnny escapes with a portion of the images, but is pursued by both the Yakuza and the security forces of Pharmakom, a mega-corporation run by Takahashi (Kitano), both seeking the data he carries. Johnny starts witnessing brief images of a female projection of an artificial intelligence (AI) who attempts to aid him, but he dismisses her.
In Newark, Johnny meets with his handler Ralfi (Kier) to explain the situation, but finds Ralfi is also working with the Yakuza and wants to kill Johnny to get the storage device. Johnny is rescued by Jane (Meyer), a cybernetically-enhanced bodyguard, and members of the anti-establishment Lo-Teks, led by J-Bone (Ice-T). Jane takes Johnny to a clinic run by Spider (Rollins), who had installed Jane's implants. Spider reveals he was intended to receive the Beijing scientists' data, which is the cure for NAS stolen from Pharmakom; Spider claims Pharmakom refuses to release the cure as it is profiting off the mitigation of NAS. However, the portion of the encryption images Johnny captured, combined with what Spider had received, is not sufficient to decrypt the data in Johnny's head, and Spider suggests that they see Jones at the Lo-Teks' base. Just then, Karl "The Street Preacher" (Lundgren), an assassin hired by Takahashi, attacks the clinic, killing Spider as Johnny and Jane escape.
The two reach the Lo-Tek base and learn from J-Bone that Jones is a dolphin once used by the Navy which can help decrypt the data in Johnny's mind. Just as they start the procedure, Shinji and the Yakuza, Takahashi and his security forces, and the Street Preacher all attack the base, but Johnny, Jane, J-Bone and the other Lo-Teks are able to defeat all three forces. Takahashi turns over a portion of the encryption key before he dies, but this still is not enough to fully decrypt the data, and J-Bone tells Johnny that he will need to hack his own mind with Jones' help. The second attempt starts, and aided by the female AI, Johnny is able to decrypt the data and at the same time recover his childhood memories. The AI is revealed to be the virtual version of Johnny's mother who was also the founder of Pharmakom, angered at the company holding back the cure. As J-Bone transmits the NAS cure information across the Internet, Johnny and Jane watch from afar as the Pharmakom headquarters goes up in flames from the public outcry.
Cast
Keanu Reeves as Johnny Mnemonic
Dolph Lundgren as Karl Honig
Dina Meyer as Jane
Ice-T as J-Bone
Takeshi Kitano as Takahashi
Denis Akiyama as Shinji
Henry Rollins as Spider
Barbara Sukowa as Anna Kalmann
Udo Kier as Ralfi
Tracy Tweed as Pretty
Falconer Abraham as Yomamma
Don Francks as Hooky
Diego Chambers as Henson
Arthur Eng as Viet
Production
Longo and Gibson originally envisioned making an art film on a small budget but failed to get financing. Longo commented that the project "started out as an arty 1½-million-dollar movie, and it became a 30-million-dollar movie because we couldn't get a million and a half." Longo's lawyer suggested that their problem was that they were not asking for enough money and that studios would not be interested in such a small project. The unbounded spread of the Internet in the early 1990s and the consequent rapid growth of high technology culture had made cyberpunk increasingly relevant, and this was a primary motivation for Sony Pictures's decision to fund the project in the tens of millions. Val Kilmer was originally cast in the title role, and Reeves replaced him when Kilmer dropped out. Reeves' Canadian nationality opened up further financial options, such as Canadian tax incentives. When Speed turned into a major hit in 1994, expectations were raised for Johnny Mnemonic, and Sony saw the film as a potential blockbuster hit.
Longo's experiences with the financiers were poor, believing that their demands compromised his artistic vision. Many of the casting decisions, such as Lundgren, were forced upon him to increase the film's appeal outside of the United States. Longo and Gibson, who had no idea what to do with Lundgren, created a new character for him. Lundgren had previously starred in several action films that emphasized his physique. He intended the role of the street preacher to be a showcase for further range as an actor, but his character's monologue was cut during editing. Gibson said that the monologue, a sermon about transhumanism that Lundgren delivered naked, was cut due to fears of offending religious groups. Kitano was cast to appeal to the Japanese market. Eight minutes of extra footage starring Kitano was shot for the Japanese release of the film.
The film significantly deviates from the short story, most notably turning Johnny, not his bodyguard partner, into the primary action figure. Molly Millions is replaced with Jane, as the film rights to Molly had already been sold. Nerve Attenuation Syndrome (NAS) is a fictional disease that is not present in the short story. NAS, also called "the black shakes", is caused by an overexposure to electromagnetic radiation from omnipresent technological devices and is presented as a raging epidemic. In the film, one pharmaceutical corporation has found a cure but chooses to withhold it from the public in favor of a more lucrative treatment program. The code-cracking Navy dolphin Jones's reliance on heroin was one of many scenes cut during an editing process. Gibson said that the film was "taken away and re-cut by the American distributor". He described the original film as "a very funny, very alternative piece of work", and said it was "very unsuccessfully chopped and cut into something more mainstream". Gibson compared this to editing Blue Velvet into a mainstream thriller lacking any irony. Prior to its release, critic Amy Harmon identified the film as an epochal moment when cyberpunk counterculture would enter the mainstream. News of the script's compromises spurred pre-release concerns that the film would prove a disappointment to hardcore cyberpunks.
The Japanese soundtrack was composed by Mychael Danna but re-composed by Brad Fiedel for the international version. It also contains tracks from independent industrial band Black Rain who had initially recorded a score for Robert Longo that had been rejected.
Release and marketing
Simultaneous with Sony Pictures' release of the film, its soundtrack was released by Sony subsidiary Columbia Records, and the corporation's digital effects division Sony ImageWorks issued a CD-ROM videogame version for DOS, Mac and Windows 3.x.
The Johnny Mnemonic videogame, which was developed by Evolutionary Publishing, Inc. and directed by Douglas Gayeton, offered 90 minutes of full motion video storytelling and puzzles. A Mega-CD/Sega CD version of the game was also developed, but never released despite being fully completed. This version was eventually leaked on the Internet many years later. A pinball machine based on the film designed by George Gomez was released in August 1995 by Williams.
Sony realised early on the potential for reaching their target demographic through Internet marketing, and its new-technology division promoted the film with an online scavenger hunt offering $20,000 in prizes. One executive was quoted as remarking "We see the Internet as turbo-charged word-of-mouth. Instead of one person telling another person something good is happening, it's one person telling millions!". The film's website, the first official site launched by Columbia TriStar Interactive, facilitated further cross-promotion by selling Sony Signatures-issued Johnny Mnemonic merchandise such as a "hack your own brain" T-shirt and Pharmakom coffee cups. Screenwriter William Gibson was deployed to field questions about the videogame from fans online. The habitually reclusive novelist, who despite creating in cyberspace one of the core metaphors for the internet age had never personally been on the Internet, likened the experience to "taking a shower with a raincoat on" and "trying to do philosophy in Morse code."
The film grossed ¥73.6 million ($897,600) in its first 3 days in Japan from 14 screens in the nine key Japanese cities. It was released in the United States and Canada on May 26 in 2,030 theaters, grossing $6 million in the opening weekend. It grossed $19.1 million in total in the United States and Canada and $52.4 million worldwide against its $26 million budget.
Reception
The film holds a 19% approval rating on Rotten Tomatoes from 37 critics. The website's consensus reads, "As narratively misguided as it is woefully miscast, Johnny Mnemonic brings the '90s cyberpunk thriller to inane new whoas – er, lows." On Metacritic, the film has a score of 36/100 based on 25 reviews, which the site terms "generally unfavorable reviews".
Varietys Todd McCarthy called the film "high-tech trash" and likened it to a video game. Roger Ebert, the film critic for the Chicago Sun-Times, gave the film two stars out of four and called it "one of the great goofy gestures of recent cinema". Owen Gleiberman of Entertainment Weekly rated it C− and called it "a slack and derivative future-shock thriller". Conversely, Mick LaSalle of the San Francisco Chronicle described it as "inescapably a very cool movie", and Marc Savlov wrote in The Austin Chronicle that the film works well for both Gibson fans and those unfamiliar with his work. Writing in The New York Times, Caryn James called the film "a disaster in every way" and said that despite Gibson's involvement, the film comes off as "a shabby imitation of Blade Runner and Total Recall".
McCarthy said that the film's premise is its "one bit of ingenuity", but the plot, which he called likely to disappoint Gibson's fans, is simply an excuse for "elaborate but undramatic and unexciting computer-graphics special effects". Ebert also called the plot an excuse for the special effects, and the conceit of having to deliver important information while avoiding enemy agents struck him as "breathtakingly derivative". Ebert furthermore felt that hiring a data courier instead of transmitting encrypted data over the internet makes no sense outside the artificiality of a film. In his review for the Los Angeles Times, Peter Rainer wrote that the film, when stripped of the cyberpunk atmosphere, is recycled from noir fiction, and LaSalle viewed it more positively as "a hard-boiled action story using technology as its backdrop". Savlov called it "an updated D.O.A.", and Ebert said the film's plot could have worked in any genre and been set in any time period.
James criticized the film's lack of tension, and Rainer called the film's tone too grim and lacking excitement. McCarthy criticized what he saw as an "unrelieved grimness" and "desultory, darkly staged action scenes". McCarthy felt the film's visual depiction of the future was unoriginal, and Gleiberman described the film as "Blade Runner with tackier sets". Savlov wrote that Longo's "attempts to out-Blade Runner Ridley Scott in the decaying cityscape department grow wearisome". Savlov still found the film "much better than expected". LaSalle felt the film "introduces a fantastic yet plausible vision of a computer-dominated age" and maintains a focus on humanity, in contrast to Rainer, who found the film's countercultural pose to be inauthentic and lacking humanity. James called the film murky and colorless; Rainer's review criticized similar issues, finding the film's lack of lighting and its grim set design to give everything an "undifferentiated dullness". McCarthy found the special effects to be "slick and accomplished but unimaginative", though Ebert enjoyed the special effects. Gleiberman highlighted the monofilament whip as his favorite special effect, though James found it unimpressive.
Although saying that Reeves is not a good actor, LaSalle said Reeves is still enjoyable to watch and makes for a compelling protagonist. McCarthy instead found Reeves' character to be unlikable and one-dimensional. James compared Reeves to a robot, and Gleiberman compared him to an action figure. Rainer posited that Reeves' character may seem so blank due to his memory loss. Savlov said that Reeves' wooden delivery gives the film unintentional humor, but Rainer found that the lack of humor throughout the film sapped all the acting performances of any enjoyment. Gleiberman said that Reeves' efforts to avoid Valleyspeak backfire, giving his character's lines "an intense, misplaced urgency", though he liked the unconventional casting of Lundgren as a psychopathic street preacher. Rainer highlighted Lundgren as the only actor to display mirth and said his performance was the best in the film. James called Ice-T's role stereotypical and said he deserved better.
Reeves's performance in the film earned him a Golden Raspberry Award nomination for Worst Actor (also for A Walk in the Clouds), but lost to Pauly Shore for Jury Duty. The film was filed under the Founders Award (What Were They Thinking and Why?) at the 1995 Stinkers Bad Movie Awards and was also a dishonourable mention for Worst Picture.
In a career retrospective of Reeves' films for Entertainment Weekly, Chris Nashawaty ranked the film as Reeves' second worst, calling the film's fans "nuts" for liking it. While acknowledging the film's issues, critic Ty Burr attributed its poor reviews to critics' unfamiliarity with Gibson's work. The Quietus described the film as having "all the makings of a cult classic", and its release to streaming sites in 2021 resulted in a passionate defense by Rowan Righelato in The Guardian and a recommendation from Inverse. In a retrospective review from 2021, Peter Bradshaw, film critic for The Guardian, rated it 4/5 stars and wrote, "Perhaps it's quaint, but it's also watchable, and it is the kind of sci-fi that is genuinely audacious".
The props from the film were transformed into sculptures by artist Dora Budor for her 2015 solo exhibition, Spring.
References
External links
1995 films
1990s science fiction action films
1995 science fiction films
1990s dystopian films
Canadian films
Canadian science fiction action films
Cyberpunk films
American science fiction action films
English-language films
Films about computing
Films about telepresence
American chase films
American films
Films based on short fiction
Films based on science fiction works
Films set in 2021
Films shot in Toronto
Japanese-language films
American dystopian films
TriStar Pictures films
Works by William Gibson
Films shot in Montreal
Films set in New Jersey
Films set in Beijing
Transhumanism in film
Films about dolphins
Films about artificial intelligence
Cryptography in fiction
Corporate warfare in fiction
Films produced by Don Carmody
Films scored by Brad Fiedel
1995 directorial debut films
Alliance Films films
Meromorphic function
In the mathematical field of complex analysis, a meromorphic function on an open subset D of the complex plane is a function that is holomorphic on all of D except for a set of isolated points, which are poles of the function. The term comes from the Ancient Greek meros (μέρος), meaning "part".
Every meromorphic function on D can be expressed as the ratio between two holomorphic functions (with the denominator not identically 0) defined on D: any pole must coincide with a zero of the denominator.
Heuristic description
Intuitively, a meromorphic function is a ratio of two well-behaved (holomorphic) functions. Such a function will still be well-behaved, except possibly at the points where the denominator of the fraction is zero. If the denominator has a zero at z and the numerator does not, then the value of the function will approach infinity; if both parts have a zero at z, then one must compare the multiplicity of these zeros.
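A standard way to make this comparison of multiplicities precise is to factor the numerator and denominator locally. Suppose that near a point z_0

```latex
f(z) = \frac{g(z)}{h(z)}, \qquad
g(z) = (z - z_0)^m \, \tilde{g}(z), \qquad
h(z) = (z - z_0)^n \, \tilde{h}(z),
```

with \tilde{g} and \tilde{h} holomorphic and non-vanishing at z_0. Then f(z) = (z - z_0)^{m-n} \tilde{g}(z)/\tilde{h}(z), so f has a removable singularity at z_0 when m is at least n, and a pole of order n - m when m is less than n.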
From an algebraic point of view, if the function's domain is connected, then the set of meromorphic functions is the field of fractions of the integral domain of the set of holomorphic functions. This is analogous to the relationship between the rational numbers and the integers.
Prior, alternate use
Both the field of study wherein the term is used and the precise meaning of the term changed in the 20th century. In the 1930s, in group theory, a meromorphic function (or meromorph) was a function from a group G into itself that preserved the product on the group. The image of this function was called an automorphism of G. Similarly, a homomorphic function (or homomorph) was a function between groups that preserved the product, while a homomorphism was the image of a homomorph. This form of the term is now obsolete, and the related term meromorph is no longer used in group theory.
The term endomorphism is now used for the function itself, with no special name given to the image of the function.
A meromorphic function is not necessarily an endomorphism, since the complex points at its poles are not in its domain, but may be in its range.
Properties
Since the poles of a meromorphic function are isolated, there are at most countably many. The set of poles can be infinite, as exemplified by the function f(z) = 1/sin(z), whose poles are the integer multiples of π.
By using analytic continuation to eliminate removable singularities, meromorphic functions can be added, subtracted, multiplied, and the quotient can be formed unless the divisor is identically 0 on a connected component of D. Thus, if D is connected, the meromorphic functions form a field, in fact a field extension of the complex numbers.
Higher dimensions
In several complex variables, a meromorphic function is defined to be locally a quotient of two holomorphic functions. For example, f(z_1, z_2) = z_1/z_2 is a meromorphic function on the two-dimensional complex affine space. Here it is no longer true that every meromorphic function can be regarded as a holomorphic function with values in the Riemann sphere: there is a set of "indeterminacy" of codimension two (in the given example this set consists of the origin (0, 0)).
Unlike in dimension one, in higher dimensions there do exist compact complex manifolds on which there are no non-constant meromorphic functions, for example, most complex tori.
Examples
All rational functions, for example f(z) = (z^3 - 2z + 10)/(z^5 + 3z - 1), are meromorphic on the whole complex plane.
The functions f(z) = exp(z)/z and f(z) = sin(z)/(z - 1)^2, as well as the gamma function and the Riemann zeta function, are meromorphic on the whole complex plane.
The function f(z) = exp(1/z) is defined in the whole complex plane except for the origin, 0. However, 0 is not a pole of this function, rather an essential singularity. Thus, this function is not meromorphic in the whole complex plane. However, it is meromorphic (even holomorphic) on the punctured plane with the origin removed.
The complex logarithm function is not meromorphic on the whole complex plane, as it cannot be defined on the whole complex plane while only excluding a set of isolated points.
The function f(z) = 1/sin(1/z) is not meromorphic in the whole plane, since the point z = 0 is an accumulation point of poles and is thus not an isolated singularity.
The function f(z) = sin(1/z) is not meromorphic either, as it has an essential singularity at 0.
On Riemann surfaces
On a Riemann surface, every point admits an open neighborhood which is biholomorphic to an open subset of the complex plane. Thereby the notion of a meromorphic function can be defined for every Riemann surface.
When D is the entire Riemann sphere, the field of meromorphic functions is simply the field of rational functions in one variable over the complex field, since one can prove that any meromorphic function on the sphere is rational. (This is a special case of the so-called GAGA principle.)
For every Riemann surface, a meromorphic function is the same as a holomorphic function that maps to the Riemann sphere and which is not the constant function equal to ∞. The poles correspond to those complex numbers which are mapped to ∞.
On a non-compact Riemann surface, every meromorphic function can be realized as a quotient of two (globally defined) holomorphic functions. In contrast, on a compact Riemann surface, every holomorphic function is constant, while there always exist non-constant meromorphic functions.
See also
Cousin problems
Mittag-Leffler's theorem
Weierstrass factorization theorem
Footnotes
References
Nestor
Nestor may refer to:
Nestor (mythology), King of Pylos in Greek mythology
Arts and entertainment
"Nestor" (Ulysses episode) an episode in James Joyce's novel Ulysses
Nestor Studios, first-ever motion picture studio in Hollywood, Los Angeles
Nestor, the Long-Eared Christmas Donkey, a Christmas television program
Geography
Nestor, San Diego, a neighborhood of San Diego, California
Mount Nestor (Antarctica), in the Achaean Range of Antarctica
Mount Nestor (Alberta), a mountain in Alberta, Canada
People
Nestor (surname), anglicised form of Mac an Adhastair, an Irish family
Nestor (given name), a name of Greek origin, from Greek mythology
Science and technology
Nestor (genus), a genus of parrots
NESTOR Project, an international scientific collaboration for the deployment of a neutrino telescope
NESTOR (encryption), a family of voice encryption devices used by the United States during the Vietnam War era
659 Nestor, an asteroid
Ships
, three ships of the Royal Navy
, a Second World War Royal Australian Navy destroyer which remained the property of the British Royal Navy
, a number of ships of this name
, an LNG carrier
Nestor (sternwheeler), a steamboat that operated in Oregon and Washington State
Other uses
Nestor (solitaire), a card game
Tropical Storm Nestor
Typhoon Nestor (1997)
A West Cornwall Railway steam locomotive
See also
Dniester, a river in Eastern Europe
Nester (disambiguation)
Nestori, a given name
Nestorianism, a Christian theological doctrine condemned as heretical at the Council of Ephesus in 431
Nestorius, Ecumenical Patriarch of Constantinople 428–431
Disk image
A disk image, in computing, is a computer file containing the contents and structure of a disk volume or of an entire data storage device, such as a hard disk drive, tape drive, floppy disk, optical disc, or USB flash drive. A disk image is usually made by creating a sector-by-sector copy of the source medium, thereby perfectly replicating the structure and contents of a storage device independent of the file system. Depending on the disk image format, a disk image may span one or more computer files.
The file format may be an open standard, such as the ISO image format for optical disc images, or a disk image may be unique to a particular software application.
The size of a disk image can be large because it contains the contents of an entire disk. To reduce storage requirements, if an imaging utility is filesystem-aware it can omit copying unused space, and it can compress the used space.
History
Disk images were originally (in the late 1960s) used for backup and disk cloning of mainframe disk media. The early ones were as small as 5 megabytes and as large as 330 megabytes, and the copy medium was magnetic tape, which ran as large as 200 megabytes per reel. Disk images became much more popular when floppy disk media became popular, where replication or storage of an exact structure was necessary and efficient, especially in the case of copy protected floppy disks.
Uses
Disk images are used for duplication of optical media including DVDs, Blu-ray discs, etc. It is also used to make perfect clones of hard disks.
A virtual disk may emulate any type of physical drive, such as a hard disk drive, tape drive, key drive, floppy drive, CD/DVD/BD/HD DVD, or a network share among others; and of course, since it is not physical, requires a virtual reader device matched to it (see below). An emulated drive is typically created either in RAM for fast read/write access (known as a RAM disk), or on a hard drive. Typical uses of virtual drives include the mounting of disk images of CDs and DVDs, and the mounting of virtual hard disks for the purpose of on-the-fly disk encryption ("OTFE").
Some operating systems such as Linux and macOS have virtual drive functionality built-in (such as the loop device), while others such as older versions of Microsoft Windows require additional software. Starting from Windows 8, Windows includes native virtual drive functionality.
Virtual drives are typically read-only, being used to mount existing disk images which are not modifiable by the drive. However some software provides virtual CD/DVD drives which can produce new disk images; this type of virtual drive goes by a variety of names, including "virtual burner".
Enhancement
Using disk images in a virtual drive allows users to shift data between technologies, for example from CD optical drive to hard disk drive. This may provide advantages such as speed and noise (hard disk drives are typically four or five times faster than optical drives, are quieter, suffer from less wear and tear, and in the case of solid-state drives, are immune to some physical trauma). In addition it may reduce power consumption, since it may allow just one device (a hard disk) to be used instead of two (hard disk plus optical drive).
Virtual drives may also be used as part of emulation of an entire machine (a virtual machine).
Software distribution
Since the spread of broadband, CD and DVD images have become a common medium for Linux distributions. Applications for macOS are often delivered online as an Apple Disk Image containing a file system that includes the application, documentation for the application, and so on. Online data and bootable recovery CD images are provided for customers of certain commercial software companies.
Disk images may also be used to distribute software across a company network, or for portability (many CD/DVD images can be stored on a hard disk drive). There are several types of software that allow software to be distributed to large numbers of networked machines with little or no disruption to the user. Some can even be scheduled to update only at night so that machines are not disturbed during business hours. These technologies reduce end-user impact and greatly reduce the time and man-power needed to ensure a secure corporate environment. Efficiency is also increased because there is much less opportunity for human error. Disk images may also be needed to transfer software to machines without a compatible physical disk drive.
For computers running macOS, disk images are the most common file type used for software downloads, typically downloaded with a web browser. The images are typically compressed Apple Disk Image (.dmg suffix) files. They are usually opened by directly mounting them without using a real disk. The advantage compared with some other technologies, such as Zip and RAR archives, is they do not need redundant drive space for the unarchived data.
Software packages for Windows are also sometimes distributed as disk images including ISO images. While Windows versions prior to Windows 7 do not natively support mounting disk images to the files system, several software options are available to do this; see Comparison of disc image software.
Security
Virtual hard disks are often used in on-the-fly disk encryption ("OTFE") software such as FreeOTFE and TrueCrypt, where an encrypted "image" of a disk is stored on the computer. When the disk's password is entered, the disk image is "mounted", and made available as a new volume on the computer. Files written to this virtual drive are written to the encrypted image, and never stored in cleartext.
The process of making a computer disk available for use is called "mounting", the process of removing it is called "dismounting" or "unmounting"; the same terms are used for making an encrypted disk available or unavailable.
Virtualization
A hard disk image is interpreted by a virtual machine monitor as a system hard disk drive. In terms of naming, a hard disk image for a certain virtual machine monitor has a specific file format (see the file formats listed below).
Hard drive imaging is used in several major application areas:
Forensic imaging is the process of copying the entire contents of a drive into a single image file (or a very small number of files). A component of forensic imaging is verification of the values imaged to ensure the integrity of the file(s) produced. Forensic images are created with dedicated software tools, some of which add this verification functionality to ordinary disk-cloning features. A forensic image differs from simply copying the files themselves, which captures only the contents of the file system rather than the complete structure of the drive.
Data recovery imaging is the process of imaging each sector, systematically, on the source drive to another destination storage medium, from which required files can then be retrieved. In data recovery situations, one cannot always rely on the integrity of the file structure, and therefore a complete sector-by-sector copy is mandatory; the similarity with forensic imaging ends there. Imaging software tools may also have limitations in their ability to communicate with, diagnose, or work around storage media that are (oftentimes) experiencing errors or even the failure of some internal component.
System backup
Some backup programs only back up user files; boot information and files locked by the operating system, such as those in use at the time of the backup, may not be saved on some operating systems. A disk image contains all files, faithfully replicating all data, including file attributes and the file fragmentation state. For this reason, it is also used for backing up optical media (CDs and DVDs, etc.), and allows the exact and efficient recovery after experimenting with modifications to a system or virtual machine, in one go.
There are benefits and drawbacks to both "file-based" and "bit-identical" image backup methods. Files that don't belong to installed programs can usually be backed up with file-based backup software, and this is often preferred because file-based backups usually save time and space: they never copy unused space (as a bit-identical image does), they are usually capable of incremental backups, and they generally offer more flexibility. But for files of installed programs, file-based backup solutions may fail to reproduce all necessary characteristics, particularly with Windows systems. For example, in Windows certain registry keys use short filenames, which are sometimes not reproduced by file-based backup; some commercial software uses copy protection that will cause problems if a file is moved to a different disk sector; and file-based backups do not always reproduce metadata such as security attributes. Creating a bit-identical disk image is one way to ensure the system backup will be exactly as the original. Bit-identical images can be made in Linux with dd, available on nearly all live CDs.
Most commercial imaging software is "user-friendly" and "automatic" but may not create bit-identical images. These programs have most of the same advantages, except that they may allow restoring to partitions of a different size or file-allocation size, and thus may not put files on the same exact sector. Additionally, if they do not support Windows Vista, they may slightly move or realign partitions and thus make Vista unbootable (see Windows Vista startup process).
Rapid deployment of clone systems
Large enterprises often need to buy or replace new computer systems in large numbers. Installing the operating system and programs on each of them one by one requires a lot of time and effort and has a significant possibility of human error. Therefore, system administrators use disk imaging to quickly clone the fully prepared software environment of a reference system. This method saves time and effort and allows administrators to focus on the unique idiosyncrasies of each system.
There are several types of disk imaging software available that use single-instancing technology to reduce the time, bandwidth, and storage required to capture and archive disk images. This makes it possible to rebuild and transfer information-rich disk images very quickly, a significant improvement over the days when programmers spent hours configuring each machine within an organization.
Legacy hardware emulation
Emulators frequently use disk images to simulate the floppy drive of the computer being emulated. This is usually simpler to program than accessing a real floppy drive (particularly if the disks are in a format not supported by the host operating system), and allows a large library of software to be managed.
Copy protection circumvention
A mini image is an optical disc image file in a format that fakes the disc's content to bypass CD/DVD copy protection.
Because full images of the original disc are large, mini images are stored instead. Mini images are small, on the order of kilobytes, and contain just the information necessary to bypass CD-checks. The mini image is therefore a form of no-CD crack, used for unlicensed games as well as legally backed-up games. Mini images do not contain the real data from an image file, just the code that is needed to satisfy the CD-check; they cannot provide the actual data stored on the CD or DVD, such as image or video files, to the program.
Creation
Creating a disk image is achieved with a suitable program. Different disk imaging programs have varying capabilities, and may focus on hard drive imaging (including hard drive backup, restore and rollout), or optical media imaging (CD/DVD images).
A virtual disk writer or virtual burner is a computer program that emulates an actual disc authoring device such as a CD writer or DVD writer. Instead of writing data to an actual disc, it creates a virtual disk image. A virtual burner, by definition, appears as a disc drive in the system with writing capabilities (as opposed to conventional disc authoring programs that can create virtual disk images), thus allowing software that can burn discs to create virtual discs.
File formats
Apple Disk Image
IMG (file format)
VHD (file format)
VDI (file format)
VMDK
QCOW
Utilities
RawWrite and WinImage are examples of floppy disk image file writers/creators for MS-DOS and Microsoft Windows. They can be used to create raw image files from a floppy disk and to write such image files back to a floppy.
In Unix or similar systems the dd program can be used to create disk images, or to write them to a particular disk. It is also possible to mount and access them at block level using a loop device.
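For illustration, the following Python sketch simply shells out to the standard Linux mount and umount utilities to attach an image through a loop device; the image path and mount point are hypothetical, and the commands must be run with root privileges.

```python
# Sketch: attach a raw disk image as a block device and mount it read-only
# via a loop device, by invoking the standard Linux mount/umount utilities.
# Must be run as root; the image file and mount point are placeholders.
import subprocess

def mount_image(image: str, mountpoint: str) -> None:
    subprocess.run(["mount", "-o", "loop,ro", image, mountpoint], check=True)

def unmount(mountpoint: str) -> None:
    subprocess.run(["umount", mountpoint], check=True)

if __name__ == "__main__":
    mount_image("backup.img", "/mnt/image")
    # ... inspect files under /mnt/image ...
    unmount("/mnt/image")
```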
Apple Disk Copy can be used on Classic Mac OS and macOS systems to create and write disk image files.
Authoring software for CDs/DVDs such as Nero Burning ROM can generate and load disk images for optical media.
See also
Boot image
Card image
Comparison of disc image software
Disk cloning
El Torito (CD-ROM standard)
ISO image, an archive file of an optical media volume
Loop device
Mtools
no-CD crack
Protected Area Run Time Interface Extension Services (PARTIES)
ROM image
Software cracking
References
External links
Software repository including RAWRITE2
Archive formats
Compact Disc and DVD copy protection
Computer file formats
Disk image emulators
Hacker culture
Hardware virtualization
Optical disc authoring
Warez |
85765 | https://en.wikipedia.org/wiki/Penet%20remailer | Penet remailer | The Penet remailer (anon.penet.fi) was a pseudonymous remailer operated by Johan "Julf" Helsingius of Finland from 1993 to 1996. Its initial creation stemmed from an argument in a Finnish newsgroup over whether people should be required to tie their real name to their online communications. Julf believed that people should not—indeed, could not—be required to do so. In his own words:
"Some people from a university network really argued about if everybody should put their proper name on the messages and everybody should be accountable, so you could actually verify that it is the person who is sending the messages. And I kept arguing that the Internet just doesn't work that way, and if somebody actually tries to enforce that, the Internet will always find a solution around it. And just to prove my point, I spent two days or something cooking up the first version of the server, just to prove a point."
Implementation
Julf's remailer worked by receiving an e-mail from a person, stripping away all the technical information that could be used to identify the original source of the e-mail, and then remailing the message to its final destination. The result provided Internet users with the ability to send e-mail messages and post to Usenet newsgroups without revealing their identities.
In addition, the Penet remailer used a type of “post office box” system in which users could claim their own anonymous e-mail addresses of the form anxxxxx@anon.penet.fi, allowing them to assign pseudonymous identities to their anonymous messages, and to receive messages sent to their (anonymous) e-mail addresses.
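The mechanism can be sketched roughly as follows. This is an illustrative Python fragment, not Julf's actual code; the header handling and address format are simplified assumptions.

```python
# Illustrative sketch of the Penet-style pseudonymising step: identifying
# headers are stripped, the sender is given a stable anXXXXX pseudonym, and
# a server-side table maps pseudonyms back to real addresses -- the table
# whose very existence was the remailer's chief vulnerability.
import itertools
from email.message import EmailMessage

_counter = itertools.count(1)
real_to_anon = {}   # real address -> anXXXXX@anon.penet.fi
anon_to_real = {}   # reverse mapping, consulted when replies arrive

def pseudonym_for(real_addr):
    if real_addr not in real_to_anon:
        anon = "an%05d@anon.penet.fi" % next(_counter)
        real_to_anon[real_addr] = anon
        anon_to_real[anon] = real_addr
    return real_to_anon[real_addr]

def anonymise(msg: EmailMessage) -> EmailMessage:
    out = EmailMessage()
    out["From"] = pseudonym_for(msg["From"])
    out["To"] = msg["To"]                     # final destination is kept
    out["Subject"] = msg.get("Subject", "")
    out.set_content(msg.get_content())        # body passes through unchanged
    # Received:, Message-ID: and other identifying headers are simply dropped.
    return out
```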
While the basic concept was effective, the Penet remailer had several vulnerabilities which threatened the anonymity of its users. Chief among them was the need to store a list of real e-mail addresses mapped to the corresponding anonymous e-mail addresses on the server. A potential attacker needed only to access that list to compromise the identities of all of Penet's users. The Penet remailer was on two occasions required by the legal system in Finland (the country where the Penet server hardware resided) to turn over the real e-mail address that was mapped to an anonymous e-mail address. Another potential vulnerability was that messages sent to and from the remailer were all sent in cleartext, making it vulnerable to electronic eavesdropping.
Later anonymous remailer designs, such as the Cypherpunk and Mixmaster designs, adopted more sophisticated techniques to try and overcome these vulnerabilities, including the use of encryption to prevent eavesdropping, and also the technique known as onion routing to allow the existence of pseudonymous remailers in which no record of a user's real e-mail address is stored by the remailer.
Despite its relatively weak security, the Penet remailer was a hugely popular remailer owing to its ease of anonymous account set-up and use compared to more secure but less user-friendly remailers, and had over 700,000 registered users at the time of its shutdown in September 1996.
First compromise
In the summer of 1994, word spread online of the Penet remailer being compromised, with the announcement being made at the hacker convention DEF CON II. Wired magazine reported at the time:
An official announcement was made at this year's DC that anon.penet.fi has been seriously compromised. We strongly suggest that you not trust this anonymous remailer. (Word has it that some folks are working on a PGP-based service.) We'll keep you posted.
This was followed a year later by a mention in the announcement for DEF CON III:
SPEAKERS
Sarah Gordon, AKA Theora, a veteran of DC II will be presenting another speech this year. Last year she organized a round table discussion with Phil Zimmermann and Presence, and revealed that the Anonymous remailer anon.penet.fi was compromised. TOPIC: Not Announced Yet.
There are no known reports detailing the specifics and extent of this compromise.
Second compromise
The second reported compromise of the Penet remailer occurred in February 1995 at the behest of the Church of Scientology. Claiming that a file had been stolen from one of the Church's internal computer servers and posted to the newsgroup alt.religion.scientology by a Penet user, representatives of the Church contacted Interpol, who in turn contacted the Finnish police, who issued a search warrant demanding that Julf hand over data on the users of the Penet remailer. Initially Julf was asked to turn over the identities of all users of his remailer (which numbered over 300,000 at the time), but he managed a compromise and revealed only the single user being sought by the Church of Scientology.
The anonymous user in question used the handle "-AB-" when posting anonymously, and their real e-mail address indicated that they were an alumnus or alumna of the California Institute of Technology. The document he posted was an internal report by a Scientology private investigator, Gene Ingram, about an incident that had occurred involving a man named Tom Klemesrud, a BBS operator involved in the Scientology versus the Internet controversy. The confusing story became known on the Internet as the "Miss Blood Incident".
Eventually the Church learned the real identity of "-AB-" to be Tom Rummelhart, a Scientologist and computer operator responsible for some of the maintenance of the Church of Scientology's INCOMM computer system. The fate of "-AB-" after the Church of Scientology learned his true identity is unknown. Years later in 2003, a two-part story entitled "What Really Happened in INCOMM - Part 1" and "What Really Happened in INCOMM – Part 2" was posted to alt.religion.scientology by a former Scientologist named Dan Garvin, which described events within the Church leading up to and stemming from the Penet posting by "-AB-".
Other attacks
Julf was also contacted by the government of Singapore as part of an effort to discover who was posting messages critical of the nation's government in the newsgroup soc.culture.singapore, but as Finnish law did not recognise any crime being committed, Julf was not required to reveal the user's identity.
In August 1996, a major British newspaper, The Observer, published an article describing the Penet remailer as a major hub of child pornography, quoting a United States FBI investigator named Toby Tyler as saying that Penet was responsible for between 75% and 90% of the child pornography being distributed on the Internet. Investigations by online journalist Declan McCullagh demonstrated many errors and omissions in the Observer article. In an article penned by McCullagh, the alleged FBI investigator described himself as a sergeant in California's San Bernardino sheriff's office who only consulted with the FBI from time to time, a relationship which the Observer article had in his opinion purposefully misrepresented as some kind of employment relationship. Tyler also claimed that the Observer purposely misquoted him, and he had actually said "that most child pornography posted to newsgroups does not go through remailers."
In addition, Julf claimed that he explained to the Observer the steps he took to prevent child pornography from being posted by forbidding posting to the alt.binaries newsgroups and limiting the size of messages to 16 kilobytes, too small to allow uuencoded binaries such as pictures to be posted. He also informed the Observer of an investigation already performed by the Finnish police which had found no evidence that child pornography was being remailed through Penet. Julf claims that all this information was ignored, stating that the Observer "wanted to make a story so they made things up."
Despite voluminous reader mail pointing to the numerous errors in the news story, the Observer never issued a full retraction of its claims, only going so far as to clarify that Johan Helsingius had "consistently denied" the claims of child pornography distribution.
In September 1996, the Church of Scientology again sought information from Julf as part of its court case against a critic of the Church named Grady Ward. The Church wanted to know if Ward had posted any information through the Penet remailer. Ward gave Julf explicit permission to reveal the extent of his alleged use of the Penet remailer, and Julf told the Church that he could find no evidence that Ward had ever used the Penet remailer at all.
Third compromise and shutdown
In September 1996, an anonymous user posted the confidential writings of the Church of Scientology through the Penet remailer. The Church once again demanded that Julf turn over the identity of one of its users, claiming that the poster had infringed the Church's copyright on the confidential material. The Church was successful in finding the originating e-mail address of the posting before Penet remailed it, but it turned out to be another anonymous remailer: the alpha.c2.org nymserver, a more advanced and more secure remailer which didn't keep a mapping of e-mail addresses that could be subpoenaed.
Facing much criticism and many attacks, and unable to guarantee the anonymity of Penet users, Julf shut down the remailer in September 1996.
See also
Anonymous remailer
Crypto-anarchism
Cypherpunk
Pseudonymous remailer
Sintercom
The Law of Cyber-Space
References
Further reading
External links
Cryptography law
Anonymity networks
Internet properties established in 1993
Internet properties disestablished in 1996
Scientology and the Internet
Internet services shut down by a legal challenge
Routing
Network architecture
Internet in Finland |
87027 | https://en.wikipedia.org/wiki/Malleability%20%28cryptography%29 | Malleability (cryptography) | Malleability is a property of some cryptographic algorithms. An encryption algorithm is "malleable" if it is possible to transform a ciphertext into another ciphertext which decrypts to a related plaintext. That is, given an encryption of a plaintext m, it is possible to generate another ciphertext which decrypts to f(m), for a known function f, without necessarily knowing or learning m.
Malleability is often an undesirable property in a general-purpose cryptosystem, since it allows an attacker to modify the contents of a message. For example, suppose that a bank uses a stream cipher to hide its financial information, and a user sends an encrypted message containing, say, an instruction to transfer a certain amount to a particular account. If an attacker can modify the message on the wire and can guess the format of the unencrypted message, the attacker may be able to change the amount of the transaction, or the recipient of the funds. Malleability does not refer to the attacker's ability to read the encrypted message. Both before and after tampering, the attacker cannot read the encrypted message.
On the other hand, some cryptosystems are malleable by design. In other words, in some circumstances it may be viewed as a feature that anyone can transform an encryption of m into a valid encryption of f(m) (for some restricted class of functions f) without necessarily learning m. Such schemes are known as homomorphic encryption schemes.
A cryptosystem may be semantically secure against chosen plaintext attacks or even non-adaptive chosen ciphertext attacks (CCA1) while still being malleable. However, security against adaptive chosen ciphertext attacks (CCA2) is equivalent to non-malleability.
Example malleable cryptosystems
In a stream cipher, the ciphertext c is produced by taking the exclusive or of the plaintext m and a pseudorandom stream S based on a secret key k, as c = E(m) = m ⊕ S(k). An adversary can construct an encryption of m ⊕ t for any t, as c ⊕ t = (m ⊕ t) ⊕ S(k) = E(m ⊕ t).
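The following Python sketch illustrates this bit-flipping property; the keystream is derived from SHA-256 purely to keep the example self-contained and is a stand-in for a real stream cipher.

```python
# Demonstration of stream-cipher malleability: flipping bits of the
# ciphertext flips the same bits of the recovered plaintext. The keystream
# is generated from SHA-256 in counter mode only so the example is
# self-contained; it stands in for a real stream cipher.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret key"
m = b"PAY 0100 EUR"
c = xor(m, keystream(key, len(m)))           # c = m XOR S(k)

t = xor(b"PAY 0100 EUR", b"PAY 9999 EUR")    # difference the attacker wants
c_forged = xor(c, t)                         # computed without knowing m or k

print(xor(c_forged, keystream(key, len(c_forged))))   # b'PAY 9999 EUR'
```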
In the RSA cryptosystem, a plaintext m is encrypted as E(m) = m^e mod n, where (e, n) is the public key. Given such a ciphertext, an adversary can construct an encryption of mt for any t, as E(m)·t^e mod n = (mt)^e mod n = E(mt). For this reason, RSA is commonly used together with padding methods such as OAEP or PKCS1.
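A toy illustration of textbook-RSA malleability, using deliberately tiny parameters chosen only for readability:

```python
# Textbook-RSA malleability: multiplying a ciphertext by t^e mod n yields a
# valid encryption of m*t without knowledge of m. The tiny primes are for
# illustration only and offer no security.
p, q, e = 61, 53, 17
n = p * q
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                 # private exponent (Python 3.8+ modular inverse)

m = 42
c = pow(m, e, n)                    # E(m) = m^e mod n

t = 3
c_forged = (c * pow(t, e, n)) % n   # E(m) * t^e mod n = (m*t)^e mod n

assert pow(c_forged, d, n) == (m * t) % n
print(pow(c_forged, d, n))          # 126, i.e. m*t
```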
In the ElGamal cryptosystem, a plaintext m is encrypted as E(m) = (g^b, m·A^b), where (g, A) is the public key and b is a random value. Given such a ciphertext (c1, c2), an adversary can compute (c1, t·c2), which is a valid encryption of tm, for any t.
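A toy Python illustration of ElGamal malleability with small, purely illustrative parameters:

```python
# ElGamal malleability: given a ciphertext (c1, c2) for m, the pair
# (c1, t*c2 mod p) is a valid encryption of t*m mod p. The small prime and
# keys are for illustration only.
import random

p, g = 467, 2                 # public group parameters (toy sized)
a = 123                       # private key
A = pow(g, a, p)              # public key

m = 57
b = random.randrange(1, p - 1)
c1, c2 = pow(g, b, p), (m * pow(A, b, p)) % p   # E(m) = (g^b, m*A^b)

t = 5
c2_forged = (t * c2) % p      # attacker rescales the second component

s = pow(c1, a, p)             # decryption: shared secret g^(a*b)
recovered = (c2_forged * pow(s, -1, p)) % p
assert recovered == (t * m) % p
print(recovered)              # 285, i.e. t*m
```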
In contrast, the Cramer-Shoup system (which is based on ElGamal) is not malleable.
In the Paillier, ElGamal, and RSA cryptosystems, it is also possible to combine several ciphertexts together in a useful way to produce a related ciphertext. In Paillier, given only the public key and encryptions of m1 and m2, one can compute a valid encryption of their sum m1 + m2. In ElGamal and in RSA, one can combine encryptions of m1 and m2 to obtain a valid encryption of their product m1·m2.
Block ciphers in the cipher block chaining mode of operation, for example, are partly malleable: flipping a bit in a ciphertext block will completely mangle the plaintext it decrypts to, but will result in the same bit being flipped in the plaintext of the next block. This allows an attacker to 'sacrifice' one block of plaintext in order to change some data in the next one, possibly managing to maliciously alter the message. This is essentially the core idea of the padding oracle attack on CBC, which allows the attacker to decrypt almost an entire ciphertext without knowing the key. For this and many other reasons, a message authentication code is required to guard against any method of tampering.
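The bit-flipping behaviour of CBC can be sketched as follows; the example assumes the third-party pycryptodome package for the raw AES primitive and writes out the CBC chaining by hand so the effect on neighbouring blocks is visible.

```python
# CBC bit-flipping: XORing bytes of ciphertext block i garbles decrypted
# block i but flips exactly the same bytes in decrypted block i+1.
# Assumes the third-party pycryptodome package for the raw AES primitive.
from Crypto.Cipher import AES   # pip install pycryptodome

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, iv = b"0" * 16, b"1" * 16
blocks = [b"block-number-01!", b"PAY 000100 EUR.."]   # two 16-byte blocks

ecb = AES.new(key, AES.MODE_ECB)

# encrypt in CBC mode: C_i = AES(P_i XOR C_{i-1}), with C_0 = IV
prev, ct = iv, []
for block in blocks:
    c = ecb.encrypt(xor(block, prev))
    ct.append(c)
    prev = c

# the attacker XORs a chosen difference into ciphertext block 0
delta = xor(b"PAY 000100 EUR..", b"PAY 999999 EUR..")
ct[0] = xor(ct[0], delta)

# decrypt in CBC mode: P_i = AES^-1(C_i) XOR C_{i-1}
prev, pt = iv, []
for c in ct:
    pt.append(xor(ecb.decrypt(c), prev))
    prev = c

print(pt[0])   # random-looking garbage -- the "sacrificed" block
print(pt[1])   # b'PAY 999999 EUR..'
```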
Complete non-malleability
Fischlin, in 2005, defined the notion of complete non-malleability as the ability of the system to remain non-malleable while giving the adversary additional power to choose a new public key which could be a function of the original public key. In other words, the adversary shouldn't be able to come up with a ciphertext whose underlying plaintext is related to the original message through a relation that also takes public keys into account.
See also
Homomorphic encryption
References
Cryptography |
87043 | https://en.wikipedia.org/wiki/International%20Association%20for%20Cryptologic%20Research | International Association for Cryptologic Research | The International Association for Cryptologic Research (IACR) is a non-profit scientific organization that furthers research in cryptology and related fields. The IACR was organized at the initiative of David Chaum at the CRYPTO '82 conference.
Activities
The IACR organizes and sponsors three annual flagship conferences, four area conferences in specific sub-areas of cryptography, and one symposium:
Crypto (flagship)
Eurocrypt (flagship)
Asiacrypt (flagship)
Fast Software Encryption (FSE)
Public Key Cryptography (PKC)
Cryptographic Hardware and Embedded Systems (CHES)
Theory of Cryptography (TCC)
Real World Crypto Symposium (RWC)
Several other conferences and workshops are held in cooperation with the IACR. Since 2015, selected summer schools have been officially sponsored by the IACR. CRYPTO '83 was the first conference officially sponsored by the IACR.
The IACR publishes the Journal of Cryptology, in addition to the proceedings of its conference and workshops. The IACR also maintains the Cryptology ePrint Archive, an online repository of cryptologic research papers aimed at providing rapid dissemination of results.
Asiacrypt
Asiacrypt (also ASIACRYPT) is an international conference for cryptography research. The full name of the conference is currently International Conference on the Theory and Application of Cryptology and Information Security, though this has varied over time. Asiacrypt is a conference sponsored by the IACR since 2000, and is one of its three flagship conferences. Asiacrypt is now held annually in November or December at various locations throughout Asia and Australia.
Initially, the Asiacrypt conferences were called AUSCRYPT, as the first one was held in Sydney, Australia in 1990, and only later did the community decide that the conference should be held in locations throughout Asia. The first conference to be called "Asiacrypt" was held in 1991 in Fujiyoshida, Japan.
Cryptographic Hardware and Embedded Systems
Cryptographic Hardware and Embedded Systems (CHES) is a conference for cryptography research, focusing on the implementation of cryptographic algorithms. The two general areas treated are the efficient and the secure implementation of algorithms. Related topics such as random number generators, physical unclonable function or special-purpose cryptanalytical machines are also commonly covered at the workshop. It was first held in Worcester, Massachusetts in 1999 at Worcester Polytechnic Institute (WPI). It was founded by Çetin Kaya Koç and Christof Paar. CHES 2000 was also held at WPI; after that, the conference has been held at various locations worldwide. The locations in the first ten years were, in chronological order, Paris, San Francisco, Cologne, Boston, Edinburgh, Yokohama, Vienna, Washington, D.C., and Lausanne. Since 2009, CHES rotates between the three continents Europe, North America and Asia. The attendance record was set by CHES 2018 in Amsterdam with about 600 participants.
Eurocrypt
Eurocrypt (or EUROCRYPT) is a conference for cryptography research. The full name of the conference is now the Annual International Conference on the Theory and Applications of Cryptographic Techniques. Eurocrypt is one of the IACR flagship conferences, along with CRYPTO and ASIACRYPT.
Eurocrypt is held annually in the spring in various locations throughout Europe. The first workshop in the series of conferences that became known as Eurocrypt was held in 1982. In 1984, the name "Eurocrypt" was first used. Generally, there have been published proceedings including all papers at the conference every year, with two exceptions: in 1983, no proceedings were produced, and in 1986, the proceedings contained only abstracts. Springer has published all the official proceedings, first as part of Advances in Cryptology in the Lecture Notes in Computer Science series.
Fast Software Encryption
Fast Software Encryption, often abbreviated FSE, is a workshop for cryptography research, focused on symmetric-key cryptography with an emphasis on fast, practical techniques, as opposed to theory. Though "encryption" is part of the conference title, it is not limited to encryption research; research on other symmetric techniques such as message authentication codes and hash functions is often presented there. FSE has been an IACR workshop since 2002, though the first FSE workshop was held in 1993. FSE is held annually in various locations worldwide, mostly in Europe. The dates of the workshop have varied over the years, but recently, it has been held in February.
Public Key Cryptography
PKC or Public-Key Cryptography is the short name of the International Workshop on Theory and Practice in Public Key Cryptography (modified as International Conference on Theory and Practice in Public Key Cryptography since 2006).
Theory of Cryptography
The Theory of Cryptography Conference, often abbreviated TCC, is an annual conference for theoretical cryptography research. It was first held in 2004 at MIT, and was also held at MIT in 2005, both times in February. TCC became an IACR-sponsored workshop in 2006. The founding steering committee consists of Mihir Bellare, Ivan Damgård, Oded Goldreich, Shafi Goldwasser, Johan Håstad, Russell Impagliazzo, Ueli Maurer, Silvio Micali, Moni Naor, and Tatsuaki Okamoto.
The importance of the theoretical study of Cryptography is widely recognized by now. This area has contributed much to the practice of cryptography and secure systems as well as to the theory of computation at large.
The needs of the theoretical cryptography (TC) community are best understood in relation to the two communities between which it resides: the Theory of Computation (TOC) community and the Cryptography/Security community. All three communities have grown in volume in recent years. This increase in volume makes the hosting of TC by the existing TOC and Crypto conferences quite problematic. Furthermore, the perspectives of TOC and Crypto on TC do not necessarily fit the internal perspective of TC and the interests of TC. All these indicate a value in the establishment of an independent specialized conference. A dedicated conference not only provides opportunities for research dissemination and interaction, but helps shape the field, give it a recognizable identity, and communicate its message.
Real World Crypto Symposium
The Real World Crypto Symposium is a conference for applied cryptography research, which was started in 2012 by Kenny Paterson and Nigel Smart. The winner of the Levchin Prize is announced at RWC.
Announcements made at the symposium include the first known chosen prefix attack on SHA-1 and the inclusion of end-to-end encryption in Facebook Messenger. Also, the introduction of the E4 chip took place at RWC. Flaws in messaging apps such as WhatsApp were also presented there.
International Cryptology Conference
CRYPTO, the International Cryptology Conference, is an academic conference on all aspects of cryptography and cryptanalysis. It is held yearly in August in Santa Barbara, California at the University of California, Santa Barbara.
The first CRYPTO was held in 1981. It was the first major conference on cryptology and was all the more important because relations between government, industry and academia were rather tense. Encryption was considered a very sensitive subject and the coming together of delegates from different countries was unheard-of at the time. The initiative for the formation of the IACR came during CRYPTO '82, and CRYPTO '83 was the first IACR sponsored conference.
Fellows
The IACR Fellows Program (FIACR) has been established as an honor to bestow upon its exceptional members. There are currently 68 IACR Fellows.
References
External links
IACR workshops page, contains links to FSE home pages from 2002
FSE bibliography from 1993
IACR workshops page
PKC homepage
Bibliography data for Eurocrypt proceedings
Conference proceedings online, 1982-1997
Asiacrypt/Auscrypt bibliography from 1990
IACR conferences page; contains links to Asiacrypt homepages from 2000
Cryptography organizations |
87202 | https://en.wikipedia.org/wiki/Coda%20%28file%20system%29 | Coda (file system) | Coda is a distributed file system developed as a research project at Carnegie Mellon University since 1987 under the direction of Mahadev Satyanarayanan. It descended directly from an older version of Andrew File System (AFS-2) and offers many similar features. The InterMezzo file system was inspired by Coda.
Features
Coda has many features that are desirable for network file systems, and several features not found elsewhere.
Disconnected operation for mobile computing.
Is freely available under the GPL
High performance through client side persistent caching
Server replication
Security model for authentication, encryption and access control
Continued operation during partial network failures in server network
Network bandwidth adaptation
Good scalability
Well defined semantics of sharing, even in the presence of network failure
Coda uses a local cache to provide access to server data when the network connection is lost. During normal operation, a user reads and writes to the file system normally, while the client fetches, or "hoards", all of the data the user has listed as important in the event of network disconnection. If the network connection is lost, the Coda client's local cache serves data from this cache and logs all updates. This operating state is called disconnected operation. Upon network reconnection, the client moves to reintegration state; it sends logged updates to the servers. Then it transitions back to normal connected-mode operation.
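A highly simplified sketch of this state machine, written in Python purely for illustration (it is not Coda's actual client logic), might look like this:

```python
# Toy sketch of the client states described above: serve writes normally
# while connected, log them while disconnected, and replay the log
# ("reintegration") when connectivity returns. Purely illustrative.
from enum import Enum, auto

class State(Enum):
    CONNECTED = auto()
    DISCONNECTED = auto()    # serve from the local cache, log all updates
    REINTEGRATING = auto()   # replay logged updates to the servers

class CodaLikeClient:
    def __init__(self, server):
        self.server = server
        self.cache = {}      # hoarded copies of files marked as important
        self.log = []        # updates made while disconnected
        self.state = State.CONNECTED

    def write(self, path, data):
        self.cache[path] = data
        if self.state is State.CONNECTED:
            self.server.write(path, data)
        else:
            self.log.append((path, data))

    def read(self, path):
        return self.cache[path]          # reads are served from the cache

    def disconnect(self):
        self.state = State.DISCONNECTED

    def reconnect(self):
        self.state = State.REINTEGRATING
        for path, data in self.log:      # reintegration: replay the log
            self.server.write(path, data)
        self.log.clear()
        self.state = State.CONNECTED
```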
Also different from AFS is Coda's data replication method. AFS uses a pessimistic replication strategy with its files, only allowing one read/write server to receive updates and all other servers acting as read-only replicas. Coda allows all servers to receive updates, allowing for a greater availability of server data in the event of network partitions, a case which AFS cannot handle.
These unique features introduce the possibility of semantically diverging copies of the same files or directories, known as "conflicts". Disconnected operation's local updates can potentially clash with other connected users' updates on the same objects, preventing reintegration. Optimistic replication can potentially cause concurrent updates to different servers on the same object, preventing replication. The former case is called a "local/global" conflict, and the latter case a "server/server" conflict. Coda has extensive repair tools, both manual and automated, to handle and repair both types of conflicts.
Supported platforms
Coda has been developed on Linux, and support for it appeared in the 2.1 Linux kernel series. It has also been ported to FreeBSD, where it was subsequently made obsolete; an effort is under way to bring it back. Efforts have been made to port Coda to Microsoft Windows, from the Windows 95/Windows 98 era through Windows NT to Windows XP, by means of open-source projects such as the DJGPP DOS C compiler and Cygwin.
References
External links
Coda website at Carnegie Mellon University
Coda: a highly available file system for a distributed workstation network, Mahadev Satyanarayanan James J. Kistler, Puneet Kumar, IEEE Transactions on Computers, Vol. 39, No. 4, April 1990
The Coda Distributed Filesystem for Linux, Bill von Hagen, October 7, 2002.
The Coda Distributed File System with Picture representation, Peter J. Braam, School of Computer Science,
Network file systems
Distributed file systems
Distributed file systems supported by the Linux kernel
Carnegie Mellon University software |
87231 | https://en.wikipedia.org/wiki/Surveillance | Surveillance | Surveillance is the monitoring of behavior, activities, or information for the purpose of information gathering, influencing, managing or directing. This can include observation from a distance by means of electronic equipment, such as closed-circuit television (CCTV), or interception of electronically transmitted information like Internet traffic. It can also include simple technical methods, such as human intelligence gathering and postal interception.
Surveillance is used by citizens for protecting their neighborhoods, and by governments for intelligence gathering, including espionage, prevention of crime, the protection of a process, person, group or object, or the investigation of crime. It is also used by criminal organizations to plan and commit crimes, and by businesses to gather intelligence on criminals, their competitors, suppliers or customers. Religious organisations charged with detecting heresy and heterodoxy may also carry out surveillance.
Auditors carry out a form of surveillance.
A byproduct of surveillance is that it can unjustifiably violate people's privacy and is often criticized by civil liberties activists. Liberal democracies may have laws that seek to restrict governmental and private use of surveillance, whereas authoritarian governments seldom have any domestic restrictions.
Espionage is by definition covert and typically illegal according to the rules of the observed party, whereas most types of surveillance are overt and are considered legitimate. International espionage seems to be common among all types of countries.
Methods
Computer
The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by federal law enforcement agencies.
There is far too much data on the Internet for human investigators to manually search through all of it. Therefore, automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic to identify and report to human investigators the traffic that is considered interesting or suspicious. This process is regulated by targeting certain "trigger" words or phrases, visiting certain types of web sites, or communicating via email or online chat with suspicious individuals or groups. Billions of dollars per year are spent by agencies, such as the NSA, the FBI and the now-defunct Information Awareness Office, to develop, purchase, implement, and operate systems such as Carnivore, NarusInsight, and ECHELON to intercept and analyze all of this data to extract only the information which is useful to law enforcement and intelligence agencies.
Computers can be a surveillance target because of the personal data stored on them. If someone is able to install software, such as the FBI's Magic Lantern and CIPAV, on a computer system, they can easily gain unauthorized access to this data. Such software could be installed physically or remotely. Another form of computer surveillance, known as van Eck phreaking, involves reading electromagnetic emanations from computing devices in order to extract data from them at distances of hundreds of meters. The NSA runs a database known as "Pinwale", which stores and indexes large numbers of emails of both American citizens and foreigners. Additionally, the NSA runs a program known as PRISM, which is a data mining system that gives the United States government direct access to information from technology companies. Through accessing this information, the government is able to obtain search history, emails, stored information, live chats, file transfers, and more. This program generated huge controversies in regards to surveillance and privacy, especially from U.S. citizens.
Telephones
The official and unofficial tapping of telephone lines is widespread. In the United States for instance, the Communications Assistance For Law Enforcement Act (CALEA) requires that all telephone and VoIP communications be available for real-time wiretapping by Federal law enforcement and intelligence agencies. Two major telecommunications companies in the U.S.—AT&T Inc. and Verizon—have contracts with the FBI, requiring them to keep their phone call records easily searchable and accessible for Federal agencies, in return for $1.8 million per year. Between 2003 and 2005, the FBI sent out more than 140,000 "National Security Letters" ordering phone companies to hand over information about their customers' calling and Internet histories. About half of these letters requested information on U.S. citizens.
Human agents are not required to monitor most calls. Speech-to-text software creates machine-readable text from intercepted audio, which is then processed by automated call-analysis programs, such as those developed by agencies such as the Information Awareness Office, or companies such as Verint, and Narus, which search for certain words or phrases, to decide whether to dedicate a human agent to the call.
Law enforcement and intelligence services in the United Kingdom and the United States possess technology to activate the microphones in cell phones remotely, by accessing phones' diagnostic or maintenance features in order to listen to conversations that take place near the person who holds the phone.
The StingRay tracker is an example of one of these tools used to monitor cell phone usage in the United States and the United Kingdom. Originally developed for counterterrorism purposes by the military, they work by broadcasting powerful signals that cause nearby cell phones to transmit their IMSI number, just as they would to normal cell phone towers. Once the phone is connected to the device, there is no way for the user to know that they are being tracked. The operator of the stingray is able to extract information such as location, phone calls, and text messages, but it is widely believed that the capabilities of the StingRay extend much further. A lot of controversy surrounds the StingRay because of its powerful capabilities and the secrecy that surrounds it.
Mobile phones are also commonly used to collect location data. The geographical location of a mobile phone (and thus the person carrying it) can be determined easily even when the phone is not being used, using a technique known as multilateration to calculate the differences in time for a signal to travel from the cell phone to each of several cell towers near the owner of the phone. The legality of such techniques has been questioned in the United States, in particular whether a court warrant is required. Records for one carrier alone (Sprint), showed that in a given year federal law enforcement agencies requested customer location data 8 million times.
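The idea behind multilateration can be illustrated with a toy Python example that recovers a position from arrival-time differences by brute-force search; real systems solve the hyperbolic equations directly, and the coordinates and propagation speed here are arbitrary placeholders.

```python
# Toy illustration of multilateration: estimate a handset's position from
# the differences in signal arrival time at several towers. A brute-force
# grid search is used only to keep the sketch short; all units are made up.
import itertools, math

towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
true_pos = (3.0, 7.0)
C = 1.0   # propagation speed in grid units per time unit (placeholder)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# observed arrival-time differences, all taken relative to the first tower
arrival = [dist(true_pos, t) / C for t in towers]
tdoa = [a - arrival[0] for a in arrival]

def mismatch(p):
    d = [dist(p, t) / C for t in towers]
    return sum((d[i] - d[0] - tdoa[i]) ** 2 for i in range(1, len(towers)))

grid = [i / 10 for i in range(101)]                     # 0.0 .. 10.0
estimate = min(itertools.product(grid, grid), key=mismatch)
print(estimate)   # approximately (3.0, 7.0)
```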
In response to customers' privacy concerns in the post Edward Snowden era, Apple's iPhone 6 has been designed to disrupt investigative wiretapping efforts. The phone encrypts e-mails, contacts, and photos with a code generated by a complex mathematical algorithm that is unique to an individual phone, and is inaccessible to Apple. The encryption feature on the iPhone 6 has drawn criticism from FBI director James B. Comey and other law enforcement officials since even lawful requests to access user content on the iPhone 6 will result in Apple supplying "gibberish" data that requires law enforcement personnel to either break the code themselves or to get the code from the phone's owner. Because the Snowden leaks demonstrated that American agencies can access phones anywhere in the world, privacy concerns in countries with growing markets for smart phones have intensified, providing a strong incentive for companies like Apple to address those concerns in order to secure their position in the global market.
Although the CALEA requires telecommunication companies to build into their systems the ability to carry out a lawful wiretap, the law has not been updated to address the issue of smart phones and requests for access to e-mails and metadata. The Snowden leaks show that the NSA has been taking advantage of this ambiguity in the law by collecting metadata on "at least hundreds of millions" of "incidental" targets from around the world. The NSA uses an analytic tool known as CO-TRAVELER in order to track people whose movements intersect and to find any hidden connections with persons of interest.
The Snowden leaks have also revealed that the British Government Communications Headquarters (GCHQ) can access information collected by the NSA on American citizens. Once the data has been collected, the GCHQ can hold on to it for up to two years. The deadline can be extended with the permission of a "senior UK official".
Cameras
Surveillance cameras, or security cameras, are video cameras used for the purpose of observing an area. They are often connected to a recording device or IP network, and may be watched by a security guard or law enforcement officer. Cameras and recording equipment used to be relatively expensive and required human personnel to monitor camera footage, but analysis of footage has been made easier by automated software that organizes digital video footage into a searchable database, and by video analysis software (such as VIRAT and HumanID). The amount of footage is also drastically reduced by motion sensors which record only when motion is detected. With cheaper production techniques, surveillance cameras are simple and inexpensive enough to be used in home security systems, and for everyday surveillance.
As of 2016, there are about 350 million surveillance cameras worldwide. About 65% of these cameras are installed in Asia. The growth of CCTV has been slowing in recent years. In 2018, China was reported to have a huge surveillance network of over 170 million CCTV cameras with 400 million new cameras expected to be installed in the next three years, many of which use facial recognition technology.
In the United States, the Department of Homeland Security awards billions of dollars per year in Homeland Security grants for local, state, and federal agencies to install modern video surveillance equipment. For example, the city of Chicago, Illinois, recently used a $5.1 million Homeland Security grant to install an additional 250 surveillance cameras, and connect them to a centralized monitoring center, along with its preexisting network of over 2000 cameras, in a program known as Operation Virtual Shield. Speaking in 2009, Chicago Mayor Richard Daley announced that Chicago would have a surveillance camera on every street corner by the year 2016. New York City received a $350 million grant towards the development of the Domain Awareness System, which is an interconnected system of sensors including 18,000 CCTV cameras used for continual surveillance of the city by both police officers and artificial intelligence systems.
In the United Kingdom, the vast majority of video surveillance cameras are not operated by government bodies, but by private individuals or companies, especially to monitor the interiors of shops and businesses. According to 2011 Freedom of Information Act requests, the total number of local government operated CCTV cameras was around 52,000 over the entirety of the UK. The prevalence of video surveillance in the UK is often overstated due to unreliable estimates being requoted; for example one report in 2002 extrapolated from a very small sample to estimate the number of cameras in the UK at 4.2 million (of which 500,000 were in Greater London). More reliable estimates put the number of private and local government operated cameras in the United Kingdom at around 1.85 million in 2011.
In the Netherlands, one example city where there are cameras is The Hague. There, cameras are placed in city districts in which the most illegal activity is concentrated. Examples are the red-light districts and the train stations.
As part of China's Golden Shield Project, several U.S. corporations, including IBM, General Electric, and Honeywell, have been working closely with the Chinese government to install millions of surveillance cameras throughout China, along with advanced video analytics and facial recognition software, which will identify and track individuals everywhere they go. They will be connected to a centralized database and monitoring station, which will, upon completion of the project, contain a picture of the face of every person in China: over 1.3 billion people. Lin Jiang Huai, the head of China's "Information Security Technology" office (which is in charge of the project), credits the surveillance systems in the United States and the U.K. as the inspiration for what he is doing with the Golden Shield Project.
The Defense Advanced Research Projects Agency (DARPA) is funding a research project called Combat Zones That See that will link up cameras across a city to a centralized monitoring station, identify and track individuals and vehicles as they move through the city, and report "suspicious" activity (such as waving arms, looking side-to-side, standing in a group, etc.).
At Super Bowl XXXV in January 2001, police in Tampa, Florida, used Identix's facial recognition software, FaceIt, to scan the crowd for potential criminals and terrorists in attendance at the event (it found 19 people with pending arrest warrants).
Governments often initially claim that cameras are meant to be used for traffic control, but many of them end up using them for general surveillance. For example, Washington, D.C. had 5,000 "traffic" cameras installed under this premise, and then after they were all in place, networked them all together and then granted access to the Metropolitan Police Department, so they could perform "day-to-day monitoring".
The development of centralized networks of CCTV cameras watching public areas – linked to computer databases of people's pictures and identity (biometric data), able to track people's movements throughout the city, and identify whom they have been with – has been argued by some to present a risk to civil liberties. Trapwire is an example of such a network.
Social network analysis
One common form of surveillance is to create maps of social networks based on data from social networking sites such as Facebook, MySpace, Twitter as well as from traffic analysis information from phone call records such as those in the NSA call database, and others. These social network "maps" are then data mined to extract useful information such as personal interests, friendships & affiliations, wants, beliefs, thoughts, and activities.
Many U.S. government agencies such as the Defense Advanced Research Projects Agency (DARPA), the National Security Agency (NSA), and the Department of Homeland Security (DHS) are investing heavily in research involving social network analysis. The intelligence community believes that the biggest threat to U.S. power comes from decentralized, leaderless, geographically dispersed groups of terrorists, subversives, extremists, and dissidents. These types of threats are most easily countered by finding important nodes in the network, and removing them. To do this requires a detailed map of the network.
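A minimal sketch of this kind of analysis, using the third-party networkx library and an invented set of contacts, ranks nodes by betweenness centrality, a standard measure of how much a node bridges the rest of the network:

```python
# Minimal sketch of the analysis described above: build a graph of who
# communicates with whom and rank "important" nodes by betweenness
# centrality. Uses the third-party networkx library; the contact pairs are
# invented purely for illustration.
import networkx as nx

calls = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
         ("carol", "dave"), ("dave", "erin"), ("erin", "frank"),
         ("dave", "frank")]

G = nx.Graph()
G.add_edges_from(calls)

# Nodes that lie on many shortest paths act as bridges; removing them
# fragments the network the most.
ranking = sorted(nx.betweenness_centrality(G).items(),
                 key=lambda kv: kv[1], reverse=True)
for node, score in ranking:
    print(f"{node:6s} {score:.2f}")
```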
Jason Ethier of Northeastern University, in his study of modern social network analysis, said the following of the Scalable Social Network Analysis Program developed by the Information Awareness Office:
AT&T developed a programming language called "Hancock", which is able to sift through enormous databases of phone call and Internet traffic records, such as the NSA call database, and extract "communities of interest"—groups of people who call each other regularly, or groups that regularly visit certain sites on the Internet. AT&T originally built the system to develop "marketing leads", but the FBI has regularly requested such information from phone companies such as AT&T without a warrant, and, after using the data, stores all information received in its own databases, regardless of whether or not the information was ever useful in an investigation.
Some people believe that the use of social networking sites is a form of "participatory surveillance", where users of these sites are essentially performing surveillance on themselves, putting detailed personal information on public websites where it can be viewed by corporations and governments. In 2008, about 20% of employers reported using social networking sites to collect personal data on prospective or current employees.
Biometric
Biometric surveillance is a technology that measures and analyzes human physical and/or behavioral characteristics for authentication, identification, or screening purposes. Examples of physical characteristics include fingerprints, DNA, and facial patterns. Examples of mostly behavioral characteristics include gait (a person's manner of walking) or voice.
Facial recognition is the use of the unique configuration of a person's facial features to accurately identify them, usually from surveillance video. Both the Department of Homeland Security and DARPA are heavily funding research into facial recognition systems. The Information Processing Technology Office ran a program known as Human Identification at a Distance, which developed technologies capable of identifying a person at a distance by their facial features.
Another form of behavioral biometrics, based on affective computing, involves computers recognizing a person's emotional state based on an analysis of their facial expressions, how fast they are talking, the tone and pitch of their voice, their posture, and other behavioral traits. This might be used for instance to see if a person's behavior is suspect (looking around furtively, "tense" or "angry" facial expressions, waving arms, etc.).
A more recent development is DNA profiling, which looks at some of the major markers in the body's DNA to produce a match. The FBI is spending $1 billion to build a new biometric database, which will store DNA, facial recognition data, iris/retina (eye) data, fingerprints, palm prints, and other biometric data of people living in the United States. The computers running the database are contained in an underground facility about the size of two American football fields.
The Los Angeles Police Department is installing automated facial recognition and license plate recognition devices in its squad cars, and providing handheld face scanners, which officers will use to identify people while on patrol.
Facial thermographs are in development, which allow machines to identify certain emotions in people such as fear or stress, by measuring the temperature generated by blood flow to different parts of the face. Law enforcement officers believe that this has potential for them to identify when a suspect is nervous, which might indicate that they are hiding something, lying, or worried about something.
In his paper in Ethics and Information Technology, Avi Marciano maps the harms caused by biometric surveillance, traces their theoretical origins, and brings these harms together in one integrative framework to elucidate their cumulative power. Marciano proposes four types of harms: Unauthorized use of bodily information, denial or limitation of access to physical spaces, bodily social sorting, and symbolic ineligibility through construction of marginality and otherness. Biometrics' social power, according to Marciano, derives from three main features: their complexity as "enigmatic technologies", their objective-scientific image, and their increasing agency, particularly in the context of automatic decision-making.
Aerial
Aerial surveillance is the gathering of surveillance, usually visual imagery or video, from an airborne vehicle—such as an unmanned aerial vehicle, helicopter, or spy plane. Military surveillance aircraft use a range of sensors (e.g. radar) to monitor the battlefield.
Digital imaging technology, miniaturized computers, and numerous other technological advances over the past decade have contributed to rapid advances in aerial surveillance hardware such as micro-aerial vehicles, forward-looking infrared, and high-resolution imagery capable of identifying objects at extremely long distances. For instance, the MQ-9 Reaper, a U.S. drone plane used for domestic operations by the Department of Homeland Security, carries cameras that are capable of identifying an object the size of a milk carton from high altitude, and has forward-looking infrared devices that can detect the heat from a human body at long range. In an earlier instance of commercial aerial surveillance, the Killington Mountain ski resort hired 'eye in the sky' aerial photography of its competitors' parking lots to judge the success of its marketing initiatives as it developed starting in the 1950s.
The United States Department of Homeland Security is in the process of testing UAVs to patrol the skies over the United States for the purposes of critical infrastructure protection, border patrol, "transit monitoring", and general surveillance of the U.S. population. Miami-Dade police department ran tests with a vertical take-off and landing UAV from Honeywell, which is planned to be used in SWAT operations. Houston's police department has been testing fixed-wing UAVs for use in "traffic control".
The United Kingdom, as well, is working on plans to build up a fleet of surveillance UAVs ranging from micro-aerial vehicles to full-size drones, to be used by police forces throughout the U.K.
In addition to their surveillance capabilities, MAVs are capable of carrying tasers for "crowd control", or weapons for killing enemy combatants.
Programs such as the Heterogeneous Aerial Reconnaissance Team program developed by DARPA have automated much of the aerial surveillance process. They have developed systems consisting of large teams of drone planes that pilot themselves, automatically decide who is "suspicious" and how to go about monitoring them, coordinate their activities with other drones nearby, and notify human operators if something suspicious is occurring. This greatly increases the amount of area that can be continuously monitored, while reducing the number of human operators required. Thus a swarm of automated, self-directing drones can automatically patrol a city and track suspicious individuals, reporting their activities back to a centralized monitoring station.
In addition, researchers also investigate possibilities of autonomous surveillance by large groups of micro aerial vehicles stabilized by decentralized bio-inspired swarming rules.
Corporate
Corporate surveillance is the monitoring of a person or group's behavior by a corporation. The data collected is most often used for marketing purposes or sold to other corporations, but is also regularly shared with government agencies. It can be used as a form of business intelligence, which enables the corporation to better tailor their products and/or services to be desirable by their customers. Although there is a common belief that monitoring can increase productivity, it can also create consequences such as increasing chances of deviant behavior and creating punishments that are not equitable to their actions. Additionally, monitoring can cause resistance and backlash because it insinuates an employer's suspicion and lack of trust.
Data mining and profiling
Data mining is the application of statistical techniques and programmatic algorithms to discover previously unnoticed relationships within the data. Data profiling in this context is the process of assembling information about a particular individual or group in order to generate a profile — that is, a picture of their patterns and behavior. Data profiling can be an extremely powerful tool for psychological and social network analysis. A skilled analyst can discover facts about a person that they might not even be consciously aware of themselves.
Economic (such as credit card purchases) and social (such as telephone calls and emails) transactions in modern society create large amounts of stored data and records. In the past, this data was documented in paper records, leaving a "paper trail", or was simply not documented at all. Correlation of paper-based records was a laborious process—it required human intelligence operators to manually dig through documents, which was time-consuming and incomplete, at best.
But today many of these records are electronic, resulting in an "electronic trail". Every use of a bank machine, payment by credit card, use of a phone card, call from home, checked out library book, rented video, or otherwise complete recorded transaction generates an electronic record. Public records—such as birth, court, tax and other records—are increasingly being digitized and made available online. In addition, due to laws like CALEA, web traffic and online purchases are also available for profiling. Electronic record-keeping makes data easily collectable, storable, and accessible—so that high-volume, efficient aggregation and analysis is possible at significantly lower costs.
Information relating to many of these individual transactions is often easily available because it is generally not guarded in isolation, since the information, such as the title of a movie a person has rented, might not seem sensitive. However, when many such transactions are aggregated they can be used to assemble a detailed profile revealing the actions, habits, beliefs, locations frequented, social connections, and preferences of the individual. This profile is then used, by programs such as ADVISE and TALON, to determine whether the person is a military, criminal, or political threat.
In addition to its own aggregation and profiling tools, the government is able to access information from third parties — for example, banks, credit companies or employers, etc. — by requesting access informally, by compelling access through the use of subpoenas or other procedures, or by purchasing data from commercial data aggregators or data brokers. The United States has spent $370 million on its 43 planned fusion centers, which form a national network of surveillance centers located in over 30 states. The centers will collect and analyze vast amounts of data on U.S. citizens. They will get this data by consolidating personal information from sources such as state driver's licensing agencies, hospital records, criminal records, school records, credit bureaus, banks, etc. – and placing this information in a centralized database that can be accessed from all of the centers, as well as other federal law enforcement and intelligence agencies.
Under United States v. Miller (1976), data held by third parties is generally not subject to Fourth Amendment warrant requirements.
Human operatives
Organizations that have enemies who wish to gather information about the groups' members or activities face the issue of infiltration.
In addition to operatives' infiltrating an organization, the surveilling party may exert pressure on certain members of the target organization to act as informants (i.e., to disclose the information they hold on the organization and its members).
Fielding operatives is very expensive, and for governments with wide-reaching electronic surveillance tools at their disposal the information recovered from operatives can often be obtained from less problematic forms of surveillance such as those mentioned above. Nevertheless, human infiltrators are still common today. For instance, in 2007 documents surfaced showing that the FBI was planning to field a total of 15,000 undercover agents and informants in response to an anti-terrorism directive sent out by George W. Bush in 2004 that ordered intelligence and law enforcement agencies to increase their HUMINT capabilities.
Satellite imagery
On May 25, 2007, the U.S. Director of National Intelligence Michael McConnell authorized the National Applications Office (NAO) of the Department of Homeland Security to allow local, state, and domestic Federal agencies to access imagery from military intelligence Reconnaissance satellites and Reconnaissance aircraft sensors which can now be used to observe the activities of U.S. citizens. The satellites and aircraft sensors will be able to penetrate cloud cover, detect chemical traces, and identify objects in buildings and "underground bunkers", and will provide real-time video at much higher resolutions than the still-images produced by programs such as Google Earth.
Identification and credentials
One of the simplest forms of identification is the carrying of credentials. Some nations have an identity card system to aid identification, whilst others are considering it but face public opposition. Other documents, such as passports, driver's licenses, library cards, banking or credit cards are also used to verify identity.
If the form of the identity card is "machine-readable", usually using an encoded magnetic stripe or identification number (such as a Social Security number), it corroborates the subject's identifying data. In this case it may create an electronic trail when it is checked and scanned, which can be used in profiling, as mentioned above.
Wireless tracking
This section refers to methods that involve the monitoring of tracking devices through the aid of wireless signals.
Mobile phones
Mobile carrier antennas are also commonly used to collect geolocation data on mobile phones. The geographical location of a powered mobile phone (and thus the person carrying it) can be determined easily (whether it is being used or not), using a technique known as multilateration to calculate the differences in time for a signal to travel from the cell phone to each of several cell towers near the owner of the phone. Dr. Victor Kappeler of Eastern Kentucky University indicates that police surveillance is a strong concern, citing statistics from 2013.
A comparatively new off-the-shelf surveillance device is an IMSI-catcher, a telephone eavesdropping device used to intercept mobile phone traffic and track the movement of mobile phone users. Essentially a "fake" mobile tower acting between the target mobile phone and the service provider's real towers, it is considered a man-in-the-middle (MITM) attack. IMSI-catchers are used in some countries by law enforcement and intelligence agencies, but their use has raised significant civil liberty and privacy concerns and is strictly regulated in some countries.
In March 2020, British daily The Guardian, based on the claims of a whistleblower, accused the government of Saudi Arabia of exploiting global mobile telecom network weaknesses to spy on its citizens traveling around the United States. The data shared by the whistleblower in support of the claims showed that a systematic spying campaign was being run by the kingdom exploiting the flaws of SS7, a global messaging system. The data showed that millions of secret tracking commands originated from Saudi Arabia over a four-month period beginning in November 2019.
RFID tagging
Radio Frequency Identification (RFID) tagging is the use of very small electronic devices (called "RFID tags") which are applied to or incorporated into a product, animal, or person for the purpose of identification and tracking using radio waves. The tags can be read from several meters away. They are extremely inexpensive, costing a few cents per piece, so they can be inserted into many types of everyday products without significantly increasing the price, and can be used to track and identify these objects for a variety of purposes.
Some companies appear to be "tagging" their workers by incorporating RFID tags in employee ID badges. Workers in the U.K. considered strike action in protest of having themselves tagged; they felt that it was dehumanizing to have all of their movements tracked with RFID chips. Some critics have expressed fears that people will soon be tracked and scanned everywhere they go. On the other hand, RFID tags in newborn baby ID bracelets put on by hospitals have foiled kidnappings.
In a 2003 editorial, CNET News.com's chief political correspondent, Declan McCullagh, speculated that, soon, every object that is purchased, and perhaps ID cards, will have RFID devices in them, which would respond with information about people as they walk past scanners (what type of phone they have, what type of shoes they have on, which books they are carrying, what credit cards or membership cards they have, etc.). This information could be used for identification, tracking, or targeted marketing. So far, this has largely not come to pass.
RFID tagging on humans
A human microchip implant is an identifying integrated circuit device or RFID transponder encased in silicate glass and implanted in the body of a human being. A subdermal implant typically contains a unique ID number that can be linked to information contained in an external database, such as personal identification, medical history, medications, allergies, and contact information.
Several types of microchips have been developed in order to control and monitor certain types of people, such as criminals, political figures and spies; a "killer" tracking chip patent was filed at the German Patent and Trademark Office (DPMA) around May 2009.
Verichip is an RFID device produced by a company called Applied Digital Solutions (ADS). Verichip is slightly larger than a grain of rice, and is injected under the skin. The injection reportedly feels similar to receiving a shot. The chip is encased in glass, and stores a "VeriChip Subscriber Number" which the scanner uses to access their personal information, via the Internet, from Verichip Inc.'s database, the "Global VeriChip Subscriber Registry". Thousands of people have already had them inserted. In Mexico, for example, 160 workers at the Attorney General's office were required to have the chip injected for identity verification and access control purposes.
Implantable microchips have also been used in healthcare settings, but ethnographic researchers have identified a number of ethical problems with such uses; these problems include unequal treatment, diminished trust, and possible endangerment of patients.
Geolocation devices
Global Positioning System
In the U.S., police have planted hidden GPS tracking devices in people's vehicles to monitor their movements, without a warrant. In early 2009, they were arguing in court that they had the right to do this.
Several cities are running pilot projects to require parolees to wear GPS devices to track their movements when they get out of prison.
Devices
Covert listening devices and video devices, or "bugs", are hidden electronic devices which are used to capture, record, and/or transmit data to a receiving party such as a law enforcement agency.
The U.S. has run numerous domestic intelligence operations, such as COINTELPRO, which have bugged the homes, offices, and vehicles of thousands of U.S. citizens, usually political activists, subversives, and criminals.
Law enforcement and intelligence services in the U.K. and the United States possess technology to remotely activate the microphones in cell phones, by accessing the phone's diagnostic/maintenance features, in order to listen to conversations that take place nearby the person who holds the phone.
Postal services
As more people use faxes and e-mail, the significance of surveilling the postal system is decreasing, in favor of Internet and telephone surveillance. But interception of post is still an available option for law enforcement and intelligence agencies, in certain circumstances. This is not a common practice, however, and entities like the US Army require high levels of approval to conduct it.
The U.S. Central Intelligence Agency and Federal Bureau of Investigation have performed twelve separate mail-opening campaigns targeted towards U.S. citizens. In one of these programs, more than 215,000 communications were intercepted, opened, and photographed.
Stakeout
A stakeout is the coordinated surveillance of a location or person. Stakeouts are generally performed covertly and for the purpose of gathering evidence related to criminal activity. The term derives from the practice by land surveyors of using survey stakes to measure out an area before the main building project begins.
Internet of things
The Internet of Things (IoT) is a term that refers to the future of technology in which data can be collected without human-to-human or human-to-computer interaction. IoTs can be used for identification, monitoring, location tracking, and health tracking. While IoTs have the benefit of being a time-saving tool that makes activities simpler, they raise the concern of government surveillance and privacy regarding how data will be used.
Controversy
Support
Supporters of surveillance systems believe that these tools can help protect society from terrorists and criminals. They argue that surveillance can reduce crime by three means: by deterrence, by observation, and by reconstruction. Surveillance can deter by increasing the chance of being caught, and by revealing the modus operandi. This requires a minimal level of invasiveness.
Another method on how surveillance can be used to fight criminal activity is by linking the information stream obtained from them to a recognition system (for instance, a camera system that has its feed run through a facial recognition system). This can for instance auto-recognize fugitives and direct police to their location.
A distinction has to be made, however, on the type of surveillance employed. Some people who say they support video surveillance in city streets may not support indiscriminate telephone taps, and vice versa. Besides the type, the way in which this surveillance is done also matters a lot; i.e., indiscriminate telephone taps are supported by far fewer people than telephone taps done only to people suspected of engaging in illegal activities.
Surveillance can also be used to give human operatives a tactical advantage through improved situational awareness, or through the use of automated processes, i.e. video analytics. Surveillance can help reconstruct an incident and prove guilt through the availability of footage for forensics experts. Surveillance can also influence subjective security if surveillance resources are visible or if the consequences of surveillance can be felt.
Some of the surveillance systems (such as the camera system that has its feed run through a facial recognition system mentioned above) can also have other uses besides countering criminal activity. For instance, it can help on retrieving runaway children, abducted or missing adults and mentally disabled people.
Other supporters simply believe that there is nothing that can be done about the loss of privacy, and that people must become accustomed to having no privacy. As Sun Microsystems CEO Scott McNealy said: "You have zero privacy anyway. Get over it."
Another common argument is: "If you aren't doing something wrong then you don't have anything to fear." The implication is that only someone engaging in unlawful activities lacks a legitimate justification for their privacy, whereas someone who follows the law would not be affected by the surveillance.
Opposition
With the advent of programs such as the Total Information Awareness program and ADVISE, technologies such as high speed surveillance computers and biometrics software, and laws such as the Communications Assistance for Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of their subjects. Many civil rights and privacy groups, such as the Electronic Frontier Foundation and American Civil Liberties Union, have expressed concern that by allowing continual increases in government surveillance of citizens we will end up in a mass surveillance society, with extremely limited, or non-existent political and/or personal freedoms. Fears such as this have led to numerous lawsuits such as Hepting v. AT&T.
Some critics state that the claim made by supporters should be modified to read: "As long as we do what we're told, we have nothing to fear." For instance, a person who is part of a political group which opposes the policies of the national government might not want the government to know their names and what they have been reading, so that the government cannot easily subvert their organization, arrest, or kill them. Other critics state that while a person might not have anything to hide right now, the government might later implement policies that they do wish to oppose, and that opposition might then be impossible due to mass surveillance enabling the government to identify and remove political threats. Further, other critics point to the fact that most people do have things to hide. For example, if a person is looking for a new job, they might not want their current employer to know this. Also, if an employer wishes total privacy to watch over their own employees and secure their financial information, it may become impossible, and they may not wish to hire those under surveillance.
In December 2017, the Government of China took steps to oppose widespread surveillance by security-company cameras, webcams, and IP cameras after tens of thousands were made accessible for internet viewing by the IT company Qihoo.
Totalitarianism
Programs such as the Total Information Awareness program, and laws such as the Communications Assistance For Law Enforcement Act have led many groups to fear that society is moving towards a state of mass surveillance with severely limited personal, social, political freedoms, where dissenting individuals or groups will be strategically removed in COINTELPRO-like purges.
Kate Martin, of the Center For National Security Studies said of the use of military spy satellites being used to monitor the activities of U.S. citizens: "They are laying the bricks one at a time for a police state."
Some point to the blurring of lines between public and private places, and the privatization of places traditionally seen as public (such as shopping malls and industrial parks) as illustrating the increasing legality of collecting personal information. Traveling through many public places such as government offices is hardly optional for most people, yet consumers have little choice but to submit to companies' surveillance practices. Surveillance techniques are not created equal; among the many biometric identification technologies, for instance, face recognition requires the least cooperation. Unlike automatic fingerprint reading, which requires an individual to press a finger against a machine, this technique is subtle and requires little to no consent.
Psychological/social effects
Some critics, such as Michel Foucault, believe that in addition to its obvious function of identifying and capturing individuals who are committing undesirable acts, surveillance also functions to create in everyone a feeling of always being watched, so that they become self-policing. This allows the State to control the populace without having to resort to physical force, which is expensive and otherwise problematic.
With the development of digital technology, individuals have become increasingly perceptible to one another, as surveillance becomes virtual. Online surveillance is the utilization of the internet to observe one's activity. Corporations, citizens, and governments participate in tracking others' behaviours for motivations that range from business relations to curiosity to legality. In her book Superconnected, Mary Chayko differentiates between two types of surveillance: vertical and horizontal. Vertical surveillance occurs when there is a dominant force, such as the government, that is attempting to control or regulate the actions of a given society. Such powerful authorities often justify their incursions as a means to protect society from threats of violence or terrorism. Some individuals question when this becomes an infringement on civil rights.
Horizontal surveillance diverges from vertical surveillance as the tracking shifts from an authoritative source to an everyday figure, such as a friend, coworker, or stranger who is interested in one's mundane activities. Individuals leave traces of information when they are online that reveal their interests and desires, which others observe. While this can allow people to become interconnected and develop social connections online, it can also increase the potential risk of harm, such as cyberbullying or censoring/stalking by strangers, reducing privacy.
In addition, Simone Browne argues that surveillance wields an immense racializing quality such that it operates as "racializing surveillance." Browne uses racializing surveillance to refer to moments when enactments of surveillance are used to reify boundaries, borders, and bodies along racial lines and where the outcome is discriminatory treatment of those who are negatively racialized by such surveillance. Browne argues racializing surveillance pertains to policing what is "in or out of place."
Privacy
Numerous civil rights groups and privacy groups oppose surveillance as a violation of people's right to privacy. Such groups include: Electronic Privacy Information Center, Electronic Frontier Foundation, American Civil Liberties Union and Privacy International.
There have been several lawsuits such as Hepting v. AT&T and EPIC v. Department of Justice by groups or individuals, opposing certain surveillance activities.
Legislative proceedings such as those that took place during the Church Committee, which investigated domestic intelligence programs such as COINTELPRO, have also weighed the pros and cons of surveillance.
Court cases
People v. Diaz (2011) was a court case in the realm of cell phone privacy, even though the decision was later overturned. In this case, Gregory Diaz was arrested during a sting operation for attempting to sell ecstasy. During his arrest, police searched Diaz's phone and found more incriminating evidence including SMS text messages and photographs depicting illicit activities. During his trial, Diaz attempted to have the information from his cell phone removed from evidence, but the courts deemed it lawful and Diaz's appeal was denied at the California State Court level and, later, the Supreme Court level. Just three years later, this decision was overturned in the case Riley v. California (2014).
Riley v. California (2014) was a U.S. Supreme Court case in which a man was arrested for his involvement in a drive-by shooting. A few days after the shooting the police arrested the suspect (Riley), and, during the arrest, the police searched him. However, this search was not only of Riley's person; the police also opened and searched his cell phone, finding pictures of other weapons, drugs, and of Riley showing gang signs. In court, the question arose whether searching the phone was lawful or whether the search was protected by the Fourth Amendment of the Constitution. The decision held that the search of Riley's cell phone during the arrest was illegal, and that it was protected by the Fourth Amendment.
Countersurveillance, inverse surveillance, sousveillance
Countersurveillance is the practice of avoiding surveillance or making surveillance difficult. Developments in the late twentieth century have caused counter surveillance to dramatically grow in both scope and complexity, such as the Internet, increasing prevalence of electronic security systems, high-altitude (and possibly armed) UAVs, and large corporate and government computer databases.
Inverse surveillance is the practice of the reversal of surveillance on other individuals or groups (e.g., citizens photographing police). Well-known examples include George Holliday's recording of the Rodney King beating and the organization Copwatch, which attempts to monitor police officers to prevent police brutality. Counter-surveillance can also be used in applications to prevent corporate spying, or to track other criminals by certain criminal entities. It can also be used to deter stalking methods used by various entities and organizations.
Sousveillance is inverse surveillance, involving the recording by private individuals, rather than government or corporate entities.
Popular culture
In literature
George Orwell's novel Nineteen Eighty-Four portrays a fictional totalitarian surveillance society with a very simple mass surveillance system consisting of human operatives, informants, and two-way "telescreens" in people's homes. Because of the impact of this book, mass-surveillance technologies are commonly called "Orwellian" when they are considered problematic.
The novel Mistrust highlights the negative effects from the overuse of surveillance at Reflection House. The central character Kerryn installs secret cameras to monitor her housemates – see also Paranoia.
The book The Handmaid's Tale, as well as a film and TV series based on it, portray a totalitarian Christian theocracy where all citizens are kept under constant surveillance.
In the book The Girl with the Dragon Tattoo, Lisbeth Salander uses computers to get information on people, as well as other common surveillance methods, as a freelancer.
V for Vendetta, a British graphic novel written by Alan Moore
Dave Eggers's novel The Circle exhibits a world where a single company called "The Circle" produces all of the latest and highest quality technologies, from computers and smartphones to surveillance cameras known as "See-Change cameras". This company becomes associated with politics when starting a movement where politicians go "transparent" by wearing See-Change cameras on their body to prevent keeping secrets from the public about their daily work activity. In this society, it becomes mandatory to share personal information and experiences because it is The Circle's belief that everyone should have access to all information freely. However, as Eggers illustrates, this takes a toll on the individuals and creates a disruption of power between the governments and the private company. The Circle presents extreme ideologies surrounding mandatory surveillance. Eamon Bailey, one of the Wise Men, or founders of The Circle, believes that possessing the tools to access information about anything or anyone should be a human right given to all of the world's citizens. By eliminating all secrets, any behaviour that has been deemed shameful will either become normalized or no longer be considered shocking. Negative actions will eventually be eradicated from society altogether, through the fear of being exposed to other citizens. This would be achieved in part by everyone going transparent, something that Bailey highly supports, although it's notable that none of the Wise Men ever became transparent themselves. One major goal of The Circle is to have all of the world's information filtered through The Circle, a process they call "Completion". A single, private company would then have full access and control over all information and privacy of individuals and governments. Ty Gospodinov, the first founder of The Circle, has major concerns about the completion of the circle. He warns that this step would give The Circle too much power and control, and would quickly lead to totalitarianism.
In music
The Dead Kennedys' song "I Am The Owl" is about government surveillance and social engineering of political groups.
The Vienna Teng song "Hymn of Acxiom" is about corporate data collection and surveillance.
Onscreen
The film Gattaca portrays a society that uses biometric surveillance to distinguish between people who are genetically engineered "superior" humans and genetically natural "inferior" humans.
In the movie Minority Report, the police and government intelligence agencies use micro aerial vehicles in SWAT operations and for surveillance purposes.
HBO's crime-drama series The Sopranos regularly portrays the FBI's surveillance of the DiMeo Crime Family. Audio devices they use include "bugs" placed in strategic locations (e.g., in "I Dream of Jeannie Cusamano" and "Mr. Ruggerio's Neighborhood") and hidden microphones worn by operatives (e.g., in "Rat Pack") and informants (e.g., in "Funhouse", "Proshai, Livushka" and "Members Only"). Visual devices include hidden still cameras (e.g., in "Pax Soprana") and video cameras (e.g., in "Long Term Parking").
The movie THX-1138 portrays a society wherein people are drugged with sedatives and antidepressants, and have surveillance cameras watching them everywhere they go.
The movie The Lives of Others portrays the monitoring of East Berlin by agents of the Stasi, the GDR's secret police.
The movie The Conversation portrays many methods of audio surveillance.
The movie V for Vendetta, a 2005 dystopian political thriller film directed by James McTeigue and written by the Wachowskis, is about the British government trying to brainwash people by media, obtain their support by fearmongering, monitor them by mass surveillance devices, and suppress or kill any political or social objection.
The movie Enemy of the State, a 1998 American action-thriller film directed by Tony Scott, is about using U.S. citizens' data to search their background and using surveillance devices to capture everyone identified as an "enemy".
The British TV series The Capture explores the potential for video surveillance to be manipulated in order to support a conviction to pursue a political agenda.
See also
Mass surveillance
Sousveillance
Surveillance art
Surveillance capitalism
Surveillance system monitor
Trapwire
Participatory surveillance
PRISM (surveillance program)
References
Further reading
Allmer, Thomas. (2012). Towards a Critical Theory of Surveillance in Informational Capitalism. Frankfurt am Main: Peter Lang.
Andrejevic, Mark. 2007. iSpy: Surveillance and Power in the Interactive Era. Lawrence, KS: University Press of Kansas.
Ball, Kirstie, Kevin D. Haggerty, and David Lyon, eds. (2012). Routledge Handbook of Surveillance Studies. New York: Routledge.
Brayne, Sarah. (2020). Predict and Surveil: Data, Discretion, and the Future of Policing. New York: Oxford University Press.
Browne, Simone. (2015). Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.
Coleman, Roy, and Michael McCahill. 2011. Surveillance & Crime. Thousand Oaks, Calif.: Sage.
Feldman, Jay. (2011). Manufacturing Hysteria: A History of Scapegoating, Surveillance, and Secrecy in Modern America. New York, NY: Pantheon Books.
Fuchs, Christian, Kees Boersma, Anders Albrechtslund, and Marisol Sandoval, eds. (2012). "Internet and Surveillance: The Challenges of Web 2.0 and Social Media". New York: Routledge.
Garfinkel, Simson, Database Nation; The Death of Privacy in the 21st Century. O'Reilly & Associates, Inc.
Gilliom, John. (2001). Overseers of the Poor: Surveillance, Resistance, and the Limits of Privacy. University of Chicago Press.
Haque, Akhlaque. (2015). Surveillance, Transparency and Democracy: Public Administration in the Information Age. University of Alabama Press, Tuscaloosa, AL.
Harris, Shane. (2011). The Watchers: The Rise of America's Surveillance State. London, UK: Penguin Books Ltd.
Hier, Sean P., & Greenberg, Joshua (Eds.). (2009). Surveillance: Power, Problems, and Politics. Vancouver, CA: UBC Press.
Jensen, Derrick and Draffan, George (2004). Welcome to the Machine: Science, Surveillance, and the Culture of Control. Chelsea Green Publishing Company.
Lewis, Randolph. (2017). Under Surveillance: Being Watched in Modern America. Austin: University of Texas Press.
Lyon, David (2001). Surveillance Society: Monitoring in Everyday Life. Philadelphia: Open University Press.
Lyon, David (Ed.). (2006). Theorizing Surveillance: The Panopticon and Beyond. Cullompton, UK: Willan Publishing.
Lyon, David (2007) Surveillance Studies: An Overview. Cambridge: Polity Press.
Matteralt, Armand. (2010). The Globalization of Surveillance. Cambridge, UK: Polity Press.
Monahan, Torin, ed. (2006). Surveillance and Security: Technological Politics and Power in Everyday Life. New York: Routledge.
Monahan, Torin. (2010). Surveillance in the Time of Insecurity. New Brunswick: Rutgers University Press.
Monahan, Torin, and David Murakami Wood, eds. (2018). Surveillance Studies: A Reader. New York: Oxford University Press.
Parenti, Christian. The Soft Cage: Surveillance in America From Slavery to the War on Terror. Basic Books.
Petersen, J.K. (2012). Handbook of Surveillance Technologies, Third Edition. Taylor & Francis: CRC Press, 1020 pp.
Staples, William G. (2000). Everyday Surveillance: Vigilance and Visibility in Post-Modern Life. Lanham, MD: Rowman & Littlefield Publishers.
General information
ACLU, "The Surveillance-Industrial Complex: How the American Government Is Conscripting Businesses and Individuals in the Construction of a Surveillance Society"
Balkin, Jack M. (2008). "The Constitution in the National Surveillance State", Yale Law School
Bibo, Didier and Delmas-Marty, "The State and Surveillance: Fear and Control"
EFF Privacy Resources
EPIC Privacy Resources
ICO. (September 2006). "A Report on the Surveillance Society for the Information Commissioner by the Surveillance Studies Network".
Privacy Information Center
Historical information
COINTELPRO—FBI counterintelligence programs designed to neutralize political dissidents
Reversing the Whispering Gallery of Dionysius – A Short History of Electronic Surveillance in the United States
Legal resources
EFF Legal Cases
Guide to lawful intercept legislation around the world
External links
Crime prevention
Espionage techniques
Law enforcement
Law enforcement techniques
National security
Privacy
Security |
87920 | https://en.wikipedia.org/wiki/Accelerator | Accelerator | Accelerator may refer to:
In science and technology
In computing
Download accelerator, or download manager, software dedicated to downloading
Hardware acceleration, the use of dedicated hardware to perform functions faster than a CPU
Graphics processing unit or graphics accelerator, a dedicated graphics-rendering device
Accelerator (library), a library that allows the coding of programs for a graphics processing unit
Cryptographic accelerator, performs decrypting/encrypting
Web accelerator, a proxy server that speeds web-site access
Accelerator (Internet Explorer), a form of selection-based search
Accelerator table, specifies keyboard shortcuts for commands
Apple II accelerators, hardware devices designed to speed up an Apple II computer
PHP accelerator, speeds up software applications written in the PHP programming language
SAP BI Accelerator, speeds up online analytical processing queries
SSL/TLS accelerator, offloads public-key encryption algorithms to a hardware accelerator
TCP accelerator, or TCP Offload Engine, offloads processing of the TCP/IP stack to a network controller
Accelerator (software), collection of development solutions for LANSA and .NET
Keyboard shortcut, a set of key presses that invoke a software or operating system operation
Accelerator or AFU (Accelerator Function Unit): a component of IBM's Coherent Accelerator Processor Interface (CAPI)
In physics and chemistry
Accelerator (chemistry), a substance that increases the rate of a chemical reaction
Araldite accelerator 062, or Dimethylbenzylamine, an organic compound
Cement accelerator, an admixture that speeds the cure time of concrete
Particle accelerator, a device which uses electric and/or magnetic fields to propel charged particles to high speeds
Accelerator-driven system (ADS), a nuclear reactor coupled to a particle accelerator
Accelerator mass spectrometry (AMS), a form of mass spectrometry
Tanning accelerator, chemicals that increase the effect of ultraviolet radiation on human skin
Vulcanizing accelerators, chemical agents to speed the vulcanization of rubber
Firearms
.22 Accelerator or Remington Accelerator, a type of .224 caliber bullet
Electrothermal accelerator, a weapon that uses a plasma discharge to accelerate its projectile
Magnetic accelerator gun, a weapon that converts magnetic energy into kinetic energy for a projectile
Ram accelerator, a device for accelerating projectiles using ramjet or scramjet combustion
Produce accelerators, cannons which use air pressure or combustion to launch large projectiles at low speed
Other technologies
Accelerator, gas pedal or throttle, a foot pedal that controls the engine speed of an automobile
Accelerator Coaster, a roller coaster that uses hydraulic acceleration
In entertainment
The Accelerators, US rock band
Accelerator (Royal Trux album), their seventh studio album, released in 1998
Accelerator (The Future Sound of London album), their debut album
"Accelerator", a 1993 single by Gumball
"Accelerator", a song by band The O.A.O.T.'s, on their album Typical
"Accelerator", a song by the band Primal Scream from their album XTRMNTR
XLR8R (pronounced "accelerator"), a magazine and website that covers music, culture, style, and technology
Accelerator (To Aru Majutsu No Index), one of the main characters in the anime series Toaru Majutsu no Index
Accelerator (Universal Studios Singapore), a whirling twirling ride that spins guests around
PC Accelerator, a personal computer game magazine
The Accelerators (comics), a comic book created by Ronnie Porto and Gavin Smith
Other uses
Accelerator pedal or gas pedal
Seed- or startup-accelerator, an organization that offers advice and resources to help small businesses grow
Accelerator effect, economic stimulus to private fixed investment due to growth in aggregate demand
Saskatoon Accelerators, a professional soccer team based in Saskatoon, Canada
See also
Accelerant
Acceleration (disambiguation)
Accelerate (disambiguation) |
88823 | https://en.wikipedia.org/wiki/AmigaDOS | AmigaDOS | AmigaDOS is the disk operating system of the AmigaOS, which includes file systems, file and directory manipulation, the command-line interface, and file redirection.
In AmigaOS 1.x, AmigaDOS is based on a TRIPOS port by MetaComCo, written in BCPL. BCPL does not use native pointers, so the more advanced functionality of the operating system was difficult to use and error-prone. The third-party AmigaDOS Resource Project (ARP, formerly the AmigaDOS Replacement Project), a project begun by Amiga developer Charlie Heath, replaced many of the BCPL utilities with smaller, more sophisticated equivalents written in C and assembler, and provided a wrapper library, arp.library. This eliminated the interfacing problems in applications by automatically performing conversions from native pointers (such as those used by C or assembler) to BCPL equivalents and vice versa for all AmigaDOS functions.
From AmigaOS 2.x onwards, AmigaDOS was rewritten in C, retaining 1.x compatibility where possible. Starting with AmigaOS 4, AmigaDOS abandoned its legacy with BCPL. Starting from AmigaOS 4.1, AmigaDOS has been extended with 64-bit file-access support.
Console
The Amiga console is a standard Amiga virtual device, normally assigned to CON: and driven by console.handler. It was developed from a primitive interface in AmigaOS 1.1, and became stable with versions 1.2 and 1.3, when it started to be known as AmigaShell and its original handler was replaced by newconsole.handler (NEWCON:).
The console has various features that were considered up to date when it was created in 1985, like command template help, redirection to null ("NIL:"), and ANSI color terminal. The new console handler – which was implemented in release 1.2 – allows many more features, such as command history, pipelines, and automatic creation of files when output is redirected. When TCP/IP stacks like AmiTCP were released in the early 1990s, the console could also receive redirection from Internet-enabled Amiga device handlers (e.g., TCP:).
Unlike other systems originally launched in the mid-1980s, AmigaDOS does not implement a proprietary character set; the developers chose to use the ANSI–ISO standard ISO-8859-1 (Latin 1), which includes the ASCII character set. As in Unix systems, the Amiga console accepts only linefeed ("LF") as an end-of-line ("EOL") character. The Amiga console has support for accented characters as well as for characters created by combinations of 'dead keys' on the keyboard.
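For illustration, the CON: device can be opened directly from a shell by redirecting output to it (a minimal sketch; the window position, size and title below are arbitrary, and the WAIT option assumes AmigaOS 2.0 or later):

1> Echo "hello" > CON:10/10/320/100/Example/WAIT

This opens a new console window at position 10,10 with a size of 320×100 pixels, prints the text into it, and keeps the window open until the user closes it.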
Syntax of AmigaDOS commands
This is an example of typical AmigaDOS command syntax:
1> Dir DF0:
Without entering the directory tree, this shows the content of a directory of a floppy disk and lists subdirectories as well.

1> Dir SYS: ALL
The argument "ALL" causes the command to show the entire content of a volume or device, entering and expanding all directory trees. "SYS:" is a default name that is assigned to the boot device, regardless of its physical name.
Command redirection
AmigaDOS can redirect the output of a command to files, pipes, a printer, the null device, and other Amiga devices.
1> Dir > SPEAK: ALL
Redirects the output of the "dir" command to the speech synthesis handler. The colon character ":" indicates that SPEAK: points to an AmigaDOS device. While a typical use for a device is file systems, special-purpose device names such as this are commonly used in the system.
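Redirection works the same way with ordinary files and with the null device (an illustrative sketch; the file and directory names are arbitrary):

1> Dir > RAM:listing.txt SYS:
Writes the directory listing of SYS: into the file RAM:listing.txt instead of printing it to the console.

1> Copy SYS:S/Startup-Sequence TO RAM: >NIL:
Performs the copy while discarding any console output by redirecting it to the null device NIL:.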
Command template
AmigaDOS commands are expected to provide a standard "template" that describes the arguments they can accept. This can be used as a basic "help" feature for commands, although third-party replacement console handlers and shells, such as Bash or Zshell (ported from Unix), or KingCON often provide more verbose help for built-in commands.
On requesting the template for the command "Copy", the following output is obtained:
1> Copy ?
FROM, TO/A, ALL/S, QUIET/S

This string means that the user must use this command in conjunction with FROM and TO arguments, where the latter is compulsory (/A). The argument keywords ALL and QUIET are switches (/S) and change the results of the command Copy (ALL causes all files in a directory to be copied, while QUIET will cause the command to generate no output).
By reading this template, a user can know that the following syntax is acceptable for the command:
Copy DF0:Filename TO DH0:Directory/Filename
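The switches from the template can also be combined in one invocation (an illustrative sketch; the volume and directory names are arbitrary):

1> Copy FROM DF0:Docs TO RAM:Backup ALL QUIET
Copies the directory Docs and everything beneath it to RAM:Backup without producing any output.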
Breaking commands and pausing console output
A user can terminate a program by invoking the key combination Ctrl+C or Ctrl+D. Pressing the space bar or any printing character on the keyboard suspends the console output. Output may be resumed by pressing the Backspace key (to delete all of the input) or by pressing the Return key (which will cause the input to be processed as a command as soon as the current command stops running).
Wildcard characters
Like other operating systems, AmigaDOS also provides wildcard characters that are substitutes for any character or any sequence of random characters in a string. Here is an example of wildcard characters in AmigaDOS commands:
1> Dir #?.info
Searches the current directory for any file containing ".info" at its end as suffix, and displays only these files in the output.
The parsing of this is as follows. The "?" wildcard indicates "any character". Prefixing this with a "#" indicates "any number of repetitions". This can be viewed as analogous to the regular expression ".*".
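A few further patterns, as an illustrative sketch (the file names are arbitrary):

1> Dir Work:#?.(c|h)
Lists files in Work: ending in ".c" or ".h"; parentheses with "|" express alternatives.

1> Dir ~(#?.info)
Lists everything in the current directory except the ".info" icon files; the "~" prefix negates a pattern.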
Scripting
AmigaDOS also supports batch programming, which it calls "script" programming, and has a number of commands such as Echo, If, Then, EndIf, Val, and Skip to deal with structured script programming. Scripts are text-based files and can be created with AmigaDOS's internal text editor program, called Ed (unrelated to Unix's Ed), or with any other third-party text editor. To invoke a script program, AmigaDOS uses the command Execute.
1> Execute myscript
Executes the script called "myscript".
This method of executing scripts keeps the console window busy until the script has finished its scheduled job. Users cannot interact with the console window until the script ends or until they interrupt it.
While:
1> Run Execute myscript
The AmigaDOS command "Run" executes any DOS command or any kind of program and keeps the console free for further input.
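A small script using some of these commands might look as follows (a minimal sketch; the script and file names are arbitrary, and .bra/.ket are used so that the argument markers do not clash with the redirection characters):

; myscript - report whether a given file exists
.key filename/A
.bra {
.ket }
If EXISTS {filename}
  Echo "{filename} is present"
Else
  Echo "{filename} was not found"
EndIf

It could then be started with "Execute myscript S:Startup-Sequence", or simply by its own name if its S protection bit is set (see the following section).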
Protection bits
Protection bits are flags that files, links and directories have in the filesystem. To change them one can either use the command Protect, or use the Information entry from the Icons menu in Workbench on selected files. AmigaDOS supports the following set of protection bits (abbreviated as HSPARWED):
H = Hold (reentrant commands with the P-bit set will automatically become resident on first execution. Requires E, P and R bits set to work. Does not mean "Hide". See below.)
S = Script (Batch file. Requires E and R bits set to work.) If this protection bit is set, then AmigaDOS is able to recognize and automatically run a script by simply invoking its name, as shown in the example after this list. Without the S bit, scripts can still be launched using the Execute command.
P = Pure (indicates reentrant commands that can be made resident in RAM and then no longer need to be loaded any time from flash drives, hard disks or any other media device. Requires E and R bits set to work.)
A = Archive (Archived bit, used by various backup programs to indicate that a file has been backed up)
R = Read (Permission to read the file, link or content of directory)
W = Write (Permission to write the file, link or inside a directory)
E = Execute (Permission to execute the file or enter the directory. All commands need this bit set, or they won't run. Requires R bit set to work.)
D = Delete (Permission to delete the file, link or directory)
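For example, the S bit allows a script to be started like an ordinary command (an illustrative sketch; the script name is arbitrary, and the R and E bits are assumed to be set, as they are by default):

1> Protect myscript +s
1> myscript

Without the S bit, the same script would have to be started with "Execute myscript".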
The H-bit has often been misunderstood to mean "Hide". In the Smart File System (SFS), files and directories with the H-bit set are hidden from the system. It is still possible to access hidden files, but they don't appear in any directory listings.
Demonstration of H-bit in action:
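(The session below is an illustrative sketch: the Resident listing is shown schematically and its exact format varies between OS versions.)

1> Protect C:List +hp
1> Resident
NAME                          USECOUNT
1> List S:
(the usual directory listing of S: appears)
1> Resident
NAME                          USECOUNT
List                                 1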
Notice how the list command becomes resident after execution when the H-bit is set.
Local and global variables
Like any other DOS, AmigaDOS deals with environment variables as used in batch programming.
There are both global and local variables, and they are referred to with a dollar sign in front of the variable name, for example $myvar. Global variables are available system-wide; local variables are only valid in the current shell. In case of name collision, local variables have precedence over global variables. Global variables can be set using the command SetEnv, while local variables can be set using the command Set. There are also the commands GetEnv and Get that can be used to print out global and local variables.
The examples below demonstrate simple usage:
1> setenv foo blapp
1> echo $foo
blapp
1> set foo bar
1> echo $foo
bar
1> getenv foo
blapp
1> get foo
bar
1> type ENV:foo
blapp
1> setenv save foo $foo
1> type ENV:foo
bar
1> type ENVARC:foo
bar
Note the save flag of the SetEnv command and how global variables are available in the filesystem.
Global variables are kept as files in ENV:, and optionally saved on disk in ENVARC: to survive reboot and power cycling. ENV: is by default an assign to RAM:Env, and ENVARC: is an assign to SYS:Prefs/Env-archive where SYS: refers to the boot device. On bootup, the content of ENVARC: is copied to ENV: for accessibility.
When programming AmigaDOS scripts, one must keep in mind that global variables are system-wide. All script-internal variables shall be set using local variables, or one risks conflicts over global variables between scripts. Also, global variables require filesystem access, which typically makes them slower to access than local variables.
Since ENVARC: is also used to store other system settings than just string variables (such as system settings, default icons and more), it tends to grow large over time, and copying everything over to ENV: located on RAM disk becomes expensive. This has led to alternative ways to set up ENV:, using dedicated ramdisk handlers that only copy files over from ENVARC: when the files are requested.
An example demonstrating creative abuse of global variables as well as Lab and Skip is the AmigaDOS variant of the infamous GOTO.
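A minimal sketch of the idea (the label and variable names are arbitrary, and it assumes the shell substitutes $-variables in script lines before executing them, as the standard AmigaShell does):

setenv target two
lab start
echo "choosing a label at run time"
skip $target
lab one
echo "at label one"
skip end
lab two
echo "at label two"
lab end

Because the label name is held in a global variable, another script (or the user at a shell prompt) can change where the Skip jumps to, giving the kind of unstructured, GOTO-like control flow the construct is infamous for.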
Case sensitivity
AmigaDOS is in general case-insensitive. Indicating a device as "Dh0:", "DH0:" or "dh0:" always refers to the same partition; however, for file and directory names, this is filesystem-dependent, and some filesystems allow case sensitivity as a flag upon formatting. An example of such a file system is Smart File System. This is very convenient when dealing with software ported over from the mostly case-sensitive Un*x world, but causes much confusion for native Amiga applications, which assume case insensitivity. Advanced users will hence typically only use the case sensitivity flag for file systems used for software originating from Un*x.
Re-casing of file, directory and volume names is allowed using ordinary methods; the commands "rename foo Foo" and "relabel Bar: bAr:" are valid and do exactly what is expected, in contrast to for example on Linux, where "mv foo Foo" results in the error message "mv: `foo' and `Foo' are the same file" on case-insensitive filesystems like VFAT.
Volume naming conventions
Partitions and physical drives are typically referred to as DF0: (floppy drive 0), DH0: (hard drive 0), etc. However, unlike many operating systems, outside of built-in physical hardware devices like DF0: or HD0:, the names of the single disks, volumes and partitions are totally arbitrary: for example a hard disk partition could be named Work or System, or anything else at the time of its creation. Volume names can be used in place of the corresponding device names, so a disk partition on device DH0: called Workbench could be accessed either with the name DH0: or Workbench:. Users must indicate to the system that "Workbench" is the volume "Workbench:" by always typing the colon ":" when they are entering information in a requester form or into AmigaShell.
If an accessed volume name cannot be found, the operating system will prompt the user to insert the disk with the given volume name, or allow the user to cancel the operation.
In addition, logical device names can be set with the "assign" command to any directory or device; programs often assigned a virtual volume name to their installation directory (for instance, a fictional wordprocessor called Writer might assign Writer: to DH0:Productivity/Writer). This allows for easy relocation of installed programs. The default name SYS: is used to refer to the volume that the system was booted from. Various other default names are provided to refer to important system locations. e.g. S: for startup scripts, C: for AmigaDOS commands, FONTS: for installed fonts, etc.
Assignment of volume labels can also be set on multiple directories, which will be treated as a union of their contents. For example, FONTS: might be assigned to SYS:Fonts, then extended to include, for example, Work:UserFonts using the add option of the AmigaDOS assign command. The system would then permit use of fonts installed in either directory. Listing FONTS: would show the files from both locations.
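For example (an illustrative sketch following the fictional Writer example above; the paths are arbitrary):

1> Assign Writer: DH0:Productivity/Writer
1> Assign FONTS: Work:UserFonts ADD

The first command creates the logical volume Writer:, and the second extends the existing FONTS: assign so that both SYS:Fonts and Work:UserFonts are searched.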
Conventions of names and typical behaviour of virtual devices
The physical device DF0: shares the same floppy drive mechanics with PC0:, which is the CrossDOS virtual device capable of reading PC formatted floppy disks. When any PC formatted floppy disk is inserted into the floppy drive, the DF0: floppy Amiga icon will change to indicate that the disk is unknown to the normal Amiga device, and it will show four question marks as the standard "unknown" volume name, while the PC0: icon will appear revealing the name of the PC formatted disk. Any disk change with Amiga formatted disks will invert this behaviour.
File systems
AmigaDOS supports various filesystems and variants. The first filesystem was simply called Amiga FileSystem, and was suitable mainly for floppy disks, because it did not support automatic booting from hard disks (on floppy, booting was done using code from the bootblock). It was soon replaced by FastFileSystem (FFS), and hence the original filesystem was known by the name of "Old" FileSystem (OFS). FFS was more efficient on space and quite measurably faster than OFS, hence the name.
With AmigaOS 2.x, FFS became an official part of the OS and was soon expanded to recognise cached partitions, international partitions allowing accented characters in file and partition names, and finally (with MorphOS and AmigaOS 4) long filenames, up to 108 characters (from 31).
Both AmigaOS 4.x and MorphOS featured a new version of FFS called FastFileSystem 2. FFS2 incorporated all of the features of the original FFS including, as its author put it, "some minor changes". In order to preserve backwards compatibility, there were no major structural changes. (However, FF2 on AmigaOS 4.1 differs in that it can expand its features and capabilities with the aid of plug-ins). As with FFS2, the AmigaOS 4 and MorphOS version of Smart FileSystem is a fork of original SFS and are not 100% compatible with it.
Other filesystems like FAT12, FAT16, FAT32 from Windows or ext2 from Linux are available through easily installable (drag and drop) system libraries or third party modules such as FAT95 (features read/write support), which can be found on the Aminet software repository. MorphOS 2 has built-in support for FAT filesystems.
AmigaOS 4.1 adopted a new filesystem called JXFS capable of supporting partitions over a terabyte in size.
Alternate filesystems from third-party manufacturers include Professional FileSystem, which is a filesystem with a simple structure, based on metadata, allowing high internal coherence, capable of defragmenting itself on the fly, and which does not require the disk to be unmounted before being mounted again; and Smart FileSystem, a journaling filesystem which performs journaled activities during system inactivity and has been chosen by MorphOS as its standard filesystem.
Official variants of Amiga filesystems
Old File System/Fast File System
OFS (DOS0)
FFS (DOS1)
OFS International (DOS2)
FFS International (DOS3)
OFS Directory Caching (DOS4)
FFS Directory Caching (DOS5)
Fast File System 2 (AmigaOS4.x/MorphOS)
OFS Long filenames (DOS6)
FFS Long filenames (DOS7)
Both DOS6 and DOS7 feature the International filenames featured in DOS2 and DOS3, but not Directory Caching, which was abandoned due to bugs in the original implementation. DOS4 and DOS5 are not recommended for use for this reason.
Dostypes are backwards compatible with each other, but not forward compatible. A DOS7 formatted disk cannot be read on original Amiga FFS, and a DOS3 disk cannot be read on a KS1.3 Amiga. However, any disk formatted with DOS0 using FFS or FFS2 can be read by any version of the Amiga operating system. For this reason, DOS0 tended to be the format of choice of software developers distributing on floppy, except where a custom filesystem and bootblock was used - a common practice in Amiga games. Where software needed AmigaOS 2 anyway, DOS3 was generally used.
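The dostype is chosen when a volume is formatted. For example (an illustrative sketch using switches of the standard Format command; the volume name is arbitrary):

1> Format DRIVE DF0: NAME Empty FFS INTL NOICONS

formats a floppy as FFS International (DOS3), while leaving out the FFS and INTL switches typically produces a plain OFS (DOS0) disk readable by any Kickstart version.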
FastFileSystem2 plug-ins
With the July 2007 update of AmigaOS 4.0, the first two plug-ins for FFS2 were released:
fs_plugin_cache: increases performance of FFS2 by introducing a new method of data buffering.
fs_plugin_encrypt: data encryption plug-in for partitions using the Blowfish algorithm.
Filename extensions
AmigaDOS has only a single mandated filename extension: ".info", which must be appended to the filename of each icon. If a file called myprog exists, then its icon file must be called myprog.info. In addition to image data, the icon file also records program metadata such as options and keywords, its own position on the desktop (AmigaOS can "snapshot" icons in places defined by the user), and other information about the file. Directory window size and position information is stored in the ".info" file associated with the directory, and disk icon information is stored in "Disk.info" in the root of the volume.
With the exception of icons, the Amiga system does not identify file types using extensions, but instead will examine either the icon associated with a file or the binary header of the file itself to determine the file type.
See also
Comparison of operating systems
References
Further reading
External links
AmigaOS
Disk operating systems
MorphOS
1985 software |
89388 | https://en.wikipedia.org/wiki/Trillian%20%28software%29 | Trillian (software) | Trillian is a proprietary multiprotocol instant messaging application created by Cerulean Studios. It is currently available for Microsoft Windows, Mac OS X, Linux, Android, iOS, BlackBerry OS, and the Web. It can connect to multiple IM services, such as AIM, Bonjour, Facebook Messenger, Google Talk (Hangouts), IRC, XMPP (Jabber), VZ, and Yahoo! Messenger networks; as well as social networking sites, such as Facebook, Foursquare, LinkedIn, and Twitter; and email services, such as POP3 and IMAP.
Trillian no longer supports Windows Live Messenger or Skype as these services have combined and Microsoft chose to discontinue Skypekit. It also no longer supports connecting to MySpace, and no longer supports a distinct connection for Gmail, Hotmail or Yahoo! Mail, although these can still be connected to via POP3 or IMAP. Currently, Trillian supports Facebook, Google, Jabber (XMPP), and Olark.
Initially released July 1, 2000, as a freeware IRC client, the first commercial version (Trillian Pro 1.0) was published on September 10, 2002. The program was named after Trillian, a fictional character in The Hitchhiker's Guide to the Galaxy by Douglas Adams. A previous version of the official web site even had a tribute to Douglas Adams on its front page. On August 14, 2009, Trillian "Astra" (4.0) for Windows was released, along with its own Astra network. Trillian 5 for Windows was released in May 2011, and Trillian 6.0 was initially released in February 2017.
Features
Connection to multiple IM services
Trillian connects to multiple instant messaging services without the need of running multiple clients. Users can create multiple connections to the same service, and can also group connections under separate identities to prevent confusion. All contacts are gathered under the same contact list. Contacts are not bound to their own IM service groups, and can be dragged and dropped freely.
Trillian represents each service with a different-colored sphere. Prior versions used the corporate logos for each service, but these were removed to avoid copyright issues, although some skins still use the original icons. The Trillian designers chose a color-coding scheme based on the underground maps used by the London Underground that uses different colors to differentiate between different lines.
IM services
Green And Blue for Trillian Astra Network
Grey for IRC
Teal and Amber for Google Talk
Amber and Dark Gray for Bonjour (Rendezvous)
Blue And Teal for Facebook
Purple for Jabber/XMPP (partially broken as of 10/27/2017)
Mail services
A White Envelope for POP emails
a Manila Envelope for IMAP emails
a Teal Envelope for Twitter
Prior versions of Trillian supported:
Microsoft Exchange
Lotus Sametime
Novell GroupWise Messenger
Metacontact
To eliminate duplicates and simplify the structure of the contact list, users can bundle multiple contact entries for the same person into one entry in the contact list, using the Metacontact feature (similarly to Ayttm's fallback messaging feature). Subcontacts will appear under the metacontact as small icons aligned in a manner of a tree.
Activity history
Trillian Pro comes with Activity History, which logs conversations as both plain text files and XML files. Pro has a History Manager that shows the chat history and allows the user to add bookmarks for later review. XML-based history makes the log easy to manipulate, search, and extend for future functions.
Stream manipulation
Trillian Pro also has a stream manipulation feature labelled 'time travel', which allows the user to record, and subsequently review, pause, rewind, and fast forward live video and audio sessions.
SecureIM
SecureIM is an encryption system built into the Trillian Instant Messenger Client.
It encrypts messages from user to user, so no passively observing node between the two is supposedly able to read the encrypted messages. SecureIM does not authenticate its messages, and therefore it is susceptible to active attacks including simple forms of man-in-the-middle attacks.
According to Cerulean Studios, the makers of Trillian, SecureIM enciphers messages with 128-bit Blowfish encryption. It only works with the OSCAR protocol and if both chat partners use Trillian.
However, the key used for encryption is established using a Diffie–Hellman key exchange whose modulus is only a 128-bit prime number; this is extremely insecure, and the exchange can be broken within minutes on a standard PC.
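A minimal sketch of the issue, using a stand-in modulus rather than Trillian's actual parameters: a textbook Diffie–Hellman exchange over a prime of roughly 128 bits still produces matching shared secrets, but the discrete logarithm in a group this small is within reach of ordinary hardware, so the derived cipher key offers little protection against a capable eavesdropper.

```python
import secrets

# Stand-in modulus: the Mersenne prime 2^127 - 1, roughly the 128-bit size
# that SecureIM's key exchange reportedly used (illustrative, not Trillian's value).
p = 2**127 - 1
g = 3

# Each side picks a secret exponent and publishes g^x mod p.
a_secret = secrets.randbelow(p - 2) + 1
b_secret = secrets.randbelow(p - 2) + 1
a_public = pow(g, a_secret, p)
b_public = pow(g, b_secret, p)

# Both sides derive the same shared value, which would then seed the cipher key.
shared_a = pow(b_public, a_secret, p)
shared_b = pow(a_public, b_secret, p)
assert shared_a == shared_b

# The flaw: with a modulus this small, recovering a_secret from a_public is a
# ~128-bit discrete logarithm, feasible with sub-exponential algorithms on
# commodity hardware, so a passive observer can reconstruct the shared key.
print(hex(shared_a))
```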
Instant lookup
Starting with version 3.0 in both the Basic and Pro suites, Trillian uses the English-language Wikipedia for real-time reference lookups. The feature works directly within a conversation window: when either user types one or more words that match an entry in a local database file, the word appears with a dotted green underline. Hovering the mouse over the word downloads the lead paragraph of the corresponding article from Wikipedia and displays it on screen as a tooltip; clicking the underlined word offers the choice to visit the article online.
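A rough sketch of the lookup step, not Cerulean's actual implementation: one way to obtain an article's lead paragraph today is Wikipedia's public REST summary endpoint. The URL and response field used below reflect that public API and are assumptions as far as Trillian's own mechanism is concerned.

```python
import json
import urllib.parse
import urllib.request

def wikipedia_summary(term: str) -> str:
    """Fetch the lead-paragraph summary for a term from English Wikipedia."""
    title = urllib.parse.quote(term.replace(" ", "_"))
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    req = urllib.request.Request(url, headers={"User-Agent": "lookup-demo/0.1"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    # The "extract" field holds the plain-text lead paragraph used for a tooltip.
    return data.get("extract", "")

print(wikipedia_summary("Instant messaging")[:200])
```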
Emotiblips
Emotiblips are the video equivalent of an emoticon. During video sessions, the user may stream a song or video to the other user in real time. One can send MP3s, WAVs, WMVs, and MPGs with this feature. QuickTime MOV files as Emotiblips are not currently supported.
Hidden smileys
From version 2.0 onward, the default emoticon set has contained emoticons that do not appear in the menu but can still be used in conversations. Some of these are animations that can only be viewed in Trillian Pro, but all of them can be typed regardless.
Skins and interfaces (Discontinued)
Trillian has its own unique skinning engine known as SkinXML. Many skins have been developed for Trillian and they can be downloaded from the official skins gallery or deviantArt.
Trillian also came with an easier skinning language, Stixe, which is essentially a set of XML Entities that simplifies repetitive codes and allows skinners to share XML and graphics in the form of emoticon packs, sound packs and interfaces.
The default skins of Trillian are designed by Madelena Mak. Trillian Cordillera was used in Trillian 0.7x, while Trillian Whistler has been the default skin for Trillian since Pro 1.0. Small cosmetic changes were noticeable in each major release.
The Trillian Astra features a brand new design for the front-end UI, named Trillian Cordonata.
Plugins (Discontinued)
Trillian is a closed-source application, but the Pro version can be extended by plugins. Plugins by Cerulean Studios itself include spell-check, weather monitor, a mini-browser (for viewing AIM profiles), Winamp song title scroller, stock exchange monitor, RSS feedreader, and conversation abilities for the Logitech G15 keyboard, as well as a plug-in for the XMPP and Bonjour networks. Others have developed various plug-ins, such as a games plug-in which can be used to play chess and checkers, a protocol plugin to send NetBIOS messages through Trillian, a plug-in to interact with Lotus Sametime clients, a plug-in to interact with Microsoft Exchange, a POP3 and IMAP email checker, or an automatic translator for many European languages to and from English.
Trillian 5.1 for Windows and later included a plug-in that allowed users to chat and make calls on Skype without Skype being installed. As of July 2014, Skype is no longer accessible from the Trillian client, as the Skype plug-in no longer works (some users had been able to use older versions of the Trillian client, but these also no longer work with Skype).
Plugins are available for free and are hosted on the official web site, but most need Trillian Pro 2+ to run.
In-Game Chat
Starting with version 5.3, Trillian users can toggle an overlay while playing a video game on the computer, allowing them to use Trillian's chat features in a similar vein to Steam's overlay chat. When toggled, the overlay shows the time according to the system clock, and the chat window itself is a variation of Trillian's base chat window, with tabs used for different sets of queries and channels. When the overlay is not activated, users can view a toggleable sticker that shows how many messages are unread.
History
Early beginnings
After several internal builds, the first ever public release of Trillian, version 0.50, was available on July 1, 2000, and was designed to be an IRC client. The release was deemed 'too buggy' and was immediately pulled off the shelf and replaced by a new version 0.51 on the same day. It featured a simple Connection Manager and skinned windows.
A month later, two minor builds were released with additional IRC features and bug fixes. Despite these efforts, Trillian was not popular, as reflected in the number of downloads from CNET's Download.com.
Trillian was donateware at that time; the developers accepted donations via PayPal through their web site.
Introduction of interoperability
Version 0.6, released November 29, 2000, represented a major change in the direction of development, when the client became able to connect to AOL Instant Messenger, ICQ and MSN Messenger simultaneously in one window.
Although similar products, such as Odigo and Imici, already existed, Trillian was novel in the way that it distinguished contacts from different IM services clearly on the contact list, and it did not require registration of a proprietary account. It also did not lose connection easily like the other clients.
A month later, Yahoo! Messenger support was introduced in Trillian 0.61, and it also featured a holiday skin for Christmas. Meanwhile, the Trillian community forums were opened to the public.
During this period, new versions were released frequently, attracting many enthusiasts to the community. Skinning activity boomed and fan sites were created. A skinning contest was held on deviantArt in Summer, and the winner was selected to design the default skin for the next version of Trillian. Trillian hit 100,000 downloads on August 14, 2001.
Entry into mainstream and the "IM Wars"
Although the community anticipated a version "0.64", the next version of Trillian was numbered 0.70. It was released December 5, 2001, after five months of development, considerably longer than prior builds had taken.
The new version implemented file transfer in all IM services, a feature most requested by the community at the time. It also represented a number of skin language changes. It used the contact list as the main window (as opposed to a status window 'container' in prior versions) and featured a brand new default skin, Trillian Cordillera, and an emoticon set boasting over 100 emoticons, setting a record apart from other messengers available at that time.
Version 0.71 was released on December 18, 2001. It supported AIM group chats and was the first major IM client which included the ability to encrypt messages with SecureIM.
In the following months, the number of downloads of Trillian surged, reaching 1 million on 27 January 2002, and 5 million within 6 months. Trillian received coverage and favorable reviews from mainstream media worldwide, particularly by CNET, Wired and BetaNews. The lead developer and co-founder, Scott Werndorfer, was also interviewed on TechTV.
AOL became aware that Trillian users were able to chat with their AIM buddies without having to download the AIM client, and on January 28, 2002, AOL blocked SecureIM access from Trillian clients. Cerulean appeared to have circumvented the block with version 0.721 of its client software, released one day later. This "AOL War" continued for the next couple of weeks, with Cerulean releasing subsequent patches 0.722, 0.723 and 0.724.
Trillian appeared in the Jupiter Media Metrix Internet audience ratings in February 2002 with 344,000 unique users, and grew to 610,000 by April 2002. While those numbers are very small compared to the major IM networks, Jupiter said Trillian consistently ranks highest according to the number of average minutes spent per month.
Trillian also created a special version for Iomega ActiveDisk.
Commercialisation with Trillian Pro
On September 9, 2002, a commercial version, Trillian Pro 1.0, was released concurrently with Trillian Basic 0.74. The commercial version was sold for US$25 for a year of subscription, but all those who had donated to the development of Trillian were eligible for a year of subscription at no cost.
The new version had added SMS and mobile messaging abilities, Yahoo! Messenger webcam support, pop-up e-mail alerts and new plug-ins to shuttle news, weather and stock quotes directly to buddy lists.
It appeared Trillian Pro would be marketed to corporate clients looking to keep in touch with suppliers or customers via a secure, interoperable IM network with a relatively businesslike user interface. The company had no venture capital backing and had depended entirely on donations from users to stay alive.
In its debut year as "try before you buy" shareware, Trillian Pro 1.0 was selected over three other nominees as the Best Internet Communication shareware.
On April 26, 2003, total downloads of Trillian reached ten million.
Blocking from Yahoo! and cooperation with Gaim
A few weeks after Trillian Pro 2.0 was released, Yahoo! attempted to block Trillian from connecting to its service in their "efforts to implement preventative measures to protect our users from potential spammers." A few patches were released by the Trillian developers, which resolved the issue.
The Trillian developers assisted its open-source cross-platform rival Gaim in solving the Yahoo! connection issues. Sean Egan, the developer of Gaim, posted in its site, "Our friends over at Cerulean Studios managed to break my speed record at cracking Yahoo! authentication schemes with an impressive feat of hackery. They sent it over and here it is in Gaim 0.70." It was later revealed that the developers were friends and had helped each other on past occasions.
Meanwhile, as Microsoft required users to upgrade to MSN Messenger 5.0 because of security-related changes to its servers, October 15, 2003 would also have marked the end of Trillian's MSN Messenger support. However, Cerulean Studios worked with Microsoft to resolve the issue on August 2, 2003, long before the deadline.
On March 7, 2004 and June 23, 2004, Yahoo! changed its instant messaging language again to prevent third-party services, such as Trillian, from accessing its service. Like prior statements, the company said the block is meant as a pre-emptive measure against spammers. Cerulean Studios released a few patches to fix the issues within a day or two.
Trillian 3 Series
In August 2004, a new official blog was created in an attempt to rebuild connections between the Studios and its customers. Trillian 3 was announced on the blog, and a sneak preview was made available to a small group of testers.
After months of beta-testing, the final build of Trillian 3 was released on December 18, 2004, with features such as new video and audio chat abilities throughout AIM, MSN Messenger and Yahoo! Messenger, an enhanced logging manager and integration with the Wikipedia online encyclopedia. It also featured a clean and re-organized user interface and a brand new official web site.
The release also updated the long-abandoned Trillian Basic .74 to match the new user interface and functionalities as Trillian Basic 3.0. The number of accumulated downloads of Trillian Basic in Download.com hit 20 million within a matter of weeks.
Trillian 3.1 was released February 23, 2005. It included new features such as Universal Plug and Play (UPnP) and multiple identities support.
On June 10, 2011, all instances of Trillian 3 Basic got an automatic upgrade to Trillian 3 Pro, free of charge.
U3 and Google Pack
A version of Trillian that could run on U3 USB flash drives was released on October 21, 2005. Trillian could previously be run from generic flash drives or other storage devices with some minor unofficial modifications, known as "Trillian Anywhere". A U3 version of Trillian Astra is also posted on the official Cerulean Studios forum.
On January 6, 2006, Larry Page, President of Products at Google, announced Google Pack, a bundle of various applications including Trillian Basic 3.0 as "a free collection of safe, useful software from Google and other companies that improves the user experience online and on the desktop".
According to the Cerulean Studios blog, Trillian was discontinued from Google Pack on 19 May 2006.
The inclusion of Trillian in Google Pack was perplexing to some media analysts, as Google at the time had its own Google Talk service, which touted the benefits of an open IM system. The free Trillian Basic client could not be used with Google Talk; however, the paid Trillian Pro was listed as one of the "client choices" for Google Talk until Google Talk was replaced by Google Hangouts in May 2013.
Trillian Astra (Trillian 4)
More than a year after the release of Trillian 3.1, the Cerulean Studios blog began publishing news again and announced the next version of Trillian, to be named Trillian Astra. The name for version 4, Astra, is another name used by the fictional character from The Hitchhiker's Guide to the Galaxy who is the namesake of the software. The new release was claimed to be faster and to include a new login screen. A new domain, www.trillianastra.com, was disclosed to the public, showing only the logo on a blue background. On July 3, 2009, Cerulean Studios reopened the premium web version of Astra to public testing. On August 14, 2009, Cerulean Studios released the final gold build. Trillian has its own social network named Astra Network, in which users with an Astra ID can communicate with each other regardless of platform. Cerulean Studios later registered a new domain, www.trillian.im, to provide a more user-friendly experience.
On November 18, 2009, the first mobile version of Trillian was launched for iPhone. As of 2010, final builds for Android, BlackBerry, and Apple iOS were available for their markets (Market, App World and App Store respectively). Trillian initially cost $4.99 USD but became free of charge, supported by ads, in 2011.
As of August 2010, the Mac OS X version was in beta testing.
Trillian 5
On August 2, 2010, Trillian 5.0 was released as a public beta. New features included a resizable interface, history synchronization, a new ribbon-inspired interface with Windows theme integration, new "marble-like" icons for service providers, the option to revert to the Trillian 3 & 4 interfaces, and a new social network interface window. Along with Trillian 5.0 for Windows and the aforementioned Mac beta, the Android and BlackBerry OS final builds were available on their respective markets for free as of 2010.
OpenCandy
Included with the installation of Trillian 5.0 was a program called OpenCandy, which some security programs, including Microsoft Security Essentials, classed as adware. OpenCandy was removed shortly after on May 5, 2011.
Trillian 6
On January 8, 2016, Trillian 6 was released.
As ICQ has decided to disable support for 3rd party IM clients, Trillian is no longer able to connect to ICQ as of April 1, 2019.
See also
List of XMPP clients
Comparison of instant messaging clients
Comparison of IRC clients
Comparison of instant messaging protocols
References
External links
2000 software
AIM (software) clients
IOS software
BlackBerry software
Windows Internet Relay Chat clients
Windows instant messaging clients
Internet Relay Chat clients
Portable software
Instant messaging clients
Online chat
Android (operating system) software
Yahoo! instant messaging clients |
89847 | https://en.wikipedia.org/wiki/IPod | IPod | The iPod is a series of portable media players and multi-purpose mobile devices designed and marketed by Apple Inc. The first version was released on October 23, 2001, several months after the Macintosh version of iTunes was released. As of 2022, only the 7th generation iPod touch remains in production.
Like other digital music players, some versions of the iPod can serve as external data storage devices. Prior to macOS 10.15, Apple's iTunes software (and other alternative software) could be used to transfer music, photos, videos, games, contact information, e-mail settings, Web bookmarks, and calendars to the devices supporting these features from computers using certain versions of Apple macOS and Microsoft Windows operating systems.
Before the release of iOS 5, the iPod branding was used for the media player included with the iPhone and iPad, which was separated into apps named "Music" and "Videos" on the iPod Touch. As of iOS 5, separate Music and Videos apps are standardized across all iOS-powered products. While the iPhone and iPad have essentially the same media player capabilities as the iPod line, they are generally treated as separate products. During the middle of 2010, iPhone sales overtook those of the iPod.
History
Portable MP3 players had been around since the mid 1990s, but Apple found existing digital music players "big and clunky or small and useless" with user interfaces that were "unbelievably awful". Apple thought flash memory-based players didn't carry enough songs and the hard drive based ones were too big and heavy so the company decided to develop its own.
As ordered by CEO Steve Jobs, Apple's hardware engineering chief Jon Rubinstein contacted Tony Fadell, a former employee of General Magic and Philips who had a business idea to invent a better MP3 player and build a music sales store to complement it. Fadell, who had previously developed the Philips Velo and Nino PDA, had started a company called Fuse Systems to build the MP3 player and had been turned down by RealNetworks, Sony and Philips. Rubinstein had already discovered the Toshiba hard disk drive while meeting with an Apple supplier in Japan, and purchased the rights to it for Apple, and had also already worked out how the screen, battery, and other key elements would work.
Fadell found support for his project with Apple Computer and was hired by Apple in 2001 as an independent contractor to work on the iPod project, then code-named project P-68. Due to the engineers and resources at Apple being constrained with the iMac line, Fadell hired engineers from his startup company, Fuse, and veteran engineers from General Magic and Philips to build the core iPod development team.
Time constraints forced Fadell to develop various components of the iPod outside Apple. Fadell partnered with a company called PortalPlayer to design the software for the new Apple music player, which became the iPod OS. Within eight months, Tony Fadell's team and PortalPlayer had completed a prototype. The power supply was then designed by Michael Dhuey, and the display was designed in-house by Apple design engineer Jonathan Ive. The aesthetic was inspired by the 1958 Braun T3 transistor radio designed by Dieter Rams, while the wheel-based user interface was prompted by Bang & Olufsen's BeoCom 6000 telephone.
Apple contracted another company, Pixo, to help design and implement the user interface (as well as Unicode, memory management, and event processing) under the direct supervision of Steve Jobs.
Steve Jobs is said to have dropped a prototype into an aquarium in front of engineers to demonstrate, from the bubbles leaving its housing, that there was internal space still to be saved.
The name iPod was proposed by Vinnie Chieco, a freelance copywriter, who (with others) was called by Apple to figure out how to introduce the new player to the public. After Chieco saw a prototype, he thought of the movie 2001: A Space Odyssey and the phrase "Open the pod bay doors, Hal", which refers to the white EVA Pods of the Discovery One spaceship. Chieco saw an analogy to the relationship between the spaceship and the smaller independent pods in the relationship between a personal computer and the music player.
The product (which Fortune called "Apple's 21st-Century Walkman") was developed in less than one year and unveiled on October 23, 2001. Jobs announced it as a Mac-compatible product with a 5 GB hard drive that put "1,000 songs in your pocket."
Apple researched the trademark and found that it was already in use. Joseph N. Grasso of New Jersey had originally listed an "iPod" trademark with the U.S. Patent and Trademark Office (USPTO) in July 2000 for Internet kiosks. The first iPod kiosks had been demonstrated to the public in New Jersey in March 1998, and commercial use began in January 2000 but had apparently been discontinued by 2001. The trademark was registered by the USPTO in November 2003, and Grasso assigned it to Apple Computer, Inc. in 2005.
The earliest recorded use in commerce of an "iPod" trademark was in 1991 by Chrysalis Corp. of Sturgis, Michigan, styled "iPOD", for office furniture.
As development progressed, Apple continued to refine the software's look and feel, rewriting much of the code. Starting with the iPod Mini, the Chicago font was replaced with Espy Sans. Later iPods switched fonts again to Podium Sans—a font similar to Apple's corporate font, Myriad. Color display iPods then adopted some Mac OS X themes like Aqua progress bars, and brushed metal meant to evoke a combination lock.
In 2007, Apple modified the iPod interface again with the introduction of the sixth-generation iPod Classic and third-generation iPod Nano by changing the font to Helvetica and, in most cases, splitting the screen in half by displaying the menus on the left and album artwork, photos, or videos on the right (whichever was appropriate for the selected item).
In 2006, Apple and the Irish rock band U2 presented a special edition of the 5th-generation iPod. Like its predecessor, this iPod has the signatures of the four band members engraved on its back, but this was the first time the company changed the color of the stainless steel back from silver chrome to black. This iPod was only available with 30 GB of storage capacity. The special edition entitled purchasers to an exclusive video with 33 minutes of interviews and performance by U2, downloadable from the iTunes Store.
In mid-2015, several new color schemes for all of the current iPod models were spotted in the latest version of iTunes, 12.2. Belgian website Belgium iPhone originally found the images when plugging in an iPod for the first time, and subsequent leaked photos were found by Pierre Dandumont.
On July 27, 2017, Apple removed the iPod Nano and Shuffle from its stores, marking the end of Apple producing standalone music players. Currently, the iPod Touch is the only iPod produced by Apple.
Hardware
Audio
The third-generation iPod had a weak bass response, as shown in audio tests. The combination of the undersized DC-blocking capacitors and the typical low impedance of most consumer headphones form a high-pass filter, which attenuates the low-frequency bass output. Similar capacitors were used in the fourth-generation iPods. The problem is reduced when using high-impedance headphones and is completely masked when driving high-impedance (line level) loads, such as an external headphone amplifier. The first-generation iPod Shuffle uses a dual-transistor output stage, rather than a single capacitor-coupled output, and does not exhibit reduced bass response for any load.
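To illustrate the mechanism with purely illustrative component values (the actual capacitor sizes in these iPods are not specified here), the corner frequency of the resulting RC high-pass filter is f_c = 1/(2πRC); a low-impedance load pushes that cutoff up into the audible bass range, while a line-level load leaves it well below audibility.

```python
import math

def highpass_cutoff_hz(capacitance_farads: float, load_ohms: float) -> float:
    """Corner frequency of the RC high-pass formed by a DC-blocking cap and the load."""
    return 1.0 / (2.0 * math.pi * load_ohms * capacitance_farads)

C = 100e-6  # assumed 100 µF blocking capacitor (illustrative, not Apple's spec)
for load in (16, 32, 300):  # typical earbuds, larger headphones, line-level input
    print(f"{load:>4} ohm load -> cutoff ≈ {highpass_cutoff_hz(C, load):6.1f} Hz")
```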
For all iPods released in 2006 and earlier, some equalizer (EQ) sound settings would distort the bass sound far too easily, even on undemanding songs. This would happen for EQ settings like R&B, Rock, Acoustic, and Bass Booster, because the equalizer amplified the digital audio level beyond the software's limit, causing distortion (clipping) on bass instruments.
From the fifth-generation iPod on, Apple introduced a user-configurable volume limit in response to concerns about hearing loss. Users report that in the sixth-generation iPod, the maximum volume output level is limited to 100 dB in EU markets. Apple previously had to remove iPods from shelves in France for exceeding this legal limit. However, users who have bought a new sixth-generation iPod in late 2013 have reported a new option that allowed them to disable the EU volume limit. It has been said that these new iPods came with an updated software that allowed this change. Older sixth-generation iPods, however, are unable to update to this software version.
Connectivity
Originally, a FireWire connection to the host computer was used to update songs or recharge the battery. The battery could also be charged with a power adapter that was included with the first four generations.
The third generation began including a 30-pin dock connector, allowing for FireWire or USB connectivity. This provided better compatibility with non-Apple machines, as most of them did not have FireWire ports at the time. Eventually, Apple began shipping iPods with USB cables instead of FireWire, although the latter was available separately. As of the first-generation iPod Nano and the fifth-generation iPod Classic, Apple discontinued using FireWire for data transfer (while still allowing for use of FireWire to charge the device) in an attempt to reduce cost and form factor. As of the second-generation iPod Touch and the fourth-generation iPod Nano, FireWire charging ability has been removed. The second-, third-, and fourth-generation iPod Shuffle uses a single 3.5 mm minijack phone connector which acts as both a headphone jack and a USB data and charging port for the dock/cable.
The dock connector also allowed the iPod to connect to accessories, which often supplement the iPod's music, video, and photo playback. Apple sells a few accessories, such as the now-discontinued iPod Hi-Fi, but most are manufactured by third parties such as Belkin and Griffin. Some peripherals use their own interface, while others use the iPod's own screen. Because the dock connector is a proprietary interface, the implementation of the interface requires paying royalties to Apple.
Apple introduced a new 8-pin dock connector, named Lightning, on September 12, 2012 with their announcement of the iPhone 5, the fifth-generation iPod Touch, and the seventh-generation iPod Nano, which all feature it. The new connector replaces the older 30-pin dock connector used by older iPods, iPhones, and iPads. Apple Lightning cables have pins on both sides of the plug so it can be inserted with either side facing up.
Bluetooth connectivity was added to the last model of the iPod Nano, and Wi-Fi to the iPod Touch.
Accessories
Many accessories have been made for the iPod line. A large number are made by third-party companies, although many, such as the iPod Hi-Fi and iPod Socks, are made by Apple. Some accessories add extra features that other music players have, such as sound recorders, FM radio tuners, wired remote controls, and audio/visual cables for TV connections. Other accessories offer unique features like the Nike+iPod pedometer and the iPod Camera Connector. Other notable accessories include external speakers, wireless remote controls, protective cases, screen films, and wireless earphones. Among the first accessory manufacturers were Griffin Technology, Belkin, JBL, Bose, Monster Cable, and SendStation.
BMW released the first iPod automobile interface, allowing drivers of newer BMW vehicles to control an iPod using either the built-in steering wheel controls or the radio head-unit buttons. Apple announced in 2005 that similar systems would be available for other vehicle brands, including Mercedes-Benz, Volvo, Nissan, Toyota, Alfa Romeo, Ferrari, Acura, Audi, Honda, Renault, Infiniti and Volkswagen. Scion offers standard iPod connectivity on all their cars.
Some independent stereo manufacturers including JVC, Pioneer, Kenwood, Alpine, Sony, and Harman Kardon also have iPod-specific integration solutions. Alternative connection methods include adapter kits (that use the cassette deck or the CD changer port), audio input jacks, and FM transmitters such as the iTrip—although personal FM transmitters are illegal in some countries. Many car manufacturers have added audio input jacks as standard.
Beginning in mid-2007, four major airlines, United, Continental, Delta, and Emirates, reached agreements to install iPod seat connections. The free service allowed passengers to power and charge an iPod and to view video and music libraries on individual seat-back displays. Originally KLM and Air France were reported to be part of the deal with Apple, but they later released statements explaining that they were only contemplating the possibility of incorporating such systems.
Software
The iPod line can play several audio file formats including MP3, AAC/M4A, Protected AAC, AIFF, WAV, Audible audiobook, and Apple Lossless. The iPod Photo introduced the ability to display JPEG, BMP, GIF, TIFF, and PNG image file formats. Fifth- and sixth-generation iPod Classic models, as well as third-generation iPod Nano models, can also play MPEG-4 (H.264/MPEG-4 AVC) and QuickTime video formats, with restrictions on video dimensions, encoding techniques and data rates. Originally, iPod software only worked with Classic Mac OS and macOS; iPod software for Microsoft Windows was launched with the second-generation model. Unlike most other media players, Apple does not support Microsoft's WMA audio format—but a converter for WMA files without digital rights management (DRM) is provided with the Windows version of iTunes. MIDI files also cannot be played, but can be converted to audio files using the "Advanced" menu in iTunes. Alternative open-source audio formats, such as Ogg Vorbis and FLAC, are not supported without installing custom firmware onto an iPod (e.g., Rockbox).
During installation, an iPod is associated with one host computer. Each time an iPod connects to its host computer, iTunes can synchronize entire music libraries or music playlists either automatically or manually. Song ratings can be set on an iPod and synchronized later to the iTunes library, and vice versa. A user can access, play, and add music on a second computer if an iPod is set to manual and not automatic sync, but anything added or edited will be reversed upon connecting and syncing with the main computer and its library. If a user wishes to automatically sync music with another computer, an iPod's library will be entirely wiped and replaced with the other computer's library.
Interface
iPods with color displays use anti-aliased graphics and text, with sliding animations. All iPods (except the 3rd-generation iPod Shuffle, the 6th & 7th generation iPod Nano, and iPod Touch) have five buttons and the later generations have the buttons integrated into the click wheel – an innovation that gives an uncluttered, minimalist interface. The buttons perform basic functions such as menu, play, pause, next track, and previous track. Other operations, such as scrolling through menu items and controlling the volume, are performed by using the click wheel in a rotational manner. The 3rd-generation iPod Shuffle does not have any controls on the actual player; instead, it has a small control on the earphone cable, with volume-up and -down buttons and a single button for play and pause, next track, etc. The iPod Touch has no click-wheel; instead, it uses a touch screen along with a home button, sleep/wake button, and (on the second and third generations of the iPod Touch) volume-up and -down buttons. The user interface for the iPod Touch is identical to that of the iPhone. Differences include the lack of a phone application. Both devices use iOS.
iTunes Store
The iTunes Store (introduced April 29, 2003) is an online media store run by Apple and accessed through iTunes. The store became the market leader soon after its launch and Apple announced the sale of videos through the store on October 12, 2005. Full-length movies became available on September 12, 2006.
At the time the store was introduced, purchased audio files used the AAC format with added encryption, based on the FairPlay DRM system. Up to five authorized computers and an unlimited number of iPods could play the files. Burning the files with iTunes as an audio CD, then re-importing would create music files without the DRM. The DRM could also be removed using third-party software. However, in a deal with Apple, EMI began selling DRM-free, higher-quality songs on the iTunes Stores, in a category called "iTunes Plus." While individual songs were made available at a cost of US$1.29, 30¢ more than the cost of a regular DRM song, entire albums were available for the same price, US$9.99, as DRM encoded albums. On October 17, 2007, Apple lowered the cost of individual iTunes Plus songs to US$0.99 per song, the same as DRM encoded tracks. On January 6, 2009, Apple announced that DRM has been removed from 80% of the music catalog and that it would be removed from all music by April 2009.
iPods cannot play music files from competing music stores that use rival DRM technologies like Microsoft's protected WMA or RealNetworks' Helix DRM. Example stores include Napster and MSN Music. RealNetworks claims that Apple is creating problems for itself by using FairPlay to lock users into using the iTunes Store. Steve Jobs stated that Apple makes little profit from song sales, although Apple uses the store to promote iPod sales. However, iPods can also play music files from online stores that do not use DRM, such as eMusic or Amie Street.
Universal Music Group decided not to renew their contract with the iTunes Store on July 3, 2007. Universal will now supply iTunes in an 'at will' capacity.
Apple debuted the iTunes Wi-Fi Music Store on September 5, 2007, in its Media Event entitled "The Beat Goes On...". This service allows users to access the Music Store from either an iPhone or an iPod Touch and download songs directly to the device that can be synced to the user's iTunes Library over a WiFi connection, or, in the case of an iPhone, the telephone network.
Games
Video games are playable on various versions of iPods. The original iPod had the game Brick (originally invented by Apple's co-founder Steve Wozniak) included as an easter egg hidden feature; later firmware versions added it as a menu option. Later revisions of the iPod added three more games: Parachute, Solitaire, and Music Quiz.
In September 2006, the iTunes Store began to offer additional games for purchase with the launch of iTunes 7, compatible with the fifth generation iPod with iPod software 1.2 or later. Those games were: Bejeweled, Cubis 2, Mahjong, Mini Golf, Pac-Man, Tetris, Texas Hold 'Em, Vortex, Asphalt 4: Elite Racing and Zuma. Additional games have since been added. These games work on the 6th and 5th generation iPod Classic and the 5th and 4th generation iPod Nano.
With third parties like Namco, Square Enix, Electronic Arts, Sega, and Hudson Soft all making games for the iPod, Apple's MP3 player has taken steps towards entering the video game handheld console market. Even video game magazines like GamePro and EGM have reviewed and rated most of their games as of late.
The games are in the form of .ipg files, which are actually .zip archives in disguise. When unzipped, they reveal executable files along with common audio and image files, leading to the possibility of third-party games. Apple has not publicly released a software development kit (SDK) for iPod-specific development. Apps produced with the iPhone SDK are compatible only with iOS on the iPod Touch and iPhone, which cannot run click wheel-based games.
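As a quick illustration of the container format (the file name below is a hypothetical placeholder), a .ipg file can be opened with any standard ZIP library:

```python
import zipfile

# "Vortex.ipg" is a placeholder name; any iPod game file would work the same way.
with zipfile.ZipFile("Vortex.ipg") as ipg:
    for info in ipg.infolist():
        print(f"{info.file_size:>10}  {info.filename}")
    # Extracting reveals the executable plus ordinary audio and image assets.
    ipg.extractall("Vortex_contents")
```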
File storage and transfer
All iPods except for the iPod Touch can function in "disk mode" as mass storage devices to store data files but this may not be the default behavior. If an iPod is formatted on a Mac OS computer, it uses the HFS+ file system format, which allows it to serve as a boot disk for a Mac computer. If it is formatted on Windows, the FAT32 format is used. With the release of the Windows-compatible iPod, the default file system used on the iPod line switched from HFS+ to FAT32, although it can be reformatted to either file system (excluding the iPod Shuffle which is strictly FAT32). Generally, if a new iPod (excluding the iPod Shuffle) is initially plugged into a computer running Windows, it will be formatted with FAT32, and if initially plugged into a Mac running Mac OS it will be formatted with HFS+.
Unlike many other MP3 players, simply copying audio or video files to the drive with a typical file management application will not allow an iPod to properly access them. The user must use software that has been specifically designed to transfer media files to iPods so that the files are playable and viewable. Usually iTunes is used to transfer media to an iPod, though several alternative third-party applications are available on a number of different platforms.
iTunes 7 and above can transfer media purchased from the iTunes Store from an iPod to a computer, provided that the computer containing the DRM-protected media is authorized to play it.
Media files are stored on an iPod in a hidden folder, along with a proprietary database file. The hidden content can be accessed on the host operating system by enabling hidden files to be shown. The media files can then be recovered manually by copying the files or folders off the iPod. Many third-party applications also allow easy copying of media files off of an iPod.
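A minimal sketch of that manual recovery, assuming the iPod is mounted as a removable drive and that its media live under the usual hidden iPod_Control/Music folder; the mount point shown is hypothetical.

```python
import shutil
from pathlib import Path

ipod_root = Path("/Volumes/IPOD")                 # hypothetical mount point of the iPod
music_dir = ipod_root / "iPod_Control" / "Music"  # hidden folder holding the media files
dest = Path.home() / "recovered_ipod_music"
dest.mkdir(exist_ok=True)

# Files are stored with obfuscated names (e.g. F00/ABCD.mp3); copy them all out.
for track in music_dir.rglob("*"):
    if track.is_file():
        shutil.copy2(track, dest / track.name)
        print("copied", track.name)
```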
Models and features
While the suffix "Classic" was not introduced until the sixth generation, it has been applied here retroactively to all generic iPods for clarity.
Patent disputes
In 2005, Apple faced two lawsuits claiming patent infringement by the iPod line and its associated technologies: Advanced Audio Devices claimed the iPod line breached its patent on a "music jukebox", while a Hong Kong-based IP portfolio company called Pat-rights filed a suit claiming that Apple's FairPlay technology breached a patent issued to inventor Ho Keung Tse. The latter case also includes the online music stores of Sony, RealNetworks, Napster, and Musicmatch as defendants.
Apple's application to the United States Patent and Trademark Office for a patent on "rotational user inputs", as used on the iPod interface, received a third "non-final rejection" (NFR) in August 2005. Also in August 2005, Creative Technology, one of Apple's main rivals in the MP3 player market, announced that it held a patent on part of the music selection interface used by the iPod line, which Creative Technology dubbed the "Zen Patent", granted on August 9, 2005. On May 15, 2006, Creative filed another suit against Apple with the United States District Court for the Northern District of California. Creative also asked the United States International Trade Commission to investigate whether Apple was breaching U.S. trade laws by importing iPods into the United States.
On August 24, 2006, Apple and Creative announced a broad settlement to end their legal disputes. Apple will pay Creative US$100 million for a paid-up license, to use Creative's awarded patent in all Apple products. As part of the agreement, Apple will recoup part of its payment, if Creative is successful in licensing the patent. Creative then announced its intention to produce iPod accessories by joining the Made for iPod program.
Sales
On January 8, 2004, Hewlett-Packard (HP) announced that they would sell HP-branded iPods under a license agreement from Apple. Several new retail channels were used—including Walmart—and these iPods eventually made up 5% of all iPod sales. In July 2005, HP stopped selling iPods due to unfavorable terms and conditions imposed by Apple.
In January 2007, Apple reported record quarterly revenue of US$7.1 billion, of which 48% was made from iPod sales.
On April 9, 2007, it was announced that Apple had sold its one-hundred millionth iPod, making it the best-selling digital music player of all time. In April 2007, Apple reported second-quarter revenue of US$5.2 billion, of which 32% was made from iPod sales. Apple and several industry analysts suggest that iPod users are likely to purchase other Apple products such as Mac computers.
On October 22, 2007, Apple reported quarterly revenue of US$6.22 billion, of which 30.69% came from Apple notebook sales, 19.22% from desktop sales and 26% from iPod sales. Apple's 2007 year revenue increased to US$24.01 billion with US$3.5 billion in profits. Apple ended the fiscal year 2007 with US$15.4 billion in cash and no debt.
On January 22, 2008, Apple reported the best quarter revenue and earnings in Apple's history so far. Apple posted record revenue of US$9.6 billion and record net quarterly profit of US$1.58 billion. 42% of Apple's revenue for the First fiscal quarter of 2008 came from iPod sales, followed by 21% from notebook sales and 16% from desktop sales.
On October 21, 2008, Apple reported that only 14.21% of total revenue for fiscal quarter 4 of the year 2008 came from iPods. At the September 9, 2009 keynote presentation at the Apple Event, Phil Schiller announced total cumulative sales of iPods exceeded 220 million. The continual decline of iPod sales since 2009 has not been a surprising trend for the Apple corporation, as Apple CFO Peter Oppenheimer explained in June 2009: "We expect our traditional MP3 players to decline over time as we cannibalize ourselves with the iPod Touch and the iPhone." Since 2009, the company's iPod sales have continually decreased every financial quarter and in 2013 a new model was not introduced onto the market.
Apple has reported that the total number of iPods sold worldwide was 350 million.
Market share
Since October 2004, the iPod line has dominated digital music player sales in the United States, with over 90% of the market for hard drive-based players and over 70% of the market for all types of players. During the year from January 2004 to January 2005, the high rate of sales caused its U.S. market share to increase from 31% to 65% and in July 2005, this market share was measured at 74%. In January 2007 the iPod market share reached 72.7% according to Bloomberg Online. In the Japanese market, the iPod's share was 36% in 2005, though it was still the market leader there. In Europe, Apple also led the market (especially the UK) but local brands such as Archos managed to outsell Apple in certain categories.
One of the reasons for the iPod's early success, having been released three years after the very first digital audio player (namely the MPMan), was its seamless integration with the company's iTunes software, and the ecosystem built around it such as the iTunes Music Store, as well as a competitive price. As a result, Apple achieved a dominance in the MP3 player market as Sony's Walkman did with personal cassette players two decades earlier. The software between computer and player made it easy to transfer music over and synchronize it, tasks that were considered difficult on pre-iPod MP3 players like those from Rio and Creative.
Some of the iPod's chief competitors at its peak included Creative's Zen, Sony's Walkman, iriver, and Cowon's iAudio, among others. The iPod's dominance was challenged numerous times: in November 2004, Creative's CEO "declared war" on the iPod; that same year, Sony's first hard disk Walkman was designed to take on the iPod, accompanied by its own music store Sony Connect; while Microsoft initially attempted to compete using a software platform called Portable Media Center, and in later years designed the Zune line. These competitors failed to make major dents and Apple remained dominant in the fast growing digital audio player market during the decade. Mobile phone manufacturers Nokia and Sony Ericsson also made "music phones" to rival iPod. Apple's popular iTunes Store catalog played a part in keeping Apple firmly market leader, also helped by the mismanagement of others, such as Sony's unpopular SonicStage software.
One notable exception where iPod was not faring well was in South Korea, where as of 2005 Apple held a small market share of less than 2%, compared to market leaders iRiver, Samsung and Cowon.
As of 2011, iPod held a 70% market share in global MP3 players. Its closest competitor was noted to be the Sansa line from SanDisk.
Industry impact
iPods have won several awards ranging from engineering excellence, to most innovative audio product, to fourth best computer product of 2006. iPods often receive favorable reviews; scoring on looks, clean design, and ease of use. PC World wrote that iPod line has "altered the landscape for portable audio players". Several industries are modifying their products to work better with both the iPod line and the AAC audio format. Examples include CD copy-protection schemes, and mobile phones, such as phones from Sony Ericsson and Nokia, which play AAC files rather than WMA.
Besides earning a reputation as a respected entertainment device, the iPod has also been accepted as a business device. Government departments, major institutions, and international organizations have turned to the iPod line as a delivery mechanism for business communication and training, such as the Royal and Western Infirmaries in Glasgow, Scotland, where iPods are used to train new staff.
iPods have also gained popularity for use in education. Apple offers more information on educational uses for iPods on their website, including a collection of lesson plans. There has also been academic research done in this area in nursing education and more general K-16 education. Duke University provided iPods to all incoming freshmen in the fall of 2004, and the iPod program continues today with modifications. Entertainment Weekly put it on its end-of-the-decade, "best-of" list, saying, "Yes, children, there really was a time when we roamed the earth without thousands of our favorite jams tucked comfortably into our hip pockets. Weird."
The iPod has also been credited with accelerating shifts within the music industry. The iPod's popularization of digital music storage allows users to abandon listening to entire albums and instead be able to choose specific singles which hastened the end of the Album Era in popular music.
Criticism
Battery problems
The advertised battery life on most models is different from the real-world achievable life. For example, the fifth-generation 30 GB iPod Classic was advertised as having up to 14 hours of music playback. However, an MP3.com report stated that this was virtually unachievable under real-life usage conditions, with a writer for the site getting, on average, less than 8 hours from an iPod. In 2003, class action lawsuits were brought against Apple complaining that the battery charges lasted for shorter lengths of time than stated and that the battery degraded over time. The lawsuits were settled by offering individuals with first or second-generation iPods either US$50 store credit or a free battery replacement and offering individuals with third-generation iPods an extended warranty that would allow them to get a replacement iPod if they experienced battery problems.
As an instance of planned obsolescence, iPod batteries are not designed to be removed or replaced by the user, although some users have been able to open the case themselves, usually following instructions from third-party vendors of iPod replacement batteries. Compounding the problem, Apple initially would not replace worn-out batteries. The official policy was that the customer should buy a refurbished replacement iPod, at a cost almost equivalent to a brand new one. All lithium-ion batteries lose capacity during their lifetime even when not in use (guidelines are available for prolonging life-span) and this situation led to a market for third-party battery replacement kits.
Apple announced a battery replacement program on November 14, 2003, a week before a high publicity stunt and website by the Neistat Brothers. The initial cost was US$99, and it was lowered to US$59 in 2005. One week later, Apple offered an extended iPod warranty for US$59. For the iPod Nano, soldering tools are needed because the battery is soldered onto the main board. Fifth generation iPods have their battery attached to the backplate with adhesive.
The first generation iPod Nano may overheat and pose a health and safety risk. Affected iPod Nanos were sold between September 2005 and December 2006. This is due to a flawed battery used by Apple from a single battery manufacturer. Apple recommended that owners of affected iPod Nanos stop using them. Under an Apple product replacement program, affected Nanos were replaced with current generation Nanos free of charge.
Reliability and durability
iPods have been criticized for alleged short lifespan and fragile hard drives. A 2005 survey conducted on the MacInTouch website found that the iPod line had an average failure rate of 13.7% (although they note that comments from respondents indicate that "the true iPod failure rate may be lower than it appears"). It concluded that some models were more durable than others. In particular, failure rates for iPods employing hard drives were usually above 20% while those with flash memory had a failure rate below 10%. In late 2005, many users complained that the surface of the first-generation iPod Nano can become scratched easily, rendering the screen unusable. A class-action lawsuit was also filed. Apple initially considered the issue a minor defect, but later began shipping these iPods with protective sleeves.
Labor disputes
On June 11, 2006, the British tabloid The Mail on Sunday reported that iPods are mainly manufactured by workers who earn no more than US$50 per month and work 15-hour shifts. Apple investigated the case with independent auditors and found that, while some of the plant's labor practices met Apple's Code of Conduct, others did not: employees worked over 60 hours a week for 35% of the time and worked more than six consecutive days for 25% of the time.
Foxconn, Apple's manufacturer, initially denied the abuses, but when an auditing team from Apple found that workers had been working longer hours than were allowed under Chinese law, they promised to prevent workers working more hours than the code allowed. Apple hired a workplace standards auditing company, Verité, and joined the Electronic Industry Code of Conduct Implementation Group to oversee the measures. On December 31, 2006, workers at the Foxconn factory in Longhua, Shenzhen formed a union affiliated with the All-China Federation of Trade Unions, the Chinese government-approved union umbrella organization.
In 2010, a number of workers committed suicide at Foxconn operations in China. Apple, HP, and others stated that they were investigating the situation. Foxconn guards have been videotaped beating employees. Another employee killed himself in 2009 when an Apple prototype went missing; in messages to friends he claimed that he had been beaten and interrogated.
As of 2006, the iPod was produced by about 14,000 workers in the U.S. and 27,000 overseas. Further, the salaries attributed to this product were overwhelmingly distributed to highly skilled U.S. professionals, as opposed to lower-skilled U.S. retail employees or overseas manufacturing labor. One interpretation of this result is that U.S. innovation can create more jobs overseas than domestically.
Timeline of models
See also
Comparison of portable media players
Comparison of iPod managers
iPhone
Podcast
iPad
Notes
References
External links
– official site at Apple Inc.
iPod troubleshooting basics and service FAQ at Apple Inc.
Apple's 21st century Walkman article, Brent Schlender, Fortune, November 12, 2001
, Steven Levy, Newsweek, July 26, 2004
The Perfect Thing article, Steven Levy, Wired, November 2006
iPod (1st generation) complete disassembly at TakeItApart.com
Apple Inc. hardware
ITunes
Portable media players
Foxconn
Computer-related introductions in 2001
Digital audio players
Products introduced in 2001 |
91127 | https://en.wikipedia.org/wiki/Fermat%20number | Fermat number | In mathematics, a Fermat number, named after Pierre de Fermat, who first studied them, is a positive integer of the form
Fn = 2^(2^n) + 1,
where n is a non-negative integer. The first few Fermat numbers are:
3, 5, 17, 257, 65537, 4294967297, 18446744073709551617, ...
If 2^k + 1 is prime and k > 0, then k must be a power of 2, so 2^k + 1 is a Fermat number; such primes are called Fermat primes. As of 2021, the only known Fermat primes are F0 = 3, F1 = 5, F2 = 17, F3 = 257, and F4 = 65537; heuristics suggest that there are no more.
Basic properties
The Fermat numbers satisfy the following recurrence relations:
Fn = (Fn−1 − 1)^2 + 1
for n ≥ 1, and
Fn = F0 F1 ⋯ Fn−1 + 2
for n ≥ 2. Each of these relations can be proved by mathematical induction. From the second equation, we can deduce Goldbach's theorem (named after Christian Goldbach): no two Fermat numbers share a common integer factor greater than 1. To see this, suppose that 0 ≤ i < j and Fi and Fj have a common factor a > 1. Then a divides both the product F0 F1 ⋯ Fj−1 and Fj; hence a divides their difference, 2. Since a > 1, this forces a = 2. This is a contradiction, because each Fermat number is clearly odd. As a corollary, we obtain another proof of the infinitude of the prime numbers: for each Fn, choose a prime factor pn; then the sequence {pn} is an infinite sequence of distinct primes.
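A small sanity check of Goldbach's theorem on the handful of Fermat numbers that fit comfortably in memory; nothing below relies on anything beyond the definition Fn = 2^(2^n) + 1.

```python
from math import gcd
from itertools import combinations

fermat = [2**(2**n) + 1 for n in range(8)]  # F0 .. F7
print(fermat[:5])  # 3, 5, 17, 257, 65537

# Any two distinct Fermat numbers are coprime, as the theorem states.
assert all(gcd(a, b) == 1 for a, b in combinations(fermat, 2))
print("all pairwise coprime:", True)
```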
Further properties
No Fermat prime can be expressed as the difference of two pth powers, where p is an odd prime.
With the exception of F0 and F1, the last digit of a Fermat number is 7.
The sum of the reciprocals of all the Fermat numbers is irrational. (Solomon W. Golomb, 1963)
Primality of Fermat numbers
Fermat numbers and Fermat primes were first studied by Pierre de Fermat, who conjectured that all Fermat numbers are prime. Indeed, the first five Fermat numbers F0, ..., F4 are easily shown to be prime. Fermat's conjecture was refuted by Leonhard Euler in 1732 when he showed that
F5 = 2^32 + 1 = 4294967297 = 641 × 6700417.
Euler proved that every factor of Fn must have the form k·2^(n+1) + 1 (later improved to k·2^(n+2) + 1 by Lucas).
That 641 is a factor of F5 can be deduced from the equalities 641 = 2^7 × 5 + 1 and 641 = 2^4 + 5^4. It follows from the first equality that 2^7 × 5 ≡ −1 (mod 641) and therefore (raising to the fourth power) that 2^28 × 5^4 ≡ 1 (mod 641). On the other hand, the second equality implies that 5^4 ≡ −2^4 (mod 641). These congruences imply that 2^32 ≡ −1 (mod 641).
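The whole chain of congruences can be checked in a few lines; this merely re-verifies the arithmetic of the argument above.

```python
F5 = 2**32 + 1

assert 641 == 2**7 * 5 + 1 == 2**4 + 5**4
assert pow(2, 32, 641) == 641 - 1      # 2^32 ≡ −1 (mod 641)
assert F5 % 641 == 0                   # hence 641 divides F5
print(F5 // 641)                       # the cofactor, 6700417
```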
Fermat was probably aware of the form of the factors later proved by Euler, so it seems curious that he failed to follow through on the straightforward calculation to find the factor. One common explanation is that Fermat made a computational mistake.
There are no other known Fermat primes Fn with n > 4, but little is known about Fermat numbers for large n. In fact, each of the following is an open problem:
Is Fn composite for all n > 4?
Are there infinitely many Fermat primes? (Eisenstein 1844)
Are there infinitely many composite Fermat numbers?
Does a Fermat number exist that is not square-free?
As of 2021, it is known that Fn is composite for 5 ≤ n ≤ 32, although of these, complete factorizations of Fn are known only for 0 ≤ n ≤ 11, and there are no known prime factors for n = 20 and n = 24. The largest Fermat number known to be composite is F18233954, and its prime factor, a megaprime, was discovered in October 2020.
Heuristic arguments
Heuristics suggest that F4 is the last Fermat prime.
The prime number theorem implies that a random integer in a suitable interval around N is prime with probability 1/ln N. If one uses the heuristic that a Fermat number is prime with the same probability as a random integer of its size, and that F5, ..., F32 are composite, then the expected number of Fermat primes beyond F4 (or equivalently, beyond F32) should be
Σ_{n ≥ 33} 1/ln Fn < (1/ln 2) Σ_{n ≥ 33} 2^(−n) = 2^(−32)/ln 2, which is about 3.4 × 10^(−10).
One may interpret this number as an upper bound for the probability that a Fermat prime beyond F4 exists.
This argument is not a rigorous proof. For one thing, it assumes that Fermat numbers behave "randomly", but the factors of Fermat numbers have special properties. Boklan and Conway published a more precise analysis suggesting that the probability that there is another Fermat prime is less than one in a billion.
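For the curious, the bound above is easy to evaluate numerically; the code below only reproduces the arithmetic of the heuristic, not any new result.

```python
import math

# Expected number of Fermat primes beyond F4, under the heuristic that
# Fn is prime with probability 1/ln(Fn) and that F5..F32 are composite.
# Using ln(Fn) > 2**n * ln(2), the tail sum is bounded by 2**-32 / ln(2).
bound = 2.0**-32 / math.log(2)
print(f"upper bound ≈ {bound:.2e}")   # about 3.4e-10

# Direct evaluation of the first tail terms confirms the sum is dominated by n = 33.
tail = sum(1.0 / (2.0**n * math.log(2)) for n in range(33, 60))
print(f"tail estimate ≈ {tail:.2e}")
```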
Equivalent conditions of primality
Let F_n = 2^{2^n} + 1 be the nth Fermat number. Pépin's test states that for n > 0,
F_n is prime if and only if 3^{(F_n − 1)/2} ≡ −1 (mod F_n).
The expression 3^{(F_n − 1)/2} can be evaluated modulo F_n by repeated squaring. This makes the test a fast polynomial-time algorithm. But Fermat numbers grow so rapidly that only a handful of them can be tested in a reasonable amount of time and space.
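A minimal implementation of Pépin's test, relying on Python's built-in modular exponentiation (which performs the repeated squaring mentioned above), might look as follows; it is a sketch practical only for small n, since the numbers quickly become enormous:

    def pepin(n):
        # Pepin's test: for n > 0, F_n is prime iff 3^((F_n - 1)/2) ≡ -1 (mod F_n).
        F = 2 ** (2 ** n) + 1
        return pow(3, (F - 1) // 2, F) == F - 1

    # F_1 .. F_4 are prime; F_5 onwards are composite.
    print([pepin(n) for n in range(1, 10)])
    # [True, True, True, True, False, False, False, False, False]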
There are some tests for the primality of numbers of the form k·2^m + 1, such as factors of Fermat numbers.
Proth's theorem (1878). Let N = k·2^m + 1 with odd k < 2^m. If there is an integer a such that
a^{(N−1)/2} ≡ −1 (mod N),
then N is prime. Conversely, if the above congruence does not hold, and in addition
(a/N) = −1 (see Jacobi symbol),
then N is composite.
If N = Fn > 3, then the above Jacobi symbol is always equal to −1 for a = 3, and this special case of Proth's theorem is known as Pépin's test. Although Pépin's test and Proth's theorem have been implemented on computers to prove the compositeness of some Fermat numbers, neither test gives a specific nontrivial factor. In fact, no specific prime factors are known for n = 20 and 24.
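The "prime" direction of Proth's theorem is unconditional: exhibiting a single base a with a^((N−1)/2) ≡ −1 (mod N) proves N prime. The following randomized Python sketch (illustrative only; it deliberately omits the Jacobi-symbol converse, so a negative answer is only probabilistic) shows the idea:

    import random

    def is_proth_prime(k, m, trials=32):
        # Proth's theorem for N = k*2^m + 1 with odd k < 2^m:
        # if some base a satisfies a^((N-1)/2) ≡ -1 (mod N), then N is prime.
        # If no tried base works, N is almost certainly composite, since for a
        # prime N roughly half of all bases are witnesses.
        assert k % 2 == 1 and k < 2 ** m
        N = k * 2 ** m + 1
        for _ in range(trials):
            a = random.randrange(2, N - 1)
            if pow(a, (N - 1) // 2, N) == N - 1:
                return True      # proven prime
        return False             # very likely composite

    print(is_proth_prime(1, 4))   # 17 = F2: True
    print(is_proth_prime(5, 7))   # 641: True
    print(is_proth_prime(7, 5))   # 225 = 15^2: False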
Factorization of Fermat numbers
Because of the size of Fermat numbers, it is difficult to factorize them or even to check their primality. Pépin's test gives a necessary and sufficient condition for primality of Fermat numbers, and can be implemented by modern computers. The elliptic curve method is a fast method for finding small prime divisors of numbers. The distributed computing project Fermatsearch has found some factors of Fermat numbers. Yves Gallot's proth.exe has been used to find factors of large Fermat numbers. Édouard Lucas, improving Euler's above-mentioned result, proved in 1878 that every factor of the Fermat number Fn, with n at least 2, is of the form k·2^(n+2) + 1 (see Proth number), where k is a positive integer. By itself, this makes it easy to prove the primality of the known Fermat primes.
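Lucas's form k·2^(n+2) + 1 drastically narrows trial division for small factors. The sketch below is illustrative (the bound k_limit is an arbitrary choice, not a recommended value); it recovers the small factors of F5 and F6 listed in the table that follows:

    def small_fermat_factors(n, k_limit=10**6):
        # Trial-divide F_n by candidates of Lucas's form k*2^(n+2) + 1.
        F = 2 ** (2 ** n) + 1
        step = 2 ** (n + 2)
        found = []
        for k in range(1, k_limit + 1):
            d = k * step + 1
            if d * d > F:
                break
            if F % d == 0:
                found.append(d)
        return found

    print(small_fermat_factors(5))   # [641]     (641 = 5 * 2^7 + 1)
    print(small_fermat_factors(6))   # [274177]  (274177 = 1071 * 2^8 + 1)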
Factorizations of the first twelve Fermat numbers are:
F0 = 2^1 + 1 = 3 is prime
F1 = 2^2 + 1 = 5 is prime
F2 = 2^4 + 1 = 17 is prime
F3 = 2^8 + 1 = 257 is prime
F4 = 2^16 + 1 = 65,537 is the largest known Fermat prime
F5 = 2^32 + 1 = 4,294,967,297
     = 641 × 6,700,417 (fully factored 1732)
F6 = 2^64 + 1 = 18,446,744,073,709,551,617 (20 digits)
     = 274,177 × 67,280,421,310,721 (14 digits) (fully factored 1855)
F7 = 2^128 + 1 = 340,282,366,920,938,463,463,374,607,431,768,211,457 (39 digits)
     = 59,649,589,127,497,217 (17 digits) × 5,704,689,200,685,129,054,721 (22 digits) (fully factored 1970)
F8 = 2^256 + 1 = 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,937 (78 digits)
     = 1,238,926,361,552,897 (16 digits) × 93,461,639,715,357,977,769,163,558,199,606,896,584,051,237,541,638,188,580,280,321 (62 digits) (fully factored 1980)
F9 = 2^512 + 1 = 13,407,807,929,942,597,099,574,024,998,205,846,127,479,365,820,592,393,377,723,561,443,721,764,030,073,546,976,801,874,298,166,903,427,690,031,858,186,486,050,853,753,882,811,946,569,946,433,649,006,084,097 (155 digits)
     = 2,424,833 × 7,455,602,825,647,884,208,337,395,736,200,454,918,783,366,342,657 (49 digits) × 741,640,062,627,530,801,524,787,141,901,937,474,059,940,781,097,519,023,905,821,316,144,415,759,504,705,008,092,818,711,693,940,737 (99 digits) (fully factored 1990)
F10 = 2^1024 + 1 = 179,769,313,486,231,590,772,930...304,835,356,329,624,224,137,217 (309 digits)
     = 45,592,577 × 6,487,031,809 × 4,659,775,785,220,018,543,264,560,743,076,778,192,897 (40 digits) × 130,439,874,405,488,189,727,484...806,217,820,753,127,014,424,577 (252 digits) (fully factored 1995)
F11 = 2^2048 + 1 = 32,317,006,071,311,007,300,714,8...193,555,853,611,059,596,230,657 (617 digits)
     = 319,489 × 974,849 × 167,988,556,341,760,475,137 (21 digits) × 3,560,841,906,445,833,920,513 (22 digits) × 173,462,447,179,147,555,430,258...491,382,441,723,306,598,834,177 (564 digits) (fully factored 1988)
To date, only F0 to F11 have been completely factored. The distributed computing project Fermat Search is searching for new factors of Fermat numbers. The set of all Fermat factors is A050922 (or, sorted, A023394) in OEIS.
A number of factors of Fermat numbers were already known before 1950; since then, digital computers have helped find many more.
To date, 356 prime factors of Fermat numbers are known, and 312 Fermat numbers are known to be composite. Several new Fermat factors are found each year.
Pseudoprimes and Fermat numbers
Like composite numbers of the form 2^p − 1, every composite Fermat number is a strong pseudoprime to base 2. This is because all strong pseudoprimes to base 2 are also Fermat pseudoprimes; that is,
2^{F_n − 1} ≡ 1 (mod F_n)
for all Fermat numbers.
In 1904, Cipolla showed that the product of at least two distinct prime or composite Fermat numbers F_a F_b ⋯ F_s, with a > b > ⋯ > s > 1, will be a Fermat pseudoprime to base 2 if and only if 2^s > a.
Other theorems about Fermat numbers
A Fermat number cannot be a perfect number or part of a pair of amicable numbers.
The series of reciprocals of all prime divisors of Fermat numbers is convergent.
If n^n + 1 is prime, there exists an integer m such that n = 2^{2^m}. The equation
n^n + 1 = F_{2^m + m}
holds in that case.
Let the largest prime factor of the Fermat number Fn be P(Fn). Then P(Fn) ≥ 2^(n+2)·(4n + 9) + 1.
Relationship to constructible polygons
Carl Friedrich Gauss developed the theory of Gaussian periods in his Disquisitiones Arithmeticae and formulated a sufficient condition for the constructibility of regular polygons. Gauss stated that this condition was also necessary, but never published a proof. Pierre Wantzel gave a full proof of necessity in 1837. The result is known as the Gauss–Wantzel theorem:
An n-sided regular polygon can be constructed with compass and straightedge if and only if n is the product of a power of 2 and distinct Fermat primes: in other words, if and only if n is of the form n = 2^k · p1p2...ps, where k, s are nonnegative integers and the pi are distinct Fermat primes.
A positive integer n is of the above form if and only if its totient φ(n) is a power of 2.
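The totient criterion gives a simple computational test for constructibility. A short Python sketch (an illustration, not part of the original article):

    def totient(n):
        # Euler's totient via trial-division factorization.
        result, m, p = n, n, 2
        while p * p <= m:
            if m % p == 0:
                while m % p == 0:
                    m //= p
                result -= result // p
            p += 1
        if m > 1:
            result -= result // m
        return result

    def constructible(n):
        # A regular n-gon is constructible iff phi(n) is a power of 2.
        t = totient(n)
        return (t & (t - 1)) == 0

    print([n for n in range(3, 50) if constructible(n)])
    # [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34, 40, 48]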
Applications of Fermat numbers
Pseudorandom number generation
Fermat primes are particularly useful in generating pseudo-random sequences of numbers in the range 1 ... N, where N is a power of 2. The most common method used is to take any seed value between 1 and P − 1, where P is a Fermat prime. Now multiply this by a number A, which is greater than the square root of P and is a primitive root modulo P (i.e., it is not a quadratic residue). Then take the result modulo P. The result is the new value for the RNG.
(see linear congruential generator, RANDU)
This is useful in computer science, since most data structures have members with 2^x possible values. For example, a byte has 256 (2^8) possible values (0–255). Therefore, to fill a byte or bytes with random values, a random number generator which produces values 1–256 can be used, the byte taking the output value minus 1. Very large Fermat primes are of particular interest in data encryption for this reason. This method produces only pseudorandom values: after P − 1 repetitions, the sequence repeats. A poorly chosen multiplier can result in the sequence repeating sooner than P − 1.
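A concrete sketch of the generator described above, using the Fermat prime P = 257 (= F3) so that outputs 1–256 map directly onto byte values; the multiplier A = 75 is an illustrative choice (75 > √257 and is a quadratic non-residue, hence a primitive root, modulo 257), not a canonical constant:

    P = 257   # the Fermat prime F3
    A = 75    # > sqrt(P) and a primitive root mod P (checked below)

    # For a Fermat prime, "primitive root" is equivalent to "quadratic non-residue",
    # i.e. A^((P-1)/2) ≡ -1 (mod P).
    assert A * A > P and pow(A, (P - 1) // 2, P) == P - 1

    def fermat_prng(seed):
        # Yields values in 1..P-1; the sequence has full period P-1 = 256.
        x = seed
        while True:
            x = (x * A) % P
            yield x

    gen = fermat_prng(1)
    stream = [next(gen) for _ in range(256)]
    assert sorted(stream) == list(range(1, 257))      # each value appears once per period
    random_bytes = bytes(v - 1 for v in stream[:16])  # "the byte taking the output value minus 1"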
Generalized Fermat numbers
Numbers of the form a^{2^n} + b^{2^n} with a, b any coprime integers, a > b > 0, are called generalized Fermat numbers. An odd prime p is a generalized Fermat number if and only if p is congruent to 1 (mod 4). (Here we consider only the case n > 0, so 3 = 2^{2^0} + 1^{2^0} is not a counterexample.)
An example of a probable prime of this form is 12465536 + 5765536 (found by Valeryi Kuryshev).
By analogy with the ordinary Fermat numbers, it is common to write generalized Fermat numbers of the form a^{2^n} + 1 as Fn(a). In this notation, for instance, the number 100,000,001 would be written as F3(10). In the following we shall restrict ourselves to primes of this form, a^{2^n} + 1; such primes are called "Fermat primes base a". Of course, these primes exist only if a is even.
If we require n > 0, then Landau's fourth problem asks if there are infinitely many generalized Fermat primes Fn(a).
Generalized Fermat primes
Because of the ease of proving their primality, generalized Fermat primes have become in recent years a topic for research within the field of number theory. Many of the largest known primes today are generalized Fermat primes.
Generalized Fermat numbers Fn(a) can be prime only for even a, because if a is odd then every generalized Fermat number will be divisible by 2. The smallest prime number Fn(a) with n > 4 is F5(30), or 30^32 + 1. Besides, we can define "half generalized Fermat numbers" for an odd base: a half generalized Fermat number to base a (for odd a) is (a^{2^n} + 1)/2, and it is also to be expected that there will be only finitely many half generalized Fermat primes for each odd base.
(In the list, the generalized Fermat numbers Fn(a) to an even a are a^{2^n} + 1, while for odd a they are (a^{2^n} + 1)/2. If a is a perfect power with an odd exponent, then every generalized Fermat number can be algebraically factored, so it cannot be prime.)
(For the smallest number such that is prime, see )
(See for more information (even bases up to 1000), also see for odd bases)
(For the smallest prime of the form (for odd ), see also )
(For the smallest even base such that is prime, see )
The smallest bases b such that b^{2^n} + 1 is prime are
2, 2, 2, 2, 2, 30, 102, 120, 278, 46, 824, 150, 1534, 30406, 67234, 70906, 48594, 62722, 24518, 75898, 919444, ...
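The first terms of this sequence can be reproduced by brute-force search. The sketch below assumes the sympy library is available for primality testing and only checks small n, where the numbers remain manageable:

    from sympy import isprime

    def smallest_base(n, limit=2000):
        # Smallest b with b^(2^n) + 1 prime; only even b can work
        # (an odd b gives an even number greater than 2).
        for b in range(2, limit + 1, 2):
            if isprime(b ** (2 ** n) + 1):
                return b
        return None

    print([smallest_base(n) for n in range(7)])
    # Expected to match the start of the list above: [2, 2, 2, 2, 2, 30, 102]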
The smallest k such that (2n)^k + 1 is prime are
1, 1, 1, 0, 1, 1, 2, 1, 1, 2, 1, 2, 2, 1, 1, 0, 4, 1, ... (The next term is unknown) (also see and )
A more elaborate theory can be used to predict the number of bases a for which Fn(a) will be prime for fixed n. The number of generalized Fermat primes can be roughly expected to halve as n is increased by 1.
Largest known generalized Fermat primes
The following is a list of the five largest known generalized Fermat primes. They are all megaprimes. The whole top five was discovered by participants in the PrimeGrid project.
On the Prime Pages one can find the current top 100 generalized Fermat primes.
See also
Constructible polygon: which regular polygons are constructible partially depends on Fermat primes.
Double exponential function
Lucas' theorem
Mersenne prime
Pierpont prime
Primality test
Proth's theorem
Pseudoprime
Sierpiński number
Sylvester's sequence
Notes
References
External links
Chris Caldwell, The Prime Glossary: Fermat number at The Prime Pages.
Luigi Morelli, History of Fermat Numbers
John Cosgrave, Unification of Mersenne and Fermat Numbers
Wilfrid Keller, Prime Factors of Fermat Numbers
Yves Gallot, Generalized Fermat Prime Search
Mark S. Manasse, Complete factorization of the ninth Fermat number (original announcement)
Peyton Hayslette, Largest Known Generalized Fermat Prime Announcement
Constructible polygons
Articles containing proofs
Unsolved problems in number theory
Large integers
Classes of prime numbers
Integer sequences |
91221 | https://en.wikipedia.org/wiki/Telephone%20tapping | Telephone tapping | Telephone tapping (also wire tapping or wiretapping in American English) is the monitoring of telephone and Internet-based conversations by a third party, often by covert means. The wire tap received its name because, historically, the monitoring connection was an actual electrical tap on the telephone line. Legal wiretapping by a government agency is also called lawful interception. Passive wiretapping monitors or records the traffic, while active wiretapping alters or otherwise affects it.
Legal status
Lawful interception is officially strictly controlled in many countries to safeguard privacy; this is the case in all liberal democracies. In theory, telephone tapping often needs to be authorized by a court and is, again in theory, normally only approved when evidence shows it is not possible to detect criminal or subversive activity in less intrusive ways. Often, the law and regulations require that the crime investigated must be at least of a certain severity. Illegal or unauthorized telephone tapping is often a criminal offense. However, in certain jurisdictions such as Germany and France, courts will accept illegally recorded phone calls without the other party's consent as evidence, but the unauthorized telephone tapping will still be prosecuted.
United States
In the United States, under the Foreign Intelligence Surveillance Act, federal intelligence agencies can get approval for wiretaps from the United States Foreign Intelligence Surveillance Court, a court with secret proceedings, or in certain circumstances from the Attorney General without a court order.
The telephone call recording laws in most U.S. states require only one party to be aware of the recording, while twelve states require both parties to be aware. In Nevada, the state legislature enacted a law making it legal for a party to record a conversation if one party to the conversation consented, but the Nevada Supreme Court issued two judicial opinions changing the law and requiring all parties to consent to the recording of a private conversation for it to be legal. It is considered better practice to announce at the beginning of a call that the conversation is being recorded.
The Fourth Amendment to the United States Constitution protects privacy rights by requiring a warrant to search an individual. However, telephone tapping is the subject of controversy surrounding violations of this right. There are arguments that wiretapping invades an individual's personal privacy and therefore violates their Fourth Amendment rights. On the other hand, there are certain rules and regulations which permit wiretapping. A notable example of this is the Patriot Act, which does, in certain circumstances, give the government permission to wiretap citizens. In addition, wiretapping laws vary per state, which makes it even more difficult to determine whether the Fourth Amendment is being violated.
Canada
In Canadian law, police are allowed to wiretap without authorization from a court when there is a risk of imminent harm, such as kidnapping or a bomb threat. They must believe that the interception is immediately necessary to prevent an unlawful act that could cause serious harm to any person or to property. This was introduced by Rob Nicholson on February 11, 2013, and is also known as Bill C-55. The Supreme Court gave Parliament twelve months to rewrite the law. Bill C-51 (also known as the Anti-Terrorism Act) was then released in 2015, which transformed the Canadian Security Intelligence Service from an intelligence-gathering agency to an agency actively engaged in countering national security threats.
Legal protection extends to 'private communications' where the participants would not expect unintended persons to learn the content of the communication. A single participant can legally and covertly record a conversation. Otherwise, police normally need a judicial warrant based upon probable grounds to record a conversation they are not a part of. In order to be valid, a wiretap authorization must state: 1) the offense being investigated by the wiretap, 2) the type of communication, 3) the identity of the people or places targeted, and 4) the period of validity (60 days from issue).
India
In India, the lawful interception of communication by authorized law enforcement agencies (LEAs) is carried out in accordance with Section 5(2) of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph (Amendment) Rules, 2007. Directions for interception of any message or class of messages under sub-section (2) of Section 5 of the Indian Telegraph Act, 1885 may only be issued by an order made by the Secretary to the Government of India in the Ministry of Home Affairs in the case of the Government of India, or by the Secretary to the State Government in charge of the Home Department in the case of a state government. The government has set up the Centralized Monitoring System (CMS) to automate the process of lawful interception and monitoring of telecommunications technology. On 2 December 2015, in a reply to parliament question no. 595 on the scope, objectives and framework of the CMS, the government of India stated that, in order to strike a balance between national security, online privacy and free speech and to take care of the privacy of citizens, lawful interception and monitoring is governed by Section 5(2) of the Indian Telegraph Act, 1885, read with Rule 419A of the Indian Telegraph (Amendment) Rules, 2007, with an oversight mechanism in the form of a review committee chaired by the Cabinet Secretary at the central government level and by the Chief Secretary of the State at the state government level. Section 5(2) also allows the government to intercept messages in cases of public emergency or in the interest of public safety.
Methods
Official use
The contracts or licenses by which the state controls telephone companies often require that the companies must provide access to tapping lines to law enforcement. In the U.S., telecommunications carriers are required by law to cooperate in the interception of communications for law enforcement purposes under the terms of Communications Assistance for Law Enforcement Act (CALEA).
When telephone exchanges were mechanical, a tap had to be installed by technicians, linking circuits together to route the audio signal from the call. Now that many exchanges have been converted to digital technology, tapping is far simpler and can be ordered remotely by computer. This central office switch wiretapping technology using the Advanced Intelligent Network (AIN) was invented by Wayne Howe and Dale Malik at BellSouth's Advanced Technology R&D group in 1995 and was issued as US Patent #5,590,171. Telephone services provided by cable TV companies also use digital switching technology. If the tap is implemented at a digital switch, the switching computer simply copies the digitized bits that represent the phone conversation to a second line and it is impossible to tell whether a line is being tapped. A well-designed tap installed on a phone wire can be difficult to detect. In some places, law enforcement may even be able to access a mobile phone's internal microphone while it is not actively being used for a call (unless the battery is removed or drained). The noises that some people believe to be telephone taps are simply crosstalk created by the coupling of signals from other phone lines.
Data on the calling and called number, time of call and duration, will generally be collected automatically on all calls and stored for later use by the billing department of the phone company. These data can be accessed by security services, often with fewer legal restrictions than for a tap. This information used to be collected using special equipment known as pen registers and trap and trace devices and U.S. law still refers to it under those names. Today, a list of all calls to a specific number can be obtained by sorting billing records. A telephone tap during which only the call information is recorded but not the contents of the phone calls themselves, is called a pen register tap.
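As an illustration of how a pen-register-style view is simply a filtering and sorting of billing records, consider the following Python sketch; the record layout and numbers are hypothetical, not taken from any real billing system:

    from collections import namedtuple

    # Hypothetical call-detail-record layout; real billing systems differ.
    CDR = namedtuple("CDR", "caller called start_time duration_s")

    records = [
        CDR("555-0101", "555-0199", "2024-01-05T09:12", 340),
        CDR("555-0123", "555-0199", "2024-01-05T11:40", 60),
        CDR("555-0101", "555-0177", "2024-01-06T08:05", 15),
    ]

    # "A list of all calls to a specific number can be obtained by sorting billing records":
    target = "555-0199"
    calls_to_target = sorted((r for r in records if r.called == target),
                             key=lambda r: r.start_time)
    for r in calls_to_target:
        print(r.caller, "->", r.called, r.start_time, f"{r.duration_s}s")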
For telephone services via digital exchanges, the information collected may additionally include a log of the type of communications media being used (some services treat data and voice communications differently, in order to conserve bandwidth).
Non-official use
Conversations can be recorded or monitored unofficially, either by tapping by a third party without the knowledge of the parties to the conversation or recorded by one of the parties. This may or may not be illegal, according to the circumstances and the jurisdiction.
There are a number of ways to monitor telephone conversations. One of the parties may record the conversation, either on a tape or solid-state recording device, or they may use a computer running call recording software. The recording, whether overt or covert, may be started manually, automatically when it detects sound on the line (VOX), or automatically whenever the phone is off the hook. The sound may be captured in several ways:
using an inductive coil tap (telephone pickup coil) attached to the handset or near the base of the telephone, picking up the stray field of the telephone's hybrid;
fitting an in-line tap, as discussed below, with a recording output;
using an in-ear microphone while holding the telephone to the ear normally; this picks up both ends of the conversation without too much disparity between the volumes
more crudely and with lower quality, simply using a speakerphone and recording with a normal microphone
The conversation may be monitored (listened to or recorded) covertly by a third party by using an induction coil or a direct electrical connection to the line using a beige box. An induction coil is usually placed underneath the base of a telephone or on the back of a telephone handset to pick up the signal inductively. An electrical connection can be made anywhere in the telephone system, and need not be in the same premises as the telephone. Some apparatus may require occasional access to replace batteries or tapes. Poorly designed tapping or transmitting equipment can cause interference audible to users of the telephone.
The tapped signal may either be recorded at the site of the tap or transmitted by radio or over the telephone wires. State-of-the-art equipment operates in the 30–300 GHz range to keep up with telephone technology, compared to the 772 kHz systems used in the past. The transmitter may be powered from the line to be maintenance-free, and only transmits when a call is in progress. These devices are low-powered, as not much power can be drawn from the line, but a state-of-the-art receiver could be located as far away as ten kilometers under ideal conditions, though it is usually located much closer. Research has shown that a satellite can be used to receive terrestrial transmissions with a power of a few milliwatts. Any sort of radio transmitter whose presence is suspected is detectable with suitable equipment.
Conversation on many early cordless telephones could be picked up with a simple radio scanner or sometimes even a domestic radio. Widespread digital spread spectrum technology and encryption has made eavesdropping increasingly difficult.
A problem with recording a telephone conversation is that the recorded volume of the two speakers may be very different. A simple tap will have this problem. An in-ear microphone, while involving an additional distorting step by converting the electrical signal to sound and back again, in practice gives better-matched volume. Dedicated, and relatively expensive, telephone recording equipment equalizes the sound at both ends from a direct tap much better.
Location data
Mobile phones are, in surveillance terms, a major liability.
For mobile phones, the major threat is the collection of communications data. This data includes not only information about the time, duration, originator and recipient of the call, but also the identification of the base station from which the call was made, which gives its approximate geographical location. This data is stored with the details of the call and has utmost importance for traffic analysis.
It is also possible to get greater resolution of a phone's location by combining information from a number of cells surrounding the location, which cells routinely communicate (to agree on the next handoff—for a moving phone) and measuring the timing advance, a correction for the speed of light in the GSM standard. This additional precision must be specifically enabled by the telephone company—it is not part of the network's ordinary operation.
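A rough sense of the precision involved: in GSM, one timing-advance step corresponds to one bit period (48/13 microseconds) of round-trip delay, i.e. roughly 550 m of one-way distance to the serving base station. The following back-of-the-envelope Python sketch (an illustration, not part of the original text) converts a reported timing-advance value into an approximate distance:

    C = 299_792_458           # speed of light, m/s
    BIT_PERIOD = 48e-6 / 13   # GSM bit period in seconds (~3.69 microseconds)

    def ta_to_distance_m(ta):
        # Approximate one-way distance to the serving cell for a timing-advance value.
        return ta * BIT_PERIOD * C / 2

    for ta in (0, 1, 10, 63):                   # GSM reports TA as an integer 0..63
        print(ta, round(ta_to_distance_m(ta)))  # about 0, 554, 5535 and 34870 metres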
Internet
In 1995, Peter Garza, a Special Agent with the Naval Criminal Investigative Service, conducted the first court-ordered Internet wiretap in the United States while investigating Julio Cesar "griton" Ardita.
As technologies emerge, including VoIP, new questions are raised about law enforcement access to communications (see VoIP recording). In 2004, the Federal Communications Commission was asked to clarify how the Communications Assistance for Law Enforcement Act (CALEA) related to Internet service providers. The FCC stated that “providers of broadband Internet access and voice over Internet protocol (“VoIP”) services are regulable as “telecommunications carriers” under the Act.” Those affected by the Act will have to provide access to law enforcement officers who need to monitor or intercept communications transmitted through their networks. As of 2009, warrantless surveillance of internet activity has consistently been upheld in FISA court.
The Internet Engineering Task Force has decided not to consider requirements for wiretapping as part of the process for creating and maintaining IETF standards.
Typically, illegal Internet wiretapping will be conducted via Wi-Fi connection to someone's internet by cracking the WEP or WPA key, using a tool such as Aircrack-ng or Kismet. Once in, the intruder will rely on a number of potential tactics, for example an ARP spoofing attack which will allow the intruder to view packets in a tool such as Wireshark or Ettercap.
Mobile phone
First-generation mobile phones (in use from roughly 1978 through 1990) could be easily monitored by anyone with a 'scanning all-band receiver' because the system used analog transmission, like an ordinary radio transmitter. Later digital phones are harder to monitor because they use digitally encoded and compressed transmission. However, the government can tap mobile phones with the cooperation of the phone company. It is also possible for organizations with the correct technical equipment to monitor mobile phone communications and decrypt the audio.
To the mobile phones in its vicinity, a device called an "IMSI-catcher" pretends to be a legitimate base station of the mobile phone network, thus subjecting the communication between the phone and the network to a man-in-the-middle attack. This is possible because, while the mobile phone has to authenticate itself to the mobile telephone network, the network does not authenticate itself to the phone. There is no defense against IMSI-catcher based eavesdropping, except using end-to-end call encryption; products offering this feature, secure telephones, are already beginning to appear on the market, though they tend to be expensive and incompatible with each other, which limits their proliferation.
Webtapping
Logging the IP addresses of users that access certain websites is commonly called "webtapping".
Webtapping is used to monitor websites that presumably contain dangerous or sensitive materials, and the people that access them. Though it is allowed by the USA PATRIOT Act, it is considered a questionable practice by many citizens.
Telephone call recording
In Canada, anyone is legally allowed to record a conversation as long as they are involved in the conversation. The police must apply for a warrant beforehand to legally eavesdrop on a conversation, and it must be expected that the interception will reveal evidence of a crime. State agents are lawfully allowed to record conversations, but to reveal the evidence in court, they must obtain a warrant.
History
Many state legislatures in the United States enacted statutes that prohibited anyone from listening in on telegraph communication. Telephone wiretapping began in the 1890s, following the invention of the telephone recorder, and its constitutionality was established in the Prohibition-Era conviction of bootlegger Roy Olmstead. Wiretapping has also been carried out under most Presidents, sometimes with a lawful warrant since the Supreme Court ruled it constitutional in 1928. On October 19, 1963, U.S. Attorney General Robert F. Kennedy, who served under John F. Kennedy and Lyndon B. Johnson, authorized the FBI to begin wiretapping the communications of Rev. Martin Luther King Jr. The wiretaps remained in place until April 1965 at his home and June 1966 at his office.
The history of voice communication technology began in 1876 with the invention of Alexander Graham Bell's telephone. In the 1890s, "law enforcement agencies begin tapping wires on early telephone networks". Remote voice communications "were carried almost exclusively by circuit-switched systems", where telephone switches would connect wires to form a continuous circuit and disconnect the wires when the call ended. All other telephone services, such as call forwarding and message taking, were handled by human operators. However, the first computerized telephone switch was developed by Bell Labs in 1965. This made the standard wiretapping techniques of the era obsolete.
In late 1940, the Nazis tried to secure some telephone lines between their forward headquarters in Paris and a variety of Führerbunkers in Germany. They did this by constantly monitoring the voltage on the lines, looking for any sudden drops or increases in voltage indicating that other wiring had been attached. However, the French telephone engineer Robert Keller succeeded in attaching taps without alerting the Nazis. This was done through an isolated rental property just outside of Paris. Keller's group became known to SOE (and later Allied military intelligence generally) as "Source K". They were later betrayed by a mole within the French resistance, and Keller was murdered in Bergen-Belsen in April 1945.
In the 1970s, optical fibers became a medium for telecommunications. These fiber lines, "long, thin strands of glass that carry signals via laser light", are more secure than radio and have become very cheap. From the 1990s to the present, the majority of communications between fixed locations has been achieved by fiber. Because these fiber communications are wired, they are given greater protection under U.S. law.
The earliest wiretaps were extra wires, physically inserted into the line between the switchboard and the subscriber, that carried the signal to a pair of earphones and a recorder. Later on, wiretaps were installed at the central office on the frames that held the incoming wires.
Before the attack on Pearl Harbor and the subsequent entry of the United States into World War II, the U.S. House of Representatives held hearings on the legality of wiretapping for national defense. Significant legislation and judicial decisions on the legality and constitutionality of wiretapping had taken place years before World War II. However, it took on new urgency at that time of national crisis. The actions of the government regarding wiretapping for the purpose of national defense in the current war on terror have drawn considerable attention and criticism. In the World War II era, the public was also aware of the controversy over the question of the constitutionality and legality of wiretapping. Furthermore, the public was concerned with the decisions that the legislative and judicial branches of the government were making regarding wiretapping.
In 1967 the U.S. Supreme Court ruled that wiretapping (or “intercepting communications”) requires a warrant in Katz v. United States. In 1968 Congress passed a law that provided warrants for wiretapping in criminal investigations. In 1978 the Foreign Intelligence Surveillance Act (FISA) created a "secret federal court" for issuing wiretap warrants in national security cases. This was in response to findings from the Watergate break-in, which allegedly uncovered a history of presidential operations that had used surveillance on domestic and foreign political organizations.
In 1994, Congress approved the Communications Assistance for Law Enforcement Act (CALEA), which “requires telephone companies to be able to install more effective wiretaps. In 2004, the Federal Bureau of Investigation (FBI), United States Department of Justice (DOJ), Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), and Drug Enforcement Administration (DEA) wanted to expand CALEA requirements to VoIP service.”
The Federal Communications Commission (FCC) ruled in August 2005 that "broadband-service providers and interconnected VoIP providers fall within CALEA's scope". Currently, instant messaging, web boards and site visits are not included in CALEA's jurisdiction. In 2007, Congress amended FISA to "allow the government to monitor more communications without a warrant". In 2008, President George W. Bush expanded the surveillance of internet traffic to and from the U.S. government by signing a national security directive.
In the Greek telephone tapping case 2004–2005 more than 100 mobile phone numbers belonging mostly to members of the Greek government, including the Prime Minister of Greece, and top-ranking civil servants were found to have been illegally tapped for a period of at least one year. The Greek government concluded this had been done by a foreign intelligence agency, for security reasons related to the 2004 Olympic Games, by unlawfully activating the lawful interception subsystem of the Vodafone Greece mobile network. An Italian tapping case which surfaced in November 2007 revealed significant manipulation of the news at the national television company RAI.
In 2008, Wired and other media reported that a whistleblower had disclosed a "Quantico Circuit", a 45-megabit/second DS-3 line linking a carrier's most sensitive network, in an affidavit that was the basis for a lawsuit against Verizon Wireless. The circuit provides direct access to all content and all information concerning the origin and termination of telephone calls placed on the Verizon Wireless network as well as the actual content of calls, according to the filing.
The most recent case of U.S. wiretapping was the NSA warrantless surveillance controversy discovered in December 2005. It aroused much controversy after then President George W. Bush admitted to violating a specific federal statute (FISA) and the warrant requirement of the Fourth Amendment to the United States Constitution. The President claimed his authorization was consistent with other federal statutes (AUMF) and other provisions of the Constitution, also stating that it was necessary to keep America safe from terrorism and could lead to the capture of notorious terrorists responsible for the September 11 attacks in 2001.
One difference between foreign wiretapping and domestic wiretapping is that, when operating in other countries, "American intelligence services could not place wiretaps on phone lines as easily as they could in the U.S." Also, domestically, wiretapping is regarded as an extreme investigative technique, whereas outside of the country, interception of communications is carried out on an enormous scale. The National Security Agency (NSA) "spends billions of dollars every year intercepting foreign communications from ground bases, ships, airplanes and satellites".
FISA distinguishes between U.S. persons and foreigners, between communications inside and outside the U.S., and between wired and wireless communications. Wired communications within the United States are protected, since intercepting them requires a warrant.
See also
Echelon (signals intelligence)
Indiscriminate monitoring
Mass surveillance
Phone hacking
Secure telephone
Telephone tapping in the Eastern Bloc
References
External links
In Pellicano Case, Lessons in Wiretapping Skills NY Times May 5, 2008
Lawyers for Guantanamo Inmates Accuse US of Eavesdropping NY Times May 7, 2008 |
91256 | https://en.wikipedia.org/wiki/Computer%20and%20network%20surveillance | Computer and network surveillance | Computer and network surveillance is the monitoring of computer activity and data stored locally on a computer or data being transferred over computer networks such as the Internet. This monitoring is often carried out covertly and may be completed by governments, corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent government agencies. Computer and network surveillance programs are widespread today and almost all Internet traffic can be monitored.
Surveillance allows governments and other agencies to maintain social control, recognize and monitor threats or any suspicious activity, and prevent and investigate criminal activities. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.
Many civil rights and privacy groups, such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increasing surveillance of citizens will result in a mass surveillance society, with limited political and/or personal freedoms. Such fear has led to numerous lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".
Network surveillance
The vast majority of computer surveillance involves the monitoring of personal data and traffic on the Internet. For example, in the United States, the Communications Assistance For Law Enforcement Act mandates that all phone calls and broadband internet traffic (emails, web traffic, instant messaging, etc.) be available for unimpeded, real-time monitoring by Federal law enforcement agencies.
Packet capture (also known as "packet sniffing") is the monitoring of data traffic on a network. Data sent between computers over the Internet or between any networks takes the form of small chunks called packets, which are routed to their destination and assembled back into a complete message. A packet capture appliance intercepts these packets, so that they may be examined and analyzed. Computer technology is needed to perform traffic analysis and sift through intercepted data to look for important/useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install such packet capture technology so that Federal law enforcement and intelligence agencies are able to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic.
There is far too much data gathered by these packet sniffers for human investigators to manually search through. Thus, automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic, filtering out, and reporting to investigators those bits of information which are "interesting", for example, the use of certain words or phrases, visiting certain types of web sites, or communicating via email or chat with a certain individual or group. Billions of dollars per year are spent by agencies such as the Information Awareness Office, NSA, and the FBI, for the development, purchase, implementation, and operation of systems which intercept and analyze this data, extracting only the information that is useful to law enforcement and intelligence agencies.
Similar systems are now used by the Iranian security services to identify and suppress dissidents. The technology was allegedly installed by Germany's Siemens AG and Finland's Nokia.
The Internet's rapid development has made it a primary form of communication, and more people are potentially subject to Internet surveillance. There are advantages and disadvantages to network monitoring. For instance, systems described as "Web 2.0" have greatly impacted modern society. Tim O'Reilly, who first explained the concept of "Web 2.0", stated that Web 2.0 provides communication platforms that are "user generated", with self-produced content, motivating more people to communicate with friends online. However, Internet surveillance also has a disadvantage. One researcher from Uppsala University said "Web 2.0 surveillance is directed at large user groups who help to hegemonically produce and reproduce surveillance by providing user-generated (self-produced) content. We can characterize Web 2.0 surveillance as mass self-surveillance". Surveillance companies monitor people while they are focused on work or entertainment. Employers themselves also monitor their employees, in order to protect the company's assets and to control public communications but, most importantly, to make sure that their employees are actively working and being productive. This can affect people emotionally, for example by provoking jealousy. A research group states "...we set out to test the prediction that feelings of jealousy lead to ‘creeping’ on a partner through Facebook, and that women are particularly likely to engage in partner monitoring in response to jealousy". The study shows that women can become jealous of other people when they are in an online group.
Virtual assistants (AI) have become integrated into many people's daily lives. Currently, virtual assistants such as Amazon's Alexa or Apple's Siri cannot call 911 or local services. They are constantly listening for commands and recording parts of conversations that will help improve their algorithms. If law enforcement could be called using a virtual assistant, law enforcement would then be able to gain access to all the information saved for the device. Because the device is connected to the home's internet, law enforcement would also know the exact location of the individual making the call. While virtual assistant devices are popular, many debate their lack of privacy. The devices listen to every conversation the owner has, even when the owner is not talking to the assistant, in the hope that the owner will need assistance and in order to gather data.
Corporate surveillance
Corporate surveillance of computer activity is very common. The data collected is most often used for marketing purposes or sold to other corporations, but is also regularly shared with government agencies. It can be used as a form of business intelligence, which enables the corporation to better tailor their products and/or services to be desirable by their customers. The data can also be sold to other corporations so that they can use it for the aforementioned purpose, or it can be used for direct marketing purposes, such as targeted advertisements, where ads are targeted to the user of the search engine by analyzing their search history and emails (if they use free webmail services), which are kept in a database.
Such type of surveillance is also used to establish business purposes of monitoring, which may include the following:
Preventing misuse of resources. Companies can discourage unproductive personal activities such as online shopping or web surfing on company time. Monitoring employee performance is one way to reduce unnecessary network traffic and reduce the consumption of network bandwidth.
Promoting adherence to policies. Online surveillance is one means of verifying employee observance of company networking policies.
Preventing lawsuits. Firms can be held liable for discrimination or employee harassment in the workplace. Organizations can also be involved in infringement suits through employees that distribute copyrighted material over corporate networks.
Safeguarding records. Federal legislation requires organizations to protect personal information. Monitoring can determine the extent of compliance with company policies and programs overseeing information security. Monitoring may also deter unlawful appropriation of personal information, and potential spam or viruses.
Safeguarding company assets. The protection of intellectual property, trade secrets, and business strategies is a major concern. The ease of information transmission and storage makes it imperative to monitor employee actions as part of a broader policy.
The second component of prevention is determining the ownership of technology resources. The ownership of the firm's networks, servers, computers, files, and e-mail should be explicitly stated. There should be a distinction between an employee's personal electronic devices, which should be limited and proscribed, and those owned by the firm.
For instance, Google Search stores identifying information for each web search. An IP address and the search phrase used are stored in a database for up to 18 months. Google also scans the content of emails of users of its Gmail webmail service in order to create targeted advertising based on what people are talking about in their personal email correspondences. Google is, by far, the largest Internet advertising agency—millions of sites place Google's advertising banners and links on their websites in order to earn money from visitors who click on the ads. Each page containing Google advertisements adds, reads, and modifies "cookies" on each visitor's computer. These cookies track the user across all of these sites and gather information about their web surfing habits, keeping track of which sites they visit, and what they do when they are on these sites. This information, along with the information from their email accounts, and search engine histories, is stored by Google to use to build a profile of the user to deliver better-targeted advertising.
The United States government often gains access to these databases, either by producing a warrant for it, or by simply asking. The Department of Homeland Security has openly stated that it uses data collected from consumer credit and direct marketing agencies for augmenting the profiles of individuals that it is monitoring.
Malicious software
In addition to monitoring information sent over a computer network, there is also a way to examine data stored on a computer's hard drive, and to monitor the activities of a person using the computer. A surveillance program installed on a computer can search the contents of the hard drive for suspicious data, can monitor computer use, collect passwords, and/or report back activities in real-time to its operator through the Internet connection. A keylogger is an example of this type of program. Normal keylogging programs store their data on the local hard drive, but some are programmed to automatically transmit data over the network to a remote computer or Web server.
There are multiple ways of installing such software. The most common is remote installation, using a backdoor created by a computer virus or trojan. This tactic has the advantage of potentially subjecting multiple computers to surveillance. Viruses often spread to thousands or millions of computers, and leave "backdoors" which are accessible over a network connection, and enable an intruder to remotely install software and execute commands. These viruses and trojans are sometimes developed by government agencies, such as CIPAV and Magic Lantern. More often, however, viruses created by other people or spyware installed by marketing agencies can be used to gain access through the security breaches that they create.
Another method is "cracking" into the computer to gain access over a network. An attacker can then install surveillance software remotely. Servers and computers with permanent broadband connections are most vulnerable to this type of attack. Another source of security cracking is employees giving out information or users using brute force tactics to guess their password.
One can also physically place surveillance software on a computer by gaining entry to the place where the computer is stored and install it from a compact disc, floppy disk, or thumbdrive. This method shares a disadvantage with hardware devices in that it requires physical access to the computer. One well-known worm that uses this method of spreading itself is Stuxnet.
Social network analysis
One common form of surveillance is to create maps of social networks based on data from social networking sites as well as from traffic analysis information from phone call records such as those in the NSA call database, and internet traffic data gathered under CALEA. These social network "maps" are then data mined to extract useful information such as personal interests, friendships and affiliations, wants, beliefs, thoughts, and activities.
Many U.S. government agencies such as the Defense Advanced Research Projects Agency (DARPA), the National Security Agency (NSA), and the Department of Homeland Security (DHS) are currently investing heavily in research involving social network analysis. The intelligence community believes that the biggest threat to the U.S. comes from decentralized, leaderless, geographically dispersed groups. These types of threats are most easily countered by finding important nodes in the network, and removing them. To do this requires a detailed map of the network.
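The idea of "finding important nodes" is routinely expressed with standard graph centrality measures. The following Python sketch assumes the networkx library and uses a small, made-up communication graph purely for illustration:

    import networkx as nx

    # Toy communication graph: an edge means two (anonymised) parties exchanged calls.
    edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "g"),
             ("c", "d"), ("d", "e"), ("d", "f"), ("e", "f")]
    G = nx.Graph(edges)

    # Betweenness centrality scores the nodes that the most shortest paths pass
    # through; removing the top-scoring nodes fragments the network.
    centrality = nx.betweenness_centrality(G)
    for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(node, round(score, 3))
    # Here "c" and "d", which bridge the two clusters, come out on top.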
Jason Ethier of Northeastern University, in his study of modern social network analysis, said the following of the Scalable Social Network Analysis Program developed by the Information Awareness Office:
Monitoring from a distance
With only commercially available equipment, it has been shown that it is possible to monitor computers from a distance by detecting the radiation emitted by the CRT monitor. This form of computer surveillance, known as TEMPEST, involves reading electromagnetic emanations from computing devices in order to extract data from them at distances of hundreds of meters.
IBM researchers have also found that, for most computer keyboards, each key emits a slightly different noise when pressed. The differences are individually identifiable under some conditions, and so it's possible to log key strokes without actually requiring logging software to run on the associated computer.
In 2015, lawmakers in California passed a law prohibiting any investigative personnel in the state from forcing businesses to hand over digital communication without a warrant, calling this the Electronic Communications Privacy Act. At the same time in California, state senator Jerry Hill introduced a bill requiring law enforcement agencies to disclose more information on their usage of, and the information obtained from, the Stingray phone tracker device. As the law took effect in January 2016, it now requires cities to operate under new guidelines on how and when law enforcement use this device. Some legislators and those holding public office have disagreed with this technology because of the warrantless tracking, but now if a city wants to use this device, the matter must be heard at a public hearing. Some jurisdictions, such as Santa Clara County, have pulled out of using the StingRay.
It has also been shown, by Adi Shamir et al., that even the high-frequency noise emitted by a CPU includes information about the instructions being executed.
Policeware and govware
In German-speaking countries, spyware used or made by the government is sometimes called govware. Some countries like Switzerland and Germany have a legal framework governing the use of such software. Known examples include the Swiss MiniPanzer and MegaPanzer and the German R2D2 (trojan).
Policeware is software designed to police citizens by monitoring their discussions and interactions. Within the U.S., Carnivore was the first incarnation of secretly installed e-mail monitoring software installed in Internet service providers' networks to log computer communication, including transmitted e-mails. Magic Lantern is another such application, this time running in a targeted computer in a trojan style and performing keystroke logging. CIPAV, deployed by the FBI, is a multi-purpose spyware/trojan.
The Clipper Chip, formerly known as MYK-78, is a small hardware chip designed in the 1990s that the government could install into phones. It was intended to secure private communication and data by encoding voice messages while still allowing government agencies to decode them. The Clipper Chip was designed during the Clinton administration to "…protect personal safety and national security against a developing information anarchy that fosters criminals, terrorists and foreign foes." The government portrayed it as the solution to the secret codes or cryptographic keys that the age of technology created. This raised controversy among the public, because the Clipper Chip was thought to be the next "Big Brother" tool. The controversy led to the failure of the Clipper proposal, even though there were many attempts to push the agenda.
The "Consumer Broadband and Digital Television Promotion Act" (CBDTPA) was a bill proposed in the United States Congress. CBDTPA was known as the "Security Systems and Standards Certification Act" (SSSCA) while in draft form and was killed in committee in 2002. Had CBDTPA become law, it would have prohibited technology that could be used to read digital content under copyright (such as music, video, and e-books) without Digital Rights Management (DRM) that prevented access to this material without the permission of the copyright holder.
Surveillance as an aid to censorship
Surveillance and censorship are different. Surveillance can be performed without censorship, but it is harder to engage in censorship without some forms of surveillance. And even when surveillance does not lead directly to censorship, the widespread knowledge or belief that a person, their computer, or their use of the Internet is under surveillance can lead to self-censorship.
In March 2013 Reporters Without Borders issued a Special report on Internet surveillance that examines the use of technology that monitors online activity and intercepts electronic communication in order to arrest journalists, citizen-journalists, and dissidents. The report includes a list of "State Enemies of the Internet", Bahrain, China, Iran, Syria, and Vietnam, countries whose governments are involved in active, intrusive surveillance of news providers, resulting in grave violations of freedom of information and human rights. Computer and network surveillance is on the increase in these countries. The report also includes a second list of "Corporate Enemies of the Internet", Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany), companies that sell products that are liable to be used by governments to violate human rights and freedom of information. Neither list is exhaustive and they are likely to be expanded in the future.
Protection of sources is no longer just a matter of journalistic ethics. Journalists should equip themselves with a "digital survival kit" if they are exchanging sensitive information online, storing it on a computer hard-drive or mobile phone. Individuals associated with high-profile rights organizations, dissident groups, protest groups, or reform groups are urged to take extra precautions to protect their online identities.
See also
Anonymizer, a software system that attempts to make network activity untraceable
Computer surveillance in the workplace
Cyber spying
Datacasting, a means of broadcasting files and Web pages using radio waves, allowing receivers near total immunity from traditional network surveillance techniques.
Differential privacy, a method to maximize the accuracy of queries from statistical databases while minimizing the chances of violating the privacy of individuals.
ECHELON, a signals intelligence (SIGINT) collection and analysis network operated on behalf of Australia, Canada, New Zealand, the United Kingdom, and the United States, also known as AUSCANNZUKUS and Five Eyes
GhostNet, a large-scale cyber spying operation discovered in March 2009
List of government surveillance projects
Mass surveillance
China's Golden Shield Project
Mass surveillance in Australia
Mass surveillance in China
Mass surveillance in East Germany
Mass surveillance in India
Mass surveillance in North Korea
Mass surveillance in the United Kingdom
Mass surveillance in the United States
Surveillance
Surveillance by the United States government:
2013 mass surveillance disclosures, reports about NSA and its international partners' mass surveillance of foreign nationals and U.S. citizens
Bullrun (code name), a highly classified NSA program to preserve its ability to eavesdrop on encrypted communications by influencing and weakening encryption standards, by obtaining master encryption keys, and by gaining access to data before or after it is encrypted either by agreement, by force of law, or by computer network exploitation (hacking)
Carnivore, a U.S. Federal Bureau of Investigation system to monitor email and electronic communications
COINTELPRO, a series of covert, and at times illegal, projects conducted by the FBI aimed at U.S. domestic political organizations
Communications Assistance For Law Enforcement Act
Computer and Internet Protocol Address Verifier (CIPAV), a data gathering tool used by the U.S. Federal Bureau of Investigation (FBI)
Dropmire, a secret surveillance program by the NSA aimed at surveillance of foreign embassies and diplomatic staff, including those of NATO allies
Magic Lantern, keystroke logging software developed by the U.S. Federal Bureau of Investigation
Mass surveillance in the United States
NSA call database, a database containing metadata for hundreds of billions of telephone calls made in the U.S.
NSA warrantless surveillance (2001–07)
NSA whistleblowers: William Binney, Thomas Andrews Drake, Mark Klein, Edward Snowden, Thomas Tamm, Russ Tice
Spying on United Nations leaders by United States diplomats
Stellar Wind (code name), code name for information collected under the President's Surveillance Program
Tailored Access Operations, NSA's hacking program
Terrorist Surveillance Program, an NSA electronic surveillance program
Total Information Awareness, a project of the Defense Advanced Research Projects Agency (DARPA)
TEMPEST, codename for studies of unintentional intelligence-bearing signals which, if intercepted and analyzed, may disclose the information transmitted, received, handled, or otherwise processed by any information-processing equipment
References
External links
"Selected Papers in Anonymity", Free Haven Project, accessed 16 September 2011.
Computer forensics
Surveillance
Espionage techniques |
92210 | https://en.wikipedia.org/wiki/Len%20Sassaman | Len Sassaman | Leonard Harris Sassaman (1980 – July 3, 2011) was an American technologist, information privacy advocate, and the maintainer of the Mixmaster anonymous remailer code and operator of the randseed remailer. Much of his career gravitated towards cryptography and protocol development.
Early life and education
Sassaman graduated from The Hill School in 1998. By 18, he was participating in the Internet Engineering Task Force, the body responsible for the TCP/IP protocols that underlie the Internet and, later, the Bitcoin network. He was diagnosed with depression as a teenager. In 1999, Sassaman moved to the Bay Area, quickly became a regular in the cypherpunk community and moved in with Bram Cohen.
Career
Sassaman was employed as the security architect and senior systems engineer for Anonymizer. He was a PhD candidate at the Katholieke Universiteit Leuven in Belgium, as a researcher with the Computer Security and Industrial Cryptography (COSIC) research group, led by Bart Preneel. David Chaum and Bart Preneel were his advisors.
Sassaman was a well-known cypherpunk, cryptographer and privacy advocate. He worked for Network Associates on the PGP encryption software, was a member of the Shmoo Group, a contributor to the OpenPGP IETF working group, the GNU Privacy Guard project, and frequently appeared at technology conferences like DEF CON. Sassaman was the co-founder of CodeCon along with Bram Cohen, co-founder of the HotPETS workshop (with Roger Dingledine of Tor and Thomas Heydt-Benjamin), co-author of the Zimmermann–Sassaman key-signing protocol, and at the age of 21, was an organizer of the protests following the arrest of Russian programmer Dmitry Sklyarov.
On February 11, 2006, at the fifth CodeCon, Sassaman proposed to returning speaker and noted computer scientist Meredith L. Patterson during the Q&A after her presentation, and they were married. The couple worked together on several research collaborations, including a critique of privacy flaws in the OLPC Bitfrost security platform, and a proposal of formal methods of analysis of computer insecurity in February 2011.
Meredith Patterson's current startup, Osogato, aims to commercialize Patterson's Support Vector Machine-based "query by example" research. Sassaman and Patterson announced Osogato's first product, a downloadable music recommendation tool, at SuperHappyDevHouse 21 in San Francisco.
In 2009, Dan Kaminsky presented joint work with Sassaman and Patterson at Black Hat in Las Vegas, showing multiple methods for attacking the X.509 certificate authority infrastructure. Using these techniques, the team demonstrated how an attacker could obtain a certificate that clients would treat as valid for domains the attacker did not control.
Death
Sassaman is reported to have died on July 3, 2011. Patterson reported that her husband's death was a suicide.
A presentation given by Kaminsky at the 2011 Black Hat Briefings revealed that a testimonial in honor of Sassaman had been permanently embedded into Bitcoin's block chain.
See also
Information privacy
Information security
References
External links
Archive of Len Sassaman's homepage from July 2011
Cypherpunks
1980 births
2011 suicides
Modern cryptographers
People associated with computer security
Computer systems engineers
Suicides in Belgium
The Hill School alumni
2011 deaths |
93763 | https://en.wikipedia.org/wiki/Delia%20Bacon | Delia Bacon | Delia Salter Bacon (February 2, 1811 – September 2, 1859) was an American writer of plays and short stories and Shakespeare scholar. She is best known for her work on the authorship of Shakespeare's plays, which she attributed to social reformers including Francis Bacon, Sir Walter Raleigh and others.
Bacon's research in Boston, New York, and London led to the publication of her major work on the subject, The Philosophy of the Plays of Shakspere Unfolded. Her admirers included authors Harriet Beecher Stowe, Nathaniel Hawthorne and Ralph Waldo Emerson, the last of whom called her "America's greatest literary producer of the past ten years" at the time of her death.
Biography
Bacon was born in a frontier log cabin in Tallmadge, Ohio, the youngest daughter of Congregational minister David Bacon, who, in pursuit of a vision, had abandoned New Haven for the wilds of Ohio. The venture quickly collapsed, and the family returned to New England, where her father died soon after. The impoverished state of their finances permitted only her elder brother Leonard to receive a tertiary education, at Yale, while her own formal education ended when she was fourteen. She became a teacher in schools in Connecticut, New Jersey, and New York, and then, until about 1852, became a distinguished professional lecturer, conducting, in various Eastern United States cities, classes for women in history and literature by methods she devised. At 20, in 1831, she anonymously published her first book, Tales of the Puritans, consisting of three long stories on colonial life. In 1832, she beat Edgar Allan Poe for a short-story prize sponsored by the Philadelphia Saturday Courier.
In 1836, she moved to New York, and became an avid theatre-goer. She met the leading Shakespearean actress Ellen Tree soon after, and persuaded her to take the lead role in a play she was writing, partly in blank verse, entitled The Bride of Fort Edward, based on her award-winning story, Love's Martyr, about Jane M'Crea. The play, however, was never performed, due in part to Bacon's health and the harsh criticisms of her brother. It was published anonymously in 1839 (with a note claiming it was "not a play"). The text was reviewed favourably by the Saturday Courier and Edgar Allan Poe, but proved to be a commercial flop.
Returning to New Haven, Bacon met Yale-educated minister Alexander MacWhorter in 1846. Time in each other's company and a trip to Northampton convinced many of the impropriety of their relationship. MacWhorter was brought to ecclesiastical trial by Bacon's brother Leonard for "dishonorable conduct," but was acquitted in a 12–11 vote. Public opinion compelled Bacon to leave New Haven for Ohio, while Catharine Beecher wrote a book defending her conduct.
Shakespeare authorship theory
Delia Bacon withdrew from public life and lecturing in early 1845, and began to research intensively a theory she was developing over the authorship of Shakespeare's works, which she mapped out by October of that year. However, a decade was to pass before her book The Philosophy of the Plays of Shakespeare Unfolded (1857) saw print. During these years she was befriended by Nathaniel Hawthorne and Ralph Waldo Emerson, and, after securing sponsorship to travel for research to England, in May 1853 met Thomas Carlyle, who, though intrigued, shrieked loudly as he heard her exposition.
This was the heyday of higher criticism, which was claiming to have uncovered the multiple authorship of the Bible, and positing the composite nature of masterpieces like those attributed to Homer. It was also a period of rising bardolatry, the deification of Shakespeare's genius, and a widespread, almost hyperbolic veneration for the philosophical genius of Francis Bacon. Delia Bacon was influenced by these currents. Like many of her time, she approached Shakespearean drama as philosophical masterpieces written for a closed aristocratic society of courtiers and monarchs, and found it difficult to believe they were written either with commercial intent or for a popular audience. Puzzled by the gap between the bare facts of William Shakespeare's life and his vast literary output, she intended to prove that the plays attributed to Shakespeare were written by a coterie of men, including Francis Bacon, Sir Walter Raleigh and Edmund Spenser, for the purpose of inculcating a philosophic system, for which they felt that they themselves could not afford to assume the responsibility. This system she set out to discover beneath the superficial text of the plays. From her friendship with Samuel Morse, an authority on codes, and encryption for the telegraph, she learned of Bacon's interest in secret ciphers, and this prompted her own approach to the authorship question.
James Shapiro interprets her theory both in terms of the cultural tensions of her historical milieu, and as consequential on an intellectual and emotional crisis that unfolded as she both broke with her Puritan upbringing and developed a deep confidential relationship with a fellow lodger, Alexander MacWhorter, a young theology graduate from Yale, which was subsequently interrupted by her brother. MacWhorter was absolved of culpability in a subsequent ecclesiastical trial, whose verdict led to a rift between Delia and her fellow congregationalists.
Her theory proposed that the missing fourth part of Bacon's unfinished magnum opus, the Instauratio Magna had in fact survived in the form of the plays attributed to Shakespeare. Delia Bacon argued that the great plays were the collective effort of a:
little clique of disappointed and defeated politicians who undertook to head and organize popular opposition against the government, and were compelled to retreat from that enterprise... Driven from one field, they showed themselves in another. Driven from the open field, they fought in secret.
The cenacle opposing the 'despotism' of Queen Elizabeth and King James, like the knights of King Arthur's Round Table consisted of Francis Bacon, Walter Ralegh, and, as far as Shapiro can make out from her confused writing, perhaps Edmund Spenser, Lord Buckhurst and the Earl of Oxford, all putatively employing playwriting to speak to both rulers and the ruled as committed republicans vindicating that cause against tyranny. She had, in Shapiro's reading, a 'revolutionary agenda' that consisted in upturning the myths of America's founding fathers and the Puritan heritage.
Bacon's skeptical attitude towards the orthodox view of Shakespearean authorship earned her the enduring contempt of many, such as Richard Grant White. However, Emerson assisted her in publishing her first essay on the Shakespearean question in the January 1856 issue of Putnam’s:
How can we undertake to account for the literary miracles of antiquity, while this great myth of the modern ages still lies at our own door, unquestioned? This vast, magical, unexplained phenomenon which our own times have produced under our own eyes, appears to be, indeed, the only thing which our modern rationalism is not to be permitted to meddle with. For, here the critics themselves still veil their faces, filling the air with mystic utterances which seem to say, that to this shrine at least, for the footstep of the common reason and the common sense, there is yet no admittance.
Emerson, who greatly admired Bacon, and who was sceptical of her claim, wrote that she would need 'enchanted instruments, nay alchemy itself, to melt into one identity these two reputations', and retrospectively remarked that America had only two "producers" during the 1850s, "Our wild Whitman, with real inspiration but checked by [a] titanic abdomen; and Delia Bacon, with genius, but mad and clinging like a tortoise to English soil." Though he was intrigued by her insights into the plays, he grew skeptical of the 'magical cipher' of which Bacon wrote without ever producing evidence for it.
According to Whitman, himself among the most outspoken of 19th century anti-Stratfordians, she was "the sweetest, eloquentist, grandest woman…that America has so far produced….and, of course, very unworldly, just in all ways such a woman as was calculated to bring the whole literary pack down on her, the orthodox, cruel, stately, dainty, over-fed literary pack – worshipping tradition, unconscious of this day’s honest sunlight."
Bacon's legacy
One recent assessment echoes the favorable view of Bacon held by Emerson, Hawthorne, and Whitman:
For too long critics have depicted [her] as a tragicomic figure, blindly pursuing a fantastic mission in obscurity and isolation, only to end in silence and madness….this is not to say that the stereotype is without basis. On the contrary, her sad story established an archetype for the story of the Shakespeare authorship at large – or at least one element of it: an otherworldly pursuit of truth that produces gifts for a world that is indifferent or hostile to them.
James Shapiro argues that her political reading of the plays, and her insistence on collaborative authorship, anticipated modern approaches by a century and a half.
Had she limited her argument to these points instead of conjoining it to an argument about how Shakespeare couldn't have written them, there is little doubt that, instead of being dismissed as a crank and a madwoman, she would be hailed today as the precursor of the New Historicists, and the first to argue that the plays anticipated the political upheavals England experienced in the mid-seventeenth century. But Delia Bacon couldn't stop at that point. Nor could she concede that the republican ideas she located in the plays circulated widely at the time and were as available to William Shakespeare as they were to Walter Ralegh or Francis Bacon.
There is a biography by her nephew, Theodore Bacon, Delia Bacon: A Sketch (Boston, 1888), and an appreciative chapter, "Recollections of a Gifted Woman," in Nathaniel Hawthorne's Our Old Home (Boston, 1863). She died in Hartford, Connecticut.
Bacon and her theories are featured heavily in Jennifer Lee Carrell's novel Interred with Their Bones.
She is interred in Grove Street Cemetery in New Haven, Connecticut.
Notes
Further reading
Papers of Delia Salter Bacon can be found at the Folger Shakespeare Library: 323 items (2 boxes), Folger MS Y.c.2599 (1-323)
External links
The Philosophy of the Plays of Shakspere Unfolded at Archive.org.
1811 births
1859 deaths
Baconian theory of Shakespeare authorship
Burials at Grove Street Cemetery
Shakespearean scholars
Shakespeare authorship theorists
19th-century American women writers
American women short story writers
Writers from New Haven, Connecticut
19th-century American short story writers |
93769 | https://en.wikipedia.org/wiki/PKZIP | PKZIP | PKZIP is a file archiving computer program, notable for introducing the popular ZIP file format. PKZIP was first introduced for MS-DOS on the IBM-PC compatible platform in 1989. Since then versions have been released for a number of other architectures and operating systems. PKZIP was originally written by Phil Katz and marketed by his company PKWARE, Inc, with both of them bearing his initials: 'PK'.
History
By the 1970s, file archiving programs were distributed as standard utilities with operating systems. They include the Unix utilities ar, shar, and tar. These utilities were designed to gather a number of separate files into a single archive file for easier copying and distribution. These archives could optionally be passed through a stream compressor utility, such as compress and others.
Other archivers also appeared during the 1980s, including ARC by System Enhancement Associates, Inc. (SEA), Rahul Dhesi's ZOO, Dean W. Cooper's DWC, LHarc by Haruhiko Okomura and Haruyasu Yoshizaki and ARJ which stands for "Archived by Robert Jung".
The development of PKZIP was first announced in the file SOFTDEV.DOC from within the PKPAK 3.61 package, stating it would develop a new and yet unnamed compression program. The announcement had been made following the lawsuit between SEA and PKWARE, Inc. Although SEA won the suit, it lost the compression war, as the user base migrated to PKZIP as the compressor of choice. Led by BBS sysops who refused to accept or offer files compressed as .ARC files, users began recompressing any old archives that were currently stored in .ARC format into .ZIP files.
The first version was released in 1989, as a DOS command-line tool, distributed under shareware model with a US$25 registration fee (US$47 with manual).
Version history
PKZIP
PKZIP 0.8 (released on January 1, 1989) initial version
PKZIP 0.9 (released on February 10, 1989) supported reducing algorithm (from SCRNCH by Graeme McRae) with four compression settings and shrinking. In addition to PKZIP and PKUNZIP, it also included ZIP2EXE, which required an external self-extracting executable header created by MAKESFX from the PKZIP executable package.
PKZIP 0.92 (released on March 6, 1989): In addition to bug fixes, PKZIP included an option to automatically choose the best compression method for each file. New tools included with PKZIP include PKZipFix.
PKZIP 1.01 (released on July 21, 1989) added Implode compression; Reduced files could still be extracted from ZIP archives but were no longer created. Imploding was chosen based on the characteristics of the file being compressed. New utilities included Thomas Atkinson's REZIP conversion utility (part of ZIP-KIT). PKZIP's default compression behavior was changed from fastest (Shrink) to best (Implode). Supported platforms included OS/2 and DOS.
PKZIP 1.02 (released on October 1, 1989) includes new utility BIOSFIX.COM, which preserved the entire 80386 register set during any mode switches via INT 15H. OS/2 version added ZIP2EXE and 2 self-extracting archive headers.
PKZIP 1.10 (released on March 15, 1990): New features included authenticity verification, "mini" PKSFX self-extracting module, integrating self-extracting module into ZIP2EXE, ability to save & restore volume labels. Imploding was up to 5X faster and compression ratio was improved over 1.02. EAX register was always saved on 80386 or above CPU. Removed tools included BIOSFIX, REZIP, MAKESFX.
PKZIP 1.93a (released in October 1991): An alpha version that introduced a new compression method which Katz called "deflating". It was supposed to be quickly followed by a final PKZIP 2 release, but there were numerous delays.
PKZIP 2.04g (released in January 1993): By the time the release was ready, fake 2.x releases were circulating, some of them malware, so an untainted version number was chosen instead of 2.0. This new version dispensed with the miscellaneous compression methods of PKZIP 1.x and replaced them with the deflate algorithm (although several levels of deflation were provided by the program). The resulting file format has since become ubiquitous on Microsoft Windows and on the Internet; almost all files with the .ZIP (or .zip) extension are in PKZIP 2.x format, and utilities to read and write these files are available on all common platforms. PKZIP 2.x also supported spanning archives across multiple disks, which simply split the files into multiple pieces, using the volume label on each disk to distinguish the parts. A new Authenticity Verification (AV) signature format was used. The registered version included the PKUNZJR, PK Safe ANSI, and PKCFG utilities.
PKZIP 2.06 was released in 1994. It was a version of PKZIP 2.04g licensed to IBM.
PKZIP 2.50 (released on April 15, 1998) was the first version released for the Windows 3.1, 95, and NT platforms. The DOS version of PKZIP 2.50 was released on March 1, 1999, as PKWARE's final MS-DOS product. PKZIP 2.50 supported long file names on all builds, and Deflate64 extraction. DCL Implode extraction was supported on non-DOS ports. A new command-line product was introduced on the Windows 95, OS/2, and UNIX platforms, called "PKZIP Command Line" (later expanded to "PKZIP Server"), which featured new command-line syntax.
PKZIP 2.6 was the last version to support Windows 3.1 and Windows NT for the Alpha and PowerPC platforms.
PKZIP 2.70 added email MAPI (i.e. Send To) support. Registered version included creation of configurable self-extracted archives, added Authenticity Verification (AV) Information. Distribution Licensed versions included enhanced self-extractors. Professional distribution licensed version could create self-extracting patch files, and includes self-extractors for several new platforms.
PKZIP 4.0 was an updated version of PKZIP 2.7. Version 3 was skipped as a result of the PKZIP 3.0 Trojan. It supported Deflate64 and DCL Implode compression, the use of X.509 v3 certificate-based authentication, and the creation of spanned or split large .ZIP archives. Conversion tools for the old PKZIP command line were introduced.
On August 21, 2001, PKWARE announced the availability of PKZIP 4.5. PKZIP 4.5 included ZIP64 archive support, which allowed more than 65,535 files per ZIP archive and the storing of files larger than 4 gigabytes in a .ZIP archive. A version called PKZIP Suite 4.5 also included PKZIP Command Line 4.5, PKZIP Explorer 1.5, PKZIP Attachments 1.1, and PKZIP Plug-In 1.0.
PKZIP 5.0 was announced in 2002, which introduced the Strong Encryption Specification (SES) for the Professional version of the product; it initially included the DES, 3DES, RC2, and RC4 encryption formats, and the use of X.509 v3 certificate-based encryption.
PKZIP 6.0 (released in 2003) added support for bzip2 (based on Burrows–Wheeler transform) compression, with Professional Edition supporting 256-bit AES.
PKZIP 7.0 changed SES to use non-OAEP key wrapping for compatibility with smart cards and USB tokens. Support of creating AV authenticity verification archives was dropped. PKZIP could now create archives of the following types: ZIP, bzip2, GZIP, tar, UUEncoded, XXEncoded.
PKZIP 8.0 was released on April 27, 2004. In addition, PKWARE renamed its PKZip Professional to SecureZIP. Creation of ZIP archives with encrypted headers was available.
PKZIP 9.0 was the first version to unofficially support Windows Vista (as administrator). Creation of RC2- and DES-encrypted ZIP archives was dropped.
PKZIP 10 Enterprise Edition and SecureZIP 10 were released on i5/OS. They offered the ability to create ZIP64 archives for the target platform. The desktop PKZIP version was no longer developed beyond version 9.
.ZIP file format
To help ensure the interoperability of the ZIP format, Phil Katz published the original .ZIP File Format Specification in the APPNOTE.TXT documentation file. PKWARE continued to maintain this document and periodically published updates. Originally only bundled with registered versions of PKZIP, it was later available on the PKWARE site.
The specification has its own version number, which does not necessarily correspond to the PKZIP version numbers, especially with PKZIP 6 or later. At various times, PKWARE has added preliminary features that allow PKZIP products to extract archives using advanced features, while PKZIP products that create such archives are not made available until the next major release.
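As an informal illustration of the format in practice, the sketch below uses Python's standard-library zipfile module, which implements the PKZIP-originated .ZIP format with the Deflate method; the file name and payload here are invented for the example.

```python
import zipfile

# Create an archive using the Deflate method introduced with PKZIP 2.x.
with zipfile.ZipFile("example.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("readme.txt", "Hello from a Deflate-compressed entry.\n")

# Read it back and inspect the per-entry metadata recorded in the archive.
with zipfile.ZipFile("example.zip") as zf:
    for info in zf.infolist():
        print(info.filename, info.compress_type, info.file_size, info.compress_size)
    print(zf.read("readme.txt").decode())
```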
Compatibility
Although popular at the time, ZIP archives using PKZIP 1.0 compression methods are now rare, and many unzip tools such as 7-Zip are able to read and write several other archive formats.
Patents
Shrinking uses dynamic LZW, on which Unisys held patents. A patent for the Reduce Algorithm had also been filed on June 19, 1984, long before PKZIP was produced.
Other products
PKWARE also used its PKZIP standards on following products:
SecureZIP (including SecureZIP PartnerLink)
PKZIP Explorer
See also
Comparison of file archivers
Comparison of archive formats
List of archive formats
PKLite
References
External links
Official
, PKWARE
PKZIP from PKWARE
PKZIP 2.50 for DOS
SecureZIP from PKWARE
APPNOTE
Other
SecureZIP Homepage
Commentary from SEA owner about Phil Katz, the lawsuit, and his death
CONTROVERSY: LAWSUITS: SEA vs. PKWARE
Judgment in favor of SEA in SEA v. PKWARE and Phil Katz
How to Use PKZIP From the Command Line
Data compression software
File archivers
Windows compression software |
97302 | https://en.wikipedia.org/wiki/Intel%208085 | Intel 8085 | The Intel 8085 ("eighty-eighty-five") is an 8-bit microprocessor produced by Intel and introduced in March 1976. It is software binary-compatible with the more famous Intel 8080, with only two minor instructions added to support its added interrupt and serial input/output features. However, it requires less support circuitry, allowing simpler and less expensive microcomputer systems to be built.
The "5" in the part number highlighted the fact that the 8085 uses a single +5-volt (V) power supply by using depletion-mode transistors, rather than requiring the +5 V, −5 V and +12 V supplies needed by the 8080. This capability matched that of the competing Z80, a popular 8080-derived CPU introduced the year before. These processors could be used in computers running the CP/M operating system.
The 8085 is supplied in a 40-pin DIP package. To maximise the functions on the available pins, the 8085 uses a multiplexed address/data bus (AD0–AD7). However, an 8085 circuit requires an 8-bit address latch, so Intel manufactured several support chips with an address latch built in. These include the 8755, with an address latch, 2 KB of EPROM and 16 I/O pins, and the 8155 with 256 bytes of RAM, 22 I/O pins and a 14-bit programmable timer/counter. The multiplexed address/data bus reduced the number of PCB tracks between the 8085 and such memory and I/O chips.
Both the 8080 and the 8085 were eclipsed by the Zilog Z80 for desktop computers, which took over most of the CP/M computer market, as well as a share of the booming home-computer market in the early-to-mid-1980s.
The 8085 had a long life as a controller, no doubt thanks to its built-in serial I/O and five prioritized interrupts, arguably microcontroller-like features that the Z80 CPU did not have. Once designed into such products as the DECtape II controller and the VT102 video terminal in the late 1970s, the 8085 served for new production throughout the lifetime of those products. This was typically longer than the product life of desktop computers.
Description
The 8085 is a conventional von Neumann design based on the Intel 8080. Unlike the 8080 it does not multiplex state signals onto the data bus, but the 8-bit data bus is instead multiplexed with the lower eight bits of the 16-bit address bus to limit the number of pins to 40. State signals are provided by dedicated bus control signal pins and two dedicated bus state ID pins named S0 and S1. Pin 40 is used for the power supply (+5 V) and pin 20 for ground. Pin 39 is used as the Hold pin. The processor was designed using nMOS circuitry, and the later "H" versions were implemented in Intel's enhanced nMOS process called HMOS II ("High-performance MOS"), originally developed for fast static RAM products. Only a single 5-volt power supply is needed, like competing processors and unlike the 8080. The 8085 uses approximately 6,500 transistors.
The 8085 incorporates the functions of the 8224 (clock generator) and the 8228 (system controller) on chip, increasing the level of integration. A downside compared to similar contemporary designs (such as the Z80) is the fact that the buses require demultiplexing; however, address latches in the Intel 8155, 8355, and 8755 memory chips allow a direct interface, so an 8085 along with these chips is almost a complete system.
The 8085 has extensions to support new interrupts, with three maskable vectored interrupts (RST 7.5, RST 6.5 and RST 5.5), one non-maskable interrupt (TRAP), and one externally serviced interrupt (INTR). Each of these five interrupts has a separate pin on the processor, a feature which permits simple systems to avoid the cost of a separate interrupt controller. The RST 7.5 interrupt is edge triggered (latched), while RST 5.5 and 6.5 are level-sensitive. All interrupts are enabled by the EI instruction and disabled by the DI instruction. In addition, the SIM (Set Interrupt Mask) and RIM (Read Interrupt Mask) instructions, the only instructions of the 8085 that are not from the 8080 design, allow each of the three maskable RST interrupts to be individually masked. All three are masked after a normal CPU reset. SIM and RIM also allow the global interrupt mask state and the three independent RST interrupt mask states to be read, the pending-interrupt states of those same three interrupts to be read, the RST 7.5 trigger-latch flip-flop to be reset (cancelling the pending interrupt without servicing it), and serial data to be sent and received via the SOD and SID pins, respectively, all under program control and independently of each other.
SIM and RIM each execute in four clock cycles (T states), making it possible to sample SID and/or toggle SOD considerably faster than it is possible to toggle or sample a signal via any I/O or memory-mapped port, e.g. one of the ports of an 8155. (In this way, SID can be compared to the SO ["Set Overflow"] pin of the 6502 CPU contemporary to the 8085.)
Like the 8080, the 8085 can accommodate slower memories through externally generated wait states (pin 35, READY), and has provisions for Direct Memory Access (DMA) using HOLD and HLDA signals (pins 39 and 38). An improvement over the 8080 is that the 8085 can itself drive a piezoelectric crystal directly connected to it, and a built-in clock generator generates the internal high-amplitude two-phase clock signals at half the crystal frequency (a 6.14 MHz crystal would yield a 3.07 MHz clock, for instance). The internal clock is available on an output pin, to drive peripheral devices or other CPUs in lock-step synchrony with the CPU from which the signal is output. The 8085 can also be clocked by an external oscillator (making it feasible to use the 8085 in synchronous multi-processor systems using a system-wide common clock for all CPUs, or to synchronize the CPU to an external time reference such as that from a video source or a high-precision time reference).
The 8085 is a binary compatible follow-up on the 8080. It supports the complete instruction set of the 8080, with exactly the same instruction behavior, including all effects on the CPU flags (except for the AND/ANI operation, which sets the AC flag differently). This means that the vast majority of object code (any program image in ROM or RAM) that runs successfully on the 8080 can run directly on the 8085 without translation or modification. (Exceptions include timing-critical code and code that is sensitive to the aforementioned difference in the AC flag setting or differences in undocumented CPU behavior.) 8085 instruction timings differ slightly from the 8080: some 8-bit operations, including INR, DCR, and the heavily used MOV r,r' instruction, are one clock cycle faster, but instructions that involve 16-bit operations, including stack operations (which increment or decrement the 16-bit SP register), are generally one cycle slower.
It is of course possible that the actual 8080 and/or 8085 differs from the published specifications, especially in subtle details. (The same is not true of the Z80.) As mentioned already, only the SIM and RIM instructions were new to the 8085.
Programming model
The processor has seven 8-bit registers accessible to the programmer, named A, B, C, D, E, H, and L, where A is also known as the accumulator. The other six registers can be used as independent byte-registers or as three 16-bit register pairs, BC, DE, and HL (or B, D, H, as referred to in Intel documents), depending on the particular instruction. Some instructions use HL as a (limited) 16-bit accumulator. As in the 8080, the contents of the memory address pointed to by HL can be accessed as pseudo register M. It also has a 16-bit program counter and a 16-bit stack pointer to memory (replacing the 8008's internal stack). Instructions such as PUSH PSW, POP PSW affect the Program Status Word (accumulator and flags). The accumulator stores the results of arithmetic and logical operations, and the flags register bits (sign, zero, auxiliary carry, parity, and carry flags) are set or cleared according to the results of these operations. The sign flag is set if the result has a negative sign (i.e. it is set if bit 7 of the accumulator is set). The auxiliary or half carry flag is set if a carry-over from bit 3 to bit 4 occurred. The parity flag is set to 1 if the parity (number of 1-bits) of the accumulator is even; if odd, it is cleared. The zero flag is set if the result of the operation was 0. Lastly, the carry flag is set if a carry-over from bit 7 of the accumulator (the MSB) occurred.
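The following Python sketch (an illustration, not from the article) models how the five flags described above can be derived from an 8-bit addition into the accumulator; it is not a cycle-accurate emulator, and the helper name is invented.

```python
def add_and_flags(a, operand):
    """Add two 8-bit values and compute 8085-style flag bits."""
    total = a + operand
    acc = total & 0xFF
    flags = {
        "S": (acc >> 7) & 1,                                 # sign: bit 7 of the result
        "Z": int(acc == 0),                                  # zero: result is 0
        "AC": int((a & 0x0F) + (operand & 0x0F) > 0x0F),     # carry from bit 3 into bit 4
        "P": int(bin(acc).count("1") % 2 == 0),              # parity: even number of 1 bits
        "CY": int(total > 0xFF),                             # carry out of bit 7
    }
    return acc, flags

print(add_and_flags(0x3A, 0xC6))   # wraps to 0x00: Z, AC, P and CY are all set
```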
Commands/instructions
As in many other 8-bit processors, all instructions are encoded in a single byte (including register-numbers, but excluding immediate data), for simplicity. Some of them are followed by one or two bytes of data, which can be an immediate operand, a memory address, or a port number. A NOP "no operation" instruction exists, but does not modify any of the registers or flags. Like larger processors, it has CALL and RET instructions for multi-level procedure calls and returns (which can be conditionally executed, like jumps) and instructions to save and restore any 16-bit register-pair on the machine stack. There are also eight one-byte call instructions (RST) for subroutines located at the fixed addresses 00h, 08h, 10h,...,38h. These are intended to be supplied by external hardware in order to invoke a corresponding interrupt-service routine, but are also often employed as fast system calls. One sophisticated instruction is XTHL, which is used for exchanging the register pair HL with the value stored at the address indicated by the stack pointer.
8-bit instructions
All two-operand 8-bit arithmetic and logical (ALU) operations work on the 8-bit accumulator (the A register). For two-operand 8-bit operations, the other operand can be either an immediate value, another 8-bit register, or a memory cell addressed by the 16-bit register pair HL. The only 8-bit ALU operations that can have a destination other than the accumulator are the unary incrementation or decrementation instructions, which can operate on any 8-bit register or on memory addressed by HL, as for two-operand 8-bit operations. Direct copying is supported between any two 8-bit registers and between any 8-bit register and an HL-addressed memory cell, using the MOV instruction. An immediate value can also be moved into any of the foregoing destinations, using the MVI instruction. Due to the regular encoding of the MOV instruction (using nearly a quarter of the entire opcode space) there are redundant codes to copy a register into itself (MOV B,B, for instance), which are of little use, except for delays. However, what would have been a copy from the HL-addressed cell into itself (i.e., MOV M,M) instead encodes the HLT instruction, halting execution until an external reset or unmasked interrupt occurs.
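Because the MOV encoding is regular (01 ddd sss, using the standard 8080/8085 register codes), an opcode can be computed directly from the register numbers; the short sketch below is illustrative only and shows that the slot which would have been MOV M,M is instead the HLT opcode.

```python
# Standard 8080/8085 register field encodings; M is the memory cell addressed by HL.
REG = {"B": 0, "C": 1, "D": 2, "E": 3, "H": 4, "L": 5, "M": 6, "A": 7}

def mov_opcode(dst, src):
    """MOV dst,src is encoded as 01 ddd sss."""
    return 0x40 | (REG[dst] << 3) | REG[src]

print(hex(mov_opcode("B", "B")))  # 0x40: a redundant copy of B into itself
print(hex(mov_opcode("A", "M")))  # 0x7e: load A from the cell addressed by HL
print(hex(mov_opcode("M", "M")))  # 0x76: this slot is actually the HLT instruction
```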
16-bit operations
Although the 8085 is an 8-bit processor, it has some 16-bit operations. Any of the three 16-bit register pairs (BC, DE, HL) or SP can be loaded with an immediate 16-bit value (using LXI), incremented or decremented (using INX and DCX), or added to HL (using DAD). LHLD loads HL from directly addressed memory and SHLD stores HL likewise. The XCHG operation exchanges the values of HL and DE. Adding HL to itself performs a 16-bit arithmetical left shift with one instruction. The only 16-bit instruction that affects any flag is DAD (adding BC, DE, HL, or SP, to HL), which updates the carry flag to facilitate 24-bit or larger additions and left shifts. Adding the stack pointer to HL is useful for indexing variables in (recursive) stack frames. A stack frame can be allocated using DAD SP and SPHL, and a branch to a computed pointer can be done with PCHL. These abilities make it feasible to compile languages such as PL/M, Pascal, or C with 16-bit variables and produce 8085 machine code. Subtraction and bitwise logical operations on 16 bits are done in 8-bit steps. Operations that have to be implemented by program code (subroutine libraries) include comparisons of signed integers as well as multiplication and division.
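As a small model of the 16-bit behaviour described above (illustrative only, with an invented helper name), the Python sketch below imitates DAD, which adds a register pair to HL and affects only the carry flag; doubling HL with DAD H gives the one-instruction 16-bit left shift mentioned in the text.

```python
def dad(hl, pair):
    """Model of 8085 DAD: HL = HL + pair; only the carry flag is affected."""
    total = hl + pair
    return total & 0xFFFF, int(total > 0xFFFF)   # (new HL, carry flag)

hl, carry = dad(0x1234, 0x1234)   # DAD H doubles HL: a 16-bit arithmetic left shift
print(hex(hl), carry)             # 0x2468 0

hl, carry = dad(0xC000, 0x5000)   # the sum overflows 16 bits, so the carry is set
print(hex(hl), carry)             # 0x1000 1
```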
Undocumented instructions
A number of undocumented instructions and flags were discovered by two software engineers, Wolfgang Dehnhardt and Villy M. Sorensen in the process of developing an 8085 assembler. These instructions use 16-bit operands and include indirect loading and storing of a word, a subtraction, a shift, a rotate, and offset operations.
Input/output scheme
The 8085 supports both port-mapped and memory-mapped I/O. It supports up to 256 input/output (I/O) ports via dedicated input/output instructions, with port addresses as operands. Port-mapped I/O can be an advantage on processors with limited address space. During a port-mapped I/O bus cycle, the 8-bit I/O address is output by the CPU on both the lower and upper halves of the 16-bit address bus.
Devices designed for memory-mapped I/O can also be accessed by using the LDA (load accumulator from a 16-bit address) and STA (store accumulator at a specified 16-bit address) instructions, or any other instructions that have memory operands. A memory-mapped I/O transfer cycle appears on the bus as a normal memory access cycle.
Development system
Intel produced a series of development systems for the 8080 and 8085, known as the MDS-80 Microprocessor System. The original development system had an 8080 processor. Later 8085 and 8086 support was added including ICE (in-circuit emulators). It is a large and heavy desktop box, about a 20" cube (in the Intel corporate blue color) which includes a CPU, monitor, and a single 8-inch floppy disk drive. Later an external box was made available with two more floppy drives. It runs the ISIS operating system and can also operate an emulator pod and an external EPROM programmer. This unit uses the Multibus card cage which was intended just for the development system. A surprising number of spare card cages and processors were being sold, leading to the development of the Multibus as a separate product.
The later iPDS is a portable unit, about 8" x 16" x 20", with a handle. It has a small green screen, a keyboard built into the top, a 5¼ inch floppy disk drive, and runs the ISIS-II operating system. It can also accept a second 8085 processor, allowing a limited form of multi-processor operation where both processors run simultaneously and independently. The screen and keyboard can be switched between them, allowing programs to be assembled on one processor (large programs took a while) while files are edited on the other. It has a bubble memory option and various programming modules, including EPROM, and Intel 8048 and 8051 programming modules which are plugged into the side, replacing stand-alone device programmers. In addition to an 8080/8085 assembler, Intel produced a number of compilers including those for PL/M-80 and Pascal, and a set of tools for linking and statically locating programs to enable them to be burned into EPROMs and used in embedded systems.
A lower cost "MCS-85 System Design Kit" (SDK-85) board contains an 8085 CPU, an 8355 ROM containing a debugging monitor program, an 8155 RAM and 22 I/O ports, an 8279 hex keypad and 8-digit 7-segment LED, and a TTY (Teletype) 20 mA current loop serial interface. Pads are available for one more 2K×8 8755 EPROM, and another 256 byte RAM 8155 I/O Timer/Counter can be optionally added. All data, control, and address signals are available on dual pin headers, and a large prototyping area is provided.
List of Intel 8085
Applications
The 8085 processor was used in a few early personal computers; for example, the TRS-80 Model 100 line used an OKI-manufactured 80C85 (MSM80C85ARS). The CMOS version 80C85 of the NMOS/HMOS 8085 processor has several manufacturers. In the Soviet Union, an 80C85 clone was developed under the designation IM1821VM85A, which was still in production as of 2016. Some manufacturers provide variants with additional functions such as additional instructions. The rad-hard version of the 8085 has been used in on-board instrument data processors for several NASA and ESA space physics missions in the 1990s and early 2000s, including CRRES, Polar, FAST, Cluster, HESSI, the Sojourner Mars Rover, and THEMIS. The Swiss company SAIA used the 8085 and the 8085-2 as the CPUs of their PCA1 line of programmable logic controllers during the 1980s.
Pro-Log Corp. put the 8085 and supporting hardware on an STD Bus format card containing CPU, RAM, sockets for ROM/EPROM, I/O and external bus interfaces. The included Instruction Set Reference Card uses entirely different mnemonics for the Intel 8085 CPU. The product was a direct competitor to Intel's Multibus card offerings.
MCS-85 family
The 8085 CPU is one part of a family of chips developed by Intel for building a complete system. Many of these support chips were also used with other processors. The original IBM PC based on the Intel 8088 processor used several of these chips; the equivalent functions today are provided by VLSI chips, namely the "Southbridge" chips.
8085 – CPU
8155 – 2K-bit static MOS RAM with 3 I/O Ports and Timer. The industrial version ID8155 was available for US$37.50 in quantities of 100 and up. The military version M8155 was available for US$100.00 in quantities of 100. There is a 5 MHz version, the Intel 8155-2. The 8155H was introduced using HMOS II technology, which uses 30 percent less power than the previous generation. The plastic package versions P8155H (3 MHz) and P8155H-2 (5 MHz) were available for US$5.15 and US$6.40 respectively in quantities of 100.
8156 – 2K-bit static MOS RAM with 3 I/O Ports and Timer. The industrial version ID8156 was available for US$37.50 in quantities of 100. There is a 5 MHz version, the Intel 8156-2. The 8156H was introduced using HMOS II technology, which uses 30 percent less power than the previous generation. The plastic package versions P8156H (3 MHz) and P8156H-2 (5 MHz) were available for US$5.15 and US$6.40 respectively in quantities of 100.
8185 – 1,024 x 8-bit Static RAM. The 5 MHz version of Intel 8185-2 was available for US$48.75 in quantity of 100.
8355 – 2,048 × 8-bit ROM, two 8-bit I/O ports. The industrial version of ID8355 was available for US$22.00 in quantities of 1000. There is a 5 MHz version of Intel 8355-2.
8604 – 4096-bit (512 ×8) PROM
8755 – 2048 x 8-bit EPROM, two 8-bit I/O ports. The Intel 8755A-2 is the 5 MHz version. That version was available for US$81.00 in quantity of 100. There was an Industrial Grade version Intel I8755A-8.
8202 – Dynamic RAM Controller. This supports the Intel 2104A, 2117, or 2118 DRAM modules, up to 128 KB of DRAM modules. Price was reduced to US$36.25 for quantities of 100 for the D8202 package style around May 1979.
8203 – Dynamic RAM Controller. The Intel 82C03 CMOS version draws less than 25 mA. It supports up to 16x 64K-bit RAM for a total capacity of up to 256 KB. It refreshes every 10 to 16 microseconds. It supports multiplexing of row and column memory addresses. It generates strobes to latch the address internally. It arbitrates between simultaneous requests for memory access and refresh. It also acknowledges memory-access cycles to the system CPU. The 82C03 was available in either ceramic or plastic packages for US$32.00 in 100-piece quantities.
8205 – 1 of 8 Binary Decoder
8206 – Error Detection & Correction Unit
8207 – DRAM Controller
8210 – TTL To MOS Shifter & High Voltage Clock Driver
8212 – 8-bit I/O Port. The industrial version of ID8212 was available for US$6.75 in quantities of 100.
8216 – 4-bit Parallel Bidirectional Bus Driver. The industrial version of ID8216 was available for US$6.40 in quantities of 100.
8218/8219 – Bus Controller
8226 – 4-bit Parallel Bidirectional Bus Driver. The industrial version of ID8226 was available for US$6.40 in quantities of 100.
8231 – Arithmetic Processing Unit
8232 – Floating-Point Processor
8237 – DMA Controller
8251 – Communication Controller
8253 – Programmable Interval Timer
8254 – Programmable Interval Timer. The 82C54 CMOS version was outsourced to Oki Electronic Industry Co., Ltd.
8255 – Programmable Peripheral Interface
8256 – Multifunction Peripheral. This chip combines the Intel 8251A Programmable Communications Interface, Intel 8253 Programmable Interval Timer, Intel 8255A Programmable Peripheral Interface, and Intel 8259A Programmable Interrupt Controller. This multifunction chip provides serial communications, parallel I/O, counter/timers and interrupts. The Intel 8256AH was available for US$21.40 in quantities of 100.
8257 – DMA Controller
8259 – Programmable Interrupt Controller
8271 – Programmable Floppy Disk Controller
8272 – Single/Double Density Floppy Disk Controller. It is compatible with IBM 3740 and System 34 formats and provides both Frequency Modulation (FM) or Modified Frequency Modulation (MFM). This version was available for US$38.10 in quantities of 100.
8273 – Programmable HDLC/SDLC Protocol Controller. This device supports ISO/CCITT's HDLC and IBM's SDLC communication protocols. These were available for US$33.75 (4 MHz) and US$30.00 (8 MHz) in quantities of 100.
8274 – Multi-Protocol Serial Controller. This supports three different protocols: asynchronous operation, byte synchronous operation and bit synchronous operation. The byte synchronous mode is compatible with IBM's Bisync signal protocol. The bit synchronous mode is compatible with IBM's SDLC and the International Standards Organization's HDLC protocol, and is compatible with the CCITT X.25 international standard as well. It was packaged as a 40-pin product using Intel's HMOS technology. The available version is rated up to 880 kilobaud and cost US$30.30 in quantities of 100. The NEC µPD7201 was also compatible.
8275 – Programmable CRT Controller. It refreshes the raster scan display by buffering from main memory and keeping track of the display portion. This version was available for US$32.00 in quantities of 100.
8276 – Small System CRT Controller
8278 – Programmable Key Board Interface
8279 – Key Board/Display Controller
8282 – 8-bit Non-Inverting Latch with Output Buffer
8283 – 8-bit Inverting Latch with Output Buffer
8291 – GPIB Talker/Listener. This controller can operate from 1 to 8 MHz. It was available for US$23.75 in quantities of 100.
8292 – GPIB Controller. Designed around the Intel 8041A, which has been programmed as a controller interface element. It also controls the bus, using three lock-up timers to detect issues on the GPIB bus interface. It was available for US$21.25 in quantities of 100.
8293 – GPIB Transceiver. This chipset supports up to 4 different modes: Mode 0 Talker/Listener Control Lines, Mode 1 Talker/Listener/Controller Control Lines, Mode 2 Talker/Listener/Controller Data Lines, and Mode 3 Talker/Listener Data Lines. It was available for US$11.50 in quantities of 100. At the time of release, it was available in samples then full production in the first quarter of 1980.
8294 – Data Encryption/Decryption Unit + 1 O/P Port. It encrypts and decrypts 64-bit blocks of data using the Federal Information Processing Data Encryption Standard algorithm. This also uses the National Bureau of Standards encryption algorithm. This DEU operates using a 56-bit user-specified key to generate 64-bit cipher words. It was available for US$22.50 in quantities of 100.
8295 – Dot Matrix Printer Controller. This interfaces with LRC 7040 Series dot matrix printers and other small printers as well. It was available for US$20.65 in quantities of 100.
Educational use
In many engineering schools the 8085 processor is used in introductory microprocessor courses. Trainer kits composed of a printed circuit board, 8085, and supporting hardware are offered by various companies. These kits usually include complete documentation allowing a student to go from soldering to assembly language programming in a single course. Also, the architecture and instruction set of the 8085 are easy for a student to understand. Shared Project versions of educational and hobby 8085-based single-board computers are noted below in the External Links section of this article.
Simulators
Software simulators are available for the 8085 microprocessor, which allow simulated execution of opcodes in a graphical environment.
See also
IBM System/23 Datamaster gave IBM designers familiarity with the 8085 support chips used in the IBM PC.
Notes
References
Further reading
Books
Bill Detwiler Tandy TRS-80 Model 100 Teardown Tech Republic, 2011 Web
; 495 pages
; 466 pages
; 303 pages
Reference Cards
Intel 8085 Reference Card; Saundby; 2 pages. (archive)
External links
Simulators:
GNUSim8085 - simulator, assembler, debugger
Boards:
MCS-85 System Design Kit (SDK-85) - Intel
Altaids
SBC-85
Minimax
Glitchworks
OMEN Alpha
8-bit microprocessors |
99431 | https://en.wikipedia.org/wiki/Advanced%20Encryption%20Standard%20process | Advanced Encryption Standard process | The Advanced Encryption Standard (AES), the symmetric block cipher ratified as a standard by National Institute of Standards and Technology of the United States (NIST), was chosen using a process lasting from 1997 to 2000 that was markedly more open and transparent than its predecessor, the Data Encryption Standard (DES). This process won praise from the open cryptographic community, and helped to increase confidence in the security of the winning algorithm from those who were suspicious of backdoors in the predecessor, DES.
A new standard was needed primarily because DES has a relatively small 56-bit key which was becoming vulnerable to brute-force attacks. In addition, the DES was designed primarily for hardware and is relatively slow when implemented in software. While Triple-DES avoids the problem of a small key size, it is very slow even in hardware, it is unsuitable for limited-resource platforms, and it may be affected by potential security issues connected with the (today comparatively small) block size of 64 bits.
Start of the process
On January 2, 1997, NIST announced that they wished to choose a successor to DES to be known as AES. Like DES, this was to be "an unclassified, publicly disclosed encryption algorithm capable of protecting sensitive government information well into the next century." However, rather than simply publishing a successor, NIST asked for input from interested parties on how the successor should be chosen. Interest from the open cryptographic community was immediately intense, and NIST received a great many submissions during the three-month comment period.
The result of this feedback was a call for new algorithms on September 12, 1997. The algorithms were all to be block ciphers, supporting a block size of 128 bits and key sizes of 128, 192, and 256 bits. Such ciphers were rare at the time of the announcement; the best known was probably Square.
Rounds one and two
In the nine months that followed, fifteen designs were created and submitted from several countries. They were, in alphabetical order: CAST-256, CRYPTON, DEAL, DFC, E2, FROG, HPC, LOKI97, MAGENTA, MARS, RC6, Rijndael, SAFER+, Serpent, and Twofish.
In the ensuing debate, many advantages and disadvantages of the candidates were investigated by cryptographers; they were assessed not only on security, but also on performance in a variety of settings (PCs of various architectures, smart cards, hardware implementations) and on their feasibility in limited environments (smart cards with very limited memory, low gate count implementations, FPGAs).
Some designs fell due to cryptanalysis that ranged from minor flaws to significant attacks, while others lost favour due to poor performance in various environments or through having little to offer over other candidates. NIST held two conferences to discuss the submissions (AES1, August 1998 and AES2, March 1999), and in August 1999 they announced that they were narrowing the field from fifteen to five: MARS, RC6, Rijndael, Serpent, and Twofish. All five algorithms, commonly referred to as "AES finalists", were designed by cryptographers considered well-known and respected in the community.
The AES2 conference votes were as follows:
Rijndael: 86 positive, 10 negative
Serpent: 59 positive, 7 negative
Twofish: 31 positive, 21 negative
RC6: 23 positive, 37 negative
MARS: 13 positive, 84 negative
A further round of intense analysis and cryptanalysis followed, culminating in the AES3 conference in April 2000, at which a representative of each of the final five teams made a presentation arguing why their design should be chosen as the AES.
Selection of the winner
On October 2, 2000, NIST announced that Rijndael had been selected as the proposed AES and started the process of making it the official standard by publishing an announcement in the Federal Register on February 28, 2001 for the draft FIPS to solicit comments. On November 26, 2001, NIST announced that AES was approved as FIPS PUB 197.
NIST won praise from the cryptographic community for the openness and care with which they ran the standards process. Bruce Schneier, one of the authors of the losing Twofish algorithm, wrote after the competition was over that "I have nothing but good things to say about NIST and the AES process."
See also
CAESAR Competition – Competition to design authenticated encryption schemes
NIST hash function competition
Post-Quantum Cryptography Standardization
References
External links
A historical overview of the process can be found on NIST's website.
On the sci.crypt newsgroup, there are extensive discussions about the AES process.
Cryptography contests
History of cryptography
Advanced Encryption Standard
National Institute of Standards and Technology |
99438 | https://en.wikipedia.org/wiki/Extended%20Euclidean%20algorithm | Extended Euclidean algorithm | In arithmetic and computer programming, the extended Euclidean algorithm is an extension to the Euclidean algorithm, and computes, in addition to the greatest common divisor (gcd) of integers a and b, also the coefficients of Bézout's identity, which are integers x and y such that ax + by = gcd(a, b).
This is a certifying algorithm, because the gcd is the only number that can simultaneously satisfy this equation and divide the inputs.
It allows one to compute also, with almost no extra cost, the quotients of a and b by their greatest common divisor.
The term extended Euclidean algorithm also refers to a very similar algorithm for computing the polynomial greatest common divisor and the coefficients of Bézout's identity of two univariate polynomials.
The extended Euclidean algorithm is particularly useful when a and b are coprime. With that provision, x is the modular multiplicative inverse of a modulo b, and y is the modular multiplicative inverse of b modulo a. Similarly, the polynomial extended Euclidean algorithm allows one to compute the multiplicative inverse in algebraic field extensions and, in particular in finite fields of non prime order. It follows that both extended Euclidean algorithms are widely used in cryptography. In particular, the computation of the modular multiplicative inverse is an essential step in the derivation of key-pairs in the RSA public-key encryption method.
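As a rough illustration of this use, the following Python sketch (not part of the original article) computes the greatest common divisor together with Bézout coefficients and, when the inputs are coprime, a modular multiplicative inverse; the function names are invented for the example.

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

def modular_inverse(a, m):
    """Inverse of a modulo m, defined only when gcd(a, m) == 1."""
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("a is not invertible modulo m")
    return x % m

print(extended_gcd(240, 46))      # (2, -9, 47): 240*(-9) + 46*47 == 2
print(modular_inverse(17, 3120))  # 2753, since 17 * 2753 == 15 * 3120 + 1
```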
Description
The standard Euclidean algorithm proceeds by a succession of Euclidean divisions whose quotients are not used. Only the remainders are kept. For the extended algorithm, the successive quotients are used. More precisely, the standard Euclidean algorithm with a and b as input consists of computing a sequence q_1, ..., q_k of quotients and a sequence r_0, ..., r_{k+1} of remainders such that r_0 = a, r_1 = b, and r_{i+1} = r_{i−1} − q_i r_i with 0 ≤ r_{i+1} < |r_i| for each i.
It is the main property of Euclidean division that the inequalities on the right define q_i and r_{i+1} uniquely from r_{i−1} and r_i.
The computation stops when one reaches a remainder which is zero; the greatest common divisor is then the last non zero remainder
The extended Euclidean algorithm proceeds similarly, but adds two other sequences, as follows
s_0 = 1, s_1 = 0, s_{i+1} = s_{i-1} − q_i s_i and t_0 = 0, t_1 = 1, t_{i+1} = t_{i-1} − q_i t_i.
The computation also stops when r_{k+1} = 0 and gives
r_k is the greatest common divisor of the input a = r_0 and b = r_1.
The Bézout coefficients are s_k and t_k, that is gcd(a, b) = a s_k + b t_k.
The quotients of a and b by their greatest common divisor are given, up to sign, by t_{k+1} and s_{k+1} respectively.
Moreover, if a and b are both positive and gcd(a, b) ≠ min(a, b), then
|s_i| ≤ ⌊b / (2 gcd(a, b))⌋ and |t_i| ≤ ⌊a / (2 gcd(a, b))⌋
for 0 ≤ i ≤ k, where ⌊x⌋ denotes the integral part of x, that is the greatest integer not greater than x.
This implies that the pair of Bézout's coefficients provided by the extended Euclidean algorithm is the minimal pair of Bézout coefficients, as being the unique pair satisfying both above inequalities.
It also means that the algorithm can be done without integer overflow by a computer program using integers of a fixed size that is larger than that of a and b.
Example
The following table shows how the extended Euclidean algorithm proceeds with input and . The greatest common divisor is the last non zero entry, in the column "remainder". The computation stops at row 6, because the remainder in it is . Bézout coefficients appear in the last two entries of the second-to-last row. In fact, it is easy to verify that . Finally the last two entries and of the last row are, up to the sign, the quotients of the input and by the greatest common divisor .
Proof
As 0 ≤ r_{i+1} < |r_i|, the sequence of the r_i is a decreasing sequence of nonnegative integers (from i = 2 on). Thus it must stop with some r_{k+1} = 0. This proves that the algorithm stops eventually.
As r_{i+1} = r_{i-1} − q_i r_i, the greatest common divisor is the same for (r_{i-1}, r_i) and (r_i, r_{i+1}). This shows that the greatest common divisor of the input (r_0, r_1) = (a, b) is the same as that of (r_k, r_{k+1}) = (r_k, 0). This proves that r_k is the greatest common divisor of a and b. (Until this point, the proof is the same as that of the classical Euclidean algorithm.)
As a = r_0 and b = r_1, we have a s_i + b t_i = r_i for i = 0 and 1. The relation follows by induction for all i > 1:
r_{i+1} = r_{i-1} − q_i r_i = (a s_{i-1} + b t_{i-1}) − q_i (a s_i + b t_i) = a s_{i+1} + b t_{i+1}.
Thus s_k and t_k are Bézout coefficients.
Consider the matrix
A_i = [ s_{i−1}, s_i ; t_{i−1}, t_i ].
The recurrence relation may be rewritten in matrix form
A_{i+1} = A_i · [ 0, 1 ; 1, −q_i ].
The matrix A_1 is the identity matrix and its determinant is one. The determinant of the rightmost matrix in the preceding formula is −1. It follows that the determinant of A_i is (−1)^{i−1}. In particular, for i = k + 1 we have s_k t_{k+1} − t_k s_{k+1} = (−1)^k. Viewing this as a Bézout's identity, this shows that s_{k+1} and t_{k+1} are coprime. The relation a s_{k+1} + b t_{k+1} = r_{k+1} = 0 that has been proved above and Euclid's lemma show that t_{k+1} divides a, that is that a = d t_{k+1} for some integer d. Dividing by t_{k+1} the relation a s_{k+1} + b t_{k+1} = 0 gives b = −d s_{k+1}. So, s_{k+1} and t_{k+1} are coprime integers that are the quotients of a and b by a common factor, which is thus their greatest common divisor or its opposite.
To prove the last assertion, assume that a and b are both positive and gcd(a, b) ≠ min(a, b). Then a ≠ b, and if a < b, it can be seen that the s and t sequences for (a, b) under the extended Euclidean algorithm are, up to initial 0s and 1s, the t and s sequences for (b, a). The definitions then show that the (a, b) case reduces to the (b, a) case. So assume that a > b without loss of generality.
It can be seen that s_2 is 1 and s_3 (which exists by gcd(a, b) ≠ min(a, b)) is a negative integer. Thereafter, the s_i alternate in sign and strictly increase in magnitude, which follows inductively from the definitions and the fact that q_i ≥ 1 for 1 ≤ i ≤ k; the case i = 1 holds because a > b. The same is true for the t_i after the first few terms, for the same reason. Furthermore, it is easy to see that q_k ≥ 2 (when a and b are both positive and gcd(a, b) ≠ min(a, b)). Thus,
|s_{k+1}| = |s_{k−1}| + q_k |s_k| ≥ 2 |s_k|, and similarly |t_{k+1}| ≥ 2 |t_k|.
This, accompanied by the fact that s_{k+1} and t_{k+1} are larger than or equal in absolute value to any previous s_i or t_i respectively, completes the proof.
Polynomial extended Euclidean algorithm
For univariate polynomials with coefficients in a field, everything works similarly: Euclidean division, Bézout's identity and the extended Euclidean algorithm. The first difference is that, in the Euclidean division and the algorithm, the inequality 0 ≤ r_{i+1} < |r_i| has to be replaced by an inequality on the degrees, deg r_{i+1} < deg r_i. Otherwise, everything which precedes in this article remains the same, simply by replacing integers by polynomials.
A second difference lies in the bound on the size of the Bézout coefficients provided by the extended Euclidean algorithm, which is more accurate in the polynomial case, leading to the following theorem.
If a and b are two nonzero polynomials, then the extended Euclidean algorithm produces the unique pair of polynomials (s, t) such that
a s + b t = gcd(a, b)
and
deg s < deg b − deg gcd(a, b), deg t < deg a − deg gcd(a, b).
A third difference is that, in the polynomial case, the greatest common divisor is defined only up to the multiplication by a non zero constant. There are several ways to define unambiguously a greatest common divisor.
In mathematics, it is common to require that the greatest common divisor be a monic polynomial. To get this, it suffices to divide every element of the output by the leading coefficient of r_k. This allows that, if a and b are coprime, one gets 1 in the right-hand side of Bézout's identity. Otherwise, one may get any non-zero constant. In computer algebra, the polynomials commonly have integer coefficients, and this way of normalizing the greatest common divisor introduces too many fractions to be convenient.
The second way to normalize the greatest common divisor in the case of polynomials with integer coefficients is to divide every output by the content of r_k, to get a primitive greatest common divisor. If the input polynomials are coprime, this normalisation also provides a greatest common divisor equal to 1. The drawback of this approach is that a lot of fractions should be computed and simplified during the computation.
A third approach consists in extending the algorithm of subresultant pseudo-remainder sequences in a way that is similar to the extension of the Euclidean algorithm to the extended Euclidean algorithm. This allows that, when starting with polynomials with integer coefficients, all polynomials that are computed have integer coefficients. Moreover, every computed remainder is a subresultant polynomial. In particular, if the input polynomials are coprime, then the Bézout's identity becomes
a s + b t = Res(a, b),
where Res(a, b) denotes the resultant of a and b. In this form of Bézout's identity, there is no denominator in the formula. If one divides everything by the resultant one gets the classical Bézout's identity, with an explicit common denominator for the rational numbers that appear in it.
Pseudocode
To implement the algorithm that is described above, one should first remark that only the last two values of the indexed variables are needed at each step. Thus, for saving memory, each indexed variable must be replaced by just two variables.
For simplicity, the following algorithm (and the other algorithms in this article) uses parallel assignments. In a programming language which does not have this feature, the parallel assignments need to be simulated with an auxiliary variable. For example, the first one,
(old_r, r) := (r, old_r - quotient * r)
is equivalent to
prov := r;
r := old_r - quotient × prov;
old_r := prov;
and similarly for the other parallel assignments.
This leads to the following code:
function extended_gcd(a, b)
(old_r, r) := (a, b)
(old_s, s) := (1, 0)
(old_t, t) := (0, 1)
while r ≠ 0 do
quotient := old_r div r
(old_r, r) := (r, old_r − quotient × r)
(old_s, s) := (s, old_s − quotient × s)
(old_t, t) := (t, old_t − quotient × t)
output "Bézout coefficients:", (old_s, old_t)
output "greatest common divisor:", old_r
output "quotients by the gcd:", (t, s)
The quotients of a and b by their greatest common divisor, which is output, may have an incorrect sign. This is easy to correct at the end of the computation but has not been done here for simplifying the code. Similarly, if either a or b is zero and the other is negative, the greatest common divisor that is output is negative, and all the signs of the output must be changed.
Finally, notice that in Bézout's identity, a x + b y = gcd(a, b), one can solve for y given a, b, x and gcd(a, b). Thus, an optimization to the above algorithm is to compute only the s_i sequence (which yields the Bézout coefficient x), and then compute y at the end:
function extended_gcd(a, b)
s := 0; old_s := 1
r := b; old_r := a
while r ≠ 0 do
quotient := old_r div r
(old_r, r) := (r, old_r − quotient × r)
(old_s, s) := (s, old_s − quotient × s)
if b ≠ 0 then
bezout_t := (old_r − old_s × a) div b
else
bezout_t := 0
output "Bézout coefficients:", (old_s, bezout_t)
output "greatest common divisor:", old_r
However, in many cases this is not really an optimization: whereas the former algorithm is not susceptible to overflow when used with machine integers (that is, integers with a fixed upper bound of digits), the multiplication of old_s * a in computation of bezout_t can overflow, limiting this optimization to inputs which can be represented in less than half the maximal size. When using integers of unbounded size, the time needed for multiplication and division grows quadratically with the size of the integers. This implies that the "optimisation" replaces a sequence of multiplications/divisions of small integers by a single multiplication/division, which requires more computing time than the operations that it replaces, taken together.
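As an illustration, the first pseudocode above translates directly into Python; the sketch below is an addition to this article (the return layout and the example values in the final comment are choices made here), and it assumes nonnegative inputs, for which Python's floor division agrees with the pseudocode's div.
def extended_gcd(a, b):
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r
        old_s, s = s, old_s - quotient * s
        old_t, t = t, old_t - quotient * t
    # old_r is the gcd, (old_s, old_t) are Bézout coefficients,
    # and (t, s) are, up to sign, the quotients of a and b by the gcd
    return old_r, (old_s, old_t), (t, s)
# for example, extended_gcd(240, 46) == (2, (-9, 47), (-120, 23)),
# and indeed 240 * (-9) + 46 * 47 == 2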
Simplification of fractions
A fraction a/b is in canonical simplified form if a and b are coprime and b is positive. This canonical simplified form can be obtained by replacing the three output lines of the preceding pseudo code by
if s = 0 then output "Division by zero"
if s < 0 then s := −s; t := −t (for avoiding negative denominators)
if s = 1 then output −t (for avoiding denominators equal to 1)
output −t/s
The proof of this algorithm relies on the fact that s and t are two coprime integers such that a s + b t = 0, and thus a/b = −t/s. To get the canonical simplified form, it suffices to move the minus sign for having a positive denominator.
If b divides a evenly, the algorithm executes only one iteration, and we have s = 1 at the end of the algorithm. It is the only case where the output is an integer.
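A possible rendering of this variant in Python is sketched below (the function name simplify is chosen here for illustration; positive inputs are assumed).
def simplify(a, b):
    # returns the canonical simplified form of a/b as a (numerator, denominator) pair
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r
        old_s, s = s, old_s - quotient * s
        old_t, t = t, old_t - quotient * t
    if s == 0:
        raise ZeroDivisionError("division by zero")
    if s < 0:
        s, t = -s, -t        # avoid a negative denominator
    return -t, s             # a/b = -t/s in lowest terms
# for example, simplify(6, 4) == (3, 2) and simplify(8, 4) == (2, 1)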
Computing multiplicative inverses in modular structures
The extended Euclidean algorithm is the essential tool for computing multiplicative inverses in modular structures, typically the modular integers and the algebraic field extensions. Notable instances of the latter case are the finite fields of non-prime order.
Modular integers
If n is a positive integer, the ring Z/nZ may be identified with the set {0, 1, ..., n − 1} of the remainders of Euclidean division by n, the addition and the multiplication consisting in taking the remainder by n of the result of the addition and the multiplication of integers. An element a of Z/nZ has a multiplicative inverse (that is, it is a unit) if it is coprime to n. In particular, if n is prime, a has a multiplicative inverse if it is not zero (modulo n). Thus Z/nZ is a field if and only if n is prime.
Bézout's identity asserts that a and n are coprime if and only if there exist integers s and t such that
n s + a t = 1.
Reducing this identity modulo n gives a t ≡ 1 (mod n).
Thus t, or, more exactly, the remainder of the division of t by n, is the multiplicative inverse of a modulo n.
To adapt the extended Euclidean algorithm to this problem, one should remark that the Bézout coefficient of n is not needed, and thus does not need to be computed. Also, for getting a result which is positive and lower than n, one may use the fact that the integer t provided by the algorithm satisfies |t| < n. That is, if t < 0, one must add n to it at the end. This results in the pseudocode, in which the input n is an integer larger than 1.
function inverse(a, n)
t := 0; newt := 1
r := n; newr := a
while newr ≠ 0 do
quotient := r div newr
(t, newt) := (newt, t − quotient × newt)
(r, newr) := (newr, r − quotient × newr)
if r > 1 then
return "a is not invertible"
if t < 0 then
t := t + n
return t
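A direct Python transcription of this pseudocode is sketched below (illustrative only; the worked example in the final comment is an addition, not part of the article).
def inverse(a, n):
    t, newt = 0, 1
    r, newr = n, a
    while newr != 0:
        quotient = r // newr
        t, newt = newt, t - quotient * newt
        r, newr = newr, r - quotient * newr
    if r > 1:
        raise ValueError("a is not invertible modulo n")
    if t < 0:
        t = t + n
    return t
# for example, inverse(3, 7) == 5, since 3 * 5 = 15 ≡ 1 (mod 7)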
Simple algebraic field extensions
The extended Euclidean algorithm is also the main tool for computing multiplicative inverses in simple algebraic field extensions. An important case, widely used in cryptography and coding theory, is that of finite fields of non-prime order. In fact, if p is a prime number and q = p^d, the field of order q is a simple algebraic extension of the prime field of p elements, generated by a root of an irreducible polynomial of degree d.
A simple algebraic extension L of a field K, generated by the root of an irreducible polynomial p of degree d, may be identified to the quotient ring K[X]/(p), and its elements are in bijective correspondence with the polynomials of degree less than d. The addition in L is the addition of polynomials. The multiplication in L is the remainder of the Euclidean division by p of the product of polynomials. Thus, to complete the arithmetic in L, it remains only to define how to compute multiplicative inverses. This is done by the extended Euclidean algorithm.
The algorithm is very similar to that provided above for computing the modular multiplicative inverse. There are two main differences: firstly the last but one line is not needed, because the Bézout coefficient that is provided always has a degree less than d. Secondly, the greatest common divisor which is provided, when the input polynomials are coprime, may be any non-zero element of K; this Bézout coefficient (a polynomial generally of positive degree) has thus to be multiplied by the inverse of this element of K. In the pseudocode which follows, p is a polynomial of degree greater than one, and a is a polynomial.
function inverse(a, p)
t := 0; newt := 1
r := p; newr := a
while newr ≠ 0 do
quotient := r div newr
(r, newr) := (newr, r − quotient × newr)
(t, newt) := (newt, t − quotient × newt)
if degree(r) > 0 then
return "Either p is not irreducible or a is a multiple of p"
return (1/r) × t
Example
For example, if the polynomial used to define the finite field GF(2^8) is p = x^8 + x^4 + x^3 + x + 1, and a = x^6 + x^4 + x + 1 is the element whose inverse is desired, then performing the algorithm results in the computation described in the following table. (Let us recall that in fields of order 2^n, one has −z = z and z + z = 0 for every element z in the field.) Since 1 is the only nonzero element of GF(2), the adjustment in the last line of the pseudocode is not needed.
Thus, the inverse is x^7 + x^6 + x^3 + x, as can be confirmed by multiplying the two elements together, and taking the remainder by p of the result.
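This computation can be checked with a short Python sketch (an addition to this article, not the algorithm as stated above): polynomials over GF(2) are encoded as integers whose bit i is the coefficient of x^i, so p is 0x11B and a is 0x53; the helper names poly_mul, poly_divmod and gf_inverse are chosen here for illustration.
def poly_mul(x, y):
    # carry-less multiplication of GF(2) polynomials encoded as integers
    result = 0
    while y:
        if y & 1:
            result ^= x
        y >>= 1
        x <<= 1
    return result
def poly_divmod(a, b):
    # Euclidean division of GF(2) polynomials: returns (quotient, remainder)
    q = 0
    while a and a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a
def gf_inverse(a, p=0x11B):
    # extended Euclidean algorithm over GF(2)[x]; subtraction is XOR
    t, newt = 0, 1
    r, newr = p, a
    while newr:
        quotient, remainder = poly_divmod(r, newr)
        r, newr = newr, remainder
        t, newt = newt, t ^ poly_mul(quotient, newt)
    return t   # r ends at the constant 1 when p is irreducible and a is nonzero
# gf_inverse(0x53) == 0xCA, i.e. the inverse of x^6 + x^4 + x + 1 is x^7 + x^6 + x^3 + x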
The case of more than two numbers
One can handle the case of more than two numbers iteratively. First we show that gcd(a, b, c) = gcd(gcd(a, b), c). To prove this let d = gcd(a, b, c). By definition of gcd, d is a divisor of a and b. Thus gcd(a, b) = k d for some k. Similarly d is a divisor of c, so c = j d for some j. Let u = gcd(k, j). By our construction of u, u d divides a, b and c, but since d is the greatest such divisor, u is a unit. And since u d = gcd(gcd(a, b), c), the result is proven.
So if d = gcd(a, b, c), then there are integers x and y such that d = x gcd(a, b) + y c, so the final equation will be
d = x (p a + q b) + y c = (x p) a + (x q) b + y c,
where p and q are Bézout coefficients computed for the pair (a, b).
So then to apply to n numbers we use induction,
gcd(a_1, a_2, ..., a_n) = gcd(a_1, gcd(a_2, gcd(a_3, ..., gcd(a_{n−1}, a_n)...))),
with the equations following directly.
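A possible iterative implementation in Python is sketched below; the two-argument helper egcd, the list-based interface and the example values in the final comment are choices made here for illustration, not prescribed by the article.
def egcd(a, b):
    # two-argument extended gcd: returns (g, x, y) with a*x + b*y == g
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y
def extended_gcd_list(nums):
    # iteratively extends Bézout's identity to a list of integers
    g, coeffs = nums[0], [1]
    for b in nums[1:]:
        g, s, t = egcd(g, b)
        coeffs = [s * c for c in coeffs] + [t]
    return g, coeffs
# for example, extended_gcd_list([30, 42, 70]) returns g == 2 together with
# coefficients c such that 30*c[0] + 42*c[1] + 70*c[2] == 2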
See also
Euclidean domain
Linear congruence theorem
Kuṭṭaka
References
Volume 2, Chapter 4.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. . Pages 859–861 of section 31.2: Greatest common divisor.
External links
Source for the form of the algorithm used to determine the multiplicative inverse in GF(2^8)
Number theoretic algorithms
Articles with example pseudocode
Euclid |
100098 | https://en.wikipedia.org/wiki/Signalling%20System%20No.%207 | Signalling System No. 7 | Signalling System No. 7 (SS7) is a set of telephony signaling protocols developed in 1975, which is used to set up and tear down telephone calls in most parts of the world-wide public switched telephone network (PSTN). The protocol also performs number translation, local number portability, prepaid billing, Short Message Service (SMS), and other services.
The protocol was introduced in the Bell System in the United States by the name Common Channel Interoffice Signaling in the 1970s for signaling between No. 4ESS and No. 4A crossbar toll offices. In North America SS7 is also often referred to as Common Channel Signaling System 7 (CCSS7). In the United Kingdom, it is called C7 (CCITT number 7), number 7 and Common Channel Interoffice Signaling 7 (CCIS7). In Germany, it is often called Zentraler Zeichengabekanal Nummer 7 (ZZK-7).
The SS7 protocol is defined for international use by the Q.700-series recommendations of 1988 by the ITU-T. Of the many national variants of the SS7 protocols, most are based on variants standardized by the American National Standards Institute (ANSI) and the European Telecommunications Standards Institute (ETSI). National variants with striking characteristics are the Chinese and Japanese Telecommunication Technology Committee (TTC) national variants.
The Internet Engineering Task Force (IETF) has defined the SIGTRAN protocol suite that implements levels 2, 3, and 4 protocols compatible with SS7. Sometimes also called Pseudo SS7, it is layered on the Stream Control Transmission Protocol (SCTP) transport mechanism for use on Internet Protocol networks, such as the Internet.
History
Signaling System No. 5 and earlier systems use in-band signaling, in which the call-setup information is sent by generating special multi-frequency tones transmitted on the telephone line audio channels, also known as bearer channels. As the bearer channels are directly accessible by users, they can be exploited with devices such as the blue box, which plays the tones required for call control and routing. As a remedy, SS6 and SS7 implement out-of-band signaling, carried in a separate signaling channel, thus keeping the speech path separate. SS6 and SS7 are referred to as common-channel signaling (CCS) protocols, or Common Channel Interoffice Signaling (CCIS) systems.
Another shortcoming of in-band signaling addressed by SS7 is network efficiency. With in-band signaling, the voice channel is used during call setup, which makes it unavailable for actual traffic. For long-distance calls, the talk path may traverse several nodes, which reduces usable node capacity. With SS7, the connection is not established between the end points until all nodes on the path confirm availability. If the far end is busy, the caller gets a busy signal without consuming a voice channel.
Since 1975, CCS protocols have been developed by major telephone companies and the International Telecommunication Union Telecommunication Standardization Sector (ITU-T); in 1977 the ITU-T defined the first international CCS protocol as Signaling System No. 6 (SS6). In its 1980 Yellow Book Q.7XX-series recommendations ITU-T defined the Signalling System No. 7 as an international standard. SS7 replaced SS6, whose restricted 28-bit signal unit was both limited in function and not amenable to digital systems. SS7 also replaced Signaling System No. 5 (SS5), while R1 and R2 variants are still used in numerous countries.
The Internet Engineering Task Force (IETF) defined SIGTRAN protocols which carry the common channel signaling paradigm over IP: Message Transfer Part (MTP) level 2 (M2UA and M2PA), Message Transfer Part (MTP) level 3 (M3UA) and Signaling Connection Control Part (SCCP) (SUA). While running on a transport based upon IP, the SIGTRAN protocols are not an SS7 variant, but simply transport existing national and international variants of SS7.
Functionality
Signaling in telephony is the exchange of control information associated with the setup and release of a telephone call on a telecommunications circuit. Examples of control information are the digits dialed by the caller and the caller's billing number.
When signaling is performed on the same circuit as the conversation of the call, it is termed channel-associated signaling (CAS). This is the case for analogue trunks, multi-frequency (MF) and R2 digital trunks, and DSS1/DASS PBX trunks.
In contrast, SS7 uses common channel signaling, in which the path and facility used by the signaling is separate and distinct from the channel that ultimately carries the telephone conversation; signaling can therefore be exchanged without first seizing a voice channel, leading to significant savings and performance increases in both signaling and channel usage.
Because of the mechanisms in use by signaling methods prior to SS7 (battery reversal, multi-frequency digit outpulsing, A- and B-bit signaling), these earlier methods cannot communicate much signaling information. Usually only the dialed digits are signaled during call setup. For charged calls, dialed digits and charge number digits are outpulsed. SS7, being a high-speed and high-performance packet-based communications protocol, can communicate significant amounts of information when setting up a call, during the call, and at the end of the call. This permits rich call-related services to be developed. Some of the first such services were call management related, call forwarding (busy and no answer), voice mail, call waiting, conference calling, calling name and number display, call screening, malicious caller identification, busy callback.
The earliest deployed upper-layer protocols in the SS7 suite were dedicated to the setup, maintenance, and release of telephone calls. The Telephone User Part (TUP) was adopted in Europe and the Integrated Services Digital Network (ISDN) User Part (ISUP) adapted for public switched telephone network (PSTN) calls was adopted in North America. ISUP was later used in Europe when the European networks upgraded to the ISDN. North America has not accomplished full upgrade to the ISDN, and the predominant telephone service is still Plain Old Telephone Service. Due to its richness and the need for an out-of-band channel for its operation, SS7 is mostly used for signaling between telephone switches and not for signaling between local exchanges and customer-premises equipment.
Because SS7 signaling does not require seizure of a channel for a conversation prior to the exchange of control information, non-facility associated signaling (NFAS) became possible. NFAS is signaling that is not directly associated with the path that a conversation will traverse and may concern other information located at a centralized database such as service subscription, feature activation, and service logic. This makes possible a set of network-based services that do not rely upon the call being routed to a particular subscription switch at which service logic would be executed, but permits service logic to be distributed throughout the telephone network and executed more expediently at originating switches far in advance of call routing. It also permits the subscriber increased mobility due to the decoupling of service logic from the subscription switch. Another ISUP characteristic SS7 with NFAS enables is the exchange of signaling information during the middle of a call.
SS7 also enables Non-Call-Associated Signaling, which is signaling not directly related to establishing a telephone call. This includes the exchange of registration information used between a mobile telephone and a home location register database, which tracks the location of the mobile. Other examples include Intelligent Network and local number portability databases.
Signaling modes
Apart from signaling with these various degrees of association with call set-up and the facilities used to carry calls, SS7 is designed to operate in two modes: associated mode and quasi-associated mode.
When operating in the associated mode, SS7 signaling progresses from switch to switch through the Public Switched Telephone Network following the same path as the associated facilities that carry the telephone call. This mode is more economical for small networks. The associated mode of signaling is not the predominant choice of modes in North America.
When operating in the quasi-associated mode, SS7 signaling progresses from the originating switch to the terminating switch, following a path through a separate SS7 signaling network composed of signal transfer points. This mode is more economical for large networks with lightly loaded signaling links. The quasi-associated mode of signaling is the predominant choice of modes in North America.
Physical network
SS7 separates signaling from the voice circuits. An SS7 network must be made up of SS7-capable equipment from end to end in order to provide its full functionality. The network can be made up of several link types (A, B, C, D, E, and F) and three signaling nodes – Service Switching Points (SSPs), Signal Transfer Points (STPs), and Service Control Points (SCPs). Each node is identified on the network by a number, a signaling point code. Extended services are provided by a database interface at the SCP level using the SS7 network.
The links between nodes are full-duplex 56, 64, 1,536, or 1,984 kbit/s graded communications channels. In Europe they are usually one (64 kbit/s) or all (1,984 kbit/s) timeslots (DS0s) within an E1 facility; in North America one (56 or 64 kbit/s) or all (1,536 kbit/s) timeslots (DS0As or DS0s) within a T1 facility. One or more signaling links can be connected to the same two endpoints that together form a signaling link set. Signaling links are added to link sets to increase the signaling capacity of the link set.
In Europe, SS7 links normally are directly connected between switching exchanges using F-links. This direct connection is called associated signaling. In North America, SS7 links are normally indirectly connected between switching exchanges using an intervening network of STPs. This indirect connection is called quasi-associated signaling, which reduces the number of SS7 links necessary to interconnect all switching exchanges and SCPs in an SS7 signaling network.
SS7 links at higher signaling capacity (1.536 and 1.984 Mbit/s, simply referred to as the 1.5 Mbit/s and 2.0 Mbit/s rates) are called high-speed links (HSL) in contrast to the low speed (56 and 64 kbit/s) links. High-speed links are specified in ITU-T Recommendation Q.703 for the 1.5 Mbit/s and 2.0 Mbit/s rates, and ANSI Standard T1.111.3 for the 1.536 Mbit/s rate. There are differences between the specifications for the 1.5 Mbit/s rate. High-speed links utilize the entire bandwidth of a T1 (1.536 Mbit/s) or E1 (1.984 Mbit/s) transmission facility for the transport of SS7 signaling messages.
SIGTRAN provides signaling using SCTP associations over the Internet Protocol. The protocols for SIGTRAN are M2PA, M2UA, M3UA and SUA.
SS7 protocol suite
The SS7 protocol stack may be partially mapped to the OSI Model of a packetized digital protocol stack. OSI layers 1 to 3 are provided by the Message Transfer Part (MTP) and the Signalling Connection Control Part (SCCP) of the SS7 protocol (together referred to as the Network Service Part (NSP)); for circuit related signaling, such as the BT IUP, Telephone User Part (TUP), or the ISDN User Part (ISUP), the User Part provides layer 7. Currently there are no protocol components that provide OSI layers 4 through 6. The Transaction Capabilities Application Part (TCAP) is the primary SCCP User in the Core Network, using SCCP in connectionless mode. SCCP in connection oriented mode provides transport layer for air interface protocols such as BSSAP and RANAP. TCAP provides transaction capabilities to its Users (TC-Users), such as the Mobile Application Part, the Intelligent Network Application Part and the CAMEL Application Part.
The Message Transfer Part (MTP) covers a portion of the functions of the OSI network layer including: network interface, information transfer, message handling and routing to the higher levels. Signaling Connection Control Part (SCCP) is at functional Level 4. Together with MTP Level 3 it is called the Network Service Part (NSP). SCCP completes the functions of the OSI network layer: end-to-end addressing and routing, connectionless messages (UDTs), and management services for users of the Network Service Part (NSP). Telephone User Part (TUP) is a link-by-link signaling system used to connect calls. ISUP is the key user part, providing a circuit-based protocol to establish, maintain, and end the connections for calls. Transaction Capabilities Application Part (TCAP) is used to create database queries and invoke advanced network functionality, or links to Intelligent Network Application Part (INAP) for intelligent networks, or Mobile Application Part (MAP) for mobile services.
BSSAP
BSS Application Part (BSSAP) is a protocol in SS7 used by the Mobile Switching Center (MSC) and the Base station subsystem (BSS) to communicate with each other using signaling messages supported by the MTP and connection-oriented services of the SCCP. For each active mobile equipment, one signaling connection is used by BSSAP, with at least one active transaction for the transfer of messages.
BSSAP provides two kinds of functions:
The BSS Mobile Application Part (BSSMAP) supports procedures to facilitate communication between the MSC and the BSS pertaining to resource management and handover control.
The Direct Transfer Application Part (DTAP) is used for transfer of those messages which need to travel directly to mobile equipment from MSC bypassing any interpretation by BSS. These messages are generally pertaining to mobility management (MM) or call management (CM).
Protocol security vulnerabilities
In 2008, several SS7 vulnerabilities were published that permitted the tracking of cell phone users.
In 2014, the media reported a protocol vulnerability of SS7 by which anyone can track the movements of cell phone users from virtually anywhere in the world with a success rate of approximately 70%. In addition, eavesdropping is possible by using the protocol to forward calls and also facilitate decryption by requesting that each caller's carrier release a temporary encryption key to unlock the communication after it has been recorded. The software tool SnoopSnitch can warn when certain SS7 attacks occur against a phone, and detect IMSI-catchers that allow call interception and other activities.
In February 2016, 30% of the network of the largest mobile operator in Norway, Telenor, became unstable due to "unusual SS7 signaling from another European operator".
The security vulnerabilities of SS7 have been highlighted in U.S. governmental bodies, for example when in April 2016 Congressman Ted Lieu called for an oversight committee investigation.
In May 2017, O2 Telefónica, a German mobile service provider, confirmed that the SS7 vulnerabilities had been exploited to bypass two-factor authentication to achieve unauthorized withdrawals from bank accounts. The perpetrators installed malware on compromised computers, allowing them to collect online banking account credentials and telephone numbers. They set up redirects for the victims' telephone numbers to telephone lines controlled by them. Confirmation calls and SMS text messages of two-factor authentication procedures were routed to telephone numbers controlled by the attackers. This enabled them to log into victims' online bank accounts and effect money transfers.
In March 2018, a method was published for the detection of the vulnerabilities, through the use of open-source monitoring software such as Wireshark and Snort. The nature of SS7 normally being used between consenting network operators on dedicated links means that any bad actor's traffic can be traced to its source.
An investigation by The Guardian and the Bureau of Investigative Journalism revealed that the SS7 protocol was exploited in an attempt to locate Sheikha Latifa bint Mohammed Al Maktoum (II) on 3 March 2018, a day before her abduction.
See also
SS7 probe
Out-of-band data
Signaling System No. 5
Signaling System No. 6
References
Further reading
ITU-T recommendations
Signaling System 7
Telephony
Network protocols
Telephony signals |
102567 | https://en.wikipedia.org/wiki/John%20Forbes%20Nash%20Jr. | John Forbes Nash Jr. | John Forbes Nash Jr. (June 13, 1928 – May 23, 2015) was an American mathematician who made fundamental contributions to game theory, differential geometry, and the study of partial differential equations. Nash's work has provided insight into the factors that govern chance and decision-making inside complex systems found in everyday life.
His theories are widely used in economics. Serving as a senior research mathematician at Princeton University during the later part of his life, he shared the 1994 Nobel Memorial Prize in Economic Sciences with game theorists Reinhard Selten and John Harsanyi. In 2015, he also shared the Abel Prize with Louis Nirenberg for his work on nonlinear partial differential equations. John Nash is the only person to be awarded both the Nobel Memorial Prize in Economic Sciences and the Abel Prize.
In 1959, Nash began showing clear signs of mental illness, and spent several years at psychiatric hospitals being treated for schizophrenia. After 1970, his condition slowly improved, allowing him to return to academic work by the mid-1980s. His struggles with his illness and his recovery became the basis for Sylvia Nasar's biographical book A Beautiful Mind in 1998, as well as a film of the same name directed by Ron Howard, in which Nash was portrayed by actor Russell Crowe.
Early life and education
John Forbes Nash Jr. was born on June 13, 1928, in Bluefield, West Virginia. His father and namesake, John Forbes Nash, was an electrical engineer for the Appalachian Electric Power Company. His mother, Margaret Virginia (née Martin) Nash, had been a schoolteacher before she was married. He was baptized in the Episcopal Church. He had a younger sister, Martha (born November 16, 1930).
Nash attended kindergarten and public school, and he learned from books provided by his parents and grandparents. Nash's parents pursued opportunities to supplement their son's education, and arranged for him to take advanced mathematics courses at a local community college during his final year of high school. He attended Carnegie Institute of Technology (which later became Carnegie Mellon University) through a full benefit of the George Westinghouse Scholarship, initially majoring in chemical engineering. He switched to a chemistry major and eventually, on the advice of his teacher John Lighton Synge, to mathematics. After graduating in 1948, with both a B.S. and M.S. in mathematics, Nash accepted a fellowship to Princeton University, where he pursued further graduate studies in mathematics and sciences.
Nash's adviser and former Carnegie professor Richard Duffin wrote a letter of recommendation for Nash's entrance to Princeton stating, "He is a mathematical genius". Nash was also accepted at Harvard University. However, the chairman of the mathematics department at Princeton, Solomon Lefschetz, offered him the John S. Kennedy fellowship, convincing Nash that Princeton valued him more. Further, he considered Princeton more favorably because of its proximity to his family in Bluefield. At Princeton, he began work on his equilibrium theory, later known as the Nash equilibrium.
Major contributions
Game theory
Nash earned a PhD in 1950 with a 28-page dissertation on non-cooperative games.
The thesis, written under the supervision of doctoral advisor Albert W. Tucker, contained the definition and properties of the Nash equilibrium, a crucial concept in non-cooperative games. It won Nash the Nobel Memorial Prize in Economic Sciences in 1994.
Publications authored by Nash relating to the concept are in the following papers:
Other mathematics
Nash did groundbreaking work in the area of real algebraic geometry.
His work in mathematics includes the Nash embedding theorem, which shows that every abstract Riemannian manifold can be isometrically realized as a submanifold of Euclidean space. Nash also made significant contributions to the theory of nonlinear parabolic partial differential equations, and to singularity theory.
Mikhail Leonidovich Gromov writes about Nash's work:
John Milnor gives a list of 21 publications.
In the Nash biography A Beautiful Mind, author Sylvia Nasar explains that Nash was working on proving Hilbert's nineteenth problem, a theorem involving elliptic partial differential equations when, in 1956, he suffered a severe disappointment. He learned that an Italian mathematician, Ennio de Giorgi, had published a proof just months before Nash achieved his. Each took different routes to get to their solutions. The two mathematicians met each other at the Courant Institute of Mathematical Sciences of New York University during the summer of 1956. It has been speculated that if only one had solved the problem, he would have been given the Fields Medal for the proof.
In 2011, the National Security Agency declassified letters written by Nash in the 1950s, in which he had proposed a new encryption–decryption machine. The letters show that Nash had anticipated many concepts of modern cryptography, which are based on computational hardness.
Mental illness
Although Nash's mental illness first began to manifest in the form of paranoia, his wife later described his behavior as erratic. Nash thought that all men who wore red ties were part of a communist conspiracy against him. He mailed letters to embassies in Washington, D.C., declaring that they were establishing a government. Nash's psychological issues crossed into his professional life when he gave an American Mathematical Society lecture at Columbia University in early 1959. Originally intended to present proof of the Riemann hypothesis, the lecture was incomprehensible. Colleagues in the audience immediately realized that something was wrong.
In April 1959, Nash was admitted to McLean Hospital for one month. Based on his paranoid, persecutory delusions, hallucinations, and increasing asociality, he was diagnosed with schizophrenia. In 1961, Nash was admitted to the New Jersey State Hospital at Trenton. Over the next nine years, he spent intervals of time in psychiatric hospitals, where he received both antipsychotic medications and insulin shock therapy.
Although he sometimes took prescribed medication, Nash later wrote that he did so only under pressure. According to Nash, the film A Beautiful Mind inaccurately implied he was taking atypical antipsychotics. He attributed the depiction to the screenwriter who was worried about the film encouraging people with mental illness to stop taking their medication.
Nash did not take any medication after 1970, nor was he committed to a hospital ever again. Nash recovered gradually. Encouraged by his then former wife, de Lardé, Nash lived at home and spent his time in the Princeton mathematics department where his eccentricities were accepted even when his mental condition was poor. De Lardé credits his recovery to maintaining "a quiet life" with social support.
Nash dated the start of what he termed "mental disturbances" to the early months of 1959, when his wife was pregnant. He described a process of change "from scientific rationality of thinking into the delusional thinking characteristic of persons who are psychiatrically diagnosed as 'schizophrenic' or 'paranoid schizophrenic'." For Nash, this included seeing himself as a messenger or having a special function of some kind, of having supporters and opponents and hidden schemers, along with a feeling of being persecuted and searching for signs representing divine revelation. Nash suggested his delusional thinking was related to his unhappiness, his desire to be recognized, and his characteristic way of thinking, saying, "I wouldn't have had good scientific ideas if I had thought more normally." He also said, "If I felt completely pressureless I don't think I would have gone in this pattern".
Nash reported that he started hearing voices in 1964, then later engaged in a process of consciously rejecting them. He only renounced his "dream-like delusional hypotheses" after a prolonged period of involuntary commitment in mental hospitals—"enforced rationality". Upon doing so, he was temporarily able to return to productive work as a mathematician. By the late 1960s, he relapsed. Eventually, he "intellectually rejected" his "delusionally influenced" and "politically oriented" thinking as a waste of effort. In 1995, he said that he didn't realize his full potential due to nearly 30 years of mental illness.
Nash wrote in 1994:
Recognition and later career
In 1978, Nash was awarded the John von Neumann Theory Prize for his discovery of non-cooperative equilibria, now called Nash Equilibria. He won the Leroy P. Steele Prize in 1999.
In 1994, he received the Nobel Memorial Prize in Economic Sciences (along with John Harsanyi and Reinhard Selten) for his game theory work as a Princeton graduate student. In the late 1980s, Nash had begun to use email to gradually link with working mathematicians who realized that he was John Nash and that his new work had value. They formed part of the nucleus of a group that contacted the Bank of Sweden's Nobel award committee and were able to vouch for Nash's mental health and ability to receive the award.
Nash's later work involved ventures in advanced game theory, including partial agency, which show that, as in his early career, he preferred to select his own path and problems. Between 1945 and 1996, he published 23 scientific studies.
Nash has suggested hypotheses on mental illness. He has compared not thinking in an acceptable manner, or being "insane" and not fitting into a usual social function, to being "on strike" from an economic point of view. He advanced views in evolutionary psychology about the potential benefits of apparently nonstandard behaviors or roles.
Nash developed work on the role of money in society. He criticized interest groups that promote quasi-doctrines based on Keynesian economics that permit manipulative short-term inflation and debt tactics that ultimately undermine currencies. He suggested a global "industrial consumption price index" system that would support the development of more "ideal money" that people could trust rather than more unstable "bad money." He noted that some of his thinking parallels that of economist and political philosopher Friedrich Hayek, regarding money and an atypical viewpoint of the function of authority.
Nash received an honorary degree, Doctor of Science and Technology, from Carnegie Mellon University in 1999, an honorary degree in economics from the University of Naples Federico II in 2003, an honorary doctorate in economics from the University of Antwerp in 2007, an honorary doctorate of science from the City University of Hong Kong in 2011, and was keynote speaker at a conference on game theory. Nash also received honorary doctorates from two West Virginia colleges: the University of Charleston in 2003 and West Virginia University Tech in 2006. He was a prolific guest speaker at a number of events, such as the Warwick Economics Summit in 2005, at the University of Warwick.
Nash was elected to the American Philosophical Society in 2006 and became a fellow of the American Mathematical Society in 2012.
On May 19, 2015, a few days before his death, Nash, along with Louis Nirenberg, was awarded the 2015 Abel Prize by King Harald V of Norway at a ceremony in Oslo.
Personal life
In 1951, the Massachusetts Institute of Technology (MIT) hired Nash as a C. L. E. Moore instructor in the mathematics faculty. About a year later, Nash began a relationship with Eleanor Stier, a nurse he met while admitted as a patient. They had a son, John David Stier, but Nash left Stier when she told him of her pregnancy. The film based on Nash's life, A Beautiful Mind, was criticized during the run-up to the 2002 Oscars for omitting this aspect of his life. He was said to have abandoned her based on her social status, which he thought to have been beneath his.
In Santa Monica, California, in 1954, while in his twenties, Nash was arrested for indecent exposure in a sting operation targeting gay men. Although the charges were dropped, he was stripped of his top-secret security clearance and fired from RAND Corporation, where he had worked as a consultant.
Not long after breaking up with Stier, Nash met Alicia Lardé Lopez-Harrison, a naturalized U.S. citizen from El Salvador. Lardé graduated from MIT, having majored in physics. They married in February 1957. Although Nash was an atheist, the ceremony was performed in an Episcopal church. In 1958, Nash was appointed to a tenured position at MIT, and his first signs of mental illness soon became evident. He resigned his position at MIT in the spring of 1959. His son, John Charles Martin Nash, was born a few months later. The child was not named for a year because Alicia felt that Nash should have a say in choosing the name. Due to the stress of dealing with his illness, Nash and Lardé divorced in 1963. After his final hospital discharge in 1970, Nash lived in Lardé's house as a boarder. This stability seemed to help him, and he learned how to consciously discard his paranoid delusions. He was allowed by Princeton to audit classes. He continued to work on mathematics and was eventually allowed to teach again. In the 1990s, Lardé and Nash resumed their relationship, remarrying in 2001. John Charles Martin Nash earned a Ph.D. in mathematics from Rutgers University and was diagnosed with schizophrenia as an adult.
Death
On May 23, 2015, Nash and his wife died in a car accident on the New Jersey Turnpike near Monroe Township, NJ. They were on their way home from Newark Airport after a visit to Norway, where Nash had received the Abel Prize, when their taxicab driver, Tarek Girgis, lost control of the vehicle and struck a guardrail. Both passengers were ejected from the car upon impact. State police revealed that it appeared neither passenger was wearing a seatbelt at the time of the crash. At the time of his death, the 86-year-old Nash was a longtime resident of New Jersey. He was survived by two sons, John Charles Martin Nash, who lived with his parents at the time of their death, and elder child John Stier.
Following his death, obituaries appeared in scientific and popular media throughout the world. In addition to their obituary for Nash, The New York Times published an article containing quotes from Nash that had been assembled from media and other published sources. The quotes consisted of Nash's reflections on his life and achievements.
Legacy
At Princeton in the 1970s, Nash became known as "The Phantom of Fine Hall" (Princeton's mathematics center), a shadowy figure who would scribble arcane equations on blackboards in the middle of the night.
He is referred to in a novel set at Princeton, The Mind-Body Problem, 1983, by Rebecca Goldstein.
Sylvia Nasar's biography of Nash, A Beautiful Mind, was published in 1998. A film by the same name was released in 2001, directed by Ron Howard with Russell Crowe playing Nash; it won four Academy Awards, including Best Picture.
Awards
1978 – INFORMS John von Neumann Theory Prize
1994 – Nobel Memorial Prize in Economic Sciences
1999 – Leroy P Steele Prize
2002 class of Fellows of the Institute for Operations Research and the Management Sciences
2010 – Double Helix Medal
2015 – Abel Prize
References
Bibliography
Documentaries and video interviews
"A Brilliant Madness" – a PBS American Experience documentary
One on One – Professor John Nash with Riz Khan. Al Jazeera English, 2009-12-05
External links
Home Page of John F. Nash Jr. at Princeton
IDEAS/RePEc
"Nash Equilibrium" 2002 Slate article by Robert Wright, about Nash's work and world government
NSA releases Nash Encryption Machine plans to National Cryptologic Museum for public viewing, 2012
Nash, John (1928–2015) | Rare Books and Special Collections from Princeton's Mudd Library, including a copy of his dissertation (PDF)
Biography of John Forbes Nash Jr. from the Institute for Operations Research and the Management Sciences
1928 births
2015 deaths
20th-century American mathematicians
Abel Prize laureates
American atheists
American Nobel laureates
Board game designers
Carnegie Mellon University alumni
Institute for Advanced Study visiting scholars
Differential geometers
Fellows of the American Mathematical Society
Fellows of the Econometric Society
Fellows of the Institute for Operations Research and the Management Sciences
Game theorists
John von Neumann Theory Prize winners
Massachusetts Institute of Technology School of Science faculty
Members of the United States National Academy of Sciences
Nobel laureates in Economics
PDE theorists
People from Bluefield, West Virginia
People from West Windsor Township, New Jersey
People with schizophrenia
Road incident deaths in New Jersey
Princeton University alumni
Princeton University faculty
Mathematicians from West Virginia
Mathematicians from New Jersey
McLean Hospital patients
Members of the American Philosophical Society |
102600 | https://en.wikipedia.org/wiki/RC5 | RC5 | In cryptography, RC5 is a symmetric-key block cipher notable for its simplicity. Designed by Ronald Rivest in 1994, RC stands for "Rivest Cipher", or alternatively, "Ron's Code" (compare RC2 and RC4). The Advanced Encryption Standard (AES) candidate RC6 was based on RC5.
Description
Unlike many schemes, RC5 has a variable block size (32, 64 or 128 bits), key size (0 to 2040 bits) and number of rounds (0 to 255). The original suggested choice of parameters was a block size of 64 bits, a 128-bit key, and 12 rounds.
A key feature of RC5 is the use of data-dependent rotations; one of the goals of RC5 was to prompt the study and evaluation of such operations as a cryptographic primitive . RC5 also consists of a number of modular additions and eXclusive OR (XOR)s. The general structure of the algorithm is a Feistel-like network. The encryption and decryption routines can be specified in a few lines of code. The key schedule, however, is more complex, expanding the key using an essentially one-way function with the binary expansions of both e and the golden ratio as sources of "nothing up my sleeve numbers". The tantalising simplicity of the algorithm together with the novelty of the data-dependent rotations has made RC5 an attractive object of study for cryptanalysts.
A particular RC5 parameterization is denoted RC5-w/r/b, where w = word size in bits, r = number of rounds, and b = number of 8-bit bytes in the key.
Algorithm
RC5 encryption and decryption both expand the random key into 2(r+1) words that will be used sequentially (and only once each) during the encryption and decryption processes. All of the below comes from Rivest's revised paper on RC5.
Key expansion
The key expansion algorithm is illustrated below, first in pseudocode, then example C code copied directly from the reference paper's appendix.
Following the naming scheme of the paper, the following variable names are used:
w - The length of a word in bits, typically 16, 32 or 64. Encryption is done in 2-word blocks.
u = w/8 - The length of a word in bytes.
b - The length of the key in bytes.
K[] - The key, considered as an array of bytes (using 0-based indexing).
c - The length of the key in words (or 1, if b = 0).
L[] - A temporary working array used during key scheduling, initialized to the key in words.
r - The number of rounds to use when encrypting data.
t = 2(r+1) - the number of round subkeys required.
S[] - The round subkey words.
Pw - The first magic constant, defined as Odd((e − 2) × 2^w), where Odd is the nearest odd integer to the given input, e is the base of the natural logarithm, and w is defined above. For common values of w, the associated values of Pw are given here in hexadecimal:
For w = 16: 0xB7E1
For w = 32: 0xB7E15163
For w = 64: 0xB7E151628AED2A6B
Qw - The second magic constant, defined as Odd((φ − 1) × 2^w), where Odd is the nearest odd integer to the given input, φ is the golden ratio, and w is defined above. For common values of w, the associated values of Qw are given here in hexadecimal:
For w = 16: 0x9E37
For w = 32: 0x9E3779B9
For w = 64: 0x9E3779B97F4A7C15
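These values can be reproduced with a short Python sketch (an illustration, not part of the RC5 specification; the function names are chosen here). Double-precision floating point suffices for w = 16 and w = 32, while w = 64 would require higher-precision arithmetic.
from math import e, sqrt
def nearest_odd(x):
    # nearest odd integer, for the positive, non-integral inputs used here
    n = int(x)
    return n if n % 2 else n + 1
def rc5_magic(w):
    phi = (1 + sqrt(5)) / 2        # golden ratio
    return nearest_odd((e - 2) * 2**w), nearest_odd((phi - 1) * 2**w)
# rc5_magic(32) == (0xB7E15163, 0x9E3779B9)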
# Break K into words
# u = w / 8
c = ceiling(max(b, 1) / u)
# L is initially a c-length list of 0-valued w-length words
for i = b-1 down to 0 do:
L[i / u] = (L[i / u] <<< 8) + K[i]
# Initialize key-independent pseudorandom S array
# S is initially a t=2(r+1) length list of undefined w-length words
S[0] = P_w
for i = 1 to t-1 do:
S[i] = S[i - 1] + Q_w
# The main key scheduling loop
i = j = 0
A = B = 0
do 3 * max(t, c) times:
A = S[i] = (S[i] + A + B) <<< 3
B = L[j] = (L[j] + A + B) <<< (A + B)
i = (i + 1) % t
j = (j + 1) % c
# return S
The example source code is provided from the appendix of Rivest's paper on RC5. The implementation is designed to work with w = 32, r = 12, and b = 16.
void RC5_SETUP(unsigned char *K)
{
// w = 32, r = 12, b = 16
// c = max(1, ceil(8 * b/w))
// t = 2 * (r+1)
WORD i, j, k, u = w/8, A, B, L[c];
for (i = b-1, L[c-1] = 0; i != -1; i--)
L[i/u] = (L[i/u] << 8) + K[i];
for (S[0] = P, i = 1; i < t; i++)
S[i] = S[i-1] + Q;
for (A = B = i = j = k = 0; k < 3 * t; k++, i = (i+1) % t, j = (j+1) % c)
{
A = S[i] = ROTL(S[i] + (A + B), 3);
B = L[j] = ROTL(L[j] + (A + B), (A + B));
}
}
Encryption
Encryption involves several rounds of a simple function. 12 or 20 rounds seem to be recommended, depending on security needs and time considerations. Beyond the variables used above, the following variables are used in this algorithm:
A, B - The two words composing the block of plaintext to be encrypted.
A = A + S[0]
B = B + S[1]
for i = 1 to r do:
A = ((A ^ B) <<< B) + S[2 * i]
B = ((B ^ A) <<< A) + S[2 * i + 1]
# The ciphertext block consists of the two-word wide block composed of A and B, in that order.
return A, B
The example C code given by Rivest is this.
void RC5_ENCRYPT(WORD *pt, WORD *ct)
{
WORD i, A = pt[0] + S[0], B = pt[1] + S[1];
for (i = 1; i <= r; i++)
{
A = ROTL(A ^ B, B) + S[2*i];
B = ROTL(B ^ A, A) + S[2*i + 1];
}
ct[0] = A; ct[1] = B;
}
Decryption
Decryption is a fairly straightforward reversal of the encryption process. The below pseudocode shows the process.
for i = r down to 1 do:
B = ((B - S[2 * i + 1]) >>> A) ^ A
A = ((A - S[2 * i]) >>> B) ^ B
B = B - S[1]
A = A - S[0]
return A, B
The example C code given by Rivest is this.
void RC5_DECRYPT(WORD *ct, WORD *pt)
{
WORD i, B=ct[1], A=ct[0];
for (i = r; i > 0; i--)
{
B = ROTR(B - S[2*i + 1], A) ^ A;
A = ROTR(A - S[2*i], B) ^ B;
}
pt[1] = B - S[1]; pt[0] = A - S[0];
}
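The pieces above can be exercised end to end with a self-contained Python sketch, given below as an illustration for the parameters w = 32, r = 12, b = 16 (the function names and the all-zero test key are choices made here, not Rivest's reference code); all arithmetic is reduced modulo 2^32.
MASK = 0xFFFFFFFF                      # word arithmetic modulo 2**32 for w = 32
P32, Q32 = 0xB7E15163, 0x9E3779B9      # magic constants for w = 32
R = 12                                 # number of rounds
def rotl(x, s):
    s %= 32
    return ((x << s) | (x >> (32 - s))) & MASK
def rotr(x, s):
    s %= 32
    return ((x >> s) | (x << (32 - s))) & MASK
def rc5_key_schedule(key):             # key is a bytes object, here 16 bytes long
    u, t = 4, 2 * (R + 1)
    c = max(1, (len(key) + u - 1) // u)
    L = [0] * c
    for i in range(len(key) - 1, -1, -1):
        L[i // u] = ((L[i // u] << 8) + key[i]) & MASK
    S = [(P32 + i * Q32) & MASK for i in range(t)]
    A = B = i = j = 0
    for _ in range(3 * max(t, c)):
        A = S[i] = rotl((S[i] + A + B) & MASK, 3)
        B = L[j] = rotl((L[j] + A + B) & MASK, A + B)
        i, j = (i + 1) % t, (j + 1) % c
    return S
def rc5_encrypt(A, B, S):
    A, B = (A + S[0]) & MASK, (B + S[1]) & MASK
    for i in range(1, R + 1):
        A = (rotl(A ^ B, B) + S[2 * i]) & MASK
        B = (rotl(B ^ A, A) + S[2 * i + 1]) & MASK
    return A, B
def rc5_decrypt(A, B, S):
    for i in range(R, 0, -1):
        B = rotr((B - S[2 * i + 1]) & MASK, A) ^ A
        A = rotr((A - S[2 * i]) & MASK, B) ^ B
    return (A - S[0]) & MASK, (B - S[1]) & MASK
S = rc5_key_schedule(bytes(16))        # all-zero 128-bit key, for illustration only
ct = rc5_encrypt(0x00000000, 0x00000000, S)
assert rc5_decrypt(*ct, S) == (0, 0)   # round-trip check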
Cryptanalysis
12-round RC5 (with 64-bit blocks) is susceptible to a differential attack using 2^44 chosen plaintexts. 18–20 rounds are suggested as sufficient protection.
A number of these challenge problems have been tackled using distributed computing, organised by Distributed.net. Distributed.net has brute-forced RC5 messages encrypted with 56-bit and 64-bit keys and has been working on cracking a 72-bit key since November 3, 2002. As of August 6, 2021, 7.900% of the keyspace has been searched and based on the rate recorded that day, it would take 127 years to complete 100% of the keyspace. The task has inspired many new and novel developments in the field of cluster computing.
RSA Security, which had a patent on the algorithm, offered a series of US$10,000 prizes for breaking ciphertexts encrypted with RC5, but these contests have been discontinued as of May 2007. As a result, distributed.net decided to fund the monetary prize. The individual who discovers the winning key will receive US$1,000, their team (if applicable) will receive US$1,000 and the Free Software Foundation will receive US$2,000.
See also
Madryga
Red Pike
References
External links
Rivest's revised paper describing the cipher
Rivest's original paper
SCAN's entry for the cipher
RSA Laboratories FAQ — What are RC5 and RC6?
Helger Lipmaa's links on RC5
Broken block ciphers |
102775 | https://en.wikipedia.org/wiki/Camden%2C%20New%20Jersey | Camden, New Jersey | Camden is a city in and the county seat of Camden County, New Jersey. Camden is located directly across the Delaware River from Philadelphia, Pennsylvania. At the 2020 U.S. Census, the city had a population of 71,791. Camden is the 14th most populous municipality in New Jersey. The city was incorporated on February 13, 1828. Camden has been the county seat of Camden County since the county was formed on March 13, 1844. The city derives its name from Charles Pratt, 1st Earl Camden. Camden is made up of over 20 different neighborhoods.
Beginning in the early 1900s, Camden was a prosperous industrial city, and remained so throughout the Great Depression and World War II. During the 1950s, Camden manufacturers began gradually closing their factories and moving out of the city. With the loss of manufacturing jobs came a sharp population decline. The growth of the interstate highway system also played a large role in suburbanization, which resulted in white flight. Civil unrest and crime became common in Camden. In 1971, civil unrest reached its peak, with riots breaking out in response to the death of Horacio Jimenez, a Puerto Rican motorist who was killed by two police officers.
The Camden waterfront holds three tourist attractions, the USS New Jersey; the Waterfront Music Pavilion; and the Adventure Aquarium. The city is the home of Rutgers University–Camden, which was founded as the South Jersey Law School in 1926, and Cooper Medical School of Rowan University, which opened in 2012. Camden also houses both Cooper University Hospital and Virtua Our Lady of Lourdes Medical Center. Camden County College and Rowan University also have campuses in downtown Camden. The "eds and meds" institutions account for roughly 45% of Camden's total employment.
Camden has been known for its high crime rate, though there has been a substantial decrease in crime in recent decades, especially since 2012, when the city disbanded its municipal police department and replaced it with a county-level police department. There were 23 homicides in Camden in 2017, the fewest in the city in three decades. The city saw 24 and 23 homicides in 2019 and 2020 respectively, the fourth-highest toll among New Jersey cities, behind Paterson, Trenton, and Newark. As of January 2021, violent crime was down 46% from its high in the 1990s and at the lowest level since the 1960s. Overall crime reports in 2020 were down 74% compared to 1974, the first year of uniform crime-reporting in the city; however, the population is also considerably lower today compared to that decade.
History
Early history
In 1626, Fort Nassau was established by the Dutch West India Company at the confluence of Big Timber Creek and the Delaware River. Throughout the 17th century, Europeans settled along the Delaware, competing to control the local fur trade. After the Restoration in 1660, the land around Camden was controlled by nobles serving under King Charles II, until it was sold off to a group of New Jersey Quakers in 1673. The area developed further when a ferry system was established along the east side of the Delaware River to facilitate trade between Fort Nassau and Philadelphia, the growing capital of the Quaker colony of Pennsylvania directly across the river. By the 1700s, Quakers and the Lenni Lenape, the indigenous inhabitants, were coexisting. The Quakers' expansion and use of natural resources, in addition to the introduction of alcohol and infectious disease, diminished the Lenape's population in the area.
The 1688 order of the County Court of Gloucester that sanctioned ferries between New Jersey and Philadelphia was: "Therefore we permit and appoint that a common passage or ferry for man or beast be provided, fixed and settled in some convenient and proper place between ye mouths or entrance of Cooper's Creek and Newton Creek, and that the government, managing and keeping of ye same be committed to ye said William Roydon and his assigns, who are hereby empowered and appointed to establish, fix and settle ye same within ye limits aforesaid, wherein all other persons are desired and requested to keep no other common or public passage or ferry." The ferry system was located along Cooper Street and was turned over to Daniel Cooper in 1695. Its creation resulted in a series of small settlements along the river, largely established by three families: the Coopers, the Kaighns, and the Mickels, and these lands would eventually be combined to create the future city. Of these, the Cooper family had the greatest impact on the formation of Camden. In 1773, Jacob Cooper developed some of the land he had inherited through his family into a "townsite," naming it Camden after Charles Pratt, the Earl of Camden.
19th century
For over 150 years, Camden served as a secondary economic and transportation hub for the Philadelphia area. However, that status began to change in the early 19th century. Camden was incorporated as a city on February 13, 1828, from portions of Newton Township, while the area was still part of Gloucester County. One of the U.S.'s first railroads, the Camden and Amboy Railroad, was chartered in Camden in 1830. The railroad allowed passengers to travel between New York City and Philadelphia via ferry terminals in South Amboy, New Jersey, and Camden. The railroad terminated on the Camden Waterfront, and passengers were ferried across the Delaware River to their final Philadelphia destination. The Camden and Amboy Railroad opened in 1834 and helped to spur an increase in population and commerce in Camden.
Horse ferries, or team boats, served Camden in the early 1800s. The ferries connected Camden and other Southern New Jersey towns to Philadelphia. Ferry systems allowed Camden to generate business and economic growth. "These businesses included lumber dealers, manufacturers of wooden shingles, pork sausage manufacturers, candle factories, coachmaker shops that manufactured carriages and wagons, tanneries, blacksmiths and harness makers." The Cooper's Ferry Daybook, 1819–1824, documenting Camden's Point Pleasant Teamboat, survives to this day. Originally a suburban town with ferry service to Philadelphia, Camden evolved into its own city. Until 1844, Camden was a part of Gloucester County. In 1840, the city's population had reached 3,371, and Camden appealed to the state legislature, which resulted in the creation of Camden County in 1844.
The poet Walt Whitman spent his later years in Camden. He bought a house on Mickle Street in March 1884. Whitman spent the remainder of his life in Camden and died in 1892 of a stroke. Whitman was a prominent member of the Camden community at the end of the nineteenth century.
Camden quickly became an industrialized city in the latter half of the nineteenth century. In 1860, census takers recorded eighty factories in the city, and the number of factories grew to 125 by 1870. Camden's industrialization accelerated in 1891 when Joseph Campbell incorporated his business, Campbell's Soup. Through the Civil War era, Camden gained a large immigrant population which formed the base of its industrial workforce. Between 1870 and 1920, Camden's population grew by 96,000 people due to the large influx of immigrants. Like other industrial cities, Camden prospered during strong periods of manufacturing demand and faced distress during periods of economic dislocation.
First half of the 20th century
At the turn of the 20th century, Camden became an industrialized city. At the height of Camden's industrialization, 12,000 workers were employed at RCA Victor, while another 30,000 worked at New York Shipbuilding. Camden Forge Company supplied materials for New York Ship during both world wars. RCA had 23 out of 25 of its factories inside Camden, and the Campbell Soup Company was also a major employer. In addition to major corporations, Camden also housed many small manufacturing companies as well as commercial offices.
From 1899 to 1967, Camden was the home of New York Shipbuilding Corporation, which at its World War II peak was the largest and most productive shipyard in the world. Notable naval vessels built at New York Ship include the ill-fated cruiser USS Indianapolis and the aircraft carrier USS Kitty Hawk. In 1959, the first commercial nuclear-powered ship, the NS Savannah, was launched in Camden. The Fairview Village section of Camden (initially Yorkship Village) is a planned European-style garden village that was built by the Federal government during World War I to house New York Shipbuilding Corporation workers.
From 1901 through 1929, Camden was headquarters of the Victor Talking Machine Company, and thereafter of its successor RCA Victor, the world's largest manufacturer of phonographs and phonograph records for the first two-thirds of the 20th century. Victor established some of the first commercial recording studios in Camden, where Enrico Caruso, Arturo Toscanini, Sergei Rachmaninoff, Jascha Heifetz, Leopold Stokowski, John Philip Sousa, Woody Guthrie, Jimmie Rodgers, Fats Waller, and the Carter Family, among many others, made famous recordings. General Electric reacquired RCA and the remaining Camden factories in 1986.
In 1919, plans for the Delaware River Bridge were enacted as a means to reduce ferry traffic between Camden and Philadelphia. The bridge was estimated to cost $29 million, but the total cost at the end of the project was $37,103,765.42. New Jersey and Pennsylvania each paid half of the final cost of the bridge. The bridge opened at midnight on July 1, 1926. Thirty years later, in 1956, the bridge was renamed the Benjamin Franklin Bridge.
During the 1930s, Camden faced a decline in economic prosperity due to the Great Depression. By the mid-1930s, the city had to pay its workers in scrip because it could not pay them in currency. Camden's industrial foundation kept the city from going bankrupt. Major corporations such as RCA Victor, Campbell's Soup and New York Shipbuilding Corporation employed close to 25,000 people in Camden through the depression years. New companies were also being created during this time. On June 6, 1933, the city hosted America's first drive-in movie theater.
Between 1929 and 1957, Camden Central Airport was active; during the 1930s, it was Philadelphia's main airport. It was located in Pennsauken Township, on the north bank of the Cooper River. Its terminal building was beside what became known as Airport Circle.
Camden's ethnic demographic shifted dramatically at the beginning of the twentieth century. German, British, and Irish immigrants made up the majority of the city at the beginning of the second half of the nineteenth century. By 1920, Italian and Eastern European immigrants had become the majority of the population. African Americans had also been present in Camden since the 1830s. The migration of African Americans from the south increased during World War II. The different ethnic groups began to form segregated communities within the city and around religious organizations. Communities formed around figures such as Tony Mecca from the Italian neighborhood, Mario Rodriguez from the Puerto Rican neighborhood, and Ulysses Wiggins from the African American neighborhood.
Second half of the 20th century
After close to 50 years of economic and industrial growth, the city of Camden experienced a period of economic stagnation and deindustrialization: after reaching a peak of 43,267 manufacturing jobs in 1950, there was an almost continuous decline to a new low of 10,200 manufacturing jobs in the city by 1982. With this industrial decline came a plummet in population: in 1950 there were 124,555 residents, compared to just 84,910 in 1980.
The city experienced white flight, as many White residents left the city for such segregated suburbs as Cherry Hill. The 1970 United States Census showed a loss of 15,000 residents, which reflected an increase of almost 50% in the number of Black residents, which grew from 27,700 to 40,000, and a simultaneous decline of 30% in the city's white population, which dropped from 89,000 to 61,000. Cherry Hill saw its population double to 64,000, which was 98.7% White. The city's population, which had been 59.8% White and 39.1% Black in 1970, was 30.6% White, 53.0% Black and 15.7% Other Race in 1980. By 1990, the balance was 19.0% White, 56.4% Black and 22.9% Other Race.
Alongside these declines, civil unrest and criminal activity rose in the city. From 1981 to 1990, mayor Randy Primas fought to renew the city economically. Ultimately, Primas did not secure Camden's economic future; his successor, mayor Milton Milan, declared bankruptcy for the city in July 1999.
Industrial decline
After World War II, Camden's biggest manufacturing companies, RCA Victor and the Campbell Soup Company, decentralized their production operations. This period of capital flight was a means to regain control from unionized workers and to avoid the rising labor costs unions demanded. Campbell's kept its corporate headquarters in Camden, but the bulk of its cannery production was located elsewhere after a union workers' strike in 1934. Local South Jersey tomatoes were replaced in 1979 by industrially produced Californian tomato paste.
During the 1940s, RCA Victor began relocating some production to rural Indiana to employ low-wage ethnic Scottish-Irish workers and since 1968, has employed Mexican workers from Chihuahua.
The New York Shipbuilding Corporation, founded in 1899, shut down in 1967 due to mismanagement, labor unrest, construction accidents, and low demand for shipbuilding. When New York Ship shut down, Camden lost its largest postwar employer.
The opening of the Cherry Hill Mall in 1961 increased Cherry Hill's property values while decreasing Camden's. Enclosed suburban malls, especially ones like Cherry Hill's, which boasted well-lit parking lots and babysitting services, were preferred by the white middle class over Philadelphia's central business district, and Cherry Hill became the designated regional retail destination. The mall, along with the Garden State Racetrack, the Cherry Hill Inn, and the Hawaiian Cottage Cafe, initially attracted the white middle class of Camden to the suburbs.
Manufacturing companies were not the only businesses that were hit. After manufacturers left Camden and outsourced their production, white-collar companies and workers followed suit, leaving for the newly constructed offices of Cherry Hill.
Unionization
Approximately ten million cans of soup were produced at Campbell's per day. This put additional stress on cannery workers who already faced dangerous conditions in an outmoded, hot and noisy factory. The Dorrance family, founders of Campbell's, made an immense amount of profit while lowering the costs of production.
The initial strikes aimed to gain union recognition, which workers won in 1940. Several other strikes followed over the next several decades, all demanding increased pay. Campbell's started hiring seasonal workers, immigrants, and contingent labor, the last of whom it would fire eight weeks after hiring.
Civil unrest and crime
On September 6, 1949, mass murderer Howard Unruh went on a killing spree in his Camden neighborhood, killing 13 people. Unruh, who was subsequently confined to a state psychiatric facility, died on October 19, 2009.
A civilian and a police officer were killed in a September 1969 riot, which broke out in response to accusations of police brutality. Two years later, public disorder returned with widespread riots in August 1971, following the death of a Puerto Rican motorist at the hands of white police officers. When the officers were not charged, Hispanic residents took to the streets and called for the suspension of those involved. The officers were ultimately charged, but remained on the job and tensions soon flared. On the night of August 19, 1971, riots erupted, and sections of Downtown were looted and torched over the next three days. Fifteen major fires were set before order was restored, and ninety people were injured. City officials ended up suspending the officers responsible for the death of the motorist, but they were later acquitted by a jury.
The Camden 28 were a group of anti-Vietnam War activists who, in 1971, planned and executed a raid on the Camden draft board. The raid resulted in a high-profile trial against the activists that was seen by many as a referendum on the Vietnam War, in which 17 of the defendants were acquitted by a jury even though they admitted having participated in the break-in.
In 1996, Governor of New Jersey Christine Todd Whitman frisked Sherron Rolax, a 16-year-old African-American youth, an event which was captured in an infamous photograph. Rolax alleged his civil rights were violated and sued the state of New Jersey. His suit was later dismissed.
Revitalization efforts
In 1981, Randy Primas was elected mayor of Camden, but entered office "haunted by the overpowering legacy of financial disinvestment." Following his election, the state of New Jersey closed the $4.6 million deficit that Primas had inherited, but also decided that Primas should lose budgetary control until he began providing the state with monthly financial statements, among other requirements. When he regained control, Primas had limited options regarding how to close the deficit, and so in an attempt to renew Camden, Primas campaigned for the city to adopt a prison and a trash-to-steam incinerator. While these two industries would provide some financial security for the city, the proposals did not go over well with residents, who overwhelmingly opposed both the prison and the incinerator.
While the proposed prison, which was to be located on the North Camden Waterfront, would generate $3.4 million for Camden, Primas faced extreme disapproval from residents. Many believed that a prison in the neighborhood would negatively affect North Camden's "already precarious economic situation." Primas, however, was wholly concerned with the economic benefits: he told The New York Times, "The prison was a purely economic decision on my part." Eventually, on August 12, 1985, the Riverfront State Prison opened its doors.
Camden residents also objected to the trash-to-steam incinerator, which was another proposed industry. Once again, Primas "...was motivated by fiscal more than social concerns," and he faced heavy opposition from Concerned Citizens of North Camden (CCNC) and from Michael Doyle, who was so opposed to the plant that he appeared on CBS's 60 Minutes, saying "Camden has the biggest concentration of people in all the county, and yet there is where they're going to send in this sewage... ...everytime you flush, you send to Camden, to Camden, to Camden." Despite this opposition, which eventually culminated in protests, "the county proceeded to present the city of Camden with a check for $1 million in March 1989, in exchange for the city-owned land where the new facility was to be built... ...The $112 million plant finally fired up for the first time in March 1991."
Other notable events
Despite the declines in industry and population, other changes to the city took place during this period:
In 1950, Rutgers University absorbed the former College of South Jersey to create Rutgers University–Camden.
In 1992, the state of New Jersey under the Florio administration made an agreement with GE to ensure that GE would not close the remaining buildings in Camden. The state of New Jersey would build a new high-tech facility on the site of the old Campbell Soup Company factory and trade these new buildings to GE for the existing old RCA Victor buildings. Later, the new high tech buildings would be sold to Martin Marietta. In 1994, Martin Marietta merged with Lockheed to become Lockheed Martin. In 1997, Lockheed Martin divested the old Victor Camden Plant as part of the birth of L-3 Communications.
In 1999, Camden was selected as the location for the battleship USS New Jersey, which remains in Camden.
21st century
Originally the city's main industry was manufacturing, and in recent years Camden has shifted its focus to education and medicine in an attempt to revitalize itself. Of the top employers in Camden, many are education and/or healthcare providers: Cooper University Hospital, Cooper Medical School of Rowan University, Rowan University, Rutgers University-Camden, Camden County College, Virtua, Our Lady of Lourdes Medical Center, and CAMcare are all top employers. The eds and meds industry itself is the single largest source of jobs in the city: of the roughly 25,000 jobs in the city, 7,500 (30%) of them come from eds and meds institutions. The second-largest source of jobs in Camden is the retail trade industry, which provides roughly 3,000 (12%) jobs. While already the largest employer in the city, the eds and meds industry in Camden is growing and is doing so despite falling population and total employment: From 2000 to 2014, population and total employment in Camden fell by 3% and 10% respectively, but eds and meds employment grew by 67%.
Despite previous failures to transform the Camden Waterfront, in September 2015 Liberty Property Trust and Mayor Dana Redd announced an $830 million plan to rehabilitate the Waterfront. The project, which is the biggest private investment in the city's history, aims to redevelop land south of the Ben Franklin Bridge and includes plans for 1.5 million square feet of commercial space, 211 residences, a 130-room hotel, more than 4,000 parking spaces, a downtown shuttle bus, a new ferry stop, a riverfront park, and two new roads. The project is a modification of a previous $1 billion proposal by Liberty Property Trust, which would have included commercial space, 1,600 homes, and a 140-room hotel. On March 11, 2016, the New Jersey Economic Development Authority approved the modified plans, and officials like Timothy J. Lizura of the NJEDA expressed their enthusiasm: "It's definitely a new day in Camden. For 20 years, we've tried to redevelop that city, and we finally have the traction between a very competent mayor's office, the county police force, all the educational reforms going on, and now the corporate interest. It really is the right ingredient for changing a paradigm which has been a wreck."
In 2013, the New Jersey Economic Development Authority created the New Jersey Economic Opportunity Act, which provides incentives for companies to relocate to or remain in economically struggling locations in the state. These incentives largely come in the form of tax breaks, which are payable over 10 years and are equivalent to a project's cost. According to The New York Times, "...the program has stimulated investment of about $1 billion and created or retained 7,600 jobs in Camden." This NJEDA incentive package has been used by organizations and firms such as the Philadelphia 76ers, Subaru of America, Lockheed Martin, and Holtec International.
In late 2014 the Philadelphia 76ers broke ground in Camden (across the street from the BB&T Pavilion) to construct a new 125,000-square-foot training complex. The Sixers Training Complex includes an office building and a 66,230-square-foot basketball facility with two regulation-size basketball courts, a 2,800-square-foot locker room, and a 7,000-square-foot roof deck. The $83 million complex had its grand opening on September 23, 2016, and was expected to provide 250 jobs for the city of Camden.
Also in late 2014, Subaru of America announced that in an effort to consolidate their operations, their new headquarters would be located in Camden. The $118 million project broke ground in December 2015 but was put on hold in mid-2016 because the original plans for the complex had sewage and waste water being pumped into an outdated sewage system. Adjustments to the plans were made and the project was expected to be completed in 2017, creating up to 500 jobs in the city upon completion. The building was completed in April 2018. The company also said that it would donate 50 cherry trees to the city and aim to follow a "zero landfill" policy in which all waste from the offices would be either reduced, reused, or recycled.
Several smaller-scale projects and transitions also took place during the 21st century.
In preparation for the 2000 Republican National Convention in Philadelphia, various strip clubs, hotels, and other businesses along Admiral Wilson Boulevard were torn down in 1999, and a park that once existed along the road was replenished.
In 2004, conversion of the old RCA Victor Building 17 to The Victor, an upscale apartment building was completed. The same year, the River LINE, between the Entertainment Center at the Waterfront in Camden and the Transit Center in Trenton, was opened, with a stop directly across from The Victor.
In 2010, massive police corruption was exposed that resulted in the convictions of several policemen, dismissals of 185 criminal cases, and lawsuit settlements totaling $3.5 million that were paid to 88 victims. On May 1, 2013, the Camden Police Department was dissolved and the newly formed Camden County Police Department took over full responsibility for policing the city. This move was met with some disapproval from residents of both the city and county.
As of 2019, numerous projects were underway downtown and along the waterfront.
Geography
According to the United States Census Bureau, the city had a total area of 10.34 square miles (26.78 km2), including 8.92 square miles (23.10 km2) of land and 1.42 square miles (3.68 km2) of water (13.75%).
Camden borders Collingswood, Gloucester City, Oaklyn, Pennsauken Township and Woodlynne in Camden County, as well as Philadelphia across the Delaware River in Pennsylvania. Just offshore of Camden is Pettys Island, which is part of Pennsauken Township. The Cooper River (popular for boating) flows through Camden, and Newton Creek forms Camden's southern boundary with Gloucester City.
Camden contains the United States' first federally funded planned community for working class residents, Yorkship Village (now called Fairview). The village was designed by Electus Darwin Litchfield, who was influenced by the "garden city" developments popular in England at the time.
Neighborhoods
Camden contains more than 20 generally recognized neighborhoods:
Ablett Village
Bergen Square
Beideman
Broadway
Centerville
Center City/Downtown Camden/Central Business District
Central Waterfront
Cooper
Cooper Grant
Cooper Point
Cramer Hill
Dudley
East Camden
Fairview
Gateway
Kaighn Point
Lanning Square
Liberty Park
Marlton
Morgan Village
North Camden
Parkside
Pavonia
Pyne Point
Rosedale
South Camden
Stockton
Waterfront South
Walt Whitman Park
Yorkship
Port
On the Delaware River, with access to the Atlantic Ocean, the Port of Camden handles break bulk, bulk cargo, as well as some containers. Terminals fall under the auspices of the South Jersey Port Corporation as well as private operators such as Holt Logistics/Holtec International. The port receives hundreds of ships moving international and domestic cargo annually and is one of the USA's largest shipping centers for wood products, cocoa and perishables.
Climate
Camden has a humid subtropical climate (Cfa in the Köppen climate classification) with hot summers and cool winters.
Demographics
2020 census
Note: the US Census treats Hispanic/Latino as an ethnic category. This table excludes Latinos from the racial categories and assigns them to a separate category. Hispanics/Latinos can be of any race.
2010 Census
The city of Camden was 47% Hispanic of any race, 44% non-Hispanic black, 6% non-Hispanic white, and 3% other. Camden is predominantly populated by African Americans and Puerto Ricans.
The Census Bureau's 2006–2010 American Community Survey showed that (in 2010 inflation-adjusted dollars) median household income was $27,027 (with a margin of error of +/- $912) and the median family income was $29,118 (+/- $1,296). Males had a median income of $27,987 (+/- $1,840) versus $26,624 (+/- $1,155) for females. The per capita income for the city was $12,807 (+/- $429). About 33.5% of families and 36.1% of the population were below the poverty line, including 50.3% of those under age 18 and 26.2% of those age 65 or over.
As of 2006, 52% of the city's residents lived in poverty, one of the highest rates in the nation. The city had a median household income of $18,007, the lowest of all U.S. communities with populations of more than 65,000 residents. A group of poor Camden residents were the subject of a 20/20 special on poverty in America broadcast on January 26, 2007, in which Diane Sawyer profiled the lives of three young children growing up in Camden. A follow-up was shown on November 9, 2007.
In 2011, Camden's unemployment rate was 19.6%, compared with 10.6% in Camden County as a whole. As of 2009, the unemployment rate in Camden was 19.2%, compared to the 10% overall unemployment rate for Burlington, Camden and Gloucester counties and a rate of 8.4% in Philadelphia and the four surrounding counties in Southeastern Pennsylvania.
2000 Census
As of the 2000 United States Census there were 79,904 people, 24,177 households, and 17,431 families residing in the city. The population density was 9,057.0 people per square mile (3,497.9/km2). There were 29,769 housing units at an average density of 3,374.3 units per square mile (1,303.2/km2). The racial makeup of the city was 16.84% White, 53.35% African American, 0.54% Native American, 2.45% Asian, 0.07% Pacific Islander, 22.83% from other races, and 3.92% from two or more races. 38.82% of the population were Hispanic or Latino of any race.
There were 24,177 households, out of which 42.2% had children under the age of 18 living with them, 26.1% were married couples living together, 37.7% had a female householder with no husband present, and 27.9% were non-families. 22.5% of all households were made up of individuals, and 7.8% had someone living alone who was 65 years of age or older. The average household size was 3.52 and the average family size was 4.00.
In the city, the population is quite young with 34.6% under the age of 18, 12.0% from 18 to 24, 29.5% from 25 to 44, 16.3% from 45 to 64, and 7.6% who were 65 years of age or older. The median age was 27 years. For every 100 females, there were 94.3 males. For every 100 females age 18 and over, there were 90.0 males.
The median income for a household in the city was $23,421, and the median income for a family was $24,612. Males had a median income of $25,624 versus $21,411 for females. The per capita income for the city is $9,815. 35.5% of the population and 32.8% of families were below the poverty line. 45.5% of those under the age of 18 and 23.8% of those 65 and older were living below the poverty line.
In the 2000 Census, 30.85% of Camden residents identified themselves as being of Puerto Rican heritage. This was the third-highest proportion of Puerto Ricans in a municipality on the United States mainland, behind only Holyoke, Massachusetts and Hartford, Connecticut, for all communities in which 1,000 or more people listed an ancestry group.
Religion
Camden has religious institutions including many churches and their associated non-profit organizations and community centers, such as the Little Rock Baptist Church in the Parkside section of Camden, First Nazarene Baptist Church, Kaighn Avenue Baptist Church, and the Parkside United Methodist Church. Other congregations that remain active are the Newton Monthly Meeting of the Religious Society of Friends, at Haddon Avenue and Cooper Street, and the Masjid at 1231 Mechanic Street, Camden, NJ 08104.
The first Scientology church was incorporated in December 1953 in Camden by L. Ron Hubbard, his wife Mary Sue Hubbard, and John Galusha.
Father Michael Doyle, the pastor of Sacred Heart Catholic Church located in South Camden, has played a large role in Camden's spiritual and social history. In 1971, Doyle was part of the Camden 28, a group of anti-Vietnam War activists who planned to raid a draft board office in the city. This is noted by many as the start of Doyle's activities as a radical 'Catholic Left'. Following these activities, Monsignor Doyle went on to become the pastor of Sacred Heart Church, remaining known for his poetry and activism. Monsignor Doyle and the Sacred Heart Church's main mission is to form a connection between the primarily white suburban surrounding areas and the inner-city of Camden.
In 1982, Father Mark Aita of Holy Name of Camden founded the St. Luke's Catholic Medical Services. Aita, a medical doctor and a member of the Society of Jesus, created the first medical system in Camden that did not use rotating primary care physicians. Since its conception, St. Luke's has grown to include Patient Education Classes as well as home medical services, aiding over seven thousand Camden residents.
Culture
Camden's role as an industrial city gave rise to distinct neighborhoods and cultural groups that have affected the growth and decline of the city over the course of the 20th century. Camden is also home to historic landmarks detailing its rich history in literature, music, social work, and industry such as the Walt Whitman House, the Walt Whitman Cultural Arts Center, the Rutgers–Camden Center for the Arts and the Camden Children's Garden.
Camden's cultural history has been greatly affected by both its economic and social position over the years. From 1950 to 1970, industry plummeted, resulting in close to 20,000 jobs being lost for Camden residents. This mass unemployment, as well as social pressure from neighboring townships, caused an exodus of citizens, mostly white. The gap was filled by new African American and Latino residents and led to a restructuring of Camden's communities: the departure of white residents to neighboring towns such as Collingswood and Cherry Hill left both new and longtime African American and Latino residents to reshape the community. To help in this process, numerous not-for-profit organizations such as Hopeworks and the Neighborhood Center were formed to facilitate Camden's movement into the 21st century.
Due to its location as county seat, as well as its proximity to Philadelphia, Camden has had strong connections with its neighboring city.
On July 17, 1951, the Delaware River Port Authority, a bi-state agency, was created to promote trade and better coordinate transportation between the two cities.
In June 2014, the Philadelphia 76ers announced that they would relocate their home offices and construct a practice facility on the Camden Waterfront, adding 250 permanent jobs in the city and creating what CEO Scott O'Neil described as the "biggest and best training facility in the country", using $82 million in tax incentives offered by the New Jersey Economic Development Authority.
The Battleship New Jersey, a museum ship located on the Delaware waterfront, was a contested topic for the two cities. The DRPA put millions of dollars into the museum ship project as well as the rest of the waterfront, but the ship was originally donated to a Camden-based agency called the Home Port Alliance, which argues that the Battleship New Jersey is necessary for Camden's economic growth. Since October 2001, the Home Port Alliance has maintained ownership of the Battleship New Jersey.
Black culture
In 1967, Charles 'Poppy' Sharp founded the Black Believers of Knowledge, an organization founded on the betterment of African American citizens in South Camden. He would soon rename his organization to the Black People's Unity Movement (BPUM). The BPUM was one of the first major cultural organizations to arise after the deindustrialization of Camden's industrial life. Going against the building turmoil in the city, Sharp founded BPUM on "the belief that all the people in our community should contribute to positive change."
In 2001, Camden residents and entrepreneurs founded the South Jersey Caribbean Cultural and Development Organization (SJCCDO) as a non-profit organization aimed at promoting understanding and awareness of Caribbean culture in South Jersey and Camden. The most prominent of the events that the SJCCDO organizes is the South Jersey Caribbean Festival, an event that is held for both cultural and economic reasons. The festival's primary focus is cultural awareness among all of Camden's residents, and it also showcases free art and music as well as financial information and free promotion for Camden artists.
In 1986, Tawanda 'Wawa' Jones began the Camden Sophisticated Sisters (CSS), a youth drill team. CSS serves as a self-proclaimed 'positive outlet' for Camden's students, offering dance lessons as well as community service hours and social work opportunities. Since its inception, CSS has grown to include two other organizations, both run by Jones: the Camden Distinguished Brothers and the Almighty Percussion Sound drum line. In 2013, CSS was featured on ABC's Dancing with the Stars.
Hispanic and Latino culture
The Latin American Economic Development Association (LAEDA) was founded on December 31, 1987. LAEDA is a non-profit economic development organization that helps minorities in Camden create small businesses. It was founded in an attempt to revitalize Camden's economy and provide job experience for its residents, and it operates through two major rebuilding methods: the Entrepreneurial Development Training Program (EDTP) and the Neighborhood Commercial Expansion Initiative (NCEI). In 1990, LAEDA began the EDTP, which offers residents employment and job opportunities through ownership of small businesses; over time the program created 506 businesses and 1,169 jobs, and as of 2016 half of these businesses were still in operation. The NCEI then finds locations for these businesses to operate in, purchasing and refurbishing abandoned real estate. As of 2016, four buildings had been refurbished, including the First Camden National Bank & Trust Company Building.
One of the longest-standing traditions in Camden's Hispanic community is the San Juan Bautista Parade, a celebration of St. John the Baptist conducted annually since 1957. The parade began when a group of parishioners from Our Lady of Mount Carmel marched with the church's founder, Father Leonardo Carrieri. The march was originally a way for the parishioners to recognize and show their Puerto Rican heritage, and it eventually became the modern-day San Juan Bautista Parade. Since its inception, the parade has grown into Parada San Juan Bautista, Inc., a not-for-profit organization dedicated to maintaining the community presence of Camden's Hispanic and Latino members. The organization's work includes a month-long event around the parade, with a community commemorative mass and a coronation pageant, and it awards up to $360,000 in scholarships to high school students of Puerto Rican descent.
On May 30, 2000, Camden resident and grassroots organizer Lillian Santiago began a movement to rebuild abandoned lots in her North Camden neighborhood into playgrounds. The movement was met with resistance from the Camden government, citing monetary problems. As Santiago's movement gained more notability in her neighborhoods she was able to move other community members into action, including Reverend Heywood Wiggins. Wiggins was the president of the Camden Churches Organized for People, a coalition of 29 churches devoted to the improvement of Camden's communities, and with his support Santiago's movement succeeded. Santiago and Wiggins were also firm believers in Community Policing, which would result in their fight against Camden's corrupt police department and the eventual turnover to the State government.
Arts and entertainment
Camden has two generally recognized neighborhoods located on the Delaware River waterfront, Central and South. The Waterfront South was founded in 1851 by the Kaighns Point Land Company. During World War II, Waterfront South housed many of the industrial workers for the New York Shipbuilding Company. Currently, the Waterfront is home to many historical buildings and cultural icons. The Waterfront South neighborhood is considered a federal and state historic area due to its history and culturally significant buildings, such as the Sacred Heart Church and the South Camden Trust Company. The Central Waterfront is located adjacent to the Benjamin Franklin Bridge and is home to the Nipper Building (also known as The Victor), the Adventure Aquarium, and the Battleship New Jersey, a museum ship operated by the Home Port Alliance.
On February 16, 2012, Camden's waterfront began an art crawl and volunteer initiative called Third Thursday in an effort to support local Camden businesses and restaurants. Part of Camden's art crawl movement exists in Studio Eleven One, a fully restored 1906 firehouse opened in 2011 that operated as an art gallery owned by William and Ronja Butler. The Butlers moved to Camden in 2011 from Des Moines, Iowa, and began the Third Thursday art movement. William Butler and Studio Eleven One are part of his wife's company, Thomas Lift LLC, self-described as a "socially conscious company" that works to connect Camden's art scene with philanthropic organizations. Its work has included efforts against human trafficking as well as ecological donations.
In 2014, Camden began Connect the Lots, a community program designed to revitalize unused areas for community engagement. Connect the Lots was founded through The Kresge Foundation, and the project "seeks to create temporary, high-quality, safe outdoor spaces that are consistently programmed with local cultural and recreational activities". Its partners include the Cooper's Ferry Partnership, a private non-profit corporation dedicated to urban renewal. Connect the Lots' main work is the 'pop-up parks' it creates around Camden. In 2014, Connect the Lots created a pop-up skate park for Camden youth with assistance from Camden residents as well as students. As of 2016, the program's free offerings had expanded to include outdoor yoga and free concerts.
In October 2014, Camden finished construction of the Kroc Center, a Salvation Army-funded community center located in the Cramer Hill neighborhood. The Kroc Center's mission is to provide social services to the people of Camden as well as community engagement opportunities. The center was funded by a $59 million donation from Joan Kroc and by the Salvation Army. The project was launched in 2005 with a proposed completion time of one year; however, due to the location of the site as well as governmental concerns, the project was delayed. The Kroc Center's site was an 85-acre former landfill that closed in 1971, and Salvation Army Major Paul Cain cited the landfill's proximity to the waterfront and the need to handle stormwater management as the main reasons for the delay. The center eventually opened on October 4, 2014, with almost citywide acclaim; Camden Mayor Dana Redd called it "the crown jewel of the city." The Kroc Center offers an 8-lane, 25-yard competition pool, a children's water park, various athletic and entertainment options, and an in-center chapel.
The Symphony in C orchestra is based at Rutgers University-Camden. Established as the Haddonfield Symphony in 1952, the organization was renamed and relocated to Camden in 2006.
Philanthropy
Camden has a variety of non-profit, tax-exempt organizations that assist city residents with a wide range of health and social services at free or reduced charge. Camden's standing as one of the poorest cities in New Jersey has spurred residents and local groups to come together and develop organizations aimed at providing relief to its citizens. As of the 2000 Census, Camden's income per capita was $9,815, making Camden the poorest city in the state of New Jersey as well as one of the poorest cities in the United States. Camden also has one of the highest rates of childhood poverty in the nation.
Camden was once a thriving industrial city, home to RCA Victor, the Campbell Soup Company, and one of the largest shipbuilding companies. Camden's decline stemmed from the loss of jobs once these companies moved their operations out of the city. Many of Camden's non-profit organizations emerged during the 1900s, when the city suffered a large decline in jobs that affected its growth and population. These organizations are located in all of Camden's sub-sections and offer free services to all city residents in an attempt to combat poverty and aid low-income families. The services offered range from preventive health care, homeless shelters, and early childhood education to home ownership and restoration services. Nonprofits in Camden strive to assist Camden residents in need of all ages, from children to the elderly. Each nonprofit organization in Camden has an impact on the community with specific goals and services. These organizations survive through donations, partnerships, and fundraising, and volunteers are needed at many of them to assist with various programs and duties. Camden's nonprofits also focus on development, prevention, and revitalization of the community, and they serve as resources for the homeless, the unemployed, and the financially insecure.
One of Camden's most prominent and longest-running organizations, with 103 years of service, is The Neighborhood Center, located in the Morgan Village section of Camden. The Neighborhood Center was founded in 1913 by Eldridge Johnson, George Fox Sr., Mary Baird, and local families in the community to provide a safe environment for the city's children. The center's goal is to promote and enable academic, athletic, and arts achievements, and it was created to assist the numerous families living in poverty in Camden. As of 2015, The Neighborhood Center also has an urban community garden. Many of the services and activities offered for children are after-school programs, and programs for teenagers are also available; these youth programs aim to guide students toward success during and after their high school years. The activities at The Neighborhood Center are meant to challenge youth in a safe environment for fun and learning, and they are developed with the aim of helping to break the cycle of poverty that is common in the city of Camden.
Center for Family Services, Inc. offers 76 free programs. The organization has operated in South Jersey for over 90 years and is one of the leading non-profits in the city. Cure4Camden is a community-run program focused on stopping the spread of violence in Camden and the surrounding communities, particularly in the Camden City communities of:
Liberty Park
Whitman Park
Centerville
Cooper Plaza/Lanning Square
Center for Family Services offers additional programs such as the Active Parenting and Baby Best Start programs, mental health and crisis intervention, and rehabilitative care. It is located at 584 Benson Street in Camden. Center for Family Services is a nonprofit organization helping adults, children, and families, with a main focus on "prevention." It has over 50 programs aimed at the most "vulnerable" members of the community, made possible by donors, a board of trustees, and a professional staff. Its work helps prevent abuse, neglect, and severe family problems, serves thousands of people in the community, and provides intervention services to individuals and families. Its programs for children are home-based, community-based, and school-based. Center for Family Services is funded through partners, donors, and funders from the community and elsewhere.
Cathedral Soup Kitchen, Inc. is a human-services non-profit organization and the largest emergency food distribution agency in Camden. The organization was founded in 1976 by four Camden residents after they attended a lecture given by Mother Teresa. It ran on donated food and funds for fourteen years until it was granted tax-exempt status as a 501(c)(3) corporation in 1990. In the 1980s, a new program called the "casserole program" started at the Cathedral Kitchen, in which volunteers cooked and froze casseroles that were dropped off at the Cathedral Kitchen and then served to guests. The Cathedral Kitchen faced many skeptics at first, despite the problems it was attempting to solve in the community, such as hunger. Its first cooking staff consisted of Clyde and Theresa Jones; Sister Jean Spena later joined, and the three ran cooking operations over the course of several years. The organization provides 100,000 meals a year, serves hot meals Monday through Saturday to Camden County residents, and launched a culinary arts catering program in 2009. The Cathedral Kitchen's annual revenue is $3,041,979.
A fundraising component of the Cathedral Kitchen is CK Cafe, a small lunch restaurant used by the Cathedral Kitchen to provide employment to graduates of its programs and to generate profits that help continue providing food to the hungry. CK Cafe is open Monday through Friday from 11:00 am to 2:00 pm, accepts takeout orders by telephone, and offers catering and event packages. The Cathedral Kitchen distinguishes itself from other soup kitchens by referring to those who eat there as "dinner guests" rather than as the homeless or the hungry. The Cathedral Kitchen also offers various volunteer opportunities, as well as free health clinics with a variety of services including dental care and other social services.
Catholic Charities of Camden, Inc. is a faith-based organization that advocates for and uplifts the poor and unemployed. It provides services in six New Jersey counties and serves over 28,000 people each year. The extent of the services offered exceeds that of any of Camden's other non-profit organizations.
Camden Churches Organized for People (CCOP) is an arrangement among various congregations of Camden to partner against problems in the community. CCOP is affiliated with the Pacific Institute for Community Organization (PICO). It is a non-religious, non-profit organization that works with believers in Camden to solve social problems in the community; their beliefs and morals are the foundation of its efforts. CCOP's system for community organizing was modeled after PICO's, which stresses the importance of social change rather than social services when addressing the causes of residents' and families' problems. CCOP's initial efforts began in 1995, when the organization was composed of only two directors and about 60 leaders from its 18 member churches.
The congregation leaders of CCOP all had a considerable number of networking contacts but were also looking to expand and share those networking relationships with others. CCOP congregation leaders also had to listen to the concerns of their networking contacts, the community, and the congregations. One of the main services of CCOP was conducting one-on-ones with people in the community to recognize patterns in residents' problems.
The Cooper Grant Neighborhood Association is located in the historic Cooper Grant neighborhood that once housed William Cooper, an English Quaker with long ties to Camden; his son Richard Cooper, along with his four children, contributed to the creation of the Cooper Health System. The organization's goal is to enrich the lives of citizens living in the Cooper Grant neighborhood, which stretches from the Camden Waterfront up to the Rutgers University–Camden campus. It offers community services to citizens living in the historic area that include activism, improving community health and involvement, safety and security, housing development, affordable childcare services, and connecting neighborhoods and communities. The Cooper Grant Neighborhood Association owns the Cooper Grant Community Garden. The Project H.O.P.E. organization offers healthcare to the homeless, preventive health care, substance abuse programs, social work services, and behavioral health care.
The Heart of Camden Organization offers home renovation and restoration services and home ownership programs. Heart of Camden receives donations from online shoppers through Amazon Smile. Heart of Camden Organization is partners with District Council Collaborative Board (DCCB). Heart of Camden Organization's accomplishments include the economic development of various entities such as the Waterfront South Theatre, Neighborhood Greenhouse, and a community center with a gymnasium. Another accomplishment of Heart of Camden Organization is its revitalization of Camden, which includes Liney's Park Community Gardens and Peace Park.
Fellowship House of South Camden is an organization that offers Christian (nondenominational) after-school and summer programs. Fellowship House was founded in 1965 and started as a weekly Bible club program for students in inner-city Camden. It settled into a house at its current location in 1969 and hired its first staff member, director Dick Wright, in 1973.
Volunteers of America (VolunteersofAmerica.org) helps families facing poverty and is a community-based organization geared toward helping families live self-sufficient, healthy lives. With 120 years of service, Volunteers of America has dedicated its services to Americans in need of help. Home for the Brave is a 30-bed housing program that assists homeless veterans and coincides with the Homeless Veterans Reintegration Program, which is funded through the Department of Labor. Additional services include emergency support, community support, employment services, housing services, veterans services, behavioral services, and senior housing.
The Center for Aquatic Sciences was founded in 1989 and continues to promote its mission of: "education and youth development through promoting the understanding, appreciation and protection of aquatic life and habitats." In performing this mission, the Center strives to be a responsible member of the community, assisting in its economic and social redevelopment by providing opportunities for education, enrichment and employment. Education programs include programs for school groups in our on-site classrooms and aquarium auditorium as well as outreach programs throughout the Delaware Valley. The center also partners with schools in both Camden and Philadelphia to embed programs during the school day and to facilitate quality educational after-school experiences.
Research and conservation work includes an international program in which the center has studied and sought to protect the threatened Banggai cardinalfish and its coral reef habitat in Indonesia. This work has resulted in the Banggai cardinalfish being listed as an endangered species on the IUCN Red List of Threatened Species, its becoming the first saltwater aquarium fish to be listed as endangered under the federal Endangered Species Act (ESA), the publication of The Banggai Cardinalfish: Natural History, Conservation, and Culture of Pterapogon kauderni, and numerous peer-reviewed journal articles.
The center's flagship program is CAUSE (Community and Urban Science Enrichment). CAUSE is a many-faceted science enrichment program for children and youth. The program, initiated in 1993, has been extremely successful, boasting a 100% high school graduation rate and a 98% college enrollment rate, and has gained local and regional attention as a model for comprehensive, inner-city youth development programs, focusing on intense academics and mentoring for a manageable number of youth.
Economy
About 45% of employment in Camden is in the "eds and meds" sector, providing educational and medical institutions.
Largest employers
Campbell Soup Company
Cooper University Hospital
Delaware River Port Authority
L3 Technologies, formerly L-3 Communications
Our Lady of Lourdes Medical Center
Rutgers University–Camden
State of New Jersey
New Jersey Judiciary
Subaru of America; relocated from Cherry Hill in 2018
Susquehanna Bank
UrbanPromise Ministry (largest private employer of teenagers)
Urban enterprise zone
Portions of Camden are part of a joint Urban Enterprise Zone. The city was selected in 1983 as one of the initial group of 10 zones chosen to participate in the program. In addition to other benefits to encourage employment within the Zone, shoppers can take advantage of a reduced 3.3125% sales tax rate (half of the 6.625% rate charged statewide) at eligible merchants. Established in September 1988, the city's Urban Enterprise Zone status expires in December 2023.
The UEZ program in Camden and four other original UEZ cities had been allowed to lapse as of January 1, 2017, after Governor Chris Christie, who called the program an "abject failure", vetoed a compromise bill that would have extended the status for two years. In May 2018, Governor Phil Murphy signed a law that reinstated the program in these five cities and extended the expiration date in other zones.
Redevelopment
The state of New Jersey has awarded more than $1.65 billion in tax credits to more than 20 businesses through the New Jersey Economic Opportunity Act. These companies include Subaru, Lockheed Martin, American Water, EMR Eastern and Holtec.
Campbell Soup Company decided to go forward with a scaled-down redevelopment of the area around its corporate headquarters in Camden, including an expanded corporate headquarters. In June 2012, Campbell Soup Company acquired the site of the vacant Sears building located near its corporate offices, where the company plans to construct the Gateway Office Park, and razed the Sears building after receiving approval from the city government and the New Jersey Department of Environmental Protection.
In 2013, Cherokee Investment Partners had a plan to redevelop north Camden with 5,000 new homes and a shopping center. Cherokee dropped its plans in the face of local opposition and the slumping real estate market.
Subaru, Lockheed Martin, American Water, EMR Eastern and Holtec are among several companies receiving New Jersey Economic Development Authority (EDA) tax incentives to relocate jobs to the city.
Lockheed Martin was awarded $107 million in tax breaks from the Economic Redevelopment Agency to move to Camden. Lockheed rents 50,000 square feet of the L-3 Communications building in Camden. According to the agency, Lockheed Martin invested $146.4 million in its Camden project. Lockheed stated that without these tax breaks it would have had to eliminate jobs.
In 2013 Camden received $59 million from the Kroc estate to be used in the construction of a new community center, and another $10 million was raised by the Salvation Army to cover the remaining construction costs. The Ray and Joan Kroc Corps Community Center, opened in 2014, is a 120,000-square-foot community center with an 8,000-square-foot water park and a 60-foot ceiling. The community center also contains a food pantry, a computer lab, a black box theater, a chapel, two pools, a gym, an outdoor track and field, a library with reading rooms, and both indoor and outdoor basketball courts.
In 2015 Holtec was given $260 million over the course of 10 years to open a 600,000-square-foot campus in Camden. Holtec stated that it planned to hire at least 1,000 employees within its first year of operating in Camden. According to the Economic Development Agency, Holtec is slated to bring in $155,520 in net benefit to the state by moving to Camden, but under the deal Holtec has no obligation to stay in Camden after its 10-year tax credits run out. Holtec's reports stated that the construction of the building would cost $260 million, equivalent to the tax benefits it received.
In fall 2017 Rutgers University's Camden campus opened its Nursing and Science Building. Rutgers spent $62.5 million to build the 107,000-square-foot building, located at 5th and Federal Streets, which houses physics, chemistry, biology and nursing classes along with nursing simulation labs.
In November 2017, Francisco "Frank" Moran was elected as the 48th Mayor of Camden. Moran had previously served as director of the Camden County Parks Department, where he oversaw several projects expanding the Camden County Park System, including Cooper River Park, and brought public ice skating rinks back to the county's parks.
Moran has helped bring several companies to Camden, including Subaru of America, Lockheed Martin, the Philadelphia 76ers, Holtec International, American Water, Liberty Property Trust, and EMR.
Moran has also assisted Camden's public schools by supplying them with new resources, such as new classroom technology and new facilities.
American Water was awarded $164.2 million in tax credits from New Jersey's Grow New Jersey Assistance Program to build a five-story, 220,000-square-foot building on Camden's waterfront. American Water opened the building in December 2018, the first in a long line of new waterfront attractions planned for Camden.
The NJ American Water Neighborhood Revitalization Tax Credit is a $985,000 grant introduced in July 2018, part of the $4.8 million that New Jersey American Water has invested in Camden. Its purpose is to allow current residents to remain in the city by providing them with $5,000 grants to make necessary home repairs. Some of the funding will also go towards Camden SMART (Stormwater Management and Resource Training), as well as the Cramer Hill NOW Initiative, which focuses on improving infrastructure and parks.
On June 5, 2017, Cooper's Poynt Park was completed. The 5-acre park features multi-use trails, a playground, and new lighting, and visitors can see both the Delaware River and the Benjamin Franklin Bridge. Prior to 1985, the land the park sits on was open space that allowed Camden residents access to the waterfront. In 1985, the Riverfront State Prison was built, blocking that access. The land became available for the park when the prison was demolished in 2009. Funding for the park was provided by the Wells Fargo Regional Foundation, the William Penn Foundation, the State Department of Community Affairs, the Fund for New Jersey, and the Camden Economic Recovery Board.
Cooper's Ferry Partnership is a private non-profit founded in 1984. It was originally known as the Cooper's Ferry Association until it merged with the Greater Camden Partnership in 2011, becoming Cooper's Ferry Partnership. Kris Kolluri is the current CEO. In a broad sense, its goal is to identify and advance economic development in Camden. While this does include housing rehabilitation, Cooper's Ferry is involved in multiple projects, including the Camden Greenway, a set of hiking and biking trails, and the Camden SMART (Stormwater Management and Resource Training) Initiative.
In January 2019, Camden received a $1 million grant from Bloomberg Philanthropies for A New View, a public art project seeking to turn illegal dump sites into public art fixtures. A New View is part of Bloomberg Philanthropies' larger Public Art Challenge. Additionally, the program will educate residents about the harmful effects of illegal dumping. The effort will include the Cooper's Ferry Partnership, the Rutgers-Camden Center for the Arts, the Camden Collaborative Initiative, and the Camden City Cultural and Heritage Commission, as well as local businesses and residents. Locations to be targeted include dumping sites in proximity to the Port Authority Transit Corporation (PATCO) Speedline, the River Line, and the Camden GreenWay. According to Mayor Francisco Moran, illegal dumping costs Camden more than $4 million each year.
Housing
Saint Joseph's Carpenter Society
Saint Joseph's Carpenter Society (SJCS) is a 501(c)(3) non-profit organization located in Camden. Pilar Hogan is the current executive director. Its focus is on the rehabilitation of existing residences, as well as the creation of new low-income, rent-controlled housing. SJCS is attempting to tackle the problem of abandoned properties in Camden by tracking down the homeowners so it can purchase and rehabilitate the properties. Since the organization's beginning, it has overseen the rehabilitation or construction of over 500 homes in Camden.
In addition to its rehabilitation efforts, SJCS also provides education and assistance to prospective homebuyers in the home-buying process. This includes a credit report analysis, information on how to establish credit, and assistance in finding other help for the homebuyers.
In March 2019, SJCS received $207,500 in federal funding from the U.S. Department of Housing and Urban Development's (HUD) NeighborWorks America program. NeighborWorks America is a public non-profit created by Congress in 1978, which is tasked with supporting community development efforts at the local level.
Mount Laurel Doctrine
The Mount Laurel doctrine stems from a 1975 court case, Southern Burlington County N.A.A.C.P. v. Mount Laurel Township (Mount Laurel I). The doctrine was an interpretation of the New Jersey State Constitution, and states that municipalities may not use their zoning laws in an exclusionary manner to make housing unaffordable to low- and moderate-income people. The court case itself was a challenge to Mount Laurel specifically, in which plaintiffs claimed that the township operated with the intent of making housing unaffordable for low- and moderate-income people. The doctrine is broader than the court case, covering all New Jersey municipalities.
Failed redevelopment projects
In early 2013, ShopRite announced that it would open the first full-service grocery store in Camden in 30 years, with plans to open its doors in 2015. In 2016 the company announced that it no longer planned to move to Camden, leaving the plot of land on Admiral Willson Boulevard barren and the 20-acre section of the city a food desert.
In May 2018, the Chinese company Ofo brought its dockless bikes to Camden, along with many other cities, for a six-month pilot in an attempt to break into the American market. After two months, in July 2018, Ofo removed its bikes from Camden as part of a broader pullout from most of the American cities it had entered, having decided that operating in those cities was not profitable.
On March 28, 2019, a former financial officer for Hewlett-Packard, Gulsen Kama, alleged that the company received a tax break based on false information. The company qualified for a $2.7 million tax break from the Grow NJ incentive of the Economic Development Authority (EDA). Kama testified that the company qualified for the tax break because of a false cost-benefit analysis she was ordered to prepare. She claims the analysis included a plan to move to Florida that was not in consideration by the company. The Grow NJ incentive has granted $11 billion in tax breaks to preserve and create jobs in New Jersey, but it has experienced problems as well. A state comptroller sample audit ordered by Governor Phil Murphy showed that approximately 3,000 jobs that companies listed with the EDA do not actually exist. Those jobs could be worth $11 million in tax credits. The audit also showed that the EDA did not collect sufficient data on companies that received tax credits.
Government
Camden has historically been a stronghold of the Democratic Party. Voter turnout is very low; approximately 19% of Camden's voting-age population participated in the 2005 gubernatorial election.
Local government
Since July 1, 1961, the city has operated within the Faulkner Act, formally known as the Optional Municipal Charter Law, under a Mayor-Council form of government. The city is one of 71 municipalities (of the 565) statewide that use this form of government. The governing body consists of the Mayor and the City Council, with all members elected in partisan voting to four-year terms of office on a staggered basis. The Mayor is directly elected by the voters. The City Council consists of seven council members. Since 1994, the city has been divided into four council districts, with a single council member elected from each of the four districts and three council members elected at-large; previously, the entire council was elected at-large. The four ward seats are up for election at the same time, and the three at-large seats and the mayoral seat are up for election together two years later. For three decades before 1962 and from 1996 to 2007, Camden's municipal elections were held on a non-partisan basis; since 2007, the elections have been partisan.
The Mayor of Camden is Democrat Francisco "Frank" Moran, whose term of office ends December 31, 2021. Members of the City Council are Council President Curtis Jenkins (D, 2021; at large), Vice President Marilyn Torres (D, 2023; Ward 3), Shaneka Boucher (D, 2023; Ward 1), Victor Carstarphen (D, 2023; Ward 2), Sheila Davis (D, 2021; at large), Angel Fuentes (D, 2021; at large) and Felisha Reyes-Morton (D, 2023; Ward 4).
In February 2019, Felisha Reyes-Morton was unanimously appointed to fill the Ward 4 seat expiring in December 2019 that became vacant following the resignation of Council Vice President Luis A. Lopez the previous month, after serving ten years in office.
In November 2018, Marilyn Torres was elected to serve the balance of the Ward 3 seat expiring in December 2019 that was vacated by Francisco Moran when he took office as mayor the previous January.
In February 2016, the City Council unanimously appointed Angel Fuentes to fill the at-large term ending in December 2017 that was vacated by Arthur Barclay when he took office in the New Jersey General Assembly in January 2016; Fuentes had served for 16 years on the city council before serving in the Assembly from 2010 to 2015. Fuentes was elected to another four-year term in November 2017.
Mayor Milton Milan was jailed for his connections to organized crime. On June 15, 2001, he was sentenced to serve seven years in prison on 14 counts of corruption, including accepting mob payoffs and concealing a $65,000 loan from a drug kingpin.
In 2018, the city had an average residential property tax bill of $1,710, the lowest in the county, compared to an average bill of $6,644 in Camden County and $8,767 statewide.
Federal, state and county representation
Camden is located in the 1st Congressional District and is part of New Jersey's 5th state legislative district.
Political corruption
Three Camden mayors have been jailed for corruption: Angelo Errichetti, Arnold Webster, and Milton Milan.
In 1981, Errichetti was convicted with three others for accepting a $50,000 bribe from FBI undercover agents in exchange for helping a non-existent Arab sheikh enter the United States. The FBI scheme was part of the Abscam operation. The 2013 film American Hustle is a fictionalized portrayal of this scheme.
In 1999, Webster, who was previously the superintendent of Camden City Public Schools, pleaded guilty to illegally paying himself $20,000 in school district funds after he became mayor.
In 2000, Milan was sentenced to more than six years in federal prison for accepting payoffs from associates of Philadelphia organized crime boss Ralph Natale, soliciting bribes and free home renovations from city vendors, skimming money from a political action committee, and laundering drug money.
The Courier-Post dubbed former State Senator Wayne R. Bryant, who represented the state's 5th Legislative District from 1995 to 2008, the "king of double dipping" for accepting no-show jobs in return for political benefits. In 2009, Bryant was sentenced to four years in federal prison for funneling $10.5 million to the University of Medicine and Dentistry of New Jersey (UMDNJ) in exchange for a no-show job and accepting fraudulent jobs to inflate his state pension and was assessed a fine of $25,000 and restitution to UMDNJ in excess of $110,000. In 2010, Bryant was charged with an additional 22 criminal counts of bribery and fraud, for taking $192,000 in false legal fees in exchange for backing redevelopment projects in Camden, Pennsauken Township and the New Jersey Meadowlands between 2004 and 2006.
Politics
As of November 6, 2018, there were a total of 42,264 registered voters in the city of Camden.
The current mayor is Frank Moran, who won the election in November 2017. Moran's predecessor was Dana Redd, who served two terms from 2010 to January 2018. Moran is a member of the Democratic Party, as all Camden mayors have been since 1935. The last Republican mayor of Camden was Frederick von Nieda, who sat in office for only a year.
As of March 23, 2011, there were a total of 43,893 registered voters in Camden, of which 17,403 (39.6%) were registered as Democrats, 885 (2.0%) were registered as Republicans and 25,601 (58.3%) were registered as Unaffiliated. There were four voters registered to other parties.
In the 2016 presidential election, Democrat Hillary Clinton received overwhelming support from the city of Camden. On May 11, 2016, Clinton held a rally at Camden County College. Much like prior presidential elections, Camden has heavily favored the Democratic candidate.
During his second term, Obama visited Camden in 2015, saying he wanted to "hold you up as a symbol of promise for the nation." "This city is on to something, no one is suggesting that the job is done," the president said. "It's still a work in progress." In the 2012 presidential election, Democrat Barack Obama sought reelection and was challenged by Mitt Romney, a former Massachusetts governor who is now a U.S. senator from Utah. The city overwhelmingly voted for Obama in the biggest Democratic landslide in Camden's history. In the 2008 presidential election, both tickets were open because George W. Bush's term limit was up. Seasoned politician and war hero John McCain won the 2008 Republican primary, while Barack Obama narrowly defeated former first lady Hillary Clinton in the contentious 2008 Democratic primary. McCain received the most votes for a Republican nominee from Camden in the 21st century.
In the 2012 presidential election, Democrat Barack Obama received 96.8% of the vote (22,254 cast), ahead of Republican Mitt Romney with 3.0% (683 votes), and other candidates with 0.2% (57 votes), among the 23,230 ballots cast by the city's 47,624 registered voters (236 ballots were spoiled), for a turnout of 48.8%. In the 2008 presidential election, Democrat Barack Obama received 91.1% of the vote (22,197 cast), ahead of Republican John McCain, who received around 5.0% (1,213 votes), with 24,374 ballots cast among the city's 46,654 registered voters, for a turnout of 52.2%. In the 2004 presidential election, Democrat John Kerry received 84.4% of the vote (15,914 ballots cast), outpolling Republican George W. Bush, who received around 12.6% (2,368 votes), with 18,858 ballots cast among the city's 37,765 registered voters, for a turnout percentage of 49.9.
In the 2013 gubernatorial election, Democrat Barbara Buono received 79.9% of the vote (6,680 cast), ahead of Republican Chris Christie with 18.8% (1,569 votes), and other candidates with 1.4% (116 votes), among the 9,796 ballots cast by the city's 48,241 registered voters (1,431 ballots were spoiled), for a turnout of 20.3%. In the 2009 gubernatorial election, Democrat Jon Corzine received 85.6% of the vote (8,700 ballots cast), ahead of both Republican Chris Christie with 5.9% (604 votes) and Independent Chris Daggett with 0.8% (81 votes), with 10,166 ballots cast among the city's 43,165 registered voters, yielding a 23.6% turnout.
Transportation
Roads and highways
The city's roadways are maintained by the municipality, Camden County, the New Jersey Department of Transportation and the Delaware River Port Authority.
Interstate 676 and U.S. Route 30 run through Camden to the Benjamin Franklin Bridge on the north side of the city. Interstate 76 passes through briefly and interchanges with Interstate 676.
Route 168 passes through briefly in the south, and County Routes 537, 543, 551 and 561 all travel through the center of the city.
Public transportation
NJ Transit's Walter Rand Transportation Center is located at Martin Luther King Boulevard and Broadway. In addition to being a hub for NJ Transit (NJT) buses in the Southern Division, Greyhound Lines, the PATCO Speedline and River Line make stops at the station.
The PATCO Speedline offers frequent train service to Philadelphia and the suburbs to the east in Camden County, with stations at City Hall, Broadway (Walter Rand Transportation Center) and Ferry Avenue. The line operates 24 hours a day.
Since its opening in 2004, NJ Transit's River Line has offered light rail service to communities along the Delaware River north of Camden, and terminates in Trenton. Camden stations are 36th Street, Walter Rand Transportation Center, Cooper Street-Rutgers University, Aquarium and Entertainment Center.
NJ Transit bus service is available to and from Philadelphia on the 313, 315, 317 and 318, and on the 400, 401, 402, 404, 406, 408, 409, 410, 412, 414, and 417 routes; Atlantic City is served by the 551 bus. Local service is offered on the 403, 405, 407, 413, 418, 419, 450, 451, 452, 453, and 457 lines.
Studies are being conducted to create the Camden-Philadelphia BRT, a bus rapid transit system, with a 2012 plan to develop routes between Winslow Township and Philadelphia with a stop at the Walter Rand Transportation Center.
The RiverLink Ferry provides seasonal service across the Delaware River to Penn's Landing in Philadelphia.
Environmental problems
Air and water pollution
Situated on the Delaware River waterfront, the city of Camden contains many pollution-causing facilities, such as a trash incinerator and a sewage plant. Despite the addition of new waste-water and trash treatment facilities in the 1970s and 1980s, pollution in the city remains a problem due to faulty waste disposal practices and outdated sewer systems. The open-air nature of the waste treatment plants causes the smell of sewage and other toxic fumes to permeate the air, which has encouraged local grassroots organizations to protest the development of these plants in Camden. The development of traffic-heavy highway systems between Philadelphia and South Jersey also contributed to the rise of air pollution in the area. Water contamination has been a problem in Camden for decades. In the 1970s, dangerous pollutants were found near the Delaware River at the Puchack Well Field, from which many Camden residents received their household water, decreasing property values in Camden and causing health problems among the city's residents. Materials contaminating the water included cancer-causing metals and chemicals, affecting as many as 50,000 people between the early 1970s and late 1990s, when the six Puchack wells were officially shut down and declared a Superfund site. Camden also contains 22 of New Jersey's 217 combined sewer overflow outfalls, or CSOs, down from 28 in 2013.
CCMUA
The Camden County Municipal Utilities Authority, or CCMUA, was established in the early 1970s to treat sewage waste in Camden County, by City Democratic chairman and director of public works Angelo Errichetti, who became the authority's executive director. Errichetti called for a primarily state- or federally funded sewage plant, which would have cost $14 million, and a region-wide collection of trash waste. The sewage plant was a necessity to meet the requirements of the federal Clean Water Act, as per the changes implemented to the act in 1972. James Joyce, chair of the county's Democratic Party at the time, had his own ambitions in regard to establishing a sewage authority that clashed with Errichetti's. While Errichetti formed his sewage authority through his own power, Joyce required the influence of the Camden County Board of Chosen Freeholders to form his. Errichetti and Joyce competed against each other to gain the cooperation of Camden's suburban communities, with Errichetti ultimately succeeding. Errichetti's political alliance with the county freeholders of Cherry Hill gave him an advantage, and Joyce was forced to disband his County Sewerage Authority.
Errichetti later replaced Joyce as county Democratic chairman, after the latter resigned due to bribery charges, and retained control of the CCMUA even after leaving his position as executive director in 1973 to run for mayor of Camden. The CCMUA originally planned for the sewage facilities in Camden to treat waste water through a primary and secondary process before having it deposited into the Delaware River; however, funding stagnated and byproducts from the plant began to accumulate, causing adverse environmental effects in Camden. Concerned about the harmful chemicals that were being emitted from the waste build-up, the CCMUA requested permission to dump five million gallons of waste into the Atlantic Ocean. Their request was denied and the CCMUA began searching for alternative ways to dispose of the sludge, which eventually led to the construction of an incinerator, as it was more cost effective than previously proposed methods. In 1975, the CCMUA purchased Camden's two sewage treatment plants for $11.3 million, the first payment consisting of $2.5 million and the final payment to be made by the end of 1978.
Contamination in Waterfront South
Camden's Waterfront South neighborhood, located in the southern part of the city between the Delaware River and Interstate 676, is home to two dangerously contaminated areas, Welsbach/General Gas Mantle and Martin Aaron, Inc., the former of which has been emanating low levels of gamma radiation since the early 20th century. Several industrial pollution sites, including the Camden County Sewage Plant, the County Municipal Waste Combustor, the world's largest licorice processing plant, chemical companies, auto shops, and a cement manufacturing facility, are present in the Waterfront South neighborhood, which covers less than one square mile. The neighborhood contains 20% of Camden's contaminated areas and over twice the average number of pollution-emitting facilities per New Jersey ZIP Code.
According to the Rutgers University Journal of Law and Urban Policy, African-American residents of Waterfront South have a greater chance of developing cancer than anywhere in the state of Pennsylvania, 90% higher for females and 70% higher for males. 61% of Waterfront South residents have reported respiratory difficulties, with 48% of residents experiencing chronic chest tightness. Residents of Waterfront South formed the South Camden Citizens in Action, or SCCA, in 1997 to combat the environmental and health problems imposed from the rising amount of pollution and the trash-to-steam facilities being implemented by the CCMUA. One such facility, the Covanta Camden Energy Recovery Center (formerly the Camden Resource Recovery Facility), is located on Morgan Street in the Waterfront South neighborhood and burns 350,000 tons of waste from every town in Camden County, aside from Gloucester Township. The waste is then converted into electricity and sold to utility companies that power thousands of homes.
On December 12, 2018, renovation of Phoenix Park in Waterfront South was completed. The renovation was done by the Camden County Municipal Utilities Authority as well as the Camden Stormwater Management and Resource Training Initiative. According to officials, the park will improve air quality and stormwater management. Additionally, the park features walking trails providing a view of the Delaware River. Due to the project's success, it was named one of the 10 most innovative uses of federal water infrastructure funding in the country by the U.S. Environmental Protection Agency and the Environmental Council of the United States.
Superfund sites
Identified by the EPA in 1980, the Welsbach/General Gas Mantle site contained soil and building materials contaminated with radioactive materials. The radiation stemmed from the companies' use of thorium, a radioactive element extracted from monazite ore, in the production of their gas mantles. In the late 19th century and early 20th century, the Welsbach Company was located in Gloucester City, which borders Camden, and was a major producer of gas mantles until gas lights were replaced by electric lights. The fabric of the Welsbach gas mantle was put into a solution consisting of 99% thorium nitrate and 1% cerium nitrate in distilled water, causing it to emit a white light. Operating from 1915 to 1940 in Camden, General Gas Mantle, or GGM, was a manufacturer of gas mantles and a competitor of Welsbach. Unlike Welsbach, General Gas Mantle used only a refined, commercial thorium solution to produce its gas mantles. Welsbach and General Gas Mantle went out of business in the 1940s and had no successors.
In 1981, the EPA began investigating the area where the companies once operated for radioactive materials. Five areas were identified as having abnormally high levels of gamma radiation, including the locations of both companies and three primarily residential areas. In 1993, a sixth area was identified. Radioactive materials were identified at 100 properties located near the companies' former facilities in Camden and Gloucester City, as well as the company locations themselves. In 1996, due to the levels of contamination in the areas, the Welsbach and General Gas Mantle site was added to the National Priorities List, which consists of areas in the United States that are or could become contaminated with dangerous substances. The EPA demolished the General Gas Mantle building in late 2000 and only one building remains at the former Welsbach site. Since it was declared a Superfund site, the EPA has removed over 350,000 tons of contaminated materials from the Welsbach/General Gas Mantle site.
The Martin Aaron, Inc. site operated as a steel drum recycling facility for thirty years, from 1968 to 1998, though industrial companies had made use of the site since the late 19th century, contaminating soil and groundwater in the surrounding area. The drums at the facility, containing residue of hazardous chemicals, were not correctly handled or disposed of, releasing substances such as arsenic and polychlorinated biphenyls into the groundwater and soil. Waste such as abandoned equipment and empty steel drums was removed from the site by the EPA and NJDEP, the latter of which initially tested the site for contamination in 1987. Like the Welsbach/General Gas Mantle site, the Martin Aaron, Inc. site was placed on the National Priorities List, in 1999.
Environmental justice
Residents of Camden have expressed discontent with the siting of pollution-causing facilities in their city. Father Michael Doyle, a pastor at Waterfront South's Sacred Heart Church, blamed the city's growing pollution and sewage problem for residents leaving Camden for the surrounding suburbs. Local groups protested through petitions, referendums, and other methods, among them Citizens Against Trash to Steam (CATS), established by Linda McHugh and Suzanne Marks. In 1999, the St. Lawrence Cement Company reached an agreement with the South Jersey Port Corporation and leased land to establish a plant in the Waterfront South neighborhood of Camden, motivated to operate on state land by a reduction in local taxes.
St. Lawrence received a backlash from both the residents of Camden and Camden's legal system, including a lawsuit that accused the DEP and St. Lawrence of violating the Civil Rights Act of 1964, due to the overwhelming majority of minorities living in Waterfront South and the already poor environmental situation in the neighborhood. The cement grinding facility, open year-round, processed approximately 850,000 tons of slag, a substance often used in the manufacturing of cement, and emitted harmful pollutants, such as dust particles, carbon monoxide, radioactive materials, and lead, among others. In addition, the diesel-fueled trucks used to transport the slag, making a total of 77,000 trips, produced an additional 100 tons of pollutants annually.
South Camden Citizens in Action v. New Jersey Department of Environmental Protection
In 2001, the SCCA filed a civil rights lawsuit against the NJDEP and the St. Lawrence Cement Company. Unlike other environmental justice cases, the lawsuit itself did not include specific accusations in regard to the environment, instead focusing on racial discrimination. The SCCA accused the NJDEP of discrimination after they issued air quality permits to St. Lawrence, which would have allowed the company to run a facility that violated Title VI of the Civil Rights Act of 1964. Title VI's role is to prevent agencies that receive federal funding from discriminating on the basis of race or nationality. Waterfront South, where the cement manufacturing company would operate, was a predominantly minority neighborhood that was already home to over 20% of Camden's dangerously contaminated sites.
In April 2001, the court, led by Judge Stephen Orlofsky, ruled in favor of the SCCA, stating that the NJDEP was in violation of Title VI, as it had not completed a full analysis of the area to judge how the environmental impact from the cement facility would affect the residents of Camden. This decision was challenged five days later with the ruling of the US Supreme Court case Alexander v. Sandoval, which stated that only the federal agency in question could enforce rules and regulations, not citizens themselves. Orlofsky stood by his initial decision on the case and enacted another ruling that would allow citizens to make use of Section 1983, a civil rights statute which gave support to those whose rights had been infringed upon by the state, in regard to Title VI.
The NJDEP and St. Lawrence went on to appeal both of Orlofsky's rulings and the Third Circuit Court of Appeals subsequently reversed Orlofsky's second decision. The appeals court ruled that Section 1983 could not be used to enforce a ruling regarding Title VI and that private action could not be taken by the citizens. The final ruling in the case was that, while the NJDEP and St. Lawrence did violate Title VI, the decision could not be enforced through Section 1983. The lawsuit delayed the opening of the St. Lawrence cement facility by two months, costing the company millions of dollars. In the years following the court case, members of the SCCA were able to raise awareness concerning environmental justice at higher levels than before; they were portrayed in a positive light by news coverage in major platforms such as The New York Times, Business Week, The National Law Journal, and The Philadelphia Inquirer, and garnered support from long-time civil rights activists and the NAACP. The SCCA has engaged in several national events since the conclusion of South Camden, such as a press conference at the U.S. Senate, the Second National People of Color Environmental Leadership Summit, and the U.S. Commission on Civil Rights environmental justice hearings, all of which dealt with the advocacy of environmental justice.
Fire department
Officially organized in 1869, the Camden Fire Department (CFD) is the oldest paid fire department in New Jersey and is among the oldest paid fire departments in the United States. In 1916, the CFD became the first in the United States to have an all-motorized fire apparatus fleet. Layoffs have forced the city to rely on assistance from suburban fire departments in surrounding communities when firefighters from all 10 fire companies are unavailable due to calls.
The Camden Fire Department currently operates out of five fire stations, organized into two battalions. Each battalion is commanded by a battalion chief, who in turn reports to a deputy chief. The CFD currently operates five engine companies, one squad (rescue-pumper), three ladder companies, and one rescue company, as well as several other special, support, and reserve units. The department's fireboat is docked on the Delaware River. The quarters of Squad 7, a rescue-pumper located at 1115 Kaighn Ave., have been closed for renovations, and Squad 7 is currently operating out of the Broadway station. Since 2010, the Camden Fire Department has suffered severe economic cutbacks, including company closures and staffing cuts.
Fire station locations and apparatus
Below is a list of all fire stations and company locations in the city of Camden, organized by battalion. The station on Kaighn Avenue is no longer usable as a fire station because its flooring is too weak, so Squad 7 has been relocated to the fire station at 1301 Broadway. The apparatus fleet consists of 5 engines, 1 squad (rescue-pumper), 1 rescue company, 1 haz-mat unit, 1 collapse rescue unit, 3 ladder companies, 1 fireboat, 1 air cascade unit, 1 Chief of Department, 3 deputy chiefs, 1 Chief Fire Marshal and 2 battalion chiefs' units. Each shift is commanded by two battalion chiefs and one deputy chief.
Waterfront
One of the most popular attractions in Camden is the city's waterfront, along the Delaware River. The waterfront is highlighted by its three main attractions, the USS New Jersey, the Waterfront Music Pavilion, and the Adventure Aquarium. The waterfront is also the headquarters for Catapult Learning, a provider of K−12 contracted instructional services to public and private schools in the United States.
The Adventure Aquarium was originally opened in 1992 as the New Jersey State Aquarium at Camden. In 2005, after extensive renovation, the aquarium was reopened under the name Adventure Aquarium. The aquarium was one of the original centerpieces in Camden's plans to revitalize the city.
The Susquehanna Bank Center (formerly known as the Tweeter Center) is a 25,000-seat open-air concert amphitheater opened in 1995 and renamed after a 2008 deal in which the bank would pay $10 million over 15 years for naming rights.
The USS New Jersey (BB-62) was a U.S. Navy battleship that was intermittently active between the years 1943 and 1991. After its retirement, the ship was turned into the Battleship New Jersey Museum and Memorial, opened in 2001 along the waterfront. The New Jersey saw action during World War II, the Korean War, the Vietnam War, and provided support off Lebanon in early 1983.
Other attractions at the Waterfront are the Wiggins Park Riverstage and Marina, One Port Center, The Victor Lofts, the Walt Whitman House, the Walt Whitman Cultural Arts Center, the Rutgers–Camden Center for the Arts and the Camden Children's Garden.
In June 2014, the Philadelphia 76ers announced that they would move their practice facility and home offices to the Camden Waterfront, adding 250 permanent jobs in the city and creating what CEO Scott O'Neil described as the "biggest and best training facility in the country", using $82 million in tax savings offered by the New Jersey Economic Development Authority.
The Waterfront is also served by two modes of public transportation. NJ Transit serves the Waterfront on its River Line, while people from Philadelphia can commute using the RiverLink Ferry, which connects the Waterfront with Old City Philadelphia.
Riverfront State Prison was a state penitentiary located near downtown Camden, north of the Benjamin Franklin Bridge, which opened in August 1985 after being constructed at a cost of $31 million. The prison had a design capacity of 631 inmates, but housed 1,020 in 2007 and 1,017 in 2008. The last prisoners were transferred in June 2009 to other locations and the prison was closed and subsequently demolished, with the site expected to be redeveloped by the State of New Jersey, the City of Camden, and private investors. In December 2012, the New Jersey Legislature approved the sale of the site, considered surplus property, to the New Jersey Economic Development Authority.
In September 2015, the Philadelphia-based real estate investment trust Liberty Property Trust announced its plans for a $1 billion project to revitalize Camden's Waterfront. The project aims not only to improve the infrastructure currently in place, but also to construct entirely new buildings, such as the new headquarters for American Water, a five-story, 222,376-square-foot office building. American Water's new headquarters on the Camden Waterfront opened in December 2018.
Other construction projects in the Liberty Property Trust $1 billion project include a Hilton Garden Inn to be opened on the Camden Waterfront in 2020, which will contain 180 rooms, a restaurant, and space for conferences. Another is the Camden Tower, an 18-story, 394,164-square-foot office building that will be the headquarters for the New Jersey-based companies Conner Strong & Buckelew, NFI and The Michaels Organization, planned to finish construction in the spring of 2019. Also included are apartments at 11 Cooper Street, which will house 156 units as well as retail space on the ground level. The construction of these apartments is planned to be completed by the spring of 2019.
In October 2018, Liberty Property Trust announced that it would be leaving the billion-dollar project behind and selling it to any interested buyer as a "strategic shift." It still plans to finish buildings on which construction has already made significant progress, such as the Camden Tower and the Hilton Garden Inn, but it does not wish to start any new office building projects, stating that it wants to focus on industrial space rather than office space. However, Liberty Property Trust is still looking to develop four parcels of land along the Delaware River able to hold 500,000 square feet of office space. One company that has made plans to take advantage of this is Elwyn, a nonprofit based in Delaware that assists those living with disabilities. In February 2019 Elwyn received approval for assistance from New Jersey's Grow NJ economic development program that will help cover the costs of the building. The office building would be built along the Delaware River, on one of the parcels previously owned by Liberty Property Trust, next to the Camden Tower, which is currently under construction.
Education
Public schools
Camden's public schools are operated by the Camden City School District. As of the 2017–18 school year, the district, comprising 20 schools, had an enrollment of 9,570 students and 705.0 classroom teachers (on an FTE basis), for a student–teacher ratio of 13.6:1. The district is one of 31 former Abbott districts statewide, which are now referred to as "SDA Districts" based on the requirement for the state to cover all costs for school building and renovation projects in these districts under the supervision of the New Jersey Schools Development Authority.
High schools in the district (with 2017–18 enrollment data from the National Center for Education Statistics [number of students; grade levels]) are:
Brimm Medical Arts High School (211; 9–12),
Camden Big Picture Learning Academy (249; 6–12),
Camden High School (423; 9–12),
Creative Arts Morgan Village Academy (345; 6–12),
Pride Academy (NA; 6–12) and
Woodrow Wilson High School (784; 9–12).
Charter and renaissance schools
In 2012, The Urban Hope Act was signed into law, allowing renaissance schools to open in Trenton, Newark, and Camden. The renaissance schools, run by charter companies, differed from charter schools, as they enrolled students based on the surrounding neighborhood, similar to the city school district. This makes renaissance schools a hybrid of charter and public schools. This is the act that allowed Knowledge Is Power Program (KIPP), Uncommon Schools, and Mastery Schools to open in the city.
Under the renaissance charter school proposal, the Henry L. Bonsall Family School became Uncommon Schools Camden Prep Mt. Ephraim Campus; East Camden Middle School, Francis X. McGraw Elementary School and Rafael Cordero Molina Elementary School became part of the Mastery charter network; and the J.G. Whittier Family School became part of the KIPP public charter schools as KIPP Cooper Norcross Academy.
Students were given the option to stay with the school under their transition or seek other alternatives.
In the 2013–2014 school year, Camden city proposed a budget of $72 million to allot to charter schools in the city. In previous years, Camden city charter schools have used $52 million and $66 million in the 2012–2013 and 2013–2014 school years, respectively.
On March 9, 2015, during the first year of the new Camden charter schools, enrollment figures raised concern. Mastery and Uncommon charter schools failed to meet enrollment projections for their first year of operation by 15% and 21%, respectively, according to the Education Law Center. In addition, the KIPP and Uncommon charter schools had enrolled students with disabilities and English language learners at levels far below the enrollment of these students in the Camden district. Because the enrollment data on the Mastery, Uncommon and KIPP charter chains came from the state-operated Camden district, which had said that Camden parents prefer charters over neighborhood public schools, questions were raised about whether the data were reliable. There was also concern that these charter schools were not serving students with special needs at a level comparable to district enrollment, raising the prospect of growing student segregation and isolation in Camden schools as these chains expand in the coming years.
In October 2016, Governor Chris Christie, Camden Mayor Dana L. Redd, Camden Public Schools Superintendent Paymon Rouhanifard, and state and local representatives announced a historic $133 million investment in a new Camden High School project. The new school is planned to be ready for student occupancy in 2021 and would serve grades 9 through 12. The existing school, with a history of more than 100 years, had needed endless repairs; the plan is to give students a 21st-century education.
Chris Christie stated, "This new, state-of-the-art school will honor the proud tradition of the Castle on the Hill, enrich our society and improve the lives of students and those around them."
As of 2019, there are 3,850 Camden students enrolled in one of the city's renaissance schools, and 4,350 Camden students enrolled in one of the city's charter schools. Combined, these students make up approximately 55% of the 15,000 students in Camden.
Charter schools
Camden's Promise Charter School
Environment Community Opportunity (ECO) Charter School
Freedom Prep Charter School
Hope Community Charter School
LEAP Academy University Charter School
Renaissance schools
Uncommon Schools Camden Prep
KIPP Cooper Norcross
Lanning Square Primary School
Lanning Square Middle School
Whittier Middle School
Mastery Schools of Camden
Cramer Hill Elementary
Molina Lower Elementary
Molina Upper Elementary
East Camden Middle
Mastery High School of Camden
McGraw Elementary
Private education
Holy Name School, Sacred Heart Grade School, St. Anthony of Padua School and St. Joseph Pro-Cathedral School are K-8 elementary schools operating under the auspices of the Roman Catholic Diocese of Camden. They operate as four of the five schools in the Catholic Partnership Schools, a post-parochial model of Urban Catholic Education. The Catholic Partnership Schools are committed to sustaining safe and nurturing schools that inspire and prepare students for rigorous, college preparatory secondary schools or vocations.
Higher education
The University District, adjacent to the downtown, is home to the following institutions:
Camden County College – one of three main campuses, the college first came to the city in 1969, and constructed a campus building in Camden in 1991.
Rowan University at Camden, satellite campus – the Camden campus began with a program for teacher preparation in 1969 and expanded with standard college courses the following year and a full-time day program in 1980.
Cooper Medical School of Rowan University
Rutgers University–Camden – the Camden campus, one of three main sites in the university system, began as South Jersey Law School and the College of South Jersey in the 1920s and was merged into Rutgers in 1950.
Camden College of Arts & Sciences
School of Business – Camden
Rutgers School of Law-Camden
University of Medicine and Dentistry of New Jersey (UMDNJ)
Affiliated with Cooper University Hospital
Coriell Institute for Medical Research
Affiliated with Cooper University Hospital
Affiliated with Rowan University
Affiliated with University of Medicine and Dentistry of New Jersey
Libraries
The city was once home to two Carnegie libraries, the Main Building and the Cooper Library in Johnson Park. The city's once extensive library system, beleaguered by financial difficulties, threatened to close at the end of 2010, but was incorporated into the county system. The main branch closed in February 2011, and was later reopened by the county in the bottom floor of the Paul Robeson Library at Rutgers University.
Camden also has three academic libraries: the Paul Robeson Library at Rutgers University–Camden serves Rutgers undergraduate and graduate students, as well as students from the Camden campuses of Camden County College and Rowan University; Rutgers Law School has a law library; and Cooper Medical School of Rowan University has a medical library.
Sports
Baseball
The Camden Riversharks and Campbell's Field
The Camden Riversharks were an American professional baseball team based in Camden. They were a member of the Liberty Division of the Atlantic League of Professional Baseball. From the 2001 season to 2015, the Riversharks played their home games at Campbell's Field, which was situated next to the Benjamin Franklin Bridge; due to its location on the Camden Waterfront, the field offered a clear view of the Philadelphia skyline. The "Riversharks" name refers to the location of Camden on the Delaware River. The Riversharks were the first professional baseball team in Camden, New Jersey, since the 1904 season.
When Rutgers-Camden owned the team in 2001, the Riversharks logo was a navy blue ring with the word Camden inside; underneath was a sharp-toothed shark eating the words "River Sharks." The Riversharks' last logo, introduced in 2005 with a new ownership group, consisted of a shark biting a baseball bat superimposed over a depiction of the Benjamin Franklin Bridge.
On October 21, 2015, the Camden Riversharks announced they would cease operations immediately due to the inability to reach an agreement on lease terms with the owner of Campbell's Field, the Camden County Improvement Authority. The Atlantic League of Professional Baseball, in which the Riversharks played, announced that the New Britain Rock Cats had joined the league and that Camden was one of two teams which could potentially replace the New England team. Since negotiations between the Riversharks and the Camden County Improvement Authority failed, the Riversharks ended their 15 years playing at Campbell's Field. The team folded after losing its lease, a development that followed the purchase of the financially troubled stadium by the CCIA for $3.5 million.
Campbell's Field
Campbell's Field opened alongside the Benjamin Franklin Bridge in May 2001 after two years of construction. It was a 6,700-seat baseball park in Camden, New Jersey, that hosted its first regular-season baseball game on May 11, 2001. The riverfront project was a joint venture backed by the state, Rutgers University, the Cooper's Ferry Development Association and the Delaware River Port Authority. The construction of the ballpark was a $24 million project that also included $7 million in environmental remediation costs before building. Before the construction of Campbell's Field, the plot of land was vacant and had historically housed industrial buildings and businesses such as Campbell Soup Company Plant No. 2 and the Pennsylvania & Reading Rail Road's Linden Street Freight Station. The park, located at Delaware and Penn Avenues on the Camden Waterfront, featured a view of the Benjamin Franklin Bridge and a clear view of the Philadelphia skyline.
Campbell's Field was bought in August 2015 by the Camden County Improvement Authority (CCIA). In October 2015, after failing to reach an agreement with CCIA, the stadium's primary professional tenant, the Camden Riversharks, ceased operations.
After the loss of the Riversharks lease in 2015, the stadium had for the most part been unused, its only activity being Rutgers University-Camden's home baseball games. In September 2018, a contractor was awarded the $1.1 million task of demolishing the stadium, which had cost the state and port authority around $35 million in property loans and leases. Demolition was scheduled for December 2018 and would likely continue into the following spring. The site, which has hosted multiple different buildings and complexes over its history, is planned to become the host of future development projects jointly owned by Rutgers University and the city of Camden. As of spring 2019, the Rutgers baseball team will play the entirety of its season on the road, following the demolition of its home stadium. An investment totaling $15 million, planned to be split evenly between Rutgers and the city of Camden, will reportedly develop the area into a recreational complex for the city, as well as accommodations for the university's NCAA Division III sports teams.
Basketball
Philadelphia 76ers training facility
A training facility for Philadelphia's NBA team, the 76ers, had been planned for different areas, with the Camden waterfront being one of the potential sites. The team had also deliberated building on the local Camden Navy Yard, including receiving architect mock-ups of a 55,000-square-foot facility for an estimated $20–25 million, but these plans did not come to fruition. Eventually, an $82 million grant was approved by the New Jersey Economic Development Authority to begin construction of the training facility in Camden, and the project was scheduled to break ground in October 2014. Based on contingent hiring, the grant was to be paid out over 10 years, with the facility scheduled to host practices by 2016. The grant was somewhat controversial in that it saved the 76ers organization from paying any property taxes or fees that would be accrued by the building over its first decade. Vocal opponents of the facility claim that the site has now joined a list of large companies or industries that are invited to Camden with monetary incentives, but give little or nothing back to the community itself.
The facility was to be divided into both player and coach accommodations, as well as office facilities for the rest of the organization. 66,230 square feet were devoted solely to the two full-sized basketball courts and player training facilities, while the remainder of the 125,000-square-foot complex was reserved for offices and operations. While the 76ers used to share their practice facilities with the Philadelphia College of Osteopathic Medicine, they now claim one of the largest and most advanced facilities in the NBA. The training facilities include the two full-size courts, as well as a weight room, full hydrotherapy room, Gatorade Fuel Bar, full players-only restaurant and personal chef, medical facilities, film room, and full locker room. The complex will eventually provide 250 jobs, including team staff and marketing employees.
Crime
Camden has a national reputation for its violent crime rates, which once ranked as the highest in the country, although recent years have seen a significant drop in violent crime, with 2017 seeing the lowest number of homicides in three decades.
Real estate analytics company NeighborhoodScout has named it within the top 5 "most dangerous" cities in the United States every year since it has compiled the list. Several times it has been ranked 1st on the list, most recently in 2015. In 2012, Camden set a new record by tallying 69 homicides, making for one of the highest murder rates ever recorded in an American city. In addition, since the FBI began uniformly reporting crime data in the mid 1980s, Camden has never seen its yearly violent crime rate drop below 2 per 100 residents. In comparison, the national rate is about 0.37 per 100 residents.
Morgan Quitno has ranked Camden as one of the top ten most dangerous cities in the United States since 1998, when they first included cities with populations less than 100,000. Camden was ranked as the third-most dangerous city in 2002, and the most dangerous city overall in 2004 and 2005. It improved to the fifth spot for the 2006 and 2007 rankings but rose to number two in 2008 and to the most dangerous spot in 2009. Morgan Quitno based its rankings on crime statistics reported to the Federal Bureau of Investigation in six categories: murder, rape, robbery, aggravated assault, burglary, and auto theft. In 2011 in The Nation, journalist Chris Hedges described Camden as "the physical refuse of postindustrial America", plagued with homelessness, drug trafficking, prostitution, robbery, looting, constant violence, and an overwhelmed police force (which in 2011 lost nearly half of its officers to budget-related layoffs).
On October 29, 2012, the FBI announced Camden was ranked first in violent crime per capita of cities with over 50,000 residents.
There were 23 homicides in Camden in 2017, the lowest since 1987 and almost half as many as the 44 murders the previous year. Both homicides and non-fatal shootings have declined sharply since 2012, when there were a record 67 homicides in the city. In 2020 there were again 23 homicides reported.
Law enforcement
In 2005, the Camden Police Department was operated by the state. In 2011, it was announced that a new county police department would be formed.
For two years, Camden experienced its lowest homicide rate since 2008. Camden also began reorganizing its police department that same year. In 2011, Camden's budget was $167 million with $55 million allotted for police spending, but the police force still experienced a budgetary shortfall when state aid fell through. Camden was rated No. 5 nationwide for homicides, with approximately 87 murders per 100,000 residents in 2012. At its worst point, Camden's murder rate was six times that of Philadelphia, across the Delaware River. The city added crime-fighting measures such as surveillance cameras, better street lighting, and curfews for children, but the number of murders rose again. As a last resort, officers were authorized to use only handguns and handcuffs.
Robberies, property crimes, nonfatal shooting incidents, violent crimes, and aggravated assaults have declined since 2012. In November 2012, Camden began the process of terminating 273 officers and later hiring 400 new ones from the roughly 2,000 applicants who had already submitted letters of interest to the county, giving a fresh start to a larger, non-unionized force to safeguard the nation's poorest city. The city's officers rejected a contract proposal from the county that would have allowed all of Camden's approximately 260 police officers to move to the Camden County Police Metro Division, with only 49% of them eligible to be rehired once the 141-year-old department was disbanded.
Although annual homicides had averaged 48 since 2008, in April 2013 the city reported 57 homicides in a population of 77,000, compared to 67 homicides in 2012. In mid-March 2013, Camden residents would have noticed the first changes, as the first group of new officers was hired and began eight weeks of field training on Camden's streets. On May 1, 2013, the Camden City Police Department was disbanded, a union contract having made it financially impossible to keep officers on the street. While the existing officers were still present, the Camden County Police Department brought in 25 new officers to train in the neighborhoods, in the hope of regaining the community's trust. The new police force had lower salaries and fewer benefits than officers had received from the city. With the force reorganized in 2013, the number of officers on the streets increased and their presence spread throughout Camden. The new force began patrolling in tandem, speaking with residents, and driving patrol cars. The Camden County Police Department hosted several Meet Your Officers events to further engage with residents.
In 2018, the Camden County Police Department reported that violent crime had dropped 18%, led by a 21% decline in aggravated assaults; nonviolent crimes fell by 12%, arsons by 29%, burglaries by 21%, and nonfatal "shooting hit incidents" by 15%. In 2017 there were 23 homicides reported, which was a 30-year low. In 2018, 2019, and 2020 there were 22, 24, and 23 homicides respectively.
Though Camden officers are equipped with GPS tracking devices and body cameras, the effectiveness and necessity of body cameras remains controversial. Following the police force's reorganization, many Camden residents have been pulled over or issued tickets for minor violations, such as having tinted windows or lacking audible warning devices on their bikes. Police say the city's most heinous offenders often commit minor offenses as well: armed robbery suspects frequently drive cars with tinted windows, and drug dealers deploy lookouts on bikes. Police Chief J. Scott Thomson asserted, "We are going to leverage every legal option that we have to deter their criminal activity."
A CNN report proposed that Camden might be a model for what police abolition or "defunding the police" could look like. The report noted that Camden still had a police force, but it was being administered by a different body and had changed some of its procedures and policies. A report in The Morning Call noted that the county police department, which is distinct from the county sheriff's office and operates solely in Camden, had a budget of $68.5 million in 2020, compared to the city department's $55 million in 2011 prior to its dissolution, and that police funding in Camden was higher on a per capita basis than that of other NJ cities with city-run departments. There are 380 officers in the county-run department, versus 370 in the dissolved city force.
Points of interest
Adventure Aquarium – Originally opened in 1992, it re-opened in its current form in May 2005 featuring about 8,000 animals living in varied forms of semi-aquatic, freshwater, and marine habitats.
Waterfront Music Pavilion – An outdoor amphitheater/indoor theater complex with a seating capacity of 25,000. Formerly known as the Susquehanna Bank Center.
Battleship New Jersey Museum and Memorial – Opened in October 2001, providing access to the battleship USS New Jersey that had been towed to the Camden area for restoration in 1999.
Harleigh Cemetery – Established in 1885, the cemetery is the burial site of Walt Whitman, several Congressmen, and many other South Jersey notables.
Walt Whitman House
National Register of Historic Places listings in Camden County, New Jersey
In popular culture
The fictional Camden mayor Carmine Polito in the 2013 film American Hustle is loosely based on 1970s Camden mayor Angelo Errichetti.
The 1995 film 12 Monkeys contains scenes on Camden's Admiral Wilson Boulevard.
Camden is the main location of the 2016 action film Fight Valley (more precisely, the fictional neighborhood of Fight Valley), starring Susie Celek and the MMA fighters Miesha Tate, Holly Holm and Cris Cyborg.
Notable people
Community members
Mary Ellen Avery (1927–2011), pediatrician whose research led to development of successful treatment for Infant respiratory distress syndrome.
The Camden 28, members of the Catholic left and other religious groups who broke into the draft board offices in Camden in opposition to the Vietnam War.
Artists and authors
Graham Alexander (born 1989), singer-songwriter, entertainer, and entrepreneur known best for his solo music career and for his roles in the Broadway shows Rain: A Tribute to the Beatles and Let It Be and as the founder of a new incarnation of the Victor Talking Machine Co.
Christine Andreas (born 1951), Broadway actress and singer.
Vernon Howe Bailey (1874–1953), artist.
Butch Ballard (1918–2011), jazz drummer who performed with Louis Armstrong, Count Basie and Duke Ellington.
Paul Baloche (born 1962), Christian music artist, worship leader, and singer-songwriter.
Carla L. Benson, vocalist best known for her recorded background vocals.
Cindy Birdsong (born 1939), singer who became famous as a member of The Supremes in 1967, when she replaced co-founding member Florence Ballard.
Nelson Boyd (born 1928), jazz bassist.
Stephen Decatur Button (1813–1897), architect; designer of schools, churches and Camden's Old City Hall (1874–75, demolished 1930).
James Cardwell (1921–1954), actor best known for his debut appearance in the film The Fighting Sullivans.
Betty Cavanna (1909–2001), author of popular teen romance novels, mysteries, and children's books.
Vedra Chandler (born 1980), singer and dancer.
David Aaron Clark (1960–2009), author, musician, pornographic actor, and pornographic video director.
Andrew Clements (1949–2019), writer of children's books, known for his debut novel Frindle
Russ Columbo (1908–1934), baritone, songwriter, violinist and actor known for his romantic ballads, such as "Prisoner of Love".
Jimmy Conlin (1884–1962), character actor who appeared in almost 150 films in his 32-year career.
Alex Da Corte (born 1980), visual artist.
Buddy DeFranco (1923–2014), jazz clarinetist.
Wayne Dockery (1941–2018), jazz double bassist.
Nick Douglas (born 1967), musician, best known for being the bass player of Doro Pesch's band.
Lola Falana (born 1942), singer and dancer.
Margaret Giannini (born 1921), physician and specialist in assistive technology and rehabilitation, who was the first director of the National Institute of Disability Rehabilitation Research.
Heather Henderson (born 1973), singer, model, podcaster, actress and Dance Party USA performer
Richard "Groove" Holmes (1931–1991), jazz organist.
Leon Huff (born 1942), songwriter and record producer.
Barbara Ingram (1947–1994), R&B background singer.
Chas. Floyd Johnson (born 1941), television producer, actor and activist, known for The Rockford Files (1975–1980), Magnum, P.I. (1982–1988) and Red Tails (2012).
Jaryd Jones-Smith (born 1995), American football offensive tackle for the Las Vegas Raiders of the NFL.
Edward Lewis (1919–2019), film producer and writer, known for the 1960 film Spartacus and for his collaborations with John Frankenheimer, producing or executive producing nine films together.
Eric Lewis (ELEW) (born 1973), pianist.
Michael Lisicky (born 1964), non-fiction writer and oboist with the Baltimore Symphony Orchestra.
Ann Pennington (1893–1971), actress, dancer and singer who starred on Broadway in the 1910s and 1920s, notably in the Ziegfeld Follies and George White's Scandals.
Jim Perry (1933–2015), television game show host, singer, announcer and performer in the 1970s and 1980s.
Ronny J (born 1992), record producer, rapper and singer.
Tasha Smith (born 1969), actress, director and producer who began her career in a starring role on the NBC comedy series Boston Common.
Anna Sosenko (1909–2000), songwriter and manager who achieved great popularity in the 1930s.
Richard Sterban (born 1943), member of the Oak Ridge Boys.
Mickalene Thomas (born 1970), artist.
Frank Tiberi (born 1928), leader of the Woody Herman Orchestra.
Tye Tribbett (born 1976), gospel music singer, songwriter, keyboardist and choir director.
Julia Udine (born 1993), singer and actress best known for playing the role of Christine Daaé in The Phantom of the Opera on Broadway and on tour.
Jack Vees (born 1955), composer and bassist.
Nick Virgilio (1928–1989), haiku poet.
Crystal Waters (born 1967), house and dance music singer and songwriter, best known for her 1990s dance hits "Gypsy Woman" and "100% Pure Love."
Walt Whitman (1819–1892), essayist, journalist and poet.
Buster Williams (born 1942), jazz bassist.
Politicians
Rob Andrews (born 1957), U.S. representative for New Jersey's 1st congressional district, served 1990–2014.
David Baird Jr. (1881–1955), U.S. Senator from 1929 to 1930, unsuccessful Republican nominee for governor in 1931.
David Baird Sr. (1839–1927), United States Senator from New Jersey.
Arthur Barclay (born 1982), politician who served on the Camden City Council for two years and has represented the 5th Legislative District in the New Jersey General Assembly since 2016.
William J. Browning (1850–1920), represented New Jersey's 1st congressional district in U.S. House of Representatives, 1911–1920.
William T. Cahill (1912–1996), politician who served six terms in the U.S. House of Representatives (1958–1970) and as Governor of New Jersey (1971–1975).
Bonnie Watson Coleman (born 1945), politician who has served as the U.S. representative for New Jersey's 12th congressional district since 2015.
Mary Keating Croce (1928–2016), politician who served in the New Jersey General Assembly for three two-year terms, from 1974 to 1980, before serving as the Chairwoman of the New Jersey State Parole Board in the 1990s.
Lawrence Curry (1936–2018), educator and politician who served in the Pennsylvania House of Representatives from 1993 to 2012, was born in Camden.
James Dellet (1788–1848), politician and a member of the United States House of Representatives from Alabama.
Angel Fuentes (born 1961), former Assemblyman who has served as President of the Camden city council.
Carmen M. Garcia, former Chief judge of Municipal Court in Trenton, New Jersey.
John J. Horn (1917–1999), labor leader and politician who served in both houses of the New Jersey Legislature before being nominated to serve as commissioner of the New Jersey Department of Labor and Industry.
Robert S. MacAlister (1897–1957), Los Angeles City Council member, 1934–39.
Richard Mroz, President of the New Jersey Board of Public Utilities.
Donald Norcross (born 1958), U.S. Congressman representing New Jersey's 1st congressional district.
Francis F. Patterson Jr. (1867–1935), represented New Jersey's 1st congressional district in U.S. House of Representatives, 1920–1927.
William T. Read (1878–1954), lawyer, President of the New Jersey Senate, and Treasurer of New Jersey
William Spearman (born 1958), politician who has represented the 5th Legislative District in the New Jersey General Assembly since 2018.
John F. Starr (1818–1904), represented New Jersey's 1st congressional district in U.S. House of Representatives, 1863–1867.
Athletes
Max Alexander (born 1981), boxer who was participant in ESPN reality series The Contender 3.
Rashad Baker (born 1982), professional football safety for Buffalo Bills, Minnesota Vikings, New England Patriots, Philadelphia Eagles and Oakland Raiders.
Martin V. Bergen (1872–1941), lawyer, college football coach.
Art Best (1953–2014), football running back who played three seasons in the National Football League with the Chicago Bears and New York Giants.
Audrey Bleiler (1933–1975), played in All-American Girls Professional Baseball League for 1951–1952 South Bend Blue Sox champion teams.
Fran Brown (born 1982), co-defensive coordinator and assistant head coach of the Temple Owls football.
Jordan Burroughs (born 1988), Olympic champion in freestyle wrestling who won Gold at the London Olympics in 2012.
Sean Chandler (born 1996), safety for the New York Giants of the National Football League.
Frank Chapot (1932–2016), Olympic silver medalist equestrian.
James A. Corea (1937–2001), radio personality and specialist in nutrition, rehabilitation and sports medicine.
Donovin Darius (born 1975), professional football player for Jacksonville Jaguars.
Rachel Dawson (born 1985), field hockey midfielder.
Rawly Eastwick (born 1950), Major League Baseball pitcher who won two games in 1975 World Series.
Shaun T. Fitness (born 1978), motivational speaker, fitness trainer and choreographer best known for his home fitness programs T25, Insanity and Hip-Hop Abs.
Jamaal Green (born 1980), American football defensive end who played in the NFL for the Philadelphia Eagles, Chicago Bears, and the Washington Redskins.
Harry Higgs (born 1991), professional golfer.
Andy Hinson (born c. 1931), retired American football head coach of the Bethune–Cookman University Wildcats football team from 1976 to 1978 and of the Cheyney University of Pennsylvania Wolves from 1979 to 1984.
George Hegamin (born 1973), offensive lineman who played for NFL's Dallas Cowboys, Philadelphia Eagles and Tampa Bay Buccaneers.
Kenny Jackson (born 1962), former wide receiver for the Philadelphia Eagles and co-owner of Kenny's Korner Deli.
Sig Jakucki (1909–79), former Major League pitcher for the St. Louis Browns, whose victory over the New York Yankees in the final game of the 1944 season gave the Browns their only pennant.
Mike Moriarty (born 1974), former Major League infielder for the Baltimore Orioles.
Ray Narleski (1928–2012), baseball player with Cleveland Indians and Detroit Tigers.
Harvey Pollack (1922–2015), director of statistical information for the Philadelphia 76ers, who at the time of his death was the only person still working for the NBA since its inaugural 1946–47 season.
Dwight Muhammad Qawi (born 1953), boxing world light-heavyweight and cruiserweight champion, International Boxing Hall of Famer known as the "Camden Buzzsaw."
Haason Reddick (born 1994), linebacker for the Arizona Cardinals of the National Football League.
Buddy Rogers (1921–1992), professional wrestler.
Mike Rozier (born 1961), collegiate and professional football running back who won Heisman Trophy in 1983.
George Savitsky (1924–2012), offensive tackle who played in the National Football League for the Philadelphia Eagles.
Art Still (born 1955), collegiate and professional football defensive end and cousin to Devon Still
Devon Still (born 1989), collegiate and professional football defensive end
Billy Thompson (born 1963), college and professional basketball player who played for the Los Angeles Lakers and Miami Heat.
Sheena Tosta (born 1982), hurdler, Olympic silver medalist 2008.
Frank Townsend (1933–1965), professional wrestler and musician.
Dajuan Wagner (born 1983), professional basketball player for Cleveland Cavaliers, 2002–2005, and Polish team Prokom Trefl Sopot.
Jersey Joe Walcott (1914–1994), boxing world heavyweight champion, International Boxing Hall of Famer.
Other
Joe Angelo (1896–1978), U.S. Army veteran of World War I and recipient of the Distinguished Service Cross.
U. E. Baughman (1905–1978), head of United States Secret Service from 1948 to 1961.
Boston Corbett (1832–1894), Union Army soldier who killed John Wilkes Booth.
Richard Hollingshead (1900–1975), inventor of the drive-in theater.
Aaron McCargo Jr. (born 1971), chef and television personality who hosts Big Daddy's House, a cooking show on Food Network.
Lucy Taxis Shoe Meritt (1906–2003), classical archaeologist and a scholar of Greek architectural ornamentation and mouldings.
Thomas J. Osler (born c. 1940), mathematician, former national champion distance runner and author.
Jim Perry (1933–2015), game show host and television personality.
Tommy Roberts (born 1928), radio and TV broadcaster who launched simulcast in 1984, a television feed of horse races to racetracks, casinos and off-track betting facilities, enabling gamblers to watch and bet on live racing from all over the world.
Howard Unruh (1921–2009), 1949 mass murderer.
Richard Valeriani (1932–2018), former White House correspondent and diplomatic correspondent with NBC News in the 1960s and 1970s.
John P. Van Leer (1825-1862), Union Army officer
Mary Schenck Woolman (1860–1940), pioneer in vocational education for women.
Phil Zimmermann (born 1954), programmer who developed Pretty Good Privacy (PGP), a type of data encryption.
References
External links
Camden County Historical Society
Invincible Cities: A Visual Encyclopedia of the American Ghetto, documentary photography of Camden by Camilo José Vergara and Rutgers University
1626 establishments in the Dutch Empire
1626 establishments in North America
1828 establishments in New Jersey
Cities in Camden County, New Jersey
County seats in New Jersey
Faulkner Act (mayor–council)
New Jersey Urban Enterprise Zones
Populated places established in 1626
Populated places established in 1828
Port cities and towns in New Jersey
Urban decay in the United States
Establishments in New Netherland
New Jersey populated places on the Delaware River |
102786 | https://en.wikipedia.org/wiki/MAME | MAME | MAME (originally an acronym of Multiple Arcade Machine Emulator) is a free and open-source emulator designed to recreate the hardware of arcade game systems in software on modern personal computers and other platforms. Its intention is to preserve gaming history by preventing vintage games from being lost or forgotten. It does this by emulating the inner workings of the emulated arcade machines; the ability to actually play the games is considered "a nice side effect". Joystiq has listed MAME as an application that every Windows and Mac gamer should have.
The first public MAME release was by Nicola Salmoria on February 5, 1997. It now supports over 7,000 unique games and 10,000 actual ROM image sets, though not all of the games are playable. MESS, an emulator for many video game consoles and computer systems, based on the MAME core, was integrated into MAME in 2015.
History and overview
The MAME project was started by Italian programmer Nicola Salmoria. It began as a project called Multi-Pac, intended to preserve games in the Pac-Man family, but the name was changed as more games were added to its framework. The first MAME version was released in 1996. In April 1997, Salmoria stepped down for his national service commitments, handing stewardship of the project to fellow Italian Mirko Buffoni for half a year. In May 2003, David Haywood took over as project coordinator; and from April 2005 to April 2011, the project was coordinated by Aaron Giles; then Angelo Salese stepped in as the coordinator; and in 2012, Miodrag Milanovic took over. The project is supported by hundreds of developers around the world and thousands of outside contributors.
At first, MAME was developed exclusively for MS-DOS, but was soon ported to Unix-like systems (X/MAME), Macintosh (MacMAME and later MAME OS X) and Windows (MAME32). Since 24 May 2001, with version 0.37b15, MAME's main development has occurred on the Windows platform, and most other platforms are supported through the SDLMAME project, which was integrated into the main development source tree in 2006. MAME has also been ported to other computers, game consoles, mobile phones and PDAs, and at one point even to digital cameras. In 2012, Google ported MAME to Native Client, which allows MAME to run inside Chrome.
Major releases of MAME occur approximately once a month. Windows executables in both 32-bit and 64-bit versions are released on the development team's official website, along with the complete source code. Smaller, incremental "u" (for update) releases were released weekly (until version 0.149u1) as source diffs against the most recent major version, to keep code in synchronization among developers. MAME's source code is developed on a public GitHub repository, allowing those with the required expertise and tools to build the most up-to-date version and contribute enhancements as pull requests. Historical version numbers 0.32, and 0.38 through 0.52 inclusive, do not exist; the former was skipped due to the similar naming of the GUI-equipped MAME32 variant (which has since been renamed MAMEUI due to the move to 64-bit builds), while the latter numbers were skipped due to the numerous releases in the 0.37 beta cycle (these version numbers have since been marked next to their equivalent 0.37 beta releases on the official MAMEdev website).
MAME's architecture has been extensively improved over the years. Support for both raster and vector displays, multiple CPUs, and sound chips were added in the project's first six months. A flexible timer system to coordinate synchronization between multiple emulated CPU cores was implemented, and ROM images started to be loaded according to their CRC32 hash in the ZIP files they were stored in. MAME has pioneered the reverse engineering of many undocumented system architectures, various CPUs (such as the M6809-derivative custom Konami CPU with new instructions) and sound chips (for example, Yamaha FM sound chips). MAME developers have been instrumental in reverse engineering many proprietary encryption algorithms utilized in arcade games, including Neo Geo, CP System II and CP System III.
MAME's popularity has gone mainstream, with enthusiasts building their own arcade game cabinets to relive the old games, and even with some companies producing illegal MAME derivatives to be installed in arcades. Cabinets can be built either from scratch or by taking apart and modifying an original arcade game cabinet. Cabinets inspired by classic games can also be purchased and assembled (with MAME optionally preinstalled).
Although MAME contains a rudimentary user interface, the use of MAME in arcade game cabinets and home theaters necessitates special launcher applications called front ends with more advanced features. They provide varying degrees of customization, allowing one to see images of games' cabinets, histories, playing tips, and even video of the game's play or attract mode.
The information within MAME is free for reuse, and companies have been known to utilize MAME when recreating their old classics on modern systems. Some have even hired MAME developers to create emulators for their old properties. An example is the Taito Legends pack, with ROMs readable on select versions of MAME.
On May 27, 2015 (0.162), the games console and computer system emulator MESS was integrated with MAME (so the MESS User Manual is still the most important usage instruction for the non-arcade parts of MAME). Since 2012, MAME has been maintained by former MESS project leader Miodrag Milanović.
In May 2015, it was announced that MAME's developers planned to re-license the software under a more common free and open-source license, away from the original MAME license. MAME developer Miodrag Milanovic explained that the change was to draw more developer interest, allow game manufacturers to distribute MAME to emulate their own games, and to make the software "a learning tool for developers working on development boards". The transition of MAME's licensing to BSD/GPL was completed in March 2016. Most of MAME's source code (90%+) is now available under the BSD-3-Clause license, and the complete project is under the GPL-2.0-or-later license.
On Feb 24, 2016 (0.171), MAME embedded the MEWUI front-end (and its developer joined the team), providing MAME with a flexible and more full-featured UI.
On Dec 30, 2020, exA-Arcadia, the Western copyright holders of the games Akai Katana and DoDonPachi SaiDaiOuJou filed a cease and desist notice to the MAME developers over those games being included in the emulator. MAME complied with the request, making both unplayable on the emulator outside of command line, as of version 0.239.
Design
The MAME core coordinates the emulation of several elements at the same time. These elements replicate the behavior of the hardware present in the original arcade machines. MAME can emulate many different central processing units (CPUs) and associated hardware. These elements are virtualized so MAME acts as a software layer between the original program of the game, and the platform MAME runs on. MAME supports arbitrary screen resolutions, refresh rates and display configurations. Multiple emulated monitors, as required by for example Darius, are supported as well.
Individual arcade systems are specified by drivers which take the form of C preprocessor macros. These drivers specify the individual components to be emulated and how they communicate with each other. While MAME was originally written in C, the need for object oriented programming caused the development team to begin to compile all code as C++ for MAME 0.136, taking advantage of additional features of that language in the process.
Although a great majority of the CPU emulation cores are interpretive, MAME also supports dynamic recompilation through an intermediate language called the Universal Machine Language (UML) to increase the emulation speed. Back-end targets supported are x86 and x64. A C back end is also available to further aid verification of the correctness. CPUs emulated in this manner are SH-2, MIPS R3000 and PowerPC.
Game data
The original program code, graphics and sound data need to be present so that the game can be emulated. In most arcade machines, the data is stored in read-only memory chips (ROMs), although other devices such as cassette tapes, floppy disks, hard disks, laserdiscs, and compact discs are also used. The contents of most of these devices can be copied to computer files, in a process called "dumping". The resulting files are often generically called ROM images or ROMs regardless of the kind of storage they came from. A game usually consists of multiple ROM and PAL images; these are collectively stored inside a single ZIP file, constituting a ROM set. In addition to the "parent" ROM set (usually chosen as the most recent "World" version of the game), games may have "clone" ROM sets with different program code, different language text intended for different markets etc. For example, Street Fighter II Turbo is considered a variant of Street Fighter II Champion Edition. System boards like the Neo Geo that have ROMs shared between multiple games require the ROMs to be stored in "BIOS" ROM sets and named appropriately.
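The integrity checking implied by storing ROM images in ZIP files with known checksums can be illustrated with a short, self-contained sketch. The following Python snippet checks files inside a ZIP archive against expected sizes and CRC32 values; the file names and checksum values are hypothetical placeholders, and the code illustrates the general idea rather than MAME's actual ROM-loading logic.

import zipfile
import zlib

# Hypothetical expected contents of a ROM set: file name -> (size in bytes, CRC32).
EXPECTED = {
    "prog.bin": (65536, 0x1A2B3C4D),
    "gfx.bin": (131072, 0x0BADF00D),
}

def verify_romset(path):
    # Report whether each expected ROM image in the ZIP matches its size and CRC32.
    with zipfile.ZipFile(path) as zf:
        names = set(zf.namelist())
        for name, (size, crc) in EXPECTED.items():
            if name not in names:
                print(f"{name}: missing")
                continue
            data = zf.read(name)
            ok = len(data) == size and (zlib.crc32(data) & 0xFFFFFFFF) == crc
            print(f"{name}: {'ok' if ok else 'bad size or checksum'}")

verify_romset("example_game.zip")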
Hard disks, compact discs and laserdiscs are stored in a MAME-specific format called CHD (Compressed Hunks of Data). Some arcade machines use analog hardware, such as laserdiscs, to store and play back audio/video data such as soundtracks and cinematics. This data must be captured and encoded into digital files that can be read by MAME. MAME does not support the use of external analog devices, which (along with identical speaker and speaker enclosures) would be required for a 100% faithful reproduction of the arcade experience. A number of games use sound chips that have not yet been emulated successfully. These games require sound samples in WAV file format for sound emulation. MAME additionally supports artwork files in PNG format for bezel and overlay graphics.
Philosophy and accuracy
The stated aim of the project is to document hardware, and so MAME takes a somewhat purist view of emulation, prohibiting programming hacks that might make a game run improperly or run faster at the expense of emulation accuracy. Components such as CPUs are emulated at a low level (meaning individual instructions are emulated) whenever possible, and high-level emulation (HLE) is only used when a chip is completely undocumented and cannot be reverse-engineered in detail. Signal level emulation is used to emulate audio circuitry that consists of analog components.
MAME emulates well over a thousand different arcade system boards, a majority of which are completely undocumented and custom designed to run either a single game or a very small number of them. The approach MAME takes with regards to accuracy is an incremental one; systems are emulated as accurately as they reasonably can be. Bootleg copies of games are often the first to be emulated, with proper (and copy protected) versions emulated later. Besides encryption, arcade games were usually protected with custom microcontroller units (MCUs) that implemented a part of the game logic or some other important functions. Emulation of these chips is preferred even when they have little or no immediately visible effect on the game itself. For example, the monster behavior in Bubble Bobble was not perfected until the code and data contained within the custom MCU were dumped through the decapping of the chip. This results in the ROM set requirements changing as the games are emulated to a more and more accurate degree, causing older versions of the ROM set to become unusable in newer versions of MAME.
Portability and generality are also important to MAME. Combined with the uncompromising stance on accuracy, this often results in high system requirements. Although a 2 GHz processor is enough to run almost all 2D games, more recent systems and particularly systems with 3D graphics can be unplayably slow, even on the fastest computers. MAME does not currently take advantage of hardware acceleration to speed up the rendering of 3D graphics, in part because of the lack of a stable cross-platform 3D API, and in part because software rendering can in theory be an exact reproduction of the various custom 3D rendering approaches that were used in the arcade games.
Legal status
Owning and distributing MAME itself is legal in most countries, as it is merely an emulator. Companies such as Sony have attempted in court to prevent other software, such as Virtual Game Station, a Sony PlayStation emulator, from being sold, but they have ultimately been unsuccessful. MAME itself has thus far not been the subject of any court cases.
Most arcade games are still covered by copyright. Downloading or distributing copyrighted ROMs without permission from copyright holders is almost always a violation of copyright laws. However, some countries (including the US) allow the owner of a board to transfer data contained in its ROM chips to a personal computer or other device they own. Some copyright holders have explored making arcade game ROMs available to the public through licensing. For example, in 2003 Atari made MAME-compatible ROMs for 27 of its arcade games available on the Internet site Star ROMs. However, by 2006 the ROMs were no longer being sold there. At one point, various Capcom games were sold with the HotRod arcade joystick manufactured by Hanaho, but this arrangement was discontinued as well. Other copyright holders have released games which are no longer commercially viable free of charge to the public under licenses that prohibit commercial use of the games. Many of these games may be downloaded legally from the official MAME web site. The Spanish arcade game developer Gaelco has also released World Rally for non-commercial use on their website.
The MAME community has distanced itself from other groups redistributing ROMs via the Internet or physical media, claiming they are blatantly infringing copyright and harm the project by potentially bringing it into disrepute. Despite this, illegal distributions of ROMs are widespread on the Internet, and many "Full Sets" also exist which contain a full collection of a specific version's ROMs. In addition, many bootleg game systems, such as arcade multi carts, often use versions of MAME to run their games.
Original MAME license
MAME was formerly distributed under a custom self-written copyleft license, called the "MAME license" or the "MAME-like license", which was also adopted by other projects, e.g. Visual Pinball. This license ensures the availability of the licensed program's source code, while redistribution of the program in commercial activities is prohibited. Due to this clause, the license is incompatible with the OSI's Open Source Definition and the FSF's Free Software Definition, and as such is not considered an open-source or free-software license, respectively. The non-commercial clause was designed to prevent arcade operators from installing MAME cabinets and profiting from the works of the original manufacturers of the games. The ambiguity of the definition of "commercial" led to legal problems with the license.
Since March 2016 with version 0.172, MAME itself switched, by dual licensing, to common free software licenses, the BSD-3-Clause license, and the GPL-2.0-or-later license.
See also
Arcade emulator
MESS
List of free and open-source software packages
List of video game console emulators
References
External links
MAMEworld MAME resource and news site
Arcade Database Database containing details of any game supported by Mame, including past versions. There are images, videos, programs for downloading extra files, advanced searches, graphics and many other resources.
1997 software
Amiga emulation software
AmigaOS 4 software
Arcade video game emulators
Classic Mac OS emulation software
Cross-platform software
GP2X emulation software
Linux emulation software
Lua (programming language)-scriptable software
MacOS emulation software
Multi-emulators
Nintendo Entertainment System emulators
PlayStation emulators
Windows emulation software |
103127 | https://en.wikipedia.org/wiki/Brute-force%20search | Brute-force search | In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.
A brute-force algorithm that finds the divisors of a natural number n would enumerate all integers from 1 to n, and check whether each of them divides n without remainder. A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard and for each arrangement, check whether each (queen) piece can attack any other.
While a brute-force search is simple to implement and will always find a solution if it exists, implementation costs are proportional to the number of candidate solutions – which in many practical problems tends to grow very quickly as the size of the problem increases (§Combinatorial explosion). Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than speed.
This is the case, for example, in critical applications where any errors in the algorithm would have very serious consequences or when using a computer to prove a mathematical theorem. Brute-force search is also useful as a baseline method when benchmarking other algorithms or metaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic. Brute force search should not be confused with backtracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a table – namely, check all entries of the latter, sequentially – is called linear search.
Implementing the brute-force search
Basic algorithm
In order to apply brute-force search to a specific class of problems, one must implement four procedures, first, next, valid, and output. These procedures should take as a parameter the data P for the particular instance of the problem that is to be solved, and should do the following:
first (P): generate a first candidate solution for P.
next (P, c): generate the next candidate for P after the current one c.
valid (P, c): check whether candidate c is a solution for P.
output (P, c): use the solution c of P as appropriate to the application.
The next procedure must also tell when there are no more candidates for the instance P, after the current one c. A convenient way to do that is to return a "null candidate", some conventional data value Λ that is distinct from any real candidate. Likewise the first procedure should return Λ if there are no candidates at all for the instance P. The brute-force method is then expressed by the algorithm
c ← first(P)
while c ≠ Λ do
    if valid(P, c) then
        output(P, c)
    end if
    c ← next(P, c)
end while
For example, when looking for the divisors of an integer n, the instance data P is the number n. The call first(n) should return the integer 1 if n ≥ 1, or Λ otherwise; the call next(n,c) should return c + 1 if c < n, and Λ otherwise; and valid(n,c) should return true if and only if c is a divisor of n. (In fact, if we choose Λ to be n + 1, the tests n ≥ 1 and c < n are unnecessary.)
The brute-force search algorithm above will call output for every candidate that is a solution to the given instance P. The algorithm is easily modified to stop after finding the first solution, or a specified number of solutions; or after testing a specified number of candidates, or after spending a given amount of CPU time.
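As a concrete illustration, the pseudocode and the divisor example above can be transcribed directly into Python. The helper names below (first, next_candidate, valid) are simply the procedures described earlier, with None playing the role of the null candidate Λ; this is an illustrative sketch rather than an optimized divisor-finding algorithm.

def first(n):
    # First candidate, or None (the null candidate) if there are no candidates at all.
    return 1 if n >= 1 else None

def next_candidate(n, c):
    # Next candidate after c, or None when the candidates are exhausted.
    return c + 1 if c < n else None

def valid(n, c):
    # A candidate is a solution when it divides n without remainder.
    return n % c == 0

def brute_force_divisors(n):
    solutions = []
    c = first(n)
    while c is not None:
        if valid(n, c):
            solutions.append(c)  # plays the role of output(P, c)
        c = next_candidate(n, c)
    return solutions

print(brute_force_divisors(28))  # prints [1, 2, 4, 7, 14, 28]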
Combinatorial explosion
The main disadvantage of the brute-force method is that, for many real-world problems, the number of natural candidates is prohibitively large. For instance, if we look for the divisors of a number as described above, the number of candidates tested will be the given number n. So if n has sixteen decimal digits, say, the search will require executing at least 10^15 computer instructions, which will take several days on a typical PC. If n is a random 64-bit natural number, which has about 19 decimal digits on the average, the search will take about 10 years. This steep growth in the number of candidates, as the size of the data increases, occurs in all sorts of problems. For instance, if we are seeking a particular rearrangement of 10 letters, then we have 10! = 3,628,800 candidates to consider, which a typical PC can generate and test in less than one second. However, adding one more letter – which is only a 10% increase in the data size – will multiply the number of candidates by 11, a 1000% increase. For 20 letters, the number of candidates is 20!, which is about 2.4×10^18 or 2.4 quintillion; and the search will take about 10 years. This unwelcome phenomenon is commonly called the combinatorial explosion, or the curse of dimensionality.
One example of a case where combinatorial complexity leads to a solvability limit is in solving chess. Chess is not a solved game. In 2005, all chess game endings with six pieces or fewer were solved, showing the result of each position if played perfectly. It took ten more years to complete the tablebase with one more chess piece added, thus completing a 7-piece tablebase. Adding one more piece to a chess ending (thus making an 8-piece tablebase) is considered intractable due to the added combinatorial complexity.
Speeding up brute-force searches
One way to speed up a brute-force algorithm is to reduce the search space, that is, the set of candidate solutions, by using heuristics specific to the problem class. For example, in the eight queens problem the challenge is to place eight queens on a standard chessboard so that no queen attacks any other. Since each queen can be placed in any of the 64 squares, in principle there are 64^8 = 281,474,976,710,656 possibilities to consider. However, because the queens are all alike, and no two queens can be placed on the same square, the candidates are all possible ways of choosing a set of 8 squares from the set of all 64 squares; which means 64 choose 8 = 64!/(56!·8!) = 4,426,165,368 candidate solutions – about 1/60,000 of the previous estimate. Further, no arrangement with two queens on the same row or the same column can be a solution. Therefore, we can further restrict the set of candidates to those arrangements.
As this example shows, a little bit of analysis will often lead to dramatic reductions in the number of candidate solutions, and may turn an intractable problem into a trivial one.
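The effect of such restrictions is easy to see in code. The Python sketch below enumerates only arrangements with one queen per row and one per column (permutations of the eight columns), reducing the candidate count from 64^8 to 8! = 40,320, and then checks the remaining diagonal condition; it is an illustrative sketch, not an optimized solver.

from itertools import permutations

def is_solution(cols):
    # cols[r] is the column of the queen in row r; rows and columns are already
    # distinct by construction, so only the diagonals need to be checked.
    n = len(cols)
    return all(abs(cols[r1] - cols[r2]) != r2 - r1
               for r1 in range(n) for r2 in range(r1 + 1, n))

candidates = 0
solutions = 0
for cols in permutations(range(8)):
    candidates += 1
    if is_solution(cols):
        solutions += 1

print(candidates, solutions)  # 40320 candidates, 92 solutions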
In some cases, the analysis may reduce the candidates to the set of all valid solutions; that is, it may yield an algorithm that directly enumerates all the desired solutions (or finds one solution, as appropriate), without wasting time with tests and the generation of invalid candidates. For example, for the problem "find all integers between 1 and 1,000,000 that are evenly divisible by 417" a naive brute-force solution would generate all integers in the range, testing each of them for divisibility. However, that problem can be solved much more efficiently by starting with 417 and repeatedly adding 417 until the number exceeds 1,000,000 – which takes only 2398 (= 1,000,000 ÷ 417) steps, and no tests.
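The two approaches for this example can be sketched in a few lines of Python; the function names are illustrative only.

def multiples_by_testing(limit=1_000_000, d=417):
    # Naive brute force: test every integer in the range for divisibility.
    return [k for k in range(1, limit + 1) if k % d == 0]

def multiples_by_generation(limit=1_000_000, d=417):
    # Direct enumeration: generate only the valid solutions, with no tests at all.
    return list(range(d, limit + 1, d))

assert multiples_by_testing() == multiples_by_generation()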
Reordering the search space
In applications that require only one solution, rather than all solutions, the expected running time of a brute force search will often depend on the order in which the candidates are tested. As a general rule, one should test the most promising candidates first. For example, when searching for a proper divisor of a random number n, it is better to enumerate the candidate divisors in increasing order, from 2 to √n, than the other way around – because the probability that n is divisible by c is 1/c. Moreover, the probability of a candidate being valid is often affected by the previous failed trials. For example, consider the problem of finding a 1 bit in a given 1000-bit string P. In this case, the candidate solutions are the indices 1 to 1000, and a candidate c is valid if P[c] = 1. Now, suppose that the first bit of P is equally likely to be 0 or 1, but each bit thereafter is equal to the previous one with 90% probability. If the candidates are enumerated in increasing order, 1 to 1000, the number t of candidates examined before success will be about 6, on the average. On the other hand, if the candidates are enumerated in the order 1,11,21,31...991,2,12,22,32 etc., the expected value of t will be only a little more than 2.
More generally, the search space should be enumerated in such a way that the next candidate is most likely to be valid, given that the previous trials were not. So if the valid solutions are likely to be "clustered" in some sense, then each new candidate should be as far as possible from the previous ones, in that same sense. The converse holds, of course, if the solutions are likely to be spread out more uniformly than expected by chance.
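The figures quoted for the 1000-bit example can be checked with a small Monte Carlo simulation. The sketch below assumes the bit model described above and estimates the average number of candidates examined under the sequential ordering and under the strided ordering 1, 11, 21, ..., 991, 2, 12, ...; the exact averages will vary slightly from run to run.

import random

def random_bits(n=1000, p_same=0.9):
    # First bit uniform; each later bit equals the previous one with probability p_same.
    bits = [random.randint(0, 1)]
    for _ in range(n - 1):
        bits.append(bits[-1] if random.random() < p_same else 1 - bits[-1])
    return bits

def candidates_examined(bits, order):
    for t, idx in enumerate(order, start=1):
        if bits[idx] == 1:
            return t
    return len(order)  # no 1 bit at all; the whole space was scanned

n = 1000
sequential = list(range(n))
strided = [i + 10 * j for i in range(10) for j in range(n // 10)]

for name, order in (("sequential", sequential), ("strided", strided)):
    runs = [candidates_examined(random_bits(n), order) for _ in range(10_000)]
    print(name, sum(runs) / len(runs))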
Alternatives to brute-force search
There are many other search methods, or metaheuristics, which are designed to take advantage of various kinds of partial knowledge one may have about the solution. Heuristics can also be used to make an early cutoff of parts of the search. One example of this is the minimax principle for searching game trees, which eliminates many subtrees at an early stage in the search. In certain fields, such as language parsing, techniques such as chart parsing can exploit constraints in the problem to reduce an exponential complexity problem into a polynomial complexity problem. In many cases, such as in constraint satisfaction problems, one can dramatically reduce the search space by means of constraint propagation, which is efficiently implemented in constraint programming languages. The search space for problems can also be reduced by replacing the full problem with a simplified version. For example, in computer chess, rather than computing the full minimax tree of all possible moves for the remainder of the game, a more limited tree of minimax possibilities is computed, with the tree being pruned at a certain number of moves, and the remainder of the tree being approximated by a static evaluation function.
In cryptography
In cryptography, a brute-force attack involves systematically checking all possible keys until the correct key is found. This strategy can in theory be used against any encrypted data (except a one-time pad) by an attacker who is unable to take advantage of any weakness in an encryption system that would otherwise make his or her task easier.
The key length used in the encryption determines the practical feasibility of performing a brute force attack, with longer keys exponentially more difficult to crack than shorter ones. Brute force attacks can be made less effective by obfuscating the data to be encoded, something that makes it more difficult for an attacker to recognise when he has cracked the code. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute force attack against it.
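A minimal sketch of exhaustive key search, against a toy cipher that XORs every byte of the message with a single unknown key byte (so there are only 256 possible keys); real ciphers use keys far too long for such a search to be feasible, and the plaintext-recognition test used here (printable ASCII) is just one simple heuristic an attacker might use.

def xor_cipher(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

secret_key = 0x5A                                   # unknown to the attacker
ciphertext = xor_cipher(b"attack at dawn", secret_key)

# Brute force: try every possible key and keep those whose output looks like text.
for key in range(256):
    candidate = xor_cipher(ciphertext, key)
    if all(32 <= b < 127 for b in candidate):
        print(hex(key), candidate)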
References
See also
A brute-force algorithm to solve Sudoku puzzles.
Brute-force attack
Big O notation
Search algorithms |
103586 | https://en.wikipedia.org/wiki/Chosen-ciphertext%20attack | Chosen-ciphertext attack | A chosen-ciphertext attack (CCA) is an attack model for cryptanalysis where the cryptanalyst can gather information by obtaining the decryptions of chosen ciphertexts. From these pieces of information the adversary can attempt to recover the hidden secret key used for decryption.
For formal definitions of security against chosen-ciphertext attacks, see for example: Michael Luby and Mihir Bellare et al.
Introduction
A number of otherwise secure schemes can be defeated under chosen-ciphertext attack. For example, the El Gamal cryptosystem is semantically secure under chosen-plaintext attack, but this semantic security can be trivially defeated under a chosen-ciphertext attack. Early versions of RSA padding used in the SSL protocol were vulnerable to a sophisticated adaptive chosen-ciphertext attack which revealed SSL session keys. Chosen-ciphertext attacks have implications for some self-synchronizing stream ciphers as well. Designers of tamper-resistant cryptographic smart cards must be particularly cognizant of these attacks, as these devices may be completely under the control of an adversary, who can issue a large number of chosen-ciphertexts in an attempt to recover the hidden secret key.
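The ElGamal case can be made concrete with a toy implementation. In the sketch below, which uses deliberately tiny, insecure parameters for readability, an attacker who may query a decryption oracle on a ciphertext related to the challenge (here, the second component multiplied by 2) recovers the challenge plaintext without ever querying the challenge ciphertext itself; this is an illustrative toy, not a faithful model of any deployed system.

import random

p = 2357                         # a small prime (toy parameter only)
g = 2                            # public base element (toy parameter only)
x = random.randrange(2, p - 1)   # private key
h = pow(g, x, p)                 # public key

def encrypt(m):
    k = random.randrange(2, p - 1)
    return pow(g, k, p), (m * pow(h, k, p)) % p

def decryption_oracle(c1, c2):
    s = pow(c1, x, p)
    return (c2 * pow(s, p - 2, p)) % p   # s^(p-2) is s^(-1) mod p

m = 1234
c1, c2 = encrypt(m)                      # challenge ciphertext

# Query the oracle on a modified ciphertext (c1, 2*c2), which decrypts to 2*m.
m2 = decryption_oracle(c1, (2 * c2) % p)

# Divide out the known factor of 2 to recover the original plaintext.
recovered = (m2 * pow(2, p - 2, p)) % p
assert recovered == m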
It was not clear at all whether public key cryptosystems could withstand the chosen-ciphertext attack until the initial breakthrough work of Moni Naor and Moti Yung in 1990, which suggested a mode of dual encryption with integrity proof (now known as the "Naor–Yung" encryption paradigm). This work made the understanding of the notion of security against chosen-ciphertext attack much clearer than before and opened the research direction of constructing systems with various protections against variants of the attack.
When a cryptosystem is vulnerable to chosen-ciphertext attack, implementers must be careful to avoid situations in which an adversary might be able to decrypt chosen ciphertexts (i.e., avoid providing a decryption oracle). This can be more difficult than it appears, as even partially chosen ciphertexts can permit subtle attacks. Additionally, other issues exist and some cryptosystems (such as RSA) use the same mechanism to sign messages and to decrypt them. This permits attacks when hashing is not used on the message to be signed. A better approach is to use a cryptosystem which is provably secure under chosen-ciphertext attack, including (among others) RSA-OAEP, which is secure under the random oracle heuristic, and Cramer–Shoup, which was the first practical public-key system to be secure. For symmetric encryption schemes it is known that authenticated encryption, which is a primitive based on symmetric encryption, gives security against chosen-ciphertext attacks, as was first shown by Jonathan Katz and Moti Yung.
Varieties
Chosen-ciphertext attacks, like other attacks, may be adaptive or non-adaptive. In an adaptive chosen-ciphertext attack, the attacker can use the results from prior decryptions to inform their choices of which ciphertexts to have decrypted. In a non-adaptive attack, the attacker chooses the ciphertexts to have decrypted without seeing any of the resulting plaintexts. After seeing the plaintexts, the attacker can no longer obtain the decryption of additional ciphertexts.
Lunchtime attacks
A specially noted variant of the chosen-ciphertext attack is the "lunchtime", "midnight", or "indifferent" attack, in which an attacker may make adaptive chosen-ciphertext queries but only up until a certain point, after which the attacker must demonstrate some improved ability to attack the system. The term "lunchtime attack" refers to the idea that a user's computer, with the ability to decrypt, is available to an attacker while the user is out to lunch. This form of the attack was the first one commonly discussed: obviously, if the attacker has the ability to make adaptive chosen ciphertext queries, no encrypted message would be safe, at least until that ability is taken away. This attack is sometimes called the "non-adaptive chosen ciphertext attack"; here, "non-adaptive" refers to the fact that the attacker cannot adapt their queries in response to the challenge, which is given after the ability to make chosen ciphertext queries has expired.
Adaptive chosen-ciphertext attack
A (full) adaptive chosen-ciphertext attack is an attack in which ciphertexts may be chosen adaptively before and after a challenge ciphertext is given to the attacker, with only the stipulation that the challenge ciphertext may not itself be queried. This is a stronger attack notion than the lunchtime attack, and is commonly referred to as a CCA2 attack, as compared to a CCA1 (lunchtime) attack. Few practical attacks are of this form. Rather, this model is important for its use in proofs of security against chosen-ciphertext attacks. A proof that attacks in this model are impossible implies that any realistic chosen-ciphertext attack cannot be performed.
A practical adaptive chosen-ciphertext attack is the Bleichenbacher attack against PKCS#1.
Numerous cryptosystems are proven secure against adaptive chosen-ciphertext attacks, some proving this security property based only on algebraic assumptions, some additionally requiring an idealized random oracle assumption. For example, the Cramer-Shoup system is secure based on number theoretic assumptions and no idealization, and after a number of subtle investigations it was also established that the practical scheme RSA-OAEP is secure under the RSA assumption in the idealized random oracle model.
See also
Dancing on the Lip of the Volcano: Chosen Ciphertext Attacks on Apple iMessage (Usenix 2016)
References
Cryptographic attacks |
105012 | https://en.wikipedia.org/wiki/Solvable%20group | Solvable group | In mathematics, more specifically in the field of group theory, a solvable group or soluble group is a group that can be constructed from abelian groups using extensions. Equivalently, a solvable group is a group whose derived series terminates in the trivial subgroup.
Motivation
Historically, the word "solvable" arose from Galois theory and the proof of the general unsolvability of the quintic equation. Specifically, a polynomial equation is solvable in radicals if and only if the corresponding Galois group is solvable (note this theorem holds only in characteristic 0). This means that associated to a polynomial f ∈ F[x] there is a tower of field extensions
F = F0 ⊆ F1 ⊆ F2 ⊆ ⋯ ⊆ Fm = K
such that
Fi = Fi−1[αi], where αi^(mi) ∈ Fi−1, so αi is a solution to the equation x^(mi) − a where a ∈ Fi−1;
Fm contains a splitting field for f(x).
Example
For example, the smallest Galois field extension of ℚ containing the element gives a solvable group. It has associated field extensions giving a solvable group containing (acting on the ) and (acting on ).
Definition
A group G is called solvable if it has a subnormal series whose factor groups (quotient groups) are all abelian, that is, if there are subgroups 1 = G0 < G1 < ⋅⋅⋅ < Gk = G such that Gj−1 is normal in Gj, and Gj /Gj−1 is an abelian group, for j = 1, 2, …, k.
Or equivalently, if its derived series, the descending normal series
G ⊵ G(1) ⊵ G(2) ⊵ ⋯,
where every subgroup is the commutator subgroup of the previous one, eventually reaches the trivial subgroup of G. These two definitions are equivalent, since for every group H and every normal subgroup N of H, the quotient H/N is abelian if and only if N includes the commutator subgroup of H. The least n such that G(n) = 1 is called the derived length of the solvable group G.
For finite groups, an equivalent definition is that a solvable group is a group with a composition series all of whose factors are cyclic groups of prime order. This is equivalent because a finite group has finite composition length, and every simple abelian group is cyclic of prime order. The Jordan–Hölder theorem guarantees that if one composition series has this property, then all composition series will have this property as well. For the Galois group of a polynomial, these cyclic groups correspond to nth roots (radicals) over some field. The equivalence does not necessarily hold for infinite groups: for example, since every nontrivial subgroup of the group Z of integers under addition is isomorphic to Z itself, it has no composition series, but the normal series {0, Z}, with its only factor group isomorphic to Z, proves that it is in fact solvable.
Examples
Abelian groups
The basic examples of solvable groups are abelian groups. They are trivially solvable, since a subnormal series is given by just the group itself and the trivial group. But non-abelian groups may or may not be solvable.
Nilpotent groups
More generally, all nilpotent groups are solvable. In particular, finite p-groups are solvable, as all finite p-groups are nilpotent.
Quaternion groups
In particular, the quaternion group Q8 is a solvable group given by the group extension
1 → Z/2 → Q8 → Z/2 × Z/2 → 1,
where the kernel Z/2 is the subgroup generated by −1.
Group extensions
Group extensions form the prototypical examples of solvable groups. That is, if G and G′ are solvable groups, then any extension
1 → G → G″ → G′ → 1
defines a solvable group G″. In fact, all solvable groups can be formed from such group extensions.
Nonabelian group which is non-nilpotent
A small example of a solvable, non-nilpotent group is the symmetric group S3. In fact, as the smallest simple non-abelian group is A5 (the alternating group of degree 5), it follows that every group with order less than 60 is solvable.
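The solvability of S3 can be verified directly from the definition via its derived series; this is a standard computation, sketched here in LaTeX notation:

S_3^{(1)} = [S_3, S_3] = A_3 \cong \mathbb{Z}/3, \qquad S_3^{(2)} = [A_3, A_3] = 1,

so the subnormal series 1 \triangleleft A_3 \triangleleft S_3 has abelian factors S_3/A_3 \cong \mathbb{Z}/2 and A_3 \cong \mathbb{Z}/3, and the derived length of S_3 is 2.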
Finite groups of odd order
The celebrated Feit–Thompson theorem states that every finite group of odd order is solvable. In particular this implies that if a finite group is simple, it is either a prime cyclic or of even order.
Non-example
The group S5 is not solvable — it has a composition series {E, A5, S5} (and the Jordan–Hölder theorem states that every other composition series is equivalent to that one), giving factor groups isomorphic to A5 and C2; and A5 is not abelian. Generalizing this argument, coupled with the fact that An is a normal, maximal, non-abelian simple subgroup of Sn for n > 4, we see that Sn is not solvable for n > 4. This is a key step in the proof that for every n > 4 there are polynomials of degree n which are not solvable by radicals (Abel–Ruffini theorem). This property is also used in complexity theory in the proof of Barrington's theorem.
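In terms of the derived series, the obstruction is that A5 is perfect (equal to its own commutator subgroup); this standard computation is sketched here in LaTeX notation:

S_5^{(1)} = [S_5, S_5] = A_5, \qquad S_5^{(n)} = [A_5, A_5] = A_5 \ \text{for all } n \ge 2,

so the derived series of S_5 stabilizes at A_5 \neq 1 and never reaches the trivial subgroup.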
Subgroups of GL2
Consider the subgroups of for some field . Then, the group quotient can be found by taking arbitrary elements in , multiplying them together, and figuring out what structure this gives. SoNote the determinant condition on implies , hence is a subgroup (which are the matrices where ). For fixed , the linear equation implies , which is an arbitrary element in since . Since we can take any matrix in and multiply it by the matrixwith , we can get a diagonal matrix in . This shows the quotient group .
Remark
Notice that this description gives the decomposition of as where acts on by . This implies . Also, a matrix of the formcorresponds to the element in the group.
Borel subgroups
For a linear algebraic group G, its Borel subgroup is defined as a subgroup which is closed, connected, and solvable in G, and it is the maximal possible subgroup with these properties (note the second two are topological properties). For example, in GL_n and SL_n the groups of upper-triangular, or lower-triangular, matrices are two of the Borel subgroups. The example given above, the subgroup B in GL_2, is a Borel subgroup.
Borel subgroup in GL3
In there are the subgroupsNotice , hence the Borel group has the form
Borel subgroup in product of simple linear algebraic groups
In the product group $GL_n \times GL_m$ the Borel subgroup can be represented by matrices of the form
$$\begin{bmatrix} T & 0 \\ 0 & S \end{bmatrix},$$
where $T$ is an $n \times n$ upper-triangular matrix and $S$ is an $m \times m$ upper-triangular matrix.
Z-groups
Any finite group all of whose Sylow subgroups are cyclic is a semidirect product of two cyclic groups, and in particular is solvable. Such groups are called Z-groups.
OEIS values
The numbers of solvable groups of order n are (starting with n = 0)
0, 1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14, 1, 5, 1, 5, 2, 2, 1, 15, 2, 2, 5, 4, 1, 4, 1, 51, 1, 2, 1, 14, 1, 2, 2, 14, 1, 6, 1, 4, 2, 2, 1, 52, 2, 5, 1, 5, 1, 15, 2, 13, 2, 2, 1, 12, 1, 2, 4, 267, 1, 4, 1, 5, 1, 4, 1, 50, ...
The orders for which a non-solvable group exists are
60, 120, 168, 180, 240, 300, 336, 360, 420, 480, 504, 540, 600, 660, 672, 720, 780, 840, 900, 960, 1008, 1020, 1080, 1092, 1140, 1176, 1200, 1260, 1320, 1344, 1380, 1440, 1500, ...
Properties
Solvability is closed under a number of operations.
If G is solvable, and H is a subgroup of G, then H is solvable.
If G is solvable, and there is a homomorphism from G onto H, then H is solvable; equivalently (by the first isomorphism theorem), if G is solvable, and N is a normal subgroup of G, then G/N is solvable.
The previous properties can be expanded into the following "three for the price of two" property: if N is a normal subgroup of G, then G is solvable if and only if both N and G/N are solvable.
In particular, if G and H are solvable, the direct product G × H is solvable.
Solvability is closed under group extension:
If a normal subgroup H and the quotient G/H are both solvable, then so is G; in particular, if N and H are solvable, their semidirect product is also solvable. (A short derived-series argument is sketched at the end of this section.)
It is also closed under wreath product:
If G and H are solvable, and X is a G-set, then the wreath product of G and H with respect to X is also solvable.
For any positive integer N, the solvable groups of derived length at most N form a subvariety of the variety of groups, as they are closed under the taking of homomorphic images, subalgebras, and (direct) products. The direct product of a sequence of solvable groups with unbounded derived length is not solvable, so the class of all solvable groups is not a variety.
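The closure under extensions mentioned above follows from a standard derived-series argument, sketched here for completeness. Suppose N is a normal subgroup of G such that G/N has derived length n and N has derived length m. Since the quotient map sends $G^{(k)}$ onto $(G/N)^{(k)}$, we get
$$G^{(n)} \subseteq N, \qquad\text{hence}\qquad G^{(n+m)} \subseteq N^{(m)} = \{e\},$$
so G is solvable with derived length at most n + m.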
Burnside's theorem
Burnside's theorem states that if G is a finite group of order $p^a q^b$, where p and q are prime numbers and a and b are non-negative integers, then G is solvable.
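For example, every group of order $2^4 \cdot 3^2 = 144$ is solvable by Burnside's theorem, whereas the theorem says nothing about groups of order $60 = 2^2 \cdot 3 \cdot 5$, which involves three distinct primes; indeed the alternating group $A_5$ has order 60 and is not solvable.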
Related concepts
Supersolvable groups
As a strengthening of solvability, a group G is called supersolvable (or supersoluble) if it has an invariant normal series whose factors are all cyclic. Since a normal series has finite length by definition, uncountable groups are not supersolvable. In fact, all supersolvable groups are finitely generated, and an abelian group is supersolvable if and only if it is finitely generated. The alternating group A4 is an example of a finite solvable group that is not supersolvable.
If we restrict ourselves to finitely generated groups, we can consider the following arrangement of classes of groups:
cyclic < abelian < nilpotent < supersolvable < polycyclic < solvable < finitely generated group.
Virtually solvable groups
A group G is called virtually solvable if it has a solvable subgroup of finite index. This is similar to virtually abelian. Clearly all solvable groups are virtually solvable, since one can just choose the group itself, which has index 1.
Hypoabelian
A solvable group is one whose derived series reaches the trivial subgroup at a finite stage. For an infinite group, the finite derived series may not stabilize, but the transfinite derived series always stabilizes. A group whose transfinite derived series reaches the trivial group is called a hypoabelian group, and every solvable group is a hypoabelian group. The first ordinal $\alpha$ such that $G^{(\alpha)} = G^{(\alpha+1)}$ is called the (transfinite) derived length of the group $G$, and it has been shown that every ordinal is the derived length of some group.
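Here the transfinite derived series is obtained by iterating the commutator subgroup through the ordinals:
$$G^{(0)} = G, \qquad G^{(\alpha+1)} = \left[G^{(\alpha)}, G^{(\alpha)}\right], \qquad G^{(\lambda)} = \bigcap_{\alpha < \lambda} G^{(\alpha)} \;\text{ for a limit ordinal } \lambda.$$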
See also
Prosolvable group
Parabolic subgroup
Notes
References
External links
Solvable groups as iterated extensions
Properties of groups