Perl
https://en.wikipedia.org/wiki/Perl
Perl is a family of two high-level, general-purpose, interpreted, dynamic programming languages. "Perl" refers to Perl 5, but from 2000 to 2019 it also referred to its redesigned "sister language", Perl 6, before the latter's name was officially changed to Raku in October 2019.
Though Perl is not officially an acronym, there are various backronyms in use, including "Practical Extraction and Reporting Language". Perl was developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier. Since then, it has undergone many changes and revisions. Raku, which began as a redesign of Perl 5 in 2000, eventually evolved into a separate language. Both languages continue to be developed independently by different development teams and liberally borrow ideas from each other.
The Perl languages borrow features from other programming languages, including C, sh, AWK, and sed. They provide text processing facilities without the arbitrary data-length limits of many contemporary Unix command-line tools. Perl 5 gained widespread popularity in the late 1990s as a CGI scripting language, in part due to its powerful regular expression and string parsing abilities.
In addition to CGI, Perl 5 is used for system administration, network programming, finance, bioinformatics, and other applications, such as for GUIs. It has been nicknamed "the Swiss Army chainsaw of scripting languages" because of its flexibility and power, and also its ugliness. In 1998, it was also referred to as the "duct tape that holds the Internet together," in reference to both its ubiquitous use as a glue language and its perceived inelegance.
Perl is a highly expressive programming language: source code for a given algorithm can be short and highly compressible.
Name
Perl was originally named "Pearl". Wall wanted to give the language a short name with positive connotations. Wall discovered the existing PEARL programming language before Perl's official release and changed the spelling of the name.
When referring to the language, the name is capitalized: Perl. When referring to the program itself, the name is uncapitalized (perl) because most Unix-like file systems are case-sensitive. Before the release of the first edition of Programming Perl, it was common to refer to the language as perl. Randal L. Schwartz, however, capitalized the language's name in the book to make it stand out better when typeset. This case distinction was subsequently documented as canonical.
The name is occasionally expanded as a backronym: Practical Extraction and Report Language, and Wall's own Pathologically Eclectic Rubbish Lister, which appears in the manual page for perl.
History
Early versions
Larry Wall began work on Perl in 1987, while working as a programmer at Unisys, and version 1.0 was released to the comp.sources.unix newsgroup on February 1, 1988. The language expanded rapidly over the next few years.
Perl 2, released in 1988, featured a better regular expression engine. Perl 3, released in 1989, added support for binary data streams.
Originally, the only documentation for Perl was a single lengthy man page. In 1991, Programming Perl, known to many Perl programmers as the "Camel Book" because of its cover, was published and became the de facto reference for the language. At the same time, the Perl version number was bumped to 4, not to mark a major change in the language but to identify the version that was well documented by the book.
Early Perl 5
Perl 4 went through a series of maintenance releases, culminating in Perl 4.036 in 1993, whereupon Wall abandoned Perl 4 to begin work on Perl 5. Initial design of Perl 5 continued into 1994. The perl5-porters mailing list was established in May 1994 to coordinate work on porting Perl 5 to different platforms. It remains the primary forum for development, maintenance, and porting of Perl 5.
Perl 5.000 was released on October 17, 1994. It was a nearly complete rewrite of the interpreter, and it added many new features to the language, including objects, references, lexical (my) variables, and modules. Importantly, modules provided a mechanism for extending the language without modifying the interpreter. This allowed the core interpreter to stabilize, even as it enabled ordinary Perl programmers to add new language features. Perl 5 has been in active development since then.
Perl 5.001 was released on March 13, 1995. Perl 5.002 was released on February 29, 1996 with the new prototypes feature. This allowed module authors to make subroutines that behaved like Perl builtins. Perl 5.003 was released June 25, 1996, as a security release.
One of the most important events in Perl 5 history took place outside of the language proper and was a consequence of its module support. On October 26, 1995, the Comprehensive Perl Archive Network (CPAN) was established as a repository for the Perl language and Perl modules; as of May 2017, it carries over 185,178 modules in 35,190 distributions, written by more than 13,071 authors, and is mirrored worldwide at more than 245 locations.
Perl 5.004 was released on May 15, 1997, and included, among other things, the UNIVERSAL package, giving Perl a base object from which all classes were automatically derived and the ability to require versions of modules. Another significant development was the inclusion of the CGI.pm module, which contributed to Perl's popularity as a CGI scripting language.
Perl 5.004 added support for Microsoft Windows, Plan 9, QNX, and AmigaOS.
Perl 5.005 was released on July 22, 1998. This release included several enhancements to the regex engine, new hooks into the backend through the B::* modules, the qr// regex quote operator, a large selection of other new core modules, and added support for several more operating systems, including BeOS.
2000–2020
Perl 5.6 was released on March 22, 2000. Major changes included 64-bit support, Unicode string representation, support for files over 2 GiB, and the "our" keyword. When developing Perl 5.6, the decision was made to switch the versioning scheme to one more similar to other open source projects; after 5.005_63, the next version became 5.5.640, with plans for development versions to have odd numbers and stable versions to have even numbers.
In 2000, Wall put forth a call for suggestions for a new version of Perl from the community. The process resulted in 361 RFC (request for comments) documents that were to be used in guiding development of Perl 6. In 2001, work began on the "Apocalypses" for Perl 6, a series of documents meant to summarize the change requests and present the design of the next generation of Perl. They were presented as a digest of the RFCs, rather than a formal document. At this point, Perl 6 existed only as a description of a language.
Perl 5.8 was first released on July 18, 2002, and had nearly yearly updates since then. Perl 5.8 improved Unicode support, added a new I/O implementation, added a new thread implementation, improved numeric accuracy, and added several new modules. As of 2013 this version still remains the most popular version of Perl and is used by Red Hat 5, Suse 10, Solaris 10, HP-UX 11.31 and AIX 5.
In 2004, work began on the "Synopses" documents, which originally summarized the Apocalypses but later became the specification for the Perl 6 language. In February 2005, Audrey Tang began work on Pugs, a Perl 6 interpreter written in Haskell. This was the first concerted effort toward making Perl 6 a reality. This effort stalled in 2006.
PONIE is an acronym for Perl On New Internal Engine. The PONIE project, which ran from 2003 until 2006, was intended to be a bridge between Perl 5 and Perl 6: an effort to rewrite the Perl 5 interpreter to run on Parrot, the Perl 6 virtual machine. The goal was to ensure the future of the millions of lines of Perl 5 code at thousands of companies around the world. The project ended in 2006 and is no longer being actively developed, but some of the improvements made to the Perl 5 interpreter as part of PONIE were folded back into the Perl 5 project.
On December 18, 2007, the 20th anniversary of Perl 1.0, Perl 5.10.0 was released. Perl 5.10.0 included notable new features, which brought it closer to Perl 6. These included a switch statement (called "given"/"when"), regular expression updates, and the smart match operator (~~).
Around this same time, development began in earnest on another implementation of Perl 6 known as Rakudo Perl, developed in tandem with the Parrot virtual machine. As of November 2009, Rakudo Perl has had regular monthly releases and now is the most complete implementation of Perl 6.
A major change in the development process of Perl 5 occurred with Perl 5.11: the development community switched to a monthly release cycle of development releases, with a yearly schedule of stable releases. Under that plan, bugfix point releases follow the stable releases every three months.
On April 12, 2010, Perl 5.12.0 was released. Notable core enhancements include new package NAME VERSION syntax, the Yada Yada operator (intended to mark placeholder code that is not yet implemented), implicit strictures, full Y2038 compliance, regex conversion overloading, DTrace support, and Unicode 5.2. On January 21, 2011, Perl 5.12.3 was released; it contains updated modules and some documentation changes. Version 5.12.4 was released on June 20, 2011. The latest version of that branch, 5.12.5, was released on November 10, 2012.
On May 14, 2011, Perl 5.14 was released with JSON support built-in.
On May 20, 2012, Perl 5.16 was released. Notable new features include the ability to specify a given version of Perl that one wishes to emulate, allowing users to upgrade their version of Perl, but still run old scripts that would normally be incompatible. Perl 5.16 also updates the core to support Unicode 6.1.
On May 18, 2013, Perl 5.18 was released. Notable new features include new DTrace hooks, lexical subroutines, more CORE:: subroutines, an overhaul of the hash implementation for security reasons, and support for Unicode 6.2.
On May 27, 2014, Perl 5.20 was released. Notable new features include subroutine signatures, hash slices and new slice syntax, experimental postfix dereferencing, Unicode 6.3, and use of a consistent random number generator.
Some observers credit the release of Perl 5.10 with the start of the Modern Perl movement. In particular, this phrase describes a style of development that embraces the use of the CPAN, takes advantage of recent developments in the language, and is rigorous about creating high quality code. While the book "Modern Perl" may be the most visible standard-bearer of this idea, other groups such as the Enlightened Perl Organization have taken up the cause.
In late 2012 and 2013, several projects for alternative implementations of Perl 5 started, including: Perl5 in Perl6 by the Rakudo Perl team, one by Stevan Little and friends, one by the Perl11 team under Reini Urban, and a Kickstarter project led by Will Braswell and affiliated with the Perl11 project.
2020 onward
In June 2020, Perl 7 was announced as the successor to Perl 5. Perl 7 was initially to be based on Perl 5.32, with a release expected in the first half of 2021 and release candidates sooner.
This plan was revised in May 2021, without any release timeframe or baseline version of Perl 5 specified. When Perl 7 is released, Perl 5 will go into long-term maintenance; supported Perl 5 versions will continue to receive important security and bug fixes.
Symbols
Camel
Programming Perl, published by O'Reilly Media, features a picture of a dromedary camel on the cover and is commonly called the "Camel Book". This image has become an unofficial symbol of Perl as well as a general hacker emblem, appearing on T-shirts and other clothing items.
O'Reilly owns the image as a trademark but licenses it for non-commercial use, requiring only an acknowledgement and a link to www.perl.com. Licensing for commercial use is decided on a case-by-case basis. O'Reilly also provides "Programming Republic of Perl" logos for non-commercial sites and "Powered by Perl" buttons for any site that uses Perl.
Onion
The Perl Foundation owns an alternative symbol, an onion, which it licenses to its subsidiaries, Perl Mongers, PerlMonks, Perl.org, and others. The symbol is a visual pun on pearl onion.
Raptor
Sebastian Riedel, the creator of Mojolicious, created a logo depicting a raptor dinosaur, which is available under a CC-SA License, Version 4.0. The raptor analogy comes from a series of talks given by Matt S Trout beginning in 2010.
Overview
According to Wall, Perl has two slogans. The first is "There's more than one way to do it," commonly known as TMTOWTDI. The second slogan is "Easy things should be easy and hard things should be possible".
Features
The overall structure of Perl derives broadly from C. Perl is procedural in nature, with variables, expressions, assignment statements, brace-delimited blocks, control structures, and subroutines.
Perl also takes features from shell programming. All variables are marked with leading sigils, which allow variables to be interpolated directly into strings. However, unlike the shell, Perl uses sigils on all accesses to variables, and unlike most other programming languages that use sigils, the sigil doesn't denote the type of the variable but the type of the expression. So for example, while an array is denoted by the sigil "@" (for example @arrayname), an individual member of the array is denoted by the scalar sigil "$" (for example $arrayname[3]). Perl also has many built-in functions that provide tools often used in shell programming (although many of these tools are implemented by programs external to the shell) such as sorting, and calling operating system facilities.
Perl takes hashes ("associative arrays") from AWK and regular expressions from sed. These simplify many parsing, text-handling, and data-management tasks. Shared with Lisp is the implicit return of the last value in a block, and all statements are also expressions which can be used in larger expressions themselves.
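The following short script is a minimal sketch of these features; the variable names and sample data are arbitrary. It shows sigils selecting the type of the expression (a whole array versus one scalar element), interpolation into strings, an AWK-style hash, and a sed-style regular expression.
#!/usr/bin/perl
use strict;
use warnings;

my @primes = (2, 3, 5, 7);                 # a whole array uses @
print "third prime: $primes[2]\n";         # one element is a scalar, so $
print "count: ", scalar(@primes), "\n";    # @primes evaluated in scalar context

my %capital = ( France => 'Paris', Japan => 'Tokyo' );   # a hash uses %
print "capital of Japan: $capital{Japan}\n";             # one value is a scalar

my $line = "2024-05-27 status=ok";
if ( $line =~ /^(\d{4})-(\d{2})-(\d{2})/ ) {             # sed-style regex
    print "year $1, month $2, day $3\n";                 # capture groups
}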
Perl 5 added features that support complex data structures, first-class functions (that is, closures as values), and an object-oriented programming model. These include references, packages, class-based method dispatch, and lexically scoped variables, along with compiler directives (for example, the strict pragma). A major additional feature introduced with Perl 5 was the ability to package code as reusable modules. Wall later stated that "The whole intent of Perl 5's module system was to encourage the growth of Perl culture rather than the Perl core."
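A brief sketch of two of those Perl 5 additions, references (nested data structures held in scalars) and closures (code references that capture lexical variables); the names used here are illustrative only.
#!/usr/bin/perl
use strict;
use warnings;

# References let complex structures nest inside scalars.
my %config = (
    hosts => [ 'alpha', 'beta' ],            # array reference
    retry => { count => 3, delay => 5 },     # hash reference
);
print "first host: $config{hosts}[0]\n";
print "retries:    $config{retry}{count}\n";

# A closure: make_counter returns a code reference that keeps $n alive.
sub make_counter {
    my $n = 0;
    return sub { ++$n };     # the block's last value is returned implicitly
}
my $tick = make_counter();
print $tick->(), " ", $tick->(), " ", $tick->(), "\n";   # 1 2 3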
All versions of Perl do automatic data-typing and automatic memory management. The interpreter knows the type and storage requirements of every data object in the program; it allocates and frees storage for them as necessary using reference counting (so it cannot deallocate circular data structures without manual intervention). Legal type conversions — for example, conversions from number to string — are done automatically at run time; illegal type conversions are fatal errors.
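The circular-structure caveat is usually handled with the core Scalar::Util module; this is a minimal sketch with made-up field names.
#!/usr/bin/perl
use strict;
use warnings;
use Scalar::Util qw(weaken);

my $parent = { name => 'parent' };
my $child  = { name => 'child', parent => $parent };
$parent->{child} = $child;        # the two hashes now reference each other

# Plain reference counting would never free this cycle. Weakening one of
# the links breaks the cycle, so both hashes are reclaimed when $parent
# and $child go out of scope.
weaken( $child->{parent} );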
Design
The design of Perl can be understood as a response to three broad trends in the computer industry: falling hardware costs, rising labor costs, and improvements in compiler technology. Many earlier computer languages, such as Fortran and C, aimed to make efficient use of expensive computer hardware. In contrast, Perl was designed so that computer programmers could write programs more quickly and easily.
Perl has many features that ease the task of the programmer at the expense of greater CPU and memory requirements. These include automatic memory management; dynamic typing; strings, lists, and hashes; regular expressions; introspection; and an eval() function. Perl follows the theory of "no built-in limits," an idea similar to the Zero One Infinity rule.
Wall was trained as a linguist, and the design of Perl is very much informed by linguistic principles. Examples include Huffman coding (common constructions should be short), good end-weighting (the important information should come first), and a large collection of language primitives. Perl favors language constructs that are concise and natural for humans to write, even where they complicate the Perl interpreter.
Perl's syntax reflects the idea that "things that are different should look different." For example, scalars, arrays, and hashes have different leading sigils. Array indices and hash keys use different kinds of braces. Strings and regular expressions have different standard delimiters. This approach can be contrasted with a language such as Lisp, where the same basic syntax, composed of simple and universal symbolic expressions, is used for all purposes.
Perl does not enforce any particular programming paradigm (procedural, object-oriented, functional, or others) or even require the programmer to choose among them.
There is a broad practical bent to both the Perl language and the community and culture that surround it. The preface to Programming Perl begins: "Perl is a language for getting your job done." One consequence of this is that Perl is not a tidy language. It includes many features, tolerates exceptions to its rules, and employs heuristics to resolve syntactical ambiguities. Because of the forgiving nature of the compiler, bugs can sometimes be hard to find. Perl's function documentation remarks on the variant behavior of built-in functions in list and scalar contexts by saying, "In general, they do what you want, unless you want consistency."
No written specification or standard for the Perl language exists for Perl versions through Perl 5, and there are no plans to create one for the current version of Perl. There has been only one implementation of the interpreter, and the language has evolved along with it. That interpreter, together with its functional tests, stands as a de facto specification of the language. Perl 6, however, started with a specification, and several projects aim to implement some or all of the specification.
Applications
Perl has many and varied applications, compounded by the availability of many standard and third-party modules.
Perl has chiefly been used to write CGI scripts: large projects written in Perl include cPanel, Slash, Bugzilla, RT, TWiki, and Movable Type; high-traffic websites that use Perl extensively include Priceline.com, Craigslist, IMDb, LiveJournal, DuckDuckGo, Slashdot and Ticketmaster.
It is also an optional component of the popular LAMP technology stack for Web development, in lieu of PHP or Python. Perl is used extensively as a system programming language in the Debian Linux distribution.
Perl is often used as a glue language, tying together systems and interfaces that were not specifically designed to interoperate, and for "data munging," that is, converting or processing large amounts of data for tasks such as creating reports. In fact, these strengths are intimately linked. The combination makes Perl a popular all-purpose language for system administrators, particularly because short programs, often called "one-liner programs," can be entered and run on a single command line.
Perl code can be made portable across Windows and Unix; such code is often used by suppliers of software (both COTS and bespoke) to simplify packaging and maintenance of software build- and deployment-scripts.
Perl/Tk and wxPerl are commonly used to add graphical user interfaces to Perl scripts.
Implementation
Perl is implemented as a core interpreter, written in C, together with a large collection of modules, written in Perl and C. The interpreter is about 150,000 lines of C code and compiles to a 1 MB executable on typical machine architectures. Alternatively, the interpreter can be compiled to a link library and embedded in other programs. There are nearly 500 modules in the distribution, comprising 200,000 lines of Perl and an additional 350,000 lines of C code (much of the C code in the modules consists of character encoding tables).
The interpreter has an object-oriented architecture. All of the elements of the Perl language—scalars, arrays, hashes, coderefs, file handles—are represented in the interpreter by C structs. Operations on these structs are defined by a large collection of macros, typedefs, and functions; these constitute the Perl C API. The Perl API can be bewildering to the uninitiated, but its entry points follow a consistent naming scheme, which provides guidance to those who use it.
The life of a Perl interpreter divides broadly into a compile phase and a run phase. These phases are the major stages in the interpreter's life cycle. Each interpreter goes through each phase only once, and the phases follow in a fixed sequence.
Most of what happens in Perl's compile phase is compilation, and most of what happens in Perl's run phase is execution, but there are significant exceptions. Perl makes important use of its capability to execute Perl code during the compile phase. Perl will also delay compilation into the run phase. The terms that indicate the kind of processing that is actually occurring at any moment are compile time and run time. Perl is in compile time at most points during the compile phase, but compile time may also be entered during the run phase. The compile time for code in a string argument passed to the eval built-in occurs during the run phase. Perl is often in run time during the compile phase and spends most of the run phase in run time. Code in BEGIN blocks executes at run time but in the compile phase.
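A small illustrative script (the output text is arbitrary) showing a BEGIN block running during the compile phase and a string eval re-entering compile time during the run phase:
#!/usr/bin/perl
use strict;
use warnings;

print "second: ordinary code runs in the run phase\n";

BEGIN {
    # Executed as soon as this block has been compiled, i.e. during the
    # compile phase, before the print statement above ever runs.
    print "first: BEGIN block runs in the compile phase\n";
}

# A string eval compiles its argument during the run phase.
my $code = 'print "third: compiled and run during the run phase\n";';
eval $code;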
At compile time, the interpreter parses Perl code into a syntax tree. At run time, it executes the program by walking the tree. Text is parsed only once, and the syntax tree is subject to optimization before it is executed, so that execution is relatively efficient. Compile-time optimizations on the syntax tree include constant folding and context propagation, but peephole optimization is also performed.
Perl has a Turing-complete grammar because parsing can be affected by run-time code executed during the compile phase. Therefore, Perl cannot be parsed by a straight Lex/Yacc lexer/parser combination. Instead, the interpreter implements its own lexer, which coordinates with a modified GNU bison parser to resolve ambiguities in the language.
It is often said that "Only perl can parse Perl," meaning that only the Perl interpreter (perl) can parse the Perl language (Perl), but even this is not, in general, true. Because the Perl interpreter can simulate a Turing machine during its compile phase, it would need to decide the halting problem in order to complete parsing in every case. It is a longstanding result that the halting problem is undecidable, and therefore not even perl can always parse Perl. Perl makes the unusual choice of giving the user access to its full programming power in its own compile phase. The cost in terms of theoretical purity is high, but practical inconvenience seems to be rare.
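A classic illustration of the problem, sketched here with an arbitrary subroutine name: whether the slash below means division or the start of a regular expression depends on a prototype that can only be known by running compile-phase code.
#!/usr/bin/perl
use strict;
use warnings;

sub whatever() { 1 }    # empty prototype: whatever takes no arguments

# With the prototype above, the next line parses as whatever() / 25, i.e.
# division, and everything after the # is a comment. If whatever were
# declared without the prototype, perl would instead parse / 25; # / as a
# regular expression passed to whatever, and the die would be reached.
my $x = whatever / 25; # / ; die "reached only under the other parse";
print "parsed as division: $x\n";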
Other programs that undertake to parse Perl, such as source-code analyzers and auto-indenters, have to contend not only with ambiguous syntactic constructs but also with the undecidability of Perl parsing in the general case. Adam Kennedy's PPI project focused on parsing Perl code as a document (retaining its integrity as a document), instead of parsing Perl as executable code (that not even Perl itself can always do). It was Kennedy who first conjectured that "parsing Perl suffers from the 'halting problem'," which was later proved.
Perl is distributed with over 250,000 functional tests for core Perl language and over 250,000 functional tests for core modules. These run as part of the normal build process and extensively exercise the interpreter and its core modules. Perl developers rely on the functional tests to ensure that changes to the interpreter do not introduce software bugs; additionally, Perl users who see that the interpreter passes its functional tests on their system can have a high degree of confidence that it is working properly.
Availability
Perl is dual licensed under both the Artistic License 1.0 and the GNU General Public License. Distributions are available for most operating systems. It is particularly prevalent on Unix and Unix-like systems, but it has been ported to most modern (and many obsolete) platforms. With only six reported exceptions, Perl can be compiled from source code on all POSIX-compliant, or otherwise-Unix-compatible, platforms.
Because of unusual changes required for the classic Mac OS environment, a special port called MacPerl was shipped independently.
The Comprehensive Perl Archive Network carries a complete list of supported platforms with links to the distributions available on each. CPAN is also the source for publicly available Perl modules that are not part of the core Perl distribution.
Windows
Users of Microsoft Windows typically install one of the native binary distributions of Perl for Win32, most commonly Strawberry Perl or ActivePerl. Compiling Perl from source code under Windows is possible, but most installations lack the requisite C compiler and build tools. This also makes it difficult to install modules from the CPAN, particularly those that are partially written in C.
ActivePerl is a closed-source distribution from ActiveState that has regular releases tracking the core Perl releases. The distribution previously included the Perl Package Manager (PPM), a popular tool for installing, removing, upgrading, and managing the use of common Perl modules; this tool was discontinued as of ActivePerl 5.28. Also included is PerlScript, a Windows Script Host (WSH) engine implementing the Perl language. Visual Perl is an ActiveState tool that adds Perl to the Visual Studio .NET development suite. The company has also produced a VBScript-to-Perl converter, a Perl compiler for Windows, and converters of awk and sed to Perl; these were included on the ActiveState CD for Windows, which bundled all of its distributions plus the Komodo IDE, and (except for the first) on the Unix/Linux/POSIX variant in 2002 and subsequently.
Strawberry Perl is an open-source distribution for Windows. It has had regular, quarterly releases since January 2008, including new modules as feedback and requests come in. Strawberry Perl aims to be able to install modules like standard Perl distributions on other platforms, including compiling XS modules.
The Cygwin emulation layer is another way of running Perl under Windows. Cygwin provides a Unix-like environment on Windows, and both Perl and CPAN are available as standard pre-compiled packages in the Cygwin setup program. Since Cygwin also includes gcc, compiling Perl from source is also possible.
A perl executable is included in several Windows Resource kits in the directory with other scripting tools.
Implementations of Perl come with the MKS Toolkit, Interix (the base of earlier implementations of Windows Services for Unix), and UWIN.
Database interfaces
Perl's text-handling capabilities can be used for generating SQL queries; arrays, hashes, and automatic memory management make it easy to collect and process the returned data. For example, in Tim Bunce's Perl DBI application programming interface (API), the arguments to the API can be the text of SQL queries; thus it is possible to program in multiple languages at the same time (e.g., for generating a Web page using HTML, JavaScript, and SQL in a here document). The use of Perl variable interpolation to programmatically customize each of the SQL queries, and the specification of Perl arrays or hashes as the structures to programmatically hold the resulting data sets from each SQL query, allows a high-level mechanism for handling large amounts of data for post-processing by a Perl subprogram.
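A minimal DBI sketch; the DSN, table, and column names here are hypothetical, and any DBD driver follows the same pattern.
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect(
    'dbi:SQLite:dbname=demo.db', '', '',     # hypothetical SQLite database
    { RaiseError => 1, AutoCommit => 1 },
);

my $min_age = 18;
my $sth = $dbh->prepare('SELECT name, age FROM people WHERE age >= ?');
$sth->execute($min_age);                     # placeholder keeps SQL and data separate

while ( my ($name, $age) = $sth->fetchrow_array ) {
    print "$name is $age\n";                 # results land in ordinary Perl variables
}
$dbh->disconnect;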
In early versions of Perl, database interfaces were created by relinking the interpreter with a client-side database library. This was sufficiently difficult that it was done for only a few of the most-important and most widely used databases, and it restricted the resulting perl executable to using just one database interface at a time.
In Perl 5, database interfaces are implemented by Perl DBI modules. The DBI (Database Interface) module presents a single, database-independent interface to Perl applications, while the DBD (Database Driver) modules handle the details of accessing some 50 different databases; there are DBD drivers for most ANSI SQL databases.
DBI provides caching for database handles and queries, which can greatly improve performance in long-lived execution environments such as mod_perl, helping high-volume systems avert load spikes as in the Slashdot effect.
In modern Perl applications, especially those written using web frameworks such as Catalyst, the DBI module is often used indirectly via object-relational mappers such as DBIx::Class, Class::DBI or Rose::DB::Object that generate SQL queries and handle data transparently to the application author.
Comparative performance
The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages. The submitted Perl implementations typically perform toward the high end of the memory-usage spectrum and give varied speed results. Perl's performance in the benchmarks game is typical for interpreted languages.
Large Perl programs start more slowly than similar programs in compiled languages because perl has to compile the source every time it runs. In a talk at the YAPC::Europe 2005 conference and subsequent article "A Timely Start," Jean-Louis Leroy found that his Perl programs took much longer to run than expected because the perl interpreter spent significant time finding modules within his over-large include path. Unlike Java, Python, and Ruby, Perl has only experimental support for pre-compiling. Therefore, Perl programs pay this overhead penalty on every execution. The run phase of typical programs is long enough that amortized startup time is not substantial, but benchmarks that measure very short execution times are likely to be skewed due to this overhead.
A number of tools have been introduced to improve this situation. The first such tool was Apache's mod_perl, which sought to address one of the most common reasons that small Perl programs were invoked rapidly: CGI Web development. ActivePerl, via Microsoft ISAPI, provides similar performance improvements.
Once Perl code is compiled, there is additional overhead during the execution phase that typically isn't present for programs written in compiled languages such as C or C++. Examples of such overhead include bytecode interpretation, reference-counting memory management, and dynamic type-checking.
Optimizing
The most critical routines can be written in other languages (such as C), which can be connected to Perl via simple Inline modules or the more complex, but flexible, XS mechanism.
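For instance, the Inline module's C flavor lets a C function sit directly inside a Perl program and be called like any other subroutine. This is a minimal sketch (the function name is arbitrary) and assumes Inline::C and a C compiler are installed.
#!/usr/bin/perl
use strict;
use warnings;

use Inline C => <<'END_C';
int add_ints(int a, int b) {
    return a + b;        /* compiled once and cached by Inline */
}
END_C

print add_ints(40, 2), "\n";    # the C routine is now an ordinary Perl sub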
Perl 5
Perl 5, the language usually referred to as "Perl", continues to be actively developed. Perl 5.12.0 was released in April 2010 with some new features influenced by the design of Perl 6, followed by Perl 5.14.1 (released on June 17, 2011), Perl 5.16.1 (released on August 9, 2012), and Perl 5.18.0 (released on May 18, 2013). Perl 5 development versions are released on a monthly basis, with major releases coming out once per year.
The relative proportion of Internet searches for "Perl programming", as compared with similar searches for other programming languages, steadily declined from about 10% in 2005 to about 2% in 2011, to about 0.7% in 2020.
Raku (Perl 6)
At the 2000 Perl Conference, Jon Orwant made a case for a major new language-initiative. This led to a decision to begin work on a redesign of the language, to be called Perl 6. Proposals for new language features were solicited from the Perl community at large, which submitted more than 300 RFCs.
Wall spent the next few years digesting the RFCs and synthesizing them into a coherent framework for Perl 6. He presented his design for Perl 6 in a series of documents called "Apocalypses", numbered to correspond to chapters in Programming Perl. The developing specification of Perl 6 was then encapsulated in design documents called Synopses, numbered to correspond to the Apocalypses.
Thesis work by Bradley M. Kuhn, overseen by Wall, considered the possible use of the Java virtual machine as a runtime for Perl. Kuhn's thesis showed this approach to be problematic. In 2001, it was decided that Perl 6 would run on a cross-language virtual machine called Parrot. This would mean that other languages targeting Parrot would gain native access to CPAN, allowing some level of cross-language development.
In 2005, Audrey Tang created the Pugs project, an implementation of Perl 6 in Haskell. This acted as, and continues to act as, a test platform for the Perl 6 language (separate from the development of the actual implementation), allowing the language designers to explore. The Pugs project spawned an active Perl/Haskell cross-language community centered around the Libera Chat #raku IRC channel. Many functional programming influences were absorbed by the Perl 6 design team.
In 2012, Perl 6 development was centered primarily on two compilers:
Rakudo, an implementation running on the Parrot virtual machine and the Java virtual machine.
Niecza, which targets the Common Language Runtime.
In 2013, MoarVM ("Metamodel On A Runtime"), a C-based virtual machine designed primarily for Rakudo, was announced.
In October 2019, Perl 6 was renamed to Raku.
Only the Rakudo implementation and MoarVM are under active development; other virtual machines, such as the Java Virtual Machine and JavaScript, are also supported.
Perl 7
Perl 7 was announced on 24 June 2020 at "The Perl Conference in the Cloud" as the successor to Perl 5. Based on Perl 5.32, Perl 7 is designed to be backward compatible with modern Perl 5 code; Perl 5 code without a boilerplate (pragma) header would need to add use compat::perl5; to stay compatible, while modern code can drop some of the boilerplate.
Perl community
Perl's culture and community has developed alongside the language itself. Usenet was the first public venue in which Perl was introduced, but over the course of its evolution, Perl's community was shaped by the growth of broadening Internet-based services including the introduction of the World Wide Web. The community that surrounds Perl was, in fact, the topic of Wall's first "State of the Onion" talk.
State of the Onion
State of the Onion is the name for Wall's yearly keynote-style summaries on the progress of Perl and its community. They are characterized by his hallmark humor, employing references to Perl's culture, the wider hacker culture, Wall's linguistic background, sometimes his family life, and occasionally even his Christian background.
Each talk is first given at various Perl conferences and is eventually also published online.
Perl pastimes
JAPHs
In email, Usenet, and message board postings, "Just another Perl hacker" (JAPH) programs are a common trend, originated by Randal L. Schwartz, one of the earliest professional Perl trainers. In the parlance of Perl culture, Perl programmers are known as Perl hackers, and from this derives the practice of writing short programs to print out the phrase "Just another Perl hacker". In the spirit of the original concept, these programs are moderately obfuscated and short enough to fit into the signature of an email or Usenet message. The "canonical" JAPH as developed by Schwartz includes the comma at the end, although this is often omitted.
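In its simplest, unobfuscated form, a JAPH is nothing more than the following one-line script (real examples are usually reworked into something far less readable):
print "Just another Perl hacker,";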
Perl golf
Perl "golf" is the pastime of reducing the number of characters (key "strokes") used in a Perl program to the bare minimum, much in the same way that golf players seek to take as few shots as possible in a round. The phrase's first use emphasized the difference between pedestrian code meant to teach a newcomer and terse hacks likely to amuse experienced Perl programmers, an example of the latter being JAPHs that were already used in signatures in Usenet postings and elsewhere. Similar stunts had been an unnamed pastime in the language APL in previous decades. The use of Perl to write a program that performed RSA encryption prompted a widespread and practical interest in this pastime. In subsequent years, the term "code golf" has been applied to the pastime in other languages. A Perl Golf Apocalypse was held at Perl Conference 4.0 in Monterey, California in July 2000.
Obfuscation
As with C, obfuscated code competitions were a well known pastime in the late 1990s. The Obfuscated Perl Contest was a competition held by The Perl Journal from 1996 to 2000 that made an arch virtue of Perl's syntactic flexibility. Awards were given for categories such as "most powerful"—programs that made efficient use of space—and "best four-line signature" for programs that fit into four lines of 76 characters in the style of a Usenet signature block.
Poetry
Perl poetry is the practice of writing poems that can be compiled as legal Perl code, for example the piece known as Black Perl. Perl poetry is made possible by the large number of English words that are used in the Perl language. New poems are regularly submitted to the community at PerlMonks.
Perl on IRC
A number of IRC channels offer support for Perl and some of its modules.
CPAN Acme
There are also many examples of code written purely for entertainment on the CPAN. Lingua::Romana::Perligata, for example, allows writing programs in Latin. Upon execution of such a program, the module translates its source code into regular Perl and runs it.
The Perl community has set aside the "Acme" namespace for modules that are fun in nature (but its scope has widened to include exploratory or experimental code or any other module that is not meant to ever be used in production). Some of the Acme modules are deliberately implemented in amusing ways. This includes Acme::Bleach, one of the first modules in the Acme:: namespace, which allows the program's source code to be "whitened" (i.e., all characters replaced with whitespace) and yet still work.
Example code
In older versions of Perl, one would write the Hello World program as:
print "Hello, World!\n";
Here is a more complex Perl program that counts down seconds from a given starting value:
#!/usr/bin/perl
use strict;
use warnings;
my ( $remaining, $total );
$remaining = $total = shift(@ARGV);
STDOUT->autoflush(1);
while ( $remaining ) {
    printf( "Remaining %s/%s \r", $remaining--, $total );
    sleep 1;
}
print "\n";
The perl interpreter can also be used for one-off scripts on the command line. The following example (as invoked from an sh-compatible shell, such as Bash) translates the string "Bob" in all files ending with .txt in the current directory to "Robert":
$ perl -i.bak -lp -e 's/Bob/Robert/g' *.txt
Criticism
Perl has been referred to as "line noise" and a write-only language by its critics. The earliest such mention was in the first edition of the book Learning Perl, a Perl 4 tutorial book written by Randal L. Schwartz, in the first chapter of which he states: "Yes, sometimes Perl looks like line noise to the uninitiated, but to the seasoned Perl programmer, it looks like checksummed line noise with a mission in life." He also stated that the accusation that Perl is a write-only language could be avoided by coding with "proper care". The Perl overview document states that the names of built-in "magic" scalar variables "look like punctuation or line noise"; however, the English module provides both long and short English alternatives. The documentation also states that line noise in regular expressions can be mitigated using the /x modifier to add whitespace.
According to the Perl 6 FAQ, Perl 6 was designed to mitigate "the usual suspects" that elicit the "line noise" claim from Perl 5 critics, including the removal of "the majority of the punctuation variables" and the sanitization of the regex syntax. The Perl 6 FAQ also states that what is sometimes referred to as Perl's line noise is "the actual syntax of the language" just as gerunds and prepositions are a part of the English language. In a December 2012 blog posting, despite claiming that "Rakudo Perl 6 has failed and will continue to fail unless it gets some adult supervision", chromatic stated that the design of Perl 6 has a "well-defined grammar" as well as an "improved type system, a unified object system with an intelligent metamodel, metaoperators, and a clearer system of context that provides for such niceties as pervasive laziness". He also stated that "Perl 6 has a coherence and a consistency that Perl 5 lacks."
See also
Outline of Perl
Perl Data Language
Perl Object Environment
Plain Old Documentation
Further reading
Learning Perl 6th Edition (2011), O'Reilly. Beginner-level introduction to Perl.
Beginning Perl 1st Edition (2012), Wrox. A beginner's tutorial for those new to programming or just new to Perl.
Modern Perl 2nd Edition (2012), Onyx Neon. Describes Modern Perl programming techniques.
Programming Perl 4th Edition (2012), O'Reilly. The definitive Perl reference.
Effective Perl Programming 2nd Edition (2010), Addison-Wesley. Intermediate- to advanced-level guide to writing idiomatic Perl.
Perl Cookbook, O'Reilly. Practical Perl programming examples.
Functional programming techniques in Perl.
PDF
https://en.wikipedia.org/wiki/PDF
Portable Document Format (PDF), standardized as ISO 32000, is a file format developed by Adobe in 1992 to present documents, including text formatting and images, in a manner independent of application software, hardware, and operating systems. Based on the PostScript language, each PDF file encapsulates a complete description of a fixed-layout flat document, including the text, fonts, vector graphics, raster images and other information needed to display it. PDF has its roots in "The Camelot Project" initiated by Adobe co-founder John Warnock in 1991.
PDF was standardized as ISO 32000 in 2008. The latest edition, ISO 32000-2:2020, was published in December 2020.
PDF files may contain a variety of content besides flat text and graphics including logical structuring elements, interactive elements such as annotations and form-fields, layers, rich media (including video content), three-dimensional objects using U3D or PRC, and various other data formats. The PDF specification also provides for encryption and digital signatures, file attachments, and metadata to enable workflows requiring these features.
History
Adobe Systems made the PDF specification available free of charge in 1993. In the early years PDF was popular mainly in desktop publishing workflows, and competed with a variety of formats such as DjVu, Envoy, Common Ground Digital Paper, Farallon Replica and even Adobe's own PostScript format.
PDF was a proprietary format controlled by Adobe until it was released as an open standard on July 1, 2008, and published by the International Organization for Standardization as ISO 32000-1:2008, at which time control of the specification passed to an ISO Committee of volunteer industry experts. In 2008, Adobe published a Public Patent License to ISO 32000-1 granting royalty-free rights for all patents owned by Adobe that are necessary to make, use, sell, and distribute PDF-compliant implementations.
PDF 1.7, the sixth edition of the PDF specification that became ISO 32000-1, includes some proprietary technologies defined only by Adobe, such as Adobe XML Forms Architecture (XFA) and JavaScript extension for Acrobat, which are referenced by ISO 32000-1 as normative and indispensable for the full implementation of the ISO 32000-1 specification. These proprietary technologies are not standardized and their specification is published only on Adobe's website. Many of them are also not supported by popular third-party implementations of PDF.
In December 2020, the second edition of PDF 2.0, ISO 32000-2:2020, was published, including clarifications, corrections and critical updates to normative references. ISO 32000-2 does not include any proprietary technologies as normative references.
Technical details
A PDF file is often a combination of vector graphics, text, and bitmap graphics. The basic types of content in a PDF are
Typeset text stored as content streams (i.e., not encoded in plain text);
Vector graphics for illustrations and designs that consist of shapes and lines;
Raster graphics for photographs and other types of images; and
Multimedia objects in the document.
In later PDF revisions, a PDF document can also support links (inside document or web page), forms, JavaScript (initially available as a plugin for Acrobat 3.0), or any other types of embedded contents that can be handled using plug-ins.
PDF combines three technologies:
An equivalent subset of the PostScript page description programming language but in declarative form, for generating the layout and graphics.
A font-embedding/replacement system to allow fonts to travel with the documents.
A structured storage system to bundle these elements and any associated content into a single file, with data compression where appropriate.
PostScript language
PostScript is a page description language run in an interpreter to generate an image, a process requiring many resources. It can handle graphics and standard features of programming languages such as if statements and loop commands. PDF is largely based on PostScript but simplified to remove flow control features like these, while graphics commands equivalent to lineto remain.
Historically, the PostScript-like PDF code is generated from a source PostScript file. The graphics commands that are output by the PostScript code are collected and tokenized. Any files, graphics, or fonts to which the document refers also are collected. Then, everything is compressed to a single file. Therefore, the entire PostScript world (fonts, layout, measurements) remains intact.
As a document format, PDF has several advantages over PostScript:
PDF contains tokenized and interpreted results of the PostScript source code, for direct correspondence between changes to items in the PDF page description and changes to the resulting page appearance.
PDF (from version 1.4) supports transparent graphics; PostScript does not.
PostScript is an interpreted programming language with an implicit global state, so instructions accompanying the description of one page can affect the appearance of any following page. Therefore, all preceding pages in a PostScript document must be processed to determine the correct appearance of a given page, whereas each page in a PDF document is unaffected by the others. As a result, PDF viewers allow the user to quickly jump to the final pages of a long document, whereas a PostScript viewer needs to process all pages sequentially before being able to display the destination page (unless the optional PostScript Document Structuring Conventions have been carefully compiled and included).
PDF 1.6 and later supports interactive 3D documents embedded in a PDF file: 3D drawings can be embedded using U3D or PRC and various other data formats.
File format
A PDF file contains 7-bit ASCII characters, except for certain elements that may have binary content.
The file starts with a header containing a magic number (as a readable string) and the version of the format, for example %PDF-1.7. The format is a subset of a COS ("Carousel" Object Structure) format. A COS tree file consists primarily of objects, of which there are nine types:
Boolean values, representing true or false
Real numbers
Integers
Strings, enclosed within parentheses ((...)) or represented as hexadecimal within single angle brackets (<...>). Strings may contain 8-bit characters.
Names, starting with a forward slash (/)
Arrays, ordered collections of objects enclosed within square brackets ([...])
Dictionaries, collections of objects indexed by names enclosed within double angle brackets (<<...>>)
Streams, usually containing large amounts of optionally compressed binary data, preceded by a dictionary and enclosed between the stream and endstream keywords.
The null object
Furthermore, there may be comments, introduced with the percent sign (%). Comments may contain 8-bit characters.
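A short illustrative fragment of COS syntax, with one arbitrary example per object type (streams are omitted here because they must be paired with a dictionary that gives their length):
% this line is a comment
true                        % Boolean
3.14                        % real number
42                          % integer
(Hello, world)              % literal string in parentheses
<48656C6C6F>                % the same text as a hexadecimal string
/SomeName                   % name
[ 1 2 (three) /Four ]       % array
<< /Type /Example           % dictionary with arbitrary entries
   /Count 2 >>
null                        % the null object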
Objects may be either direct (embedded in another object) or indirect. Indirect objects are numbered with an object number and a generation number and defined between the obj and endobj keywords if residing in the document root. Beginning with PDF version 1.5, indirect objects (except other streams) may also be located in special streams known as object streams (marked /Type /ObjStm). This technique enables non-stream objects to have standard stream filters applied to them, reduces the size of files that have large numbers of small indirect objects and is especially useful for Tagged PDF. Object streams do not support specifying an object's generation number (other than 0).
An index table, also called the cross-reference table, is located near the end of the file and gives the byte offset of each indirect object from the start of the file. This design allows for efficient random access to the objects in the file, and also allows for small changes to be made without rewriting the entire file (incremental update). Before PDF version 1.5, the table would always be in a special ASCII format, be marked with the xref keyword, and follow the main body composed of indirect objects. Version 1.5 introduced optional cross-reference streams, which have the form of a standard stream object, possibly with filters applied. Such a stream may be used instead of the ASCII cross-reference table and contains the offsets and other information in binary format. The format is flexible in that it allows for integer width specification (using the /W array), so that for example, a document not exceeding 64 KiB in size may dedicate only 2 bytes for object offsets.
At the end of a PDF file is a footer containing
The startxref keyword followed by an offset to the start of the cross-reference table (starting with the xref keyword) or the cross-reference stream object, followed by
The %%EOF end-of-file marker.
If a cross-reference stream is not being used, the footer is preceded by the trailer keyword followed by a dictionary containing information that would otherwise be contained in the cross-reference stream object's dictionary:
A reference to the root object of the tree structure, also known as the catalog (/Root)
The count of indirect objects in the cross-reference table (/Size)
Other optional information
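Putting these pieces together, a minimal single-page file has roughly the following shape. This is a sketch only: the byte offsets in the cross-reference table and after startxref are placeholders that a real writer must replace with the actual position of each object in the file.
%PDF-1.4
1 0 obj
<< /Type /Catalog /Pages 2 0 R >>
endobj
2 0 obj
<< /Type /Pages /Kids [3 0 R] /Count 1 >>
endobj
3 0 obj
<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] >>
endobj
xref
0 4
0000000000 65535 f 
0000000009 00000 n 
0000000058 00000 n 
0000000115 00000 n 
trailer
<< /Size 4 /Root 1 0 R >>
startxref
172
%%EOF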
There are two layouts to the PDF files: non-linearized (not "optimized") and linearized ("optimized"). Non-linearized PDF files can be smaller than their linear counterparts, though they are slower to access because portions of the data required to assemble pages of the document are scattered throughout the PDF file. Linearized PDF files (also called "optimized" or "web optimized" PDF files) are constructed in a manner that enables them to be read in a Web browser plugin without waiting for the entire file to download, since all objects required for the first page to display are optimally organized at the start of the file. PDF files may be optimized using Adobe Acrobat software or QPDF.
Imaging model
The basic design of how graphics are represented in PDF is very similar to that of PostScript, except for the use of transparency, which was added in PDF 1.4.
PDF graphics use a device-independent Cartesian coordinate system to describe the surface of a page. A PDF page description can use a matrix to scale, rotate, or skew graphical elements. A key concept in PDF is that of the graphics state, which is a collection of graphical parameters that may be changed, saved, and restored by a page description. PDF has (as of version 2.0) 25 graphics state properties, of which some of the most important are:
The current transformation matrix (CTM), which determines the coordinate system
The clipping path
The color space
The alpha constant, which is a key component of transparency
Black point compensation control (introduced in PDF 2.0)
Vector graphics
As in PostScript, vector graphics in PDF are constructed with paths. Paths are usually composed of lines and cubic Bézier curves, but can also be constructed from the outlines of text. Unlike PostScript, PDF does not allow a single path to mix text outlines with lines and curves. Paths can be stroked, filled, fill then stroked, or used for clipping. Strokes and fills can use any color set in the graphics state, including patterns. PDF supports several types of patterns. The simplest is the tiling pattern in which a piece of artwork is specified to be drawn repeatedly. This may be a colored tiling pattern, with the colors specified in the pattern object, or an uncolored tiling pattern, which defers color specification to the time the pattern is drawn. Beginning with PDF 1.3 there is also a shading pattern, which draws continuously varying colors. There are seven types of shading patterns of which the simplest are the axial shading (Type 2) and radial shading (Type 3).
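A small sketch of what such path construction looks like inside a page's content stream (the coordinates and colors are arbitrary):
q                             % save the graphics state
1 0 0 rg                      % non-stroking (fill) color: red in DeviceRGB
100 100 50 50 re              % rectangle path at (100,100), 50 wide, 50 high
f                             % fill the current path
0 0 1 RG                      % stroking color: blue
2 w                           % line width of 2 units
200 100 m                     % move to the curve's start point
220 160 280 160 300 100 c     % one cubic Bezier curve segment
S                             % stroke the path
Q                             % restore the graphics state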
Raster images
Raster images in PDF (called Image XObjects) are represented by dictionaries with an associated stream. The dictionary describes the properties of the image, and the stream contains the image data. (Less commonly, small raster images may be embedded directly in a page description as an inline image.) Images are typically filtered for compression purposes. Image filters supported in PDF include the following general-purpose filters:
ASCII85Decode, a filter used to put the stream into 7-bit ASCII,
ASCIIHexDecode, similar to ASCII85Decode but less compact,
FlateDecode, a commonly used filter based on the deflate algorithm defined in RFC 1951 (deflate is also used in the gzip, PNG, and ZIP file formats among others); introduced in PDF 1.2; it can use one of two groups of predictor functions for more compact zlib/deflate compression: Predictor 2 from the TIFF 6.0 specification and predictors (filters) from the PNG specification,
LZWDecode, a filter based on LZW Compression; it can use one of two groups of predictor functions for more compact LZW compression: Predictor 2 from the TIFF 6.0 specification and predictors (filters) from the PNG specification,
RunLengthDecode, a simple compression method for streams with repetitive data using the run-length encoding algorithm and the image-specific filters,
DCTDecode, a lossy filter based on the JPEG standard,
CCITTFaxDecode, a lossless bi-level (black/white) filter based on the Group 3 or Group 4 CCITT (ITU-T) fax compression standard defined in ITU-T T.4 and T.6,
JBIG2Decode, a lossy or lossless bi-level (black/white) filter based on the JBIG2 standard, introduced in PDF 1.4, and
JPXDecode, a lossy or lossless filter based on the JPEG 2000 standard, introduced in PDF 1.5.
Normally all image content in a PDF is embedded in the file. But PDF allows image data to be stored in external files by the use of external streams or Alternate Images. Standardized subsets of PDF, including PDF/A and PDF/X, prohibit these features.
Text
Text in PDF is represented by text elements in page content streams. A text element specifies that characters should be drawn at certain positions. The characters are specified using the encoding of a selected font resource.
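A sketch of a text element as it appears in a content stream; /F1 is assumed to be a font resource defined in the page's resource dictionary.
BT                  % begin a text object
/F1 12 Tf           % select font resource F1 at 12 points
72 720 Td           % move the text position near the top-left of a Letter page
(Hello, PDF) Tj     % show a string using the font's encoding
ET                  % end the text object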
A font object in PDF is a description of a digital typeface. It may either describe the characteristics of a typeface, or it may include an embedded font file. The latter case is called an embedded font while the former is called an unembedded font. The font files that may be embedded are based on widely used standard digital font formats: Type 1 (and its compressed variant CFF), TrueType, and (beginning with PDF 1.6) OpenType. Additionally PDF supports the Type 3 variant in which the components of the font are described by PDF graphic operators.
Fourteen typefaces, known as the standard 14 fonts, have a special significance in PDF documents:
Times (v3) (in regular, italic, bold, and bold italic)
Courier (in regular, oblique, bold and bold oblique)
Helvetica (v3) (in regular, oblique, bold and bold oblique)
Symbol
Zapf Dingbats
These fonts are sometimes called the base fourteen fonts. They, or suitable substitute fonts with the same metrics, should be available in most PDF readers, but availability is not guaranteed, and the fonts may display correctly only if the system has them installed. Fonts may be substituted if they are not embedded in a PDF.
Within text strings, characters are shown using character codes (integers) that map to glyphs in the current font using an encoding. There are several predefined encodings, including WinAnsi, MacRoman, and many encodings for East Asian languages, and a font can have its own built-in encoding. (Although the WinAnsi and MacRoman encodings are derived from the historical properties of the Windows and Macintosh operating systems, fonts using these encodings work equally well on any platform.) A PDF document can specify a predefined encoding, use the font's built-in encoding, or provide a lookup table of differences to a predefined or built-in encoding (not recommended with TrueType fonts). The encoding mechanisms in PDF were designed for Type 1 fonts, and the rules for applying them to TrueType fonts are complex.
For large fonts or fonts with non-standard glyphs, the special encodings Identity-H (for horizontal writing) and Identity-V (for vertical) are used. With such fonts, it is necessary to provide a ToUnicode table if semantic information about the characters is to be preserved.
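The following Python sketch illustrates why a ToUnicode table matters: it maps hypothetical two-byte character codes, as shown with an Identity-H encoded font, back to Unicode text during extraction. The code values and the mapping itself are invented for the example.

```python
# Sketch: applying a ToUnicode-style mapping while extracting text from a
# string shown with an Identity-H encoded font. All code values are invented.

to_unicode = {          # CID -> Unicode, as a ToUnicode CMap would provide
    0x0024: "H",
    0x0048: "e",
    0x004F: "l",
    0x0052: "o",
}

shown_bytes = bytes.fromhex("0024 0048 004F 004F 0052".replace(" ", ""))

def extract(data: bytes) -> str:
    # Identity-H uses two-byte codes; without a ToUnicode table the codes
    # carry no semantic (Unicode) meaning for text extraction.
    cids = [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2)]
    return "".join(to_unicode.get(cid, "\ufffd") for cid in cids)

print(extract(shown_bytes))   # -> "Hello"
```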
Transparency
The original imaging model of PDF was, like PostScript's, opaque: each object drawn on the page completely replaced anything previously marked in the same location. In PDF 1.4 the imaging model was extended to allow transparency. When transparency is used, new objects interact with previously marked objects to produce blending effects. The addition of transparency to PDF was done by means of new extensions that were designed to be ignored in products written to the PDF 1.3 and earlier specifications. As a result, files that use a small amount of transparency might display acceptably in older viewers, but files making extensive use of transparency could be rendered incorrectly by an older viewer.
The transparency extensions are based on the key concepts of transparency groups, blending modes, shape, and alpha. The model is closely aligned with the features of Adobe Illustrator version 9. The blend modes were based on those used by Adobe Photoshop at the time. When the PDF 1.4 specification was published, the formulas for calculating blend modes were kept secret by Adobe. They have since been published.
The concept of a transparency group in the PDF specification is independent of existing notions of "group" or "layer" in applications such as Adobe Illustrator. Those groupings reflect logical relationships among objects that are meaningful when editing those objects, but they are not part of the imaging model.
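The published formulas take the form αr = αb + αs − αb·αs and Cr = (1 − αs/αr)·Cb + (αs/αr)·[(1 − αb)·Cs + αb·B(Cb, Cs)], where the backdrop has color Cb and alpha αb, the source has color Cs and alpha αs, and B is the blend function (Multiply, for instance, uses B(Cb, Cs) = Cb·Cs). The Python sketch below evaluates these formulas for a single color component; the input values are arbitrary.

```python
# Sketch of the basic transparency compositing formula for one color
# component, using the Multiply blend mode. Input values are arbitrary.

def multiply(cb: float, cs: float) -> float:
    return cb * cs                      # B(Cb, Cs) for the Multiply mode

def composite(cb: float, cs: float, ab: float, as_: float, blend=multiply):
    ar = ab + as_ - ab * as_            # resulting alpha (union)
    if ar == 0.0:
        return 0.0, 0.0                 # fully transparent result
    cr = (1 - as_ / ar) * cb + (as_ / ar) * ((1 - ab) * cs + ab * blend(cb, cs))
    return cr, ar

print(composite(cb=0.8, cs=0.5, ab=1.0, as_=0.6))   # -> (0.56, 1.0)
```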
Additional features
Logical structure and accessibility
A "tagged" PDF (see clause 14.8 in ISO 32000) includes document structure and semantics information to enable reliable text extraction and accessibility. Technically speaking, tagged PDF is a stylized use of the format that builds on the logical structure framework introduced in PDF 1.3. Tagged PDF defines a set of standard structure types and attributes that allow page content (text, graphics, and images) to be extracted and reused for other purposes.
Tagged PDF is not required in situations where a PDF file is intended only for print. Since the feature is optional, and since the rules for Tagged PDF were relatively vague in ISO 32000-1, support for tagged PDF amongst consuming devices, including assistive technology (AT), is uneven at this time. ISO 32000-2, however, includes an improved discussion of tagged PDF which is anticipated to facilitate further adoption.
An ISO-standardized subset of PDF specifically targeted at accessibility, PDF/UA, was first published in 2012.
Optional Content Groups (layers)
With the introduction of PDF version 1.5 (2003) came the concept of layers. Layers, more formally known as Optional Content Groups (OCGs), refer to sections of content in a PDF document that can be selectively viewed or hidden by document authors or consumers. This capability is useful in CAD drawings, layered artwork, maps, multi-language documents, etc.
The feature consists of an Optional Content Properties dictionary added to the document root. This dictionary contains an array of Optional Content Groups (OCGs), each describing a set of information and each of which may be individually displayed or suppressed, plus a set of Optional Content Configuration dictionaries, which give the status (displayed or suppressed) of the given OCGs.
Encryption and signatures
A PDF file may be encrypted for security, in which case a password is needed to view or edit the contents. PDF 2.0 defines 256-bit AES encryption as the standard for PDF 2.0 files. The PDF Reference also defines ways that third parties can define their own encryption systems for PDF.
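For illustration only, the sketch below applies AES-256 in CBC mode to a stream's bytes using the third-party Python cryptography package. It is not an implementation of PDF's standard security handler: in real files the encryption key is derived from the password by algorithms defined in ISO 32000, and padding, per-object keys, and string handling follow the specification's own rules.

```python
# Illustrative only: AES-256-CBC encryption of some stream bytes with the
# third-party "cryptography" package. Real PDF security handlers derive the
# file encryption key from the password and apply their own rules.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

key = os.urandom(32)          # a 256-bit key (would be derived, not random)
iv = os.urandom(16)

padder = padding.PKCS7(128).padder()
plaintext = padder.update(b"...stream data...") + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = iv + encryptor.update(plaintext) + encryptor.finalize()
print(len(ciphertext), "encrypted bytes (IV prepended)")
```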
PDF files may be digitally signed to provide secure authentication; complete details on implementing digital signatures in PDF are provided in ISO 32000-2.
PDF files may also contain embedded DRM restrictions that provide further controls that limit copying, editing or printing. These restrictions depend on the reader software to obey them, so the security they provide is limited.
The standard security provided by PDF consists of two different methods and two different passwords: a user password, which encrypts the file and prevents opening, and an owner password, which specifies operations that should be restricted even when the document is decrypted, which can include modifying, printing, or copying text and graphics out of the document, or adding or modifying text notes and AcroForm fields. The user password encrypts the file, while the owner password does not, instead relying on client software to respect these restrictions. An owner password can easily be removed by software, including some free online services. Thus, the use restrictions that a document author places on a PDF document are not secure, and cannot be assured once the file is distributed; this warning is displayed when applying such restrictions using Adobe Acrobat software to create or edit PDF files.
Even without removing the password, most freeware or open source PDF readers ignore the permission "protections" and allow the user to print or make copies of excerpts of the text as if the document were not limited by password protection.
Beginning with PDF 1.5, Usage rights (UR) signatures are used to enable additional interactive features that are not available by default in a particular PDF viewer application. The signature is used to validate that the permissions have been granted by a bona fide granting authority. For example, it can be used to allow a user to:
Save the PDF document along with modified form and/or annotation data
Import form data files in FDF, XFDF, and text (CSV/TSV) formats
Export form data files in FDF and XFDF formats
Submit form data
Instantiate new pages from named page templates
Apply a digital signature to an existing digital signature form field
Create, delete, modify, copy, import, and export annotations
For example, Adobe Systems grants permissions to enable additional features in Adobe Reader, using public-key cryptography. Adobe Reader verifies that the signature uses a certificate from an Adobe-authorized certificate authority. Any PDF application can use this same mechanism for its own purposes.
Under specific circumstances, including unpatched systems on the receiver's side, the information that the receiver of a digitally signed document sees can be manipulated by the sender after the document has been signed.
PAdES (PDF Advanced Electronic Signatures) is a set of restrictions and extensions to PDF and ISO 32000-1 making it suitable for advanced electronic signatures. This is published by ETSI as TS 102 778.
File attachments
PDF files can have file attachments which processors may access and open or save to a local filesystem.
Metadata
PDF files can contain two types of metadata. The first is the Document Information Dictionary, a set of key/value fields such as author, title, subject, creation and update dates. This is optional and is referenced from Info key in the trailer of the file. A small set of fields is defined and can be extended with additional text values if required. This method is deprecated in PDF 2.0.
In PDF 1.4, support was added for Metadata Streams, using the Extensible Metadata Platform (XMP) to add XML standards-based extensible metadata as used in other file formats. PDF 2.0 allows metadata to be attached to any object in the document, such as information about embedded illustrations, fonts, images as well as the whole document (attaching to the document catalog), using an extensible schema.
PDF documents can also contain display settings, including the page display layout and zoom level in a Viewer Preferences object. Adobe Reader uses these settings to override the user's default settings when opening the document. The free Adobe Reader cannot remove these settings.
Accessibility
PDF files can be created specifically to be accessible for people with disabilities. PDF files can include tags, text equivalents, captions, audio descriptions, and more. Some software can automatically produce tagged PDFs, but this feature is not always enabled by default. Leading screen readers, including JAWS, Window-Eyes, Hal, and Kurzweil 1000 and 3000, can read tagged PDF. Moreover, tagged PDFs can be re-flowed and magnified for readers with visual impairments. Adding tags to older PDFs and those that are generated from scanned documents can present some challenges.
One of the significant challenges with PDF accessibility is that PDF documents have three distinct views, which, depending on the document's creation, can be inconsistent with each other. The three views are (i) the physical view, (ii) the tags view, and (iii) the content view. The physical view is displayed and printed (what most people consider a PDF document). The tags view is what screen readers and other assistive technologies use to deliver high-quality navigation and reading experience to users with disabilities. The content view is based on the physical order of objects within the PDF's content stream and may be displayed by software that does not fully support the tags' view, such as the Reflow feature in Adobe's Reader.
PDF/UA, the International Standard for accessible PDF based on ISO 32000-1, was first published as ISO 14289-1 in 2012 and establishes normative language for accessible PDF technology.
Multimedia
Rich Media PDF is a PDF file including interactive content that can be embedded or linked within the file.
Forms
Interactive Forms is a mechanism to add forms to the PDF file format. PDF currently supports two different methods for integrating data and PDF forms. Both formats today coexist in the PDF specification:
AcroForms (also known as Acrobat forms), introduced in the PDF 1.2 format specification and included in all later PDF specifications.
XML Forms Architecture (XFA) forms, introduced in the PDF 1.5 format specification. Adobe XFA Forms are not compatible with AcroForms. XFA was deprecated from PDF with PDF 2.0.
AcroForms were introduced in the PDF 1.2 format. AcroForms permit using objects (e.g. text boxes, radio buttons, etc.) and some code (e.g. JavaScript). Alongside the standard PDF action types, interactive forms (AcroForms) support submitting, resetting, and importing data. The "submit" action transmits the names and values of selected interactive form fields to a specified uniform resource locator (URL). Interactive form field names and values may be submitted in any of the following formats (depending on the settings of the action's ExportFormat, SubmitPDF, and XFDF flags):
HTML Form format (HTML 4.01 Specification since PDF 1.5; HTML 2.0 since PDF 1.2)
Forms Data Format (FDF), based on PDF; it uses the same syntax and has essentially the same file structure, but is much simpler than PDF, since the body of an FDF document consists of only one required object. Forms Data Format is defined in the PDF specification (since PDF 1.2). The Forms Data Format can be used when submitting form data to a server, receiving the response, and incorporating it into the interactive form. It can also be used to export form data to stand-alone files that can be imported back into the corresponding PDF interactive form. FDF was originally defined in 1996 and is now specified as part of ISO 32000-2:2017.
XML Forms Data Format (XFDF) (external XML Forms Data Format Specification, Version 2.0; supported since PDF 1.5; it replaced the "XML" form submission format defined in PDF 1.4), the XML version of Forms Data Format; XFDF implements only a subset of FDF containing forms and annotations. Some entries in the FDF dictionary do not have XFDF equivalents – such as the Status, Encoding, JavaScript, Pages keys, EmbeddedFDFs, Differences, and Target. In addition, XFDF does not allow the spawning, or addition, of new pages based on the given data, as can be done when using an FDF file. The XFDF specification is referenced (but not included) in the PDF 1.5 specification (and in later versions). It is described separately in the XML Forms Data Format Specification. The PDF 1.4 specification allowed form submissions in XML format, but this was replaced by submissions in XFDF format in the PDF 1.5 specification. XFDF conforms to the XML standard. XFDF can be used in the same way as FDF; e.g., form data is submitted to a server, modifications are made, then sent back and the new form data is imported in an interactive form. It can also be used to export form data to stand-alone files that can be imported back into the corresponding PDF interactive form. As of August 2019, XFDF 3.0 is an ISO/IEC standard under the formal name ISO 19444-1:2019 - Document management — XML Forms Data Format — Part 1: Use of ISO 32000-2 (XFDF 3.0). This standard is a normative reference of ISO 32000-2.
PDF
The entire document can be submitted rather than individual fields and values, as was defined in PDF 1.4.
AcroForms can keep form field values in external stand-alone files containing key-value pairs. The external files may use Forms Data Format (FDF) and XML Forms Data Format (XFDF) files. The usage rights (UR) signatures define rights for import form data files in FDF, XFDF and text (CSV/TSV) formats, and export form data files in FDF and XFDF formats.
In PDF 1.5, Adobe Systems introduced a proprietary format for forms; Adobe XML Forms Architecture (XFA). Adobe XFA Forms are not compatible with ISO 32000's AcroForms feature, and most PDF processors do not handle XFA content. The XFA specification is referenced from ISO 32000-1/PDF 1.7 as an external proprietary specification, and was entirely deprecated from PDF with ISO 32000-2 (PDF 2.0).
Licensing
Anyone may create applications that can read and write PDF files without having to pay royalties to Adobe Systems; Adobe holds patents to PDF, but licenses them for royalty-free use in developing software complying with its PDF specification.
Security
In November 2019, researchers from Ruhr University Bochum and Hackmanit GmbH published attacks on digitally signed PDFs. They showed how to change the visible content in a signed PDF without invalidating the signature in 21 of 22 desktop PDF viewers and 6 of 8 online validation services by abusing implementation flaws.
At the same conference, they additionally showed how to exfiltrate the plaintext of encrypted content in PDFs. In 2021, they showed new so-called shadow attacks on PDFs that abuse the flexibility of features provided in the specification. An overview of security issues in PDFs regarding denial of service, information disclosure, data manipulation, and arbitrary code execution attacks was presented by Jens Müller.
PDF attachments carrying viruses were first discovered in 2001. The virus, named OUTLOOK.PDFWorm or Peachy, uses Microsoft Outlook to send itself as an attached Adobe PDF file. It was activated with Adobe Acrobat, but not with Acrobat Reader.
From time to time, new vulnerabilities are discovered in various versions of Adobe Reader, prompting the company to issue security fixes. Other PDF readers are also susceptible. One aggravating factor is that a PDF reader can be configured to start automatically if a web page has an embedded PDF file, providing a vector for attack. If a malicious web page contains an infected PDF file that takes advantage of a vulnerability in the PDF reader, the system may be compromised even if the browser is secure. Some of these vulnerabilities are a result of the PDF standard allowing PDF documents to be scripted with JavaScript. Disabling JavaScript execution in the PDF reader can help mitigate such future exploits, although it does not protect against exploits in other parts of the PDF viewing software. Security experts say that JavaScript is not essential for a PDF reader and that the security benefit that comes from disabling JavaScript outweighs any compatibility issues caused. One way of avoiding PDF file exploits is to have a local or web service convert files to another format before viewing.
On March 30, 2010, security researcher Didier Stevens reported an Adobe Reader and Foxit Reader exploit that runs a malicious executable if the user allows it to launch when asked.
Software
Viewers and editors
PDF viewers are generally provided free of charge, and many versions are available from a variety of sources.
There are many software options for creating PDFs, including the PDF printing capabilities built into macOS, iOS, and most Linux distributions, LibreOffice, Microsoft Office 2007 (if updated to SP2) and later, WordPerfect 9, Scribus, numerous PDF print drivers for Microsoft Windows, the pdfTeX typesetting system, the DocBook PDF tools, applications developed around Ghostscript, and Adobe Acrobat itself, as well as Adobe InDesign, Adobe FrameMaker, Adobe Illustrator, and Adobe Photoshop. Google's online office suite Google Docs allows for uploading and saving to PDF. Some web apps offer free PDF editing and annotation tools.
The Free Software Foundation once considered one of their high priority projects to be "developing a free, high-quality and fully functional set of libraries and programs that implement the PDF file format and associated technologies to the ISO 32000 standard." In 2011, however, the GNU PDF project was removed from the list of "high priority projects" due to the maturation of the Poppler library, which has enjoyed wider use in applications such as Evince with the GNOME desktop environment. Poppler is based on the Xpdf code base. There are also commercial development libraries available, as listed in List of PDF software.
The Apache PDFBox project of the Apache Software Foundation is an open source Java library for working with PDF documents. PDFBox is licensed under the Apache License.
Printing
Raster image processors (RIPs) are used to convert PDF files into a raster format suitable for imaging onto paper and other media in printers, digital production presses and prepress in a process known as rasterisation. RIPs capable of processing PDF directly include the Adobe PDF Print Engine from Adobe Systems and Jaws and the Harlequin RIP from Global Graphics.
In 1993, the Jaws raster image processor from Global Graphics became the first shipping prepress RIP that interpreted PDF natively without conversion to another format. The company released an upgrade to their Harlequin RIP with the same capability in 1997.
Agfa-Gevaert introduced and shipped Apogee, the first prepress workflow system based on PDF, in 1997.
Many commercial offset printers have accepted the submission of press-ready PDF files as a print source, specifically the PDF/X-1a subset and variations of the same. The submission of press-ready PDF files replaces the problematic process of receiving collected native working files.
In 2006, PDF was widely accepted as the standard print job format at the Open Source Development Labs Printing Summit. It is supported as a print job format by the Common Unix Printing System, and desktop application projects such as GNOME, KDE, Firefox, Thunderbird, LibreOffice and OpenOffice have switched to emitting print jobs in PDF.
Some desktop printers also support direct PDF printing, which can interpret PDF data without external help.
Native display model
PDF was selected as the "native" metafile format for Mac OS X, replacing the PICT format of the earlier classic Mac OS. The imaging model of the Quartz graphics layer is based on the model common to Display PostScript and PDF, leading to the nickname Display PDF. The Preview application can display PDF files, as can version 2.0 and later of the Safari web browser. System-level support for PDF allows Mac OS X applications to create PDF documents automatically, provided they support the OS-standard printing architecture. The files are then exported in PDF 1.3 format according to the file header. When taking a screenshot under Mac OS X versions 10.0 through 10.3, the image was also captured as a PDF; later versions save screen captures as a PNG file, though this behavior can be set back to PDF if desired.
Annotation
Adobe Acrobat is one example of proprietary software that allows the user to annotate, highlight, and add notes to already created PDF files. One UNIX application available as free software (under the GNU General Public License) is PDFedit. The freeware Foxit Reader, available for Microsoft Windows, macOS and Linux, allows annotating documents. Tracker Software's PDF-XChange Viewer allows annotations and markups without restrictions in its freeware alternative. Apple's macOS integrated PDF viewer, Preview, also enables annotations, as does the open-source software Skim, with the latter supporting interaction with LaTeX, SyncTeX, and PDFSync and integration with BibDesk reference management software. The freeware Qiqqa can create an annotation report that summarizes all the annotations and notes one has made across a library of PDFs. The Text Verification Tool exports differences in documents as annotations and markups.
There are also web annotation systems that support annotation in PDF and other document formats. In cases where PDFs are expected to have all of the functionality of paper documents, ink annotation is required.
Alternatives
The Open XML Paper Specification is a competing format used both as a page description language and as the native print spooler format for Microsoft Windows since Windows Vista.
Mixed Object: Document Content Architecture is a competing format. MO:DCA-P is a part of Advanced Function Presentation.
See also
Web document
XSL Formatting Objects
References
Further reading
PDF 2.0
PDF 1.7 (ISO 32000-1:2008)
PDF 1.7 and errata to 1.7
PDF 1.6 and errata to 1.6
PDF 1.5 and errata to 1.5
PDF 1.4 and errata to 1.4
PDF 1.3 and errata to 1.3
External links
PDF Association – The PDF Association is the industry association for software developers producing or processing PDF files.
Adobe PDF 101: Summary of PDF
Adobe: PostScript vs. PDF – Official introductory comparison of PS, EPS vs. PDF.
Information about PDF/E and PDF/UA specification for accessible documents file format (archived by the Wayback Machine)
PDF/A-1 ISO standard published by the International Organization for Standardization (with corrigenda)
PDF Reference and Adobe Extensions to the PDF Specification
Portable Document Format: An Introduction for Programmers – Introduction to PDF vs. PostScript and PDF internals (up to v1.3)
The Camelot Paper – the paper in which John Warnock outlined the project that created PDF
Everything you wanted to know about PDF but were afraid to ask – recording of a talk by Leonard Rosenthol (45 mins) (Adobe Systems) at TUG 2007
Computer-related introductions in 1993
Adobe Inc.
Digital press
Electronic documents
Graphics file formats
ISO standards
Office document file formats
Open formats
Page description languages
Vector graphics |
24107 | https://en.wikipedia.org/wiki/Peer-to-peer | Peer-to-peer | Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the application. They are said to form a peer-to-peer network of nodes.
Peers make a portion of their resources, such as processing power, disk storage or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client–server model in which the consumption and supply of resources is divided.
While P2P systems had previously been used in many application domains, the architecture was popularized by the file sharing system Napster, originally released in 1999. The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general.
Historical development
While P2P systems had previously been used in many application domains, the concept was popularized by file sharing systems such as the music-sharing application Napster (originally released in 1999). The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems." The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1.
Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links. The early Internet was more open than it is today: two machines connected to the Internet could send packets to each other without firewalls and other security measures. This contrasts with the broadcasting-like structure of the web as it has developed over the years. As a precursor to the Internet, ARPANET was a successful client–server network where "every participating node could request and serve content." However, ARPANET was not self-organized, and it lacked the ability to "provide any means for context or content-based routing beyond 'simple' address-based routing."
Therefore, Usenet, a distributed messaging system that is often described as an early peer-to-peer architecture, was established. It was developed in 1979 as a system that enforces a decentralized model of control. The basic model is a client–server model from the user or client perspective that offers a self-organizing approach to newsgroup servers. However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of email clients and their direct connections is strictly a client–server relationship.
In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster. Napster was the beginning of peer-to-peer networks, as we know them today, where "participating users establish a virtual network, entirely independent from the physical network, without having to obey any administrative authorities or restrictions."
Architecture
A peer-to-peer network is designed around the notion of equal peer nodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client–server model where communication is usually to and from a central server. A typical example of a file transfer that uses the client–server model is the File Transfer Protocol (FTP) service in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests.
Routing and resource discovery
Peer-to-peer networks generally implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network. Data is still exchanged directly over the underlying TCP/IP network, but at the application layer peers are able to communicate with each other directly, via the logical overlay links (each of which corresponds to a path through the underlying physical network). Overlays are used for indexing and peer discovery, and make the P2P system independent from the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured or structured (or as a hybrid between the two).
Unstructured networks
Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other. (Gnutella, Gossip, and Kazaa are examples of unstructured P2P protocols).
Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay. Also, because the role of all peers in the network is the same, unstructured networks are highly robust in the face of high rates of "churn"—that is, when large numbers of peers are frequently joining and leaving the network.
However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to find as many peers as possible that share the data. Flooding causes a very high amount of signaling traffic in the network, uses more CPU/memory (by requiring every peer to process all search queries), and does not ensure that search queries will always be resolved. Furthermore, since there is no correlation between a peer and the content managed by it, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers and any peer searching for it is likely to find the same thing. But if a peer is looking for rare data shared by only a few other peers, then it is highly unlikely that the search will be successful.
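A minimal sketch of the flooding approach, limited by a time-to-live (TTL) counter, is shown below; the overlay graph, the stored content, and the TTL value are all invented for the example, which also illustrates why rare content can be missed.

```python
# Sketch: TTL-limited flooding search in an unstructured overlay.
# The overlay graph and stored data are invented for the example.
from collections import deque

overlay = {                       # node -> neighbor list
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"],
    "E": ["D"],
}
stored = {"E": {"rare-file"}, "B": {"popular-file"}, "C": {"popular-file"}}

def flood_search(start: str, wanted: str, ttl: int):
    hits, seen = [], {start}
    frontier = deque([(start, ttl)])
    while frontier:
        node, remaining = frontier.popleft()
        if wanted in stored.get(node, set()):
            hits.append(node)
        if remaining == 0:
            continue                       # the query expires at this hop
        for neighbor in overlay[node]:
            if neighbor not in seen:       # each peer processes the query once
                seen.add(neighbor)
                frontier.append((neighbor, remaining - 1))
    return hits

print(flood_search("A", "popular-file", ttl=2))   # found quickly nearby
print(flood_search("A", "rare-file", ttl=2))      # rare content may be missed
```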
Structured networks
In structured peer-to-peer networks the overlay is organized into a specific topology, and the protocol ensures that any node can efficiently search the network for a file/resource, even if the resource is extremely rare.
The most common type of structured P2P networks implement a distributed hash table (DHT), in which a variant of consistent hashing is used to assign ownership of each file to a particular peer. This enables peers to search for resources on the network using a hash table: that is, (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key.
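A toy sketch of this idea follows: node identifiers and keys are hashed onto the same ring, and each key is stored on the first node whose position follows it. The node names and keys are invented, and real DHTs such as Chord or Kademlia add routing tables so that lookups take a logarithmic number of hops instead of requiring global knowledge of the ring.

```python
# Toy sketch of consistent hashing as used by DHTs: each key is stored on the
# first node whose hash-ring position follows the key's position. Node names
# and keys are invented; real DHTs (e.g. Chord, Kademlia) add routing tables.
import hashlib
from bisect import bisect_right

def ring_position(value: str) -> int:
    return int.from_bytes(hashlib.sha1(value.encode()).digest(), "big")

nodes = ["peer-1", "peer-2", "peer-3", "peer-4"]
ring = sorted((ring_position(n), n) for n in nodes)

def responsible_node(key: str) -> str:
    pos = ring_position(key)
    idx = bisect_right(ring, (pos, chr(0x10FFFF)))   # first node after the key
    return ring[idx % len(ring)][1]                  # wrap around the ring

for key in ["song.mp3", "thesis.pdf", "backup.tar"]:
    print(key, "->", responsible_node(key))
```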
However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors that satisfy specific criteria. This makes them less robust in networks with a high rate of churn (i.e. with large numbers of nodes frequently joining and leaving the network). More recent evaluations of P2P resource discovery solutions under real workloads have pointed out several issues in DHT-based solutions, such as the high cost of advertising/discovering resources and static and dynamic load imbalance.
Notable distributed networks that use DHTs include Tixati, an alternative to BitTorrent's distributed tracker, the Kad network, the Storm botnet, YaCy, and the Coral Content Distribution Network. Some prominent research projects include the Chord project, Kademlia, PAST storage utility, P-Grid, a self-organized and emerging overlay network, and CoopNet content distribution system. DHT-based networks have also been widely utilized for accomplishing efficient resource discovery for grid computing systems, as it aids in resource management and scheduling of applications.
Hybrid models
Hybrid models are a combination of peer-to-peer and client–server models. A common hybrid model is to have a central server that helps peers find each other. Spotify was an example of a hybrid model until 2014. There are a variety of hybrid models, all of which make trade-offs between the centralized functionality provided by a structured server/client network and the node equality afforded by the pure peer-to-peer unstructured networks. Currently, hybrid models have better performance than either pure unstructured networks or pure structured networks because certain functions, such as searching, do require a centralized functionality but benefit from the decentralized aggregation of nodes provided by unstructured networks.
CoopNet content distribution system
CoopNet (Cooperative Networking) was a proposed system for off-loading serving to peers who have recently downloaded content, proposed by computer scientists Venkata N. Padmanabhan and Kunwadee Sripanidkulchai, working at Microsoft Research and Carnegie Mellon University. When a server experiences an increase in load, it redirects incoming peers to other peers who have agreed to mirror the content, thus off-loading balance from the server. All of the information is retained at the server. This system makes use of the fact that the bottleneck is more likely in the outgoing bandwidth than in the CPU, hence its server-centric design. It assigns peers to other peers who are 'close in IP' to its neighbors (same prefix range) in an attempt to use locality. If multiple peers are found with the same file, it designates that the node choose the fastest of its neighbors. Streaming media is transmitted by having clients cache the previous stream, and then transmit it piece-wise to new nodes.
Security and trust
Peer-to-peer systems pose unique challenges from a computer security perspective.
Like any other form of software, P2P applications can contain vulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable to remote exploits.
Routing attacks
Since each node plays a role in routing traffic through the network, malicious users can perform a variety of "routing attacks", or denial of service attacks. Examples of common routing attacks include "incorrect lookup routing" whereby malicious nodes deliberately forward requests incorrectly or return false results, "incorrect routing updates" where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information, and "incorrect routing network partition" where when new nodes are joining they bootstrap via a malicious node, which places the new node in a partition of the network that is populated by other malicious nodes.
Corrupted data and malware
The prevalence of malware varies between different peer-to-peer protocols. Studies analyzing the spread of malware on P2P networks found, for example, that 63% of the answered download requests on the gnutella network contained some form of malware, whereas only 3% of the content on OpenFT contained malware. In both cases, the top three most common types of malware accounted for the large majority of cases (99% in gnutella, and 65% in OpenFT). Another study analyzing traffic on the Kazaa network found that 15% of the 500,000 file sample taken were infected by one or more of the 365 different computer viruses that were tested for.
Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network. For example, on the FastTrack network, the RIAA managed to introduce faked chunks into downloads and downloaded files (mostly MP3 files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. The RIAA is also known to have uploaded fake music and movies to P2P networks in order to deter illegal file sharing. Consequently, the P2P networks of today have seen an enormous increase of their security and file verification mechanisms. Modern hashing, chunk verification and different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts.
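The sketch below shows the basic idea behind such chunk verification: each received chunk is hashed and compared against a trusted list of hashes (for example, from a torrent's metadata) before it is accepted. The payload, chunk size, and hash list are invented for the example.

```python
# Sketch: verifying downloaded chunks against a trusted hash list, the basic
# defence against corrupted or fake data on modern P2P networks. The payload,
# chunk size, and hash list are invented for illustration.
import hashlib

CHUNK_SIZE = 16

original = b"example payload distributed in fixed-size chunks for the demo!!"
chunks = [original[i:i + CHUNK_SIZE] for i in range(0, len(original), CHUNK_SIZE)]
trusted_hashes = [hashlib.sha256(c).hexdigest() for c in chunks]  # from metadata

def verify(received: bytes, index: int) -> bool:
    return hashlib.sha256(received).hexdigest() == trusted_hashes[index]

good = chunks[1]
tampered = b"X" + chunks[1][1:]               # a peer serving corrupted data
print(verify(good, 1), verify(tampered, 1))   # -> True False
```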
Resilient and scalable computer networks
The decentralized nature of P2P networks increases robustness because it removes the single point of failure that can be inherent in a client–server based system. As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client–server architecture, clients share only their demands with the system, but not their resources. In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down.
Distributed storage and search
There are both advantages and disadvantages in P2P networks related to the topic of data backup, recovery, and availability. In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces. For example, YouTube has been pressured by the RIAA, MPAA, and entertainment industry to filter out copyrighted content. Although server-client networks are able to monitor and manage content availability, they can have more stability in the availability of the content they choose to host. A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable in sharing unpopular files because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point.
In this sense, the community of users in a P2P network is completely responsible for deciding what content is available. Unpopular files will eventually disappear and become unavailable as more people stop sharing them. Popular files, however, will be highly and easily distributed. Popular files on a P2P network actually have more stability and availability than files on central networks. In a centralized network, a simple loss of connection between the server and clients is enough to cause a failure, but in P2P networks, the connections between every node must be lost in order to cause a data sharing failure. In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its own backup system. Because of the lack of central authority in P2P networks, forces such as the recording industry, RIAA, MPAA, and the government are unable to delete or stop the sharing of content on P2P systems.
Applications
Content delivery
In P2P networks, clients both provide and use resources. This means that unlike client–server systems, the content-serving capacity of peer-to-peer networks can actually increase as more users begin to access the content (especially with protocols such as BitTorrent that require users to share; see performance measurement studies). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor.
File-sharing networks
Many peer-to-peer file sharing networks, such as Gnutella, G2, and the eDonkey network, popularized peer-to-peer technologies.
Peer-to-peer content delivery networks.
Peer-to-peer content services, e.g. caches for improved performance such as Correli Caches
Software publication and distribution (Linux distributions, several games) via file sharing networks.
Copyright infringements
Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over conflicts with copyright law. Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd. In the latter case, the Court unanimously held that defendant peer-to-peer file sharing companies Grokster and Streamcast could be sued for inducing copyright infringement.
Multimedia
The P2PTV and PDTP protocols.
Some proprietary multimedia applications use a peer-to-peer network along with streaming servers to stream audio and video to their clients.
Peercasting for multicasting streams.
Pennsylvania State University, MIT and Simon Fraser University are carrying on a project called LionShare designed for facilitating file sharing among educational institutions globally.
Osiris is a program that allows its users to create anonymous and autonomous web portals distributed via a P2P network.
The Theta Network is a cryptocurrency token platform that enables peer-to-peer streaming and CDN caching.
Other P2P applications
Bitcoin and alternatives such as Ether, Nxt, and Peercoin are peer-to-peer-based digital cryptocurrencies.
Dalesa, a peer-to-peer web cache for LANs (based on IP multicasting).
Dat, a distributed version-controlled publishing platform.
Filecoin is an open source, public, cryptocurrency and digital payment system intended to be a blockchain-based cooperative digital storage and data retrieval method.
I2P, an overlay network used to browse the Internet anonymously.
Unlike the related I2P, the Tor network is not itself peer-to-peer; however, it can enable peer-to-peer applications to be built on top of it via onion services.
The InterPlanetary File System (IPFS) is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia. Nodes in the IPFS network form a distributed file system.
Jami, a peer-to-peer chat and SIP app.
JXTA, a peer-to-peer protocol designed for the Java platform.
Netsukuku, a Wireless community network designed to be independent from the Internet.
Open Garden, connection sharing application that shares Internet access with other devices using Wi-Fi or Bluetooth.
Resilio Sync, a directory-syncing app.
Research like the Chord project, the PAST storage utility, the P-Grid, and the CoopNet content distribution system.
Syncthing, a directory-syncing app.
Tradepal and M-commerce applications that power real-time marketplaces.
The U.S. Department of Defense is conducting research on P2P networks as part of its modern network warfare strategy. In May 2003, Anthony Tether, then director of DARPA, testified that the United States military uses P2P networks.
WebTorrent is a P2P streaming torrent client in JavaScript for use in web browsers, as well as in the WebTorrent Desktop stand alone version that bridges WebTorrent and BitTorrent serverless networks.
Microsoft, in Windows 10, uses a proprietary peer-to-peer technology called "Delivery Optimization" to deploy operating system updates using end-users' PCs, either on the local network or other PCs. According to Microsoft's Channel 9, it led to a 30%–50% reduction in Internet bandwidth usage.
Artisoft's LANtastic was built as a peer-to-peer operating system. Machines can be both servers and workstations at the same time.
Hotline Communications' Hotline Client was built around decentralized servers with tracker software dedicated to any type of file, and it still operates today.
Social implications
Incentivizing resource sharing and cooperation
Cooperation among a community of participants is key to the continued success of P2P systems aimed at casual human users; these reach their full potential only when large numbers of nodes contribute resources. But in current practice, P2P networks often contain large numbers of users who utilize resources shared by other nodes, but who do not share anything themselves (often referred to as the "freeloader problem"). Freeloading can have a profound impact on the network and in some cases can cause the community to collapse. In these types of networks "users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance." Studying the social attributes of P2P networks is challenging due to high turnover, asymmetry of interest, and zero-cost identity. A variety of incentive mechanisms have been implemented to encourage or even force nodes to contribute resources.
Some researchers have explored the benefits of enabling virtual communities to self-organize and introduce incentives for resource sharing and cooperation, arguing that the social aspect missing from today's P2P systems should be seen both as a goal and a means for self-organized virtual communities to be built and fostered. Ongoing research efforts for designing effective incentive mechanisms in P2P systems, based on principles from game theory, are beginning to take on a more psychological and information-processing direction.
Privacy and anonymity
Some peer-to-peer networks (e.g. Freenet) place a heavy emphasis on privacy and anonymity—that is, ensuring that the contents of communications are hidden from eavesdroppers, and that the identities/locations of the participants are concealed. Public key cryptography can be used to provide encryption, data validation, authorization, and authentication for data/messages. Onion routing and other mix network protocols (e.g. Tarzan) can be used to provide anonymity.
Perpetrators of live streaming sexual abuse and other cybercrimes have used peer-to-peer platforms to carry out activities with anonymity.
Political implications
Intellectual property law and illegal sharing
Although peer-to-peer networks can be used for legitimate purposes, rights holders have targeted peer-to-peer over its involvement in the sharing of copyrighted material. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over issues surrounding copyright law. Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd. In both of the cases the file sharing technology was ruled to be legal as long as the developers had no ability to prevent the sharing of the copyrighted material. To establish criminal liability for copyright infringement on peer-to-peer systems, the government must prove that the defendant infringed a copyright willingly for the purpose of personal financial gain or commercial advantage. Fair use exceptions allow limited use of copyrighted material to be downloaded without acquiring permission from the rights holders. These documents are usually news reporting or research and scholarly work. Controversies have developed over the concern of illegitimate use of peer-to-peer networks regarding public safety and national security. When a file is downloaded through a peer-to-peer network, it is impossible to know who created the file or what users are connected to the network at a given time. Trustworthiness of sources is a potential security threat that can be seen with peer-to-peer systems.
A study ordered by the European Union found that illegal downloading may lead to an increase in overall video game sales because newer games charge for extra features or levels. The paper concluded that piracy had a negative financial impact on movies, music, and literature. The study relied on self-reported data about game purchases and use of illegal download sites. Pains were taken to remove effects of false and misremembered responses.
Network neutrality
Peer-to-peer applications present one of the core issues in the network neutrality controversy. Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high-bandwidth usage. Compared to Web browsing, e-mail or many other uses of the internet, where data is only transferred in short intervals and relatively small quantities, P2P file-sharing often consists of relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. In October 2007, Comcast, one of the largest broadband Internet providers in the United States, started blocking P2P applications such as BitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic. Critics point out that P2P networking has legitimate legal uses, and that this is another way that large providers are trying to control use and content on the Internet, and direct people towards a client–server-based application architecture. The client–server model provides financial barriers-to-entry to small publishers and individuals, and can be less efficient for sharing large files. As a reaction to this bandwidth throttling, several P2P applications started implementing protocol obfuscation, such as the BitTorrent protocol encryption. Techniques for achieving "protocol obfuscation" involve removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random. The ISP's solution to the high bandwidth is P2P caching, where an ISP stores the part of files most accessed by P2P clients in order to save access to the Internet.
Current research
Researchers have used computer simulations to aid in understanding and evaluating the complex behaviors of individuals within the network. "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work." If the research cannot be reproduced, then the opportunity for further research is hindered. "Even though new simulators continue to be released, the research community tends towards only a handful of open-source simulators. The demand for features in simulators, as shown by our criteria and survey, is high. Therefore, the community should work together to get these features in open-source software. This would reduce the need for custom simulators, and hence increase repeatability and reputability of experiments."
In addition, work has been done on the ns-2 open source network simulator. For example, one research issue related to free rider detection and punishment has been explored using the ns-2 simulator.
See also
References
External links
Ghosh Debjani, Rajan Payas, Pandey Mayank P2P-VoD Streaming: Design Issues & User Experience Challenges Springer Proceedings, June 2014
Glossary of P2P terminology
Foundation of Peer-to-Peer Computing, Special Issue, Elsevier Journal of Computer Communication, (Ed) Javed I. Khan and Adam Wierzbicki, Volume 31, Issue 2, February 2008
Marling Engle & J. I. Khan. Vulnerabilities of P2P systems and a critical look at their solutions, May 2006
Stephanos Androutsellis-Theotokis and Diomidis Spinellis. A survey of peer-to-peer content distribution technologies. ACM Computing Surveys, 36(4):335–371, December 2004.
Biddle, Peter, Paul England, Marcus Peinado, and Bryan Willman, The Darknet and the Future of Content Distribution. In 2002 ACM Workshop on Digital Rights Management, November 2002.
John F. Buford, Heather Yu, Eng Keong Lua. P2P Networking and Applications. Morgan Kaufmann, December 2008
Djamal-Eddine Meddour, Mubashar Mushtaq, and Toufik Ahmed, "Open Issues in P2P Multimedia Streaming", in the proceedings of the 1st Multimedia Communications Workshop MULTICOMM 2006 held in conjunction with IEEE ICC 2006 pp 43–48, June 2006, Istanbul, Turkey.
Detlef Schoder and Kai Fischbach, "Core Concepts in Peer-to-Peer (P2P) Networking". In: Subramanian, R.; Goodman, B. (eds.): P2P Computing: The Evolution of a Disruptive Technology, Idea Group Inc, Hershey. 2005
Ramesh Subramanian and Brian Goodman (eds), Peer-to-Peer Computing: Evolution of a Disruptive Technology, Idea Group Inc., Hershey, PA, United States, 2005.
Shuman Ghosemajumder. Advanced Peer-Based Technology Business Models. MIT Sloan School of Management, 2002.
Silverthorne, Sean. Music Downloads: Pirates- or Customers?. Harvard Business School Working Knowledge, 2004.
Glasnost test P2P traffic shaping (Max Planck Institute for Software Systems)
File sharing networks
File sharing
Software engineering terminology |
24222 | https://en.wikipedia.org/wiki/Public-key%20cryptography | Public-key cryptography | Public-key cryptography, or asymmetric cryptography, is a cryptographic system that uses pairs of keys. Each pair consists of a public key (which may be known to others) and a private key (which may not be known by anyone except the owner). The generation of such key pairs depends on cryptographic algorithms which are based on mathematical problems termed one-way functions. Effective security requires keeping the private key private; the public key can be openly distributed without compromising security.
In such a system, any person can encrypt a message using the intended receiver's public key, but that encrypted message can only be decrypted with the receiver's private key. This allows, for instance, a server program to generate a cryptographic key intended for a suitable symmetric-key algorithm, then to use a client's openly shared public key to encrypt that newly generated symmetric key. The server can then send this encrypted symmetric key over an insecure channel to the client; only the client can decrypt it using the client's private key (which pairs with the public key used by the server to encrypt the message). With the client and server both having the same symmetric key, they can safely use symmetric key encryption (likely much faster) to communicate over otherwise-insecure channels. This scheme has the advantage of not having to manually pre-share symmetric keys (a fundamentally difficult problem) while gaining the higher data throughput advantage of symmetric-key cryptography.
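The following sketch walks through that exchange using the third-party Python cryptography package; the choice of 2048-bit RSA with OAEP for key wrapping and AES-GCM for the bulk data is an assumption made for the example, not a requirement of any particular protocol.

```python
# Illustrative sketch of the exchange described above: the server generates a
# fresh symmetric key, wraps it with the client's RSA public key, and the
# client unwraps it with its private key. Key sizes and the use of OAEP and
# AES-GCM are assumptions for the example.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Client side: create a key pair and publish the public key.
client_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
client_public = client_private.public_key()

# Server side: generate a symmetric key and encrypt it with the public key.
symmetric_key = AESGCM.generate_key(bit_length=256)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = client_public.encrypt(symmetric_key, oaep)

# Client side: recover the symmetric key; both parties can now use fast
# symmetric encryption (AES-GCM here) for the rest of the session.
recovered = client_private.decrypt(wrapped_key, oaep)
nonce = os.urandom(12)
ciphertext = AESGCM(recovered).encrypt(nonce, b"bulk application data", None)
print(recovered == symmetric_key, len(ciphertext))
```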
With public-key cryptography, robust authentication is also possible. A sender can combine a message with a private key to create a short digital signature on the message. Anyone with the sender's corresponding public key can combine that message with a claimed digital signature; if the signature matches the message, the origin of the message is verified (i.e., it must have been made by the owner of the corresponding private key).
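A minimal signing-and-verification sketch, again with the third-party Python cryptography package and the Ed25519 scheme (one of several signature algorithms in use), is shown below; the message is arbitrary.

```python
# Minimal digital-signature sketch using Ed25519 from the third-party
# "cryptography" package. Verification raises InvalidSignature when the
# signature does not match the message.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sender_private = Ed25519PrivateKey.generate()
sender_public = sender_private.public_key()

message = b"pay 10 units to account 42"
signature = sender_private.sign(message)

try:
    sender_public.verify(signature, message)          # succeeds: message intact
    sender_public.verify(signature, message + b"0")   # fails: message altered
except InvalidSignature:
    print("signature does not match the (modified) message")
```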
Public key algorithms are fundamental security primitives in modern cryptosystems, including applications and protocols which offer assurance of the confidentiality, authenticity and non-repudiability of electronic communications and data storage. They underpin numerous Internet standards, such as Transport Layer Security (TLS), S/MIME, PGP, and GPG. Some public key algorithms provide key distribution and secrecy (e.g., Diffie–Hellman key exchange), some provide digital signatures (e.g., Digital Signature Algorithm), and some provide both (e.g., RSA). Asymmetric encryption is considerably slower than good symmetric encryption, too slow for many purposes. Today's cryptosystems (such as TLS and Secure Shell) use both symmetric encryption and asymmetric encryption, often by using asymmetric encryption to securely exchange a secret key, which is then used for symmetric encryption.
Description
Before the mid-1970s, all cipher systems used symmetric key algorithms, in which the same cryptographic key is used with the underlying algorithm by both the sender and the recipient, who must both keep it secret. Of necessity, the key in every such system had to be exchanged between the communicating parties in some secure way prior to any use of the system – for instance, via a secure channel. This requirement is never trivial and very rapidly becomes unmanageable as the number of participants increases, or when secure channels are not available, or when (as is sensible cryptographic practice) keys are frequently changed. In particular, if messages are meant to be secure from other users, a separate key is required for each possible pair of users.
By contrast, in a public key system, the public keys can be disseminated widely and openly, and only the corresponding private keys need be kept secret by their owners.
Two of the best-known uses of public key cryptography are:
Public key encryption, in which a message is encrypted with the intended recipient's public key. For properly chosen and used algorithms, messages cannot in practice be decrypted by anyone who does not possess the matching private key; the holder of that key is thus presumed to be its owner and so the person associated with the public key. This can be used to ensure the confidentiality of a message.
Digital signatures, in which a message is signed with the sender's private key and can be verified by anyone who has access to the sender's public key. This verification proves that the sender had access to the private key, and therefore is very likely to be the person associated with the public key. This also ensures that the message has not been tampered with, as a signature is mathematically bound to the message it originally was made from, and verification will fail for practically any other message, no matter how similar to the original message.
One important issue is confidence/proof that a particular public key is authentic, i.e. that it is correct and belongs to the person or entity claimed, and has not been tampered with or replaced by some (perhaps malicious) third party. There are several possible approaches, including:
A public key infrastructure (PKI), in which one or more third parties – known as certificate authorities – certify ownership of key pairs. TLS relies upon this. This implies that the PKI system (software, hardware, and management) is trusted by all involved.
A "web of trust" which decentralizes authentication by using individual endorsements of links between a user and the public key belonging to that user. PGP uses this approach, in addition to lookup in the domain name system (DNS). The DKIM system for digitally signing emails also uses this approach.
Applications
The most obvious application of a public key encryption system is for encrypting communication to provide confidentiality: a message that a sender encrypts using the recipient's public key can be decrypted only by the recipient's paired private key.
Another application in public key cryptography is the digital signature. Digital signature schemes can be used for sender authentication.
Non-repudiation systems use digital signatures to ensure that one party cannot successfully dispute its authorship of a document or communication.
Further applications built on this foundation include: digital cash, password-authenticated key agreement, time-stamping services and non-repudiation protocols.
Hybrid cryptosystems
Because asymmetric key algorithms are nearly always much more computationally intensive than symmetric ones, it is common to use a public/private asymmetric key-exchange algorithm to encrypt and exchange a symmetric key, which is then used to transmit the bulk of the data with a symmetric-key encryption algorithm. PGP, SSH, and the SSL/TLS family of schemes use this procedure; they are thus called hybrid cryptosystems. The initial asymmetric key exchange, used to share a server-generated symmetric key with the client, has the advantage of not requiring that a symmetric key be pre-shared manually (such as on printed paper or discs transported by a courier), while providing the higher data throughput of symmetric-key cryptography for the remainder of the connection.
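The following minimal sketch illustrates the hybrid pattern in Python, assuming the third-party cryptography package is installed; RSA-OAEP here wraps a freshly generated symmetric (Fernet) key, which then carries the bulk data. It is an illustration of the idea rather than a description of any particular protocol.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Recipient generates a key pair; the public half can be shared openly.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: create a fresh symmetric key, encrypt the bulk data with it,
    # then wrap the symmetric key with the recipient's public key (RSA-OAEP).
    sym_key = Fernet.generate_key()
    ciphertext = Fernet(sym_key).encrypt(b"bulk message data")
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(sym_key, oaep)

    # Recipient: unwrap the symmetric key with the private key, then decrypt.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert Fernet(recovered_key).decrypt(ciphertext) == b"bulk message data"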
Weaknesses
As with all security-related systems, it is important to identify potential weaknesses. Aside from poor choice of an asymmetric key algorithm (there are few which are widely regarded as satisfactory) or too short a key length, the chief security risk is that the private key of a pair becomes known. All security of messages, authentication, etc., will then be lost.
Algorithms
All public key schemes are in theory susceptible to a "brute-force key search attack". However, such an attack is impractical if the amount of computation needed to succeed – termed the "work factor" by Claude Shannon – is out of reach of all potential attackers. In many cases, the work factor can be increased by simply choosing a longer key. But other algorithms may inherently have much lower work factors, making resistance to a brute-force attack (e.g., from longer keys) irrelevant. Some special and specific algorithms have been developed to aid in attacking some public key encryption algorithms; both RSA and ElGamal encryption have known attacks that are much faster than the brute-force approach. None of these are sufficiently improved to be actually practical, however.
Major weaknesses have been found for several formerly promising asymmetric key algorithms. The "knapsack packing" algorithm was found to be insecure after the development of a new attack. As with all cryptographic functions, public-key implementations may be vulnerable to side-channel attacks that exploit information leakage to simplify the search for a secret key. These are often independent of the algorithm being used. Research is underway to both discover, and to protect against, new attacks.
Alteration of public keys
Another potential security vulnerability in using asymmetric keys is the possibility of a "man-in-the-middle" attack, in which the communication of public keys is intercepted by a third party (the "man in the middle") and then modified to provide different public keys instead. Encrypted messages and responses must, in all instances, be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments so as to avoid suspicion.
A communication is said to be insecure where data is transmitted in a manner that allows for interception (also called "sniffing"), meaning that a third party can read the transmitted data in its entirety. A communication is particularly unsafe when interception cannot be prevented or monitored by the sender.
A man-in-the-middle attack can be difficult to implement due to the complexities of modern security protocols. However, the task becomes simpler when a sender is using insecure media such as public networks, the Internet, or wireless communication. In these cases an attacker can compromise the communications infrastructure rather than the data itself. A hypothetical malicious staff member at an Internet Service Provider (ISP) might find a man-in-the-middle attack relatively straightforward. Capturing the public key would only require searching for the key as it gets sent through the ISP's communications hardware; in properly implemented asymmetric key schemes, this is not a significant risk.
In some advanced man-in-the-middle attacks, one side of the communication will see the original data while the other will receive a malicious variant. Asymmetric man-in-the-middle attacks can prevent users from realizing their connection is compromised. This remains so even when one user's data is known to be compromised because the data appears fine to the other user. This can lead to confusing disagreements between users such as "it must be on your end!" when neither user is at fault. Hence, man-in-the-middle attacks are only fully preventable when the communications infrastructure is physically controlled by one or both parties, such as via a wired route inside the sender's own building. In summation, public keys are easier to alter when the communications hardware used by a sender is controlled by an attacker.
Public key infrastructure
One approach to prevent such attacks involves the use of a public key infrastructure (PKI); a set of roles, policies, and procedures needed to create, manage, distribute, use, store and revoke digital certificates and manage public-key encryption. However, this has potential weaknesses.
For example, the certificate authority issuing the certificate must be trusted by all participating parties to have properly checked the identity of the key-holder, to have ensured the correctness of the public key when it issues a certificate, to be secure from computer piracy, and to have made arrangements with all participants to check all their certificates before protected communications can begin. Web browsers, for instance, are supplied with a long list of "self-signed identity certificates" from PKI providers – these are used to check the bona fides of the certificate authority and then, in a second step, the certificates of potential communicators. An attacker who could subvert one of those certificate authorities into issuing a certificate for a bogus public key could then mount a "man-in-the-middle" attack as easily as if the certificate scheme were not used at all. In an alternative scenario rarely discussed, an attacker who penetrates an authority's servers and obtains its store of certificates and keys (public and private) would be able to spoof, masquerade, decrypt, and forge transactions without limit.
Despite its theoretical and potential problems, this approach is widely used. Examples include TLS and its predecessor SSL, which are commonly used to provide security for web browser transactions (for example, to securely send credit card details to an online store).
Aside from the resistance to attack of a particular key pair, the security of the certification hierarchy must be considered when deploying public key systems. Some certificate authority – usually a purpose-built program running on a server computer – vouches for the identities assigned to specific private keys by producing a digital certificate. Public key digital certificates are typically valid for several years at a time, so the associated private keys must be held securely over that time. When a private key used for certificate creation higher in the PKI server hierarchy is compromised, or accidentally disclosed, then a "man-in-the-middle attack" is possible, making any subordinate certificate wholly insecure.
Examples
Examples of well-regarded asymmetric key techniques for varied purposes include:
Diffie–Hellman key exchange protocol
DSS (Digital Signature Standard), which incorporates the Digital Signature Algorithm
ElGamal
Elliptic-curve cryptography
Elliptic Curve Digital Signature Algorithm (ECDSA)
Elliptic-curve Diffie–Hellman (ECDH)
Ed25519 and Ed448 (EdDSA)
X25519 and X448 (ECDH/EdDH)
Various password-authenticated key agreement techniques
Paillier cryptosystem
RSA encryption algorithm (PKCS#1)
Cramer–Shoup cryptosystem
YAK authenticated key agreement protocol
Examples of asymmetric key algorithms not yet widely adopted include:
NTRUEncrypt cryptosystem
McEliece cryptosystem
Examples of notable – yet insecure – asymmetric key algorithms include:
Merkle–Hellman knapsack cryptosystem
Examples of protocols using asymmetric key algorithms include:
S/MIME
GPG, an implementation of OpenPGP (an Internet Standard)
EMV, EMV Certificate Authority
IPsec
PGP
ZRTP, a secure VoIP protocol
Transport Layer Security standardized by IETF and its predecessor Secure Socket Layer
SILC
SSH
Bitcoin
Off-the-Record Messaging
History
During the early history of cryptography, two parties would rely upon a key that they would exchange by means of a secure, but non-cryptographic, method such as a face-to-face meeting, or a trusted courier. This key, which both parties must then keep absolutely secret, could then be used to exchange encrypted messages. A number of significant practical difficulties arise with this approach to distributing keys.
Anticipation
In his 1874 book The Principles of Science, William Stanley Jevons wrote: "Can the reader say what two numbers multiplied together will produce the number 8616460799? I think it unlikely that anyone but myself will ever know."
Here he described the relationship of one-way functions to cryptography, and went on to discuss specifically the factorization problem used to create a trapdoor function. In July 1996, mathematician Solomon W. Golomb said: "Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, although he certainly did not invent the concept of public key cryptography."
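A short Python sketch shows why Jevons' particular challenge no longer impresses: naive trial division (smallest_factor is just an illustrative helper, not a standard routine) recovers the factors of 8616460799 in a fraction of a second on modern hardware. The asymmetry he anticipated only becomes practically useful when the factors are hundreds of digits long.

    def smallest_factor(n: int) -> int:
        """Naive trial division; quick for small n, hopeless for RSA-sized moduli."""
        if n % 2 == 0:
            return 2
        i = 3
        while i * i <= n:
            if n % i == 0:
                return i
            i += 2
        return n

    p = smallest_factor(8616460799)
    q = 8616460799 // p
    print(p, q)                      # 89681 96079
    assert p * q == 8616460799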
Classified discovery
In 1970, James H. Ellis, a British cryptographer at the UK Government Communications Headquarters (GCHQ), conceived of the possibility of "non-secret encryption" (now called public key cryptography), but could see no way to implement it. In 1973, his colleague Clifford Cocks implemented what has become known as the RSA encryption algorithm, giving a practical method of "non-secret encryption", and in 1974 another GCHQ mathematician and cryptographer, Malcolm J. Williamson, developed what is now known as Diffie–Hellman key exchange.
The scheme was also passed to the USA's National Security Agency. Both organisations had a military focus and only limited computing power was available in any case; the potential of public key cryptography remained unrealised by either organisation:
I judged it most important for military use ... if you can share your key rapidly and electronically, you have a major advantage over your opponent. Only at the end of the evolution from Berners-Lee designing an open internet architecture for CERN, its adaptation and adoption for the Arpanet ... did public key cryptography realise its full potential.
—Ralph Benjamin
These discoveries were not publicly acknowledged for 27 years, until the research was declassified by the British government in 1997.
Public discovery
In 1976, an asymmetric key cryptosystem was published by Whitfield Diffie and Martin Hellman who, influenced by Ralph Merkle's work on public key distribution, disclosed a method of public key agreement. This method of key exchange, which uses exponentiation in a finite field, came to be known as Diffie–Hellman key exchange. This was the first published practical method for establishing a shared secret key over an authenticated (but not confidential) communications channel without using a prior shared secret. Merkle's "public key-agreement technique" became known as Merkle's Puzzles; it was invented in 1974 but published only in 1978.
In 1977, a generalization of Cocks' scheme was independently invented by Ron Rivest, Adi Shamir and Leonard Adleman, all then at MIT. The latter authors published their work in 1978 in Martin Gardner's Scientific American column, and the algorithm came to be known as RSA, from their initials. RSA uses exponentiation modulo a product of two very large primes, to encrypt and decrypt, performing both public key encryption and public key digital signatures. Its security is connected to the extreme difficulty of factoring large integers, a problem for which there is no known efficient general technique (though prime factorization may be obtained through brute-force attacks; this grows much more difficult the larger the prime factors are). A description of the algorithm was published in the Mathematical Games column in the August 1977 issue of Scientific American.
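The arithmetic core of RSA can be shown with a deliberately tiny "textbook" example in Python (3.8+ for the modular-inverse form of pow). Real deployments use primes hundreds of digits long and padding schemes such as OAEP and PSS, all of which this toy sketch omits.

    p, q = 61, 53                  # toy primes; real keys use very large primes
    n = p * q                      # 3233, the public modulus
    phi = (p - 1) * (q - 1)        # 3120
    e = 17                         # public exponent, coprime to phi
    d = pow(e, -1, phi)            # 2753, the private exponent (modular inverse)

    # Encryption with the public key, decryption with the private key.
    m = 65
    c = pow(m, e, n)
    assert pow(c, d, n) == m

    # Signing with the private key, verification with the public key.
    digest = 123                   # stands in for a hash of the message
    signature = pow(digest, d, n)
    assert pow(signature, e, n) == digest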
Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed, including the Rabin cryptosystem, ElGamal encryption, DSA, and elliptic curve cryptography.
See also
Books on cryptography
GNU Privacy Guard
ID-based encryption (IBE)
Key escrow
Key-agreement protocol
PGP word list
Post-quantum cryptography
Pretty Good Privacy
Pseudonymity
Public key fingerprint
Public key infrastructure (PKI)
Quantum computing
Quantum cryptography
Secure Shell (SSH)
Symmetric-key algorithm
Threshold cryptosystem
Web of trust
External links
Oral history interview with Martin Hellman, Charles Babbage Institute, University of Minnesota. Leading cryptography scholar Martin Hellman discusses the circumstances and fundamental insights of his invention of public key cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University in the mid-1970s.
An account of how GCHQ kept their invention of PKE secret until 1997
Anonymity networks
Cryptographic software
Cryptographic protocols
Cryptography
Banking technology
Public key infrastructure
Network architecture |
24304 | https://en.wikipedia.org/wiki/Password | Password | A password, sometimes called a passcode (for example in Apple devices), is secret data, typically a string of characters, usually used to confirm a user's identity. Traditionally, passwords were expected to be memorized, but the large number of password-protected services that a typical individual accesses can make memorization of unique passwords for each service impractical. Using the terminology of the NIST Digital Identity Guidelines, the secret is held by a party called the claimant while the party verifying the identity of the claimant is called the verifier. When the claimant successfully demonstrates knowledge of the password to the verifier through an established authentication protocol, the verifier is able to infer the claimant's identity.
In general, a password is an arbitrary string of characters including letters, digits, or other symbols. If the permissible characters are constrained to be numeric, the corresponding secret is sometimes called a personal identification number (PIN).
Despite its name, a password does not need to be an actual word; indeed, a non-word (in the dictionary sense) may be harder to guess, which is a desirable property of passwords. A memorized secret consisting of a sequence of words or other text separated by spaces is sometimes called a passphrase. A passphrase is similar to a password in usage, but the former is generally longer for added security.
History
Passwords have been used since ancient times. Sentries would challenge those wishing to enter an area to supply a password or watchword, and would only allow a person or group to pass if they knew the password. Polybius describes the system for the distribution of watchwords in the Roman military as follows:
The way in which they secure the passing round of the watchword for the night is as follows: from the tenth maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune, and receiving from him the watchword—that is a wooden tablet with the word inscribed on it – takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next to him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.
Passwords in military use evolved to include not just a password, but a password and a counterpassword; for example in the opening days of the Battle of Normandy, paratroopers of the U.S. 101st Airborne Division used a password—flash—which was presented as a challenge, and answered with the correct response—thunder. The challenge and response were changed every three days. American paratroopers also famously used a device known as a "cricket" on D-Day in place of a password system as a temporarily unique method of identification; one metallic click given by the device in lieu of a password was to be met by two clicks in reply.
Passwords have been used with computers since the earliest days of computing. The Compatible Time-Sharing System (CTSS), an operating system introduced at MIT in 1961, was the first computer system to implement password login. CTSS had a LOGIN command that requested a user password. "After typing PASSWORD, the system turns off the printing mechanism, if possible, so that the user may type in his password with privacy." In the early 1970s, Robert Morris developed a system of storing login passwords in a hashed form as part of the Unix operating system. The system was based on a simulated Hagelin rotor crypto machine, and first appeared in 6th Edition Unix in 1974. A later version of his algorithm, known as crypt(3), used a 12-bit salt and invoked a modified form of the DES algorithm 25 times to reduce the risk of pre-computed dictionary attacks.
In modern times, user names and passwords are commonly used by people during a log in process that controls access to protected computer operating systems, mobile phones, cable TV decoders, automated teller machines (ATMs), etc. A typical computer user has passwords for many purposes: logging into accounts, retrieving e-mail, accessing applications, databases, networks, web sites, and even reading the morning newspaper online.
Choosing a secure and memorable password
Generally, the easier a password is for its owner to remember, the easier it is for an attacker to guess. However, passwords that are difficult to remember may also reduce the security of a system because (a) users might need to write down or electronically store the password, (b) users will need frequent password resets and (c) users are more likely to re-use the same password across different accounts. Similarly, the more stringent the password requirements, such as "have a mix of uppercase and lowercase letters and digits" or "change it monthly", the greater the degree to which users will subvert the system. Others argue longer passwords provide more security (e.g., entropy) than shorter passwords with a wide variety of characters.
In The Memorability and Security of Passwords, Jeff Yan et al. examine the effect of advice given to users about a good choice of password. They found that passwords based on thinking of a phrase and taking the first letter of each word are just as memorable as naively selected passwords, and just as hard to crack as randomly generated passwords.
Combining two or more unrelated words and altering some of the letters to special characters or numbers is another good method, but a single dictionary word is not. Having a personally designed algorithm for generating obscure passwords is another good method.
However, asking users to remember a password consisting of a "mix of uppercase and lowercase characters" is similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g. only 128 times harder to crack for 7-letter passwords, less if the user simply capitalises one of the letters). Asking users to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' → '3' and 'I' → '1', substitutions that are well known to attackers. Similarly typing the password one keyboard row higher is a common trick known to attackers.
In 2013, Google released a list of the most common password types, all of which are considered insecure because they are too easy to guess (especially after researching an individual on social media):
The name of a pet, child, family member, or significant other
Anniversary dates and birthdays
Birthplace
Name of a favorite holiday
Something related to a favorite sports team
The word "password"
Alternatives to memorization
Traditional advice to memorize passwords and never write them down has become a challenge because of the sheer number of passwords users of computers and the internet are expected to maintain. One survey concluded that the average user has around 100 passwords. To manage the proliferation of passwords, some users employ the same password for multiple accounts, a dangerous practice since a data breach in one account could compromise the rest. Less risky alternatives include the use of password managers, single sign-on systems and simply keeping paper lists of less critical passwords. Such practices can reduce the number of passwords that must be memorized, such as the password manager's master password, to a more manageable number.
Factors in the security of a password system
The security of a password-protected system depends on several factors. The overall system must be designed for sound security, with protection against computer viruses, man-in-the-middle attacks and the like. Physical security issues are also a concern, from deterring shoulder surfing to more sophisticated physical threats such as video cameras and keyboard sniffers. Passwords should be chosen so that they are hard for an attacker to guess and hard for an attacker to discover using any of the available automatic attack schemes. See password strength and computer security for more information.
Nowadays, it is a common practice for computer systems to hide passwords as they are typed. The purpose of this measure is to prevent bystanders from reading the password; however, some argue that this practice may lead to mistakes and stress, encouraging users to choose weak passwords. As an alternative, users should have the option to show or hide passwords as they type them.
Effective access control provisions may force extreme measures on criminals seeking to acquire a password or biometric token. Less extreme measures include extortion, rubber hose cryptanalysis, and side channel attack.
Some specific password management issues that must be considered when choosing and handling a password follow.
Rate at which an attacker can try guessed passwords
The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g., three) of failed password entry attempts, also known as throttling. In the absence of other vulnerabilities, such systems can be effectively secure with relatively simple passwords if they have been well chosen and are not easily guessed.
Many systems store a cryptographic hash of the password. If an attacker gets access to the file of hashed passwords guessing can be done offline, rapidly testing candidate passwords against the true password's hash value. In the example of a web-server, an online attacker can guess only at the rate at which the server will respond, while an off-line attacker (who gains access to the file) can guess at a rate limited only by the hardware on which the attack is running.
Passwords that are used to generate cryptographic keys (e.g., for disk encryption or Wi-Fi security) can also be subjected to high rate guessing. Lists of common passwords are widely available and can make password attacks very efficient. (See Password cracking.) Security in such situations depends on using passwords or passphrases of adequate complexity, making such an attack computationally infeasible for the attacker. Some systems, such as PGP and Wi-Fi WPA, apply a computation-intensive hash to the password to slow such attacks. See key stretching.
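A minimal sketch of key stretching, using the memory-hard scrypt function from Python's standard hashlib, follows; the cost parameters are illustrative choices rather than recommendations, and the point is simply that every guess costs an attacker the same deliberate expense.

    import hashlib, os

    password = b"correct horse battery staple"
    salt = os.urandom(16)

    # Each derivation costs noticeable CPU time and memory, slowing offline guessing.
    key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                         maxmem=64 * 1024 * 1024, dklen=32)
    print(key.hex())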
Limits on the number of password guesses
An alternative to limiting the rate at which an attacker can make guesses on a password is to limit the total number of guesses that can be made. The password can be disabled, requiring a reset, after a small number of consecutive bad guesses (say 5); and the user may be required to change the password after a larger cumulative number of bad guesses (say 30), to prevent an attacker from making an arbitrarily large number of bad guesses by interspersing them between good guesses made by the legitimate password owner. Attackers may conversely use knowledge of this mitigation to implement a denial of service attack against the user by intentionally locking the user out of their own device; this denial of service may open other avenues for the attacker to manipulate the situation to their advantage via social engineering.
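A minimal sketch of such a lockout counter follows; the thresholds are illustrative, and the cumulative counter and reset logic described above are omitted for brevity.

    class GuessLimiter:
        """Track consecutive failed logins and lock the account after a limit."""

        def __init__(self, max_consecutive: int = 5):
            self.max_consecutive = max_consecutive
            self.consecutive_failures = 0
            self.locked = False

        def record_attempt(self, success: bool) -> None:
            if self.locked:
                raise RuntimeError("account locked; a password reset is required")
            if success:
                self.consecutive_failures = 0
            else:
                self.consecutive_failures += 1
                if self.consecutive_failures >= self.max_consecutive:
                    self.locked = True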
Form of stored passwords
Some computer systems store user passwords as plaintext, against which to compare user logon attempts. If an attacker gains access to such an internal password store, all passwords—and so all user accounts—will be compromised. If some users employ the same password for accounts on different systems, those will be compromised as well.
More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible. The most secure systems do not store passwords at all, but only a one-way derivation, such as a polynomial, a modulus, or an advanced hash function.
Roger Needham invented the now-common approach of storing only a "hashed" form of the plaintext password. When a user types in a password on such a system, the password handling software runs through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access. The hash value is created by applying a cryptographic hash function to a string consisting of the submitted password and, in many implementations, another value known as a salt. A salt prevents attackers from easily building a list of hash values for common passwords and prevents password cracking efforts from scaling across all users. MD5 and SHA1 are frequently used cryptographic hash functions, but they are not recommended for password hashing unless they are used as part of a larger construction such as in PBKDF2.
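A minimal sketch of this store-and-verify flow using Python's standard library follows, with a random per-user salt, PBKDF2 as the slow hash, and a constant-time comparison; the function names and iteration count are illustrative.

    import hashlib, hmac, os

    def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
        """Return (salt, verifier); these are stored instead of the password itself."""
        salt = os.urandom(16)
        verifier = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, verifier

    def check_password(password: str, salt: bytes, verifier: bytes,
                       iterations: int = 600_000) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, verifier)   # constant-time comparison

    salt, verifier = hash_password("hunter2")
    assert check_password("hunter2", salt, verifier)
    assert not check_password("hunter3", salt, verifier)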
The stored data—sometimes called the "password verifier" or the "password hash"—is often stored in Modular Crypt Format or RFC 2307 hash format, sometimes in the /etc/passwd file or the /etc/shadow file.
The main storage methods for passwords are plain text, hashed, hashed and salted, and reversibly encrypted. If an attacker gains access to the password file and it is stored as plain text, no cracking is necessary. If it is hashed but not salted, it is vulnerable to rainbow table attacks (which are more efficient than cracking). If it is reversibly encrypted, no cracking is necessary if the attacker also obtains the decryption key, while cracking is impossible without it. Thus, of the common storage formats for passwords, only when passwords have been salted and hashed is cracking both necessary and possible.
If a cryptographic hash function is well designed, it is computationally infeasible to reverse the function to recover a plaintext password. An attacker can, however, use widely available tools to attempt to guess the passwords. These tools work by hashing possible passwords and comparing the result of each guess to the actual password hashes. If the attacker finds a match, they know that their guess is the actual password for the associated user. Password cracking tools can operate by brute force (i.e. trying every possible combination of characters) or by hashing every word from a list; large lists of possible passwords in many languages are widely available on the Internet. The existence of password cracking tools allows attackers to easily recover poorly chosen passwords. In particular, attackers can quickly recover passwords that are short, dictionary words, simple variations on dictionary words, or that use easily guessable patterns.
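The guess-and-compare loop that such tools automate can be sketched in a few lines of Python; the word list here is a tiny stand-in and the target is an unsalted SHA-256 hash purely for illustration, whereas real crackers test enormous lists against leaked hash files at very high speed.

    import hashlib

    leaked_hash = hashlib.sha256(b"sunshine").hexdigest()       # hash from a breach
    wordlist = ["password", "letmein", "sunshine", "dragon"]    # stand-in word list

    for guess in wordlist:
        if hashlib.sha256(guess.encode()).hexdigest() == leaked_hash:
            print("password recovered:", guess)
            break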
A modified version of the DES algorithm was used as the basis for the password hashing algorithm in early Unix systems. The crypt algorithm used a 12-bit salt value so that each user's hash was unique and iterated the DES algorithm 25 times in order to make the hash function slower, both measures intended to frustrate automated guessing attacks. The user's password was used as a key to encrypt a fixed value. More recent Unix or Unix-like systems (e.g., Linux or the various BSD systems) use more secure password hashing algorithms such as PBKDF2, bcrypt, and scrypt, which have large salts and an adjustable cost or number of iterations.
A poorly designed hash function can make attacks feasible even if a strong password is chosen. See LM hash for a widely deployed and insecure example.
Methods of verifying a password over a network
Simple transmission of the password
Passwords are vulnerable to interception (i.e., "snooping") while being transmitted to the authenticating machine or person. If the password is carried as electrical signals on unsecured physical wiring between the user access point and the central system controlling the password database, it is subject to snooping by wiretapping methods. If it is carried as packeted data over the Internet, anyone able to watch the packets containing the logon information can snoop with a very low probability of detection.
Email is sometimes used to distribute passwords but this is generally an insecure method. Since most email is sent as plaintext, a message containing a password is readable without effort during transport by any eavesdropper. Further, the message will be stored as plaintext on at least two computers: the sender's and the recipient's. If it passes through intermediate systems during its travels, it will probably be stored there as well, at least for some time, and may be copied to backup, cache or history files on any of these systems.
Using client-side encryption will only protect transmission from the mail handling system server to the client machine. Previous or subsequent relays of the email will not be protected and the email will probably be stored on multiple computers, certainly on the originating and receiving computers, most often in clear text.
Transmission through encrypted channels
The risk of interception of passwords sent over the Internet can be reduced by, among other approaches, using cryptographic protection. The most widely used is the Transport Layer Security (TLS, previously called SSL) feature built into most current Internet browsers. Most browsers alert the user of a TLS/SSL-protected exchange with a server by displaying a closed lock icon, or some other sign, when TLS is in use. There are several other techniques in use; see cryptography.
Hash-based challenge–response methods
Unfortunately, there is a conflict between stored hashed-passwords and hash-based challenge–response authentication; the latter requires a client to prove to a server that they know what the shared secret (i.e., password) is, and to do this, the server must be able to obtain the shared secret from its stored form. On many systems (including Unix-type systems) doing remote authentication, the shared secret usually becomes the hashed form and has the serious limitation of exposing passwords to offline guessing attacks. In addition, when the hash is used as a shared secret, an attacker does not need the original password to authenticate remotely; they only need the hash.
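A minimal sketch of a hash-based challenge-response exchange, using HMAC from Python's standard library, follows; as noted above, whatever value the verifier stores as the shared secret (shown here as a single byte string) is itself sufficient to authenticate, which is precisely the limitation described.

    import hashlib, hmac, secrets

    shared_secret = b"stored shared secret"      # often the password hash itself

    # Verifier sends a fresh random challenge so responses cannot be replayed.
    challenge = secrets.token_bytes(16)

    # Claimant proves knowledge of the secret without sending it over the wire.
    response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

    # Verifier recomputes the expected response and compares in constant time.
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    assert hmac.compare_digest(response, expected)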
Zero-knowledge password proofs
Rather than transmitting a password, or transmitting the hash of the password, password-authenticated key agreement systems can perform a zero-knowledge password proof, which proves knowledge of the password without exposing it.
Moving a step further, augmented systems for password-authenticated key agreement (e.g., AMP, B-SPEKE, PAK-Z, SRP-6) avoid both the conflict and limitation of hash-based methods. An augmented system allows a client to prove knowledge of the password to a server, where the server knows only a (not exactly) hashed password, and where the unhashed password is required to gain access.
Procedures for changing passwords
Usually, a system must provide a way to change a password, either because a user believes the current password has been (or might have been) compromised, or as a precautionary measure. If a new password is passed to the system in unencrypted form, security can be lost (e.g., via wiretapping) before the new password can even be installed in the password database and if the new password is given to a compromised employee, little is gained. Some websites include the user-selected password in an unencrypted confirmation e-mail message, with the obvious increased vulnerability.
Identity management systems are increasingly used to automate the issuance of replacements for lost passwords, a feature called self-service password reset. The user's identity is verified by asking questions and comparing the answers to ones previously stored (i.e., when the account was opened).
Some password reset questions ask for personal information that could be found on social media, such as mother's maiden name. As a result, some security experts recommend either making up one's own questions or giving false answers.
Password longevity
"Password aging" is a feature of some operating systems which forces users to change passwords frequently (e.g., quarterly, monthly or even more often). Such policies usually provoke user protest and foot-dragging at best and hostility at worst. There is often an increase in the number of people who note down the password and leave it where it can easily be found, as well as help desk calls to reset a forgotten password. Users may use simpler passwords or develop variation patterns on a consistent theme to keep their passwords memorable. Because of these issues, there is some debate as to whether password aging is effective. Changing a password will not prevent abuse in most cases, since the abuse would often be immediately noticeable. However, if someone may have had access to the password through some means, such as sharing a computer or breaching a different site, changing the password limits the window for abuse.
Number of users per password
A single password shared by a group of users is much less convenient to change, because many people need to be told at the same time, and it makes removal of a particular user's access more difficult, for instance on graduation or resignation. Separate per-user logins are also often used for accountability, for example to know who changed a piece of data.
Password security architecture
Common techniques used to improve the security of computer systems protected by a password include:
Not displaying the password on the display screen as it is being entered or obscuring it as it is typed by using asterisks (*) or bullets (•).
Allowing passwords of adequate length. (Some legacy operating systems, including early versions of Unix and Windows, limited passwords to an 8 character maximum, reducing security.)
Requiring users to re-enter their password after a period of inactivity (a semi log-off policy).
Enforcing a password policy to increase password strength and security.
Assigning randomly chosen passwords.
Requiring minimum password lengths.
Requiring characters from various character classes in a password, for example "must have at least one uppercase and at least one lowercase letter". (However, all-lowercase passwords are more secure per keystroke than mixed capitalization passwords.)
Employing a password blacklist to block the use of weak, easily guessed passwords.
Providing an alternative to keyboard entry (e.g., spoken passwords, or biometric identifiers).
Requiring more than one authentication system, such as two-factor authentication (something a user has and something the user knows).
Using encrypted tunnels or password-authenticated key agreement to prevent access to transmitted passwords via network attacks
Limiting the number of allowed failures within a given time period (to prevent repeated password guessing). After the limit is reached, further attempts will fail (including correct password attempts) until the beginning of the next time period. However, this is vulnerable to a form of denial of service attack.
Introducing a delay between password submission attempts to slow down automated password guessing programs.
Some of the more stringent policy enforcement measures can pose a risk of alienating users, possibly decreasing security as a result.
Password reuse
It is common practice amongst computer users to reuse the same password on multiple sites. This presents a substantial security risk, because an attacker needs to only compromise a single site in order to gain access to other sites the victim uses. This problem is exacerbated by also reusing usernames, and by websites requiring email logins, as it makes it easier for an attacker to track a single user across multiple sites. Password reuse can be avoided or minimised by using mnemonic techniques, writing passwords down on paper, or using a password manager.
It has been argued by Redmond researchers Dinei Florencio and Cormac Herley, together with Paul C. van Oorschot of Carleton University, Canada, that password reuse is inevitable, and that users should reuse passwords for low-security websites (which contain little personal data and no financial information, for example) and instead focus their efforts on remembering long, complex passwords for a few important accounts, such as bank accounts. Forbes has similarly advised readers not to change passwords as often as many "experts" recommend, citing the same limitations of human memory.
Writing down passwords on paper
Historically, many security experts asked people to memorize their passwords: "Never write down a password". More recently, many security experts such as Bruce Schneier recommend that people use passwords that are too complicated to memorize, write them down on paper, and keep them in a wallet.
Password manager software can also store passwords relatively safely, in an encrypted file sealed with a single master password.
After death
According to a survey by the University of London, one in ten people are now leaving their passwords in their wills to pass on this important information when they die. One-third of people, according to the poll, agree that their password-protected data is important enough to pass on in their will.
Multi-factor authentication
Multi-factor authentication schemes combine passwords (as "knowledge factors") with one or more other means of authentication, to make authentication more secure and less vulnerable to compromised passwords. For example, a simple two-factor login might send a text message, e-mail, automated phone call, or similar alert whenever a login attempt is made, possibly supplying a code that must be entered in addition to a password. More sophisticated factors include such things as hardware tokens and biometric security.
Password rules
Most organizations specify a password policy that sets requirements for the composition and usage of passwords, typically dictating minimum length, required categories (e.g., upper and lower case, numbers, and special characters), prohibited elements (e.g., use of one's own name, date of birth, address, telephone number). Some governments have national authentication frameworks that define requirements for user authentication to government services, including requirements for passwords.
Many websites enforce standard rules such as minimum and maximum length, but also frequently include composition rules such as featuring at least one capital letter and at least one number/symbol. These latter, more specific rules were largely based on a 2003 report by the National Institute of Standards and Technology (NIST), authored by Bill Burr. It originally proposed the practice of using numbers, obscure characters and capital letters and updating regularly. In a 2017 Wall Street Journal article, Burr said that he regretted these proposals and that he had made a mistake in recommending them.
According to a 2017 rewrite of this NIST report, many websites have rules that actually have the opposite effect on the security of their users. This includes complex composition rules as well as forced password changes after certain periods of time. While these rules have long been widespread, they have also long been seen as annoying and ineffective by both users and cyber-security experts. The NIST recommends people use longer phrases as passwords (and advises websites to raise the maximum password length) instead of hard-to-remember passwords with "illusory complexity" such as "pA55w+rd". A user prevented from using the password "password" may simply choose "Password1" if required to include a number and uppercase letter. Combined with forced periodic password changes, this can lead to passwords that are difficult to remember but easy to crack.
Paul Grassi, one of the 2017 NIST report's authors, further elaborated: "Everyone knows that an exclamation point is a 1, or an I, or the last character of a password. $ is an S or a 5. If we use these well-known tricks, we aren’t fooling any adversary. We are simply fooling the database that stores passwords into thinking the user did something good."
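The longer-phrase approach is easy to automate; the sketch below draws words with Python's secrets module from a small illustrative list (a real generator would use a large list such as the 7,776-word Diceware list, giving roughly 12.9 bits of entropy per word).

    import math, secrets

    words = ["correct", "horse", "battery", "staple",
             "orbit", "velvet", "canyon", "mosaic"]   # stand-in for a large word list

    passphrase = " ".join(secrets.choice(words) for _ in range(6))
    entropy_bits = 6 * math.log2(len(words))
    print(passphrase, f"(~{entropy_bits:.0f} bits with this tiny list)")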
Pieris Tsokkis and Eliana Stavrou were able to identify some bad password construction strategies through their research and development of a password generator tool. They came up with eight categories of password construction strategies based on exposed password lists, password cracking tools, and online reports citing the most used passwords. These categories include user-related information, keyboard combinations and patterns, placement strategy, word processing, substitution, capitalization, append dates, and a combination of the previous categories.
Password cracking
Attempting to crack passwords by trying as many possibilities as time and money permit is a brute force attack. A related method, rather more efficient in most cases, is a dictionary attack. In a dictionary attack, all words in one or more dictionaries are tested. Lists of common passwords are also typically tested.
Password strength is the likelihood that a password cannot be guessed or discovered, and varies with the attack algorithm used. Cryptologists and computer scientists often refer to the strength or 'hardness' in terms of entropy.
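For randomly generated passwords, this entropy has a simple upper bound of length times the base-2 logarithm of the alphabet size, as the short calculation below illustrates; user-chosen passwords fall far short of this bound because they are not uniformly random.

    import math

    def random_password_entropy(length: int, alphabet_size: int) -> float:
        """Upper-bound entropy in bits for a uniformly random password."""
        return length * math.log2(alphabet_size)

    print(random_password_entropy(8, 26))    # ~37.6 bits: 8 lowercase letters
    print(random_password_entropy(12, 94))   # ~78.7 bits: 12 printable ASCII characters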
Passwords easily discovered are termed weak or vulnerable; passwords very difficult or impossible to discover are considered strong. There are several programs available for password attack (or even auditing and recovery by systems personnel) such as L0phtCrack, John the Ripper, and Cain; some of which use password design vulnerabilities (as found in the Microsoft LANManager system) to increase efficiency. These programs are sometimes used by system administrators to detect weak passwords proposed by users.
Studies of production computer systems have consistently shown that a large fraction of all user-chosen passwords are readily guessed automatically. For example, Columbia University found 22% of user passwords could be recovered with little effort. According to Bruce Schneier, examining data from a 2006 phishing attack, 55% of MySpace passwords would be crackable in 8 hours using a commercially available Password Recovery Toolkit capable of testing 200,000 passwords per second in 2006. He also reported that the single most common password was password1, confirming yet again the general lack of informed care in choosing passwords among users. (He nevertheless maintained, based on these data, that the general quality of passwords has improved over the years—for example, average length was up to eight characters from under seven in previous surveys, and less than 4% were dictionary words.)
Incidents
On July 16, 1998, CERT reported an incident where an attacker had found 186,126 encrypted passwords. At the time the attacker was discovered, 47,642 passwords had already been cracked.
In September 2001, after the deaths of 960 New York employees in the September 11 attacks, the financial services firm Cantor Fitzgerald, working with Microsoft, broke the passwords of deceased employees to gain access to files needed for servicing client accounts. Technicians used brute-force attacks, and interviewers contacted families to gather personalized information that might reduce the search time for weaker passwords.
In December 2009, a major password breach of the Rockyou.com website occurred that led to the release of 32 million passwords. The hacker then leaked the full list of the 32 million passwords (with no other identifiable information) to the Internet. Passwords were stored in cleartext in the database and were extracted through a SQL injection vulnerability. The Imperva Application Defense Center (ADC) did an analysis on the strength of the passwords.
In June 2011, NATO (North Atlantic Treaty Organization) experienced a security breach that led to the public release of first and last names, usernames, and passwords for more than 11,000 registered users of their e-bookshop. The data was leaked as part of Operation AntiSec, a movement that includes Anonymous, LulzSec, as well as other hacking groups and individuals. The aim of AntiSec is to expose personal, sensitive, and restricted information to the world, using any means necessary.
On July 11, 2011, Booz Allen Hamilton, a consulting firm that does work for the Pentagon, had their servers hacked by Anonymous and leaked the same day. "The leak, dubbed 'Military Meltdown Monday,' includes 90,000 logins of military personnel—including personnel from USCENTCOM, SOCOM, the Marine corps, various Air Force facilities, Homeland Security, State Department staff, and what looks like private sector contractors." These leaked passwords wound up being hashed in SHA1, and were later decrypted and analyzed by the ADC team at Imperva, revealing that even military personnel look for shortcuts and ways around the password requirements.
Alternatives to passwords for authentication
The numerous ways in which permanent or semi-permanent passwords can be compromised has prompted the development of other techniques. Unfortunately, some are inadequate in practice, and in any case few have become universally available for users seeking a more secure alternative. A 2012 paper examines why passwords have proved so hard to supplant (despite numerous predictions that they would soon be a thing of the past); in examining thirty representative proposed replacements with respect to security, usability and deployability they conclude "none even retains the full set of benefits that legacy passwords already provide."
Single-use passwords. Having passwords that are only valid once makes many potential attacks ineffective. Most users find single-use passwords extremely inconvenient. They have, however, been widely implemented in personal online banking, where they are known as Transaction Authentication Numbers (TANs). As most home users only perform a small number of transactions each week, the single-use issue has not led to intolerable customer dissatisfaction in this case.
Time-synchronized one-time passwords are similar in some ways to single-use passwords, but the value to be entered is displayed on a small (generally pocketable) item and changes every minute or so; a minimal sketch of the standard construction appears after this list.
PassWindow one-time passwords are used as single-use passwords, but the dynamic characters to be entered are visible only when a user superimposes a unique printed visual key over a server-generated challenge image shown on the user's screen.
Access controls based on public-key cryptography e.g. ssh. The necessary keys are usually too large to memorize (but see proposal Passmaze) and must be stored on a local computer, security token or portable memory device, such as a USB flash drive or even floppy disk. The private key may be stored on a cloud service provider, and activated by the use of a password or two-factor authentication.
Biometric methods promise authentication based on unalterable personal characteristics, but currently (2008) have high error rates and require additional hardware to scan, for example, fingerprints, irises, etc. They have proven easy to spoof in some famous incidents testing commercially available systems, for example, the gummy fingerprint spoof demonstration, and, because these characteristics are unalterable, they cannot be changed if compromised; this is a highly important consideration in access control as a compromised access token is necessarily insecure.
Single sign-on technology is claimed to eliminate the need for having multiple passwords. Such schemes do not relieve users and administrators from choosing reasonable single passwords, nor system designers or administrators from ensuring that private access control information passed among systems enabling single sign-on is secure against attack. As yet, no satisfactory standard has been developed.
Envaulting technology is a password-free way to secure data on removable storage devices such as USB flash drives. Instead of user passwords, access control is based on the user's access to a network resource.
Non-text-based passwords, such as graphical passwords or mouse-movement based passwords. Graphical passwords are an alternative means of authentication for log-in intended to be used in place of conventional password; they use images, graphics or colours instead of letters, digits or special characters. One system requires users to select a series of faces as a password, utilizing the human brain's ability to recall faces easily. In some implementations the user is required to pick from a series of images in the correct sequence in order to gain access. Another graphical password solution creates a one-time password using a randomly generated grid of images. Each time the user is required to authenticate, they look for the images that fit their pre-chosen categories and enter the randomly generated alphanumeric character that appears in the image to form the one-time password. So far, graphical passwords are promising, but are not widely used. Studies on this subject have been made to determine its usability in the real world. While some believe that graphical passwords would be harder to crack, others suggest that people will be just as likely to pick common images or sequences as they are to pick common passwords.
2D Key (two-dimensional key) is a matrix-style key input method that combines key styles such as multiline passphrases, crosswords, and ASCII/Unicode art, optionally with textual semantic noise, to create large passwords/keys beyond 128 bits. The aim is to realize memorizable public-key cryptography (MePKC) using a fully memorizable private key, on top of current private-key management technologies such as encrypted private keys, split private keys, and roaming private keys.
Cognitive passwords use question and answer cue/response pairs to verify identity.
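For the time-synchronized one-time passwords mentioned above, the standard construction (TOTP, RFC 6238, built on RFC 4226's HOTP) is short enough to sketch in plain Python; the shared secret and 30-second step below are illustrative values.

    import hashlib, hmac, struct, time

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """HOTP (RFC 4226): HMAC-SHA1 over a counter, dynamically truncated."""
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def totp(secret: bytes, step: int = 30) -> str:
        """TOTP (RFC 6238): HOTP keyed by the current 30-second time window."""
        return hotp(secret, int(time.time()) // step)

    print(totp(b"secret provisioned to the hardware token"))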
"The password is dead"
That "the password is dead" is a recurring idea in computer security. The reasons given often include reference to the usability as well as security problems of passwords. It often accompanies arguments that the replacement of passwords by a more secure means of authentication is both necessary and imminent. This claim has been made by numerous people at least since 2004.
Alternatives to passwords include biometrics, two-factor authentication or single sign-on, Microsoft's Cardspace, the Higgins project, the Liberty Alliance, NSTIC, the FIDO Alliance and various Identity 2.0 proposals.
However, in spite of these predictions and efforts to replace them passwords are still the dominant form of authentication on the web. In "The Persistence of Passwords," Cormac Herley and Paul van Oorschot suggest that every effort should be made to end the "spectacularly incorrect assumption" that passwords are dead.
They argue that "no other single technology matches their combination of cost, immediacy and convenience" and that "passwords are themselves the best fit for many of the scenarios in which they are currently used."
Following this, Bonneau et al. systematically compared web passwords to 35 competing authentication schemes in terms of their usability, deployability, and security. Their analysis shows that most schemes do better than passwords on security, some schemes do better and some worse with respect to usability, while every scheme does worse than passwords on deployability. The authors conclude with the following observation: "Marginal gains are often not sufficient to reach the activation energy necessary to overcome significant transition costs, which may provide the best explanation of why we are likely to live considerably longer before seeing the funeral procession for passwords arrive at the cemetery."
See also
Access code (disambiguation)
Authentication
CAPTCHA
Cognitive science
Combination lock
Electronic lock
Diceware
Kerberos (protocol)
Keyfile
Passphrase
Secure Password Sharing
Password cracking
Password fatigue
Password length parameter
Password manager
Password notification e-mail
Password policy
Password psychology
Password strength
Password synchronization
Password-authenticated key agreement
Personal identification number
Pre-shared key
Random password generator
Rainbow table
Self-service password reset
Usability of web authentication systems
References
External links
Graphical Passwords: A Survey
Large list of commonly used passwords
Large collection of statistics about passwords
Research Papers on Password-based Cryptography
The international passwords conference
Procedural Advice for Organisations and Administrators (PDF)
Centre for Security, Communications and Network Research, University of Plymouth (PDF)
2017 draft update to NIST password standards for the U.S. federal government
Memorable and secure password generator
Password authentication
Identity documents
Security |
24668 | https://en.wikipedia.org/wiki/Pentium%20%28original%29 | Pentium (original) | The Pentium is a microprocessor that was introduced by Intel on March 22, 1993, as the first CPU in the Pentium brand. It was instruction set compatible with the 80486 but was a new and very different microarchitecture design. The P5 Pentium was the first superscalar x86 microarchitecture and the world's first superscalar microprocessor to be in mass production. It included dual integer pipelines, a faster floating-point unit, wider data bus, separate code and data caches, and many other techniques and features to enhance performance and support security, encryption, and multiprocessing, for workstations and servers.
Considered the fifth main generation in the 8086 compatible line of processors, its implementation and microarchitecture was called P5. As with all new processors from Intel since the Pentium, some new instructions were added to enhance performance for specific types of workloads.
The Pentium was the first Intel x86 to build in robust hardware support for multiprocessing similar to that of large IBM mainframe computers. Intel worked closely with IBM to define this ability and then Intel designed it into the P5 microarchitecture. This new ability was absent in prior x86 generations and x86 copies from competitors.
To realize its greatest potential, compilers had to be optimized to exploit the instruction level parallelism provided by the new superscalar dual pipelines and applications needed to be recompiled. Intel spent substantial effort and resources working with development tool vendors, and major independent software vendor (ISV) and operating system (OS) companies to optimize their products for Pentium before product launch.
In October 1996, the similar Pentium MMX was introduced, complementing the same basic microarchitecture with the MMX instruction set, larger caches, and some other enhancements.
Competitors included the Motorola 68040, Motorola 68060, PowerPC 601, and the SPARC, MIPS, Alpha families, most of which also used a superscalar in-order dual instruction pipeline configuration at some time.
Intel discontinued the P5 Pentium processors (sold as a cheaper product since the release of the Pentium II in 1997) in early 2000 in favor of the Celeron processor, which had also replaced the 80486 brand.
Development
The P5 microarchitecture was designed by the same Santa Clara team which designed the 386 and 486. Design work started in 1989; the team decided to use a superscalar architecture, with on-chip cache, floating-point, and branch prediction. The preliminary design was first successfully simulated in 1990, followed by the laying-out of the design. By this time, the team had several dozen engineers. The design was taped out, or transferred to silicon, in April 1992, at which point beta-testing began. By mid-1992, the P5 team had 200 engineers. Intel at first planned to demonstrate the P5 in June 1992 at the trade show PC Expo, and to formally announce the processor in September 1992, but design problems forced the demo to be cancelled, and the official introduction of the chip was delayed until the spring of 1993.
John H. Crawford, chief architect of the original 386, co-managed the design of the P5, along with Donald Alpert, who managed the architectural team. Dror Avnon managed the design of the FPU. Vinod K. Dham was general manager of the P5 group.
Intel's Larrabee multicore architecture project uses a processor core derived from a P5 core (P54C), augmented by multithreading, 64-bit instructions, and a 16-wide vector processing unit. Intel's low-powered Bonnell microarchitecture employed in early Atom processor cores also uses an in-order dual pipeline similar to P5.
Intel used the Pentium name instead of 80586, because it discovered that numbers cannot be trademarked.
Improvements over the i486
The P5 microarchitecture brings several important advances over the prior i486 architecture.
Performance:
Superscalar architecture – The Pentium has two datapaths (pipelines) that allow it to complete two instructions per clock cycle in many cases. The main pipe (U) can handle any instruction, while the other (V) can handle the most common simple instructions. Some reduced instruction set computer (RISC) proponents had argued that the "complicated" x86 instruction set would probably never be implemented by a tightly pipelined microarchitecture, much less by a dual-pipeline design. The 486 and the Pentium demonstrated that this was indeed possible and feasible.
64-bit external databus doubles the amount of information possible to read or write on each memory access and therefore allows the Pentium to load its code cache faster than the 80486; it also allows faster access and storage of 64-bit and 80-bit x87 FPU data.
Separation of code and data caches lessens the fetch and operand read/write conflicts compared to the 486. To reduce access time and implementation cost, both of them are 2-way associative, instead of the single 4-way cache of the 486. A related enhancement in the Pentium is the ability to read a contiguous block from the code cache even when it is split between two cache lines (at least 17 bytes in worst case).
Much faster floating-point unit. Some instructions showed an enormous improvement, most notably FMUL, with up to 15 times higher throughput than in the 80486 FPU. The Pentium is also able to execute a FXCH ST(x) instruction in parallel with an ordinary (arithmetical or load/store) FPU instruction.
Four-input address adders enables the Pentium to further reduce the address calculation latency compared to the 80486. The Pentium can calculate full addressing modes with segment-base + base-register + scaled register + immediate offset in a single cycle; the 486 has a three-input address adder only, and must therefore divide such calculations between two cycles.
The microcode can employ both pipelines to let auto-repeating instructions such as REP MOVSW perform one iteration every clock cycle, whereas the 80486 needed three clocks per iteration (and the earliest x86 chips significantly more than the 486); a sketch of such a string move appears after this list. Also, optimized access to the first microcode words during the decode stages makes several frequent instructions execute significantly more quickly, especially in their most common forms and in typical cases. Some examples (486→Pentium, in clock cycles): CALL (3→1), RET (5→2), shifts/rotates (2–3→1).
A faster, fully hardware-based multiplier makes instructions such as MUL and IMUL several times faster (and more predictable) than in the 80486; the execution time is reduced from 13–42 clock cycles down to 10–11 for 32-bit operands.
Virtualized interrupt to speed up virtual 8086 mode.
Branch prediction
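As a concrete illustration of the auto-repeating string move mentioned in the list above, the sketch below wraps REP MOVSW in GCC-style extended inline assembly. It assumes a GCC- or Clang-compatible toolchain on x86 and a clear direction flag (which the usual ABIs guarantee); it is an illustrative sketch, not code from Intel's manuals.

```c
/* Minimal sketch: copy n 16-bit words with the auto-repeating REP MOVSW
 * string instruction.  On the Pentium the microcode drives this through
 * both pipelines at about one word per clock once started; the 80486
 * needed roughly three clocks per word. */
#include <stddef.h>
#include <stdint.h>

static void copy_words(uint16_t *dst, const uint16_t *src, size_t n) {
    __asm__ volatile(
        "rep movsw"                       /* while (n--) *dst++ = *src++;        */
        : "+D"(dst), "+S"(src), "+c"(n)   /* DI = dest, SI = source, CX = count  */
        :
        : "memory");
}
```

A modern compiler would normally be left to choose its own copy strategy; the point here is only to show the instruction that the microcode optimization targets.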
Other features:
Enhanced debug features with the introduction of the Processor-based debug port (see Pentium Processor Debugging in the Developers Manual, Vol 1).
Enhanced self-test features like the L1 cache parity check (see Cache Structure in the Developers Manual, Vol 1).
New instructions: CPUID, CMPXCHG8B, RDTSC, RDMSR, WRMSR, RSM; a short CPUID example follows this list.
Test registers TR0–TR7 and MOV instructions for access to them were eliminated.
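Of the instructions listed above, CPUID is the easiest to demonstrate from C. The sketch below relies on GCC/Clang's <cpuid.h> helper, which is a toolchain assumption rather than anything Pentium-specific; leaf 0 returns the vendor string and leaf 1 the family/model/stepping fields (family 5 on a P5 Pentium).

```c
/* Minimal sketch: query the CPUID instruction introduced with the Pentium.
 * Uses the __get_cpuid() helper from GCC/Clang's <cpuid.h>. */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned eax, ebx, ecx, edx;
    char vendor[13] = {0};

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;                         /* CPUID not supported        */
    memcpy(vendor + 0, &ebx, 4);          /* vendor string is spread    */
    memcpy(vendor + 4, &edx, 4);          /* across EBX, EDX, ECX       */
    memcpy(vendor + 8, &ecx, 4);

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        unsigned family = (eax >> 8) & 0xF;   /* 5 on a P5 Pentium */
        unsigned model  = (eax >> 4) & 0xF;
        printf("vendor: %s  family: %u  model: %u\n", vendor, family, model);
    }
    return 0;
}
```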
The later Pentium MMX also added the MMX instruction set, a basic integer single instruction, multiple data (SIMD) instruction set extension marketed for use in multimedia applications. MMX could not be used simultaneously with the x87 FPU instructions because the registers were reused (to allow fast context switches). More important enhancements were the doubling of the instruction and data cache sizes and a few microarchitectural changes for better performance.
The Pentium was designed to execute over 100 million instructions per second (MIPS), and the 75 MHz model was able to reach 126.5 MIPS in certain benchmarks. The Pentium architecture typically offered just under twice the performance of a 486 processor per clock cycle in common benchmarks. The fastest 80486 parts (with slightly improved microarchitecture and 100 MHz operation) were almost as powerful as the first-generation Pentiums, and the AMD Am5x86 was roughly equal to the Pentium 75 regarding pure ALU performance.
Errata
The early versions of 60–100 MHz P5 Pentiums had a problem in the floating-point unit that resulted in incorrect (but predictable) results from some division operations. This flaw, discovered in 1994 by professor Thomas Nicely at Lynchburg College, Virginia, became widely known as the Pentium FDIV bug and caused embarrassment for Intel, which created an exchange program to replace the faulty processors.
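A widely circulated self-test exercises one of the failing operand pairs: on an affected divider the quotient of 4195835/3145727 comes out slightly low, so (x/y)·y falls short of x by about 256, whereas a correct FPU leaves essentially no residual. The sketch below checks only this single case; it is an illustration, not Intel's official detection routine.

```c
/* Check one well-known FDIV-bug operand pair.  'volatile' keeps the
 * compiler from folding the arithmetic at compile time, so the division
 * actually runs on the FPU being tested. */
#include <math.h>
#include <stdio.h>

int main(void) {
    volatile double x = 4195835.0, y = 3145727.0;
    double r = x - (x / y) * y;     /* ~0 normally, ~256 on a flawed P5 */
    if (fabs(r) > 1.0)
        printf("possible FDIV-flawed Pentium (residual %g)\n", r);
    else
        printf("this division test case is computed correctly\n");
    return 0;
}
```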
In 1997, another erratum was discovered that could allow a malicious program to crash a system without any special privileges, the "F00F bug". All P5 series processors were affected and no fixed steppings were ever released; however, contemporary operating systems were patched with workarounds to prevent crashes.
Cores and steppings
The Pentium was Intel's primary microprocessor for personal computers during the mid-1990s. The original design was reimplemented in newer processes and new features were added to maintain its competitiveness, and to address specific markets such as portable computers. As a result, there were several variants of the P5 microarchitecture.
P5
The first Pentium microprocessor core was code-named "P5". Its product code was 80501 (80500 for the earliest steppings Q0399). There were two versions, specified to operate at 60 MHz and 66 MHz respectively, using Socket 4. This first implementation of the Pentium used a traditional 5-volt power supply (descended from the usual transistor-transistor logic (TTL) compatibility requirements). It contained 3.1 million transistors and measured 16.7 mm by 17.6 mm for an area of 293.92 mm2. It was fabricated in a 0.8 μm bipolar complementary metal–oxide–semiconductor (BiCMOS) process. The 5-volt design resulted in relatively high energy consumption for its operating frequency when compared to the directly following models.
P54C
The P5 was followed by the P54C (80502) in 1994, with versions specified to operate at 75, 90, or 100 MHz using a 3.3 volt power supply. Marking the switch to Socket 5, this was the first Pentium processor to operate at 3.3 volts, reducing energy consumption, but necessitating voltage regulation on mainboards. As with higher-clocked 486 processors, an internal clock multiplier was employed from here on to let the internal circuitry work at a higher frequency than the external address and data buses, as it is more complicated and cumbersome to increase the external frequency, due to physical constraints. It also allowed two-way multiprocessing, and had an integrated local APIC and new power management features. It contained 3.3 million transistors and measured 163 mm2. It was fabricated in a BiCMOS process which has been described as both 0.5 μm and 0.6 μm due to differing definitions.
P54CQS
The P54C was followed by the P54CQS in early 1995, which operated at 120 MHz. It was fabricated in a 0.35 μm BiCMOS process and was the first commercial microprocessor to be fabricated in a 0.35 μm process. Its transistor count was identical to that of the P54C and, despite the newer process, its die area was identical as well. The chip was connected to the package using wire bonding, which only allows connections along the edges of the chip. A smaller chip would have required a redesign of the package, as there is a limit on the length of the wires and the edges of the chip would be further away from the pads on the package. The solution was to keep the chip the same size, retain the existing pad-ring, and only reduce the size of the Pentium's logic circuitry to enable it to achieve higher clock frequencies.
P54CS
The P54CQS was quickly followed by the P54CS, which operated at 133, 150, 166 and 200 MHz, and introduced Socket 7. It contained 3.3 million transistors, measured 90 mm2 and was fabricated in a 0.35 μm BiCMOS process with four levels of interconnect.
P24T
The P24T Pentium OverDrive for 486 systems was released in 1995; it was based on 3.3 V, 0.6 μm versions using a 63 or 83 MHz clock. Since these used Socket 2/3, some modifications had to be made to compensate for the 32-bit data bus and slower on-board L2 cache of 486 motherboards. They were therefore equipped with a 32 KB L1 cache (double that of pre-P55C Pentium CPUs).
P55C
The P55C (or 80503) was developed by Intel's Research & Development Center in Haifa, Israel. It was sold as Pentium with MMX Technology (usually just called Pentium MMX); although it was based on the P5 core, it featured a new set of 57 "MMX" instructions intended to improve performance on multimedia tasks, such as encoding and decoding digital media data. The Pentium MMX line was introduced on October 22, 1996, and released in January 1997.
The new instructions worked on new data types: 64-bit packed vectors of either eight 8-bit integers, four 16-bit integers, two 32-bit integers, or one 64-bit integer. So, for example, the PADDUSB (Packed ADD Unsigned Saturated Byte) instruction adds two vectors, each containing eight 8-bit unsigned integers, elementwise; each addition that would overflow saturates, yielding 255, the maximal unsigned value that can be represented in a byte. These rather specialized instructions generally require special coding by the programmer for them to be used.
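The saturating behaviour can be made concrete with a scalar emulation in C. The sketch below reproduces the arithmetic PADDUSB performs on one 8-byte vector; it is not the MMX instruction itself (compilers expose that through intrinsics such as _mm_adds_pu8), just an illustration of its element-wise clamping.

```c
/* Scalar emulation of PADDUSB: add two 8-byte vectors of unsigned bytes
 * element-wise, clamping each sum to 255 instead of wrapping around. */
#include <stdint.h>
#include <stdio.h>

static void paddusb_emulated(uint8_t dst[8], const uint8_t a[8], const uint8_t b[8]) {
    for (int i = 0; i < 8; i++) {
        unsigned sum = (unsigned)a[i] + b[i];
        dst[i] = (uint8_t)(sum > 255 ? 255 : sum);   /* saturate, don't wrap */
    }
}

int main(void) {
    uint8_t a[8] = {10, 100, 200, 250, 0, 128, 255, 64};
    uint8_t b[8] = {20, 100, 100,  10, 5, 128,   1, 64};
    uint8_t out[8];

    paddusb_emulated(out, a, b);
    for (int i = 0; i < 8; i++)
        printf("%3u + %3u -> %3u\n", a[i], b[i], out[i]);  /* e.g. 200+100 -> 255 */
    return 0;
}
```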
Other changes to the core include a 6-stage pipeline (vs. 5 on the P5) with a return stack (first done on the Cyrix 6x86) and better parallelism, an improved instruction decoder, a 16 KB L1 data cache and a 16 KB L1 instruction cache, both 4-way associative (vs. 8 KB each with 2-way associativity on the P5), four write buffers that could be used by either pipeline (vs. one per pipeline on the P5), and an improved branch predictor taken from the Pentium Pro, with a 512-entry buffer (vs. 256 on the P5).
It contained 4.5 million transistors and had an area of 140 mm2. It was fabricated in a 0.28 μm CMOS process with the same metal pitches as the previous 0.35 μm BiCMOS process, so Intel described it as "0.35 μm" because of its similar transistor density. The process has four levels of interconnect.
While the P55C remained compatible with Socket 7, the voltage requirements for powering the chip differ from the standard Socket 7 specifications. Most motherboards manufactured for Socket 7 before the establishment of the P55C standard are not compliant with the dual voltage rail required for proper operation of this CPU (2.8 volt core voltage, 3.3 volt input/output (I/O) voltage). Intel addressed the issue with OverDrive upgrade kits that featured an interposer with its own voltage regulation.
Tillamook
Pentium MMX notebook CPUs used a mobile module that held the CPU. This module was a printed circuit board (PCB) with the CPU directly attached to it in a smaller form factor. The module snapped to the notebook motherboard, and typically a heat spreader was installed and made contact with the module. However, with the 0.25 μm Tillamook Mobile Pentium MMX (named after a city in Oregon), the module also held the 430TX chipset along with the system's 512 KB static random-access memory (SRAM) cache memory.
Models and variants
Competitors
After the introduction of the Pentium, competitors such as NexGen, AMD, Cyrix, and Texas Instruments announced Pentium-compatible processors in 1994. CIO magazine identified NexGen's Nx586 as the first Pentium-compatible CPU, while PC Magazine described the Cyrix 6x86 as the first. These were followed by the AMD K5, which was delayed due to design difficulties. AMD later bought NexGen to help design the AMD K6, and Cyrix was bought by National Semiconductor. Later processors from AMD and Intel retain compatibility with the original Pentium.
See also
List of Intel CPU microarchitectures
List of Intel Pentium microprocessors
Cache on a stick (COASt), L2 cache modules for Pentium
IA-32 instruction set architecture (ISA)
Intel 82497 cache controller
Competitors
AMD K5, AMD K6
Cyrix 6x86
WinChip C6
NexGen Nx586
Rise mP6
References
External links
CPU-Collection.de - Intel Pentium images and descriptions
Plasma Online Intel CPU Identification
The Pentium Timeline Project The Pentium Timeline Project maps oldest and youngest chip known of every s-spec made. Data are shown in an interactive timeline.
Intel datasheets
Pentium (P5)
Pentium (P54)
Pentium MMX (P55C)
Mobile Pentium MMX (P55C)
Mobile Pentium MMX (Tillamook)
Intel manuals
These official manuals provide an overview of the Pentium processor and its features:
Pentium Processor Family Developer's Manual Pentium Processor (Volume 1) (Intel order number 241428)
Pentium Processor Family Developer's Manual Volume 2: Instruction Set Reference (Intel order number 243191)
Pentium Processor Family Developer's Manual Volume 3: Architecture and Programming Manual (Intel order number 241430)
Computer-related introductions in 1993
Intel x86 microprocessors
Intel microarchitectures
Superscalar microprocessors
32-bit microprocessors |
25009 | https://en.wikipedia.org/wiki/Privacy | Privacy | Privacy is the ability of an individual or group to seclude themselves or information about themselves, and thereby express themselves selectively.
When something is private to a person, it usually means that something is inherently special or sensitive to them. The domain of privacy partially overlaps with security, which can include the concepts of appropriate use and protection of information. Privacy may also take the form of bodily integrity. The right not to be subjected to unsanctioned invasions of privacy by the government, corporations, or individuals is part of many countries' privacy laws, and in some cases, constitutions.
The concept of universal individual privacy is a modern concept primarily associated with Western culture, particularly British and North American, and remained virtually unknown in some cultures until recent times. Now, most cultures recognize the ability of individuals to withhold certain parts of personal information from wider society. With the rise of technology, the debate regarding privacy has shifted from a bodily sense to a digital sense. As the world has become digital, there have been conflicts regarding the legal right to privacy and where it is applicable. In most countries, the right to a reasonable expectation of digital privacy has been extended from the original right to privacy, and many jurisdictions, notably the US (through its agency, the Federal Trade Commission) and the member states of the European Union (EU), have passed acts that further protect digital privacy from public and private entities and grant additional rights to users of technology.
With the rise of the Internet, there has been an increase in the prevalence of social bots, causing political polarization and harassment. Online harassment has also spiked, particularly with teenagers, which has consequently resulted in multiple privacy breaches. Selfie culture, the prominence of networks like Facebook and Instagram, location technology, and the use of advertisements and their tracking methods also pose threats to digital privacy.
Through the rise of technology and the immensity of the debate regarding privacy, there have been various conceptions of privacy, ranging from the right to be let alone, as defined in "The Right to Privacy" (the first U.S. publication discussing privacy as a legal right), to the theory of the privacy paradox, which describes the notion that users online may say they are concerned about their privacy but in reality are not. Along with these various understandings of privacy, there are actions that reduce privacy; the most recent classification, by Daniel J. Solove, includes processing information, sharing information, and invading personal space to obtain private information. Conversely, in order to protect a user's privacy, multiple steps can be taken, specifically through practicing encryption, anonymity, and further measures to bolster the security of their data.
History
Privacy has historical roots in ancient Greek philosophical discussions. The most well-known of these was Aristotle's distinction between two spheres of life: the public sphere of the polis, associated with political life, and the private sphere of the oikos, associated with domestic life. In the United States, more systematic treatises of privacy did not appear until the 1890s, with the development of privacy law in America.
Technology
As technology has advanced, the way in which privacy is protected and violated has changed with it. In the case of some technologies, such as the printing press or the Internet, the increased ability to share information can lead to new ways in which privacy can be breached. It is generally agreed that the first publication advocating privacy in the United States was the 1890 article by Samuel Warren and Louis Brandeis, "The Right to Privacy", and that it was written mainly in response to the increase in newspapers and photographs made possible by printing technologies.
In 1948, 1984, written by George Orwell, was published. A classic dystopian novel, 1984 describes the life of Winston Smith in 1984, located in Oceania, a totalitarian state. The all-controlling Party, led by Big Brother, maintains power through mass surveillance and restrictions on freedom of speech and thought. George Orwell provides commentary on the negative effects of totalitarianism, particularly on privacy and censorship. Parallels have been drawn between 1984 and modern censorship and privacy, a notable example being that large social media companies, rather than the government, are able to monitor a user's data and decide what is allowed to be said online through their censorship policies, ultimately for monetary purposes.
In the 1960s, people began to consider how changes in technology were bringing changes in the concept of privacy. Vance Packard’s The Naked Society was a popular book on privacy from that era and led US discourse on privacy at that time. In addition, Alan Westin's Privacy and Freedom shifted the debate from privacy in a physical sense, that is, how the government controls a person's body (e.g. Roe v. Wade) and activities such as wiretapping and photography, toward privacy over personal information. As important records became digitized, Westin argued that personal data was becoming too accessible and that a person should have complete jurisdiction over his or her data, laying the foundation for the modern discussion of privacy.
New technologies can also create new ways to gather private information. For example, in the United States, it was thought that heat sensors intended to be used to find marijuana-growing operations would be acceptable. Contrary to popular opinion, in 2001 in Kyllo v. United States (533 U.S. 27) it was decided that the use of thermal imaging devices that can reveal previously unknown information without a warrant does indeed constitute a violation of privacy. In 2019, after developing a corporate rivalry in competing voice-recognition software, Apple and Amazon required employees to listen to intimate moments and faithfully transcribe the contents.
Police and government
Police and citizens often conflict over the degree to which the police can intrude on a citizen's digital privacy. For instance, in 2012, the Supreme Court ruled unanimously in United States v. Jones (565 U.S. 400), concerning Antoine Jones, who was arrested for drug possession after police placed a GPS tracker on his car without a warrant, that such warrantless tracking infringes the Fourth Amendment. The Court also reasoned that there is some "reasonable expectation of privacy" in transportation, the reasonable-expectation standard having already been established under Griswold v. Connecticut (1965). The Supreme Court further clarified that the Fourth Amendment did not only pertain to physical instances of intrusion but also digital instances, and thus United States v. Jones became a landmark case.
In 2014, the Supreme Court ruled unanimously in Riley v. California (573 U.S. 373), where David Leon Riley was arrested after being pulled over for driving on expired license tags and the police searched his phone and discovered that he was tied to a shooting, that searching a citizen's phone without a warrant was an unreasonable search and a violation of the Fourth Amendment. The Supreme Court concluded that cell phones contain personal information different from trivial items, and went further, stating that information stored on the cloud was not necessarily a form of evidence. Riley v. California became a landmark case, protecting citizens' digital privacy when confronted by the police.
A recent notable occurrence of the conflict between law enforcement and a citizen in terms of digital privacy has been in the 2018 case, Carpenter v. United States (585 U.S. ). In this case, the FBI used cell phone records without a warrant to arrest Timothy Ivory Carpenter on multiple charges, and the Supreme Court ruled that the warrantless search of cell phone records violated the Fourth Amendment, citing that the Fourth Amendment protects "reasonable expectations of privacy" and that information sent to third parties still falls under data that can be included under "reasonable expectations of privacy".
Beyond law enforcement, many interactions between the government and citizens have been revealed either lawfully or unlawfully, specifically through whistleblowers. One notable example is Edward Snowden, who revealed multiple mass-surveillance operations of the National Security Agency (NSA), showing that the NSA continues to breach the security of millions of people, mainly by collecting great amounts of data through private third-party companies, hacking into foreign embassies and the infrastructure of other countries, and various other breaches of data. These revelations prompted a culture shock and stirred international debate related to digital privacy.
Internet
Andrew Grove, co-founder and former CEO of Intel Corporation, offered his thoughts on internet privacy in an interview published in May 2000:
Legal discussions of Internet privacy
The Internet has brought new concerns about privacy in an age where computers can permanently store records of everything: "where every online photo, status update, Twitter post and blog entry by and about us can be stored forever", writes law professor and author Jeffrey Rosen.
One of the first instances of privacy being discussed in a legal manner came in 1914, when the Federal Trade Commission (FTC) was established under the Federal Trade Commission Act, with the initial goal of promoting competition amongst businesses and prohibiting unfair and misleading business practices. Since the 1970s, however, the FTC has become involved in privacy law and enforcement, the first instance being its implementation and enforcement of the Fair Credit Reporting Act (FCRA), which regulates how credit bureaus can use a client's data and grants consumers further credit rights. In addition to the FCRA, the FTC has implemented various other important acts that protect consumer privacy. For example, the FTC enforces the Children's Online Privacy Protection Act (COPPA) of 1998, which regulates services geared towards children under the age of thirteen, and the Red Flags Rule, passed in 2010, which requires that companies have measures in place to protect clients against identity theft and, if clients become victims of identity theft, steps to alleviate its consequences.
In 2018, the European Union (EU)'s General Data Protection Regulation (GDPR) went into effect, a privacy regulation that replaced the Data Protection Directive of 1995. The GDPR requires that consumers within the EU be given complete and concise knowledge about how companies use their data and have the right to access and correct the data that a company stores about them, enforcing stricter privacy rules than the Data Protection Directive of 1995.
Social networking
Several online social network sites (OSNs) are among the top 10 most visited websites globally. Facebook, for example, as of August 2015, was the largest social-networking site, with nearly 2.7 billion members, who upload over 4.75 billion pieces of content daily. While Twitter is significantly smaller, with 316 million registered users, the US Library of Congress announced that it would be acquiring and permanently storing the entire archive of public Twitter posts since 2006.
A review and evaluation of scholarly work regarding the current state of the value of individuals' privacy of online social networking show the following results: "first, adults seem to be more concerned about potential privacy threats than younger users; second, policy makers should be alarmed by a large part of users who underestimate risks of their information privacy on OSNs; third, in the case of using OSNs and its services, traditional one-dimensional privacy approaches fall short". This is exacerbated by deanonymization research indicating that personal traits such as sexual orientation, race, religious and political views, personality, or intelligence can be inferred based on a wide variety of digital footprints, such as samples of text, browsing logs, or Facebook Likes.
Intrusions of social media privacy are known to affect employment in the United States. Microsoft reports that 75 percent of U.S. recruiters and human-resource professionals now do online research about candidates, often using information provided by search engines, social-networking sites, photo/video-sharing sites, personal web sites and blogs, and Twitter. They also report that 70 percent of U.S. recruiters have rejected candidates based on internet information. This has created a need by many candidates to control various online privacy settings in addition to controlling their online reputations, the conjunction of which has led to legal suits against both social media sites and US employers.
Selfie culture
Selfies are popular today. A search for photos with the hashtag #selfie retrieves over 23 million results on Instagram and 51 million with the hashtag #me. However, due to modern corporate and governmental surveillance, this may pose a risk to privacy. In a research study with a sample size of 3,763, researchers found that among users posting selfies on social media, women generally have greater concerns over privacy than men, and that users' privacy concerns inversely predict their selfie behavior and activity.
Online harassment
After the 1999 Columbine shooting, where violent video games and music were thought to be among the main influences on the killers, some states began to pass anti-bullying laws, some of which included cyber-bullying provisions. The suicide of 13-year-old Megan Meier, who had been harassed on Myspace, prompted Missouri to pass anti-harassment laws, though the perpetrators were later declared innocent. Through the rise of smartphones and the growing popularity of social media such as Facebook and Instagram, along with messaging, online forums, gaming communities, and email, online harassment continued to grow. 18-year-old Jessica Logan committed suicide in 2009 after her boyfriend sent explicit photos of her to teenagers at several high schools and she was then harassed through Myspace, which led to her school passing anti-harassment rules. Further notable occurrences in which digital privacy was invaded include the deaths of Tyler Clementi and Amanda Todd. Todd's death instigated Canadian funding for studies on bullying and the passage of cyber-bullying legislation, but problems regarding the lack of protection for users were raised, by Todd's mother herself, since the bill allowed companies complete access to a user's data.
All U.S. states have now passed laws regarding online harassment. According to a 2017 report by the National Center for Education Statistics and the Bureau of Justice Statistics, 15% of adolescents aged 12–18 have been subject to cyberbullying. According to the CDC's 2019 Youth Risk Behavior Surveillance System, 15.7% of high school students had been subject to cyberbullying within the preceding year.
Bot accounts
Bots originated in the 1980s on IRC (Internet Relay Chat), where they served basic purposes such as stating the date and time; over time they have expanded to other purposes, such as flagging copyright issues in articles.
Forms of social media such as Twitter, Facebook, and Instagram see prevalent activity from social bots, which, unlike IRC bots, are accounts that are not human and perform autonomous behavior to some degree. Bots, especially those with malicious intent, became most prevalent during the 2016 U.S. presidential election, where both the Trump and Clinton campaigns had millions of bots essentially working on their behalf to influence the election. A subsection of these bots targeted and assaulted certain journalists, causing some to stop reporting on matters because they dreaded further harassment. In the same election, Russian Twitter bots presenting themselves as Midwestern swing-voter Republicans were used to amplify and spread misinformation. In October 2020, data scientist Emilio Ferrara found that 19% of tweets related to the 2016 election were generated by bots. Following the election, a 2017 study revealed that nearly 48 million Twitter accounts are bots; furthermore, approximately 33% of tweets related to Brexit were found to have been produced by bots. Data indicates that the use of bots has increased since the 2016 election, and AI-driven bots are becoming harder to detect; soon they will be able to emulate human-like behavior, commenting and submitting comments regarding policy, thereby affecting political debate, election outcomes, and users' perception of the people they interact with online.
Although bots have been used in a negative context when politics are described, many bots have been used to protect against online harassment. For example, since 2020, Facebook researchers have been developing Web-Enabled Simulation (WES) bots that emulate bad human behavior and then engineers use this data to determine the best correctives.
Privacy and location-based services
Increasingly, mobile devices facilitate location tracking, which creates user privacy problems. A user's location and preferences constitute personal information, and their improper use violates that user's privacy. A recent MIT study by de Montjoye et al. showed that four spatio-temporal points (approximate places and times) are enough to uniquely identify 95% of 1.5 million people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low; therefore, even coarse or blurred datasets provide little anonymity.
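The counting behind such a unicity measurement can be illustrated with a toy example. The sketch below uses a tiny hard-coded table of (antenna, hour) observations for four assumed users and asks how many of them are pinned down by two of their own points; the data, scale, and parameters are illustrative assumptions and have nothing to do with the study's actual dataset or code.

```c
/* Toy illustration of "unicity": how many users in a small mobility table
 * are uniquely identified by K of their own (place, time) points? */
#include <stdio.h>

#define NUSERS 4
#define NPTS   4   /* observations recorded per user        */
#define K      2   /* points assumed known to an adversary  */

/* rec[u][p] = {antenna id, hour bucket} for user u, observation p */
static const int rec[NUSERS][NPTS][2] = {
    {{11, 8}, {12, 9}, {13, 18}, {11, 22}},
    {{11, 8}, {14, 9}, {15, 18}, {16, 22}},
    {{21, 8}, {22, 9}, {23, 18}, {24, 22}},
    {{11, 8}, {12, 9}, {23, 18}, {24, 22}},
};

/* Does user u have any record matching (place, hour)? */
static int has_point(int u, int place, int hour) {
    for (int p = 0; p < NPTS; p++)
        if (rec[u][p][0] == place && rec[u][p][1] == hour)
            return 1;
    return 0;
}

int main(void) {
    int unique = 0;
    for (int u = 0; u < NUSERS; u++) {
        int matches = 0;                 /* users consistent with u's K points */
        for (int v = 0; v < NUSERS; v++) {
            int ok = 1;
            for (int p = 0; p < K; p++)
                if (!has_point(v, rec[u][p][0], rec[u][p][1]))
                    ok = 0;
            matches += ok;
        }
        if (matches == 1)                /* only u fits: the trace is unique */
            unique++;
    }
    printf("%d of %d users uniquely identified by %d points\n",
           unique, NUSERS, K);
    return 0;
}
```

At the scale of the study (1.5 million users, four points each), this same kind of counting is what produces the reported 95% unicity figure.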
Several methods to protect user privacy in location-based services have been proposed, including the use of anonymizing servers and blurring of information. Methods to quantify privacy have also been proposed, to calculate the equilibrium between the benefit of providing accurate location information and the drawbacks of risking personal privacy.
Advertising on mobile devices
After the internet was introduced, it became the predominant medium of advertising, displacing newspapers and magazines. With the growth of digital advertising, people began to be tracked using HTTP cookies, and this data was used to target relevant audiences. Since the introduction of iPhones and Android devices, data brokers have also been planted within apps for further tracking. With cookies underpinning a $350 billion digital industry especially focused on mobile devices, digital privacy has become a main source of concern for many mobile users, especially with the rise of privacy scandals such as the Cambridge Analytica scandal. Recently, Apple has introduced features that prohibit advertisers from tracking a user's data without their consent, as seen in its pop-up notifications that let users decide the extent to which a company can track their behavior. Google has begun to roll out similar features, but concerns have arisen about how a privacy-conscious internet will function if advertisers cannot use data from users as a form of capital. Apple has set a precedent by implementing a stricter crackdown on privacy, especially with its pop-up feature, which has made it harder for businesses, especially small businesses, to target relevant audiences on other platforms such as Facebook, since these advertisers no longer have the relevant data. Google, in contrast, has remained relatively lax in its crackdown, supporting cookies until at least 2023, until a privacy-conscious solution is found.
Ethical controversies over location privacy
There have been scandals regarding location privacy. One instance was the AccuWeather scandal, in which it was revealed that AccuWeather was selling users' location data, collected even when users had opted out of letting AccuWeather track their location, to Reveal Mobile, a company that monetizes data related to a user's location. In a similar international case in 2017, a leaky API inside the McDelivery app exposed private data, including home addresses, of 2.2 million users. With the rise of such scandals, many large American technology companies such as Google, Apple, and Facebook have been subjected to hearings and pressure under the U.S. legislative system. In 2011, with the rise of location technology, US Senator Al Franken wrote an open letter to Steve Jobs, noting the ability of iPhones and iPads to record and store users' locations in unencrypted files, although Apple denied doing so. The conflict has continued into 2021, a recent example being a court case in which the U.S. state of Arizona found that Google misled its users and stored the locations of users regardless of their location settings.
Metadata
The ability to do online inquiries about individuals has expanded dramatically over the last decade. Importantly, directly observed behavior, such as browsing logs, search queries, or contents of a public Facebook profile, can be automatically processed to infer secondary information about an individual, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality.
In Australia, the Telecommunications (Interception and Access) Amendment (Data Retention) Act 2015 made a distinction between collecting the contents of messages sent between users and the metadata surrounding those messages.
Protection of privacy on the Internet
Covert collection of personally identifiable information has been identified as a primary concern by the U.S. Federal Trade Commission. Although some privacy advocates recommend the deletion of original and third-party HTTP cookies, Anthony Miyazaki, marketing professor at Florida International University and privacy scholar, warns that the "elimination of third-party cookie use by Web sites can be circumvented by cooperative strategies with third parties in which information is transferred after the Web site's use of original domain cookies." As of December 2010, the Federal Trade Commission is reviewing policy regarding this issue as it relates to behavioral advertising.
Legal right to privacy
Most countries give citizens rights to privacy in their constitutions. Representative examples of this include the Constitution of Brazil, which says "the privacy, private life, honor and image of people are inviolable"; the Constitution of South Africa says that "everyone has a right to privacy"; and the Constitution of the Republic of Korea says "the privacy of no citizen shall be infringed." The Italian Constitution also defines the right to privacy. Among most countries whose constitutions do not explicitly describe privacy rights, court decisions have interpreted their constitutions to intend to give privacy rights.
Many countries have broad privacy laws outside their constitutions, including Australia's Privacy Act 1988, Argentina's Law for the Protection of Personal Data of 2000, Canada's 2000 Personal Information Protection and Electronic Documents Act, and Japan's 2003 Personal Information Protection Law.
Beyond national privacy laws, there are international privacy agreements. The United Nations Universal Declaration of Human Rights says "No one shall be subjected to arbitrary interference with [their] privacy, family, home or correspondence, nor to attacks upon [their] honor and reputation." The Organisation for Economic Co-operation and Development published its Privacy Guidelines in 1980. The European Union's 1995 Data Protection Directive guides privacy protection in Europe. The 2004 Privacy Framework by the Asia-Pacific Economic Cooperation is a privacy protection agreement for the members of that organization.
Argument against legal protection of privacy
The argument against the legal protection of privacy is predominant in the US. The landmark US Supreme Court case Griswold v. Connecticut established a reasonable expectation of privacy. However, some conservative justices do not consider privacy to be a legal right: when discussing the 2003 case Lawrence v. Texas (539 U.S. 558), Supreme Court Justice Antonin Scalia did not consider privacy to be a right, and Supreme Court Justice Clarence Thomas argued in 2007 that there is "no general right to privacy" in the U.S. Constitution. Many Republican interest groups and activists want appointed justices to be like Justices Thomas and Scalia, since they uphold originalism, which indirectly strengthens the argument against the legal protection of privacy.
Free market vs consumer protection
Approaches to privacy can, broadly, be divided into two categories: free market or consumer protection.
One example of the free market approach is to be found in the voluntary OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. The principles reflected in the guidelines, free of legislative interference, are analyzed in an article putting them into perspective with concepts of the GDPR put into law later in the European Union.
In a consumer protection approach, in contrast, it is claimed that individuals may not have the time or knowledge to make informed choices, or may not have reasonable alternatives available. In support of this view, Jensen and Potts showed that most privacy policies are above the reading level of the average person.
By country
Australia
The Privacy Act 1988 is administered by the Office of the Australian Information Commissioner. The initial introduction of privacy law in 1988 extended to the public sector, specifically to Federal government departments, under the Information Privacy Principles. State government agencies can also be subject to state-based privacy legislation. This built upon the already existing privacy requirements that applied to telecommunications providers (under Part 13 of the Telecommunications Act 1997), and confidentiality requirements that already applied to banking, legal and patient/doctor relationships.
In 2008 the Australian Law Reform Commission (ALRC) conducted a review of Australian privacy law and produced a report titled "For Your Information". Recommendations were taken up and implemented by the Australian Government via the Privacy Amendment (Enhancing Privacy Protection) Bill 2012.
In 2015, the Telecommunications (Interception and Access) Amendment (Data Retention) Act 2015 was passed, to some controversy over its human rights implications and the role of media.
European Union
Although there are comprehensive regulations for data protection in the European Union, one study finds that despite the laws, there is a lack of enforcement in that no institution feels responsible to control the parties involved and enforce their laws. The European Union also champions the Right to be Forgotten concept in support of its adoption by other countries.
India
The Aadhaar project, introduced in 2009, resulted in all 1.2 billion Indians being associated with a 12-digit biometric-secured number. Aadhaar has uplifted the poor in India by providing them with a form of identity and preventing fraud and waste of resources, as normally the government would not be able to allocate its resources to the intended recipients due to ID issues. With the rise of Aadhaar, India has debated whether Aadhaar violates an individual's privacy and whether any organization should have access to an individual's digital profile, as the Aadhaar card became associated with other economic sectors, allowing for the tracking of individuals by both public and private bodies. Aadhaar databases have suffered security attacks as well, and the project was also met with mistrust regarding the safety of the social protection infrastructures. In 2017, when Aadhaar was challenged, the Indian Supreme Court declared privacy a human right, but postponed the decision regarding the constitutionality of Aadhaar to another bench. In September 2018, the Indian Supreme Court determined that the Aadhaar project did not violate the legal right to privacy.
United Kingdom
In the United Kingdom, it is not possible to bring an action for invasion of privacy. An action may be brought under another tort (usually breach of confidence) and privacy must then be considered under EC law. In the UK, it is sometimes a defence that disclosure of private information was in the public interest. There is, however, the Information Commissioner's Office (ICO), an independent public body set up to promote access to official information and protect personal information. They do this by promoting good practice, ruling on eligible complaints, giving information to individuals and organisations, and taking action when the law is broken. The relevant UK laws include: Data Protection Act 1998; Freedom of Information Act 2000; Environmental Information Regulations 2004; Privacy and Electronic Communications Regulations 2003. The ICO has also provided a "Personal Information Toolkit" online which explains in more detail the various ways of protecting privacy online.
United States
Although the US Constitution does not explicitly include the right to privacy, individual as well as locational privacy are implicitly granted by the Constitution under the 4th Amendment. The Supreme Court of the United States has found that other guarantees have "penumbras" that implicitly grant a right to privacy against government intrusion, for example in Griswold v. Connecticut. In the United States, the right of freedom of speech granted in the First Amendment has limited the effects of lawsuits for breach of privacy. Privacy is regulated in the US by the Privacy Act of 1974, and various state laws. The Privacy Act of 1974 only applies to Federal agencies in the executive branch of the Federal government. Certain privacy rights have been established in the United States via legislation such as the Children's Online Privacy Protection Act (COPPA), the Gramm–Leach–Bliley Act (GLB), and the Health Insurance Portability and Accountability Act (HIPAA).
Unlike the EU and most EU-member states, the US does not recognize the right to privacy of non-US citizens. The UN's Special Rapporteur on the right to privacy, Joseph A. Cannataci, criticized this distinction.
Conceptions of privacy
Privacy as contextual integrity
The theory of contextual integrity defines privacy as an appropriate information flow, where appropriateness, in turn, is defined as conformance with legitimate, informational norms specific to social contexts.
Right to be let alone
In 1890, the United States jurists Samuel D. Warren and Louis Brandeis wrote "The Right to Privacy", an article in which they argued for the "right to be let alone", using that phrase as a definition of privacy. This concept relies on the theory of natural rights and focuses on protecting individuals. The article was written in response to recent technological developments of the time, such as photography, and to sensationalist journalism, also known as yellow journalism.
There is extensive commentary over the meaning of being "let alone", and among other ways, it has been interpreted to mean the right of a person to choose seclusion from the attention of others if they wish to do so, and the right to be immune from scrutiny or being observed in private settings, such as one's own home. Although this early vague legal concept did not describe privacy in a way that made it easy to design broad legal protections of privacy, it strengthened the notion of privacy rights for individuals and began a legacy of discussion on those rights in the US.
Limited access
Limited access refers to a person's ability to participate in society without having other individuals and organizations collect information about them.
Various theorists have imagined privacy as a system for limiting access to one's personal information. Edwin Lawrence Godkin wrote in the late 19th century that "nothing is better worthy of legal protection than private life, or, in other words, the right of every man to keep his affairs to himself, and to decide for himself to what extent they shall be the subject of public observation and discussion." Adopting an approach similar to the one presented by Ruth Gavison nine years earlier, Sissela Bok said that privacy is "the condition of being protected from unwanted access by others—either physical access, personal information, or attention."
Control over information
Control over one's personal information is the concept that "privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others." Generally, a person who has consensually formed an interpersonal relationship with another person is not considered "protected" by privacy rights with respect to the person they are in the relationship with. Charles Fried said that "Privacy is not simply an absence of information about us in the minds of others; rather it is the control we have over information about ourselves." Nevertheless, in the era of big data, control over information is under pressure.
States of privacy
Alan Westin defined four states—or experiences—of privacy: solitude, intimacy, anonymity, and reserve. Solitude is a physical separation from others; intimacy is a "close, relaxed, and frank relationship between two or more individuals" that results from the seclusion of a pair or small group of individuals. Anonymity is the "desire of individuals for times of 'public privacy.'" Lastly, reserve is the "creation of a psychological barrier against unwanted intrusion"; this creation of a psychological barrier requires others to respect an individual's need or desire to restrict communication of information concerning himself or herself.
In addition to the psychological barrier of reserve, Kirsty Hughes identified three more kinds of privacy barriers: physical, behavioral, and normative. Physical barriers, such as walls and doors, prevent others from accessing and experiencing the individual. (In this sense, "accessing" an individual includes accessing personal information about him or her.) Behavioral barriers communicate to others—verbally, through language, or non-verbally, through personal space, body language, or clothing—that an individual does not want them to access or experience him or her. Lastly, normative barriers, such as laws and social norms, restrain others from attempting to access or experience an individual.
Secrecy
Privacy is sometimes defined as an option to have secrecy. Richard Posner said that privacy is the right of people to "conceal information about themselves that others might use to their disadvantage".
In various legal contexts, when privacy is described as secrecy, a conclusion is reached: if privacy is secrecy, then rights to privacy do not apply for any information which is already publicly disclosed. When privacy-as-secrecy is discussed, it is usually imagined to be a selective kind of secrecy in which individuals keep some information secret and private while they choose to make other information public and not private.
Personhood and autonomy
Privacy may be understood as a necessary precondition for the development and preservation of personhood. Jeffrey Reiman defined privacy in terms of a recognition of one's ownership of their physical and mental reality and a moral right to self-determination. Through the "social ritual" of privacy, or the social practice of respecting an individual's privacy barriers, the social group communicates to developing children that they have exclusive moral rights to their bodies — in other words, moral ownership of their body. This entails control over both active (physical) and cognitive appropriation, the former being control over one's movements and actions and the latter being control over who can experience one's physical existence and when.
Alternatively, Stanley Benn defined privacy in terms of a recognition of oneself as a subject with agency—as an individual with the capacity to choose. Privacy is required to exercise choice. Overt observation makes the individual aware of himself or herself as an object with a "determinate character" and "limited probabilities." Covert observation, on the other hand, changes the conditions in which the individual is exercising choice without his or her knowledge and consent.
In addition, privacy may be viewed as a state that enables autonomy, a concept closely connected to that of personhood. According to Joseph Kufer, an autonomous self-concept entails a conception of oneself as a "purposeful, self-determining, responsible agent" and an awareness of one's capacity to control the boundary between self and other—that is, to control who can access and experience him or her and to what extent. Furthermore, others must acknowledge and respect the self's boundaries—in other words, they must respect the individual's privacy.
The studies of psychologists such as Jean Piaget and Victor Tausk show that, as children learn that they can control who can access and experience them and to what extent, they develop an autonomous self-concept. In addition, studies of adults in particular institutions, such as Erving Goffman's study of "total institutions" such as prisons and mental institutions, suggest that systemic and routinized deprivations or violations of privacy deteriorate one's sense of autonomy over time.
Self-identity and personal growth
Privacy may be understood as a prerequisite for the development of a sense of self-identity. Privacy barriers, in particular, are instrumental in this process. According to Irwin Altman, such barriers "define and limit the boundaries of the self" and thus "serve to help define [the self]." This control primarily entails the ability to regulate contact with others. Control over the "permeability" of the self's boundaries enables one to control what constitutes the self and thus to define what is the self.
In addition, privacy may be seen as a state that fosters personal growth, a process integral to the development of self-identity. Hyman Gross suggested that, without privacy—solitude, anonymity, and temporary releases from social roles—individuals would be unable to freely express themselves and to engage in self-discovery and self-criticism. Such self-discovery and self-criticism contributes to one's understanding of oneself and shapes one's sense of identity.
Intimacy
In a way analogous to how the personhood theory imagines privacy as some essential part of being an individual, the intimacy theory imagines privacy to be an essential part of the way that humans build strengthened or intimate relationships with other humans. Because part of human relationships includes individuals volunteering to self-disclose most, if not all, personal information, this is one area in which privacy does not apply.
James Rachels advanced this notion by writing that privacy matters because "there is a close connection between our ability to control who has access to us and to information about us, and our ability to create and maintain different sorts of social relationships with different people." Protecting intimacy is at the core of the concept of sexual privacy, which law professor Danielle Citron argues should be protected as a unique form of privacy.
Physical privacy
Physical privacy could be defined as preventing "intrusions into one's physical space or solitude." An example of the legal basis for the right to physical privacy is the U.S. Fourth Amendment, which guarantees "the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures".
Physical privacy may be a matter of cultural sensitivity, personal dignity, and/or shyness. There may also be concerns about safety, if, for example one is wary of becoming the victim of crime or stalking.
Organizational
Government agencies, corporations, groups/societies and other organizations may desire to keep their activities or secrets from being revealed to other organizations or individuals, adopting various security practices and controls in order to keep private information confidential. Organizations may seek legal protection for their secrets. For example, a government administration may be able to invoke executive privilege or declare certain information to be classified, or a corporation might attempt to protect valuable proprietary information as trade secrets.
Privacy self-synchronization
Privacy self-synchronization is a hypothesized mode by which the stakeholders of an enterprise privacy program spontaneously contribute collaboratively to the program's maximum success. The stakeholders may be customers, employees, managers, executives, suppliers, partners or investors. When self-synchronization is reached, the model states that the personal interests of individuals toward their privacy are in balance with the business interests of enterprises who collect and use the personal information of those individuals.
An individual right
David Flaherty believes networked computer databases pose threats to privacy. He develops 'data protection' as an aspect of privacy, which involves "the collection, use, and dissemination of personal information". This concept forms the foundation for fair information practices used by governments globally. Flaherty forwards an idea of privacy as information control, "[i]ndividuals want to be left alone and to exercise some control over how information about them is used".
Richard Posner and Lawrence Lessig focus on the economic aspects of personal information control. Posner criticizes privacy for concealing information, which reduces market efficiency. For Posner, employment is selling oneself in the labour market, which he believes is like selling a product. Any 'defect' in the 'product' that is not reported is fraud. For Lessig, privacy breaches online can be regulated through code and law. Lessig claims "the protection of privacy would be stronger if people conceived of the right as a property right", and that "individuals should be able to control information about themselves".
A collective value and a human right
There have been attempts to establish privacy as one of the fundamental human rights, whose social value is an essential component in the functioning of democratic societies.
Priscilla Regan believes that individual concepts of privacy have failed philosophically and in policy. She supports a social value of privacy with three dimensions: shared perceptions, public values, and collective components. Shared ideas about privacy allow freedom of conscience and diversity in thought. Public values guarantee democratic participation, including freedoms of speech and association, and limit government power. Collective elements describe privacy as a collective good that cannot be divided. Regan's goal is to strengthen privacy claims in policy making: "if we did recognize the collective or public-good value of privacy, as well as the common and public value of privacy, those advocating privacy protections would have a stronger basis upon which to argue for its protection".
Leslie Regan Shade argues that the human right to privacy is necessary for meaningful democratic participation, and ensures human dignity and autonomy. Privacy depends on norms for how information is distributed, and whether this is appropriate. Violations of privacy depend on context. The human right to privacy has precedent in the United Nations Universal Declaration of Human Rights: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." Shade believes that privacy must be approached from a people-centered perspective, and not through the marketplace.
Eliza Watt of Westminster Law School, University of Westminster in London, UK, proposes applying the international human rights law (IHRL) concept of "virtual control" as an approach to dealing with extraterritorial mass surveillance by state intelligence agencies. Watt envisions the "virtual control" test, understood as remote control over the individual's right to privacy of communications, where privacy is recognized under Article 17 of the ICCPR. This, she contends, may help to close the normative gap that is being exploited by nation states.
Privacy paradox and economic valuation
The privacy paradox is a phenomenon in which online users state that they are concerned about their privacy but behave as if they were not. While this term was coined as early as 1998, it wasn't used in its current popular sense until the year 2000.
Susan B. Barnes similarly used the term privacy paradox to refer to the ambiguous boundary between private and public space on social media. When compared to adults, young people tend to disclose more information on social media. However, this does not mean that they are not concerned about their privacy. Barnes gave a case in her article: in a television interview about Facebook, a student voiced her concerns about disclosing personal information online. However, when the reporter asked to see her Facebook page, it turned out that she had put her home address, phone numbers, and pictures of her young son on the page.
The privacy paradox has been studied and described in different research settings. Several studies have shown this inconsistency between privacy attitudes and behavior among online users. However, an increasing number of studies have also shown that there are significant and at times large correlations between privacy concerns and information-sharing behavior, which speaks against the privacy paradox. A meta-analysis of 166 studies published on the topic reported an overall small but significant relation between privacy concerns and information sharing or the use of privacy protection measures. So although there are several individual instances or anecdotes in which behavior appears paradoxical, on average privacy concerns and privacy behaviors seem to be related, and several findings question the general existence of the privacy paradox.
However, the relationship between concerns and behavior is likely only modest, and there are several arguments that can explain why that is the case. According to the attitude-behavior gap, attitudes and behaviors are in general, and in most cases, not closely related. A main explanation for the partial mismatch in the context of privacy specifically is that users lack awareness of the risks and of the degree of protection. Users may underestimate the harm of disclosing information online. On the other hand, some researchers argue that the mismatch comes from a lack of technology literacy and from the design of sites. For example, users may not know how to change their default settings even though they care about their privacy. Psychologists Sonja Utz and Nicole C. Krämer in particular pointed out that the privacy paradox can occur when users must trade off between their privacy concerns and impression management.
Research on irrational decision making
A study conducted by Susanne Barth and Menno D.T. de Jong demonstrates that decision making takes place on an irrational level, especially when it comes to mobile computing. Mobile applications in particular are often designed in ways that spur fast, automatic decision making without assessment of risk factors. Protection measures against these unconscious mechanisms are often difficult to access while downloading and installing apps. Even with mechanisms in place to protect user privacy, users may not have the knowledge or experience to enable these mechanisms.
Users of mobile applications generally have very little knowledge of how their personal data are used. When they decide which application to download, they typically do not rely on the information provided by application vendors regarding the collection and use of personal data. Other research finds that users are much more likely to be swayed by cost, functionality, design, ratings, reviews and number of downloads than requested permissions, regardless of how important users may claim permissions to be when asked.
A study by Zafeiropoulou specifically examined location data, a form of personal information increasingly used by mobile applications. Their survey also found evidence that supports the existence of the privacy paradox for location data. Survey data on privacy risk perception in relation to the use of privacy-enhancing technologies indicate that a high perception of privacy risk is an insufficient motivator for people to adopt privacy-protecting strategies, even though they know such strategies exist. It also raises the question of what the value of data is, as there is no equivalent of a stock market for personal information.
The economic valuation of privacy
The willingness to incur a privacy risk is suspected to be driven by a complex array of factors including risk attitudes, personal value for private information, and general attitudes to privacy (which may be derived from surveys). One experiment aiming to determine the monetary value of several types of personal information indicated relatively low evaluations of personal information.
Information asymmetry
Users are not always given the tools to live up to their professed privacy concerns, and they are sometimes willing to trade private information for convenience, functionality, or financial gain, even when the gains are very small. One study suggests that people think their browser history is worth the equivalent of a cheap meal. Another finds that attitudes to privacy risk do not appear to depend on whether it is already under threat or not.
Inherent necessity for privacy violation
It is suggested by Andréa Belliger and David J. Krieger that the privacy paradox should not be considered a paradox, but more of a privacy dilemma, for services that cannot exist without the user sharing private data. However, the general public is typically not given the choice whether to share private data or not, making it difficult to verify any claim that a service truly cannot exist without sharing private data.
Privacy Calculus
The privacy calculus model posits that two factors determine privacy behavior, namely privacy concerns (or perceived risks) and expected benefits. The privacy calculus has by now been supported by several studies, and it stands in direct contrast to the privacy paradox. Both perspectives can be reconciled if they are understood from a more moderate position: behavior is neither completely paradoxical nor completely logical, and the consistency between concerns and behavior depends on users, situations, or contexts.
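A minimal sketch of the model's logic, with entirely hypothetical weights, values, and function names, is the following comparison of weighted benefits against weighted risks:

```python
# Minimal illustrative sketch of the privacy calculus model: disclosure is
# predicted when weighted expected benefits outweigh weighted perceived risks.
# The function name, weights, and example values are hypothetical.

def predicts_disclosure(perceived_risk, expected_benefit,
                        risk_weight=1.0, benefit_weight=1.0):
    """Return True if weighted benefits exceed weighted risks."""
    return benefit_weight * expected_benefit > risk_weight * perceived_risk

# A user who values an app's convenience (0.8) more than the perceived risk
# of sharing location data (0.5) is predicted to disclose.
print(predicts_disclosure(perceived_risk=0.5, expected_benefit=0.8))   # True
```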
Actions which reduce privacy
As with other conceptions of privacy, there are various ways to discuss what kinds of processes or actions remove, challenge, lessen, or attack privacy. In 1960 legal scholar William Prosser created the following list of activities which can be remedied with privacy protection:
Intrusion into a person's private space, own affairs, or wish for solitude
Public disclosure of personal information about a person which could be embarrassing for them to have revealed
Promoting access to information about a person which could lead the public to have incorrect beliefs about them
Encroaching on someone's personality rights, and using their likeness to advance interests which are not their own
From 2004 to 2008, building from this and other historical precedents, Daniel J. Solove presented another classification of actions which are harmful to privacy, including collection of information which is already somewhat public, processing of information, sharing information, and invading personal space to get private information.
Collecting information
In the context of harming privacy, information collection means gathering whatever information can be obtained by doing something to obtain it. Examples include surveillance and interrogation. Another example is how consumers and marketers collect information in the business context through facial recognition, which has recently raised privacy concerns and is the subject of ongoing research.
Aggregating information
It can happen that privacy is not harmed when information is available, but that the harm can come when that information is collected as a set, then processed together in such a way that the collective reporting of pieces of information encroaches on privacy. Actions in this category which can lessen privacy include the following:
data aggregation, which is connecting many related but unconnected pieces of information
identification, which can mean breaking the de-identification of items of data by putting them through a de-anonymization process, so that facts which were intended not to name particular people become associated with those people
insecurity, such as a lack of data security, which includes cases in which an organization that is supposed to be responsible for protecting data instead suffers a data breach that harms the people whose data it held
secondary use, which is when people agree to share their data for a certain purpose, but the data is then used in ways without the data donors' informed consent
exclusion is the use of a person's data without any attempt to give the person an opportunity to manage the data or participate in its usage
Information dissemination
Information dissemination is an attack on privacy when information which was shared in confidence is shared or threatened to be shared in a way that harms the subject of the information.
There are various examples of this. Breach of confidentiality is when one entity promises to keep a person's information private, then breaks that promise. Disclosure is making information about a person more accessible in a way that harms the subject of the information, regardless of how the information was collected or the intent of making it available. Exposure is a special type of disclosure in which the information disclosed is emotional to the subject or taboo to share, such as revealing their private life experiences, their nudity, or perhaps private body functions. Increased accessibility means advertising the availability of information without actually distributing it, as in the case of doxxing. Blackmail is making a threat to share information, perhaps as part of an effort to coerce someone. Appropriation is an attack on the personhood of someone, and can include using the value of someone's reputation or likeness to advance interests which are not those of the person being appropriated. Distortion is the creation of misleading information or lies about a person.
Invasion
Invasion of privacy, a subset of expectation of privacy, is a different concept from collecting, aggregating, and disseminating information, because those three are a misuse of available data, whereas invasion is an attack on the right of individuals to keep personal secrets. An invasion is an attack in which information, whether intended to be public or not, is captured in a way that insults the personal dignity and right to private space of the person whose data is taken.
Intrusion
An intrusion is any unwanted entry into a person's private personal space and solitude for any reason, regardless of whether data is taken during that breach of space. Decisional interference is when an entity somehow injects itself into the personal decision-making process of another person, perhaps to influence that person's private decisions but in any case doing so in a way that disrupts the private personal thoughts that a person has.
Examples of invasions of privacy
In 2019, contract workers for Apple and Amazon reported being forced to continue listening to "intimate moments" captured on the companies' smart speakers in order to improve the quality of their automated speech recognition software.
Techniques to improve privacy
As with actions that reduce privacy, there are multiple aspects of privacy and multiple techniques to improve them to varying extents. When actions are taken at an organizational level, they may be referred to as cybersecurity.
Encryption
Individuals can encrypt e-mail by enabling one of two encryption protocols: S/MIME, which is built into mail clients such as those from Apple and Microsoft (Outlook) and is therefore most common, or PGP. The Signal messaging app, which encrypts messages so that only the recipient can read them, is notable for being available on many mobile devices and for implementing a form of perfect forward secrecy.
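For illustration only, the following Python sketch uses the cryptography package's Fernet recipe, a symmetric scheme rather than S/MIME, PGP, or the Signal protocol (which rely on public-key techniques), to show the basic property these tools provide: a message that is unreadable without the corresponding key.

```python
# Illustrative only: symmetric encryption with the "cryptography" package's
# Fernet recipe (not S/MIME, PGP, or the Signal protocol, which use
# public-key cryptography). It shows that ciphertext is unreadable without the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # secret key shared only with the recipient
f = Fernet(key)

token = f.encrypt(b"meet at 6 pm")     # ciphertext, safe to send over an open channel
print(token)                           # unreadable bytes
print(f.decrypt(token))                # b'meet at 6 pm' -- recoverable only with the key
```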
Anonymity
Anonymizing proxies or anonymizing networks like I2P and Tor can be used to prevent Internet service providers (ISPs) from knowing which sites one visits and with whom one communicates, by hiding IP addresses and location, but they do not necessarily protect a user from third-party data mining. Anonymizing proxies are built into a user's device, whereas a Virtual Private Network (VPN) requires users to download software. Using a VPN hides all data and connections exchanged between servers and a user's computer, keeping the user's online data unshared and secure and providing a barrier between the user and their ISP; this is especially important when a user is connected to public Wi-Fi. However, users should understand that all their data then flows through the VPN provider's servers rather than the ISP's. Users should decide for themselves whether to use an anonymizing proxy or a VPN.
In a less technical sense, using incognito mode or private browsing mode prevents a user's computer from saving history, Internet files, and cookies, though the ISP will still have access to the user's search history. Anonymous search engines do not share a user's history or clicks and can obstruct ad trackers.
User empowerment
Concrete solutions on how to solve paradoxical behavior still do not exist. Many efforts are focused on processes of decision making, like restricting data access permissions during application installation, but this would not completely bridge the gap between user intention and behavior. Susanne Barth and Menno D.T. de Jong believe that for users to make more conscious decisions on privacy matters, the design needs to be more user-oriented.
Other security measures
In a social sense, simply limiting the amount of personal information that users post on social media could increase their security, which in turn makes it harder for criminals to perform identity theft. Moreover, creating a set of complex passwords and using two-factor authentication can make users less susceptible to having their accounts compromised when various data leaks occur. Furthermore, users should protect their digital privacy by using anti-virus software, which can block harmful programs such as pop-ups that scan for personal information on a user's computer.
Legal methods
Although there are laws that promote the protection of users, in some countries, like the U.S., there is no federal digital privacy law and privacy settings are essentially limited by the state of currently enacted privacy laws. To further their privacy, users can contact their representatives and let them know that privacy is a main concern, which in turn increases the likelihood that further privacy laws will be enacted.
See also
Civil liberties
Digital identity
Global surveillance
Identity theft in the United States
Open data
Open access
Privacy-enhancing technologies
Privacy policy
Solitude
Transparency
Wikipedia's privacy policy – Wikimedia Foundation
Works cited
References
External links
Glenn Greenwald: Why privacy matters. Video on YouTube, provided by TED. Published 10 October 2014.
International Privacy Index world map, The 2007 International Privacy Ranking, Privacy International (London).
"Privacy" entry in the Stanford Encyclopedia of Philosophy.
Privacy law
Human rights
Identity management
Digital rights
Civil rights and liberties |
25220 | https://en.wikipedia.org/wiki/Quantum%20computing | Quantum computing | Quantum computing is a type of computation that harnesses the collective properties of quantum states, such as superposition, interference, and entanglement, to perform calculations. The devices that perform quantum computations are known as quantum computers. Though current quantum computers are too small to outperform usual (classical) computers for practical applications, they are believed to be capable of solving certain computational problems, such as integer factorization (which underlies RSA encryption), substantially faster than classical computers. The study of quantum computing is a subfield of quantum information science.
Quantum computing began in 1980 when physicist Paul Benioff proposed a quantum mechanical model of the Turing machine. Richard Feynman and Yuri Manin later suggested that a quantum computer had the potential to simulate things a classical computer could not feasibly do. In 1994, Peter Shor developed a quantum algorithm for factoring integers with the potential to decrypt RSA-encrypted communications. In 1998 Isaac Chuang, Neil Gershenfeld and Mark Kubinec created the first two-qubit quantum computer that could perform computations. Despite ongoing experimental progress since the late 1990s, most researchers believe that "fault-tolerant quantum computing [is] still a rather distant dream." In recent years, investment in quantum computing research has increased in the public and private sectors. On 23 October 2019, Google AI, in partnership with the U.S. National Aeronautics and Space Administration (NASA), claimed to have performed a quantum computation that was infeasible on any classical computer, but whether this claim was or is still valid is a topic of active research.
There are several types of quantum computers (also known as quantum computing systems), including the quantum circuit model, quantum Turing machine, adiabatic quantum computer, one-way quantum computer, and various quantum cellular automata. The most widely used model is the quantum circuit, based on the quantum bit, or "qubit", which is somewhat analogous to the bit in classical computation. A qubit can be in a 1 or 0 quantum state, or in a superposition of the 1 and 0 states. When it is measured, however, it is always 0 or 1; the probability of either outcome depends on the qubit's quantum state immediately prior to measurement.
Efforts towards building a physical quantum computer focus on technologies such as transmons, ion traps and topological quantum computers, which aim to create high-quality qubits. These qubits may be designed differently, depending on the full quantum computer's computing model, whether quantum logic gates, quantum annealing, or adiabatic quantum computation. There are currently a number of significant obstacles to constructing useful quantum computers. It is particularly difficult to maintain qubits' quantum states, as they suffer from quantum decoherence and state fidelity. Quantum computers therefore require error correction.
Any computational problem that can be solved by a classical computer can also be solved by a quantum computer. Conversely, any problem that can be solved by a quantum computer can also be solved by a classical computer, at least in principle given enough time. In other words, quantum computers obey the Church–Turing thesis. This means that while quantum computers provide no additional advantages over classical computers in terms of computability, quantum algorithms for certain problems have significantly lower time complexities than corresponding known classical algorithms. Notably, quantum computers are believed to be able to quickly solve certain problems that no classical computer could solve in any feasible amount of time—a feat known as "quantum supremacy." The study of the computational complexity of problems with respect to quantum computers is known as quantum complexity theory.
Quantum circuit
Definition
The prevailing model of quantum computation describes the computation in terms of a network of quantum logic gates. This model is a complex linear-algebraic generalization of boolean circuits.
A memory consisting of n bits of information has 2^n possible states. A vector representing all memory states thus has 2^n entries (one for each state). This vector is viewed as a probability vector and represents the fact that the memory is to be found in a particular state.
In the classical view, one entry would have a value of 1 (i.e. a 100% probability of being in this state) and all other entries would be zero.
In quantum mechanics, probability vectors can be generalized to density operators. The quantum state vector formalism is usually introduced first because it is conceptually simpler, and because it can be used instead of the density matrix formalism for pure states, where the whole quantum system is known.
We begin by considering a simple memory consisting of only one bit. This memory may be found in one of two states: the zero state or the one state. We may represent the state of this memory using Dirac notation so that

|0⟩ := (1, 0);   |1⟩ := (0, 1)
A quantum memory may then be found in any quantum superposition of the two classical states |0⟩ and |1⟩:

|ψ⟩ := α|0⟩ + β|1⟩, where |α|^2 + |β|^2 = 1
The coefficients α and β are complex numbers. One qubit of information is said to be encoded into the quantum memory. The state |ψ⟩ is not itself a probability vector but can be connected with a probability vector via a measurement operation. If the quantum memory is measured to determine whether the state is |0⟩ or |1⟩ (this is known as a computational basis measurement), the zero state would be observed with probability |α|^2 and the one state with probability |β|^2. The numbers α and β are called probability amplitudes.
The state of this one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by a matrix

X := [0 1]
     [1 0]
Mathematically, the application of such a logic gate to a quantum state vector is modelled with matrix multiplication. Thus X|0⟩ = |1⟩ and X|1⟩ = |0⟩.
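The one-qubit formalism above can be sketched directly in code. The following minimal example (assuming only NumPy, with illustrative amplitude values) represents |0⟩ and |1⟩ as basis vectors, forms a superposition, reads off measurement probabilities from the amplitudes, and applies the NOT gate by matrix multiplication:

```python
import numpy as np

# Computational basis states of a single qubit.
zero = np.array([1, 0], dtype=complex)   # |0>
one  = np.array([0, 1], dtype=complex)   # |1>

# A superposition a|0> + b|1>; the amplitudes satisfy |a|^2 + |b|^2 = 1.
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = a * zero + b * one

# Measurement in the computational basis: outcome probabilities are |a|^2 and |b|^2.
print(abs(psi[0]) ** 2, abs(psi[1]) ** 2)    # 0.5 0.5

# The NOT (X) gate as a matrix; applying a gate is matrix multiplication.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)
print(X @ zero)                              # -> |1>
print(X @ one)                               # -> |0>
```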
The mathematics of single qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit whilst leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated using another example. The possible states of a two-qubit quantum memory are

|00⟩ := (1, 0, 0, 0);   |01⟩ := (0, 1, 0, 0);   |10⟩ := (0, 0, 1, 0);   |11⟩ := (0, 0, 0, 1)
The CNOT gate can then be represented using the following matrix:

CNOT := [1 0 0 0]
        [0 1 0 0]
        [0 0 0 1]
        [0 0 1 0]
As a mathematical consequence of this definition, CNOT|00⟩ = |00⟩, CNOT|01⟩ = |01⟩, CNOT|10⟩ = |11⟩, and CNOT|11⟩ = |10⟩. In other words, the CNOT applies a NOT gate (X from before) to the second qubit if and only if the first qubit is in the state |1⟩. If the first qubit is |0⟩, nothing is done to either qubit.
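A similar sketch (again assuming NumPy; the labels and helper names are for illustration only) builds the two-qubit basis states as tensor products and verifies the action of the CNOT matrix described above:

```python
import numpy as np

zero = np.array([1, 0], dtype=complex)
one  = np.array([0, 1], dtype=complex)

# Two-qubit basis states |00>, |01>, |10>, |11> as tensor (Kronecker) products.
basis = {f"{i}{j}": np.kron([zero, one][i], [zero, one][j])
         for i in (0, 1) for j in (0, 1)}

# CNOT: flip the second qubit if and only if the first qubit is |1>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

for label, state in basis.items():
    out = CNOT @ state
    result = next(l for l, s in basis.items() if np.allclose(out, s))
    print(f"CNOT|{label}> = |{result}>")   # |00>->|00>, |01>->|01>, |10>->|11>, |11>->|10>
```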
In summary, a quantum computation can be described as a network of quantum logic gates and measurements. However, any measurement can be deferred to the end of quantum computation, though this deferment may come at a computational cost, so most quantum circuits depict a network consisting only of quantum logic gates and no measurements.
Any quantum computation (which is, in the above formalism, any 2^n × 2^n unitary matrix acting on n qubits) can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set, since a computer that can run such circuits is a universal quantum computer. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem.
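As a small illustration of gates drawn from such a universal set, the following NumPy sketch applies a single-qubit Hadamard gate followed by a CNOT to the state |00⟩, producing an entangled Bell state; the variable names are illustrative and not tied to any particular software framework:

```python
import numpy as np

# Single-qubit Hadamard gate and the two-qubit CNOT gate.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
I = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, apply H to the first qubit, then CNOT.
state = np.array([1, 0, 0, 0], dtype=complex)   # |00>
state = np.kron(H, I) @ state                   # (|00> + |10>)/sqrt(2)
state = CNOT @ state                            # (|00> + |11>)/sqrt(2), a Bell state
print(np.round(state, 3))                       # [0.707 0 0 0.707]
```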
Quantum algorithms
Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms.
Quantum algorithms that offer more than a polynomial speedup over the best known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups. These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, which is a restricted model where lower bounds are much easier to prove and doesn't necessarily translate to speedups for practical problems.
Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that no quantum algorithm gives a super-polynomial speedup, which is believed to be unlikely.
Some quantum algorithms, like Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms. Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems. Many examples of provable quantum speedups for query problems are related to Grover's algorithm, including Brassard, Høyer, and Tapp's algorithm for finding collisions in two-to-one functions, which uses Grover's algorithm, and Farhi, Goldstone, and Gutmann's algorithm for evaluating NAND trees, which is a variant of the search problem.
Potential applications
Cryptography
A notable application of quantum computation is for attacks on cryptographic systems that are currently in use. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes). By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. In particular, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.
Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size).
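The key-length halving can be illustrated with trivial arithmetic; the function name below is hypothetical and the figures are the rough estimates quoted above:

```python
# Illustrative arithmetic only: Grover's algorithm reduces brute-force key search
# from roughly 2**n to roughly 2**(n / 2) invocations, halving effective key length.
# The function name is hypothetical.

def effective_bits_under_grover(key_bits):
    return key_bits // 2

for n in (128, 256):
    print(f"{n}-bit key: ~2^{n} classical trials, "
          f"~2^{effective_bits_under_grover(n)} trials with Grover's algorithm")
```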
Quantum cryptography could potentially fulfill some of the functions of public key cryptography. Quantum-based cryptographic systems could, therefore, be more secure than traditional systems against quantum hacking.
Search problems
The most well-known example of a problem admitting a polynomial quantum speedup is unstructured search, finding a marked item out of a list of n items in a database. This can be solved by Grover's algorithm using O(√n) queries to the database, quadratically fewer than the Ω(n) queries required for classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups.
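A statevector simulation makes the quadratic query advantage concrete. The following sketch (assuming NumPy; the database size and marked index are arbitrary illustrative choices) runs roughly (π/4)·√N Grover iterations and prints the resulting probability of measuring the marked item:

```python
import numpy as np

# Statevector sketch of Grover's search over N = 2**n unstructured items,
# with one marked index. About (pi/4) * sqrt(N) iterations are needed.
n, marked = 4, 11                # 16 items; item 11 is the marked one (arbitrary choice)
N = 2 ** n

state = np.full(N, 1 / np.sqrt(N))                    # uniform superposition over all items

oracle = np.eye(N)
oracle[marked, marked] = -1                           # oracle flips the sign of the marked item

diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)    # inversion about the mean

iterations = round(np.pi / 4 * np.sqrt(N))            # 3 iterations for N = 16
for _ in range(iterations):
    state = diffusion @ (oracle @ state)

print(iterations, np.abs(state[marked]) ** 2)         # ~0.96 probability of the marked item
```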
Problems that can be efficiently addressed with Grover's algorithm have the following properties:
There is no searchable structure in the collection of possible answers,
The number of possible answers to check is the same as the number of inputs to the algorithm, and
There exists a boolean function that evaluates each input and determines whether it is the correct answer
For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied is the Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies.
Simulation of quantum systems
Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing. Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider.
Quantum simulations might be used to predict future paths of particles and protons under superposition in the double-slit experiment.
About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertilizer industry, even though naturally occurring organisms also produce ammonia. Quantum simulations might be used to understand this process and increase production.
Quantum annealing and adiabatic optimization
Quantum annealing and adiabatic quantum computation rely on the adiabatic theorem to undertake calculations. A system is placed in the ground state of a simple Hamiltonian, which is slowly evolved to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times throughout the process.
Machine learning
Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks.
For example, the quantum algorithm for linear systems of equations, or "HHL Algorithm", named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts. Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks.
Computational biology
In the field of computational biology, computing has played a big role in solving many biological problems; a well-known example is computational genomics, where computing has drastically reduced the time needed to sequence a human genome. Given that computational biology relies on generic data modeling and storage, applications of quantum computing to computational biology are expected to arise as well.
Computer-aided drug design and generative chemistry
Deep generative chemistry models emerge as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models including quantum GANs may eventually be developed into ultimate generative chemistry algorithms. Hybrid architectures combining quantum computers with deep classical networks, such as Quantum Variational Autoencoders, can already be trained on commercially available annealers and used to generate novel drug-like molecular structures.
Developing physical quantum computers
Challenges
There are a number of technical challenges in building a large-scale quantum computer. Physicist David DiVincenzo has listed these requirements for a practical quantum computer:
Physically scalable to increase the number of qubits
Qubits that can be initialized to arbitrary values
Quantum gates that are faster than decoherence time
Universal gate set
Qubits that can be read easily
Sourcing parts for quantum computers is also very difficult. Many quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co.
The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers which enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge.
Quantum decoherence
One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature. Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using a dilution refrigerator) in order to prevent significant decoherence. A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds.
As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions.
These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.
As described in the Quantum threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often cited figure for the required error rate in each gate for fault-tolerant computation is 10^−3, assuming the noise is depolarizing.
Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 bits without error correction. With error correction, the figure would rise to about 10^7 bits. Computation time is about L^2 or about 10^7 steps and, at 1 MHz, about 10 seconds.
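These order-of-magnitude figures can be checked with simple arithmetic; the sketch below merely reproduces the quoted values and is not derived from any particular error-correction scheme:

```python
# Back-of-the-envelope check of the resource figures quoted above for factoring
# a 1000-bit number with Shor's algorithm; all values are rough orders of magnitude.
L = 1000                                 # digits (bits) of the integer to factor

qubits_without_ec = 10**4                # quoted figure, between L and L^2
qubits_with_ec = qubits_without_ec * L   # error correction adds roughly a factor of L
steps = 10**7                            # roughly L^2 steps, as quoted
gate_rate_hz = 1_000_000                 # 1 MHz

print(f"{qubits_with_ec:.0e} qubits with error correction")  # ~1e+07
print(f"{steps / gate_rate_hz:.0f} seconds at 1 MHz")         # 10 seconds
```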
A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates.
Quantum supremacy
Quantum supremacy is a term coined by John Preskill referring to the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers. The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark.
In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer. This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed, and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to or the closing of the gap between Sycamore and classical supercomputers.
In December 2020, a group at USTC implemented a type of Boson sampling on 76 photons with a photonic quantum computer Jiuzhang to demonstrate quantum supremacy. The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds.
On November 16, 2021 at the quantum computing summit IBM presented a 127-qubit microprocessor named IBM Eagle.
Skepticism
Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales.
Bill Unruh doubted the practicality of quantum computers in a paper published in 1994. Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle. Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved. Physicist Mikhail Dyakonov has expressed skepticism of quantum computing as follows:
"So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be... about 10300... Could we ever learn to control the more than 10300 continuously variable parameters defining the quantum state of such a system? My answer is simple. No, never."
Candidates for physical realizations
For physically implementing a quantum computer, many different candidates are being pursued, among them (distinguished by the physical system used to realize the qubits):
Superconducting quantum computing (qubit implemented by the state of small superconducting circuits [Josephson junctions])
Trapped ion quantum computer (qubit implemented by the internal state of trapped ions)
Neutral atoms in optical lattices (qubit implemented by internal states of neutral atoms trapped in an optical lattice)
Quantum dot computer, spin-based (e.g. the Loss-DiVincenzo quantum computer) (qubit given by the spin states of trapped electrons)
Quantum dot computer, spatial-based (qubit given by electron position in double quantum dot)
Quantum computing using engineered quantum wells, which could in principle enable the construction of quantum computers that operate at room temperature
Coupled quantum wire (qubit implemented by a pair of quantum wires coupled by a quantum point contact)
Nuclear magnetic resonance quantum computer (NMRQC) implemented with the nuclear magnetic resonance of molecules in solution, where qubits are provided by nuclear spins within the dissolved molecule and probed with radio waves
Solid-state NMR Kane quantum computers (qubit realized by the nuclear spin state of phosphorus donors in silicon)
Vibrational quantum computer (qubits realized by vibrational superpositions in cold molecules)
Electrons-on-helium quantum computers (qubit is the electron spin)
Cavity quantum electrodynamics (CQED) (qubit provided by the internal state of trapped atoms coupled to high-finesse cavities)
Molecular magnet (qubit given by spin states)
Fullerene-based ESR quantum computer (qubit based on the electronic spin of atoms or molecules encased in fullerenes)
Nonlinear optical quantum computer (qubits realized by processing states of different modes of light through both linear and nonlinear elements)
Linear optical quantum computer (qubits realized by processing states of different modes of light through linear elements e.g. mirrors, beam splitters and phase shifters)
Diamond-based quantum computer (qubit realized by the electronic or nuclear spin of nitrogen-vacancy centers in diamond)
Bose-Einstein condensate-based quantum computer
Transistor-based quantum computer – string quantum computers with entrainment of positive holes using an electrostatic trap
Rare-earth-metal-ion-doped inorganic crystal based quantum computers (qubit realized by the internal electronic state of dopants in optical fibers)
Metallic-like carbon nanospheres-based quantum computers
The large number of candidates demonstrates that quantum computing, despite rapid progress, is still in its infancy.
Models of computation for quantum computing
There are a number of models of computation for quantum computing, distinguished by the basic elements in which the computation is decomposed. For practical implementations, the four relevant models of computation are:
Quantum gate array – Computation decomposed into a sequence of few-qubit quantum gates.
One-way quantum computer – Computation decomposed into a sequence of Bell state measurements and single-qubit quantum gates applied to a highly entangled initial state (a cluster state), using a technique called quantum gate teleportation.
Adiabatic quantum computer, based on quantum annealing – Computation decomposed into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution.
Topological quantum computer – Computation decomposed into the braiding of anyons in a 2D lattice.
The quantum Turing machine is theoretically important but the physical implementation of this model is not feasible. All of these models of computation—quantum circuits, one-way quantum computation, adiabatic quantum computation, and topological quantum computation—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical.
Relation to computability and complexity theory
Computability theory
Any computational problem solvable by a classical computer is also solvable by a quantum computer. Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers.
Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem and the existence of quantum computers does not disprove the Church–Turing thesis.
Quantum complexity theory
While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers.
The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP and it is widely suspected that BQP ⊋ BPP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity.
The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP ⊄ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if an NP-complete problem were in BQP, then it would follow from NP-hardness that all problems in NP are in BQP).
The relationship of BQP to the basic classical complexity classes can be summarized as follows:

P ⊆ BPP ⊆ BQP ⊆ PP ⊆ PSPACE
It is also known that BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^#P), which is a subclass of PSPACE.
It has been speculated that further advances in physics could lead to even faster computers. For instance, it has been shown that a non-local hidden variable quantum computer based on Bohmian mechanics could implement a search of an N-item database in at most O(N^(1/3)) steps, a slight speedup over Grover's algorithm, which runs in O(N^(1/2)) steps. Note, however, that neither search method would allow quantum computers to solve NP-complete problems in polynomial time. Theories of quantum gravity, such as M-theory and loop quantum gravity, may allow even faster computers to be built. However, defining computation in these theories is an open problem due to the problem of time; that is, within these physical theories there is currently no obvious way to describe what it means for an observer to submit input to a computer at one point in time and then receive output at a later point in time.
See also
Chemical computer
D-Wave Systems
DNA computing
Electronic quantum holography
Intelligence Advanced Research Projects Activity
Kane quantum computer
List of emerging technologies
List of quantum processors
Magic state distillation
Natural computing
Photonic computing
Post-quantum cryptography
Quantum algorithm
Quantum annealing
Quantum bus
Quantum cognition
Quantum circuit
Quantum complexity theory
Quantum cryptography
Quantum logic gate
Quantum machine learning
Quantum supremacy
Quantum threshold theorem
Quantum volume
Rigetti Computing
Supercomputer
Superposition
Theoretical computer science
Timeline of quantum computing
Topological quantum computer
Valleytronics
References
Further reading
Textbooks
Academic papers
External links
Stanford Encyclopedia of Philosophy: "Quantum Computing" by Amit Hagar and Michael E. Cuffaro.
Quantum computing for the very curious by Andy Matuschak and Michael Nielsen
Quantum Computing Made Easy on Satalia blog
Lectures
Quantum computing for the determined – 22 video lectures by Michael Nielsen
Video Lectures by David Deutsch
Lectures at the Institut Henri Poincaré (slides and videos)
Online lecture on An Introduction to Quantum Computing, Edward Gerjuoy (2008)
Lomonaco, Sam. Four Lectures on Quantum Computing given at Oxford University in July 2006
Models of computation
Quantum cryptography
Information theory
Computational complexity theory
Classes of computers
Theoretical computer science
Open problems
Computer-related introductions in 1980
Emerging technologies |
25223 | https://en.wikipedia.org/wiki/Quasigroup | Quasigroup | In mathematics, especially in abstract algebra, a quasigroup is an algebraic structure resembling a group in the sense that "division" is always possible. Quasigroups differ from groups mainly in that they are not necessarily associative.
A quasigroup with an identity element is called a loop.
Definitions
There are at least two structurally equivalent formal definitions of quasigroup. One defines a quasigroup as a set with one binary operation, and the other, from universal algebra, defines a quasigroup as having three primitive operations. The homomorphic image of a quasigroup defined with a single binary operation, however, need not be a quasigroup. We begin with the first definition.
Algebra
A quasigroup is a non-empty set Q with a binary operation ∗ (that is, a magma, indicating that a quasigroup has to satisfy closure property), obeying the Latin square property. This states that, for each a and b in Q, there exist unique elements x and y in Q such that both
a ∗ x = b,
y ∗ a = b
hold. (In other words: Each element of the set occurs exactly once in each row and exactly once in each column of the quasigroup's multiplication table, or Cayley table. This property ensures that the Cayley table of a finite quasigroup, and, in particular, finite group, is a Latin square.) The uniqueness requirement can be replaced by the requirement that the magma be cancellative.
The unique solutions to these equations are written x = a \ b and y = b / a. The operations '\' and '/' are called, respectively, left division and right division.
The empty set equipped with the empty binary operation satisfies this definition of a quasigroup. Some authors accept the empty quasigroup but others explicitly exclude it.
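As an illustration of the Latin square property, the following Python sketch (the function name is my own, not taken from any library) checks whether a finite magma, given by its Cayley table, is a quasigroup, i.e. whether every row and every column of the table is a permutation of the underlying set:

```python
def is_quasigroup(table):
    """Check the Latin square property for a finite magma.

    `table` maps each pair (a, b) to a * b over a finite set Q.
    Returns True iff a * x = b and y * a = b have unique solutions
    for all a, b, i.e. every row and column is a permutation of Q.
    """
    Q = {a for a, _ in table}
    for a in Q:
        row = {table[(a, x)] for x in Q}
        col = {table[(y, a)] for y in Q}
        if row != Q or col != Q:
            return False
    return True

# Subtraction on Z/5Z: a quasigroup that is not a group (no two-sided identity).
Z5 = range(5)
sub_table = {(a, b): (a - b) % 5 for a in Z5 for b in Z5}
print(is_quasigroup(sub_table))  # True
```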
Universal algebra
Given some algebraic structure, an identity is an equation in which all variables are tacitly universally quantified, and in which all operations are among the primitive operations proper to the structure. Algebraic structures axiomatized solely by identities are called varieties. Many standard results in universal algebra hold only for varieties. Quasigroups are varieties if left and right division are taken as primitive.
A quasigroup is a type (2,2,2) algebra (i.e., equipped with three binary operations) satisfying the identities:
y = x ∗ (x \ y),
y = x \ (x ∗ y),
y = (y / x) ∗ x,
y = (y ∗ x) / x.
In other words: Multiplication and division in either order, one after the other, on the same side by the same element, have no net effect.
Hence if (Q, ∗) is a quasigroup according to the first definition, then (Q, ∗, \, /) is the same quasigroup in the sense of universal algebra. And vice versa: if (Q, ∗, \, /) is a quasigroup according to the sense of universal algebra, then (Q, ∗) is a quasigroup according to the first definition.
Loops
A loop is a quasigroup with an identity element; that is, an element, e, such that
x ∗ e = x and e ∗ x = x for all x in Q.
It follows that the identity element, e, is unique, and that every element of Q has unique left and right inverses (which need not be the same).
A quasigroup with an idempotent element is called a pique ("pointed idempotent quasigroup"); this is a weaker notion than a loop but common nonetheless because, for example, given an abelian group (A, +), taking its subtraction operation as quasigroup multiplication yields a pique with the group identity (zero) turned into a "pointed idempotent". (That is, the two structures are related by a principal isotopy.)
A loop that is associative is a group. A group can have a non-associative pique isotope, but it cannot have a nonassociative loop isotope.
There are weaker associativity properties that have been given special names.
For instance, a Bol loop is a loop that satisfies either:
x ∗ (y ∗ (x ∗ z)) = (x ∗ (y ∗ x)) ∗ z for each x, y and z in Q (a left Bol loop),
or else
((z ∗ x) ∗ y) ∗ x = z ∗ ((x ∗ y) ∗ x) for each x, y and z in Q (a right Bol loop).
A loop that is both a left and right Bol loop is a Moufang loop. This is equivalent to any one of the following single Moufang identities holding for all x, y, z:
x ∗ (y ∗ (x ∗ z)) = ((x ∗ y) ∗ x) ∗ z,
z ∗ (x ∗ (y ∗ x)) = ((z ∗ x) ∗ y) ∗ x,
(x ∗ y) ∗ (z ∗ x) = x ∗ ((y ∗ z) ∗ x), or
(x ∗ y) ∗ (z ∗ x) = (x ∗ (y ∗ z)) ∗ x.
Symmetries
Smith (2007) names the following important properties and subclasses:
Semisymmetry
A quasigroup is semisymmetric if the following equivalent identities hold:
x ∗ y = y / x,
y ∗ x = x \ y,
x = (y ∗ x) ∗ y,
x = y ∗ (x ∗ y).
Although this class may seem special, every quasigroup Q induces a semisymmetric quasigroup QΔ on the direct product cube Q3 via the following operation:
where "//" and "\\" are the conjugate division operations given by and .
Triality
Total symmetry
A narrower class is a totally symmetric quasigroup (sometimes abbreviated TS-quasigroup) in which all conjugates coincide as one operation: x ∗ y = x / y = x \ y. Another way to define (the same notion of) totally symmetric quasigroup is as a semisymmetric quasigroup that is also commutative, i.e. x ∗ y = y ∗ x.
Idempotent totally symmetric quasigroups are precisely (i.e. in a bijection with) Steiner triple systems, so such a quasigroup is also called a Steiner quasigroup, and sometimes the latter is even abbreviated as squag. The term sloop refers to an analogue for loops, namely, totally symmetric loops that satisfy x ∗ x = 1 instead of x ∗ x = x. Without idempotency, totally symmetric quasigroups correspond to the geometric notion of extended Steiner triple, also called a Generalized Elliptic Cubic Curve (GECC).
Total antisymmetry
A quasigroup (Q, ∗) is called totally anti-symmetric if for all c, x, y in Q, both of the following implications hold:
(c ∗ x) ∗ y = (c ∗ y) ∗ x implies that x = y
x ∗ y = y ∗ x implies that x = y.
It is called weakly totally anti-symmetric if only the first implication holds.
This property is required, for example, in the Damm algorithm.
Examples
Every group is a loop, because a ∗ x = b if and only if x = a⁻¹ ∗ b, and y ∗ a = b if and only if y = b ∗ a⁻¹.
The integers Z (or the rationals Q or the reals R) with subtraction (−) form a quasigroup. These quasigroups are not loops because there is no identity element (0 is a right identity because x − 0 = x, but not a left identity because, in general, 0 − x ≠ x).
The nonzero rationals Q× (or the nonzero reals R×) with division (÷) form a quasigroup.
Any vector space over a field of characteristic not equal to 2 forms an idempotent, commutative quasigroup under the operation x ∗ y = (x + y) / 2.
Every Steiner triple system defines an idempotent, commutative quasigroup: a ∗ b is the third element of the triple containing a and b. These quasigroups also satisfy (x ∗ y) ∗ y = x for all x and y in the quasigroup. These quasigroups are known as Steiner quasigroups.
The set {±1, ±i, ±j, ±k}, where i·i = j·j = k·k = +1 and with all other products as in the quaternion group, forms a nonassociative loop of order 8. See hyperbolic quaternions for its application. (The hyperbolic quaternions themselves do not form a loop or quasigroup.)
The nonzero octonions form a nonassociative loop under multiplication. The octonions are a special type of loop known as a Moufang loop.
An associative quasigroup is either empty or is a group, since if there is at least one element, the invertibility of the quasigroup binary operation combined with associativity implies the existence of an identity element which then implies the existence of inverse elements, thus satisfying all three requirements of a group.
The following construction is due to Hans Zassenhaus. On the underlying set of the four-dimensional vector space F4 over the 3-element Galois field define
(x1, x2, x3, x4) ∗ (y1, y2, y3, y4) = (x1, x2, x3, x4) + (y1, y2, y3, y4) + (0, 0, 0, (x3 − y3)(x1y2 − x2y1)).
Then (F4, ∗) is a commutative Moufang loop that is not a group.
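The Zassenhaus construction is easy to check by brute force. The following Python sketch (illustrative only; variable names are my own) builds the operation on F4 over the 3-element field, confirms that the zero vector is a two-sided identity and that the operation is commutative, and exhibits a triple on which it fails to associate:

```python
from itertools import product

P = 3  # the 3-element Galois field GF(3)

def mul(x, y):
    """The Zassenhaus operation on the vector space F4 over GF(3)."""
    x1, x2, x3, x4 = x
    y1, y2, y3, y4 = y
    corr = (x3 - y3) * (x1 * y2 - x2 * y1)
    return ((x1 + y1) % P, (x2 + y2) % P, (x3 + y3) % P, (x4 + y4 + corr) % P)

elements = list(product(range(P), repeat=4))  # all 81 elements

# The zero vector is a two-sided identity and the operation is commutative ...
e = (0, 0, 0, 0)
assert all(mul(e, x) == x == mul(x, e) for x in elements)
assert all(mul(x, y) == mul(y, x) for x in elements for y in elements)

# ... but it is not associative, so this loop is not a group.
x, y, z = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)
print(mul(mul(x, y), z), mul(x, mul(y, z)))  # (1, 1, 1, 0) versus (1, 1, 1, 2)
```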
More generally, the nonzero elements of any division algebra form a quasigroup.
Properties
In the remainder of the article we shall denote quasigroup multiplication simply by juxtaposition.
Quasigroups have the cancellation property: if ab = ac, then b = c. This follows from the uniqueness of left division of ab or ac by a. Similarly, if ba = ca, then b = c.
The Latin square property of quasigroups implies that, given any two of the three variables in x ∗ y = z, the third variable is uniquely determined.
Multiplication operators
The definition of a quasigroup can be treated as conditions on the left and right multiplication operators L(x), R(x) : Q → Q, defined by L(x)y = x ∗ y and R(x)y = y ∗ x.
The definition says that both mappings are bijections from Q to itself. A magma Q is a quasigroup precisely when all these operators, for every x in Q, are bijective. The inverse mappings are left and right division, that is, L(x)⁻¹y = x \ y and R(x)⁻¹y = y / x.
In this notation the identities among the quasigroup's multiplication and division operations (stated in the section on universal algebra) are
L(x)L(x)⁻¹ = 1, L(x)⁻¹L(x) = 1, R(x)R(x)⁻¹ = 1, R(x)⁻¹R(x) = 1,
where 1 denotes the identity mapping on Q.
Latin squares
The multiplication table of a finite quasigroup is a Latin square: an n × n table filled with n different symbols in such a way that each symbol occurs exactly once in each row and exactly once in each column.
Conversely, every Latin square can be taken as the multiplication table of a quasigroup in many ways: the border row (containing the column headers) and the border column (containing the row headers) can each be any permutation of the elements. See small Latin squares and quasigroups.
Infinite quasigroups
For a countably infinite quasigroup Q, it is possible to imagine an infinite array in which every row and every column corresponds to some element q of Q, and where the element a ∗ b is in the row corresponding to a and the column corresponding to b. In this situation too, the Latin square property says that each row and each column of the infinite array will contain every possible value precisely once.
For an uncountably infinite quasigroup, such as the group of non-zero real numbers under multiplication, the Latin square property still holds, although the name is somewhat unsatisfactory, as it is not possible to produce the array of combinations to which the above idea of an infinite array extends, since the real numbers cannot all be written in a sequence. (This is somewhat misleading however, as the reals can be written in a sequence of length equal to the cardinality of the continuum, assuming the well-ordering theorem.)
Inverse properties
The binary operation of a quasigroup is invertible in the sense that both L(x) and R(x), the left and right multiplication operators, are bijective, and hence invertible.
Every loop element has a unique left and right inverse given by x^λ = e / x and x^ρ = x \ e.
A loop is said to have (two-sided) inverses if x^λ = x^ρ for all x. In this case the inverse element is usually denoted by x⁻¹.
There are some stronger notions of inverses in loops which are often useful:
A loop has the left inverse property if x^λ ∗ (x ∗ y) = y for all x and y; equivalently, x \ y = x^λ ∗ y.
A loop has the right inverse property if (y ∗ x) ∗ x^ρ = y for all x and y; equivalently, y / x = y ∗ x^ρ.
A loop has the antiautomorphic inverse property if (x ∗ y)^λ = y^λ ∗ x^λ or, equivalently, if (x ∗ y)^ρ = y^ρ ∗ x^ρ.
A loop has the weak inverse property when (x ∗ y) ∗ z = e if and only if x ∗ (y ∗ z) = e. This may be stated in terms of inverses via (x ∗ y)^λ ∗ x = y^λ or, equivalently, x ∗ (y ∗ x)^ρ = y^ρ.
A loop has the inverse property if it has both the left and right inverse properties. Inverse property loops also have the antiautomorphic and weak inverse properties. In fact, any loop which satisfies any two of the above four identities has the inverse property and therefore satisfies all four.
Any loop which satisfies the left, right, or antiautomorphic inverse properties automatically has two-sided inverses.
Morphisms
A quasigroup or loop homomorphism is a map f : Q → P between two quasigroups such that f(x ∗ y) = f(x) ∗ f(y). Quasigroup homomorphisms necessarily preserve left and right division, as well as identity elements (if they exist).
Homotopy and isotopy
Let Q and P be quasigroups. A quasigroup homotopy from Q to P is a triple (α, β, γ) of maps from Q to P such that α(x) ∗ β(y) = γ(x ∗ y)
for all x, y in Q. A quasigroup homomorphism is just a homotopy for which the three maps are equal.
An isotopy is a homotopy for which each of the three maps is a bijection. Two quasigroups are isotopic if there is an isotopy between them. In terms of Latin squares, an isotopy is given by a permutation of rows α, a permutation of columns β, and a permutation on the underlying element set γ.
An autotopy is an isotopy from a quasigroup to itself. The set of all autotopies of a quasigroup form a group with the automorphism group as a subgroup.
Every quasigroup is isotopic to a loop. If a loop is isotopic to a group, then it is isomorphic to that group and thus is itself a group. However, a quasigroup which is isotopic to a group need not be a group. For example, the quasigroup on R with multiplication given by x ∗ y = (x + y) / 2 is isotopic to the additive group (R, +), but is not itself a group. Every medial quasigroup is isotopic to an abelian group by the Bruck–Toyoda theorem.
Conjugation (parastrophe)
Left and right division are examples of forming a quasigroup by permuting the variables in the defining equation. From the original operation ∗ (i.e., x ∗ y = z) we can form five new operations: x ∘ y := y ∗ x (the opposite operation), / and \, and their opposites. That makes a total of six quasigroup operations, which are called the conjugates or parastrophes of ∗. Any two of these operations are said to be "conjugate" or "parastrophic" to each other (and to themselves).
Isostrophe (paratopy)
If the set Q has two quasigroup operations, ∗ and ·, and one of them is isotopic to a conjugate of the other, the operations are said to be isostrophic to each other. There are also many other names for this relation of "isostrophe", e.g., paratopy.
Generalizations
Polyadic or multiary quasigroups
An n-ary quasigroup is a set Q with an n-ary operation, f : Q^n → Q, such that the equation f(x1, ..., xn) = y has a unique solution for any one variable if all the other n variables are specified arbitrarily. Polyadic or multiary means n-ary for some nonnegative integer n.
A 0-ary, or nullary, quasigroup is just a constant element of Q. A 1-ary, or unary, quasigroup is a bijection of Q to itself. A binary, or 2-ary, quasigroup is an ordinary quasigroup.
An example of a multiary quasigroup is an iterated group operation, y = x1 · x2 · ⋯ · xn; it is not necessary to use parentheses to specify the order of operations because the group is associative. One can also form a multiary quasigroup by carrying out any sequence of the same or different group or quasigroup operations, if the order of operations is specified.
There exist multiary quasigroups that cannot be represented in any of these ways. An n-ary quasigroup is irreducible if its operation cannot be factored into the composition of two operations in the following way:
f(x1, ..., xn) = g(x1, ..., x(i−1), h(xi, ..., xj), x(j+1), ..., xn), where 1 ≤ i < j ≤ n and (i, j) ≠ (1, n). Finite irreducible n-ary quasigroups exist for all n > 2; see Akivis and Goldberg (2001) for details.
An n-ary quasigroup with an n-ary version of associativity is called an n-ary group.
Right- and left-quasigroups
A right-quasigroup is a type (2,2) algebra satisfying both identities:
y = (y / x) ∗ x;
y = (y ∗ x) / x.
Similarly, a left-quasigroup is a type (2,2) algebra satisfying both identities:
y = x ∗ (x \ y);
y = x \ (x ∗ y).
Number of small quasigroups and loops
The number of isomorphism classes of small quasigroups and loops is given here:
See also
Division ring – a ring in which every non-zero element has a multiplicative inverse
Semigroup – an algebraic structure consisting of a set together with an associative binary operation
Monoid – a semigroup with an identity element
Planar ternary ring – has an additive and multiplicative loop structure
Problems in loop theory and quasigroup theory
Mathematics of Sudoku
Notes
References
External links
quasigroups
Non-associative algebra
Group theory
Latin squares |
25274 | https://en.wikipedia.org/wiki/Quantum%20information | Quantum information | Quantum information is the information of the state of a quantum system. It is the basic entity of study in quantum information theory, and can be manipulated using quantum information processing techniques. Quantum information refers to both the technical definition in terms of Von Neumann entropy and the general computational term.
It is an interdisciplinary field that involves quantum mechanics, computer science, information theory, philosophy and cryptography among other fields. Its study is also relevant to disciplines such as cognitive science, psychology and neuroscience. Its main focus is in extracting information from matter at the microscopic scale. Observation in science is one of the most important ways of acquiring information and measurement is required in order to quantify the observation, making this crucial to the scientific method. In quantum mechanics, due to the uncertainty principle, non-commuting observables cannot be precisely measured simultaneously, as an eigenstate in one basis is not an eigenstate in the other basis. As both variables are not simultaneously well defined, a quantum state can never contain definitive information about both variables.
Information is something that is encoded in the state of a quantum system; it is physical. While quantum mechanics deals with examining properties of matter at the microscopic level, quantum information science focuses on extracting information from those properties, and quantum computation manipulates and processes information – performs logical operations – using quantum information processing techniques.
Quantum information, like classical information, can be processed using digital computers, transmitted from one location to another, manipulated with algorithms, and analyzed with computer science and mathematics. Just like the basic unit of classical information is the bit, quantum information deals with qubits. Quantum information can be measured using Von Neumann entropy.
Recently, the field of quantum computing has become an active research area because of its potential to disrupt modern computation, communication, and cryptography.
History and development
Development from fundamental quantum mechanics
The history of quantum information theory began at the turn of the 20th century when classical physics was revolutionized into quantum physics. The theories of classical physics were predicting absurdities such as the ultraviolet catastrophe, or electrons spiraling into the nucleus. At first these problems were brushed aside by adding ad hoc hypotheses to classical physics. Soon, it became apparent that a new theory must be created in order to make sense of these absurdities, and the theory of quantum mechanics was born.
Quantum mechanics was formulated by Schrödinger using wave mechanics and Heisenberg using matrix mechanics. The equivalence of these methods was proven later. Their formulations described the dynamics of microscopic systems but had several unsatisfactory aspects in describing measurement processes. Von Neumann formulated quantum theory using operator algebra in a way that it described measurement as well as dynamics. These studies emphasized the philosophical aspects of measurement rather than a quantitative approach to extracting information via measurements.
See: Dynamical Pictures
Development from communication
In 1960s, Stratonovich, Helstrom and Gordon proposed a formulation of optical communications using quantum mechanics. This was the first historical appearance of quantum information theory. They mainly studied error probabilities and channel capacities for communication. Later, Holevo obtained an upper bound of communication speed in the transmission of a classical message via a quantum channel.
Development from atomic physics and relativity
In the 1970s, techniques for manipulating single-atom quantum states, such as the atom trap and the scanning tunneling microscope, began to be developed, making it possible to isolate single atoms and arrange them in arrays. Prior to these developments, precise control over single quantum systems was not possible, and experiments utilized coarser, simultaneous control over a large number of quantum systems. The development of viable single-state manipulation techniques led to increased interest in the field of quantum information and computation.
In the 1980s, interest arose in whether it might be possible to use quantum effects to disprove Einstein's theory of relativity. If it were possible to clone an unknown quantum state, it would be possible to use entangled quantum states to transmit information faster than the speed of light, disproving Einstein's theory. However, the no-cloning theorem showed that such cloning is impossible. The theorem was one of the earliest results of quantum information theory.
Development from cryptography
Despite all the excitement and interest over studying isolated quantum systems and trying to find a way to circumvent the theory of relativity, research in quantum information theory became stagnant in the 1980s. However, around the same time another avenue started dabbling into quantum information and computation: Cryptography. In a general sense, cryptography is the problem of doing communication or computation involving two or more parties who may not trust one another.
Bennett and Brassard developed a communication channel on which it is impossible to eavesdrop without being detected, a way of communicating secretly at long distances using the BB84 quantum cryptographic protocol. The key idea was the use of the fundamental principle of quantum mechanics that observation disturbs the observed, and the introduction of an eavesdropper in a secure communication line will immediately let the two parties trying to communicate know of the presence of the eavesdropper.
Development from computer science and mathematics
With the advent of Alan Turing's revolutionary ideas of a programmable computer, or Turing machine, he showed that any real-world computation can be translated into an equivalent computation involving a Turing machine. This is known as the Church–Turing thesis.
Soon enough, the first computers were made and computer hardware grew at such a fast pace that the growth, through experience in production, was codified into an empirical relationship called Moore's law. This 'law' is a projective trend that states that the number of transistors in an integrated circuit doubles every two years. As transistors began to become smaller and smaller in order to pack more power per surface area, quantum effects started to show up in the electronics resulting in inadvertent interference. This led to the advent of quantum computing, which used quantum mechanics to design algorithms.
At this point, quantum computers showed promise of being much faster than classical computers for certain specific problems. One such example problem was developed by David Deutsch and Richard Jozsa, known as the Deutsch–Jozsa algorithm. This problem however held little to no practical applications. Peter Shor in 1994 came up with a very important and practical problem, that of finding the prime factors of an integer. This problem, together with the related discrete logarithm problem, could be solved efficiently on a quantum computer but not, as far as is known, on a classical computer, suggesting that quantum computers are more powerful than classical Turing machines for such problems.
Development from information theory
Around the time computer science was making a revolution, so was information theory and communication, through Claude Shannon. Shannon developed two fundamental theorems of information theory: noiseless channel coding theorem and noisy channel coding theorem. He also showed that error correcting codes could be used to protect information being sent.
Quantum information theory also followed a similar trajectory: Ben Schumacher in 1995 made an analogue to Shannon's noiseless coding theorem using the qubit. A theory of quantum error correction also developed, which allows quantum computers to compute efficiently in the presence of noise and to communicate reliably over noisy quantum channels.
Qubits and information theory
Quantum information differs strongly from classical information, epitomized by the bit, in many striking and unfamiliar ways. While the fundamental unit of classical information is the bit, the most basic unit of quantum information is the qubit. Classical information is measured using Shannon entropy, while the quantum mechanical analogue is Von Neumann entropy. Given a statistical ensemble of quantum mechanical systems with the density matrix ρ, it is given by S(ρ) = −Tr(ρ log ρ). Many of the same entropy measures in classical information theory can also be generalized to the quantum case, such as Holevo entropy and the conditional quantum entropy.
Unlike classical digital states (which are discrete), a qubit is continuous-valued, describable by a direction on the Bloch sphere. Despite being continuously valued in this way, a qubit is the smallest possible unit of quantum information, and despite the qubit state being continuous-valued, it is impossible to measure the value precisely. Five famous theorems describe the limits on manipulation of quantum information.
no-teleportation theorem, which states that a qubit cannot be (wholly) converted into classical bits; that is, it cannot be fully "read".
no-cloning theorem, which prevents an arbitrary qubit from being copied.
no-deleting theorem, which prevents an arbitrary qubit from being deleted.
no-broadcast theorem, which prevents an arbitrary qubit from being delivered to multiple recipients, although it can be transported from place to place (e.g. via quantum teleportation).
no-hiding theorem, which demonstrates the conservation of quantum information.
These theorems prove that quantum information within the universe is conserved. They open up possibilities in quantum information processing.
Quantum information processing
The state of a qubit contains all of its information. This state is frequently expressed as a vector on the Bloch sphere. This state can be changed by applying linear transformations or quantum gates to them. These unitary transformations are described as rotations on the Bloch Sphere. While classical gates correspond to the familiar operations of Boolean logic, quantum gates are physical unitary operators.
Due to the volatility of quantum systems and the impossibility of copying states, the storing of quantum information is much more difficult than storing classical information. Nevertheless, with the use of quantum error correction quantum information can still be reliably stored in principle. The existence of quantum error correcting codes has also led to the possibility of fault-tolerant quantum computation.
Classical bits can be encoded into and subsequently retrieved from configurations of qubits, through the use of quantum gates. By itself, a single qubit can convey no more than one bit of accessible classical information about its preparation. This is Holevo's theorem. However, in superdense coding a sender, by acting on one of two entangled qubits, can convey two bits of accessible information about their joint state to a receiver.
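A small numerical sketch of superdense coding (using NumPy; the basis ordering, gate assignment, and variable names are my own illustrative choices) shows how two classical bits are recovered from a single transmitted qubit plus a pre-shared Bell pair:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)         # shared pair (|00> + |11>)/sqrt(2)
encode = {"00": I, "01": X, "10": Z, "11": Z @ X}  # Alice acts only on her qubit

for bits, gate in encode.items():
    state = np.kron(gate, I) @ bell            # Alice encodes two classical bits
    state = np.kron(H, I) @ (CNOT @ state)     # Bob's Bell-basis measurement circuit
    outcome = np.argmax(np.abs(state) ** 2)    # measurement is deterministic here
    print(bits, "->", format(outcome, "02b"))  # Bob reads back exactly Alice's bits
```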
Quantum information can be moved about, in a quantum channel, analogous to the concept of a classical communications channel. Quantum messages have a finite size, measured in qubits; quantum channels have a finite channel capacity, measured in qubits per second.
Quantum information, and changes in quantum information, can be quantitatively measured by using an analogue of Shannon entropy, called the von Neumann entropy.
In some cases quantum algorithms can be used to perform computations faster than in any known classical algorithm. The most famous example of this is Shor's algorithm that can factor numbers in polynomial time, compared to the best classical algorithms that take sub-exponential time. As factorization is an important part of the safety of RSA encryption, Shor's algorithm sparked the new field of post-quantum cryptography that tries to find encryption schemes that remain safe even when quantum computers are in play. Other examples of algorithms that demonstrate quantum supremacy include Grover's search algorithm, where the quantum algorithm gives a quadratic speed-up over the best possible classical algorithm. The complexity class of problems efficiently solvable by a quantum computer is known as BQP.
Quantum key distribution (QKD) allows unconditionally secure transmission of classical information, unlike classical encryption, which can always be broken in principle, if not in practice. Do note that certain subtle points regarding the safety of QKD are still hotly debated.
The study of all of the above topics and differences comprises quantum information theory.
Relation to quantum mechanics
Quantum mechanics is the study of how microscopic physical systems change dynamically in nature. In the field of quantum information theory, the quantum systems studied are abstracted away from any real world counterpart. A qubit might for instance physically be a photon in a linear optical quantum computer, an ion in a trapped ion quantum computer, or it might be a large collection of atoms as in a superconducting quantum computer. Regardless of the physical implementation, the limits and features of qubits implied by quantum information theory hold as all these systems are mathematically described by the same apparatus of density matrices over the complex numbers. Another important difference with quantum mechanics is that, while quantum mechanics often studies infinite-dimensional systems such as a harmonic oscillator, quantum information theory concerns both with continuous-variable systems and finite-dimensional systems.
Entropy and information
Entropy measures the uncertainty in the state of a physical system. Entropy can be studied from the point of view of both the classical and quantum information theories.
Classical information theory
Classical information is based on the concepts of information laid out by Claude Shannon. Classical information, in principle, can be stored as bits in binary strings. Any system having two distinguishable states can serve as a bit.
Shannon entropy
Shannon entropy is the quantification of the information gained by measuring the value of a random variable. Another way of thinking about it is by looking at the uncertainty of a system prior to measurement. As a result, entropy, as pictured by Shannon, can be seen either as a measure of the uncertainty prior to making a measurement or as a measure of information gained after making said measurement.
Shannon entropy, written as a functional of a discrete probability distribution P associated with events x1, ..., xn, can be seen as the average information associated with this set of events, in units of bits: H(P) = −Σi P(xi) log2 P(xi).
This definition of entropy can be used to quantify the physical resources required to store the output of an information source. The ways of interpreting Shannon entropy discussed above are usually only meaningful when the number of samples of an experiment is large.
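A minimal Python sketch of this definition (the function name is my own; the input is assumed to be a valid probability distribution):

```python
import math

def shannon_entropy(probs):
    """H(P) = -sum_i p_i * log2(p_i), in bits; terms with p_i = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin flip
print(shannon_entropy([0.25] * 4))   # 2.0 bits: four equally likely outcomes
print(shannon_entropy([0.9, 0.1]))   # about 0.47 bits: a biased coin
```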
Rényi entropy
The Rényi entropy is a generalization of the Shannon entropy defined above. The Rényi entropy of order r, written as a function of a discrete probability distribution P associated with events x1, ..., xn, is defined as:
Hr(P) = (1 / (1 − r)) log2 Σi P(xi)^r,
for 0 < r < ∞ and r ≠ 1.
We arrive at the definition of Shannon entropy from Rényi when r → 1, of Hartley entropy (or max-entropy) when r → 0, and of min-entropy when r → ∞.
Quantum information theory
Quantum information theory is largely an extension of classical information theory to quantum systems. Classical information is produced when measurements of quantum systems are made.
Von Neumann entropy
One interpretation of Shannon entropy was the uncertainty associated with a probability distribution. When we want to describe the information or the uncertainty of a quantum state, the probability distributions are simply swapped out by density operators ρ:
S(ρ) = −Tr(ρ log2 ρ) = −Σi λi log2 λi,
where the λi are the eigenvalues of ρ.
Von Neumann entropy plays a similar role in quantum information that Shannon entropy does in classical information.
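The eigenvalue form can be computed numerically; the following sketch uses NumPy, and the density matrices chosen here are purely illustrative:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho) = -sum_i lambda_i log2 lambda_i."""
    eigenvalues = np.linalg.eigvalsh(rho)           # rho is Hermitian
    eigenvalues = eigenvalues[eigenvalues > 1e-12]  # convention: 0 * log 0 = 0
    return float(-np.sum(eigenvalues * np.log2(eigenvalues))) + 0.0  # +0.0 avoids -0.0

pure = np.array([[1.0, 0.0], [0.0, 0.0]])    # a pure state: zero entropy
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])   # maximally mixed qubit: one bit
print(von_neumann_entropy(pure), von_neumann_entropy(mixed))  # 0.0 1.0
```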
Applications
Quantum communication
Quantum communication is one of the applications of quantum physics and quantum information. There are some famous theorems such as the no-cloning theorem that illustrate some important properties in quantum communication. Dense coding and quantum teleportation are also applications of quantum communication. They are two opposite ways to communicate using qubits. While teleportation transfers one qubit from Alice to Bob by communicating two classical bits under the assumption that Alice and Bob have a pre-shared Bell state, dense coding transfers two classical bits from Alice to Bob by using one qubit, again under the same assumption, that Alice and Bob have a pre-shared Bell state.
Quantum key distribution
One of the best known applications of quantum cryptography is quantum key distribution, which provides a theoretical solution to the security issue of a classical key. The advantage of quantum key distribution is that it is impossible to copy a quantum key because of the no-cloning theorem. If someone tries to read encoded data, the quantum state being transmitted will change. This could be used to detect eavesdropping.
BB84
The first quantum key distribution scheme, BB84, was developed by Charles Bennett and Gilles Brassard in 1984. It is usually explained as a method of securely communicating a private key from a third party to another for use in one-time pad encryption.
E91
E91 was made by Artur Ekert in 1991. His scheme uses entangled pairs of photons. These two photons can be created by Alice, Bob, or by a third party including eavesdropper Eve. One of the photons is distributed to Alice and the other to Bob so that each one ends up with one photon from the pair.
This scheme relies on two properties of quantum entanglement:
The entangled states are perfectly correlated which means that if Alice and Bob both measure their particles having either a vertical or horizontal polarization, they always get the same answer with 100% probability. The same is true if they both measure any other pair of complementary (orthogonal) polarizations. This necessitates that the two distant parties have exact directionality synchronization. However, from quantum mechanics theory the quantum state is completely random so that it is impossible for Alice to predict if she will get vertical polarization or horizontal polarization results.
Any attempt at eavesdropping by Eve destroys this quantum entanglement in a way that Alice and Bob can detect.
B92
B92 is a simpler version of BB84.
The main difference between B92 and BB84:
B92 only needs two states
BB84 needs 4 polarization states
As in BB84, Alice transmits to Bob a string of photons encoded with randomly chosen bits, but this time the bits themselves determine which bases Alice must use. Bob still randomly chooses a basis by which to measure, but if he chooses the wrong basis, he will not measure anything, which is guaranteed by quantum mechanics. Bob can simply tell Alice after each bit she sends whether or not he measured it correctly.
Quantum computation
The most widely used model in quantum computation is the quantum circuit, which is based on the quantum bit, or "qubit". A qubit is somewhat analogous to the bit in classical computation. Qubits can be in a 1 or 0 quantum state, or they can be in a superposition of the 1 and 0 states. However, when qubits are measured the result of the measurement is always either a 0 or a 1; the probabilities of these two outcomes depend on the quantum state that the qubits were in immediately prior to the measurement.
Any quantum computation algorithm can be represented as a network of quantum logic gates.
Quantum decoherence
If a quantum system were perfectly isolated, it would maintain coherence perfectly, but it would be impossible to test the entire system. If it is not perfectly isolated, for example during a measurement, coherence is shared with the environment and appears to be lost with time; this process is called quantum decoherence. As a result of this process, quantum behavior is apparently lost, just as energy appears to be lost by friction in classical mechanics.
Quantum error correction
QEC is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is essential if one is to achieve fault-tolerant quantum computation that can deal not only with noise on stored quantum information, but also with faulty quantum gates, faulty quantum preparation, and faulty measurements.
Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of ancilla qubits. A quantum error correcting code protects quantum information against errors.
Journals
Many journals publish research in quantum information science, although only a few are dedicated to this area. Among these are:
International Journal of Quantum Information
Quantum Information & Computation
Quantum Information Processing
npj Quantum Information
Quantum
Quantum Science and Technology
See also
Categorical quantum mechanics
Einstein's thought experiments
Interpretations of quantum mechanics
POVM (positive operator valued measure)
Quantum clock
Quantum cognition
Quantum foundations
Quantum information science
Quantum statistical mechanics
Qutrit
Typical subspace
Notes
References
Gregg Jaeger's book on Quantum Information, Springer, New York, 2007,
Lectures at the Institut Henri Poincaré (slides and videos)
International Journal of Quantum Information World Scientific
Quantum Information Processing Springer
Michael A. Nielsen, Isaac L. Chuang, "Quantum Computation and Quantum Information"
J. Watrous, The Theory of Quantum Information (Cambridge Univ. Press, 2018). Freely available at
John Preskill, Course Information for Physics 219/Computer Science 219 Quantum Computation, Caltech
Masahito Hayashi, "Quantum Information: An Introduction"
Masahito Hayashi, "Quantum Information Theory: Mathematical Foundation"
Vlatko Vedral, "Introduction to Quantum Information Science"
Quantum information theory
25385 | https://en.wikipedia.org/wiki/RSA%20%28cryptosystem%29 | RSA (cryptosystem) | RSA (Rivest–Shamir–Adleman) is a public-key cryptosystem that is widely used for secure data transmission. It is also one of the oldest. The acronym "RSA" comes from the surnames of Ron Rivest, Adi Shamir and Leonard Adleman, who publicly described the algorithm in 1977. An equivalent system was developed secretly in 1973 at GCHQ (the British signals intelligence agency) by the English mathematician Clifford Cocks. That system was declassified in 1997.
In a public-key cryptosystem, the encryption key is public and distinct from the decryption key, which is kept secret (private).
An RSA user creates and publishes a public key based on two large prime numbers, along with an auxiliary value. The prime numbers are kept secret. Messages can be encrypted by anyone, via the public key, but can only be decoded by someone who knows the prime numbers.
The security of RSA relies on the practical difficulty of factoring the product of two large prime numbers, the "factoring problem". Breaking RSA encryption is known as the RSA problem. Whether it is as difficult as the factoring problem is an open question. There are no published methods to defeat the system if a large enough key is used.
RSA is a relatively slow algorithm. Because of this, it is not commonly used to directly encrypt user data. More often, RSA is used to transmit shared keys for symmetric-key cryptography, which are then used for bulk encryption–decryption.
History
The idea of an asymmetric public-private key cryptosystem is attributed to Whitfield Diffie and Martin Hellman, who published this concept in 1976. They also introduced digital signatures and attempted to apply number theory. Their formulation used a shared-secret-key created from exponentiation of some number, modulo a prime number. However, they left open the problem of realizing a one-way function, possibly because the difficulty of factoring was not well-studied at the time.
Ron Rivest, Adi Shamir, and Leonard Adleman at the Massachusetts Institute of Technology made several attempts over the course of a year to create a one-way function that was hard to invert. Rivest and Shamir, as computer scientists, proposed many potential functions, while Adleman, as a mathematician, was responsible for finding their weaknesses. They tried many approaches, including "knapsack-based" and "permutation polynomials". For a time, they thought what they wanted to achieve was impossible due to contradictory requirements. In April 1977, they spent Passover at the house of a student and drank a good deal of Manischewitz wine before returning to their homes at around midnight. Rivest, unable to sleep, lay on the couch with a math textbook and started thinking about their one-way function. He spent the rest of the night formalizing his idea, and he had much of the paper ready by daybreak. The algorithm is now known as RSA, the initials of their surnames in the same order as on their paper.
Clifford Cocks, an English mathematician working for the British intelligence agency Government Communications Headquarters (GCHQ), described an equivalent system in an internal document in 1973. However, given the relatively expensive computers needed to implement it at the time, it was considered to be mostly a curiosity and, as far as is publicly known, was never deployed. His discovery, however, was not revealed until 1997 due to its top-secret classification.
Kid-RSA (KRSA) is a simplified public-key cipher published in 1997, designed for educational purposes. Some people feel that learning Kid-RSA gives insight into RSA and other public-key ciphers, analogous to simplified DES.
Patent
A patent describing the RSA algorithm was granted to MIT on 20 September 1983: "Cryptographic communications system and method". From DWPI's abstract of the patent:
A detailed description of the algorithm was published in August 1977, in Scientific American's Mathematical Games column. This preceded the patent's filing date of December 1977. Consequently, the patent had no legal standing outside the United States. Had Cocks's work been publicly known, a patent in the United States would not have been legal either.
When the patent was issued, the term of a patent was 17 years. The patent was about to expire on 21 September 2000, but RSA Security released the algorithm to the public domain on 6 September 2000.
Operation
The RSA algorithm involves four steps: key generation, key distribution, encryption, and decryption.
A basic principle behind RSA is the observation that it is practical to find three very large positive integers e, d, and n, such that with modular exponentiation for all integers m (with 0 ≤ m < n):
(m^e)^d ≡ m (mod n),
and that knowing e and n, or even m, it can be extremely difficult to find d. The triple bar (≡) here denotes modular congruence.
In addition, for some operations it is convenient that the order of the two exponentiations can be changed and that this relation also implies (m^d)^e ≡ m (mod n).
RSA involves a public key and a private key. The public key can be known by everyone and is used for encrypting messages. The intention is that messages encrypted with the public key can only be decrypted in a reasonable amount of time by using the private key. The public key is represented by the integers n and e, and the private key by the integer d (although n is also used during the decryption process, so it might be considered to be a part of the private key too). m represents the message (previously prepared with a certain technique explained below).
Key generation
The keys for the RSA algorithm are generated in the following way:
Choose two distinct prime numbers p and q.
For security purposes, the integers p and q should be chosen at random and should be similar in magnitude but differ in length by a few digits to make factoring harder. Prime integers can be efficiently found using a primality test.
p and q are kept secret.
Compute n = pq.
n is used as the modulus for both the public and private keys. Its length, usually expressed in bits, is the key length.
n is released as part of the public key.
Compute λ(n), where λ is Carmichael's totient function. Since n = pq, λ(n) = lcm(λ(p), λ(q)), and since p and q are prime, λ(p) = φ(p) = p − 1, and likewise λ(q) = q − 1. Hence λ(n) = lcm(p − 1, q − 1).
λ(n) is kept secret.
The lcm may be calculated through the Euclidean algorithm, since lcm(a, b) = |ab|/gcd(a, b).
Choose an integer e such that 1 < e < λ(n) and gcd(e, λ(n)) = 1; that is, e and λ(n) are coprime.
e having a short bit-length and small Hamming weight results in more efficient encryption; the most commonly chosen value for e is 2^16 + 1 = 65,537. The smallest (and fastest) possible value for e is 3, but such a small value for e has been shown to be less secure in some settings.
e is released as part of the public key.
Determine d as d ≡ e^−1 (mod λ(n)); that is, d is the modular multiplicative inverse of e modulo λ(n).
This means: solve for d the equation d⋅e ≡ 1 (mod λ(n)); d can be computed efficiently by using the extended Euclidean algorithm, since, thanks to e and λ(n) being coprime, said equation is a form of Bézout's identity, where d is one of the coefficients.
d is kept secret as the private key exponent.
The public key consists of the modulus n and the public (or encryption) exponent e. The private key consists of the private (or decryption) exponent d, which must be kept secret. p, q, and λ(n) must also be kept secret because they can be used to calculate d. In fact, they can all be discarded after d has been computed.
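As an illustration of these steps, the following Python sketch performs textbook key generation with the same toy primes used in the worked example below; the function name is my own, and no prime generation or padding is included, so this is not a secure implementation:

```python
from math import gcd, lcm  # math.lcm requires Python 3.9+

def rsa_keygen(p, q, e=65537):
    """Textbook RSA key generation. Toy sizes only: real keys need large random
    primes and are always used together with a padding scheme."""
    n = p * q
    lam = lcm(p - 1, q - 1)      # Carmichael's lambda(n) for n = pq
    assert gcd(e, lam) == 1, "e must be coprime to lambda(n)"
    d = pow(e, -1, lam)          # modular inverse of e (Python 3.8+)
    return (n, e), (n, d)        # (public key, private key)

public, private = rsa_keygen(61, 53, e=17)
print(public, private)           # (3233, 17) (3233, 413)
```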
In the original RSA paper, the Euler totient function is used instead of λ(n) for calculating the private exponent d. Since φ(n) is always divisible by λ(n), the algorithm works as well. The possibility of using the Euler totient function results also from Lagrange's theorem applied to the multiplicative group of integers modulo pq. Thus any d satisfying d⋅e ≡ 1 (mod φ(n)) also satisfies d⋅e ≡ 1 (mod λ(n)). However, computing d modulo φ(n) will sometimes yield a result that is larger than necessary (i.e. d > λ(n)). Most of the implementations of RSA will accept exponents generated using either method (if they use the private exponent d at all, rather than using the optimized decryption method based on the Chinese remainder theorem described below), but some standards such as FIPS 186-4 may require that d < λ(n). Any "oversized" private exponents not meeting this criterion may always be reduced modulo λ(n) to obtain a smaller equivalent exponent.
Since any common factors of (p − 1) and (q − 1) are present in the factorisation of n − 1 = pq − 1 = (p − 1)(q − 1) + (p − 1) + (q − 1), it is recommended that (p − 1) and (q − 1) have only very small common factors, if any, besides the necessary 2.
Note: The authors of the original RSA paper carry out the key generation by choosing d and then computing e as the modular multiplicative inverse of d modulo φ(n), whereas most current implementations of RSA, such as those following PKCS#1, do the reverse (choose e and compute d). Since the chosen key can be small, whereas the computed key normally is not, the RSA paper's algorithm optimizes decryption compared to encryption, while the modern algorithm optimizes encryption instead.
Key distribution
Suppose that Bob wants to send information to Alice. If they decide to use RSA, Bob must know Alice's public key to encrypt the message, and Alice must use her private key to decrypt the message.
To enable Bob to send his encrypted messages, Alice transmits her public key to Bob via a reliable, but not necessarily secret, route. Alice's private key is never distributed.
Encryption
After Bob obtains Alice's public key, he can send a message to Alice.
To do it, he first turns M (strictly speaking, the un-padded plaintext) into an integer m (strictly speaking, the padded plaintext), such that 0 ≤ m < n, by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext c, using Alice's public key e, corresponding to c ≡ m^e (mod n).
This can be done reasonably quickly, even for very large numbers, using modular exponentiation. Bob then transmits c to Alice. Note that at least nine values of m will yield a ciphertext c equal to m, but this is very unlikely to occur in practice.
Decryption
Alice can recover m from c by using her private key exponent d by computing c^d ≡ m (mod n).
Given m, she can recover the original message M by reversing the padding scheme.
Example
Here is an example of RSA encryption and decryption. The parameters used here are artificially small, but one can also use OpenSSL to generate and examine a real keypair.
Choose two distinct prime numbers, such as
p = 61 and q = 53.
Compute n = pq, giving n = 61 × 53 = 3233.
Compute the Carmichael's totient function of the product as λ(n) = lcm(p − 1, q − 1), giving λ(3233) = lcm(60, 52) = 780.
Choose any number 1 < e < 780 that is coprime to 780. Choosing a prime number for e leaves us only to check that e is not a divisor of 780.
Let e = 17.
Compute d, the modular multiplicative inverse of e (mod λ(n)), yielding d = 413, as 1 = (17 × 413) mod 780.
The public key is (n = 3233, e = 17). For a padded plaintext message m, the encryption function is c(m) = m^17 mod 3233.
The private key is (n = 3233, d = 413). For an encrypted ciphertext c, the decryption function is m(c) = c^413 mod 3233.
For instance, in order to encrypt m = 65, we calculate c = 65^17 mod 3233 = 2790.
To decrypt c = 2790, we calculate m = 2790^413 mod 3233 = 65.
Both of these calculations can be computed efficiently using the square-and-multiply algorithm for modular exponentiation. In real-life situations the primes selected would be much larger; in our example it would be trivial to factor n = 3233 (obtained from the freely available public key) back to the primes p and q. e, also from the public key, is then inverted to get d, thus acquiring the private key.
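These calculations can be reproduced with Python's built-in three-argument pow, which performs modular exponentiation by squaring (an illustrative check, not a secure implementation):

```python
n, e, d = 3233, 17, 413
m = 65
c = pow(m, e, n)           # encryption: 65**17 mod 3233
assert c == 2790
assert pow(c, d, n) == m   # decryption: 2790**413 mod 3233 == 65
```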
Practical implementations use the Chinese remainder theorem to speed up the calculation using modulus of factors (mod pq using mod p and mod q).
The values dp, dq and qinv, which are part of the private key, are computed as follows:
dp = d mod (p − 1) = 413 mod 60 = 53,
dq = d mod (q − 1) = 413 mod 52 = 49,
qinv = q^−1 mod p = 53^−1 mod 61 = 38 (so that q × qinv ≡ 1 (mod p)).
Here is how dp, dq and qinv are used for efficient decryption (encryption is efficient by choice of a suitable d and e pair):
m1 = c^dp mod p = 2790^53 mod 61 = 4,
m2 = c^dq mod q = 2790^49 mod 53 = 12,
h = qinv (m1 − m2) mod p = 38 × (4 − 12) mod 61 = 1,
m = m2 + h × q = 12 + 1 × 53 = 65.
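A short sketch of the same Chinese-remainder computation in Python, continuing the toy example (illustrative only):

```python
p, q, d = 61, 53, 413
c = 2790

# Precomputed CRT parameters (stored as part of the private key in practice)
d_p = d % (p - 1)        # 53
d_q = d % (q - 1)        # 49
q_inv = pow(q, -1, p)    # 38, since 53 * 38 = 2014 = 33 * 61 + 1

# Decryption via the Chinese remainder theorem
m1 = pow(c, d_p, p)          # 4
m2 = pow(c, d_q, q)          # 12
h = (q_inv * (m1 - m2)) % p  # 1
print(m2 + h * q)            # 65, matching the direct pow(2790, 413, 3233)
```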
Signing messages
Suppose Alice uses Bob's public key to send him an encrypted message. In the message, she can claim to be Alice, but Bob has no way of verifying that the message was from Alice, since anyone can use Bob's public key to send him encrypted messages. In order to verify the origin of a message, RSA can also be used to sign a message.
Suppose Alice wishes to send a signed message to Bob. She can use her own private key to do so. She produces a hash value of the message, raises it to the power of d (modulo n) (as she does when decrypting a message), and attaches it as a "signature" to the message. When Bob receives the signed message, he uses the same hash algorithm in conjunction with Alice's public key. He raises the signature to the power of e (modulo n) (as he does when encrypting a message), and compares the resulting hash value with the message's hash value. If the two agree, he knows that the author of the message was in possession of Alice's private key and that the message has not been tampered with since being sent.
This works because of exponentiation rules: with h = hash(m), (h^d)^e = h^(de) = h^(ed) = (h^e)^d ≡ h (mod n).
Thus the keys may be swapped without loss of generality, that is, a private key of a key pair may be used either to:
Decrypt a message only intended for the recipient, which may be encrypted by anyone having the public key (asymmetric encrypted transport).
Encrypt a message which may be decrypted by anyone, but which can only be encrypted by one person; this provides a digital signature.
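The following sketch illustrates textbook hash-then-sign with the toy key from the example above; real signatures use a padding scheme such as RSA-PSS or PKCS#1 v1.5, and the function names here are my own:

```python
import hashlib

def sign(message: bytes, n: int, d: int) -> int:
    # Hash, interpret as an integer, reduce mod n (only because the toy n is tiny),
    # then raise to the private exponent d.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int, n: int, e: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

n, e, d = 3233, 17, 413                 # the toy key pair from the example above
sig = sign(b"hello", n, d)
print(verify(b"hello", sig, n, e))      # True
print(verify(b"tampered", sig, n, e))   # False (with overwhelming probability)
```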
Proofs of correctness
Proof using Fermat's little theorem
The proof of the correctness of RSA is based on Fermat's little theorem, stating that a^(p − 1) ≡ 1 (mod p) for any integer a and prime p not dividing a.
We want to show that
(m^e)^d ≡ m (mod pq)
for every integer m when p and q are distinct prime numbers and e and d are positive integers satisfying ed ≡ 1 (mod λ(pq)).
Since ed − 1 is, by construction, divisible by both p − 1 and q − 1, we can write
ed − 1 = h(p − 1) = k(q − 1)
for some nonnegative integers h and k.
To check whether two numbers, such as m^(ed) and m, are congruent mod pq, it suffices (and in fact is equivalent) to check that they are congruent mod p and mod q separately.
To show m^(ed) ≡ m (mod p), we consider two cases:
If m ≡ 0 (mod p), m is a multiple of p. Thus m^(ed) is a multiple of p. So m^(ed) ≡ 0 ≡ m (mod p).
If m ≢ 0 (mod p), then
m^(ed) = m^(ed − 1) m = m^(h(p − 1)) m = (m^(p − 1))^h m ≡ 1^h m ≡ m (mod p),
where we used Fermat's little theorem to replace m^(p − 1) mod p with 1.
The verification that m^(ed) ≡ m (mod q) proceeds in a completely analogous way:
If m ≡ 0 (mod q), m^(ed) is a multiple of q. So m^(ed) ≡ 0 ≡ m (mod q).
If m ≢ 0 (mod q), then
m^(ed) = m^(k(q − 1)) m = (m^(q − 1))^k m ≡ 1^k m ≡ m (mod q).
This completes the proof that, for any integer m, and integers e, d such that ed ≡ 1 (mod λ(pq)), (m^e)^d ≡ m (mod pq).
Notes:
Proof using Euler's theorem
Although the original paper of Rivest, Shamir, and Adleman used Fermat's little theorem to explain why RSA works, it is common to find proofs that rely instead on Euler's theorem.
We want to show that m^(ed) ≡ m (mod n), where n = pq is a product of two different prime numbers, and e and d are positive integers satisfying ed ≡ 1 (mod φ(n)). Since e and d are positive, we can write ed = 1 + hφ(n) for some non-negative integer h. Assuming that m is relatively prime to n, we have
m^(ed) = m^(1 + hφ(n)) = m (m^φ(n))^h ≡ m ⋅ 1^h ≡ m (mod n),
where the second-last congruence follows from Euler's theorem.
More generally, for any e and d satisfying ed ≡ 1 (mod λ(n)), the same conclusion follows from Carmichael's generalization of Euler's theorem, which states that m^λ(n) ≡ 1 (mod n) for all m relatively prime to n.
When m is not relatively prime to n, the argument just given is invalid. This is highly improbable (only a proportion of 1/p + 1/q − 1/(pq) numbers have this property), but even in this case, the desired congruence is still true. Either m ≡ 0 (mod p) or m ≡ 0 (mod q), and these cases can be treated using the previous proof.
Padding
Attacks against plain RSA
There are a number of attacks against plain RSA as described below.
When encrypting with low encryption exponents (e.g., e = 3) and small values of m (i.e., m < n^(1/e)), the result of m^e is strictly less than the modulus n. In this case, ciphertexts can be decrypted easily by taking the eth root of the ciphertext over the integers; a toy demonstration appears after this list of attacks.
If the same clear-text message is sent to e or more recipients in an encrypted way, and the receivers share the same exponent e, but different p, q, and therefore n, then it is easy to decrypt the original clear-text message via the Chinese remainder theorem. Johan Håstad noticed that this attack is possible even if the clear texts are not equal, but the attacker knows a linear relation between them. This attack was later improved by Don Coppersmith (see Coppersmith's attack).
Because RSA encryption is a deterministic encryption algorithm (i.e., has no random component) an attacker can successfully launch a chosen plaintext attack against the cryptosystem, by encrypting likely plaintexts under the public key and test whether they are equal to the ciphertext. A cryptosystem is called semantically secure if an attacker cannot distinguish two encryptions from each other, even if the attacker knows (or has chosen) the corresponding plaintexts. RSA without padding is not semantically secure.
RSA has the property that the product of two ciphertexts is equal to the encryption of the product of the respective plaintexts. That is, m1^e m2^e ≡ (m1 m2)^e (mod n). Because of this multiplicative property, a chosen-ciphertext attack is possible. E.g., an attacker who wants to know the decryption of a ciphertext c ≡ m^e (mod n) may ask the holder of the private key d to decrypt an unsuspicious-looking ciphertext c′ ≡ c r^e (mod n) for some value r chosen by the attacker. Because of the multiplicative property, c′ is the encryption of mr (mod n). Hence, if the attacker is successful with the attack, they will learn mr (mod n), from which they can derive the message m by multiplying mr with the modular inverse of r modulo n.
Given the private exponent d, one can efficiently factor the modulus n = pq. And given factorization of the modulus n = pq, one can obtain any private key (d′, n) generated against a public key (e′, n).
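The low-exponent weakness described above (small e and small m) can be demonstrated directly: when e = 3 and m^3 < n, no modular reduction occurs and an ordinary integer cube root recovers the plaintext. The sketch below is illustrative only; the stand-in modulus is not a real RSA modulus, since only its size matters for the demonstration.

```python
def integer_root(x, k):
    """Largest integer r with r**k <= x (binary search, no floating point)."""
    lo, hi = 0, 1 << (x.bit_length() // k + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** k <= x:
            lo = mid
        else:
            hi = mid - 1
    return lo

e = 3
n = 10**30 + 57            # stand-in for an RSA modulus; only its size matters here
m = 424242                 # small message, so m**3 < n
c = pow(m, e, n)
assert c == m ** e         # no modular reduction took place
print(integer_root(c, e))  # 424242: plaintext recovered without the private key
```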
Padding schemes
To avoid these problems, practical RSA implementations typically embed some form of structured, randomized padding into the value m before encrypting it. This padding ensures that m does not fall into the range of insecure plaintexts, and that a given message, once padded, will encrypt to one of a large number of different possible ciphertexts.
Standards such as PKCS#1 have been carefully designed to securely pad messages prior to RSA encryption. Because these schemes pad the plaintext m with some number of additional bits, the size of the un-padded message M must be somewhat smaller. RSA padding schemes must be carefully designed so as to prevent sophisticated attacks that may be facilitated by a predictable message structure. Early versions of the PKCS#1 standard (up to version 1.5) used a construction that appears to make RSA semantically secure. However, at Crypto 1998, Bleichenbacher showed that this version is vulnerable to a practical adaptive chosen-ciphertext attack. Furthermore, at Eurocrypt 2000, Coron et al. showed that for some types of messages, this padding does not provide a high enough level of security. Later versions of the standard include Optimal Asymmetric Encryption Padding (OAEP), which prevents these attacks. As such, OAEP should be used in any new application, and PKCS#1 v1.5 padding should be replaced wherever possible. The PKCS#1 standard also incorporates processing schemes designed to provide additional security for RSA signatures, e.g. the Probabilistic Signature Scheme for RSA (RSA-PSS).
Secure padding schemes such as RSA-PSS are as essential for the security of message signing as they are for message encryption. Two US patents on PSS were granted; however, these patents expired on 24 July 2009 and 25 April 2010 respectively. Use of PSS no longer seems to be encumbered by patents. Note that using different RSA key pairs for encryption and signing is potentially more secure.
Security and practical considerations
Using the Chinese remainder algorithm
For efficiency, many popular crypto libraries (such as OpenSSL, Java and .NET) use for decryption and signing the following optimization based on the Chinese remainder theorem. The following values are precomputed and stored as part of the private key:
p and q: the primes from the key generation,
dP = d (mod p − 1),
dQ = d (mod q − 1),
qinv = q^−1 (mod p).
These values allow the recipient to compute the exponentiation m = c^d (mod pq) more efficiently as follows:
m1 = c^dP (mod p),
m2 = c^dQ (mod q),
h = qinv (m1 − m2) (mod p),
m = m2 + hq.
This is more efficient than computing exponentiation by squaring, even though two modular exponentiations have to be computed. The reason is that these two modular exponentiations both use a smaller exponent and a smaller modulus.
Integer factorization and RSA problem
The security of the RSA cryptosystem is based on two mathematical problems: the problem of factoring large numbers and the RSA problem. Full decryption of an RSA ciphertext is thought to be infeasible on the assumption that both of these problems are hard, i.e., no efficient algorithm exists for solving them. Providing security against partial decryption may require the addition of a secure padding scheme.
The RSA problem is defined as the task of taking eth roots modulo a composite n: recovering a value m such that c ≡ m^e (mod n), where (n, e) is an RSA public key and c is an RSA ciphertext. Currently the most promising approach to solving the RSA problem is to factor the modulus n. With the ability to recover prime factors, an attacker can compute the secret exponent d from a public key (n, e), then decrypt c using the standard procedure. To accomplish this, an attacker factors n into p and q, and computes lcm(p − 1, q − 1), which allows the determination of d from e. No polynomial-time method for factoring large integers on a classical computer has yet been found, but it has not been proven that none exists; see integer factorization for a discussion of this problem.
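A short sketch of why recovering the factors breaks RSA: with p and q in hand, the private exponent is a single modular inverse away (same toy numbers as above):

from math import gcd

p, q, e = 61, 53, 17                           # recovered factors and public exponent
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # Carmichael's lambda(n) = lcm(p-1, q-1)
d = pow(e, -1, lam)                            # private exponent
n = p * q
assert pow(pow(42, e, n), d, n) == 42          # standard decryption now succeeds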
Multiple polynomial quadratic sieve (MPQS) can be used to factor the public modulus n.
The first RSA-512 factorization in 1999 used hundreds of computers and required the equivalent of 8,400 MIPS years, over an elapsed time of approximately seven months. By 2009, Benjamin Moody could factor a 512-bit RSA key in 73 days using only public software (GGNFS) and his desktop computer (a dual-core Athlon64 at 1,900 MHz). Just under 5 gigabytes of disk storage and about 2.5 gigabytes of RAM were required for the sieving process.
Rivest, Shamir, and Adleman noted that Miller has shown that – assuming the truth of the extended Riemann hypothesis – finding d from n and e is as hard as factoring n into p and q (up to a polynomial time difference). However, Rivest, Shamir, and Adleman noted, in section IX/D of their paper, that they had not found a proof that inverting RSA is as hard as factoring.
The largest publicly known factored RSA number, RSA-250, had 829 bits (250 decimal digits). Its factorization, announced in February 2020 and carried out with a state-of-the-art distributed implementation, took approximately 2700 CPU years. In practice, RSA keys are typically 1024 to 4096 bits long. In 2003, RSA Security estimated that 1024-bit keys were likely to become crackable by 2010. As of 2020, it is not known whether such keys can be cracked, but minimum recommendations have moved to at least 2048 bits. It is generally presumed that RSA is secure if n is sufficiently large, outside of quantum computing.
If n is 300 bits or shorter, it can be factored in a few hours on a personal computer, using software already freely available. Keys of 512 bits were shown to be practically breakable in 1999, when RSA-155 was factored using several hundred computers; such keys can now be factored in a few weeks using common hardware. Exploits using 512-bit code-signing certificates that may have been factored were reported in 2011. A theoretical hardware device named TWIRL, described by Shamir and Tromer in 2003, called into question the security of 1024-bit keys.
In 1994, Peter Shor showed that a quantum computer – if one could ever be practically created for the purpose – would be able to factor in polynomial time, breaking RSA; see Shor's algorithm.
Faulty key generation
Finding the large primes p and q is usually done by testing random numbers of the correct size with probabilistic primality tests that quickly eliminate virtually all of the nonprimes.
The numbers p and q should not be "too close", lest the Fermat factorization of n be successful. If p − q is less than 2n^(1/4) (n = p⋅q, which even for "small" 1024-bit values of n is on the order of 10^77), solving for p and q is trivial. Furthermore, if either p − 1 or q − 1 has only small prime factors, n can be factored quickly by Pollard's p − 1 algorithm, and hence such values of p or q should be discarded.
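A minimal sketch of Fermat factorization, which succeeds almost immediately when the two factors are close together; the primes below are small illustrative values, not realistic key sizes:

from math import isqrt

def fermat_factor(n):
    # Assumes n is an odd composite; searches for n = a^2 - b^2 = (a - b)(a + b).
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

# Two nearby primes are separated after a single iteration:
assert fermat_factor(10007 * 10009) == (10007, 10009)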
It is important that the private exponent d be large enough. Michael J. Wiener showed that if p is between q and 2q (which is quite typical) and d < n^(1/4)/3, then d can be computed efficiently from n and e.
There is no known attack against small public exponents such as e = 3, provided that the proper padding is used. Coppersmith's attack has many applications in attacking RSA specifically if the public exponent e is small and if the encrypted message is short and not padded. 65537 is a commonly used value for e; this value can be regarded as a compromise between avoiding potential small-exponent attacks and still allowing efficient encryptions (or signature verification). The NIST Special Publication on Computer Security (SP 800-78 Rev. 1 of August 2007) does not allow public exponents e smaller than 65537, but does not state a reason for this restriction.
In October 2017, a team of researchers from Masaryk University announced the ROCA vulnerability, which affects RSA keys generated by an algorithm embodied in a library from Infineon known as RSALib. A large number of smart cards and trusted platform modules (TPM) were shown to be affected. Vulnerable RSA keys are easily identified using a test program the team released.
Importance of strong random number generation
A cryptographically strong random number generator, which has been properly seeded with adequate entropy, must be used to generate the primes p and q. An analysis comparing millions of public keys gathered from the Internet was carried out in early 2012 by Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung and Christophe Wachter. They were able to factor 0.2% of the keys using only Euclid's algorithm.
They exploited a weakness unique to cryptosystems based on integer factorization. If n = pq is one public key and n′ = p′q′ is another, then if by chance p = p′ (but q is not equal to q′), then a simple computation of gcd(n, n′) = p factors both n and n′, totally compromising both keys. Lenstra et al. note that this problem can be minimized by using a strong random seed of bit length twice the intended security level, or by employing a deterministic function to choose q given p, instead of choosing p and q independently.
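A sketch of the shared-factor computation described above, with toy moduli built from small illustrative primes:

from math import gcd

p, q, q2 = 101, 103, 107           # p reused across two keys by a poorly seeded RNG
n1, n2 = p * q, p * q2

shared = gcd(n1, n2)               # a single gcd exposes the common prime
assert shared == p
assert (n1 // shared, n2 // shared) == (q, q2)   # both moduli are now fully factored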
Nadia Heninger was part of a group that did a similar experiment. They used an idea of Daniel J. Bernstein to compute the GCD of each RSA key n against the product of all the other keys n′ they had found (a 729-million-digit number), instead of computing each gcd(n, n′) separately, thereby achieving a very significant speedup, since after one large division, the GCD problem is of normal size.
Heninger says in her blog that the bad keys occurred almost entirely in embedded applications, including "firewalls, routers, VPN devices, remote server administration devices, printers, projectors, and VOIP phones" from more than 30 manufacturers. Heninger explains that the one-shared-prime problem uncovered by the two groups results from situations where the pseudorandom number generator is poorly seeded initially, and then is reseeded between the generation of the first and second primes. Using seeds of sufficiently high entropy obtained from key stroke timings or electronic diode noise or atmospheric noise from a radio receiver tuned between stations should solve the problem.
Strong random number generation is important throughout every phase of public-key cryptography. For instance, if a weak generator is used for the symmetric keys that are being distributed by RSA, then an eavesdropper could bypass RSA and guess the symmetric keys directly.
Timing attacks
Kocher described a new attack on RSA in 1995: if the attacker Eve knows Alice's hardware in sufficient detail and is able to measure the decryption times for several known ciphertexts, Eve can deduce the decryption key d quickly. This attack can also be applied against the RSA signature scheme. In 2003, Boneh and Brumley demonstrated a more practical attack capable of recovering RSA factorizations over a network connection (e.g., from a Secure Sockets Layer (SSL)-enabled webserver). This attack takes advantage of information leaked by the Chinese remainder theorem optimization used by many RSA implementations.
One way to thwart these attacks is to ensure that the decryption operation takes a constant amount of time for every ciphertext. However, this approach can significantly reduce performance. Instead, most RSA implementations use an alternate technique known as cryptographic blinding. RSA blinding makes use of the multiplicative property of RSA. Instead of computing c^d (mod n), Alice first chooses a secret random value r and computes (r^e c)^d (mod n). The result of this computation, after applying Euler's theorem, is r c^d (mod n), and so the effect of r can be removed by multiplying by its inverse. A new value of r is chosen for each ciphertext. With blinding applied, the decryption time is no longer correlated to the value of the input ciphertext, and so the timing attack fails.
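A sketch of blinded decryption with the toy parameters used earlier; the helper name is this example's own:

import secrets
from math import gcd

p, q = 61, 53                      # toy parameters, illustration only
n, e, d = p * q, 17, 2753

def decrypt_blinded(c):
    while True:
        r = secrets.randbelow(n - 2) + 2       # fresh blinding value per ciphertext
        if gcd(r, n) == 1:
            break
    blinded = (pow(r, e, n) * c) % n           # timing now depends on r, not on c
    return (pow(blinded, d, n) * pow(r, -1, n)) % n

assert decrypt_blinded(pow(42, e, n)) == 42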
Adaptive chosen-ciphertext attacks
In 1998, Daniel Bleichenbacher described the first practical adaptive chosen-ciphertext attack against RSA-encrypted messages using the PKCS #1 v1.5 padding scheme (a padding scheme randomizes and adds structure to an RSA-encrypted message, so it is possible to determine whether a decrypted message is valid). Due to flaws with the PKCS #1 scheme, Bleichenbacher was able to mount a practical attack against RSA implementations of the Secure Sockets Layer protocol and to recover session keys. As a result of this work, cryptographers now recommend the use of provably secure padding schemes such as Optimal Asymmetric Encryption Padding, and RSA Laboratories has released new versions of PKCS #1 that are not vulnerable to these attacks.
A variant of this attack, dubbed "BERserk", came back in 2014. It impacted the Mozilla NSS Crypto Library, which was used notably by Firefox and Chrome.
Side-channel analysis attacks
A side-channel attack using branch-prediction analysis (BPA) has been described. Many processors use a branch predictor to determine whether a conditional branch in the instruction flow of a program is likely to be taken or not. Often these processors also implement simultaneous multithreading (SMT). Branch-prediction analysis attacks use a spy process to discover (statistically) the private key when processed with these processors.
Simple Branch Prediction Analysis (SBPA) claims to improve BPA in a non-statistical way. In their paper, "On the Power of Simple Branch Prediction Analysis", the authors of SBPA (Onur Aciicmez and Cetin Kaya Koc) claim to have discovered 508 out of 512 bits of an RSA key in 10 iterations.
A power-fault attack on RSA implementations was described in 2010. The author recovered the key by varying the CPU power voltage outside limits; this caused multiple power faults on the server.
Tricky implementation
There are many details to keep in mind in order to implement RSA securely (strong PRNG, acceptable public exponent, and so on). This makes the implementation challenging, to the point that the book Practical Cryptography With Go suggests avoiding RSA if possible.
Implementations
Some cryptography libraries that provide support for RSA include:
Botan
Bouncy Castle
cryptlib
Crypto++
Libgcrypt
Nettle
OpenSSL
wolfCrypt
GnuTLS
mbed TLS
LibreSSL
See also
Acoustic cryptanalysis
Computational complexity theory
Cryptographic key length
Diffie–Hellman key exchange
Key exchange
Key management
Elliptic-curve cryptography
Public-key cryptography
Trapdoor function
References
Further reading
External links
The Original RSA Patent as filed with the U.S. Patent Office by Rivest; Ronald L. (Belmont, MA), Shamir; Adi (Cambridge, MA), Adleman; Leonard M. (Arlington, MA), December 14, 1977.
PKCS #1: RSA Cryptography Standard (RSA Laboratories website)
The PKCS #1 standard "provides recommendations for the implementation of public-key cryptography based on the RSA algorithm, covering the following aspects: cryptographic primitives; encryption schemes; signature schemes with appendix; ASN.1 syntax for representing keys and for identifying the schemes".
Thorough walk through of RSA
Prime Number Hide-And-Seek: How the RSA Cipher Works
Onur Aciicmez, Cetin Kaya Koc, Jean-Pierre Seifert: On the Power of Simple Branch Prediction Analysis
Example of an RSA implementation with PKCS#1 padding (GPL source code)
Kocher's article about timing attacks
An animated explanation of RSA with its mathematical background by CrypTool
How RSA Key used for Encryption in real world
Public-key encryption schemes
Digital signature schemes |
25748 | https://en.wikipedia.org/wiki/Router%20%28computing%29 | Router (computing) | A router is a networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. Data sent through the internet, such as a web page or email, is in the form of data packets. A packet is typically forwarded from one router to another router through the networks that constitute an internetwork (e.g. the Internet) until it reaches its destination node.
A router is connected to two or more data lines from different IP networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet header to determine the ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey.
The most familiar type of IP routers are home and small office routers that simply forward IP packets between the home computers and the Internet. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.
Operation
When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a routing protocol. Each router builds up a routing table, a list of routes between computer systems on the interconnected networks.
The software that runs the router is composed of two functional processing units that operate simultaneously, called planes:
Control plane: A router maintains a routing table that lists which route should be used to forward a data packet, and through which physical interface connection. It does this using internal pre-configured directives, called static routes, or by learning routes dynamically using a routing protocol. Static and dynamic routes are stored in the routing table. The control-plane logic then strips non-essential directives from the table and builds a forwarding information base (FIB) to be used by the forwarding plane.
Forwarding plane: This unit forwards the data packets between incoming and outgoing interface connections. It reads the header of each packet as it comes in, matches the destination to entries in the FIB supplied by the control plane, and directs the packet to the outgoing network specified in the FIB.
Applications
A router may have interfaces for different types of physical layer connections, such as copper cables, fiber optic, or wireless transmission. It can also support different network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix.
Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' (ISPs') networks. The largest routers (such as the Cisco CRS-1 or Juniper PTX) interconnect the various ISPs, or may be used in large enterprise networks. Smaller routers usually provide connectivity for typical home and office networks.
All sizes of routers may be found inside enterprises. The most powerful routers are usually found in ISPs, academic and research facilities. Large businesses may also need more powerful routers to cope with ever-increasing demands of intranet data traffic. A hierarchical internetworking model for interconnecting routers in large networks is in common use.
Access, core and distribution
Access routers, including small office/home office (SOHO) models, are located at home and customer sites such as branch offices that do not need hierarchical routing of their own. Typically, they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmware like Tomato, OpenWrt, or DD-WRT.
Distribution routers aggregate traffic from multiple access routers. Distribution routers are often responsible for enforcing quality of service across a wide area network (WAN), so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks.
In enterprises, a core router may provide a collapsed backbone interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations. They tend to be optimized for high bandwidth, but lack some of the features of edge routers.
Security
External networks must be carefully considered as part of the overall security strategy of the local network. A router may include a firewall, VPN handling, and other security functions, or these may be handled by separate devices. Routers also commonly perform network address translation, which restricts connections initiated from external hosts but is not recognized as a security feature by all experts. Some experts argue that open source routers are more secure and reliable than closed source routers because open-source routers allow mistakes to be quickly found and corrected.
Routing different networks
Routers are also often distinguished on the basis of the network in which they operate. A router in a local area network (LAN) of a single organisation is called an interior router. A router that is operated in the Internet backbone is described as an exterior router, while a router that connects a LAN with the Internet or a wide area network (WAN) is called a border router, or gateway router.
Internet connectivity and internal use
Routers intended for ISP and major enterprise connectivity usually exchange routing information using the Border Gateway Protocol (BGP). The types of BGP routers, classified according to their functions, include:
Edge router (also called a provider edge router): Placed at the edge of an ISP network. The router uses the Exterior Border Gateway Protocol (EBGP) to communicate with routers at other ISPs or in large enterprise autonomous systems.
Subscriber edge router (also called a customer edge router): Located at the edge of the subscriber's network, it also uses EBGP to communicate with its provider's autonomous system. It is typically used in an (enterprise) organization.
Inter-provider border router: A BGP router for interconnecting ISPs that maintains BGP sessions with other BGP routers in ISP Autonomous Systems.
Core router: Resides within an Autonomous System as a backbone to carry traffic between edge routers.
Within an ISP: In the ISP's autonomous system, a router uses internal BGP to communicate with other ISP edge routers, other intranet core routers, or the ISP's intranet provider border routers.
Internet backbone: The Internet no longer has a clearly identifiable backbone, unlike its predecessor networks. See default-free zone (DFZ). The major ISPs' system routers make up what could be considered to be the current Internet backbone core. ISPs operate all four types of the BGP routers described here. An ISP core router is used to interconnect its edge and border routers. Core routers may also have specialized functions in virtual private networks based on a combination of BGP and Multi-Protocol Label Switching protocols.
Port forwarding: Routers are also used for port forwarding between private Internet-connected servers.
Voice, data, fax, and video processing routers: Commonly referred to as access servers or gateways, these devices are used to route and process voice, data, video and fax traffic on the Internet. Since 2005, most long-distance phone calls have been processed as IP traffic (VoIP) through a voice gateway. Use of access server-type routers expanded with the advent of the Internet, first with dial-up access and later with a resurgence driven by voice phone service.
Larger networks commonly use multilayer switches, with layer-3 devices being used to simply interconnect multiple subnets within the same security zone, and higher-layer switches when filtering, translation, load balancing, or other higher-level functions are required, especially between zones.
History
The concept of an Interface computer was first proposed by Donald Davies for the NPL network in 1966. The same idea was conceived by Wesley Clark the following year for use in the ARPANET. Named Interface Message Processors (IMPs), these computers had fundamentally the same functionality as a router does today. The idea for a router (called gateway at the time) initially came about through an international group of computer networking researchers called the International Networking Working Group (INWG). Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, it became a subcommittee of the International Federation for Information Processing later that year. These gateway devices were different from most previous packet switching schemes in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that function entirely to the hosts. This particular idea, the end-to-end principle, had been previously pioneered in the CYCLADES network.
The idea was explored in more detail, with the intention to produce a prototype system as part of two contemporaneous programs. One was the initial DARPA-initiated program, which created the TCP/IP architecture in use today. The other was a program at Xerox PARC to explore new networking technologies, which produced the PARC Universal Packet system; due to corporate intellectual property concerns it received little attention outside Xerox for years. Some time after early 1974, the first Xerox routers became operational. The first true IP router was developed by Ginny Strazisar at BBN, as part of that DARPA-initiated effort, during 1975–1976. By the end of 1976, three PDP-11-based routers were in service in the experimental prototype Internet.
The first multiprotocol routers were independently created by staff researchers at MIT and Stanford in 1981 and both were also based on PDP-11s. Stanford's router program was by William Yeager and MIT's by Noel Chiappa. Virtually all networking now uses TCP/IP, but multiprotocol routers are still manufactured. They were important in the early stages of the growth of computer networking when protocols other than TCP/IP were in use. Modern routers that handle both IPv4 and IPv6 are multiprotocol but are simpler devices than ones processing AppleTalk, DECnet, IP, and Xerox protocols.
From the mid-1970s and in the 1980s, general-purpose minicomputers served as routers. Modern high-speed routers are network processors or highly specialized computers with extra hardware acceleration added to speed both common routing functions, such as packet forwarding, and specialized functions such as IPsec encryption. There is substantial use of Linux and Unix software-based machines, running open source routing code, for research and other applications. The Cisco IOS operating system was independently designed. Major router operating systems, such as Junos and NX-OS, are extensively modified versions of Unix software.
Forwarding
The main purpose of a router is to connect multiple networks and forward packets destined either for directly attached networks or more remote networks. A router is considered a layer-3 device because its primary forwarding decision is based on the information in the layer-3 IP packet, specifically the destination IP address. When a router receives a packet, it searches its routing table to find the best match between the destination IP address of the packet and one of the addresses in the routing table. Once a match is found, the packet is encapsulated in the layer-2 data link frame for the outgoing interface indicated in the table entry. A router typically does not look into the packet payload, but only at the layer-3 addresses to make a forwarding decision, plus optionally other information in the header for hints on, for example, quality of service (QoS). For pure IP forwarding, a router is designed to minimize the state information associated with individual packets. Once a packet is forwarded, the router does not retain any historical information about the packet.
The routing table itself can contain information derived from a variety of sources, such as a default or static routes that are configured manually, or dynamic entries from routing protocols where the router learns routes from other routers. A default route is one that is used to route all traffic whose destination does not otherwise appear in the routing table; it is common – even necessary – in small networks, such as a home or small business where the default route simply sends all non-local traffic to the Internet service provider. The default route can be manually configured (as a static route); learned by dynamic routing protocols; or be obtained by DHCP.
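A minimal sketch of longest-prefix-match lookup with a default route, using Python's standard ipaddress module; the table entries are invented for illustration:

import ipaddress

# (prefix, next hop) pairs; 0.0.0.0/0 is the default route.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"),  "192.0.2.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.0.2.2"),
    (ipaddress.ip_network("0.0.0.0/0"),   "203.0.113.1"),
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if dest in net]
    # The most specific (longest) prefix wins; the /0 default loses to any other match.
    return max(matches, key=lambda entry: entry[0].prefixlen)[1]

assert next_hop("10.1.2.3") == "192.0.2.2"    # most specific match
assert next_hop("8.8.8.8") == "203.0.113.1"   # falls through to the default route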
A router can run more than one routing protocol at a time, particularly if it serves as an autonomous system border router between parts of a network that run different routing protocols; if it does so, then redistribution may be used (usually selectively) to share information between the different protocols running on the same router.
Besides deciding to which interface a packet is forwarded, which is handled primarily via the routing table, a router also has to manage congestion when packets arrive at a rate higher than the router can process. Three policies commonly used are tail drop, random early detection (RED), and weighted random early detection (WRED). Tail drop is the simplest and most easily implemented: the router simply drops new incoming packets once buffer space in the router is exhausted. RED probabilistically drops datagrams early when the queue exceeds a pre-configured portion of the buffer, until reaching a pre-determined maximum, when it drops all incoming packets, thus reverting to tail drop. WRED can be configured to drop packets more readily dependent on the type of traffic.
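A sketch of the drop decision in plain RED as described above; the thresholds and maximum drop probability are illustrative tuning parameters, not standard values, and the averaged queue length is assumed to be computed elsewhere:

import random

def red_should_drop(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    # Below the minimum threshold: always accept.
    if avg_queue < min_th:
        return False
    # At or above the maximum threshold: drop everything (reverts to tail drop).
    if avg_queue >= max_th:
        return True
    # In between: drop with probability growing linearly toward max_p.
    return random.random() < max_p * (avg_queue - min_th) / (max_th - min_th)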
Another function a router performs is traffic classification and deciding which packet should be processed first. This is managed through QoS, which is critical when Voice over IP is deployed, so as not to introduce excessive latency.
Yet another function a router performs is called policy-based routing where special rules are constructed to override the rules derived from the routing table when a packet forwarding decision is made.
Some of the functions may be performed through an application-specific integrated circuit (ASIC) to avoid overhead of scheduling CPU time to process the packets. Others may have to be performed through the CPU as these packets need special attention that cannot be handled by an ASIC.
See also
Mobile broadband modem
Modem
Residential gateway
Switch virtual interface
Wireless router
Notes
References
External links
Internet architecture
Hardware routers
Networking hardware
Server appliance
Computer networking |
25831 | https://en.wikipedia.org/wiki/RC4 | RC4 | In cryptography, RC4 (Rivest Cipher 4 also known as ARC4 or ARCFOUR meaning Alleged RC4, see below) is a stream cipher. While it is remarkable for its simplicity and speed in software, multiple vulnerabilities have been discovered in RC4, rendering it insecure. It is especially vulnerable when the beginning of the output keystream is not discarded, or when nonrandom or related keys are used. Particularly problematic uses of RC4 have led to very insecure protocols such as WEP.
There is speculation that some state cryptologic agencies may possess the capability to break RC4 when used in the TLS protocol. The IETF has published RFC 7465 to prohibit the use of RC4 in TLS; Mozilla and Microsoft have issued similar recommendations.
A number of attempts have been made to strengthen RC4, notably Spritz, RC4A, VMPC, and RC4+.
History
RC4 was designed by Ron Rivest of RSA Security in 1987. While it is officially termed "Rivest Cipher 4", the RC acronym is alternatively understood to stand for "Ron's Code" (see also RC2, RC5 and RC6).
RC4 was initially a trade secret, but in September 1994 a description of it was anonymously posted to the Cypherpunks mailing list. It was soon posted on the sci.crypt newsgroup, where it was analyzed within days by Bob Jenkins. From there it spread to many sites on the Internet. The leaked code was confirmed to be genuine as its output was found to match that of proprietary software using licensed RC4. Because the algorithm is known, it is no longer a trade secret. The name RC4 is trademarked, so RC4 is often referred to as ARCFOUR or ARC4 (meaning alleged RC4) to avoid trademark problems. RSA Security has never officially released the algorithm; Rivest has, however, linked to the English Wikipedia article on RC4 in his own course notes in 2008 and confirmed the history of RC4 and its code in a 2014 paper by him.
RC4 became part of some commonly used encryption protocols and standards, such as WEP in 1997 and WPA in 2003/2004 for wireless cards; and SSL in 1995 and its successor TLS in 1999, until it was prohibited for all versions of TLS by RFC 7465 in 2015, due to the RC4 attacks weakening or breaking RC4 used in SSL/TLS. The main factors in RC4's success over such a wide range of applications have been its speed and simplicity: efficient implementations in both software and hardware were very easy to develop.
Description
RC4 generates a pseudorandom stream of bits (a keystream). As with any stream cipher, these can be used for encryption by combining it with the plaintext using bit-wise exclusive-or; decryption is performed the same way (since exclusive-or with given data is an involution). This is similar to the one-time pad except that generated pseudorandom bits, rather than a prepared stream, are used.
To generate the keystream, the cipher makes use of a secret internal state which consists of two parts:
A permutation of all 256 possible bytes (denoted "S" below).
Two 8-bit index-pointers (denoted "i" and "j").
The permutation is initialized with a variable length key, typically between 40 and 2048 bits, using the key-scheduling algorithm (KSA). Once this has been completed, the stream of bits is generated using the pseudo-random generation algorithm (PRGA).
Key-scheduling algorithm (KSA)
The key-scheduling algorithm is used to initialize the permutation in the array "S". "keylength" is defined as the number of bytes in the key and can be in the range 1 ≤ keylength ≤ 256, typically between 5 and 16, corresponding to a key length of 40 – 128 bits. First, the array "S" is initialized to the identity permutation. S is then processed for 256 iterations in a similar way to the main PRGA, but also mixes in bytes of the key at the same time.
for i from 0 to 255
S[i] := i
endfor
j := 0
for i from 0 to 255
j := (j + S[i] + key[i mod keylength]) mod 256
swap values of S[i] and S[j]
endfor
Pseudo-random generation algorithm (PRGA)
For as many iterations as are needed, the PRGA modifies the state and outputs a byte of the keystream. In each iteration, the PRGA:
increments i
looks up the ith element of S, S[i], and adds that to j
exchanges the values of S[i] and S[j], then uses the sum S[i] + S[j] (modulo 256) as an index to fetch a third element of S (the keystream value K below)
which is then bitwise exclusive ORed (XORed) with the next byte of the message to produce the next byte of either ciphertext or plaintext.
Each element of S is swapped with another element at least once every 256 iterations.
i := 0
j := 0
while GeneratingOutput:
i := (i + 1) mod 256
j := (j + S[i]) mod 256
swap values of S[i] and S[j]
K := S[(S[i] + S[j]) mod 256]
output K
endwhile
Thus, this produces a stream of keystream bytes K[0], K[1], ... which are XORed with the plaintext to obtain the ciphertext, so that ciphertext[l] = plaintext[l] ⊕ K[l].
RC4-based random number generators
Several operating systems include , an API originating in OpenBSD providing access to a random number generator originally based on RC4. In OpenBSD 5.5, released in May 2014, was modified to use ChaCha20. The implementations of arc4random in FreeBSD, NetBSD and Linux's libbsd also use ChaCha20. According to manual pages shipped with the operating system, in the 2017 release of its desktop and mobile operating systems, Apple replaced RC4 with AES in its implementation of arc4random. Man pages for the new arc4random include the backronym "A Replacement Call for Random" for ARC4 as a mnemonic, as it provides better random data than rand() does.
Proposed new random number generators are often compared to the RC4 random number generator.
Several attacks on RC4 are able to distinguish its output from a random sequence.
Implementation
Many stream ciphers are based on linear-feedback shift registers (LFSRs), which, while efficient in hardware, are less so in software. The design of RC4 avoids the use of LFSRs and is ideal for software implementation, as it requires only byte manipulations. It uses 256 bytes of memory for the state array, S[0] through S[255], k bytes of memory for the key, key[0] through key[k-1], and integer variables, i, j, and K. Performing a modular reduction of some value modulo 256 can be done with a bitwise AND with 255 (which is equivalent to taking the low-order byte of the value in question).
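For illustration only (RC4 is broken and should not be used to protect data), the KSA and PRGA above translate directly into Python; the helper names are this sketch's own:

def rc4_keystream(key):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        yield S[(S[i] + S[j]) % 256]

def rc4_crypt(key, data):
    return bytes(b ^ k for b, k in zip(data, rc4_keystream(key)))

# Encryption and decryption are the same XOR operation.
ciphertext = rc4_crypt(b"Key", b"Plaintext")
assert rc4_crypt(b"Key", ciphertext) == b"Plaintext"
# For this key/plaintext pair, the commonly quoted (unofficial) ciphertext is
# bbf316e8d940af0ad3.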
Test vectors
These test vectors are not official, but convenient for anyone testing their own RC4 program. The keys and plaintext are ASCII, the keystream and ciphertext are in hexadecimal.
Security
Unlike a modern stream cipher (such as those in eSTREAM), RC4 does not take a separate nonce alongside the key. This means that if a single long-term key is to be used to securely encrypt multiple streams, the protocol must specify how to combine the nonce and the long-term key to generate the stream key for RC4. One approach to addressing this is to generate a "fresh" RC4 key by hashing a long-term key with a nonce. However, many applications that use RC4 simply concatenate key and nonce; RC4's weak key schedule then gives rise to related key attacks, like the Fluhrer, Mantin and Shamir attack (which is famous for breaking the WEP standard).
Because RC4 is a stream cipher, it is more malleable than common block ciphers. If not used together with a strong message authentication code (MAC), then encryption is vulnerable to a bit-flipping attack. The cipher is also vulnerable to a stream cipher attack if not implemented correctly.
It is noteworthy, however, that RC4, being a stream cipher, was for a period of time the only common cipher that was immune to the 2011 BEAST attack on TLS 1.0. The attack exploits a known weakness in the way cipher block chaining mode is used with all of the other ciphers supported by TLS 1.0, which are all block ciphers.
In March 2013, there were new attack scenarios proposed by Isobe, Ohigashi, Watanabe and Morii, as well as AlFardan, Bernstein, Paterson, Poettering and Schuldt that use new statistical biases in RC4 key table to recover plaintext with large number of TLS encryptions.
The use of RC4 in TLS is prohibited by RFC 7465 published in February 2015.
Roos' biases and key reconstruction from permutation
In 1995, Andrew Roos experimentally observed that the first byte of the keystream is correlated to the first three bytes of the key and the first few bytes of the permutation after the KSA are correlated to some linear combination of the key bytes. These biases remained unexplained until 2007, when Goutam Paul, Siddheshwar Rathi and Subhamoy Maitra proved the keystream–key correlation and in another work Goutam Paul and Subhamoy Maitra proved the permutation–key correlations. The latter work also used the permutation–key correlations to design the first algorithm for complete key reconstruction from the final permutation after the KSA, without any assumption on the key or initialization vector. This algorithm has a constant probability of success in a time which is the square root of the exhaustive key search complexity. Subsequently, many other works have been performed on key reconstruction from RC4 internal states. Subhamoy Maitra and Goutam Paul also showed that the Roos-type biases still persist even when one considers nested permutation indices, like S[S[i]] or S[S[S[i]]]. These types of biases are used in some of the later key reconstruction methods for increasing the success probability.
Biased outputs of the RC4
The keystream generated by the RC4 is biased to varying degrees towards certain sequences making it vulnerable to distinguishing attacks. The best such attack is due to Itsik Mantin and Adi Shamir who showed that the second output byte of the cipher was biased toward zero with probability 1/128 (instead of 1/256). This is due to the fact that if the third byte of the original state is zero, and the second byte is not equal to 2, then the second output byte is always zero. Such bias can be detected by observing only 256 bytes.
Souradyuti Paul and Bart Preneel of COSIC showed that the first and the second output bytes of RC4 were also biased. The number of required samples to detect this bias is 2^25 bytes.
Scott Fluhrer and David McGrew also showed such attacks which distinguished the keystream of the RC4 from a random stream given a gigabyte of output.
The complete characterization of a single step of RC4 PRGA was performed by Riddhipratim Basu, Shirshendu Ganguly, Subhamoy Maitra, and Goutam Paul. Considering all the permutations, they prove that the distribution of the output is not uniform given i and j, and as a consequence, information about j is always leaked into the output.
Fluhrer, Mantin and Shamir attack
In 2001, a new and surprising discovery was made by Fluhrer, Mantin and Shamir: over all the possible RC4 keys, the statistics for the first few bytes of output keystream are strongly non-random, leaking information about the key. If the nonce and long-term key are simply concatenated to generate the RC4 key, this long-term key can be discovered by analysing a large number of messages encrypted with this key. This and related effects were then used to break the WEP ("wired equivalent privacy") encryption used with 802.11 wireless networks. This caused a scramble for a standards-based replacement for WEP in the 802.11 market, and led to the IEEE 802.11i effort and WPA.
Protocols can defend against this attack by discarding the initial portion of the keystream. Such a modified algorithm is traditionally called "RC4-drop[n]", where n is the number of initial keystream bytes that are dropped. The SCAN default is n = 768 bytes, but a conservative value would be n = 3072 bytes.
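In terms of the Python keystream generator sketched earlier, RC4-drop[n] simply discards the first n keystream bytes before use; n = 768 below matches the SCAN default mentioned above:

def rc4_drop_crypt(key, data, n=768):
    ks = rc4_keystream(key)            # generator from the earlier sketch
    for _ in range(n):
        next(ks)                       # discard the first n keystream bytes
    return bytes(b ^ k for b, k in zip(data, ks))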
The Fluhrer, Mantin and Shamir attack does not apply to RC4-based SSL, since SSL generates the encryption keys it uses for RC4 by hashing, meaning that different SSL sessions have unrelated keys.
Klein's attack
In 2005, Andreas Klein presented an analysis of the RC4 stream cipher showing more correlations between the RC4 keystream and the key. Erik Tews, Ralf-Philipp Weinmann, and Andrei Pychkine used this analysis to create aircrack-ptw, a tool which cracks 104-bit RC4 used in 128-bit WEP in under a minute. Whereas the Fluhrer, Mantin, and Shamir attack used around 10 million messages, aircrack-ptw can break 104-bit keys in 40,000 frames with 50% probability, or in 85,000 frames with 95% probability.
Combinatorial problem
A combinatorial problem related to the number of inputs and outputs of the RC4 cipher was first posed by Itsik Mantin and Adi Shamir in 2001, whereby, of the total 256 elements in the typical state of RC4, if x number of elements (x ≤ 256) are only known (all other elements can be assumed empty), then the maximum number of elements that can be produced deterministically is also x in the next 256 rounds. This conjecture was put to rest in 2004 with a formal proof given by Souradyuti Paul and Bart Preneel.
Royal Holloway attack
In 2013, a group of security researchers at the Information Security Group at Royal Holloway, University of London reported an attack that can become effective using only 2^34 encrypted messages. While not yet a practical attack for most purposes, this result is sufficiently close to one that it has led to speculation that some state cryptologic agencies may already have better attacks that render RC4 insecure. Given that a large amount of TLS traffic uses RC4 to avoid attacks on block ciphers that use cipher block chaining, if these hypothetical better attacks exist, then the TLS-with-RC4 combination would be insecure against such attackers in a large number of practical scenarios.
In March 2015, researchers at Royal Holloway announced improvements to their attack, providing a 2^26 attack against passwords encrypted with RC4, as used in TLS.
Bar-mitzvah attack
On the Black Hat Asia 2015, Itsik Mantin presented another attack against SSL using RC4 cipher.
NOMORE attack
In 2015, security researchers from KU Leuven presented new attacks against RC4 in both TLS and WPA-TKIP. Dubbed the Numerous Occurrence MOnitoring & Recovery Exploit (NOMORE) attack, it is the first attack of its kind that was demonstrated in practice. Their attack against TLS can decrypt a secure HTTP cookie within 75 hours. The attack against WPA-TKIP can be completed within an hour, and allows an attacker to decrypt and inject arbitrary packets.
RC4 variants
As mentioned above, the most important weakness of RC4 comes from the insufficient key schedule; the first bytes of output reveal information about the key. This can be corrected by simply discarding some initial portion of the output stream. This is known as RC4-dropN, where N is typically a multiple of 256, such as 768 or 1024.
A number of attempts have been made to strengthen RC4, notably Spritz, RC4A, VMPC, and RC4+.
RC4A
Souradyuti Paul and Bart Preneel have proposed an RC4 variant, which they call RC4A.
RC4A uses two state arrays S1 and S2, and two indexes j1 and j2. Each time i is incremented, two bytes are generated:
First, the basic RC4 algorithm is performed using S1 and j1, but in the last step, S1[i] + S1[j1] is looked up in S2.
Second, the operation is repeated (without incrementing i again) on S2 and j2, and S1[S2[i] + S2[j2]] is output.
Thus, the algorithm is:
All arithmetic is performed modulo 256
i := 0
j1 := 0
j2 := 0
while GeneratingOutput:
i := i + 1
j1 := j1 + S1[i]
swap values of S1[i] and S1[j1]
output S2[S1[i] + S1[j1]]
j2 := j2 + S2[i]
swap values of S2[i] and S2[j2]
output S1[S2[i] + S2[j2]]
endwhile
Although the algorithm requires the same number of operations per output byte, there is greater parallelism than in RC4, providing a possible speed improvement.
Although stronger than RC4, this algorithm has also been attacked, with Alexander Maximov and a team from NEC developing ways to distinguish its output from a truly random sequence.
VMPC
Variably Modified Permutation Composition (VMPC) is another RC4 variant. It uses a similar key schedule to RC4, with the key-scheduling loop iterating 3 × 256 = 768 times rather than 256, and with an optional additional 768 iterations to incorporate an initial vector. The output generation function operates as follows:
All arithmetic is performed modulo 256.
i := 0
while GeneratingOutput:
a := S[i]
j := S[j + a]
output S[S[S[j] + 1]]
Swap S[i] and S[j] (b := S[j]; S[i] := b; S[j] := a)
i := i + 1
endwhile
This was attacked in the same papers as RC4A, and can be distinguished within 2^38 output bytes.
RC4+
RC4+ is a modified version of RC4 with a more complex three-phase key schedule (taking about three times as long as RC4, or the same as RC4-drop512), and a more complex output function which performs four additional lookups in the S array for each byte output, taking approximately 1.7 times as long as basic RC4.
All arithmetic modulo 256. << and >> are left and right shift, ⊕ is exclusive OR
while GeneratingOutput:
i := i + 1
a := S[i]
j := j + a
Swap S[i] and S[j] (b := S[j]; S[j] := S[i]; S[i] := b;)
c := S[i<<5 ⊕ j>>3] + S[j<<5 ⊕ i>>3]
output (S[a+b] + S[c⊕0xAA]) ⊕ S[j+b]
endwhile
This algorithm has not been analyzed significantly.
Spritz
In 2014, Ronald Rivest gave a talk and co-wrote a paper on an updated redesign called Spritz. A hardware accelerator of Spritz was published in Secrypt, 2016 and shows that due to multiple nested calls required to produce output bytes, Spritz performs rather slowly compared to other hash functions such as SHA-3 and the best known hardware implementation of RC4.
The algorithm is:
All arithmetic is performed modulo 256
while GeneratingOutput:
i := i + w
j := k + S[j + S[i]]
k := k + i + S[j]
swap values of S[i] and S[j]
output z := S[j + S[i + S[z + k]]]
endwhile
The value of w is relatively prime to the size of the S array. So after 256 iterations of this inner loop, the value i (incremented by w every iteration) has taken on all possible values 0...255, and every byte in the S array has been swapped at least once.
Like other sponge functions, Spritz can be used to build a cryptographic hash function, a deterministic random bit generator (DRBG), an encryption algorithm that supports authenticated encryption with associated data (AEAD), etc.
In 2016, Banik and Isobe proposed an attack that can distinguish Spritz from random noise.
RC4-based protocols
WEP
WPA (default algorithm, but can be configured to use AES-CCMP instead of RC4)
BitTorrent protocol encryption
Microsoft Office XP (insecure implementation since nonce remains unchanged when documents get modified)
Microsoft Point-to-Point Encryption
Transport Layer Security / Secure Sockets Layer (was optional and then the use of RC4 was prohibited in RFC 7465)
Secure Shell (optionally)
Remote Desktop Protocol (optionally)
Kerberos (optionally)
SASL Mechanism Digest-MD5 (optionally, historic, obsoleted in RFC 6331)
Gpcode.AK, an early June 2008 computer virus for Microsoft Windows, which takes documents hostage for ransom by obscuring them with RC4 and RSA-1024 encryption
PDF
Skype (in modified form)
Where a protocol is marked with "(optionally)", RC4 is one of multiple ciphers the system can be configured to use.
See also
TEA, Block TEA also known as eXtended TEA and Corrected Block TEA – A family of block ciphers that, like RC4, are designed to be very simple to implement.
Advanced Encryption Standard
CipherSaber
References
Further reading
External links
Original posting of RC4 algorithm to Cypherpunks mailing list, Archived version
Improved Arcfour Modes for the Secure Shell (SSH) Transport Layer Protocol
Test Vectors for the Stream Cipher RC4
Prohibiting RC4 Cipher Suites (RFC 7465)
SCAN's entry for RC4
RSA Security Response to Weaknesses in Key Scheduling Algorithm of RC4
RC4 in WEP
(in)Security of the WEP algorithm
Stream ciphers
Broken stream ciphers
Pseudorandom number generators
Free ciphers
Articles with example C code |
26163 | https://en.wikipedia.org/wiki/Real-time%20Transport%20Protocol | Real-time Transport Protocol | The Real-time Transport Protocol (RTP) is a network protocol for delivering audio and video over IP networks. RTP is used in communication and entertainment systems that involve streaming media, such as telephony, video teleconference applications including WebRTC, television services and web-based push-to-talk features.
RTP typically runs over User Datagram Protocol (UDP). RTP is used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the media streams (e.g., audio and video), RTCP is used to monitor transmission statistics and quality of service (QoS) and aids synchronization of multiple streams. RTP is one of the technical foundations of Voice over IP and in this context is often used in conjunction with a signaling protocol such as the Session Initiation Protocol (SIP) which establishes connections across the network.
RTP was developed by the Audio-Video Transport Working Group of the Internet Engineering Task Force (IETF) and first published in 1996 as RFC 1889, which was superseded by RFC 3550 in 2003.
Overview
RTP is designed for end-to-end, real-time transfer of streaming media. The protocol provides facilities for jitter compensation and detection of packet loss and out-of-order delivery, which are common especially during UDP transmissions on an IP network. RTP allows data transfer to multiple destinations through IP multicast. RTP is regarded as the primary standard for audio/video transport in IP networks and is used with an associated profile and payload format. The design of RTP is based on the architectural principle known as application-layer framing where protocol functions are implemented in the application as opposed to the operating system's protocol stack.
Real-time multimedia streaming applications require timely delivery of information and often can tolerate some packet loss to achieve this goal. For example, loss of a packet in an audio application may result in loss of a fraction of a second of audio data, which can be made unnoticeable with suitable error concealment algorithms. The Transmission Control Protocol (TCP), although standardized for RTP use, is not normally used in RTP applications because TCP favors reliability over timeliness. Instead the majority of RTP implementations are built on the User Datagram Protocol (UDP). Other transport protocols specifically designed for multimedia sessions are SCTP and DCCP, although they are not in widespread use.
RTP was developed by the Audio/Video Transport working group of the IETF standards organization. RTP is used in conjunction with other protocols such as H.323 and RTSP. The RTP specification describes two protocols: RTP and RTCP. RTP is used for the transfer of multimedia data, and the RTCP is used to periodically send control information and QoS parameters.
The data transfer protocol, RTP, carries real-time data. Information provided by this protocol includes timestamps (for synchronization), sequence numbers (for packet loss and reordering detection) and the payload format which indicates the encoded format of the data. The control protocol, RTCP, is used for quality of service (QoS) feedback and synchronization between the media streams. The bandwidth of RTCP traffic compared to RTP is small, typically around 5%.
RTP sessions are typically initiated between communicating peers using a signaling protocol, such as H.323, the Session Initiation Protocol (SIP), RTSP, or Jingle (XMPP). These protocols may use the Session Description Protocol to specify the parameters for the sessions.
An RTP session is established for each multimedia stream. Audio and video streams may use separate RTP sessions, enabling a receiver to selectively receive components of a particular stream. The RTP and RTCP design is independent of the transport protocol. Applications most typically use UDP with port numbers in the unprivileged range (1024 to 65535). The Stream Control Transmission Protocol (SCTP) and the Datagram Congestion Control Protocol (DCCP) may be used when a reliable transport protocol is desired. The RTP specification recommends even port numbers for RTP, and the use of the next odd port number for the associated RTCP session. A single port can be used for RTP and RTCP in applications that multiplex the protocols.
RTP is used by real-time multimedia applications such as voice over IP, audio over IP, WebRTC and Internet Protocol television.
Profiles and payload formats
RTP is designed to carry a multitude of multimedia formats, which permits the development of new formats without revising the RTP standard. To this end, the information required by a specific application of the protocol is not included in the generic RTP header. For each class of application (e.g., audio, video), RTP defines a profile and associated payload formats. Every instantiation of RTP in a particular application requires a profile and payload format specifications.
The profile defines the codecs used to encode the payload data and their mapping to payload format codes in the protocol field Payload Type (PT) of the RTP header. Each profile is accompanied by several payload format specifications, each of which describes the transport of particular encoded data. Examples of audio payload formats are G.711, G.723, G.726, G.729, GSM, QCELP, MP3, and DTMF, and examples of video payloads are H.261, H.263, H.264, H.265 and MPEG-1/MPEG-2. The mappings of MPEG-4 audio/video streams and of H.263 video payloads to RTP packets are specified in their own payload format RFCs.
Examples of RTP profiles include:
The RTP profile for audio and video conferences with minimal control defines a set of static payload type assignments, and a dynamic mechanism for mapping between a payload format and a PT value using the Session Description Protocol (SDP).
The Secure Real-time Transport Protocol (SRTP) defines an RTP profile that provides cryptographic services for the transfer of payload data.
The experimental Control Data Profile for RTP (RTP/CDP) for machine-to-machine communications.
Packet header
RTP packets are created at the application layer and handed to the transport layer for delivery. Each unit of RTP media data created by an application begins with the RTP packet header.
The RTP header has a minimum size of 12 bytes. After the header, optional header extensions may be present. This is followed by the RTP payload, the format of which is determined by the particular class of application. The fields in the header are as follows (a minimal parsing sketch appears after the list):
Version: (2 bits) Indicates the version of the protocol. Current version is 2.
P (Padding): (1 bit) Used to indicate if there are extra padding bytes at the end of the RTP packet. Padding may be used to fill up a block of certain size, for example as required by an encryption algorithm. The last byte of the padding contains the number of padding bytes that were added (including itself).
X (Extension): (1 bit) Indicates presence of an extension header between the header and payload data. The extension header is application or profile specific.
CC (CSRC count): (4 bits) Contains the number of CSRC identifiers (defined below) that follow the SSRC (also defined below).
M (Marker): (1 bit) Signaling used at the application level in a profile-specific manner. If it is set, it means that the current data has some special relevance for the application.
PT (Payload type): (7 bits) Indicates the format of the payload and thus determines its interpretation by the application. Values are profile specific and may be dynamically assigned.
Sequence number: (16 bits) The sequence number is incremented for each RTP data packet sent and is to be used by the receiver to detect packet loss and to accommodate out-of-order delivery. The initial value of the sequence number should be randomized to make known-plaintext attacks on Secure Real-time Transport Protocol more difficult.
Timestamp: (32 bits) Used by the receiver to play back the received samples at appropriate time and interval. When several media streams are present, the timestamps may be independent in each stream. The granularity of the timing is application specific. For example, an audio application that samples data once every 125 μs (8 kHz, a common sample rate in digital telephony) would use that value as its clock resolution. Video streams typically use a 90 kHz clock. The clock granularity is one of the details that is specified in the RTP profile for an application.
SSRC: (32 bits) Synchronization source identifier uniquely identifies the source of a stream. The synchronization sources within the same RTP session will be unique.
CSRC: (32 bits each, the number of entries is indicated by the CSRC count field) Contributing source IDs enumerate contributing sources to a stream which has been generated from multiple sources.
Header extension: (optional, presence indicated by Extension field) The first 32-bit word contains a profile-specific identifier (16 bits) and a length specifier (16 bits) that indicates the length of the extension in 32-bit units, excluding the 32 bits of the extension header. The extension header data follows.
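A minimal sketch that unpacks the fixed 12-byte header described in the list above, using Python's standard struct module; extension headers and CSRC entries are not handled, and the example packet values are invented:

import struct

def parse_rtp_header(packet):
    if len(packet) < 12:
        raise ValueError("an RTP header is at least 12 bytes")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,
        "padding": (b0 >> 5) & 0x1,
        "extension": (b0 >> 4) & 0x1,
        "csrc_count": b0 & 0x0F,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,
        "sequence_number": seq,
        "timestamp": timestamp,
        "ssrc": ssrc,
    }

# Version 2, no padding/extension/CSRC, marker clear, payload type 0.
packet = bytes([0x80, 0x00]) + struct.pack("!HII", 1234, 3524578989, 0x12345678)
header = parse_rtp_header(packet)
assert header["version"] == 2 and header["sequence_number"] == 1234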
Application design
A functional multimedia application requires other protocols and standards used in conjunction with RTP. Protocols such as SIP, Jingle, RTSP, H.225 and H.245 are used for session initiation, control and termination. Other standards, such as H.264, MPEG and H.263, are used for encoding the payload data as specified by the applicable RTP profile.
An RTP sender captures the multimedia data, then encodes, frames and transmits it as RTP packets with appropriate timestamps and increasing sequence numbers. The sender sets the payload type field in accordance with connection negotiation and the RTP profile in use. The RTP receiver detects missing packets and may reorder packets. It decodes the media data in the packets according to the payload type and presents the stream to its user.
Standards documents
Standard 64, RTP: A Transport Protocol for Real-Time Applications
Standard 65, RTP Profile for Audio and Video Conferences with Minimal Control
Media Type Registration of RTP Payload Formats
Media Type Registration of Payload Formats in the RTP Profile for Audio and Video Conferences
A Taxonomy of Semantics and Mechanisms for Real-Time Transport Protocol (RTP) Sources
RTP Payload Format for 12-bit DAT Audio and 20- and 24-bit Linear Sampled Audio
RTP Payload Format for H.264 Video
RTP Payload Format for Transport of MPEG-4 Elementary Streams
RTP Payload Format for MPEG-4 Audio/Visual Streams
RTP Payload Format for MPEG1/MPEG2 Video
RTP Payload Format for Uncompressed Video
RTP Payload Format for MIDI
An Implementation Guide for RTP MIDI
RTP Payload Format for the Opus Speech and Audio Codec
RTP Payload Format for High Efficiency Video Coding (HEVC)
See also
Real Time Streaming Protocol
Real Data Transport
ZRTP
Notes
References
Further reading
External links
Henning Schulzrinne's RTP page (including FAQ)
GNU ccRTP
JRTPLIB, a C++ RTP library
Managed Media Aggregation: .NET C# RFC compliant implementation of RTP / RTCP written in completely managed code.
Streaming
Application layer protocols
VoIP protocols
Audio network protocols |
26341 | https://en.wikipedia.org/wiki/Radioteletype | Radioteletype | Radioteletype (RTTY) is a telecommunications system consisting originally of two or more electromechanical teleprinters in different locations connected by radio rather than a wired link. Radioteletype evolved from earlier landline teleprinter operations that began in the mid-1800s. The US Navy Department successfully tested printing telegraphy between an airplane and ground radio station in 1922. Later that year, the Radio Corporation of America successfully tested printing telegraphy via their Chatham, Massachusetts, radio station to the R.M.S. Majestic. Commercial RTTY systems were in active service between San Francisco and Honolulu as early as April 1932 and between San Francisco and New York City by 1934. The US military used radioteletype in the 1930s and expanded this usage during World War II. From the 1980s, teleprinters were replaced by personal computers (PCs) running software to emulate teleprinters.
The term radioteletype is used to describe both the original radioteletype system, sometimes described as "Baudot", as well as the entire family of systems connecting two or more teleprinters or PCs using software to emulate teleprinters, over radio, regardless of alphabet, link system or modulation.
In some applications, notably military and government, radioteletype is known by the acronym RATT (Radio Automatic Teletype).
History
Landline teleprinter operations began in 1849 when a circuit was put in service between Philadelphia and New York City. Émile Baudot designed a system using a five unit code in 1874 that is still in use today. Teleprinter system design was gradually improved until, at the beginning of World War II, it represented the principal distribution method used by the news services.
Radioteletype evolved from these earlier landline teleprinter operations. The US Department of the Navy successfully tested printing telegraphy between an airplane and ground radio station in August 1922. Later that year, the Radio Corporation of America successfully tested printing telegraphy via their Chatham, MA radio station to the R.M.S. Majestic. An early implementation of the Radioteletype was the Watsongraph, named after Detroit inventor Glenn Watson in March 1931. Commercial RTTY systems were in active service between San Francisco and Honolulu as early as April 1932 and between San Francisco and New York City by 1934. The US Military used radioteletype in the 1930s and expanded this usage during World War II. The Navy called radioteletype RATT (Radio Automatic Teletype) and the Army Signal Corps called radioteletype SCRT, an abbreviation of Single-Channel Radio Teletype. The military used frequency shift keying technology and this technology proved very reliable even over long distances.
From the 1980s, teleprinters were replaced by computers running teleprinter emulation software.
Technical description
A radioteletype station consists of three distinct parts: the Teletype or teleprinter, the modem and the radio.
The Teletype or teleprinter is an electromechanical or electronic device. The word Teletype was a trademark of the Teletype Corporation, so the terms "TTY", "RTTY", "RATT" and "teleprinter" are usually used to describe a generic device without reference to a particular manufacturer.
Electromechanical teleprinters are heavy, complex and noisy, and have largely been replaced with electronic units. The teleprinter includes a keyboard, which is the main means of entering text, and a printer or visual display unit (VDU). An alternative input device is a perforated tape reader and, more recently, computer storage media (such as floppy disks). Alternative output devices are tape perforators and computer storage media.
The line output of a teleprinter can be at either digital logic levels (+5 V signifies a logical "1" or mark and 0 V signifies a logical "0" or space) or line levels (−80 V signifies a "1" and +80 V a "0"). When no traffic is passed, the line idles at the "mark" state.
When a key of the teleprinter keyboard is pressed, a 5-bit character is generated. The teleprinter converts it to serial format and transmits a sequence of a start bit (a logical 0 or space), then one after the other the 5 data bits, finishing with a stop bit (a logical 1 or mark, lasting 1, 1.5 or 2 bits). When a sequence of start bit, 5 data bits and stop bit arrives at the input of the teleprinter, it is converted to a 5-bit word and passed to the printer or VDU. With electromechanical teleprinters, these functions required complicated electromechanical devices, but they are easily implemented with standard digital electronics using shift registers. Special integrated circuits have been developed for this function, for example the Intersil 6402 and 6403. These are stand-alone UART devices, similar to computer serial port peripherals.
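The start/stop framing described above is easy to sketch in Python. The following illustrative function returns the line states for one character as (level, duration in bit times) pairs, with 0 for space and 1 for mark; the ITA2 code for 'Y' (10101) is used as sample input, and at 45.45 baud one bit time is about 22 ms:

def frame_character(code, stop_bits=1.5):
    # Serialize one 5-bit code as start bit, five data bits, stop element.
    elements = [(0, 1.0)]                           # start bit: space
    for i in range(5):
        elements.append(((code >> i) & 1, 1.0))     # five data bits, least significant first
    elements.append((1, stop_bits))                 # stop: mark, lasting 1, 1.5 or 2 bit times
    return elements

print(frame_character(0b10101))                     # ITA2 'Y'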
The 5 data bits allow for only 32 different codes, which cannot accommodate the 26 letters, 10 figures, space, a few punctuation marks and the required control codes, such as carriage return, new line, bell, etc. To overcome this limitation, the teleprinter has two states, the unshifted or letters state and the shifted or numbers or figures state. The change from one state to the other takes place when the special control codes LETTERS and FIGURES are sent from the keyboard or received from the line. In the letters state the teleprinter prints the letters and space while in the shifted state it prints the numerals and punctuation marks. Teleprinters for languages using other alphabets also use an additional third shift state, in which they print letters in the alternative alphabet.
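The letters/figures mechanism amounts to a small state machine. The Python sketch below tracks the current shift state and emits the FIGURES (11011) or LETTERS (11111) control code only when the state must change; the per-character code table itself is not reproduced here and is assumed to be supplied by the caller:

LTRS = 0b11111   # ITA2 letters-shift control code
FIGS = 0b11011   # ITA2 figures-shift control code

def insert_shift_codes(chars, is_figure, to_code):
    # is_figure(ch): True if ch belongs to the figures case.
    # to_code(ch):   the 5-bit ITA2 code for ch (full table not shown here).
    in_figures = False                       # assume the link starts in the letters state
    for ch in chars:
        want_figures = is_figure(ch)
        if want_figures != in_figures:       # emit a shift code only on a case change
            yield FIGS if want_figures else LTRS
            in_figures = want_figures
        yield to_code(ch)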
The modem is sometimes called the terminal unit and is an electronic device which is connected between the teleprinter and the radio transceiver. The transmitting part of the modem converts the digital signal transmitted by the teleprinter or tape reader to one or the other of a pair of audio frequency tones, traditionally 2295/2125 Hz (US) or 2125/1955 Hz (Europe). One of the tones corresponds to the mark condition and the other to the space condition. These audio tones, then, modulate an SSB transmitter to produce the final audio-frequency shift keying (AFSK) radio frequency signal. Some transmitters are capable of direct frequency-shift keying (FSK) as they can directly accept the digital signal and change their transmitting frequency according to the mark or space input state. In this case the transmitting part of the modem is bypassed.
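The transmitting side of an AFSK modem can be approximated in a few lines of Python. The sketch below generates phase-continuous audio for a bit sequence, taking 2125 Hz as the mark tone and 2295 Hz as the space tone (the usual amateur convention for the US tone pair mentioned above); the sample rate and tone assignment are assumptions made for the example:

import math

SAMPLE_RATE = 8000            # audio samples per second (assumed for the example)
MARK, SPACE = 2125.0, 2295.0  # Hz; US high-tone pair, mark taken as the lower tone
BAUD = 45.45

def afsk_samples(bits):
    # 1 = mark tone, 0 = space tone; phase is kept continuous to avoid clicks.
    samples, phase = [], 0.0
    samples_per_bit = int(round(SAMPLE_RATE / BAUD))
    for bit in bits:
        freq = MARK if bit else SPACE
        for _ in range(samples_per_bit):
            phase += 2 * math.pi * freq / SAMPLE_RATE
            samples.append(math.sin(phase))
    return samples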
On reception, the FSK signal is converted to the original tones by mixing the FSK signal with a local oscillator called the BFO or beat frequency oscillator. These tones are fed to the demodulator part of the modem, which processes them through a series of filters and detectors to recreate the original digital signal. The FSK signals are audible on a communications radio receiver equipped with a BFO, and have a distinctive "beedle-eeeedle-eedle-eee" sound, usually starting and ending on one of the two tones ("idle on mark").
The transmission speed is a characteristic of the teleprinter while the shift (the difference between the tones representing mark and space) is a characteristic of the modem. These two parameters are therefore independent, provided they have satisfied the minimum shift size for a given transmission speed. Electronic teleprinters can readily operate in a variety of speeds, but mechanical teleprinters require the change of gears in order to operate at different speeds.
Today, both functions can be performed with modern computers equipped with digital signal processors or sound cards. The sound card performs the functions of the modem and the CPU performs the processing of the digital bits. This approach is very common in amateur radio, using specialized computer programs like fldigi, MMTTY or MixW.
Before the computer mass storage era, most RTTY stations stored text on paper tape using paper tape punchers and readers. The operator would type the message on the TTY keyboard and punch the code onto the tape. The tape could then be transmitted at a steady, high rate, without typing errors. A tape could be reused, and in some cases - especially for use with ASCII on NC Machines - might be made of plastic or even very thin metal material in order to be reused many times.
The most common test signal is a series of "RYRYRY" characters, as these form an alternating tone pattern exercising all bits and are easily recognized. Pangrams are also transmitted on RTTY circuits as test messages, the most common one being "The quick brown fox jumps over the lazy dog", and in French circuits, "Voyez le brick géant que j'examine près du wharf".
Technical specification
The original (or "Baudot") radioteletype system is based almost invariably on the Baudot code or ITA-2 5 bit alphabet. The link is based on character asynchronous transmission with 1 start bit and 1, 1.5 or 2 stop bits. Transmitter modulation is normally FSK (F1B). Occasionally, an AFSK signal modulating an RF carrier (A2B, F2B) is used on VHF or UHF frequencies. Standard transmission speeds are 45.45, 50, 75, 100, 150 and 300 baud.
Common carrier shifts are 85 Hz (used on LF and VLF frequencies), 170 Hz, 425 Hz, 450 Hz and 850 Hz, although some stations use non-standard shifts. There are variations of the standard Baudot alphabet to cover languages written in Cyrillic, Arabic, Greek etc., using special techniques.
Some combinations of speed and shift are standardized for specific services using the original radioteletype system:
Amateur radio transmissions are almost always 45.45 baud – 170 Hz, although 75 baud activity is being promoted by BARTG in the form of 4-hour contests.
Radio amateurs have experimented with ITA-5 (7-bit ASCII) alphabet transmissions at 110 baud – 170 Hz.
NATO military services use 75 or 100 baud – 850 Hz.
A few naval stations still use RTTY without encryption for CARB (channel availability broadcasts).
Commercial, diplomatic and weather services prefer 50 baud – 425 or 450 Hz.
Russian (and in the past, Soviet Union) merchant marine communications use 50 baud – 170 Hz.
RTTY transmissions on LF and VLF frequencies use a narrow shift of 85 Hz, due to the limited bandwidth of the antennas.
Early amateur radioteletype history
After World War II, amateur radio operators in the US started to receive obsolete but usable Teletype Model 26 equipment from commercial operators with the understanding that this equipment would not be used for or returned to commercial service. "The Amateur Radioteletype and VHF Society" was founded in 1946 in Woodside, NY. This organization soon changed its name to "The VHF Teletype Society" and started US Amateur Radio operations on 2 meters using audio frequency shift keying (AFSK). The first two-way amateur radioteletype QSO of record took place in May 1946 between Dave Winters, W2AUF, Brooklyn, NY and W2BFD, John Evans Williams, Woodside Long Island, NY. On the west coast, amateur RTTY also started on 2 meters. Operation on 80 meters, 40 meters and the other High Frequency (HF) amateur radio bands was initially accomplished using make and break keying since frequency shift keying (FSK) was not yet authorized. In early 1949, the first American transcontinental two-way RTTY QSO was accomplished on 11 meters using AFSK between Tom McMullen (W1QVF) operating at W1AW and Johnny Agalsoff, W6PSW. The stations effected partial contact on January 30, 1949, and repeated more successfully on January 31. On February 1, 1949, the stations exchanged solid print congratulatory message traffic and rag-chewed. Earlier, on January 23, 1949, William T. Knott, W2QGH, Larchmont, NY, had been able to make rough copy of W6PSW's test transmissions. While QSOs could be accomplished, it was quickly realized that FSK was technically superior to make and break keying. Due to the efforts of Merrill Swan, W6AEE, of "The RTTY Society of Southern California" publisher of RTTY and Wayne Green, W2NSD, of CQ Magazine, Amateur Radio operators successfully petitioned the U.S. Federal Communications Commission (FCC) to amend Part 12 of the Regulations, which was effective on February 20, 1953. The amended Regulations permitted FSK in the non-voice parts of the 80, 40 and 20 meter bands and also specified the use of single channel 60 words-per-minute five unit code corresponding to ITA2. A shift of 850 hertz plus or minus 50 hertz was specified. Amateur Radio operators also had to identify their station callsign at the beginning and the end of each transmission and at ten-minute intervals using International Morse code. Use of this wide shift proved to be a problem for Amateur Radio operations. Commercial operators had already discovered that narrow shift worked best on the HF bands. After investigation and a petition to the FCC, Part 12 was amended, in March 1956, to allow Amateur Radio Operators to use any shift that was less than 900 hertz.
The FCC Notice of Proposed Rule Making (NPRM) that resulted in the authorization of Frequency Shift Keying (FSK) in the amateur high frequency (HF) bands responded to petitions by the American Radio Relay League (ARRL), the National Amateur Radio Council and Mr. Robert Weinstein. The NPRM specifically states this, and this information may be found in its entirety in the December 1951 Issue of QST. While the New RTTY Handbook gives ARRL no credit, it was published by CQ Magazine and its author was a CQ columnist (CQ generally opposed ARRL at that time).
The first RTTY Contest was held by the RTTY Society of Southern California from October 31 to November 1, 1953. Named the RTTY Sweepstakes Contest, twenty nine participants exchanged messages that contained a serial number, originating station call, check or RST report of two or three numbers, ARRL section of originator, local time (0000-2400 preferred) and date. Example: NR 23 W0BP CK MINN 1325 FEB 15. By the late 1950s, the contest exchange was expanded to include band used. Example: NR 23 W0BP CK MINN 1325 FEB 15 FORTY METERS. The contest was scored as follows: one point for each message sent and received entirely by RTTY and one point for each message received and acknowledged by RTTY. The final score was computed by multiplying the total number of message points by the number of ARRL sections worked. Two stations could exchange messages again on a different band for added points, but the section multiplier did not increase when the same section was reworked on a different band. Each DXCC entity was counted as an additional ARRL section for RTTY multiplier credit.
RTTY, later named RTTY Journal, also published the first listing of stations, mostly located in the continental US, that were interested in RTTY in 1956. Amateur Radio operators used this callbook information to contact other operators both inside and outside the United States. For example, the first recorded USA to New Zealand two-way RTTY QSO took place in 1956 between W0BP and ZL1WB.
By the late 1950s, new organizations focused on amateur radioteletype started to appear. The "British Amateur Radio Teletype Group", BARTG, now known as the "British Amateur Radio Teledata Group" was formed in June 1959. The Florida RTTY Society was formed in September 1959. Amateur Radio operators outside of Canada and the United States began to acquire surplus teleprinter and receive permission to get on the air. The first recorded RTTY QSO in the UK occurred in September 1959 between G2UK and G3CQE. A few weeks later, G3CQE had the first G/VE RTTY QSO with VE7KX. This was quickly followed up by G3CQE QSOs with VK3KF and ZL3HJ. Information on how to acquire surplus teleprinter equipment continued to spread and before long it was possible to work all continents on RTTY.
Amateur Radio operators used various equipment designs to get on the air using RTTY in the 1950s and 1960s. Amateurs used their existing receivers for RTTY operation but needed to add a terminal unit, sometimes called a demodulator, to convert the received audio signals to DC signals for the teleprinter.
Most of the terminal unit equipment used for receiving RTTY signals was homebuilt, using designs published in amateur radio publications. These original designs can be divided into two classes of terminal units: audio-type and intermediate frequency converters. The audio-type converters proved to be more popular with amateur radio operators. The Twin City, W2JAV and W2PAT designs were examples of typical terminal units that were used into the middle 1960s. The late 1960s and early 1970s saw the emergence of terminal units designed by W6FFC, such as the TT/L, ST-3, ST-5, and ST-6. These designs were first published in RTTY Journal starting in September 1967 and ending in 1970.
An adaptation of the W6FFC TT/L terminal unit was developed by Keith Petersen, W8SDZ, and it was first published in the RTTY Journal in September 1967. The drafting of the schematic in the article was done by Ralph Leland, W8DLT.
Amateur Radio operators needed to modify their transmitters to allow for HF RTTY operation. This was accomplished by adding a frequency shift keyer that used a diode to switch a capacitor in and out of the circuit, shifting the transmitter’s frequency in synchronism with the teleprinter signal changing from mark to space to mark. A very stable transmitter was required for RTTY. The typical frequency multiplication type transmitter that was popular in the 1950s and 1960s would be relatively stable on 80 meters but become progressively less stable on 40 meters, 20 meters and 15 meters. By the middle 1960s, transmitter designs were updated, mixing a crystal-controlled high frequency oscillator with a variable low frequency oscillator, resulting in better frequency stability across all Amateur Radio HF bands.
During the early days of Amateur RTTY, the Worked All Continents – RTTY Award was conceived by the RTTY Society of Southern California and issued by RTTY Journal. The first Amateur Radio station to achieve this WAC – RTTY Award was VE7KX. The first stations recognized as having achieved single band WAC RTTY were W1MX (3.5 MHz); DL0TD (7.0 MHz); K3SWZ (14.0 MHz); W0MT (21.0 MHz) and FG7XT (28.0 MHz). The ARRL began issuing WAC RTTY certificates in 1969.
By the early 1970s, Amateur Radio RTTY had spread around the world and it was finally possible to work more than 100 countries via RTTY. FG7XT was the first Amateur Radio station to claim to achieve this honor. However, Jean did not submit his QSL cards for independent review. ON4BX, in 1971, was the first Amateur Radio station to submit his cards to the DX Editor of RTTY Journal and to achieve this honor. The ARRL began issuing DXCC RTTY Awards on November 1, 1976. Prior to that date, an award for working 100 countries on RTTY was only available via RTTY Journal.
In the 1950s through the 1970s, "RTTY art" was a popular on-air activity. This consisted of (sometimes very elaborate and artistic) pictures sent over rtty through the use of lengthy punched tape transmissions and then printed by the receiving station on paper.
On January 7, 1972, the FCC amended Part 97 to allow faster RTTY speeds. Four standard RTTY speeds were authorized, namely, 60 (45 baud), 67 (50 baud), 75 (56.25 baud) and 100 (75 baud) words per minute. Many Amateur Radio operators had equipment that was capable of being upgraded to 75 and 100 words per minute by changing teleprinter gears. While there was an initial interest in 100 words per minute operation, many Amateur Radio operators moved back to 60 words per minute. Some of the reasons for the failure of 100 words per minute HF RTTY included poor operation of improperly maintained mechanical teleprinters, narrow bandwidth terminal units, continued use of 170 Hz shift at 100 words per minute and excessive error rates due to multipath distortion and the nature of ionospheric propagation.
The FCC approved the use of ASCII by Amateur Radio stations on March 17, 1980 with speeds up to 300 baud from 3.5 to 21.25 MHz and 1200 baud between 28 and 225 MHz. Speeds up to 19.2 kilobaud were authorized on Amateur frequencies above 420 MHz.
These symbol rates were later modified:
12m band and below – 300 baud symbol rate – 47 CFR § 97.307 (f)(3)
10m band – 1200 baud symbol rate – 47 CFR § 97.307 (f)(4)
6m and 2m bands – 19.6 kilobaud symbol rate – 47 CFR § 97.307 (f)(5)
1.25m and 70cm bands – 56 kilobaud symbol rate – 47 CFR § 97.307 (f)(6)
33 cm band and above – symbol rate not specified – 47 CFR § 97.307 (f)(7)
The requirement for Amateur Radio operators in the United States to identify their station callsign at the beginning and the end of each digital transmission and at ten-minute intervals using International Morse code was finally lifted by the FCC on June 15, 1983.
Comparison with other modes
RTTY has a typical baud rate for Amateur operation of 45.45 baud (approximately 60 words per minute). It remains popular as a "keyboard to keyboard" mode in Amateur Radio. RTTY has declined in commercial popularity as faster, more reliable alternative data modes have become available, using satellite or other connections.
For its transmission speed, RTTY has low spectral efficiency. The typical RTTY signal with 170 Hz shift at 45.45 baud requires around 250 Hz receiver bandwidth, more than double that required by PSK31. In theory, at this baud rate, the shift size can be decreased to 22.725 Hz, reducing the overall band footprint substantially. Because RTTY, using either AFSK or FSK modulation, produces a waveform with constant power, a transmitter does not need to use a linear amplifier, which is required for many digital transmission modes. A more efficient Class C amplifier may be used.
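The 22.725 Hz figure follows from the minimum-shift condition (modulation index h = 0.5), under which the mark-to-space separation equals half the symbol rate:

\Delta f_{\min} = \frac{R_{\text{baud}}}{2} = \frac{45.45}{2} \approx 22.7\ \text{Hz}

The customary 170 Hz shift is therefore almost four times wider than this theoretical minimum.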
RTTY, using either AFSK or FSK modulation, is moderately resistant to vagaries of HF propagation and interference, however modern digital modes, such as MFSK, use Forward Error Correction to provide much better data reliability.
Primary users
The primary users are those who need robust shortwave communications. Examples are:
All military departments, all over the world (using cryptography)
Diplomatic services all over the world (using cryptography)
Weather reports are transmitted by the US Coast Guard nearly continuously
RTTY systems are also fielded by amateur radio operators, and are popular for long-distance contacts
One regular service transmitting RTTY meteorological information is the German Meteorological Service (Deutscher Wetterdienst or DWD). The DWD regularly transmits two programs on various LF and HF frequencies in standard RTTY (ITA-2 alphabet), each under its own callsign, with published frequencies, baud rates and shifts.
The DWD signals can be easily received in Europe, North Africa and parts of North America.
Pronunciation
RTTY (in English) may be spoken as "radioteletype", by its letters: R-T-T-Y, or simply as /ˈɹɪti/ or /ˈɹəti/.
See also
Related technical references
Asynchronous serial communication
Modem
Teleprinter
Telex
Types of radio emissions
UART
Digital HF radio communications systems
ACARS, used by commercial aviation – packet based
CLOVER2000 developed by HAL company, USA, for Radio Amateur application
Hellschreiber, a FAX-RTTY hybrid, very old system from the 1930s
MFSK including COQUELET, PICCOLO and Olivia MFSK, also referred to generically as Polytone
MT63, developed and used by Radio Amateurs and some government agencies
Navtex, used for maritime weather reports, with FEC error control code using the SITOR-B system
PSK31 and PSK63 developed and used by Radio Amateurs
PACTOR, a packet SITOR variant, developed by Radio Amateurs in Germany
AX.25, the original digital PacketRadio standard developed by Amateurs
Automatic Packet Reporting System, built on top of AX.25, used by Amateurs and Emergency services and which includes GPS Positioning
Q15X25, a Radio Amateur created packet format (AX25), similar to the commercial X25 standard
Fast Simple QSO or FSQ, an HF mode developed by Radio Amateurs for use in NVIS and sunrise/sunset conditions.
SITOR, (SImplex Teleprinting Over Radio) a commercial RTTY variant with error control (the Radio Amateur version is called AMTOR)
Sailmail, a commercial HF mail system
WSJT, a computer program used for weak-signal radio communication between amateur radio operators
References
Further reading
Getting Started on RTTY, Getting started on RTTY using MMTTY
RTTY.COM, a repository of Amateur RTTY information
British Amateur Radio Teledata Group (BARTG)
RTTY Demodulator Development by Kok Chen, W7AY. A technology review for the early period until ca 1965.
Quantized radio modulation modes
Military radio systems
Amateur radio
Wireless communication systems |
26384 | https://en.wikipedia.org/wiki/Rebol | Rebol | Rebol (historically REBOL) is a cross-platform data exchange language and a multi-paradigm dynamic programming language designed by Carl Sassenrath for network communications and distributed computing. It introduces the concept of dialecting: small, optimized, domain-specific languages for code and data, which Sassenrath considers the most notable property of the language.
Douglas Crockford, known for his involvement in the development of JavaScript, has described Rebol as "a more modern language, but with some very similar ideas to Lisp, in that it's all built upon a representation of data which is then executable as programs" and as one of JSON's influences.
Originally, the language and its official implementation were proprietary and closed source, developed by REBOL Technologies. Following discussion with Lawrence Rosen, the Rebol version 3 interpreter was released under the Apache 2.0 license on December 12, 2012. Older versions are only available in binary form, and no source release for them is planned.
Rebol has been used to program Internet applications (both client- and server-side), database applications, utilities, and multimedia applications.
Etymology
Rebol was initially an acronym for Relative Expression Based Object Language, written in all caps. To align with modern trends in language naming, represented for example by the change from the historical name LISP to Lisp, programmers ceased the practice of writing REBOL in all caps. Sassenrath eventually put the naming question to community debate on his blog. In subsequent writing, he adopted the convention of writing the language name as Rebol.
History
First released in 1997, Rebol was designed over a 20-year period by Carl Sassenrath, the architect and primary developer of AmigaOS, based on his study of denotational semantics and using concepts from the programming languages Lisp, Forth, Logo, and Self.
REBOL Technologies was founded in 1998.
REBOL 2, the interpreter, which became the core of extended interpreter editions, was first released in 1999.
REBOL/Command, which added strong encryption and ODBC access, was released in September 2000.
REBOL/View was released in April 2001, adding graphical abilities on the core language.
REBOL/IOS, an extensible collaboration environment built with REBOL was released in August 2001.
REBOL/SDK, providing a choice of kernels to bind against, as well as a preprocessor, was released in December 2002.
Rebol 3 [R3], the newest version of the interpreter, was released in alpha versions by REBOL Technologies starting in January 2008. Since its release as an Apache 2 project in December 2012, it has been developed by the Rebol community.
Design
Ease of use
One of the Rebol design principles is "to do simple things in simple ways". In the following example the Visual interface dialect is used to describe a simple Hello world program with a graphical user interface:
view layout [text "Hello world!" button "Quit" [quit]]
This is how a similar example looks in R3-GUI:
view [text "Hello world!" button "Quit" on-action [quit]]
Dialects
Rebol domain-specific languages, called dialects, are micro-languages optimized for a specific purpose. Dialects can be used to define business rules, graphical user interfaces or sequences of screens during the installation of a program. Users can define their own dialects, reusing any existing Rebol word and giving it a specific meaning in that dialect. Dialects are interpreted by functions processing Rebol blocks (or parsing strings) in a specific way.
An example of Rebol's dialecting abilities can be seen with the word return. In the data exchange dialect return is just a word not having any specific meaning. In the do dialect, return is a global variable referring to a native function passing back a function result value. In the visual interface dialect (VID), return is a keyword causing the layout engine to simulate a carriage return, moving the "rendering pen" down to the beginning of the next line.
A Rebol interpreter with graphical abilities must understand and interpret many dialects. The table below lists the most important ones in order of significance.
Syntax
Rebol syntax is free-form, not requiring specific positioning. However, indentation is often used to better convey the structure of the text to human readers.
Syntactic properties of different dialects may differ. The common platform for all Rebol dialects is the data exchange dialect; other dialects are usually derived from it. In addition to being the common platform for all dialects, the data exchange dialect is directly used to represent data and metadata, populate data structures, send data over Internet, and save them in data storage.
In contrast to programming languages like C, the data exchange dialect does not consist of declarations, statements, expressions or keywords. A valid data exchange dialect text stream is a tree data structure consisting of blocks (the root block is implicit, subblocks are delimited by square brackets), parens (delimited by round brackets), strings (delimited by double quotes or curly brackets suitable for multi-line strings; caret notation is used for unprintable characters), URLs, e-mail addresses, files, paths or other composite values. Unlike ALGOL blocks, Rebol blocks are composite values similar to quoted s-expressions in Lisp. The fact that code is written in the form of Rebol blocks makes the language homoiconic.
Blocks as well as parens may contain other composite values (a block may contain subblocks, parens, strings, ...) or scalar values like words, set-words (words suffixed by the colon), get-words (words prefixed by the colon), lit-words (words prefixed by the apostrophe), numbers, money, characters, etc., separated by whitespace. Note that special characters are allowed in words, so a+b is a word unlike a + b, which is a sequence of three words separated by spaces.
Comments may appear following the semicolon until the end of the line. Multi-line comments or comments not ignored by the lexical parser can be written using "ordinary" datatypes like multi-line strings.
Semantics
Blocks containing domain-specific language can be submitted as arguments to specific evaluator functions.
do
The most frequently used evaluator is the do function. It is used by default to interpret the text input to the interpreter console.
The do dialect interpreted by the do function, is an expression-oriented sublanguage of the data exchange dialect. The main semantic unit of the language is the expression. In contrast to imperative programming languages descending from ALGOL, the do dialect has neither keywords, nor statements.
Words are used as case-insensitive variables. As in all dynamically typed languages, variables don't have an associated type; type is associated with values. When the do function encounters a word, it evaluates it and returns the result. The set-word form of a word can be used for assignment. Although the dialect has no statements, assignment, together with functions with side effects, can be used for imperative programming.
Subblocks of the root block evaluate to themselves. This property is used to handle data blocks, for structured programming by submitting blocks as arguments to control functions like if, either, loop, etc., and for dialecting, when a block is passed to a specific interpreter function.
A specific problem worth noting is that composite values, assigned to variables, are not copied. To make a copy, the value must be passed to the copy function.
The do function normally follows a prefix style of evaluation, where a function processes the arguments that follow it. However, infix evaluation using infix operators exists too. Infix evaluation takes precedence over the prefix evaluation. For example,
abs -2 + 3
returns 1, since the infix addition takes precedence over the computation of the absolute value. When evaluating infix expressions, the order of evaluation is left to right, no operator takes precedence over another. For example,
2 + 3 * 4
returns 20, while an evaluation giving precedence to multiplication would yield 14. All operators have prefix versions. Do usually evaluates arguments before passing them to a function. So, the below expression:
print read http://en.wikipedia.org/wiki/Rebol
first reads the Wikipedia Rebol page and then passes the result to the print function. Parentheses can be used to change the order of evaluation. Using prefix notation, the usage of parentheses in expressions can be avoided.
The simple precedence rules are both an advantage:
No need to "consult" precedence tables when writing expressions
No need to rewrite precedence tables when a new operator is defined
Expressions can be easily transliterated from infix to prefix notation and vice versa
as well as a disadvantage:
Users accustomed to more conventional precedence rules may easily make a mistake
parse
The parse function is preferably used to specify, validate, transform and interpret dialects. It does so by matching parse expressions at run time.
Parse expressions are written in the parse dialect, which, like the do dialect, is an expression-oriented sublanguage of the data exchange dialect. Unlike the do dialect, the parse dialect uses keywords to represent operators and the most important nonterminals; its infix parsing operators don't have prefix equivalents and follow precedence rules (sequence has higher precedence than choice).
Actions can be included to be taken during the parsing process as well and the parse function can be used to process blocks or strings. At the string parsing level parse must handle the "low level" parsing, taking into account characters and delimiters. Block parsing is higher level, handling the scanning at the level of Rebol values.
The parse dialect belongs to the family of grammars represented by the top-down parsing language or the parsing expression grammar (PEG). The main similarity is the presence of the sequence and choice operators all the family members have. Parse dialect syntax and the similarities between the parse dialect and the PEG are illustrated by this transliteration of a PEG example that parses an arithmetic expression:
Digit: charset [#"0" - #"9"]
Value: [some Digit | "(" Expr ")"]
Product: [Value any [["*" | "/"] Value]]
Sum: [Product any [["+" | "-"] Product]]
Expr: Sum
parse/all "12+13" Expr
Implementations
The official Rebol 2.7.8 implementation is available in several editions (/Core, /View, /Command, /SDK and /IOS). Both /Core and /View editions are freely redistributable software.
The runtime environment is stored in a single executable file. Rebol/Core 2.7.8, the console edition, is about 300 KB and Rebol/View 2.7.8, the graphical user interface edition, is about 650 KB in size.
Rebol/View provides platform-independent graphics and sound access, and comes with its own windowing toolkit and extensible set of styles (GUI widgets). Extended editions, such as Rebol/Command 2.7.8 or Rebol/SDK 2.7.8 require a paid license; they add features like ODBC data access, and the option to create standalone executable files.
Legacy
Rebol was named by Douglas Crockford as one of the inspirations of JavaScript Object Notation.
Rebol inspired the open-source Orca project, which is an interpreted Rebol-like language.
Boron is an interpreted, homoiconic language inspired by and similar to Rebol, which is meant for embedding domain specific languages. It is implemented as a C library licensed under the terms of the LGPLv3.
The Red programming language was directly inspired by Rebol, yet the implementation choices of Red were geared specifically to overcoming its perceived limitations.
See also
Domain-specific language
Language-oriented programming
References
Further reading
External links
A REBOL tutorial
Rebol 3 Tutorial
Programming languages
AmigaOS 4 software
Dynamic programming languages
Dynamically typed programming languages
Functional languages
Prototype-based programming languages
Scripting languages
Extensible syntax programming languages
Formerly proprietary software
Programming languages created in 1997
High-level programming languages
Homoiconic programming languages |
26672 | https://en.wikipedia.org/wiki/SHA-1 | SHA-1 | In cryptography, SHA-1 (Secure Hash Algorithm 1) is a cryptographically broken but still widely used hash function which takes an input and produces a 160-bit (20-byte) hash value known as a message digest – typically rendered as a hexadecimal number, 40 digits long. It was designed by the United States National Security Agency, and is a U.S. Federal Information Processing Standard.
Since 2005, SHA-1 has not been considered secure against well-funded opponents; as of 2010 many organizations have recommended its replacement.
NIST formally deprecated use of SHA-1 in 2011 and disallowed its use for digital signatures in 2013. Chosen-prefix attacks against SHA-1 are now practical. As such, it is recommended to remove SHA-1 from products as soon as possible and instead use SHA-2 or SHA-3. Replacing SHA-1 is urgent where it is used for digital signatures.
All major web browser vendors ceased acceptance of SHA-1 SSL certificates in 2017. In February 2017, CWI Amsterdam and Google announced they had performed a collision attack against SHA-1, publishing two dissimilar PDF files which produced the same SHA-1 hash. However, SHA-1 is still secure for HMAC.
Microsoft has discontinued SHA-1 code signing support for Windows Update on August 7, 2020.
Development
SHA-1 produces a message digest based on principles similar to those used by Ronald L. Rivest of MIT in the design of the MD2, MD4 and MD5 message digest algorithms, but generates a larger hash value (160 bits vs. 128 bits).
SHA-1 was developed as part of the U.S. Government's Capstone project. The original specification of the algorithm was published in 1993 under the title Secure Hash Standard, FIPS PUB 180, by U.S. government standards agency NIST (National Institute of Standards and Technology). This version is now often named SHA-0. It was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly designated SHA-1. SHA-1 differs from SHA-0 only by a single bitwise rotation in the message schedule of its compression function. According to the NSA, this was done to correct a flaw in the original algorithm which reduced its cryptographic security, but they did not provide any further explanation. Publicly available techniques did indeed demonstrate a compromise of SHA-0 in 2004, and of SHA-1 in 2017 (see the Attacks section below).
Applications
Cryptography
SHA-1 forms part of several widely used security applications and protocols, including TLS and SSL, PGP, SSH, S/MIME, and IPsec. Those applications can also use MD5; both MD5 and SHA-1 are descended from MD4.
SHA-1 and SHA-2 are the hash algorithms required by law for use in certain U.S. government applications, including use within other cryptographic algorithms and protocols, for the protection of sensitive unclassified information. FIPS PUB 180-1 also encouraged adoption and use of SHA-1 by private and commercial organizations. SHA-1 is being retired from most government uses; the U.S. National Institute of Standards and Technology said, "Federal agencies should stop using SHA-1 for...applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010" (emphasis in original), though that was later relaxed to allow SHA-1 to be used for verifying old digital signatures and time stamps.
A prime motivation for the publication of the Secure Hash Algorithm was the Digital Signature Standard, in which it is incorporated.
The SHA hash functions have been used for the basis of the SHACAL block ciphers.
Data integrity
Revision control systems such as Git, Mercurial, and Monotone use SHA-1, not for security, but to identify revisions and to ensure that the data has not changed due to accidental corruption. Linus Torvalds said about Git:
If you have disk corruption, if you have DRAM corruption, if you have any kind of problems at all, Git will notice them. It's not a question of if, it's a guarantee. You can have people who try to be malicious. They won't succeed. ... Nobody has been able to break SHA-1, but the point is the SHA-1, as far as Git is concerned, isn't even a security feature. It's purely a consistency check. The security parts are elsewhere, so a lot of people assume that since Git uses SHA-1 and SHA-1 is used for cryptographically secure stuff, they think that, Okay, it's a huge security feature. It has nothing at all to do with security, it's just the best hash you can get. ...
I guarantee you, if you put your data in Git, you can trust the fact that five years later, after it was converted from your hard disk to DVD to whatever new technology and you copied it along, five years later you can verify that the data you get back out is the exact same data you put in. ...
One of the reasons I care is for the kernel, we had a break in on one of the BitKeeper sites where people tried to corrupt the kernel source code repositories. However Git does not require the second preimage resistance of SHA-1 as a security feature, since it will always prefer to keep the earliest version of an object in case of collision, preventing an attacker from surreptitiously overwriting files.
Cryptanalysis and validation
For a hash function for which L is the number of bits in the message digest, finding a message that corresponds to a given message digest can always be done using a brute force search in approximately 2^L evaluations. This is called a preimage attack and may or may not be practical depending on L and the particular computing environment. However, a collision, consisting of finding two different messages that produce the same message digest, requires on average only about 2^(L/2) evaluations using a birthday attack. Thus the strength of a hash function is usually compared to a symmetric cipher of half the message digest length. SHA-1, which has a 160-bit message digest, was originally thought to have 80-bit strength.
In 2005, cryptographers Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu produced collision pairs for SHA-0 and have found algorithms that should produce SHA-1 collisions in far fewer than the originally expected 2^80 evaluations.
Some of the applications that use cryptographic hashes, like password storage, are only minimally affected by a collision attack. Constructing a password that works for a given account requires a preimage attack, as well as access to the hash of the original password, which may or may not be trivial. Reversing password encryption (e.g. to obtain a password to try against a user's account elsewhere) is not made possible by the attacks. (However, even a secure password hash can't prevent brute-force attacks on weak passwords.)
In the case of document signing, an attacker could not simply fake a signature from an existing document: The attacker would have to produce a pair of documents, one innocuous and one damaging, and get the private key holder to sign the innocuous document. There are practical circumstances in which this is possible; until the end of 2008, it was possible to create forged SSL certificates using an MD5 collision.
Due to the block and iterative structure of the algorithms and the absence of additional final steps, all SHA functions (except SHA-3) are vulnerable to length-extension and partial-message collision attacks. These attacks allow an attacker to forge a message signed only by a keyed hash – SHA(message ∥ key) or SHA(key ∥ message) – by extending the message and recalculating the hash without knowing the key. A simple improvement to prevent these attacks is to hash twice: SHAd(m) = SHA(SHA(0b ∥ m)) (the length of 0b, the zero block, is equal to the block size of the hash function).
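A minimal sketch of this hash-twice construction using Python's hashlib, with the 64-byte zero block matching SHA-1's block size; it illustrates the idea rather than any standardized API:

import hashlib

BLOCK_SIZE = 64  # SHA-1 processes 512-bit (64-byte) blocks

def sha1d(message: bytes) -> bytes:
    # SHAd(m) = SHA1(SHA1(0b || m)); the outer hash defeats length extension.
    inner = hashlib.sha1(b"\x00" * BLOCK_SIZE + message).digest()
    return hashlib.sha1(inner).digest()

print(sha1d(b"abc").hex())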
Attacks
In early 2005, Vincent Rijmen and Elisabeth Oswald published an attack on a reduced version of SHA-1 – 53 out of 80 rounds – which finds collisions with a computational effort of fewer than 2^80 operations.
In February 2005, an attack by Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu was announced. The attacks can find collisions in the full version of SHA-1, requiring fewer than 2^69 operations. (A brute-force search would require 2^80 operations.)
The authors write: "In particular, our analysis is built upon the original differential attack on SHA-0, the near collision attack on SHA-0, the multiblock collision techniques, as well as the message modification techniques used in the collision search attack on MD5. Breaking SHA-1 would not be possible without these powerful analytical techniques." The authors have presented a collision for 58-round SHA-1, found with 2^33 hash operations. The paper with the full attack description was published in August 2005 at the CRYPTO conference.
In an interview, Yin states that, "Roughly, we exploit the following two weaknesses: One is that the file preprocessing step is not complicated enough; another is that certain math operations in the first 20 rounds have unexpected security problems."
On 17 August 2005, an improvement on the SHA-1 attack was announced on behalf of Xiaoyun Wang, Andrew Yao and Frances Yao at the CRYPTO 2005 Rump Session, lowering the complexity required for finding a collision in SHA-1 to 2^63. On 18 December 2007 the details of this result were explained and verified by Martin Cochran.
Christophe De Cannière and Christian Rechberger further improved the attack on SHA-1 in "Finding SHA-1 Characteristics: General Results and Applications," receiving the Best Paper Award at ASIACRYPT 2006. A two-block collision for 64-round SHA-1 was presented, found using unoptimized methods with 2^35 compression function evaluations. Since this attack requires the equivalent of about 2^35 evaluations, it is considered to be a significant theoretical break. Their attack was extended further to 73 rounds (of 80) in 2010 by Grechnikov. In order to find an actual collision in the full 80 rounds of the hash function, however, tremendous amounts of computer time are required. To that end, a collision search for SHA-1 using the distributed computing platform BOINC began August 8, 2007, organized by the Graz University of Technology. The effort was abandoned May 12, 2009 due to lack of progress.
At the Rump Session of CRYPTO 2006, Christian Rechberger and Christophe De Cannière claimed to have discovered a collision attack on SHA-1 that would allow an attacker to select at least parts of the message.
In 2008, an attack methodology by Stéphane Manuel reported hash collisions with an estimated theoretical complexity of 2^51 to 2^57 operations. However, he later retracted that claim after finding that local collision paths were not actually independent, and finally quoted, as the most efficient, a collision vector that was already known before this work.
Cameron McDonald, Philip Hawkes and Josef Pieprzyk presented a hash collision attack with claimed complexity 2^52 at the Rump Session of Eurocrypt 2009. However, the accompanying paper, "Differential Path for SHA-1 with complexity O(2^52)", has been withdrawn due to the authors' discovery that their estimate was incorrect.
One attack against SHA-1 was by Marc Stevens, with an estimated cost of $2.77 million (2012) to break a single hash value by renting CPU power from cloud servers. Stevens developed this attack in a project called HashClash, implementing a differential path attack. On 8 November 2010, he claimed he had a fully working near-collision attack against full SHA-1 working with an estimated complexity equivalent to 2^57.5 SHA-1 compressions. He estimated this attack could be extended to a full collision with a complexity around 2^61.
The SHAppening
On 8 October 2015, Marc Stevens, Pierre Karpman, and Thomas Peyrin published a freestart collision attack on SHA-1's compression function that requires only 2^57 SHA-1 evaluations. This does not directly translate into a collision on the full SHA-1 hash function (where an attacker is not able to freely choose the initial internal state), but undermines the security claims for SHA-1. In particular, it was the first time that an attack on full SHA-1 had been demonstrated; all earlier attacks were too expensive for their authors to carry them out. The authors named this significant breakthrough in the cryptanalysis of SHA-1 The SHAppening.
The method was based on their earlier work, as well as the auxiliary paths (or boomerangs) speed-up technique from Joux and Peyrin, and using high performance/cost efficient GPU cards from NVIDIA. The collision was found on a 16-node cluster with a total of 64 graphics cards. The authors estimated that a similar collision could be found by buying US$2,000 of GPU time on EC2.
The authors estimated that the cost of renting enough of EC2 CPU/GPU time to generate a full collision for SHA-1 at the time of publication was between US$75K and 120K, and noted that was well within the budget of criminal organizations, not to mention national intelligence agencies. As such, the authors recommended that SHA-1 be deprecated as quickly as possible.
SHAttered – first public collision
On 23 February 2017, the CWI (Centrum Wiskunde & Informatica) and Google announced the SHAttered attack, in which they generated two different PDF files with the same SHA-1 hash in roughly 2^63.1 SHA-1 evaluations. This attack is about 100,000 times faster than brute forcing a SHA-1 collision with a birthday attack, which was estimated to take 2^80 SHA-1 evaluations. The attack required "the equivalent processing power of 6,500 years of single-CPU computations and 110 years of single-GPU computations".
Birthday-Near-Collision Attack – first practical chosen-prefix attack
On 24 April 2019 a paper by Gaëtan Leurent and Thomas Peyrin presented at Eurocrypt 2019 described an enhancement to the previously best chosen-prefix attack in Merkle–Damgård–like digest functions based on Davies–Meyer block ciphers. With these improvements, this method is capable of finding chosen-prefix collisions in approximately 2^68 SHA-1 evaluations. This is approximately 1 billion times faster (and now usable for many targeted attacks, thanks to the possibility of choosing a prefix, for example malicious code or faked identities in signed certificates) than the previous attack's 2^77.1 evaluations (but without chosen prefix, which was impractical for most targeted attacks because the found collisions were almost random) and is fast enough to be practical for resourceful attackers, requiring approximately $100,000 of cloud processing. This method is also capable of finding chosen-prefix collisions in the MD5 function, but at a complexity of 2^46.3 it does not surpass the prior best available method at a theoretical level (2^39), though potentially it does at a practical level (≤2^49). This attack has a memory requirement of 500+ GB.
On 5 January 2020 the authors published an improved attack. In this paper they demonstrate a chosen-prefix collision attack with a complexity of 2^63.4 that, at the time of publication, would cost US$45,000 per generated collision.
SHA-0
At CRYPTO 98, two French researchers, Florent Chabaud and Antoine Joux, presented an attack on SHA-0: collisions can be found with complexity 2^61, fewer than the 2^80 for an ideal hash function of the same size.
In 2004, Biham and Chen found near-collisions for SHA-0 – two messages that hash to nearly the same value; in this case, 142 out of the 160 bits are equal. They also found full collisions of SHA-0 reduced to 62 out of its 80 rounds.
Subsequently, on 12 August 2004, a collision for the full SHA-0 algorithm was announced by Joux, Carribault, Lemuet, and Jalby. This was done by using a generalization of the Chabaud and Joux attack. Finding the collision had complexity 2^51 and took about 80,000 processor-hours on a supercomputer with 256 Itanium 2 processors (equivalent to 13 days of full-time use of the computer).
On 17 August 2004, at the Rump Session of CRYPTO 2004, preliminary results were announced by Wang, Feng, Lai, and Yu, about an attack on MD5, SHA-0 and other hash functions. The complexity of their attack on SHA-0 is 2^40, significantly better than the attack by Joux et al.
In February 2005, an attack by Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu was announced which could find collisions in SHA-0 in 2^39 operations.
Another attack in 2008 applying the boomerang attack brought the complexity of finding collisions down to 2^33.6, which was estimated to take 1 hour on an average PC from the year 2008.
In light of the results for SHA-0, some experts suggested that plans for the use of SHA-1 in new cryptosystems should be reconsidered. After the CRYPTO 2004 results were published, NIST announced that they planned to phase out the use of SHA-1 by 2010 in favor of the SHA-2 variants.
Official validation
Implementations of all FIPS-approved security functions can be officially validated through the CMVP program, jointly run by the National Institute of Standards and Technology (NIST) and the Communications Security Establishment (CSE). For informal verification, a package to generate a high number of test vectors is made available for download on the NIST site; the resulting verification, however, does not replace the formal CMVP validation, which is required by law for certain applications.
There are over 2000 validated implementations of SHA-1, with 14 of them capable of handling messages with a length in bits not a multiple of eight (see SHS Validation List).
Examples and pseudocode
Example hashes
These are examples of SHA-1 message digests in hexadecimal and in Base64 binary to ASCII text encoding.
SHA1("The quick brown fox jumps over the lazy og")
Outputted hexadecimal: 2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
Outputted Base64 binary to ASCII text encoding: L9ThxnotKPzthJ7hu3bnORuT6xI=
Even a small change in the message will, with overwhelming probability, result in many bits changing due to the avalanche effect. For example, changing dog to cog produces a hash with different values for 81 of the 160 bits:
SHA1("The quick brown fox jumps over the lazy og")
Outputted hexadecimal: de9f2c7fd25e1b3afad3e85a0bd17d9b100db4b3
Outputted Base64 binary to ASCII text encoding: 3p8sf9JeGzr60+haC9F9mxANtLM=
The hash of the zero-length string is:
SHA1("")
Outputted hexadecimal: da39a3ee5e6b4b0d3255bfef95601890afd80709
Outputted Base64 binary to ASCII text encoding: 2jmj7l5rSw0yVb/vlWAYkK/YBwk=
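These digests can be reproduced with any standard SHA-1 implementation, for example with Python's hashlib:

import hashlib

for text in (b"The quick brown fox jumps over the lazy dog",
             b"The quick brown fox jumps over the lazy cog",
             b""):
    print(hashlib.sha1(text).hexdigest())
# 2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
# de9f2c7fd25e1b3afad3e85a0bd17d9b100db4b3
# da39a3ee5e6b4b0d3255bfef95601890afd80709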
SHA-1 pseudocode
Pseudocode for the SHA-1 algorithm follows:
Note 1: All variables are unsigned 32-bit quantities and wrap modulo 2^32 when calculating, except for
ml, the message length, which is a 64-bit quantity, and
hh, the message digest, which is a 160-bit quantity.
Note 2: All constants in this pseudocode are in big endian.
Within each word, the most significant byte is stored in the leftmost byte position
Initialize variables:
h0 = 0x67452301
h1 = 0xEFCDAB89
h2 = 0x98BADCFE
h3 = 0x10325476
h4 = 0xC3D2E1F0
ml = message length in bits (always a multiple of the number of bits in a character).
Pre-processing:
append the bit '1' to the message e.g. by adding 0x80 if message length is a multiple of 8 bits.
append 0 ≤ k < 512 bits '0', such that the resulting message length in bits
is congruent to −64 ≡ 448 (mod 512)
append ml, the original message length in bits, as a 64-bit big-endian integer.
Thus, the total length is a multiple of 512 bits.
Process the message in successive 512-bit chunks:
break message into 512-bit chunks
for each chunk
break chunk into sixteen 32-bit big-endian words w[i], 0 ≤ i ≤ 15
Message schedule: extend the sixteen 32-bit words into eighty 32-bit words:
for i from 16 to 79
Note 3: SHA-0 differs by not having this leftrotate.
w[i] = (w[i-3] xor w[i-8] xor w[i-14] xor w[i-16]) leftrotate 1
Initialize hash value for this chunk:
a = h0
b = h1
c = h2
d = h3
e = h4
Main loop:
for i from 0 to 79
if 0 ≤ i ≤ 19 then
f = (b and c) or ((not b) and d)
k = 0x5A827999
else if 20 ≤ i ≤ 39
f = b xor c xor d
k = 0x6ED9EBA1
else if 40 ≤ i ≤ 59
f = (b and c) or (b and d) or (c and d)
k = 0x8F1BBCDC
else if 60 ≤ i ≤ 79
f = b xor c xor d
k = 0xCA62C1D6
temp = (a leftrotate 5) + f + e + k + w[i]
e = d
d = c
c = b leftrotate 30
b = a
a = temp
Add this chunk's hash to result so far:
h0 = h0 + a
h1 = h1 + b
h2 = h2 + c
h3 = h3 + d
h4 = h4 + e
Produce the final hash value (big-endian) as a 160-bit number:
hh = (h0 leftshift 128) or (h1 leftshift 96) or (h2 leftshift 64) or (h3 leftshift 32) or h4
The number hh is the message digest, which can be written in hexadecimal (base 16).
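For illustration only, the pseudocode above translates almost line for line into the following pure-Python sketch; it mirrors the algorithm description, and real applications should rely on a vetted library such as hashlib instead:

import struct

def _rotl(x, n):
    # 32-bit left rotation, matching "leftrotate" in the pseudocode.
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1(message: bytes) -> bytes:
    # Initialize variables (h0..h4).
    h0, h1, h2, h3, h4 = (0x67452301, 0xEFCDAB89, 0x98BADCFE,
                          0x10325476, 0xC3D2E1F0)

    # Pre-processing: append the '1' bit, pad with zeros to 448 mod 512,
    # then append the original length as a 64-bit big-endian integer.
    ml = len(message) * 8
    message += b'\x80'
    message += b'\x00' * ((56 - len(message) % 64) % 64)
    message += struct.pack('>Q', ml)

    # Process the message in successive 512-bit (64-byte) chunks.
    for offset in range(0, len(message), 64):
        w = list(struct.unpack('>16I', message[offset:offset + 64]))
        for i in range(16, 80):
            w.append(_rotl(w[i-3] ^ w[i-8] ^ w[i-14] ^ w[i-16], 1))

        a, b, c, d, e = h0, h1, h2, h3, h4
        for i in range(80):
            if i < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif i < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif i < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            temp = (_rotl(a, 5) + f + e + k + w[i]) & 0xFFFFFFFF
            e, d, c, b, a = d, c, _rotl(b, 30), b, temp

        # Add this chunk's hash to the result so far.
        h0, h1, h2, h3, h4 = ((h0 + a) & 0xFFFFFFFF, (h1 + b) & 0xFFFFFFFF,
                              (h2 + c) & 0xFFFFFFFF, (h3 + d) & 0xFFFFFFFF,
                              (h4 + e) & 0xFFFFFFFF)

    # Produce the final 160-bit digest (big-endian).
    return struct.pack('>5I', h0, h1, h2, h3, h4)

# Sanity checks against the example digests given earlier.
assert sha1(b'').hex() == 'da39a3ee5e6b4b0d3255bfef95601890afd80709'
assert (sha1(b'The quick brown fox jumps over the lazy dog').hex()
        == '2fd4e1c67a2d28fced849ee1bb76e7391b93eb12')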
The chosen constant values used in the algorithm were assumed to be nothing-up-my-sleeve numbers:
The four round constants k are 2^30 times the square roots of 2, 3, 5 and 10. However, they were incorrectly rounded to the nearest integer instead of being rounded to the nearest odd integer, with equilibrated proportions of zero and one bits. As well, choosing the square root of 10 (which is not a prime) made it a common factor for the two other chosen square roots of primes 2 and 5, with possibly usable arithmetic properties across successive rounds, reducing the strength of the algorithm against finding collisions on some bits.
The first four starting values for h0 through h3 are the same as in the MD5 algorithm, and the fifth (for h4) is similar. However, they were not properly verified for being resistant against inversion of the first few rounds to infer possible collisions on some bits, usable by multiblock differential attacks.
Instead of the formulation from the original FIPS PUB 180-1 shown, the following equivalent expressions may be used to compute f in the main loop above:
Bitwise choice between c and d, controlled by b.
(0 ≤ i ≤ 19): f = d xor (b and (c xor d)) (alternative 1)
(0 ≤ i ≤ 19): f = (b and c) or ((not b) and d) (alternative 2)
(0 ≤ i ≤ 19): f = (b and c) xor ((not b) and d) (alternative 3)
(0 ≤ i ≤ 19): f = vec_sel(d, c, b) (alternative 4) [premo08]
Bitwise majority function.
(40 ≤ i ≤ 59): f = (b and c) or (d and (b or c)) (alternative 1)
(40 ≤ i ≤ 59): f = (b and c) or (d and (b xor c)) (alternative 2)
(40 ≤ i ≤ 59): f = (b and c) xor (d and (b xor c)) (alternative 3)
(40 ≤ i ≤ 59): f = (b and c) xor (b and d) xor (c and d) (alternative 4)
(40 ≤ i ≤ 59): f = vec_sel(c, b, c xor d) (alternative 5)
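All of these formulations operate bitwise, with each output bit depending only on the corresponding bits of b, c, and d, so their equivalence can be checked exhaustively on single-bit inputs. The following illustrative Python check (the vec_sel forms are AltiVec intrinsics and are omitted here) verifies the remaining alternatives against the forms used in the main loop above:

from itertools import product

for b, c, d in product((0, 1), repeat=3):
    nb = b ^ 1                                   # single-bit "not b"
    ch = (b & c) | (nb & d)                      # choice form used in rounds 0-19
    assert ch == d ^ (b & (c ^ d))               # alternative 1
    assert ch == (b & c) ^ (nb & d)              # alternative 3
    maj = (b & c) | (b & d) | (c & d)            # majority form used in rounds 40-59
    assert maj == (b & c) | (d & (b | c))        # alternative 1
    assert maj == (b & c) | (d & (b ^ c))        # alternative 2
    assert maj == (b & c) ^ (d & (b ^ c))        # alternative 3
    assert maj == (b & c) ^ (b & d) ^ (c & d)    # alternative 4
print("all alternative formulations agree")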
It was also shown that for rounds 32–79 the computation of:
w[i] = (w[i-3] xor w[i-8] xor w[i-14] xor w[i-16]) leftrotate 1
can be replaced with:
w[i] = (w[i-6] xor w[i-16] xor w[i-28] xor w[i-32]) leftrotate 2
This transformation keeps all operands 64-bit aligned and, by removing the dependency of w[i] on w[i-3], allows an efficient SIMD implementation with a vector length of four, such as with x86 SSE instructions.
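An informal way to confirm this equivalence is to expand both recurrences from the same 16-word block and compare rounds 32–79; it follows from the fact that rotation distributes over exclusive-or. A short Python sketch of such a check (the helper names are illustrative) is shown below:

import random

def rotl32(x, n):
    # 32-bit left rotation.
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def check_schedule_identity(block16):
    # Standard schedule: w[i] = rotl1(w[i-3] ^ w[i-8] ^ w[i-14] ^ w[i-16]).
    w = list(block16)
    for i in range(16, 80):
        w.append(rotl32(w[i-3] ^ w[i-8] ^ w[i-14] ^ w[i-16], 1))
    # The rotate-by-2 recurrence must reproduce the same words for rounds 32-79.
    for i in range(32, 80):
        assert w[i] == rotl32(w[i-6] ^ w[i-16] ^ w[i-28] ^ w[i-32], 2)

check_schedule_identity([random.getrandbits(32) for _ in range(16)])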
Comparison of SHA functions
In the table below, internal state means the "internal hash sum" after each compression of a data block.
Implementations
Below is a list of cryptography libraries that support SHA-1:
Botan
Bouncy Castle
cryptlib
Crypto++
Libgcrypt
Mbed TLS
Nettle
LibreSSL
OpenSSL
GnuTLS
Hardware acceleration is provided by the following processor extensions:
Intel SHA extensions: Available on some Intel and AMD x86 processors.
VIA PadLock
See also
Comparison of cryptographic hash functions
Hash function security summary
International Association for Cryptologic Research
Secure Hash Standard
Notes
References
Eli Biham, Rafi Chen, Near-Collisions of SHA-0, Cryptology ePrint Archive, Report 2004/146, 2004 (appeared on CRYPTO 2004), IACR.org
Xiaoyun Wang, Hongbo Yu and Yiqun Lisa Yin, Efficient Collision Search Attacks on SHA-0, Crypto 2005
Xiaoyun Wang, Yiqun Lisa Yin and Hongbo Yu, Finding Collisions in the Full SHA-1, Crypto 2005
Henri Gilbert, Helena Handschuh: Security Analysis of SHA-256 and Sisters. Selected Areas in Cryptography 2003: pp. 175–193
An Illustrated Guide to Cryptographic Hashes
A. Cilardo, L. Esposito, A. Veniero, A. Mazzeo, V. Beltran, E. Ayugadé, A CellBE-based HPC application for the analysis of vulnerabilities in cryptographic hash functions, High Performance Computing and Communication international conference, August 2010
External links
CSRC Cryptographic Toolkit – Official NIST site for the Secure Hash Standard
FIPS 180-4: Secure Hash Standard (SHS)
RFC 3174 (with sample C implementation)
Interview with Yiqun Lisa Yin concerning the attack on SHA-1
Explanation of the successful attacks on SHA-1 (3 pages, 2006)
Cryptography Research – Hash Collision Q&A
by Christof Paar
27097 | https://en.wikipedia.org/wiki/Star%20Trek%3A%20First%20Contact | Star Trek: First Contact | Star Trek: First Contact is a 1996 American science fiction film directed by Jonathan Frakes (in his motion picture directorial debut) and based on the Star Trek franchise. It is the eighth film in the Star Trek film series and the second to star the cast of Star Trek: The Next Generation. In the film, the crew of the USS Enterprise-E travel back in time from the 24th century to the mid-21st century to stop the cybernetic Borg from conquering Earth by changing their past.
After the release of Star Trek Generations in 1994, Paramount Pictures tasked writers Brannon Braga and Ronald D. Moore with developing the next film in the series. Braga and Moore wanted to feature the Borg in the plot, while producer Rick Berman wanted a story involving time travel. The writers combined the two ideas; they initially set the film during the European Renaissance, but changed the time period that the Borg corrupted to the mid-21st century, after fearing the Renaissance idea would be "too kitsch". After two better-known directors turned down the job, cast member Jonathan Frakes was chosen to direct to make sure the task fell to someone who understood Star Trek.
The film's script required the creation of new starship designs, including a new USS Enterprise. Production designer Herman Zimmerman and illustrator John Eaves collaborated to make a sleeker ship than its predecessor. Principal photography began with weeks of location shooting in Arizona and California, before production moved to new sets for the ship-based scenes. The Borg were redesigned to appear as though they were converted into machine beings from the inside out; the new makeup sessions took four times as long as those for their appearances on the television series. Effects company Industrial Light & Magic rushed to complete the film's special effects in less than five months. Traditional optical effects techniques were supplemented with computer-generated imagery. Jerry Goldsmith composed the film's score.
Star Trek: First Contact was released on November 22, 1996, and was the highest-grossing film on its opening weekend. It eventually made $92 million in the United States and Canada with an additional $54 million in other territories, combining to a worldwide total of $146 million. Critical reception was mostly positive; critics including Roger Ebert considered it to be one of the best Star Trek films, and it was the most positively reviewed film in the franchise (93% of reviews were positive) until being marginally surpassed (94%) by the 2009 reboot film. The Borg and the special effects were lauded, while the characterization was less evenly received. Scholarly analysis of the film has focused on Captain Jean-Luc Picard's parallels to Herman Melville's Ahab and the nature of the Borg. First Contact was nominated for the Academy Award for Best Makeup and won three Saturn Awards. It was followed by Star Trek: Insurrection in 1998.
Plot
In the 24th century, Captain Jean-Luc Picard awakens from a nightmare in which he relived his assimilation by the cybernetic Borg six years earlier. He is contacted by Admiral Hayes, who informs him of a new Borg threat against Earth. Picard's orders are for his ship, the USS Enterprise-E, to patrol the Neutral Zone in case of Romulan aggression; Starfleet is worried that Picard is too emotionally involved with the Borg to join the fight.
Learning the fleet is losing the battle, the Enterprise crew disobeys orders and heads for Earth, where a single Borg Cube ship holds its own against a group of Starfleet vessels. Enterprise arrives in time to assist the crew of the USS Defiant and its captain, the Klingon Worf. With the flagship now supporting them, Picard takes control of the fleet and directs the surviving ships to concentrate their firepower on a seemingly unimportant point on the Borg ship. The Cube is destroyed but launches a smaller sphere ship towards the planet. Enterprise pursues the sphere into a temporal vortex. As the sphere disappears, Enterprise discovers Earth has been altered – it is now populated entirely by Borg. Realizing the Borg have used time travel to change the past, Enterprise follows the sphere through the vortex.
Enterprise arrives hundreds of years in its past on April 4, 2063, the day before humanity's first encounter with alien life after Zefram Cochrane's historic warp drive flight, some time after the Earth had been devastated by the nuclear holocaust of World War III; the crew realizes the Borg are trying to prevent first contact. After destroying the Borg sphere, an away team transports down to Cochrane's ship, Phoenix, in Bozeman, Montana. Picard has Cochrane's assistant Lily Sloane sent back to Enterprise for medical attention. The captain returns to the ship and leaves Commander William T. Riker on Earth to make sure Phoenix's flight proceeds as planned. While in the future Cochrane is seen as a hero, the real man built the Phoenix for financial gain and is reluctant to be the heroic person the crew describes.
A group of Borg invade Enterprise's lower decks and begin to assimilate its crew and modify the ship. Picard and a team attempt to reach engineering to disable the Borg with a corrosive gas, but are forced back; the android Data is captured in the melee. A frightened Sloane corners Picard with a weapon, but he gains her trust. The two escape the Borg-infested area of the ship by creating a diversion in the holodeck. Picard, Worf, and the ship's navigator, Lieutenant Hawk, travel outside the ship in space suits to stop the Borg from calling reinforcements by using the deflector dish, but Hawk is assimilated in the process. As the Borg continue to assimilate more decks, Worf suggests destroying the ship, but Picard angrily calls him a coward, infuriating Worf because of his Klingon heritage. Sloane confronts the captain and makes him realize he is acting irrationally because of his own past as Locutus of Borg. Picard orders the activation of the ship's self-destruct, then orders the crew to head for the escape pods while he stays behind to rescue Data.
As Cochrane, Riker, and engineer Geordi La Forge prepare to activate the warp drive on Phoenix, Picard discovers that the Borg Queen has grafted human skin onto Data, giving him the sensation of touch he has long desired so that she can obtain the android's encryption codes to the Enterprise computer.
Although Picard offers himself to the Borg in exchange for Data's freedom, willing to become Locutus again, Data refuses to leave. He deactivates the self-destruct and fires torpedoes at Phoenix. At the last moment the torpedoes miss, and the Queen realizes Data betrayed her. The android ruptures a coolant tank, and the corrosive vapor eats away the biological components of the Borg. With the Borg threat neutralized, Cochrane completes his warp flight. The next day the crew watches from a distance as an alien Vulcan ship, attracted by the Phoenix warp test, lands on Earth. Cochrane and Sloane greet the aliens. Having ensured the correction of the timeline, the Enterprise crew slip away and return to the 24th century.
Cast
First Contact is the first film in the Star Trek film series in which none of the main characters from The Original Series appear. Rather, the main cast of Star Trek: The Next Generation play the following characters:
Patrick Stewart as Jean-Luc Picard, the captain of the USS Enterprise-E who is haunted by his time as a member of the Borg. Stewart was one of the few cast members who had an important role in developing the script, offering suggestions and comments. Picard's character was changed from the "angst-ridden character [viewers have] seen before", to an action hero type. Stewart noted that Picard was more physically active in the film compared to his usual depiction.
Jonathan Frakes as William T. Riker, the ship's first officer who leads the away team on Earth. Frakes said he did not have much difficulty directing and acting at the same time, having done so on the television series.
Brent Spiner as Data, an android and the ship's second officer, who endeavors to become human. Rumors before the film's release suggested that since Data's skin had been largely removed at the end of the story, it would allow another actor to assume the role.
LeVar Burton as Geordi La Forge, the ship's chief engineer who helps repair the Phoenix. La Forge was born blind, and for the television series and previous film had worn a special VISOR to see. Burton lobbied for many years to have his character's visor replaced so that people could see his eyes, since the "air filter" he wore prevented the audience from seeing his eyes and limited his acting ability. Moore finally agreed, giving the character ocular implants that were never explained in the film, beyond showing they were artificial.
Michael Dorn as Worf, the commander of the USS Defiant and Picard's former chief of security.
Gates McFadden as Beverly Crusher, the ship's doctor. In an interview before the film's premiere, McFadden said she considered women finally on par with the men in Star Trek: "We've come a long way since Majel Barrett was stuck in the sick bay as Nurse Chapel in the [1960s] and made to dye her hair blond."
Marina Sirtis as Deanna Troi, counselor aboard the Enterprise. Sirtis missed working on the television show, and was acutely aware that expectations and stakes for First Contact were high; "we were scared that people thought we couldn't cut it without the original cast", she said.
Alfre Woodard as Lily Sloane, Cochrane's assistant. When Frakes first moved to Los Angeles, Woodard was one of the first people he met. During a conversation at a barbecue Woodard said she would become Frakes' godmother, as he did not have one. Through this relationship, Frakes was able to cast Woodard in the film; he considered it a coup to land an Academy Award-nominated actress. Woodard considered Lily to be the character most like herself out of all the roles she has played.
James Cromwell as Zefram Cochrane, the pilot and creator of Earth's first warp capable vessel. The character of Zefram Cochrane had first appeared in The Original Series episode "Metamorphosis", played by Glenn Corbett. Cromwell's Cochrane is much older and has no resemblance to Corbett, which did not bother the writers. They wanted to portray Cochrane as a character going through a major transition; he starts out as a cynical, selfish drunk who is changed by the characters he meets over the course of the film. Although the character was written with Cromwell in mind, Tom Hanks, a big fan of Star Trek, was considered for the role by Paramount, though producer Rick Berman stated, "I’m sure his name was floated in some capacity, but it was never really on the table." Frakes commented that it would have been a mistake to cast Hanks as Cochrane due to his being so well known. Cromwell had a long previous association with Star Trek, having played characters in The Next Generation episodes "The Hunted" and "Birthright", as well as a role in Star Trek: Deep Space Nine. "[Cromwell] actually came in and read for the part", Frakes said. "He nailed it." Cromwell described his method of portraying Cochrane as always playing himself. Part of the actor's interest in the film was his involvement in Steven M. Greer's Center for the Study of Extraterrestrial Intelligence, which offers training for first contact scenarios.
Alice Krige as the Borg Queen, the controller of the cybernetic collective. Casting for the part took time as the actress needed to be sexy, dangerous, and mysterious. Frakes cast Krige after finding that she had all of the mentioned qualities, and being impressed by her performance in Ghost Story; the director considers her the sexiest Star Trek villain of all time. Krige suffered a large amount of discomfort filming her role; her costume was too tight, causing blisters, and the painful silver contact lenses she wore could only be kept in for four minutes at a time.
The film also introduced the voice of the Borg character, played by Jeff Coopwood (Special Guests, 55 Year Mission Tour, Creation Entertainment, 2021), uttering the memorable line "Resistance is futile", which was also the film's tagline. The Borg's ominous warning was: "We are the Borg. Lower your shields and surrender your ships. We will add your biological and technological distinctiveness to our own. Your culture will adapt to service us. Resistance is futile." (Kevin C. Neece and Brian Godawa, Science Fiction and the Abolition of Man: Finding C. S. Lewis in Sci-fi Film and Television, Pickwick Publications, 2017)
Several of The Next Generation's recurring characters also appeared in the film; Dwight Schultz reprised his role of Lieutenant Reginald Barclay and Patti Yasutake briefly appeared as Nurse Alyssa Ogawa. Whoopi Goldberg was not asked to return as Guinan, a wise bartender whose homeworld was destroyed by the Borg. Goldberg only learned about the decision through the newspapers. "What can I say? I wanted to do it because I didn't think you could do anything about the Borg without [my character]", she said, "but apparently you can, so they don't need me."
Michael Horton appears as a bloodied and stoic Starfleet Security Officer; his character would be given the name Lt. Daniels in the next Star Trek film. Neal McDonough plays Lt. Hawk, the Enterprise helmsman who aids in the defense of the ship until he is assimilated and killed. McDonough was cavalier about his role as a disposable "redshirt", saying that since one of the characters in the deflector dish battle had to die, "that would be me".
The third draft of the script added cameos by two actors from the sister television series Star Trek: Voyager, which was in its third season when the film was released. Robert Picardo appears as the Enterprise's Emergency Medical Hologram; Picardo played the holographic Doctor in Voyager. He won the cameo after suggesting to producers that the Enterprise should have the same technology as Voyager. Picardo's line "I'm a doctor, not a door stop" is an allusion to the Star Trek original series character Dr. Leonard McCoy. Picardo's fellow Voyager actor Ethan Phillips, who played Neelix, cameos as a nightclub maître d' in the holodeck scene. Phillips recalled that the producers wanted the fans to be left guessing whether he was the person who played Neelix or not, as he did not appear in the credits; "It was just kind of a goofy thing to do." During production, there were incorrect rumors that Avery Brooks would reprise his role as Star Trek: Deep Space Nine captain Benjamin Sisko. As with many Star Trek productions, new, disposable redshirt characters are killed off over the course of the plot.
Production
Development
In December 1992, Paramount Pictures executives approached Star Trek: The Next Generation producer Rick Berman and engaged him to create two films featuring the cast of the television series. Berman decided to develop two screenplays simultaneously, and prioritize the most promising one for the first film. The effort of writers Brannon Braga and Ronald D. Moore was chosen and developed into Star Trek Generations. Two months after the release of Generations, Paramount decided to produce the second feature for a winter holiday 1996 release. Paramount wanted Braga and Moore, who had written the Generations script and a number of Next Generation episodes, to pen the screenplay. Berman told Braga and Moore that he wanted them to think about doing a story involving time travel. Braga and Moore, meanwhile, wanted to use the Borg. "Right on the spot, we said maybe we can do both, the Borg and time travel," Moore recalled. The Borg had not been seen in full force since the fourth-season episode of The Next Generation, "The Best of Both Worlds", and had never been heavily featured in the series due to budget constraints and the fear that they would lose their scare factor. "The Borg were really liked by the fans, and we liked them," Moore said. "They were fearsome. They were unstoppable. Perfect foils for a feature story."
In deciding to combine the two story ideas, the writers determined that the time travel element could play out as the Borg attempting to prevent humanity from ever reaching space and becoming a threat. "Our goals at that point were to create a story that was wonderful and a script that was [...] producible within the budget confines of a Star Trek film", said Berman. One major question was identifying the time period to which the Borg would travel. Berman's suggestion was the Renaissance; the Borg would attempt to prevent the dawn of modern European civilization. The first story draft, titled Star Trek: Renaissance, had the crew of the Enterprise track the Borg to their hive in a castle dungeon. The film would have featured sword fights alongside phasers in 15th-century Europe, while Data became Leonardo da Vinci's apprentice. Moore was afraid that it risked becoming campy and over-the-top, while Stewart refused to wear tights. Braga, meanwhile, wanted to see the "birth of Star Trek", when the Vulcans and humans first met; "that, to me, is what made the time travel story fresh", he said.
With the idea of Star Trek's genesis in mind, the central story became Cochrane's warp drive test and humanity's first contact. Drawing on clues from previous Star Trek episodes, Cochrane was placed in mid-21st-century Montana, where humans recover from a devastating world war. In the first script with this setting, the Borg attack Cochrane's lab, leaving the scientist comatose; Picard assumes Cochrane's place to continue the warp test and restore history. In this draft Picard has a love interest in the local photographer Ruby, while Riker leads the fight against the Borg on the Enterprise. Another draft included John de Lancie's omnipotent character Q. Looking at the early scripts, the trio knew that serious work was needed. "It just didn't make sense [...] that Picard, the one guy who has a history with the Borg, never meets them," Braga recalled. Riker's and Picard's roles were swapped, and the planetside story was shortened and told differently. Braga and Moore focused the new arc on Cochrane himself, making the ideal future of Star Trek come from a flawed man. The idea of Borg fighting among period costumes coalesced into a "Dixon Hill" holographic novel sequence on the holodeck. The second draft, titled Star Trek: Resurrection, was judged complete enough that the production team used it to plan expenses. The film was given a budget of $45 million, "considerably more" than Generations' $35 million price tag; this allowed the production to plan a larger amount of action and special effects.
Braga and Moore intended the film to be easily accessible to any moviegoer and work as a stand-alone story, yet still satisfy the devoted Star Trek fans. Since much of Picard's role made a direct reference to his time as a Borg in The Next Generation episodes "The Best of Both Worlds", the opening dream sequence was added to explain what happened to him in the show. The pair discarded an opening which would have established what the main characters had been doing since the last film in favor of quickly setting the story. While the writers tried to preserve the idea of the Borg as a mindless collective in the original draft, Paramount head Jonathan Dolgen felt that the script was not dramatic enough. He suggested adding an individual Borg villain with whom the characters could interact, which led to the creation of the Borg Queen.
Cast member Frakes was chosen to direct. Frakes had not been the first choice for director; Ridley Scott and John McTiernan reportedly turned down the project. Stewart met a potential candidate and concluded that "they didn't know Star Trek". It was decided to stay with someone who understood the "gestalt of Star Trek", and Frakes was given the job. Frakes reported to work every day at 6:30 am. A major concern during the production was security—the script to Generations had been leaked online, and stronger measures were taken to prevent a similar occurrence. Some script pages were distributed on red paper to foil attempted photocopies or faxes; "We had real trouble reading them," Frakes noted.
Frakes had directed multiple episodes of The Next Generation, Deep Space Nine and Voyager, but First Contact was his first feature film. Whereas Frakes had seven days of preparation followed by seven days of shooting for a given television episode, the director was given a ten-week preparation period before twelve weeks of filming, and had to get used to shooting for a 2.35:1 anamorphic ratio instead of the television standard 1.33:1. In preparation, he watched Jaws, Close Encounters of the Third Kind, 2001: A Space Odyssey and the works of James Cameron and Ridley Scott.
Throughout multiple script revisions a number of titles were considered, including Star Trek: Borg, Star Trek: Destinies, Star Trek: Future Generations and Star Trek: Generations II. The planned title of Resurrection was scrapped when 20th Century Fox announced the title of the fourth Alien film as Alien Resurrection; the film was rebranded First Contact on May 3, 1996.
Design
First Contact was the first Star Trek film to make significant use of computer-generated starship models, though physical miniatures were still used for the most important vessels. With the Enterprise-D destroyed during the events of Generations, the task of creating a new starship fell to veteran Star Trek production designer Herman Zimmerman. The script's only guide on the appearance of the vessel was the line "the new Enterprise sleekly comes out of the nebula". Working with illustrator John Eaves, the designers conceived the new Sovereign-class Enterprise-E as "leaner, sleeker, and mean enough to answer any Borg threat you can imagine". Braga and Moore intended it to be more muscular and militaristic. Eaves looked at the structure of previous Enterprise iterations, and designed a more streamlined, capable war vessel than the Enterprise-D, reducing the neck area of the ship and lengthening the nacelles. Eaves produced 30 to 40 sketches before he found a final design he liked and began making minor changes. Working from blueprints created by Paramount's Rick Sternbach, the model shop at effects house Industrial Light & Magic (ILM) fabricated a miniature over a five-month period. Hull patterns were carved out of wood, then cast and assembled over an aluminum armature. The model's panels were painted in an alternating matte and gloss scheme to add texture. The crew had multiple difficulties in prepping the miniature for filming; while the model shop originally wanted to save time by casting windows using a clear fiberglass, the material came out tacky. ILM instead cut the windows using a laser. Slides of the sets were added behind the window frames to make the interior seem more dimensional when the camera tracked past the ship.
In previous films, Starfleet's range of capital ships had been predominantly represented by the Constitution-class Enterprise and just five other ship classes: the Miranda class from Star Trek II: The Wrath of Khan (represented by the USS Reliant), the Excelsior and the Oberth-class Grissom from Star Trek III: The Search for Spock, and the Galaxy and Nebula classes from The Next Generation. ILM supervisor John Knoll insisted that First Contact's space battle prove the breadth of Starfleet's ship configurations. "Starfleet would probably throw everything it could at the Borg, including ships we've never seen before", he reasoned. "And since we figured a lot of the background action in the space battle would need to be done with computer-generated ships that needed to be built from scratch anyway, I realized there was no reason not to do some new designs." Alex Jaeger was appointed visual effects art director to the film and assigned the task of creating four new starships. Paramount wanted ships that would look different from a distance, so the director devised multiple hull profiles. Knoll and Jaeger had decided that the ships had to obey certain Star Trek ship precedents, with a saucer-like primary hull and elongated warp nacelles in pairs. The Akira class featured the traditional saucer section and nacelles combined with a catamaran-style double hull; the Norway class was based on the USS Voyager; the Saber class was a smaller ship with nacelles trailing off the tips of its saucer section; and the Steamrunner class featured twin nacelles trailing off the saucer and connected by an engineering section in the rear. Each design was modeled as a three-dimensional digital wire-frame model for use in the film.
The film also required a number of smaller non-Starfleet designs. The warp ship Phoenix was conceived as fitting inside an old nuclear missile, meaning that the ship's nacelles had to fold into the missile's narrow confines. Eaves made sure to emphasize the mechanical aspect of the ship, to suggest it was a highly experimental and untested technology. The Phoenix's cockpit labels came from McDonnell-Douglas space shuttle manuals. Eaves considered the Vulcan ship a "fun" vessel to design. Only two major Vulcan ships had been previously seen in Star Trek, including a courier vessel from The Motion Picture. Since the two-engine ship format had been seen many times, the artists decided to step away from the traditional ship layout, creating a more artistic than functional design. The ship incorporated elements of a starfish and a crab. Because of budget constraints, the full ship was realized as a computer-generated design. Only a boomerang-shaped landing foot was fabricated for the actors to interact with.
The Enterprise interior sets were mostly new designs. The bridge was designed to be comfortable-looking, with warm colors. Among the new additions was a larger holographic viewscreen that would operate only when activated, leaving a plain wall when disabled. New flatscreen computer monitors were used for displays, using less space and giving the bridge a cleaner look. The new monitors also allowed for video playback that could simulate interaction with the actors. The designers created a larger and less-spartan ready room, retaining elements from the television series; Zimmerman added a set of golden three-dimensional Enterprise models to a glass case in the corner. The observation lounge was similar to the design in the Enterprise-D; the set itself was re-used from the television show, the only such set not to be struck following the filming of Generations, though it was expanded and underwent a color change. Engineering was simulated with a large, three-story set, corridors, a lobby, and the largest warp core in the franchise to date. For its Borg-corrupted state, the engineering section was outfitted with Borg drone alcoves, conduits and Data's "assimilation table" where he is interrogated by the Queen. Some existing sets were used to save money; sickbay was a redress of the same location from Voyager, while the USS Defiant scenes used Deep Space Nine's standing set. Some set designs took inspiration from the Alien film series, Star Wars and 2001: A Space Odyssey.
The spacewalk scene on the Enterprise exterior was one of the most challenging sets to envision and construct for the film. The production had to design a space suit that looked practical rather than exaggerated. Fans were built into the helmets so that the actors would not get overheated, and neon lights built into the front so that the occupant's faces could be seen. When the actors first put the helmets on, the fully enclosed design made it hard to breathe; after a minute of wearing the suit Stewart became ill, and shooting was discontinued. The set for the ship's outer hull and deflector dish were built on gimbals at Paramount's largest sound stage, surrounded by bluescreen and rigged with wires for the zero gravity sequences. The stage was not large enough to accommodate a full-sized replica of the Enterprise dish, so Zimmerman had to scale down the plans by 15 percent.
Costumes and makeup
The Starfleet uniforms were redesigned for the film by longtime Star Trek costumer Bob Blackman to give a more militaristic feel, with grey padded shoulders, black torso/sleeves/leggings and colored undershirts/stripe cuffs. The new uniforms from this film were later adopted in the fifth season of Star Trek: Deep Space Nine, beginning with "Rapture" and for the rest of the series, but the crew on Star Trek: Voyager continued to use the old DS9 uniforms, due to being stuck in the Delta Quadrant. Since Blackman was also handling the costumes for the television series, non-Starfleet design clothes were delegated to Deborah Everton, a newcomer to Star Trek who was responsible for more than 800 costumes during production. Everton was tasked with updating the Borg's costumes to something new, but reminiscent of the television series. The bulky suits were made sleeker and outfitted with fiber optic lights. The time-travel aspect of the story also required period costumes for the mid 21st century and the 1940s "Dixon Hill" nightclub holodeck recreation. Everton enjoyed designing Woodard's costumes because the character went through many changes during the course of the film, switching from a utilitarian vest and pants in many shots to a glamorous dress during the holodeck scene.
Everton and makeup designers Michael Westmore, Scott Wheeler, and Jake Garber wanted to upgrade the pasty white look the Borg had retained since The Next Generations second season, born out of a need for budget-conscious television design. "I wanted it to look like they were [assimilated or "Borgified"] from the inside out rather than the outside in," Everton said. Each Borg had a slightly different design, and Westmore designed a new one each day to make it appear that there was an army of Borg; in reality, between eight and twelve actors filled all the roles as the costumes and makeup were so expensive to produce. Background Borg were simulated by half-finished mannequins. Westmore reasoned that since the Borg had traveled the galaxy, they would have assimilated other races besides humans. In the television series, much of the Borg's faces had been covered by helmets, but for First Contact the makeup artist removed the head coverings and designed assimilated versions of familiar Star Trek aliens such as Klingons, Bolians, Romulans, Bajorans, and Cardassians. Each drone received an electronic eyepiece. The blinking lights in each eye were programmed by Westmore's son to repeat a production member's name in Morse code.
The makeup time for the Borg expanded from the single hour needed for television to five hours, in addition to the 30 minutes necessary to get into costume and 90 minutes to remove the makeup at the end of the day. While Westmore estimated that a fully staffed production would have around 50 makeup artists, First Contact had to make do with fewer than ten people involved in preparation, and at most 20 artists a day. Despite the long hours, Westmore's teams began to be more creative with the prosthetics even as they decreased their preparation times. "They were using two tubes, and then they were using three tubes, and then they were sticking tubes in the ears and up the nose," Westmore explained. "And we were using a very gooey caramel coloring, maybe using a little bit of it, but by the time we got to the end of the movie we had the stuff dripping down the side of [the Borg's] faces—it looked like they were leaking oil! So, at the very end [of the film], they're more ferocious."
The Borg Queen was a challenge because she had to be unique among Borg but still retain human qualities; Westmore was conscious of avoiding comparisons to films like Alien. The final appearance involved pale gray skin and an elongated, oval head, with coils of wire rather than hair. Krige recalled the first day she had her makeup applied: "I saw everyone cringing. I thought, great; they made this, and they've scared themselves!" Frakes noted that the Queen ended up being alluring in a disturbing way, despite her evil behavior and appearance. Zimmerman, Everton and Westmore combined their efforts to design and create the Borgified sections of the Enterprise to build tension and to make the audience feel that "[they are being fed] the Borg".
Filming
Principal photography took a more leisurely pace than on The Next Generation because of a less hectic schedule; only four pages of script had to be filmed each day, as opposed to eight on the television series. First Contact saw the introduction of cinematographer Matthew F. Leonetti to the Star Trek franchise; Frakes hired him out of admiration for some of his previous work on films such as Poltergeist and Strange Days. Leonetti was unfamiliar with the Star Trek mythos when Frakes approached him; to prepare for the assignment, he studied the previous four films in the franchise, each with a different cinematographer—The Voyage Home (Donald Peterman), The Final Frontier (Andrew Laszlo), The Undiscovered Country (Hiro Narita), and Generations (John Alonzo). The cameraman also spent several days at the sets of Voyager and Deep Space Nine to observe filming.
Leonetti devised multiple lighting methods for the Enterprise interiors for ship standard operations, "Red alert" status, and emergency power. He reasoned that since the ship was being taken over by a foreign entity, it required more dramatic lighting and framing. While much of the footage was shot at 50–70 mm focal lengths using anamorphic lenses, 14 mm spherical lenses were used for Borg's-eye-view shots. Leonetti preferred shooting with long lenses to provide a more claustrophobic feel, but made sure the length did not flatten the image. Handheld cameras were used for battle sequences so that viewers were brought into the action and the camera could follow the movements of the actors. The Borg scenes were received positively by test-screening audiences, so once the rest of the film had been completed, a Borg assimilation scene of the Enterprise crew was added in using some of the money left in the budget to add action.
Since so many new sets had to be created, the production commenced filming with location photography. Four days were spent in the Titan Missile Museum, south of Tucson, Arizona—the disarmed nuclear missile was fitted with a fiberglass capsule shell to stand in for the Phoenix's booster and command module. The old missile silo provided a large set that the budget would have prohibited building from scratch, but the small size created difficulties. Each camera move was planned in advance to work around areas where the lighting would be added, and electricians and grips donned rock-climbing harnesses to move down the shaft and attach the lights. To give greater dimension to the rocket and lend the missile a futuristic appearance, Leonetti chose to offset the missile's metallic surface with complementary colors. Using different-colored gels made the rocket appear longer than it actually was; to complete the effect, shots from the Phoenix's nose downwards and from the engines up were filmed with a 30 mm lens to lengthen the missile.
After the completion of the Phoenix shots, the crew moved to two weeks of nighttime shooting in the Angeles National Forest. Zimmerman created a village of fourteen huts to stand in for Montana; the cast enjoyed the scenes as a chance to escape their uniforms and wear "normal" clothes. The last location shoot was at an art deco restaurant in Los Angeles' Union Station, which stood in for the Dixon Hill holonovel; Frakes wanted a sharp contrast with the dark, mechanical Borg scenes. While the cinematographer wanted to shoot the scene in black and white, Paramount executives deemed the test footage "too experimental" and the idea was dropped. The site made using high-wattage lights impractical, so Leonetti opted to use dimmer master lights near the ceiling and took advantage of a large window to shine diffused lights through. To give the scene a black-and-white feel, Leonetti made sure to use light without any coloration. "I like creating separation with lighting as opposed to using color," he explained. "You can't always rely on color because the actor might start to melt into the background." By separating the backlights, Leonetti made sure that the principal actors stood out of the backdrop. The shoot used a ten-piece orchestra, 15 stuntmen, and 120 extras to fill the seats. Among the nightclub patrons were Braga, Moore, and the film's stunt coordinator, Ronnie Rondell.
After location shooting was completed, shooting on the new Engineering set began May 3. The set lasted less than a day in its pristine condition before it was "Borgified". Filming then proceeded to the bridge. During normal operation scenes, Leonetti chose to cast crosslighting on the principals; this required the ceiling of the set to be removed and lighting grids to be situated around the sides. These lights were then directed towards the actors' faces at 90-degree angles. The set was lined with window paneling backed by red lights, which would blink intermittently during red-alert status. These lights were supplemented by what Leonetti called "interactive light"; these were off-stage, red-gelled lights that cast flashing rims on the bridge set and heads of the crew. For the Borg intrusion, the lighting originated solely from instrument panels and red-alert displays. The fill light on these scenes was reduced so that the cast would pass through dark spots on the bridge and interiors out of the limited range of these sources. Small 30- and 50-watt lights were used to throw localized shafts of light onto the sets.
Next came the action sequences and the battle for the Enterprise, a phase the filmmakers dubbed "Borg Hell". Frakes directed the Borg scenes similar to a horror film, creating as much suspense as possible. To balance these elements he added more comedic elements to the Earth scenes, intended to momentarily relieve the audience of tension before building it up again. Leonetti reconfigured the lighting to reflect the takeover of the ship interiors. "When the ship gets Borgified, everything is changed into more of a squared-off, robotic look with sharp edges but rounded images," he explained. To give the corridor walls more shape, Leonetti lit them from underneath. Since the halls were so small and the ceilings would be visible in many of the shots, special attention was paid to hiding the light fixtures.
For the live-action spacewalk scenes, visual-effects supervisor Ronald B. Moore spent two weeks of bluescreen photography at the deflector set. Frakes regarded filming the scene to be the most tedious in the film because of the amount of preparation it took for each day's shoot. Since the rest of the Enterprise-E, as well as the backdrop of Earth, were to be added later in post-production, it became confusing to coordinate shots. Moore used a laptop with digital reproductions of the set to orient the crew and help Frakes understand what the finished shot would look like. A one-armed actor portrayed the Borg whose arm Worf slices off to accurately portray the effect intended, and the actors' shoes were fitted with lead weights to remind the actors they were to move slowly as if actually wearing gravity boots. McDonough recalled that he joined Stewart and Dorn in asking whether they could do the shots without the weights, as "they hired us because we are actors", but the production insisted on using them.
The last scene filmed was the film's first, Picard's Borg nightmare. One camera shot begins inside the iris of Picard's eyeball and pulls back to reveal the captain aboard a massive Borg ship. The shot continues to pull back and reveal the exterior of a Borg ship. The scene was inspired by a New York City production of Sweeney Todd: The Demon Barber of Fleet Street in which the stage surrounded the audience, giving a sense of realism. The shot was filmed as three separate elements merged with digital effects. The crew used a 50 mm lens to make it easier for the effects team to dissolve the closeup shots with the other elements. Starting from Stewart's eye, the camera pulled back a considerable distance, requiring the key light to increase in intensity up to 1,000 foot-candles so that there was enough depth to keep the eye sharp. The surface of the stage proved too uneven to accomplish the smooth dolly pullback required by the effects team, who needed a steady shot to blend a computer-generated version of Picard's eye with the pullback. The dolly track was raised off the stage floor and layered with pieces of double-thick birch plywood, chosen for its smooth finish. The set for the scene was expansive; gaps left by the dolly reveal were filled in later digitally. Principal photography finished on July 2, 1996, two days over schedule but still under budget. Shooting took a total of sixty days.
Effects
The majority of First Contact's effects were handled by Industrial Light & Magic under the supervision of John Knoll. Smaller effects sequences, such as phaser fire, computer graphics, and transporter effects, were delegated to a team led by visual-effects supervisor David Takemura. Accustomed to directing episodes for the television series, Frakes was frequently reminded by effects artist Terry Frazee to "think big, blow everything up". Most of the effects sequences were planned using low-resolution computer-generated animatics. These rough animated storyboards established length, action and composition, allowing the producers and director to ascertain how the sequences would play out before they were shot.
First Contact was the last film to feature a physical model of the Enterprise. For the ship's dramatic introduction, the effects team combined motion control shots of the Enterprise model with a computer-generated background. Sequence supervisor Dennis Turner, who had created Generations' energy ribbon and specialized in creating natural phenomena, was charged with creating the star cluster, modeled after the Eagle Nebula. The nebular columns and solid areas were modeled with basic wireframe geometry, with surface shaders applied to make the edges of the nebula glow. A particle render that ILM had devised for the earlier tornado film Twister was used to create a turbulent look within the nebula. Once the shots of the Enterprise had been captured, Turner inserted the ship into the computer-generated background and altered its position until the images matched up.
The opening beauty pass of the new Enterprise was the responsibility of visual-effects cinematographer Marty Rosenberg, who handled all the other miniatures, explosions, and some live-action bluescreen elements. Rosenberg had previously shot some of the Enterprise-D effects for Generations, but had to adjust his techniques for the new model; the cinematographer used a 50 mm lens instead of the 35 mm used for Generations because the smaller lens made the new Enterprise's dish appear stretched out. Knoll decided to shoot the model from above and below as much as possible; side views made the ship appear too flat and elongated. Rosenberg preferred motion-control passes of ships over computer-generated versions, as it was much easier to capture a high level of detail with physical models rather than trying to recreate it by computer graphics.
For the Borg battle, Knoll insisted on closeup shots that were near the alien vessel, necessitating a physical model. ILM layered their model with an additional five inches of etched brass over a glowing neon lightbox for internal illumination. To make the Borg vessel appear even larger than it was, Knoll made sure that an edge of it was facing the camera like the prow of a ship and that the Cube broke the edges of the frame. To give the Cube greater depth and texture, Rosenberg shot the vessel with harsher light. "I created this really odd, raking three-quarter backlight coming from the right or left side, which I balanced out with nets and a couple of little lights. I wanted it to look scary and mysterious, so it was lit like a point, and we always had the camera dutched to it; we never just had it coming straight at us," he said. Small lights attached to the Cube's surface helped to create visual interest and convey scale; the model was deliberately shot with a slow, determined pacing to contrast with the Federation ships engaged in battle with the Borg. The impact of Federation weaponry on the Borg Cube was simulated using a model of the Cube. The model had specific areas which could be blown up multiple times without damaging the miniature. For the final explosion of the Cube, Rosenberg shot ten Cube miniatures with explosive-packed lightweight skins. The Cubes were suspended from pipes sixty feet above the camera on the ground. Safety glass was placed over the lens to prevent damage, while the camera was covered with plywood to protect it from bits of plastic that rained down after each explosion. The smaller Borg sphere was a model that was shot separately from the Cube and digitally added in post-production. The time-travel vortex the Sphere creates was simulated with a rocket re-entry effect; bowshock forms in front of the ship, then streams backwards at high speed. Interactive lighting was played across the computer-generated Enterprise model for when the ship is caught in the time vortex.
The miniature Enterprise was again used for the spacewalk sequence. Even on the large model, it was hard to make the miniature appear realistic in extreme close-up shots. To make the pullback shot work, the camera had to be within one eighth of an inch from the model. Painter Kim Smith spent several days on a tiny area of the model to add enough surface detail for the close-up, but even then the focus was barely adequate. To compensate, the crew used a wider-angle lens and shot at the highest f-stop they could. The live-action scenes of the spacewalking crew were then digitally added. Wide shots used footage of photo doubles walking across a large bluescreen draped across ILM's parking lot at night.
ILM was tasked with imagining what the immediate assimilation of an Enterprise crewmember would look like. Jaeger came up with a set of cables that sprang from the Borg's knuckles and buried themselves in the crewmember's neck. Wormlike tubes would course through the victim's body and mechanical devices break the skin. The entire transformation was created using computer-generated imagery. The wormlike geometry was animated over the actor's face, then blended in with the addition of a skin texture over the animation. The gradual change in skin tone was simulated with shaders.
Frakes considered the entrance of the Borg Queen—when her head, shoulders, and steel spine are lowered by cables and attached to her body—as the "signature visual effect in the film". The scene was difficult to execute, taking ILM five months to finish. Jaeger devised a rig that would lower the actress on the set, and applied a prosthetic spine over a blue suit so that ILM could remove Krige's lower body. This strategy enabled the filmmakers to incorporate as many live-action elements as possible without resorting to further digital effects. To make the prosthetics appear at the proper angle when her lower body was removed, Krige extended her neck forward so it appeared in line with the spine. Knoll did not want it to seem that the Queen was on a hard, mechanical rig; "we wanted her to have the appropriate 'float'," he explained. Using separate motion control passes on the set, Knoll shot the lowering of the upper torso and a secondary sequence with Krige's entire body. A digital version of the Borg body suit was used for the lowering sequence, at which point the image was morphed back to the real shot of Krige's body. The animated claws of the suit were created digitally as well using a detailed model. As reference for the animators, the shot required Krige to realistically portray "the strange pain or satisfaction of being reconnected to her body".
Music
Film composer Jerry Goldsmith scored First Contact, his third Star Trek feature. Goldsmith wrote a sweeping main title which begins with Alexander Courage's Star Trek fanfare. Instead of composing a menacing theme to underscore the Borg, Goldsmith wrote a pastoral theme linked to humanity's hopeful first contact. The theme uses a four-note motif from Goldsmith's Star Trek V: The Final Frontier score, which appears in First Contact as a friendship theme and general thematic link (Bond, 156). A menacing march with touches of synthesizers was used to represent the Borg. In addition to composing new music, Goldsmith used music from his previous Star Trek scores, including his theme from The Motion Picture. The Klingon theme from the same film is used to represent Worf.
Because of delays with Paramount's The Ghost and the Darkness, the already-short four-week production schedule was cut to just three weeks. While Berman was concerned about the move, Goldsmith hired his son, Joel, to assist. The young composer provided additional music for the film, writing three cues based on his father's motifs and a total of 22 minutes of music. Joel used variations of his father's Borg music and the Klingon theme as Worf fights hand-to-hand (Joel said that he and his father decided to use the theme for Worf separately). When the Borg invade sickbay and the medical hologram distracts them, Joel wrote what critic Jeff Bond termed "almost Coplandesque" material of tuning strings and clarinet, but the cue was unused. While Joel composed many of the film's action cues, his father contributed to the spacewalk and Phoenix flight sequences. During the fight on the deflector dish, Goldsmith used low-register electronics punctuated by stabs of violent, dissonant strings.
In a break with Star Trek film tradition, the soundtrack incorporated two licensed songs: Roy Orbison's "Ooby Dooby" and Steppenwolf's "Magic Carpet Ride". GNP Crescendo president Neil Norman explained that the decision to include the tracks was controversial, but said that "Frakes did the most amazing job of integrating those songs into the story that we had to use them".
GNP released the First Contact soundtrack on December 2, 1996. The album contained 51 minutes of music, with 35 minutes of Jerry Goldsmith's score, 10 minutes of additional music by Joel Goldsmith, "Ooby Dooby" and "Magic Carpet Ride". The compact disc shipped with CD-ROM features only accessible if played on a personal computer, including interviews with Berman, Frakes, and Goldsmith.
On April 2, 2012, GNP Crescendo Records announced a limited-edition collector's CD featuring the complete score by Jerry Goldsmith (with additional music by Joel Goldsmith), newly remastered by recording engineer Bruce Botnick, with an accompanying 16-page booklet including informative notes by Jeff Bond and John Takis. The expanded album [GNPD 8079] runs 79 minutes and includes three tracks of alternates.
Themes
Frakes believes that the main themes of First Contact—and Star Trek as a whole—are loyalty, friendship, honesty, and mutual respect. This is evident in the film when Picard chooses to rescue Data rather than evacuate the ship with the rest of the crew. The film draws a direct comparison between Picard's hatred of the Borg and his refusal to destroy the Enterprise and Captain Ahab's obsessive pursuit of the whale in Herman Melville's novel Moby-Dick. The comparison marks a turning point in the film, as Picard changes his mind, symbolized by his putting down his phaser. A similar Moby-Dick reference was made in Star Trek II: The Wrath of Khan, and although Braga and Moore did not want to repeat it, they decided it worked so well they could not leave it out.
In First Contact, the individually inscrutable and faceless Borg fulfil the role of the similarly unreadable whale in Melville's work. Picard, like Ahab, has been hurt by his nemesis, and author Elizabeth Hinds said it makes sense that Picard should "opt for the perverse alternative of remaining on board ship to fight" the Borg rather than take the only sensible option left, to destroy the ship. Several lines in the film refer to the 21st-century characters as primitive, with the people of the 24th century having evolved into a more utopian society. In the end, it is Lily (the 21st-century woman) who shows Picard (the 24th-century man) that his quest for revenge is exactly the kind of primitive behavior humanity is supposed to have outgrown. Lily's words cause Picard to reconsider, and he quotes Ahab's words of vengeance, recognizing the death wish embedded therein.
The nature of the Borg in First Contact has been the subject of critical discussion. Author Joanna Zylinska notes that while other alien species are tolerated by humanity in Star Trek, the Borg are viewed differently because of their cybernetic alterations and the loss of freedom and autonomy. Members of the crew who are assimilated into the Collective are subsequently viewed as "polluted by technology" and less than human. Zylinska draws comparisons between the technological distinction of humanity and machine in Star Trek and the work of artists such as Stelarc. Oliver Marchart drew parallels between the Borg's combination of many into an artificial One and Thomas Hobbes's concept of the Leviathan. The nature of perilous first contact between species, as represented by films such as Independence Day, Aliens and First Contact, is a marriage of classic fears of national invasion and the loss of personal identity.
Release
1996 marked the 30th anniversary of the Star Trek franchise. The franchise was on rocky ground; Deep Space Nine and Voyager had shed millions of viewers and been bested by Hercules: The Legendary Journeys as the highest-rated syndicated series. Some fans remained upset that Paramount had cancelled The Next Generation at the height of its popularity, and Generations was a commercial success but not critically praised. First Contact was heavily marketed, to an extent not seen since the release of Star Trek: The Motion Picture in 1979. Several novelizations of the film were written for different age groups. Playmates Toys produced six and nine-inch action figures in addition to ship models and a phaser. Two "making of" television specials premiered on HBO and the Sci-Fi Channel, as well as being promoted during a 30th-anniversary television special on UPN. The theatrical trailer to the film was included on a Best of Star Trek music compilation, released at the same time as the First Contact soundtrack. Simon & Schuster Interactive developed a Borg-themed video game for Macintosh and Windows personal computers. The game, Star Trek: Borg, functioned as an interactive movie with scenes filmed at the same time as First Contact production. A video game adaptation of the film was also announced by Spectrum HoloByte, and would have taken the form of a real-time strategy game set entirely on the Enterprise during the Borg takeover, though it was never released. Paramount heavily marketed the film on the internet via a First Contact web site, which averaged 4.4 million hits a week during the film's opening run, the largest amount of traffic ever on a motion-picture site.
The film premiered on November 18, 1996, at Mann's Chinese Theater in Hollywood, Los Angeles. The main cast save Spiner were in attendance, as were Moore, Braga, Jerry Goldsmith, and producer Marty Hornstein. Other Star Trek actors present included DeForest Kelley, René Auberjonois, Avery Brooks, Colm Meaney, Armin Shimerman, Terry Farrell, Kate Mulgrew, Roxann Dawson, Jennifer Lien, Robert Duncan McNeill, Ethan Phillips, Tim Russ, Garrett Wang and Robert Picardo. After the screening, 1,500 guests crossed the street to the Hollywood Colonnade, where the interiors had been dressed to match settings from the film: the holodeck nightclub, part of the bridge, a "star room", the Borg hive and the "crash 'n' burn lounge". The film received a royal premiere in the United Kingdom, with the first screening attended by Charles, Prince of Wales.
Box office
First Contact opened in 2,812 theaters beginning November 22, grossing $30.7 million its first week and making it the top movie at the US box office. The film was knocked out of the top place the following week by 101 Dalmatians, earning $25.5 million. The film went on to gross $77 million in its first four weeks, remaining in the top-ten box office during that time. It closed with a US & Canadian gross of $92,027,888 and an international gross of $54 million for a total of $146 million worldwide. First Contact opened in Britain on December 13, 1996, at number two and was the first Star Trek film not to reach number one in that market since The Wrath of Khan. It was still a box office success, earning £8,735,340 to become the highest grossing film in the series in that territory until the release of the Star Trek reboot film in 2009. The film was the best-performing Star Trek film in international markets until 2009's Star Trek film, and Paramount's best showing in markets such as New Zealand, making $315,491 from 28 sites by year's end.
Critical response
First Contact garnered positive reviews on release. Ryan Gilbey of The Independent considered the film wise to dispense with the cast of The Original Series: "For the first time, a Star Trek movie actually looks like something more ambitious than an extended TV show," he wrote. Conversely, critic Bob Thompson felt that First Contact was more in the spirit of the 1960s television series than any previous installment. The Globe and Mail's Elizabeth Renzetti said that First Contact succeeded in improving on the "stilted" previous entry in the series, and that it featured a renewed interest in storytelling. Kenneth Turan of the Los Angeles Times wrote, "First Contact does everything you'd want a Star Trek film to do, and it does it with cheerfulness and style." Adrian Martin of The Age noted that the film was geared towards pleasing fans: "Strangers to this fanciful world first delineated by Gene Roddenberry will just have to struggle to comprehend as best they can," he wrote, but "cult-followers will be in heaven". The New York Times' Janet Maslin said that the "film's convoluted plot will boggle all but hard-core devotees" of the series, while Variety's Joe Leydon wrote that the film did not require intimate knowledge of the series and that fans and non-fans alike would enjoy the film. While Renzetti considered the lack of old characters from the previous seven movies a welcome change, Maslin said that without the original stars, "The series now lacks [...] much of its earlier determination. It has morphed into something less innocent and more derivative than it used to be, something the noncultist is ever less likely to enjoy." Conversely, Roger Ebert called First Contact one of the best Star Trek films, and James Berardinelli found the film the most entertaining Star Trek feature in a decade; "It has single-handedly revived the Star Trek movie series, at least from a creative point of view," he wrote.
The film's acting met with mixed reception. Lisa Schwarzbaum of Entertainment Weekly appreciated that guest stars Woodard and Cromwell were used in "inventive contrast" to their better-known images, as a "serious dramatic actress" and "dancing farmer in Babe", respectively. Lloyd Rose of The Washington Post felt that while Woodard and Cromwell managed to "take care of themselves", Frakes' direction of other actors was not inspired; Steve Persall of the St. Petersburg Times opined that only Cromwell received a choice role in the film, "so he steals the show by default". A couple of reviews noted that Data's interactions with the Borg Queen were among the most interesting parts of the film; critic John Griffin credited Spiner's work as providing "ambivalent frisson" to the feature. Empire magazine's Adam Smith wrote that some characters, particularly Troi and Crusher, were lost or ignored, and that the rapid pacing of the film left no time for those unfamiliar with the series to know or care about the characters. Likewise, Emily Carlisle of the BBC praised Woodard's, Spiner's, and Stewart's performances, but felt the film focused more on action than characterization. Stewart, whom Thompson and Renzetti considered overshadowed by William Shatner in the previous film, received praise from Richard Corliss of Time: "As Patrick Stewart delivers [a] line with a majestic ferocity worthy of a Royal Shakespeare Company alumnus, the audience gapes in awe at a special effect more imposing than any ILM digital doodle. Here is real acting! In a Star Trek film!"
The special effects were generally praised. Jay Carr of The Boston Globe said that First Contact successfully updated Star Trek creator Gene Roddenberry's concept with more elaborate effects and action. Thompson's assessment mirrored Carr's; he agreed that the film managed to convey much of the original 1960s television show, and contained enough "special effects wonders and interstellar gunplay" to sate all types of viewers. Ebert wrote that while previous films had often looked "clunky" in the effects department, First Contact benefited from the latest in effects technology. A dissenting opinion was offered by Scott, who wrote that aside from the key effects sequences, Frakes "aims to distract Trekkers from the distinctly cheap-looking remainder".
Critics reacted favorably to the Borg, describing them as akin to creatures from Hellraiser. Renzetti credited them with breathing "new life" into the crew of the Enterprise while simultaneously trying to kill them. The Borg Queen received special attention for her combination of horror and seduction; Ebert wrote that while the Queen "looks like no notion of sexy I have ever heard of", he was inspired "to keep an open mind". Carr said, "She proves that women with filmy blue skin, lots of external tubing and bad teeth can be sleekly seductive."
Accolades
Home media
Star Trek: First Contact was first released on VHS in late 1997 as one of several titles expected to boost sluggish sales at video retailers. A LaserDisc version was also released. First Contact was among the first titles announced for the DVD-alternative rental system Digital Video Express in 1998. It was launched with five other test titles in the select markets of Richmond and San Francisco.
When Paramount announced its first slate of DVD releases in August 1998, First Contact was one of the first ten titles released in October, announced in a conscious effort to showcase effects-driven films. This version contained the feature and two trailers, but no other special features. The film was presented in its original 2.35:1 anamorphic aspect ratio, with a surround sound Dolby Digital 5.1 audio mix.
A First Contact "Special Collector's Edition" two-disc set was released in 2005 at the same time as three other Next Generation films and Star Trek: Enterprises fourth season, marking the first time that every film and episode of the franchise was available on home video up to that point. In addition to the feature, presented with the same technical specifications as the previous release and a new DTS soundtrack, the first disc contains a director's commentary by Frakes and a track by Moore and Braga. As with other special-edition DVD releases, the disc includes a text track by Michael and Denise Okuda that provides production trivia and relevant facts about the Star Trek universe. The second disc contains six making-of featurettes, storyboards, and trailers.
Paramount announced that all four Next Generation films would be released on high-definition Blu-ray on September 22, 2009. In addition to the returning DVD extras, new special features for the Blu-ray version of First Contact include "Scene Deconstruction" featurettes and a new commentary by Star Trek (2009) co-producer Damon Lindelof and TrekMovie.com contributor Anthony Pascale.
References
Notes
Bibliography
External links
1996 films
American films
1990s English-language films
1990s science fiction action films
American science fiction action films
American space adventure films
American sequel films
Android (robot) films
Cyborg films
American films about revenge
Films about time travel
Films set in the 2060s
Films set in the 24th century
Films set in the future
Films set in Montana
Films shot in California
Films shot in Arizona
First Contact
Paramount Pictures films
Films scored by Jerry Goldsmith
Films scored by Joel Goldsmith
Films directed by Jonathan Frakes
Films produced by Rick Berman
Films with screenplays by Rick Berman
Films with screenplays by Brannon Braga
Films with screenplays by Ronald D. Moore
Articles containing video clips
1996 directorial debut films |
27661 | https://en.wikipedia.org/wiki/Source%20code | Source code | In computing, source code is any collection of code, with or without comments, written using a human-readable programming language, usually as plain text. The source code of a program is specially designed to facilitate the work of computer programmers, who specify the actions to be performed by a computer mostly by writing source code. The source code is often transformed by an assembler or compiler into binary machine code that can be executed by the computer. The machine code might then be stored for execution at a later time. Alternatively, source code may be interpreted and thus immediately executed.
Most application software is distributed in a form that includes only executable files. If the source code were included it would be useful to a user, programmer, or a system administrator, any of whom might wish to study or modify the program.
Definitions
The Linux Information Project defines source code as:
Source code (also referred to as source or code) is the version of software as it is originally written (i.e., typed into a computer) by a human in plain text (i.e., human readable alphanumeric characters).
The notion of source code may also be taken more broadly, to include machine code and notations in graphical languages, neither of which are textual in nature. An example from an article presented at the annual IEEE International Working Conference on Source Code Analysis and Manipulation:
For the purpose of clarity "source code" is taken to mean any fully executable description of a software system. It is therefore so construed as to include machine code, very high level languages and executable graphical representations of systems.
Often there are several steps of program translation or minification between the original source code typed by a human and an executable program. While some, like the FSF, argue that an intermediate file "is not real source code and does not count as source code", others find it convenient to refer to each intermediate file as the source code for the next steps.
History
The earliest programs for stored-program computers were entered in binary through the front panel switches of the computer. This first-generation programming language had no distinction between source code and machine code.
When IBM first offered software to work with its machine, the source code was provided at no additional charge. At that time, the cost of developing and supporting software was included in the price of the hardware. For decades, IBM distributed source code with its software product licenses, until 1983.
Most early computer magazines published source code as type-in programs.
Occasionally the entire source code to a large program is published as a hardback book, such as Computers and Typesetting, vol. B: TeX, The Program by Donald Knuth, PGP Source Code and Internals by Philip Zimmermann, PC SpeedScript by Randy Thompson, and µC/OS, The Real-Time Kernel by Jean Labrosse.
Organization
The source code which constitutes a program is usually held in one or more text files stored on a computer's hard disk; usually, these files are carefully arranged into a directory tree, known as a source tree. Source code can also be stored in a database (as is common for stored procedures) or elsewhere.
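As a small illustration of the source-tree idea, the following Python sketch walks a directory tree and collects the C source files that would make up such a project; the ./src directory name is a hypothetical example.

from pathlib import Path

# A minimal sketch: gather the C source files in a hypothetical
# project's source tree rooted at ./src (directory name assumed).
source_tree = Path("src")
for path in sorted(source_tree.rglob("*.c")):
    print(path)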
The source code for a particular piece of software may be contained in a single file or many files. Though the practice is uncommon, a program's source code can be written in different programming languages. For example, a program written primarily in the C programming language might have portions written in assembly language for optimization purposes. It is also possible for some components of a piece of software to be written and compiled separately, in an arbitrary programming language, and later integrated into the software using a technique called library linking. In some languages, such as Java, this can be done at run time (each class is compiled into a separate file that is linked by the interpreter at runtime).
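The runtime-integration case can be sketched with Python's ctypes module, which loads a separately compiled C library and calls into it; the library name libm.so.6 is a Linux-specific assumption and differs on other platforms.

import ctypes

# A minimal sketch (Linux assumed): load the separately compiled C math
# library and call its cos() function from interpreted source code.
libm = ctypes.CDLL("libm.so.6")        # library name is platform-specific
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double
print(libm.cos(0.0))                   # prints 1.0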
Yet another method is to make the main program an interpreter for a programming language, either designed specifically for the application in question or general-purpose and then write the bulk of the actual user functionality as macros or other forms of add-ins in this language, an approach taken for example by the GNU Emacs text editor.
The code base of a computer programming project is the larger collection of all the source code of all the computer programs which make up the project. It has become common practice to maintain code bases in version control systems. Moderately complex software customarily requires the compilation or assembly of several, sometimes dozens or maybe even hundreds, of different source code files. In these cases, instructions for compilations, such as a Makefile, are included with the source code. These describe the programming relationships among the source code files and contain information about how they are to be compiled.
Purposes
Source code is primarily used as input to the process that produces an executable program (i.e., it is compiled or interpreted). It is also used as a method of communicating algorithms between people (e.g., code snippets in books).
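As a minimal, hedged illustration of that process, the following Python snippet compiles a fragment of source text into bytecode, displays the result, and then executes it.

import dis

# A minimal sketch: source text is compiled into CPython bytecode,
# the executable form the interpreter actually runs.
source = "total = sum(range(10))\nprint(total)"
code_object = compile(source, "<example>", "exec")
dis.dis(code_object)   # display the generated bytecode
exec(code_object)      # run it: prints 45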
Computer programmers often find it helpful to review existing source code to learn about programming techniques. The sharing of source code between developers is frequently cited as a contributing factor to the maturation of their programming skills. Some people consider source code an expressive artistic medium.
Porting software to other computer platforms is usually prohibitively difficult without source code. Without the source code for a particular piece of software, portability generally comes at a high cost. Possible porting options include binary translation and emulation of the original platform.
Decompilation of an executable program can be used to generate source code, either in assembly code or in a high-level language.
Programmers frequently adapt source code from one piece of software to use in other projects, a concept known as software reusability.
Legal aspects
The situation varies worldwide, but in the United States before 1974, software and its source code were not copyrightable and therefore always public domain software.
In 1974, the US Commission on New Technological Uses of Copyrighted Works (CONTU) decided that "computer programs, to the extent that they embody an author's original creation, are proper subject matter of copyright".
In 1983 in the United States court case Apple v. Franklin it was ruled that the same applied to object code; and that the Copyright Act gave computer programs the copyright status of literary works.
In 1999, in the United States court case Bernstein v. United States it was further ruled that source code could be considered a constitutionally protected form of free speech. Proponents of free speech argued that because source code conveys information to programmers, is written in a language, and can be used to share humor and other artistic pursuits, it is a protected form of communication.
Licensing
An author of a non-trivial work like software has several exclusive rights, among them the copyright for the source code and object code. The author can grant customers and users of his software some of his exclusive rights in the form of software licensing. Software, and its accompanying source code, can be associated with several licensing paradigms; the most important distinction is free software vs proprietary software. This is done by including a copyright notice that declares licensing terms. If no notice is found, then the default of All rights reserved is implied.
Generally speaking, software is free software if its users are free to use it for any purpose, study and change its source code, give or sell its exact copies, and give or sell its modified copies. Software is proprietary if it is distributed while the source code is kept secret, or is privately owned and restricted. One of the first software licenses to be published and to explicitly grant these freedoms was the GNU General Public License in 1989; the BSD license is another early example from 1990.
For proprietary software, the provisions of the various copyright laws, trade secrecy and patents are used to keep the source code closed. Additionally, many pieces of retail software come with an end-user license agreement (EULA) which typically prohibits decompilation, reverse engineering, analysis, modification, or circumventing of copy protection. Types of source code protection—beyond traditional compilation to object code—include code encryption, code obfuscation or code morphing.
Quality
The way a program is written can have important consequences for its maintainers. Coding conventions, which stress readability and some language-specific conventions, are aimed at the maintenance of the software source code, which involves debugging and updating. Other priorities, such as the speed of the program's execution, or the ability to compile the program for multiple architectures, often make code readability a less important consideration, since code quality generally depends on its purpose.
See also
Bytecode
Code as data
Coding conventions
Computer code
Free software
Legacy code
Machine code
Markup language
Obfuscated code
Object code
Open-source software
Package (package management system)
Programming language
Source code repository
Syntax highlighting
Visual programming language
References
Sources
(VEW04) "Using a Decompiler for Real-World Source Recovery", M. Van Emmerik and T. Waddington, the Working Conference on Reverse Engineering, Delft, Netherlands, 9–12 November 2004. Extended version of the paper.
External links
Source Code Definition by The Linux Information Project (LINFO)
Same program written in multiple languages
Text |
27675 | https://en.wikipedia.org/wiki/Simple%20Mail%20Transfer%20Protocol | Simple Mail Transfer Protocol | The Simple Mail Transfer Protocol (SMTP) is an internet standard communication protocol for electronic mail transmission. Mail servers and other message transfer agents use SMTP to send and receive mail messages. User-level email clients typically use SMTP only for sending messages to a mail server for relaying, and typically submit outgoing email to the mail server on port 587 or 465 per . For retrieving messages, IMAP (which replaced the older POP3) is standard, but proprietary servers also often implement proprietary protocols, e.g., Exchange ActiveSync.
Since SMTP's introduction in 1981, it has been updated, modified and extended multiple times. The protocol version in common use today has extensible structure with various extensions for authentication, encryption, binary data transfer, and internationalized email addresses. SMTP servers commonly use the Transmission Control Protocol on port number 25 (for plaintext) and 587 (for encrypted communications).
History
Predecessors to SMTP
Various forms of one-to-one electronic messaging were used in the 1960s. Users communicated using systems developed for specific mainframe computers. As more computers were interconnected, especially in the U.S. Government's ARPANET, standards were developed to permit exchange of messages between different operating systems. SMTP grew out of these standards developed during the 1970s.
SMTP traces its roots to two implementations described in 1971: the Mail Box Protocol, whose implementation has been disputed (Tom Van Vleck, The History of Electronic Mail: "It is not clear this protocol was ever implemented") but which is discussed in and other RFCs, and the SNDMSG program, which, according to , Ray Tomlinson of BBN invented for TENEX computers to send mail messages across the ARPANET. Fewer than 50 hosts were connected to the ARPANET at this time.
Further implementations include FTP Mail and Mail Protocol, both from 1973. Development work continued throughout the 1970s, until the ARPANET transitioned into the modern Internet around 1980.
Original SMTP
In 1980, Jon Postel published which proposed the Mail Transfer Protocol as a replacement of the use of the File Transfer Protocol (FTP) for mail. of May 1981 removed all references to FTP and allocated port 57 for TCP and UDP, an allocation that has since been removed by IANA. In November 1981, Postel published "Simple Mail Transfer Protocol".
The SMTP standard was developed around the same time as Usenet, a one-to-many communication network with some similarities.
SMTP became widely used in the early 1980s. At the time, it was a complement to the Unix to Unix Copy Program (UUCP), which was better suited for handling email transfers between machines that were intermittently connected. SMTP, on the other hand, works best when both the sending and receiving machines are connected to the network all the time. Both used a store and forward mechanism and are examples of push technology. Though Usenet's newsgroups were still propagated with UUCP between servers, UUCP as a mail transport has virtually disappeared along with the "bang paths" it used as message routing headers.
Sendmail, released with 4.1cBSD in 1982, soon after was published in November 1981, was one of the first mail transfer agents to implement SMTP. Over time, as BSD Unix became the most popular operating system on the Internet, Sendmail became the most common MTA (mail transfer agent).
The original SMTP protocol supported only unauthenticated unencrypted 7-bit ASCII text communications, susceptible to trivial man-in-the-middle attack, spoofing, and spamming, and requiring any binary data to be encoded to readable text before transmission. Due to absence of a proper authentication mechanism, by design every SMTP server was an open mail relay. The Internet Mail Consortium (IMC) reported that 55% of mail servers were open relays in 1998, but less than 1% in 2002. Because of spam concerns most email providers blocklist open relays, making original SMTP essentially impractical for general use on the Internet.
Modern SMTP
In November 1995, defined Extended Simple Mail Transfer Protocol (ESMTP), which established a general structure for all existing and future extensions which aimed to add-in the features missing from the original SMTP. ESMTP defines consistent and manageable means by which ESMTP clients and servers can be identified and servers can indicate supported extensions.
Message submission () and SMTP-AUTH () were introduced in 1998 and 1999, both describing new trends in email delivery. Originally, SMTP servers were typically internal to an organization, receiving mail for the organization from the outside, and relaying messages from the organization to the outside. But as time went on, SMTP servers (mail transfer agents), in practice, were expanding their roles to become message submission agents for Mail user agents, some of which were now relaying mail from the outside of an organization. (e.g. a company executive wishes to send email while on a trip using the corporate SMTP server.) This issue, a consequence of the rapid expansion and popularity of the World Wide Web, meant that SMTP had to include specific rules and methods for relaying mail and authenticating users to prevent abuses such as relaying of unsolicited email (spam). Work on message submission () was originally started because popular mail servers would often rewrite mail in an attempt to fix problems in it, for example, adding a domain name to an unqualified address. This behavior is helpful when the message being fixed is an initial submission, but dangerous and harmful when the message originated elsewhere and is being relayed. Cleanly separating mail into submission and relay was seen as a way to permit and encourage rewriting submissions while prohibiting rewriting relay. As spam became more prevalent, it was also seen as a way to provide authorization for mail being sent out from an organization, as well as traceability. This separation of relay and submission quickly became a foundation for modern email security practices.
As this protocol started out purely ASCII text-based, it did not deal well with binary files, or characters in many non-English languages. Standards such as Multipurpose Internet Mail Extensions (MIME) were developed to encode binary files for transfer through SMTP. Mail transfer agents (MTAs) developed after Sendmail also tended to be implemented 8-bit-clean, so that the alternate "just send eight" strategy could be used to transmit arbitrary text data (in any 8-bit ASCII-like character encoding) via SMTP. Mojibake was still a problem due to differing character set mappings between vendors, although the email addresses themselves still allowed only ASCII. 8-bit-clean MTAs today tend to support the 8BITMIME extension, permitting some binary files to be transmitted almost as easily as plain text (limits on line length and permitted octet values still apply, so that MIME encoding is needed for most non-text data and some text formats). In 2012, the SMTPUTF8 extension was created to support UTF-8 text, allowing international content and addresses in non-Latin scripts like Cyrillic or Chinese.
Many people contributed to the core SMTP specifications, among them Jon Postel, Eric Allman, Dave Crocker, Ned Freed, Randall Gellens, John Klensin, and Keith Moore.
Mail processing model
Email is submitted by a mail client (mail user agent, MUA) to a mail server (mail submission agent, MSA) using SMTP on TCP port 587. Most mailbox providers still allow submission on traditional port 25. The MSA delivers the mail to its mail transfer agent (MTA). Often, these two agents are instances of the same software launched with different options on the same machine. Local processing can be done either on a single machine, or split among multiple machines; mail agent processes on one machine can share files, but if processing is on multiple machines, they transfer messages between each other using SMTP, where each machine is configured to use the next machine as a smart host. Each process is an MTA (an SMTP server) in its own right.
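A minimal Python sketch of the submission step (MUA to MSA on port 587) is shown below; the server name and credentials are placeholders, not real accounts.

import smtplib
from email.message import EmailMessage

# A minimal sketch of message submission on port 587; host name and
# credentials are placeholders.
msg = EmailMessage()
msg["From"] = "bob@example.org"
msg["To"] = "alice@example.com"
msg["Subject"] = "Test message"
msg.set_content("Hello Alice.")

with smtplib.SMTP("mail.example.org", 587) as server:
    server.starttls()                      # upgrade to TLS before authenticating
    server.login("bob", "app-password")    # submission requires SMTP AUTH
    server.send_message(msg)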
The boundary MTA uses DNS to look up the MX (mail exchanger) record for the recipient's domain (the part of the email address on the right of @). The MX record contains the name of the target MTA. Based on the target host and other factors, the sending MTA selects a recipient server and connects to it to complete the mail exchange.
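The MX lookup can be sketched with the third-party dnspython package (assumed installed, version 2.x); a sending MTA would sort the answers by preference before connecting.

import dns.resolver  # third-party "dnspython" package (assumed installed)

# A minimal sketch: resolve the MX records for a recipient domain and
# order them by preference, as a sending MTA would.
answers = dns.resolver.resolve("example.com", "MX")
for record in sorted(answers, key=lambda r: r.preference):
    print(record.preference, record.exchange)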
Message transfer can occur in a single connection between two MTAs, or in a series of hops through intermediary systems. A receiving SMTP server may be the ultimate destination, an intermediate "relay" (that is, it stores and forwards the message) or a "gateway" (that is, it may forward the message using some protocol other than SMTP). Per section 2.1, each hop is a formal handoff of responsibility for the message, whereby the receiving server must either deliver the message or properly report the failure to do so.
Once the final hop accepts the incoming message, it hands it to a mail delivery agent (MDA) for local delivery. An MDA saves messages in the relevant mailbox format. As with sending, this reception can be done using one or multiple computers. An MDA may deliver messages directly to storage, or forward them over a network using SMTP or other protocol such as Local Mail Transfer Protocol (LMTP), a derivative of SMTP designed for this purpose.
Once delivered to the local mail server, the mail is stored for batch retrieval by authenticated mail clients (MUAs). Mail is retrieved by end-user applications, called email clients, using Internet Message Access Protocol (IMAP), a protocol that both facilitates access to mail and manages stored mail, or the Post Office Protocol (POP) which typically uses the traditional mbox mail file format or a proprietary system such as Microsoft Exchange/Outlook or Lotus Notes/Domino. Webmail clients may use either method, but the retrieval protocol is often not a formal standard.
SMTP defines message transport, not the message content. Thus, it defines the mail envelope and its parameters, such as the envelope sender, but not the header (except trace information) nor the body of the message itself. STD 10 and define SMTP (the envelope), while STD 11 and define the message (header and body), formally referred to as the Internet Message Format.
Protocol overview
SMTP is a connection-oriented, text-based protocol in which a mail sender communicates with a mail receiver by issuing command strings and supplying necessary data over a reliable ordered data stream channel, typically a Transmission Control Protocol (TCP) connection. An SMTP session consists of commands originated by an SMTP client (the initiating agent, sender, or transmitter) and corresponding responses from the SMTP server (the listening agent, or receiver) so that the session is opened, and session parameters are exchanged. A session may include zero or more SMTP transactions. An SMTP transaction consists of three command/reply sequences:
MAIL command, to establish the return address, also called return-path, reverse-path, bounce address, mfrom, or envelope sender.
RCPT command, to establish a recipient of the message. This command can be issued multiple times, one for each recipient. These addresses are also part of the envelope.
DATA to signal the beginning of the message text; the content of the message, as opposed to its envelope. It consists of a message header and a message body separated by an empty line. DATA is actually a group of commands, and the server replies twice: once to the DATA command itself, to acknowledge that it is ready to receive the text, and the second time after the end-of-data sequence, to either accept or reject the entire message.
Besides the intermediate reply for DATA, each server's reply can be either positive (2xx reply codes) or negative. Negative replies can be permanent (5xx codes) or transient (4xx codes). A reject is a permanent failure and the client should send a bounce message to the server it received it from. A drop is a positive response followed by message discard rather than delivery.
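The three command/reply sequences can be exercised directly with the low-level methods of Python's smtplib; the host and addresses below are placeholders, and each call returns the server's reply code and text.

import smtplib

# A minimal sketch of one SMTP transaction using low-level methods;
# host and mailbox names are placeholders.
server = smtplib.SMTP("mail.example.org", 25)
server.ehlo()                                      # identify the client
print(server.mail("bob@example.org"))              # MAIL FROM
print(server.rcpt("alice@example.com"))            # RCPT TO
print(server.data("Subject: Test\r\n\r\nHello"))   # DATA, body, end-of-data
server.quit()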
The initiating host, the SMTP client, can be either an end-user's email client, functionally identified as a mail user agent (MUA), or a relay server's mail transfer agent (MTA), that is an SMTP server acting as an SMTP client, in the relevant session, in order to relay mail. Fully capable SMTP servers maintain queues of messages for retrying message transmissions that resulted in transient failures.
A MUA knows the outgoing mail SMTP server from its configuration. A relay server typically determines which server to connect to by looking up the MX (Mail eXchange) DNS resource record for each recipient's domain name. If no MX record is found, a conformant relaying server (not all are) instead looks up the A record. Relay servers can also be configured to use a smart host. A relay server initiates a TCP connection to the server on the "well-known port" for SMTP: port 25, or for connecting to an MSA, port 587. The main difference between an MTA and an MSA is that connecting to an MSA requires SMTP Authentication.
SMTP vs mail retrieval
SMTP is a delivery protocol only. In normal use, mail is "pushed" to a destination mail server (or next-hop mail server) as it arrives. Mail is routed based on the destination server, not the individual user(s) to which it is addressed. Other protocols, such as the Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP) are specifically designed for use by individual users retrieving messages and managing mail boxes. To permit an intermittently-connected mail server to pull messages from a remote server on demand, SMTP has a feature to initiate mail queue processing on a remote server (see Remote Message Queue Starting below). POP and IMAP are unsuitable protocols for relaying mail by intermittently-connected machines; they are designed to operate after final delivery, when information critical to the correct operation of mail relay (the "mail envelope") has been removed.
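For contrast, retrieval uses a different protocol entirely; a minimal POP3 sketch with Python's poplib (placeholder host and credentials) looks like this.

import poplib

# A minimal sketch of the retrieval side (POP3), distinct from SMTP
# delivery; host and credentials are placeholders.
box = poplib.POP3_SSL("pop.example.org")
box.user("alice")
box.pass_("app-password")
count, size = box.stat()      # number of messages and total mailbox size
print(count, "messages,", size, "bytes")
box.quit()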
Remote Message Queue Starting
Remote Message Queue Starting enables a remote host to start processing of the mail queue on a server so it may receive messages destined to it by sending a corresponding command. The original TURN command was deemed insecure and was extended in with the ETRN command which operates more securely using an authentication method based on Domain Name System information.
Outgoing mail SMTP server
An email client needs to know the IP address of its initial SMTP server and this has to be given as part of its configuration (usually given as a DNS name). This server will deliver outgoing messages on behalf of the user.
Outgoing mail server access restrictions
Server administrators need to impose some control on which clients can use the server. This enables them to deal with abuse, for example spam. Two solutions have been in common use:
In the past, many systems imposed usage restrictions by the location of the client, only permitting usage by clients whose IP address is one that the server administrators control. Usage from any other client IP address is disallowed.
Modern SMTP servers typically offer an alternative system that requires authentication of clients by credentials before allowing access.
Restricting access by location
Under this system, an ISP's SMTP server will not allow access by users who are outside the ISP's network. More precisely, the server may only allow access to users with an IP address provided by the ISP, which is equivalent to requiring that they are connected to the Internet using that same ISP. A mobile user may often be on a network other than that of their normal ISP, and will then find that sending email fails because the configured SMTP server choice is no longer accessible.
This system has several variations. For example, an organisation's SMTP server may only provide service to users on the same network, enforcing this by firewalling to block access by users on the wider Internet. Or the server may perform range checks on the client's IP address. These methods were typically used by corporations and institutions such as universities which provided an SMTP server for outbound mail only for use internally within the organisation. However, most of these bodies now use client authentication methods, as described below.
Where a user is mobile, and may use different ISPs to connect to the internet, this kind of usage restriction is onerous, and altering the configured outbound email SMTP server address is impractical. It is highly desirable to be able to use email client configuration information that does not need to change.
Client authentication
Modern SMTP servers typically require authentication of clients by credentials before allowing access, rather than restricting access by location as described earlier. This more flexible system is friendly to mobile users and allows them to have a fixed choice of configured outbound SMTP server. SMTP Authentication, often abbreviated SMTP AUTH, is an extension of the SMTP in order to log in using an authentication mechanism.
Ports
Communication between mail servers generally uses the standard TCP port 25 designated for SMTP.
Mail clients however generally don't use this, instead using specific "submission" ports. Mail services generally accept email submission from clients on one of:
587 (Submission), as formalized in (previously )
465 This port was deprecated after , until the issue of .
Port 2525 and others may be used by some individual providers, but have never been officially supported.
Many Internet service providers now block all outgoing port 25 traffic from their customers. This is done mainly as an anti-spam measure, but also to offset the higher costs of leaving the port open, sometimes by charging more to the few customers that require it open.
SMTP transport example
A typical example of sending a message via SMTP to two mailboxes (alice and theboss) located in the same mail domain (example.com) is reproduced in the following session exchange. (In this example, the conversation parts are prefixed with S: and C:, for server and client, respectively; these labels are not part of the exchange.)
After the message sender (SMTP client) establishes a reliable communications channel to the message receiver (SMTP server), the session is opened with a greeting by the server, usually containing its fully qualified domain name (FQDN), in this case smtp.example.com. The client initiates its dialog by responding with a HELO command identifying itself in the command's parameter with its FQDN (or an address literal if none is available).
S: 220 smtp.example.com ESMTP Postfix
C: HELO relay.example.org
S: 250 Hello relay.example.org, I am glad to meet you
C: MAIL FROM:<bob@example.org>
S: 250 Ok
C: RCPT TO:<alice@example.com>
S: 250 Ok
C: RCPT TO:<theboss@example.com>
S: 250 Ok
C: DATA
S: 354 End data with <CR><LF>.<CR><LF>
C: From: "Bob Example" <bob@example.org>
C: To: "Alice Example" <alice@example.com>
C: Cc: theboss@example.com
C: Date: Tue, 15 Jan 2008 16:02:43 -0500
C: Subject: Test message
C:
C: Hello Alice.
C: This is a test message with 5 header fields and 4 lines in the message body.
C: Your friend,
C: Bob
C: .
S: 250 Ok: queued as 12345
C: QUIT
S: 221 Bye
{The server closes the connection}
The client notifies the receiver of the originating email address of the message in a MAIL FROM command. This is also the return or bounce address in case the message cannot be delivered. In this example the email message is sent to two mailboxes on the same SMTP server: one for each recipient listed in the To: and Cc: header fields. The corresponding SMTP command is RCPT TO. Each successful reception and execution of a command is acknowledged by the server with a result code and response message (e.g., 250 Ok).
The transmission of the body of the mail message is initiated with a DATA command after which it is transmitted verbatim line by line and is terminated with an end-of-data sequence. This sequence consists of a new-line (<CR><LF>), a single full stop (.), followed by another new-line (<CR><LF>). Since a message body can contain a line with just a period as part of the text, the client sends two periods every time a line starts with a period; correspondingly, the server replaces every sequence of two periods at the beginning of a line with a single one. Such escaping method is called dot-stuffing.
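A minimal sketch of dot-stuffing and its reversal, following the description above, is:

# A minimal sketch: the client doubles a leading period so the lone "."
# end-of-data marker cannot occur inside the body; the server reverses it.
def dot_stuff(body: str) -> str:
    return "\r\n".join(
        "." + line if line.startswith(".") else line
        for line in body.split("\r\n")
    )

def dot_unstuff(body: str) -> str:
    return "\r\n".join(
        line[1:] if line.startswith("..") else line
        for line in body.split("\r\n")
    )

stuffed = dot_stuff("Hi.\r\n.hidden line\r\nBye")
assert dot_unstuff(stuffed) == "Hi.\r\n.hidden line\r\nBye"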
The server's positive reply to the end-of-data, as exemplified, implies that the server has taken the responsibility of delivering the message. A message can be duplicated if there is a communication failure at this time, e.g. due to a power failure: until the sender has received that 250 Ok reply, it must assume the message was not delivered. On the other hand, after the receiver has decided to accept the message, it must assume the message has been delivered to it. Thus, during this time span, both agents have active copies of the message that they will try to deliver. The probability that a communication failure occurs exactly at this step is directly proportional to the amount of filtering that the server performs on the message body, most often for anti-spam purposes. The limiting timeout is specified to be 10 minutes.
The QUIT command ends the session. If the email has other recipients located elsewhere, the client would QUIT and connect to an appropriate SMTP server for subsequent recipients after the current destination(s) had been queued. The information that the client sends in the HELO and MAIL FROM commands are added (not seen in example code) as additional header fields to the message by the receiving server. It adds a Received and Return-Path header field, respectively.
Some clients are implemented to close the connection after the message is accepted (250 Ok: queued as 12345), so the last two lines may actually be omitted. This causes an error on the server when trying to send the 221 Bye reply.
SMTP Extensions
Extension discovery mechanism
Clients learn a server's supported options by using the EHLO greeting, as exemplified below, instead of the original HELO. Clients fall back to HELO only if the server does not support EHLO greeting.
Modern clients may use the ESMTP extension keyword SIZE to query the server for the maximum message size that will be accepted. Older clients and servers may try to transfer excessively sized messages that will be rejected after consuming network resources, including connect time to network links that is paid by the minute.
Users can manually determine in advance the maximum size accepted by ESMTP servers. The client replaces the HELO command with the EHLO command.
S: 220 smtp2.example.com ESMTP Postfix
C: EHLO bob.example.org
S: 250-smtp2.example.com Hello bob.example.org [192.0.2.201]
S: 250-SIZE 14680064
S: 250-PIPELINING
S: 250 HELP
Thus smtp2.example.com declares that it can accept a maximum message size of 14,680,064 octets (8-bit bytes).
In the simplest case, an ESMTP server declares a maximum SIZE immediately after receiving an EHLO. According to , however, the numeric parameter to the SIZE extension in the EHLO response is optional. Clients may instead, when issuing a MAIL FROM command, include a numeric estimate of the size of the message they are transferring, so that the server can refuse receipt of overly-large messages.
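With Python's smtplib, the extensions advertised in the EHLO response can be inspected after connecting; the host name below is a placeholder.

import smtplib

# A minimal sketch: send EHLO and inspect the advertised ESMTP extensions.
server = smtplib.SMTP("mail.example.org", 25)
server.ehlo()
if server.has_extn("size"):
    print("maximum message size:", server.esmtp_features["size"])
print(sorted(server.esmtp_features))
server.quit()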
Binary data transfer
Original SMTP supports only a single body of ASCII text, therefore any binary data needs to be encoded as text into that body of the message before transfer, and then decoded by the recipient. Binary-to-text encodings, such as uuencode and BinHex were typically used.
The 8BITMIME command was developed to address this and was standardized in 1994. It facilitates the transparent exchange of e-mail messages containing octets outside the seven-bit ASCII character set by encoding them as MIME content parts, typically encoded with Base64.
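A minimal sketch of the MIME approach with Python's email package: binary data added as an attachment is carried as a base64-encoded text part (the attachment bytes here are an arbitrary stand-in).

from email.message import EmailMessage

# A minimal sketch: binary data attached to a message becomes a MIME part
# that the email package encodes as base64 text suitable for SMTP.
msg = EmailMessage()
msg["From"] = "bob@example.org"
msg["To"] = "alice@example.com"
msg["Subject"] = "Binary attachment"
msg.set_content("See attached.")
msg.add_attachment(b"\x89PNG\r\n\x1a\n", maintype="image",
                   subtype="png", filename="tiny.png")
print(msg.as_string()[:400])   # the attachment appears as base64 text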
Mail delivery mechanism extensions
On-Demand Mail Relay
On-Demand Mail Relay (ODMR) is an SMTP extension standardized in that allows an intermittently-connected SMTP server to receive email queued for it when it is connected.
Internationalization extension
Original SMTP supports email addresses composed of ASCII characters only, which is inconvenient for users whose native script is not Latin based, or who use diacritics not in the ASCII character set. This limitation was alleviated via extensions enabling UTF-8 in address names. The experimental UTF8SMTP command was introduced first and later superseded by the SMTPUTF8 command. These extensions provide support for multi-byte and non-ASCII characters in email addresses, such as those with diacritics and other language characters such as Greek and Chinese.
Current support is limited, but there is strong interest in broad adoption of and the related RFCs in countries like China that have a large user base where Latin (ASCII) is a foreign script.
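A hedged Python sketch of sending through a server that advertises SMTPUTF8; every address and the host below are invented, and smtplib adds the SMTPUTF8 parameter itself when a message uses non-ASCII addresses and the server supports the extension.

import smtplib
from email.headerregistry import Address
from email.message import EmailMessage

# A minimal, hedged sketch: non-ASCII addresses are sent only if the
# server advertises SMTPUTF8; all names and the host are placeholders.
msg = EmailMessage()
msg["From"] = Address("Bob", "bob", "example.org")
msg["To"] = Address("用户", "用户", "例子.测试")
msg["Subject"] = "Internationalized test"
msg.set_content("Hello")

with smtplib.SMTP("mail.example.org", 587) as server:
    server.starttls()
    server.ehlo()                      # re-identify after the TLS upgrade
    if server.has_extn("smtputf8"):    # authentication omitted for brevity
        server.send_message(msg)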
Extensions
Like SMTP, ESMTP is a protocol used to transport Internet mail. It is used as both an inter-server transport protocol and (with restricted behavior enforced) a mail submission protocol.
The main identification feature for ESMTP clients is to open a transmission with the command EHLO (Extended HELLO), rather than HELO (Hello, the original standard). A server will respond with success (code 250), failure (code 550) or error (code 500, 501, 502, 504, or 421), depending on its configuration. An ESMTP server returns the code 250 OK in a multi-line reply with its domain and a list of keywords to indicate supported extensions. An RFC 821-compliant server returns error code 500, allowing ESMTP clients to try either HELO or QUIT.
Each service extension is defined in an approved format in subsequent RFCs and registered with the Internet Assigned Numbers Authority (IANA). The first definitions were the RFC 821 optional services: SEND, SOML (Send or Mail), SAML (Send and Mail), EXPN, HELP, and TURN. The format of additional SMTP verbs was set and for new parameters in MAIL and RCPT.
Some relatively common keywords (not all of them corresponding to commands) used today are:
8BITMIME – 8 bit data transmission,
ATRN – Authenticated TURN for On-Demand Mail Relay,
AUTH – Authenticated SMTP,
CHUNKING – Chunking,
DSN – Delivery status notification, (See Variable envelope return path)
ETRN – Extended version of remote message queue starting command TURN,
HELP – Supply helpful information,
PIPELINING – Command pipelining,
SIZE – Message size declaration,
STARTTLS – Transport Layer Security, (2002)
SMTPUTF8 – Allow UTF-8 encoding in mailbox names and header fields,
UTF8SMTP – Allow UTF-8 encoding in mailbox names and header fields, (deprecated)
The ESMTP format was restated in (superseding RFC 821) and updated to the latest definition in in 2008. Support for the EHLO command in servers became mandatory, and HELO designated a required fallback.
Non-standard, unregistered, service extensions can be used by bilateral agreement, these services are indicated by an EHLO message keyword starting with "X", and with any additional parameters or verbs similarly marked.
SMTP commands are case-insensitive. They are presented here in capitalized form for emphasis only. An SMTP server that requires a specific capitalization method is a violation of the standard.
8BITMIME
At least the following servers advertise the 8BITMIME extension:
Apache James (since 2.3.0a1)
Citadel (since 7.30)
Courier Mail Server
Gmail
IceWarp
IIS SMTP Service
Kerio Connect
Lotus Domino
Microsoft Exchange Server (as of Exchange Server 2000)
Novell GroupWise
OpenSMTPD
Oracle Communications Messaging Server
Postfix
Sendmail (since 6.57)
The following servers can be configured to advertise 8BITMIME, but do not perform conversion of 8-bit data to 7-bit when connecting to non-8BITMIME relays:
Exim and qmail do not translate eight-bit messages to seven-bit when making an attempt to relay 8-bit data to non-8BITMIME peers, as is required by the RFC. This does not cause problems in practice, since virtually all modern mail relays are 8-bit clean.
Microsoft Exchange Server 2003 advertises 8BITMIME by default, but relaying to a non-8BITMIME peer results in a bounce. This is allowed by RFC 6152 section 3.
SMTP-AUTH
The SMTP-AUTH extension provides an access control mechanism. It consists of an authentication step through which the client effectively logs into the mail server during the process of sending mail. Servers that support SMTP-AUTH can usually be configured to require clients to use this extension, ensuring the true identity of the sender is known. The SMTP-AUTH extension is defined in .
SMTP-AUTH can be used to allow legitimate users to relay mail while denying relay service to unauthorized users, such as spammers. It does not necessarily guarantee the authenticity of either the SMTP envelope sender or the "From:" header. For example, spoofing, in which one sender masquerades as someone else, is still possible with SMTP-AUTH unless the server is configured to limit message from-addresses to addresses this AUTHed user is authorized for.
The SMTP-AUTH extension also allows one mail server to indicate to another that the sender has been authenticated when relaying mail. In general this requires the recipient server to trust the sending server, meaning that this aspect of SMTP-AUTH is rarely used on the Internet.
SMTPUTF8
Supporting servers include:
Postfix (version 3.0 and later)
Momentum (versions 4.1 and 3.6.5, and later)
Sendmail (under development)
Exim (experimental as of the 4.86 release)
CommuniGate Pro as of version 6.2.2
Courier-MTA as of version 1.0
Halon as of version 4.0
Microsoft Exchange Server as of protocol revision 14.0
Haraka and other servers.
Oracle Communications Messaging Server as of release 8.0.2.
Security extensions
Mail delivery can occur both over plain text and encrypted connections; however, the communicating parties might not know in advance whether the other party is able to use a secure channel.
STARTTLS or "Opportunistic TLS"
The STARTTLS extension enables supporting SMTP servers to notify connecting clients that they support TLS encrypted communication and offers the opportunity for clients to upgrade their connection by sending the STARTTLS command. Servers supporting the extension do not inherently gain any security benefits from its implementation on its own, as upgrading to a TLS encrypted session is dependent on the connecting client deciding to exercise this option, hence the term opportunistic TLS.
STARTTLS is effective only against passive observation attacks, since the STARTTLS negotiation happens in plain text and an active attacker can trivially remove STARTTLS commands. This type of man-in-the-middle attack is sometimes referred to as STRIPTLS, where the encryption negotiation information sent from one end never reaches the other. In this scenario both parties take the invalid or unexpected responses as indication that the other does not properly support STARTTLS, defaulting to traditional plain-text mail transfer. Note that STARTTLS is also defined for IMAP and POP3 in other RFCs, but these protocols serve different purposes: SMTP is used for communication between message transfer agents, while IMAP and POP3 are for end clients and message transfer agents.
In 2014 the Electronic Frontier Foundation began the "STARTTLS Everywhere" project which, similarly to the "HTTPS Everywhere" list, allowed relying parties to discover others supporting secure communication without prior communication. The project stopped accepting submissions on 29 April 2021, and EFF recommended switching to DANE and MTA-STS for discovering information on peers' TLS support.
officially declared plain text obsolete and recommends always using TLS, adding ports with implicit TLS.
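The two encrypted styles can be contrasted in a short Python sketch: implicit TLS from the first byte on port 465 versus a plaintext greeting upgraded with STARTTLS on port 587; the host name is a placeholder.

import smtplib
import ssl

# A minimal sketch contrasting implicit TLS (port 465) with the
# STARTTLS upgrade (port 587); the host name is a placeholder.
context = ssl.create_default_context()

with smtplib.SMTP_SSL("mail.example.org", 465, context=context) as s:
    s.noop()                        # TLS from the first byte

with smtplib.SMTP("mail.example.org", 587) as s:
    s.starttls(context=context)     # plaintext greeting, then upgrade
    s.noop()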
SMTP MTA Strict Transport Security
A newer 2018 standard called "SMTP MTA Strict Transport Security (MTA-STS)" aims to address the problem of an active adversary by defining a protocol for mail servers to declare their ability to use secure channels in specific files on the server and specific DNS TXT records. The relying party would regularly check for the existence of such a record, cache it for the amount of time specified in the record, and never communicate over insecure channels until the record expires. Note that MTA-STS records apply only to SMTP traffic between mail servers while communications between a user's client and the mail server are protected by Transport Layer Security with SMTP/MSA, IMAP, POP3, or HTTPS in combination with an organizational or technical policy. Essentially, MTA-STS is a means to extend such a policy to third parties.
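A hedged sketch of the policy-discovery half of MTA-STS: the sender fetches the policy file from the well-known HTTPS location for the recipient domain; the domain below is a placeholder and serves no real policy.

import urllib.request

# A minimal, hedged sketch: fetch a domain's MTA-STS policy from the
# well-known HTTPS location; the domain is a placeholder.
domain = "example.com"
url = "https://mta-sts." + domain + "/.well-known/mta-sts.txt"
with urllib.request.urlopen(url, timeout=10) as response:
    print(response.read().decode("utf-8"))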
In April 2019 Google Mail announced support for MTA-STS.
SMTP TLS Reporting
A number of protocols allow secure delivery of messages, but they can fail due to misconfigurations or deliberate active interference, leading to undelivered messages or delivery over unencrypted or unauthenticated channels. "SMTP TLS Reporting" describes a reporting mechanism and format for sharing statistics and specific information about potential failures with recipient domains. Recipient domains can then use this information to both detect potential attacks and diagnose unintentional misconfigurations.
In April 2019 Google Mail announced support for SMTP TLS Reporting.
Spoofing and spamming
The original design of SMTP had no facility to authenticate senders, or check that servers were authorized to send on their behalf, with the result that email spoofing is possible, and commonly used in email spam and phishing.
Occasional proposals are made to modify SMTP extensively or replace it completely. One example of this is Internet Mail 2000, but neither it, nor any other has made much headway in the face of the network effect of the huge installed base of classic SMTP.
Instead, mail servers now use a range of techniques, such as stricter enforcement of standards, DomainKeys Identified Mail, Sender Policy Framework and DMARC, DNSBLs and greylisting, to reject or quarantine suspicious emails.
Implementations
Related requests for comments
– Requirements for Internet Hosts—Application and Support (STD 3)
– SMTP Service Extension for Message Size Declaration (obsoletes )
– Anti-Spam Recommendations for SMTP MTAs (BCP 30)
– Simple Mail Transfer Protocol
– SMTP Service Extension for Command Pipelining (STD 60)
– SMTP Service Extensions for Transmission of Large and Binary MIME Messages
– SMTP Service Extension for Secure SMTP over Transport Layer Security (obsoletes )
– SMTP Service Extension for Delivery Status Notifications (obsoletes )
– Enhanced Status Codes for SMTP (obsoletes , updated by )
– An Extensible Message Format for Delivery Status Notifications (obsoletes )
– Message Disposition Notification (updates )
– Recommendations for Automatic Responses to Electronic Mail
– SMTP Operational Experience in Mixed IPv4/v6 Environments
– Overview and Framework for Internationalized Email (updated by )
– SMTP Service Extension for Authentication (obsoletes , updates , updated by )
– Email Submission Operations: Access and Accountability Requirements (BCP 134)
– A Registry for SMTP Enhanced Mail System Status Codes (BCP 138) (updates )
– The Simple Mail Transfer Protocol (obsoletes aka STD 10, , , , updates )
– Internet Message Format (obsoletes aka STD 11, and )
– Downgrading Mechanism for Email Address Internationalization
– Message Submission for Mail (STD 72) (obsoletes , )
– The Multipart/Report Content Type for the Reporting of Mail System Administrative Messages (obsoletes , and in turn )
– SMTP Extension for Internationalized Email Addresses (updates , , , and )
– Cleartext Considered Obsolete: Use of Transport Layer Security (TLS) for Email Submission and Access
See also
Bounce address
CRAM-MD5 (a SASL mechanism for ESMTPA)
Email
Email encryption
DKIM
Ident
List of mail server software
List of SMTP server return codes
POP before SMTP / SMTP after POP
Internet Message Access Protocol Binary Content Extension
Sender Policy Framework (SPF)
Simple Authentication and Security Layer (SASL)
SMTP Authentication
Variable envelope return path
Comparison of email clients for information about SMTP support
Notes
References
External links
SMTP Service Extensions
Simple Mail Transfer Protocol
SMTP Service Extension for Authentication (obsoletes )
SMTP and LMTP Transmission Types Registration (with ESMTPA)
Message Submission for Mail (obsoletes , which obsoletes )
Internet mail protocols

Semigroup

In mathematics, a semigroup is an algebraic structure consisting of a set together with an associative binary operation.
The binary operation of a semigroup is most often denoted multiplicatively: x·y, or simply xy, denotes the result of applying the semigroup operation to the ordered pair (x, y). Associativity is formally expressed as (x·y)·z = x·(y·z) for all x, y and z in the semigroup.
Semigroups may be considered a special case of magmas, where the operation is associative, or as a generalization of groups, without requiring the existence of an identity element or inverses. As in the case of groups or magmas, the semigroup operation need not be commutative, so x·y is not necessarily equal to y·x; a well-known example of an operation that is associative but non-commutative is matrix multiplication. If the semigroup operation is commutative, then the semigroup is called a commutative semigroup or (less often than in the analogous case of groups) it may be called an abelian semigroup.
A monoid is an algebraic structure intermediate between groups and semigroups, and is a semigroup having an identity element, thus obeying all but one of the axioms of a group: existence of inverses is not required of a monoid. A natural example is strings with concatenation as the binary operation, and the empty string as the identity element. Restricting to non-empty strings gives an example of a semigroup that is not a monoid. Positive integers with addition form a commutative semigroup that is not a monoid, whereas the non-negative integers do form a monoid. A semigroup without an identity element can be easily turned into a monoid by just adding an identity element. Consequently, monoids are studied in the theory of semigroups rather than in group theory. Semigroups should not be confused with quasigroups, which are a generalization of groups in a different direction; the operation in a quasigroup need not be associative but quasigroups preserve from groups a notion of division. Division in semigroups (or in monoids) is not possible in general.
The formal study of semigroups began in the early 20th century. Early results include a Cayley theorem for semigroups realizing any semigroup as transformation semigroup, in which arbitrary functions replace the role of bijections from group theory. A deep result in the classification of finite semigroups is Krohn–Rhodes theory, analogous to the Jordan–Hölder decomposition for finite groups. Some other techniques for studying semigroups, like Green's relations, do not resemble anything in group theory.
The theory of finite semigroups has been of particular importance in theoretical computer science since the 1950s because of the natural link between finite semigroups and finite automata via the syntactic monoid. In probability theory, semigroups are associated with Markov processes. In other areas of applied mathematics, semigroups are fundamental models for linear time-invariant systems. In partial differential equations, a semigroup is associated to any equation whose spatial evolution is independent of time.
There are numerous special classes of semigroups, semigroups with additional properties, which appear in particular applications. Some of these classes are even closer to groups by exhibiting some additional but not all properties of a group. Of these we mention: regular semigroups, orthodox semigroups, semigroups with involution, inverse semigroups and cancellative semigroups. There are also interesting classes of semigroups that do not contain any groups except the trivial group; examples of the latter kind are bands and their commutative subclass—semilattices, which are also ordered algebraic structures.
Definition
A semigroup is a set S together with a binary operation "·" (that is, a function · : S × S → S) that satisfies the associative property:
For all a, b, c in S, the equation (a · b) · c = a · (b · c) holds.
More succinctly, a semigroup is an associative magma.
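Since associativity is the only axiom, whether a finite operation defines a semigroup can be checked by brute force. A small sketch in Python; the example operations are chosen only for illustration.

from itertools import product

def is_semigroup(elements, op):
    """Brute-force check that (elements, op) satisfies associativity.

    op is a function taking two elements and returning an element
    (closure is assumed); the check is O(n^3) in the number of elements.
    """
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(elements, repeat=3))

# {0, 1, 2, 3} under multiplication modulo 4 forms a semigroup (in fact a monoid).
print(is_semigroup(range(4), lambda a, b: (a * b) % 4))   # True
# Subtraction modulo 4 is not associative, so it does not form a semigroup.
print(is_semigroup(range(4), lambda a, b: (a - b) % 4))   # False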
Examples of semigroups
Empty semigroup: the empty set forms a semigroup with the empty function as the binary operation.
Semigroup with one element: there is essentially only one (specifically, only one up to isomorphism), the singleton {a} with operation a · a = a.
Semigroup with two elements: there are five which are essentially different.
The "flip-flop" monoid: a semigroup with three elements representing the three operations on a switch - set, reset, and do nothing.
The set of positive integers with addition. (With 0 included, this becomes a monoid.)
The set of integers with minimum or maximum. (With positive/negative infinity included, this becomes a monoid.)
Square nonnegative matrices of a given size with matrix multiplication.
Any ideal of a ring with the multiplication of the ring.
The set of all finite strings over a fixed alphabet Σ with concatenation of strings as the semigroup operation — the so-called "free semigroup over Σ". With the empty string included, this semigroup becomes the free monoid over Σ.
A probability distribution F together with all convolution powers of F, with convolution as the operation. This is called a convolution semigroup.
Transformation semigroups and monoids.
The set of continuous functions from a topological space to itself with composition of functions forms a monoid with the identity function acting as the identity. More generally, the endomorphisms of any object of a category form a monoid under composition.
The product of faces of an arrangement of hyperplanes.
Basic concepts
Identity and zero
A left identity of a semigroup S (or more generally, a magma) is an element e such that for all x in S, e · x = x. Similarly, a right identity is an element f such that for all x in S, x · f = x. Left and right identities are both called one-sided identities. A semigroup may have one or more left identities but no right identity, and vice versa.
A two-sided identity (or just identity) is an element that is both a left and right identity. Semigroups with a two-sided identity are called monoids. A semigroup may have at most one two-sided identity. If a semigroup has a two-sided identity, then the two-sided identity is the only one-sided identity in the semigroup. If a semigroup has both a left identity and a right identity, then it has a two-sided identity (which is therefore the unique one-sided identity).
A semigroup S without identity may be embedded in a monoid formed by adjoining an element e ∉ S to S and defining e · s = s · e = s for all s ∈ S. The notation S¹ denotes a monoid obtained from S by adjoining an identity if necessary (S¹ = S for a monoid).
Similarly, every magma has at most one absorbing element, which in semigroup theory is called a zero. Analogous to the above construction, for every semigroup S, one can define S⁰, a semigroup with 0 that embeds S.
Subsemigroups and ideals
The semigroup operation induces an operation on the collection of its subsets: given subsets A and B of a semigroup S, their product A · B, written commonly as AB, is the set { ab : a ∈ A, b ∈ B }. (This notion is defined identically as it is for groups.) In terms of this operation, a subset A is called
a subsemigroup if AA is a subset of A,
a right ideal if AS is a subset of A, and
a left ideal if SA is a subset of A.
If A is both a left ideal and a right ideal then it is called an ideal (or a two-sided ideal).
If S is a semigroup, then the intersection of any collection of subsemigroups of S is also a subsemigroup of S.
So the subsemigroups of S form a complete lattice.
An example of a semigroup with no minimal ideal is the set of positive integers under addition. The minimal ideal of a commutative semigroup, when it exists, is a group.
Green's relations, a set of five equivalence relations that characterise the elements in terms of the principal ideals they generate, are important tools for analysing the ideals of a semigroup and related notions of structure.
The subset with the property that every element commutes with any other element of the semigroup is called the center of the semigroup. The center of a semigroup is actually a subsemigroup.
Homomorphisms and congruences
A semigroup homomorphism is a function that preserves semigroup structure. A function f : S → T between two semigroups is a homomorphism if the equation
f(ab) = f(a)f(b)
holds for all elements a, b in S, i.e. the result is the same when performing the semigroup operation after or before applying the map f.
A semigroup homomorphism between monoids preserves identity if it is a monoid homomorphism. But there are semigroup homomorphisms that are not monoid homomorphisms, e.g. the canonical embedding of a semigroup S without identity into S¹. Conditions characterizing monoid homomorphisms are discussed further. Let f : S → T be a semigroup homomorphism. The image of f is also a semigroup. If S is a monoid with an identity element e, then f(e) is the identity element in the image of f. If T is also a monoid with an identity element e′ and e′ belongs to the image of f, then f(e) = e′, i.e. f is a monoid homomorphism. Particularly, if f is surjective, then it is a monoid homomorphism.
Two semigroups S and T are said to be isomorphic if there exists a bijective semigroup homomorphism f : S → T. Isomorphic semigroups have the same structure.
A semigroup congruence ~ is an equivalence relation that is compatible with the semigroup operation. That is, a subset ~ ⊆ S × S that is an equivalence relation and such that x ~ y and u ~ v imply xu ~ yv for all x, y, u, v in S. Like any equivalence relation, a semigroup congruence ~ induces congruence classes
[a]~ = { x ∈ S : x ~ a }
and the semigroup operation induces a binary operation ∘ on the congruence classes:
[u]~ ∘ [v]~ = [uv]~
Because ~ is a congruence, the set of all congruence classes of ~ forms a semigroup with ∘, called the quotient semigroup or factor semigroup, and denoted S / ~. The mapping x ↦ [x]~ is a semigroup homomorphism, called the quotient map, canonical surjection or projection; if S is a monoid then the quotient semigroup is a monoid with identity [1]~. Conversely, the kernel of any semigroup homomorphism is a semigroup congruence. These results are nothing more than a particularization of the first isomorphism theorem in universal algebra. Congruence classes and factor monoids are the objects of study in string rewriting systems.
A nuclear congruence on S is one which is the kernel of an endomorphism of S.
A semigroup S satisfies the maximal condition on congruences if any family of congruences on S, ordered by inclusion, has a maximal element. By Zorn's lemma, this is equivalent to saying that the ascending chain condition holds: there is no infinite strictly ascending chain of congruences on S.
Every ideal I of a semigroup induces a factor semigroup, the Rees factor semigroup, via the congruence ρ defined by x ρ y if either x = y, or both x and y are in I.
Quotients and divisions
The following notions introduce the idea that a semigroup is contained in another one.
A semigroup T is a quotient of a semigroup S if there is a surjective semigroup morphism from S to T. For example, the integers modulo 2 under addition form a quotient of the integers under addition, using the morphism consisting of taking the remainder modulo 2 of an integer.
A semigroup T divides a semigroup S, written T ≼ S, if T is a quotient of a subsemigroup of S. In particular, every subsemigroup of S divides S, while it is not necessarily a quotient of S.
Both of these relations are transitive.
Structure of semigroups
For any subset A of S there is a smallest subsemigroup T of S which contains A, and we say that A generates T. A single element x of S generates the subsemigroup { xⁿ : n ∈ Z+ }. If this is finite, then x is said to be of finite order, otherwise it is of infinite order.
A semigroup is said to be periodic if all of its elements are of finite order.
A semigroup generated by a single element is said to be monogenic (or cyclic). If a monogenic semigroup is infinite then it is isomorphic to the semigroup of positive integers with the operation of addition.
If it is finite and nonempty, then it must contain at least one idempotent.
It follows that every nonempty periodic semigroup has at least one idempotent.
A subsemigroup which is also a group is called a subgroup. There is a close relationship between the subgroups of a semigroup and its idempotents. Each subgroup contains exactly one idempotent, namely the identity element of the subgroup. For each idempotent e of the semigroup there is a unique maximal subgroup containing e. Each maximal subgroup arises in this way, so there is a one-to-one correspondence between idempotents and maximal subgroups. Here the term maximal subgroup differs from its standard use in group theory.
More can often be said when the order is finite. For example, every nonempty finite semigroup is periodic, and has a minimal ideal and at least one idempotent. The number of finite semigroups of a given size (greater than 1) is (obviously) larger than the number of groups of the same size. For example, of the sixteen possible "multiplication tables" for a set of two elements eight form semigroups whereas only four of these are monoids and only two form groups. For more on the structure of finite semigroups, see Krohn–Rhodes theory.
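The counts quoted above for two-element tables can be verified by brute force. A short sketch in Python enumerating the 16 possible tables; it is a verification aid, not part of the theory.

from itertools import product

ELEMENTS = (0, 1)

def all_tables():
    """All 16 possible multiplication tables on a two-element set."""
    for values in product(ELEMENTS, repeat=4):
        yield {(a, b): values[2 * a + b] for a in ELEMENTS for b in ELEMENTS}

def is_associative(op):
    return all(op[op[a, b], c] == op[a, op[b, c]]
               for a, b, c in product(ELEMENTS, repeat=3))

def identity_of(op):
    """Return the two-sided identity if one exists, else None."""
    for e in ELEMENTS:
        if all(op[e, x] == x and op[x, e] == x for x in ELEMENTS):
            return e
    return None

def is_group(op):
    e = identity_of(op)
    if e is None or not is_associative(op):
        return False
    # Every element must have a two-sided inverse.
    return all(any(op[x, y] == e and op[y, x] == e for y in ELEMENTS)
               for x in ELEMENTS)

semigroups = [op for op in all_tables() if is_associative(op)]
monoids = [op for op in semigroups if identity_of(op) is not None]
groups = [op for op in monoids if is_group(op)]
print(len(semigroups), len(monoids), len(groups))   # 8 4 2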
Special classes of semigroups
A monoid is a semigroup with an identity element.
A group is a semigroup with an identity element and an inverse element.
A subsemigroup is a subset of a semigroup that is closed under the semigroup operation.
A cancellative semigroup is one having the cancellation property: implies and similarly for .
A band is a semigroup whose operation is idempotent.
A semilattice is a semigroup whose operation is idempotent and commutative.
0-simple semigroups.
Transformation semigroups: any finite semigroup S can be represented by transformations of a (state-) set Q of at most |S| + 1 states. Each element x of S then maps Q into itself (q ↦ qx), and the sequence xy is defined by q(xy) = (qx)y for each q in Q. Sequencing clearly is an associative operation, here equivalent to function composition. This representation is basic for any automaton or finite-state machine (FSM).
The bicyclic semigroup is in fact a monoid, which can be described as the free semigroup on two generators p and q, under the relation pq = 1.
C0-semigroups.
Regular semigroups. Every element x has at least one inverse y satisfying xyx = x and yxy = y; the elements x and y are sometimes called "mutually inverse".
Inverse semigroups are regular semigroups where every element has exactly one inverse. Alternatively, a regular semigroup is inverse if and only if any two idempotents commute.
Affine semigroup: a semigroup that is isomorphic to a finitely-generated subsemigroup of Zd. These semigroups have applications to commutative algebra.
Structure theorem for commutative semigroups
There is a structure theorem for commutative semigroups in terms of semilattices. A semilattice (or more precisely a meet-semilattice) is a partially ordered set where every pair of elements a, b has a greatest lower bound, denoted a ∧ b. The operation ∧ makes a semilattice into a semigroup satisfying the additional idempotence law a ∧ a = a.
Given a homomorphism f from an arbitrary semigroup S to a semilattice L, each inverse image f⁻¹(a) is a (possibly empty) semigroup. Moreover, S becomes graded by L, in the sense that f⁻¹(a) f⁻¹(b) ⊆ f⁻¹(a ∧ b).
If f is onto, the semilattice L is isomorphic to the quotient of S by the equivalence relation ~ such that x ~ y if and only if f(x) = f(y). This equivalence relation is a semigroup congruence, as defined above.
Whenever we take the quotient of a commutative semigroup by a congruence, we get another commutative semigroup. The structure theorem says that for any commutative semigroup S, there is a finest congruence ~ such that the quotient of S by this equivalence relation is a semilattice. Denoting this semilattice by L, we get a homomorphism f from S onto L. As mentioned, S becomes graded by this semilattice.
Furthermore, the components f⁻¹(a) are all Archimedean semigroups. An Archimedean semigroup is one where, given any pair of elements x, y, there exists an element z and a positive integer n such that xⁿ = yz.
The Archimedean property follows immediately from the ordering in the semilattice L, since with this ordering we have f(x) ≤ f(y) if and only if xⁿ = yz for some z and some positive integer n.
Group of fractions
The group of fractions or group completion of a semigroup S is the group G = G(S) generated by the elements of S as generators and all equations xy = z that hold true in S as relations. There is an obvious semigroup homomorphism j : S → G(S) which sends each element of S to the corresponding generator. This has a universal property for morphisms from S to a group: given any group H and any semigroup homomorphism k : S → H, there exists a unique group homomorphism f : G → H with k = f ∘ j. We may think of G as the "most general" group that contains a homomorphic image of S.
An important question is to characterize those semigroups for which this map is an embedding. This need not always be the case: for example, take S to be the semigroup of subsets of some set X with set-theoretic intersection as the binary operation (this is an example of a semilattice). Since A · A = A holds for all elements of S, this must be true for all generators of G(S) as well, which is therefore the trivial group. It is clearly necessary for embeddability that S have the cancellation property. When S is commutative this condition is also sufficient and the Grothendieck group of the semigroup provides a construction of the group of fractions. The problem for non-commutative semigroups can be traced to the first substantial paper on semigroups. Anatoly Maltsev gave necessary and sufficient conditions for embeddability in 1937.
Semigroup methods in partial differential equations
Semigroup theory can be used to study some problems in the field of partial differential equations. Roughly speaking, the semigroup approach is to regard a time-dependent partial differential equation as an ordinary differential equation on a function space. For example, consider the following initial/boundary value problem for the heat equation on the spatial interval (0, 1) and times t ≥ 0: find u(t, x) with ∂u/∂t = ∂²u/∂x² for x in (0, 1) and t > 0, subject to the boundary conditions u(t, 0) = u(t, 1) = 0 and the initial condition u(0, x) = u0(x).
Let X be the L² space of square-integrable real-valued functions with domain the interval (0, 1) and let A be the second-derivative operator with domain
D(A) = { u ∈ H²((0, 1); R) : u(0) = u(1) = 0 },
where H² is a Sobolev space. Then the above initial/boundary value problem can be interpreted as an initial value problem for an ordinary differential equation on the space X:
u′(t) = A u(t), u(0) = u0.
On a heuristic level, the solution to this problem "ought" to be u(t) = exp(tA) u0. However, for a rigorous treatment, a meaning must be given to the exponential of tA. As a function of t, exp(tA) is a semigroup of operators from X to itself, taking the initial state u0 at time t = 0 to the state u(t) = exp(tA) u0 at time t. The operator A is said to be the infinitesimal generator of the semigroup.
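A finite-dimensional sketch of the semigroup property, assuming NumPy and SciPy are available: A is taken to be a standard finite-difference discretization of the second-derivative operator with Dirichlet boundary conditions, and the matrix exponentials exp(tA) satisfy the defining identity exp((t + s)A) = exp(tA) exp(sA).

import numpy as np
from scipy.linalg import expm

# Discretized second-derivative (Dirichlet) operator on n interior grid points;
# exp(tA) then forms a one-parameter semigroup of matrices acting on R^n.
n = 5
h = 1.0 / (n + 1)
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2

t, s = 0.1, 0.25
lhs = expm((t + s) * A)
rhs = expm(t * A) @ expm(s * A)
print(np.allclose(lhs, rhs))   # True: exp((t+s)A) = exp(tA) exp(sA)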
History
The study of semigroups trailed behind that of other algebraic structures with more complex axioms such as groups or rings. A number of sources attribute the first use of the term (in French) to J.-A. de Séguier in Élements de la Théorie des Groupes Abstraits (Elements of the Theory of Abstract Groups) in 1904. The term is used in English in 1908 in Harold Hinton's Theory of Groups of Finite Order.
Anton Sushkevich obtained the first non-trivial results about semigroups. His 1928 paper "Über die endlichen Gruppen ohne das Gesetz der eindeutigen Umkehrbarkeit" ("On finite groups without the rule of unique invertibility") determined the structure of finite simple semigroups and showed that the minimal ideal (or Green's relations J-class) of a finite semigroup is simple. From that point on, the foundations of semigroup theory were further laid by David Rees, James Alexander Green, Evgenii Sergeevich Lyapin, Alfred H. Clifford and Gordon Preston. The latter two published a two-volume monograph on semigroup theory in 1961 and 1967 respectively. In 1970, a new periodical called Semigroup Forum (currently edited by Springer Verlag) became one of the few mathematical journals devoted entirely to semigroup theory.
The representation theory of semigroups was developed in 1963 by Boris Schein using binary relations on a set A and composition of relations for the semigroup product. At an algebraic conference in 1972 Schein surveyed the literature on BA, the semigroup of relations on A. In 1997 Schein and Ralph McKenzie proved that every semigroup is isomorphic to a transitive semigroup of binary relations.
In recent years researchers in the field have become more specialized with dedicated monographs appearing on important classes of semigroups, like inverse semigroups, as well as monographs focusing on applications in algebraic automata theory, particularly for finite automata, and also in functional analysis.
Generalizations
If the associativity axiom of a semigroup is dropped, the result is a magma, which is nothing more than a set M equipped with a binary operation M × M → M (that is, an operation that is closed).
Generalizing in a different direction, an n-ary semigroup (also n-semigroup, polyadic semigroup or multiary semigroup) is a generalization of a semigroup to a set G with an n-ary operation instead of a binary operation. The associative law is generalized as follows: ternary associativity is (abc)de = a(bcd)e = ab(cde), i.e. the string abcde with any three adjacent elements bracketed. n-ary associativity is a string of length 2n − 1 with any n adjacent elements bracketed. A 2-ary semigroup is just a semigroup. Further axioms lead to an n-ary group.
A third generalization is the semigroupoid, in which the requirement that the binary operation be total is lifted. As categories generalize monoids in the same way, a semigroupoid behaves much like a category but lacks identities.
Infinitary generalizations of commutative semigroups have sometimes been considered by various authors.
See also
Absorbing element
Biordered set
Empty semigroup
Generalized inverse
Identity element
Light's associativity test
Quantum dynamical semigroup
Semigroup ring
Weak inverse
Notes
Citations
References
General References
Specific references
Semigroup theory
Algebraic structures

Session Initiation Protocol

The Session Initiation Protocol (SIP) is a signaling protocol used for initiating, maintaining, and terminating real-time sessions that include voice, video and messaging applications. SIP is used for signaling and controlling multimedia communication sessions in applications of Internet telephony for voice and video calls, in private IP telephone systems, in instant messaging over Internet Protocol (IP) networks as well as mobile phone calling over LTE (VoLTE).
The protocol defines the specific format of messages exchanged and the sequence of communications for cooperation of the participants. SIP is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol (HTTP) and the Simple Mail Transfer Protocol (SMTP). A call established with SIP may consist of multiple media streams, but no separate streams are required for applications, such as text messaging, that exchange data as payload in the SIP message.
SIP works in conjunction with several other protocols that specify and carry the session media. Most commonly, media type and parameter negotiation and media setup are performed with the Session Description Protocol (SDP), which is carried as payload in SIP messages. SIP is designed to be independent of the underlying transport layer protocol and can be used with the User Datagram Protocol (UDP), the Transmission Control Protocol (TCP), and the Stream Control Transmission Protocol (SCTP). For secure transmissions of SIP messages over insecure network links, the protocol may be encrypted with Transport Layer Security (TLS). For the transmission of media streams (voice, video) the SDP payload carried in SIP messages typically employs the Real-time Transport Protocol (RTP) or the Secure Real-time Transport Protocol (SRTP).
History
SIP was originally designed by Mark Handley, Henning Schulzrinne, Eve Schooler and Jonathan Rosenberg in 1996 to facilitate establishing multicast multimedia sessions on the Mbone. The protocol was standardized by the IETF in 1999. In November 2000, SIP was accepted as a 3GPP signaling protocol and permanent element of the IP Multimedia Subsystem (IMS) architecture for IP-based streaming multimedia services in cellular networks. In June 2002 the specification was revised, and various extensions and clarifications have been published since.
SIP was designed to provide a signaling and call setup protocol for IP-based communications supporting the call processing functions and features present in the public switched telephone network (PSTN) with a vision of supporting new multimedia applications. It has been extended for video conferencing, streaming media distribution, instant messaging, presence information, file transfer, Internet fax and online games.
SIP is distinguished by its proponents for having roots in the Internet community rather than in the telecommunications industry. SIP has been standardized primarily by the Internet Engineering Task Force (IETF), while other protocols, such as H.323, have traditionally been associated with the International Telecommunication Union (ITU).
Protocol operation
SIP is only involved in the signaling operations of a media communication session and is primarily used to set up and terminate voice or video calls. SIP can be used to establish two-party (unicast) or multiparty (multicast) sessions. It also allows modification of existing calls. The modification can involve changing addresses or ports, inviting more participants, and adding or deleting media streams. SIP has also found applications in messaging applications, such as instant messaging, and event subscription and notification.
SIP works in conjunction with several other protocols that specify the media format and coding and that carry the media once the call is set up. For call setup, the body of a SIP message contains a Session Description Protocol (SDP) data unit, which specifies the media format, codec and media communication protocol. Voice and video media streams are typically carried between the terminals using the Real-time Transport Protocol (RTP) or Secure Real-time Transport Protocol (SRTP).
Every resource of a SIP network, such as user agents, call routers, and voicemail boxes, are identified by a Uniform Resource Identifier (URI). The syntax of the URI follows the general standard syntax also used in Web services and e-mail. The URI scheme used for SIP is sip and a typical SIP URI has the form sip:username@domainname or sip:username@hostport, where domainname requires DNS SRV records to locate the servers for SIP domain while hostport can be an IP address or a fully qualified domain name of the host and port. If secure transmission is required, the scheme sips is used.
SIP employs design elements similar to the HTTP request and response transaction model. Each transaction consists of a client request that invokes a particular method or function on the server and at least one response. SIP reuses most of the header fields, encoding rules and status codes of HTTP, providing a readable text-based format.
SIP can be carried by several transport layer protocols including Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Stream Control Transmission Protocol (SCTP). SIP clients typically use TCP or UDP on port numbers 5060 or 5061 for SIP traffic to servers and other endpoints. Port 5060 is commonly used for non-encrypted signaling traffic whereas port 5061 is typically used for traffic encrypted with Transport Layer Security (TLS).
SIP-based telephony networks often implement call processing features of Signaling System 7 (SS7), for which special SIP protocol extensions exist, although the two protocols themselves are very different. SS7 is a centralized protocol, characterized by a complex central network architecture and dumb endpoints (traditional telephone handsets). SIP is a client-server protocol of equipotent peers. SIP features are implemented in the communicating endpoints, while the traditional SS7 architecture is in use only between switching centers.
Network elements
The network elements that use the Session Initiation Protocol for communication are called SIP user agents. Each user agent (UA) performs the function of a user agent client (UAC) when it is requesting a service function, and that of a user agent server (UAS) when responding to a request. Thus, any two SIP endpoints may in principle operate without any intervening SIP infrastructure. However, for network operational reasons, for provisioning public services to users, and for directory services, SIP defines several specific types of network server elements. Each of these service elements also communicates within the client-server model implemented in user agent clients and servers.
User agent
A user agent is a logical network endpoint that sends or receives SIP messages and manages SIP sessions. User agents have client and server components. The user agent client (UAC) sends SIP requests. The user agent server (UAS) receives requests and returns a SIP response. Unlike other network protocols that fix the roles of client and server, e.g., in HTTP, in which a web browser only acts as a client, and never as a server, SIP requires both peers to implement both roles. The roles of UAC and UAS only last for the duration of a SIP transaction.
A SIP phone is an IP phone that implements client and server functions of a SIP user agent and provides the traditional call functions of a telephone, such as dial, answer, reject, call hold, and call transfer. SIP phones may be implemented as a hardware device or as a softphone. As vendors increasingly implement SIP as a standard telephony platform, the distinction between hardware-based and software-based SIP phones is blurred and SIP elements are implemented in the basic firmware functions of many IP-capable communications devices such as smartphones.
In SIP, as in HTTP, the user agent may identify itself using a message header field (User-Agent), containing a text description of the software, hardware, or the product name. The user agent field is sent in request messages, which means that the receiving SIP server can evaluate this information to perform device-specific configuration or feature activation. Operators of SIP network elements sometimes store this information in customer account portals, where it can be useful in diagnosing SIP compatibility problems or in the display of service status.
Proxy server
A proxy server is a network server with UAC and UAS components that functions as an intermediary entity for the purpose of performing requests on behalf of other network elements. A proxy server primarily plays the role of call routing; it sends SIP requests to another entity closer to its destination. Proxies are also useful for enforcing policy, such as determining whether a user is allowed to make a call. A proxy interprets, and, if necessary, rewrites specific parts of a request message before forwarding it.
SIP proxy servers that route messages to more than one destination are called forking proxies. The forking of a SIP request establishes multiple dialogs from the single request. Thus, a call may be answered from one of multiple SIP endpoints. For identification of multiple dialogs, each dialog has an identifier with contributions from both endpoints.
Redirect server
A redirect server is a user agent server that generates 3xx (redirection) responses to requests it receives, directing the client to contact an alternate set of URIs. A redirect server allows proxy servers to direct SIP session invitations to external domains.
Registrar
A registrar is a SIP endpoint that provides a location service. It accepts REGISTER requests, recording the address and other parameters from the user agent. For subsequent requests, it provides an essential means to locate possible communication peers on the network. The location service links one or more IP addresses to the SIP URI of the registering agent. Multiple user agents may register for the same URI, with the result that all registered user agents receive the calls to the URI.
SIP registrars are logical elements and are often co-located with SIP proxies. To improve network scalability, location services may instead be located with a redirect server.
Session border controller
Session border controllers (SBCs) serve as middleboxes between user agents and SIP servers for various types of functions, including network topology hiding and assistance in NAT traversal. SBCs are an independently engineered solution and are not mentioned in the SIP RFC.
Gateway
Gateways can be used to interconnect a SIP network to other networks, such as the PSTN, which use different protocols or technologies.
SIP messages
SIP is a text-based protocol with syntax similar to that of HTTP. There are two different types of SIP messages: requests and responses. The first line of a request has a method, defining the nature of the request, and a Request-URI, indicating where the request should be sent. The first line of a response has a response code.
Requests
Requests initiate a functionality of the protocol. They are sent by a user agent client to the server and are answered with one or more SIP responses, which return a result code of the transaction, and generally indicate the success, failure, or other state of the transaction.
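A minimal sketch of a request/response round trip in Python, using an OPTIONS request (which queries a server's capabilities) sent over UDP. The server name, user addresses, and tag value are hypothetical, and a real user agent would generate the Via branch, tag, and Call-ID values according to the rules of the specification.

import socket
import uuid

# Hypothetical addresses for illustration; a real client would use the
# server's resolved address and its own reachable host name and port.
SERVER = ("sip.example.com", 5060)
LOCAL_USER = "alice@client.example.org"

call_id = uuid.uuid4().hex
branch = "z9hG4bK" + uuid.uuid4().hex   # branch values begin with this magic cookie

request = (
    f"OPTIONS sip:{SERVER[0]} SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP client.example.org:5060;branch={branch}\r\n"
    f"Max-Forwards: 70\r\n"
    f"From: <sip:{LOCAL_USER}>;tag=12345\r\n"
    f"To: <sip:{SERVER[0]}>\r\n"
    f"Call-ID: {call_id}\r\n"
    f"CSeq: 1 OPTIONS\r\n"
    f"Content-Length: 0\r\n"
    f"\r\n"
)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(request.encode("utf-8"), SERVER)
    data, _ = sock.recvfrom(65535)
    print(data.decode("utf-8", errors="replace").splitlines()[0])   # e.g. "SIP/2.0 200 OK"

A 200 OK answer to OPTIONS typically lists the methods and extensions the server supports in header fields such as Allow and Supported.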
Responses
Responses are sent by the user agent server indicating the result of a received request. Several classes of responses are recognized, determined by the numerical range of result codes:
1xx: Provisional responses to requests indicate the request was valid and is being processed.
2xx: Successful completion of the request. As a response to an INVITE, it indicates a call is established. The most common code is 200, which is an unqualified success report.
3xx: Call redirection is needed for completion of the request. The request must be completed with a new destination.
4xx: The request cannot be completed at the server for a variety of reasons, including bad request syntax (code 400).
5xx: The server failed to fulfill an apparently valid request, including server internal errors (code 500).
6xx: The request cannot be fulfilled at any server. It indicates a global failure, including call rejection by the destination.
Transactions
SIP defines a transaction mechanism to control the exchanges between participants and deliver messages reliably. A transaction is a state of a session, which is controlled by various timers. Client transactions send requests and server transactions respond to those requests with one or more responses. The responses may include provisional responses with a response code in the form 1xx, and one or multiple final responses (2xx – 6xx).
Transactions are further categorized as either type invite or type non-invite. Invite transactions differ in that they can establish a long-running conversation, referred to as a dialog in SIP, and so include an acknowledgment (ACK) of any non-failing final response, e.g., 200 OK.
Instant messaging and presence
The Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE) is the SIP-based suite of standards for instant messaging and presence information. Message Session Relay Protocol (MSRP) allows instant message sessions and file transfer.
Conformance testing
The SIP developer community meets regularly at conferences organized by SIP Forum to test interoperability of SIP implementations. The TTCN-3 test specification language, developed by a task force at ETSI (STF 196), is used for specifying conformance tests for SIP implementations.
Performance testing
When developing SIP software or deploying a new SIP infrastructure, it is important to test the capability of servers and IP networks to handle a certain call load: the number of concurrent calls and the number of calls per second. SIP performance tester software is used to simulate SIP and RTP traffic to see whether the server and IP network are stable under the call load. The software measures performance indicators such as answer delay, answer/seizure ratio, RTP jitter, packet loss, and round-trip delay time.
Applications
SIP connection is a marketing term for voice over Internet Protocol (VoIP) services offered by many Internet telephony service providers (ITSPs). The service provides routing of telephone calls from a client's private branch exchange (PBX) telephone system to the PSTN. Such services may simplify corporate information system infrastructure by sharing Internet access for voice and data, and removing the cost for Basic Rate Interface (BRI) or Primary Rate Interface (PRI) telephone circuits.
SIP trunking is a similar marketing term preferred for when the service is used to simplify a telecom infrastructure by sharing the carrier access circuit for voice, data, and Internet traffic while removing the need for PRI circuits.
SIP-enabled video surveillance cameras can initiate calls to alert the operator of events, such as the motion of objects in a protected area.
SIP is used in audio over IP for broadcasting applications where it provides an interoperable means for audio interfaces from different manufacturers to make connections with one another.
Implementations
The U.S. National Institute of Standards and Technology (NIST), Advanced Networking Technologies Division provides a public-domain Java implementation that serves as a reference implementation for the standard. The implementation can work in proxy server or user agent scenarios and has been used in numerous commercial and research projects. It supports the core specification in full and a number of extension RFCs, including event notification and reliable provisional responses.
Numerous other commercial and open-source SIP implementations exist. See List of SIP software.
SIP-ISUP interworking
SIP-I, Session Initiation Protocol with encapsulated ISUP, is a protocol used to create, modify, and terminate communication sessions based on ISUP using SIP and IP networks. Services using SIP-I include voice, video telephony, fax and data. SIP-I and SIP-T are two protocols with similar features, notably to allow ISUP messages to be transported over SIP networks. This preserves all of the detail available in the ISUP header. SIP-I was defined by the ITU-T, whereas SIP-T was defined by the IETF.
Encryption
Concerns about the security of calls via the public Internet have been addressed by encryption of the SIP protocol for secure transmission. The URI scheme SIPS is used to mandate that SIP communication be secured with Transport Layer Security (TLS). SIPS URIs take the form sips:user@example.com.
End-to-end encryption of SIP is only possible if there is a direct connection between communication endpoints. While a direct connection can be made via Peer-to-peer SIP or via a VPN between the endpoints, most SIP communication involves multiple hops, with the first hop being from a user agent to the user agent's ITSP. For the multiple-hop case, SIPS will only secure the first hop; the remaining hops will normally not be secured with TLS and the SIP communication will be insecure. In contrast, the HTTPS protocol provides end-to-end security as it is done with a direct connection and does not involve the notion of hops.
The media streams (audio and video), which are separate connections from the SIPS signaling stream, may be encrypted using SRTP. The key exchange for SRTP is performed with SDES or with ZRTP. When SDES is used, the keys will be transmitted via insecure SIP unless SIPS is used. One may also add a MIKEY exchange to SIP to determine session keys for use with SRTP.
See also
Computer telephony integration (CTI)
Computer-supported telecommunications applications (CSTA)
H.323 protocols H.225.0 and H.245
IP Multimedia Subsystem (IMS)
Media Gateway Control Protocol (MGCP)
Mobile VoIP
MSCML (Media Server Control Markup Language)
Network convergence
Rendezvous protocol
RTP payload formats
SIGTRAN (Signaling Transport)
SIP extensions for the IP Multimedia Subsystem
SIP provider
Skinny Client Control Protocol (SCCP)
T.38
XIMSS (XML Interface to Messaging, Scheduling, and Signaling)
Notes
References
External links
IANA: SIP Parameters
IANA: SIP Event Types Namespace
VoIP protocols
Videotelephony
Application layer protocols

Session Description Protocol

The Session Description Protocol (SDP) is a format for describing multimedia communication sessions for the purposes of announcement and invitation. Its predominant use is in support of streaming media applications, such as voice over IP (VoIP) and video conferencing. SDP does not deliver any media streams itself but is used between endpoints for negotiation of network metrics, media types, and other associated properties. The set of properties and parameters is called a session profile.
SDP is extensible for the support of new media types and formats. SDP was originally a component of the Session Announcement Protocol (SAP), but found other uses in conjunction with the Real-time Transport Protocol (RTP), the Real-time Streaming Protocol (RTSP), Session Initiation Protocol (SIP), and as a standalone protocol for describing multicast sessions.
The IETF published the original specification as a Proposed Standard in April 1998. Revised specifications were released in 2006 (RFC 4566) and in 2021 (RFC 8866).
Session description
The Session Description Protocol describes a session as a group of fields in a text-based format, one field per line. The form of each field is as follows.
<character>=<value><CR><LF>
Where <character> is a single case-sensitive character and <value> is structured text in a format that depends on the character. Values are typically UTF-8 encoded. Whitespace is not allowed immediately to either side of the equal sign.
Session descriptions consist of three sections: session, timing, and media descriptions. Each description may contain multiple timing and media descriptions. Names are only unique within the associated syntactic construct.
Fields must appear in the order shown; optional fields are marked with an asterisk:
Session description
v= (protocol version number, currently only 0)
o= (originator and session identifier : username, id, version number, network address)
s= (session name : mandatory with at least one UTF-8-encoded character)
i=* (session title or short information)
u=* (URI of description)
e=* (zero or more email address with optional name of contacts)
p=* (zero or more phone number with optional name of contacts)
c=* (connection information—not required if included in all media)
b=* (zero or more bandwidth information lines)
One or more time descriptions ("t=" and "r=" lines; see below)
z=* (time zone adjustments)
k=* (encryption key)
a=* (zero or more session attribute lines)
Zero or more Media descriptions (each one starting by an "m=" line; see below)
Time description (mandatory)
t= (time the session is active)
r=* (zero or more repeat times)
Media description (optional)
m= (media name and transport address)
i=* (media title or information field)
c=* (connection information — optional if included at session level)
b=* (zero or more bandwidth information lines)
k=* (encryption key)
a=* (zero or more media attribute lines — overriding the Session attribute lines)
Below is a sample session description from RFC 4566. This session is originated by the user "jdoe", at IPv4 address 10.47.16.5. Its name is "SDP Seminar" and extended session information ("A Seminar on the session description protocol") is included along with a link for additional information and an email address to contact the responsible party, Jane Doe. This session is specified to last for two hours using NTP timestamps, with a connection address (which indicates the address clients must connect to or — when a multicast address is provided, as it is here — subscribe to) specified as IPv4 224.2.17.12 with a TTL of 127. Recipients of this session description are instructed to only receive media. Two media descriptions are provided, both using RTP Audio Video Profile. The first is an audio stream on port 49170 using RTP/AVP payload type 0 (defined by RFC 3551 as PCMU), and the second is a video stream on port 51372 using RTP/AVP payload type 99 (defined as "dynamic"). Finally, an attribute is included which maps RTP/AVP payload type 99 to format h263-1998 with a 90 kHz clock rate. RTCP ports for the audio and video streams of 49171 and 51373, respectively, are implied.
v=0
o=jdoe 2890844526 2890842807 IN IP4 10.47.16.5
s=SDP Seminar
i=A Seminar on the session description protocol
u=http://www.example.com/seminars/sdp.pdf
e=j.doe@example.com (Jane Doe)
c=IN IP4 224.2.17.12/127
t=2873397496 2873404696
a=recvonly
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 99
a=rtpmap:99 h263-1998/90000
The SDP specification is purely a format for session description. It is intended to be distributed over different transport protocols as necessary, including SAP, SIP, and RTSP. SDP could even be transmitted by email or as an HTTP payload.
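A minimal parsing sketch in Python, assuming well-formed input as described above: each line is split at the first "=", and every "m=" line opens a new media section, with everything before the first "m=" belonging to the session section.

def parse_sdp(text: str):
    """Split an SDP description into a session section and media sections.

    Each section is a list of (type, value) pairs, preserving order and
    allowing repeated field types such as multiple "a=" lines.
    """
    session, media = [], []
    current = session
    for line in text.splitlines():
        if not line.strip():
            continue
        key, _, value = line.partition("=")
        if key == "m":                 # an m= line starts a new media description
            media.append([])
            current = media[-1]
        current.append((key, value))
    return session, media

example = """v=0
o=jdoe 2890844526 2890842807 IN IP4 10.47.16.5
s=SDP Seminar
c=IN IP4 224.2.17.12/127
a=recvonly
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 99
a=rtpmap:99 h263-1998/90000"""

session, media = parse_sdp(example)
print([v for k, v in media[1] if k == "a"])   # ['rtpmap:99 h263-1998/90000']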
Attributes
SDP uses attributes to extend the core protocol. Attributes can appear within the Session or Media sections and are scoped accordingly as session-level or media-level. New attributes are added to the standard occasionally through registration with IANA.
Attributes are either properties or values:
Property: a=flag conveys a boolean property of the media or session.
Value: a=attribute:value provides a named parameter.
Two of these attributes are specially defined:
a=charset:encoding is used in the session or media sections to specify a different character encoding (as registered in the IANA registry) from the recommended default value (UTF-8) for standard protocol keys. These values contain a text that is intended to be displayed to a user.
a=sdplang:code is used to specify the language of text. Alternate text in multiple languages may be carried in the session, and selected automatically by the user agent according to user preferences.
In both cases, text fields intended to be displayed to a user are interpreted as opaque strings, but rendered to the user or application with the values indicated in the last occurrence of the fields charset and sdplang in the current media section, or otherwise their last value in the session section.
The parameters v, s, and o are mandatory, must not be empty, and should be UTF-8-encoded. They are used as identifiers and are not intended to be displayed to users.
A few other attributes are also present in the example, either as a session-level attribute (such as the attribute in property form a=recvonly), or as a media-level attribute (such as the attribute in value form a=rtpmap:99 h263-1998/90000 for the video in the example).
Time formats and repetitions
Absolute times are represented in Network Time Protocol (NTP) format (the number of seconds since 1900). If the stop time is 0 then the session is unbounded. If the start time is also zero then the session is considered permanent. Unbounded and permanent sessions are discouraged but not prohibited.
Intervals can be represented with NTP times or in typed time: a value and time units (days: d, hours: h, minutes: m and seconds: s) sequence.
Thus an hour meeting from 10 am UTC on 1 August 2010, with a single repeat time a week later at the same time can be represented as:
t=
r=604800 3600 0
Or using typed time:
t=
r=7d 1h 0
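The absolute values in a t= line are NTP timestamps (seconds since 1900), which are offset from the Unix epoch by 2,208,988,800 seconds. A small sketch of how such values can be derived for the meeting described above, assuming the stop time is taken to be the end of the final repetition:

from datetime import datetime, timezone

NTP_UNIX_OFFSET = 2_208_988_800   # seconds between 1900-01-01 and 1970-01-01

def ntp_timestamp(dt: datetime) -> int:
    """Convert an aware datetime to an NTP timestamp (seconds since 1900)."""
    return int(dt.timestamp()) + NTP_UNIX_OFFSET

start = ntp_timestamp(datetime(2010, 8, 1, 10, 0, tzinfo=timezone.utc))
stop = start + 7 * 86400 + 3600       # end of the final (repeated) one-hour occurrence
print(f"t={start} {stop}")
print("r=604800 3600 0")              # repeat weekly, 1 h duration, offset 0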
When repeat times are specified, the start time of each repetition may need to be adjusted to compensate for daylight saving time changes so that it will occur at the same local time in a specific time zone throughout the period between the start time and the stop time.
Instead of specifying this time zone and having to support a database of time zones for knowing when and where daylight adjustments will be needed, the repeat times are assumed to be all defined within the same time zone, and SDP supports the indication of NTP absolute times when a daylight offset (expressed in seconds or using a typed time) will need to be applied to the repeated start time or end time falling at or after each daylight adjustment. All these offsets are relative to the start time; they are not cumulative. SDP supports this with the field z, which indicates a series of pairs whose first item is the NTP absolute time when a daylight adjustment will occur, and whose second item indicates the offset to apply relative to the absolute times computed with the field r.
For example, if a daylight adjustment will subtract 1 hour on 31 October 2010 at 3 am UTC (i.e. 60 days minus 7 hours after the start time on Sunday 1 August 2010 at 10am UTC), and this will be the only daylight adjustment to apply in the scheduled period which would occur between 1 August 2010 up to the 28 November 2010 at 10 am UTC (the stop time of the repeated 1-hour session which is repeated each week at the same local time, which occurs 88 days later), this can be specified as:
t=
r=7d 1h 0
z= -1h
If the weekly 1-hour session was repeated every Sunday for one full year, i.e. from Sunday 1 August 2010 3 am UTC to Sunday 26 June 2011 4 am UTC (stop time of the last repeat, i.e. 360 days plus 1 hour later, or 31107600 seconds later), so that it would include the transition back to Summer time on Sunday 27 March 2011 at 2 am (1 hour is added again to local time so that the second daylight transition would occur 209 days after the first start time):
t=
r=7d 1h 0
z= -1h 0
As SDP announcements for repeated sessions should not be made to cover very long periods exceeding a few years, the number of daylight adjustments to include in the z= parameter should remain small.
Sessions may be repeated irregularly over a week but scheduled the same way for all weeks in the period, by adding more tuples in the r parameter. For example, to schedule the same event also on Saturday (at the same time of the day) you would use:
t=
r=7d 1h 0 6d
z= -1h 0
The SDP protocol does not support repeating sessions on monthly and yearly schedules with such simple repeat times, because they are irregularly spaced in time; instead, additional t/r tuples may be supplied for each month or year.
Notes
References
External links
Internet Standards
Java specification requests
VoIP protocols

Session Announcement Protocol

The Session Announcement Protocol (SAP) is an experimental protocol for advertising multicast session information. SAP typically uses Session Description Protocol (SDP) as the format for Real-time Transport Protocol (RTP) session descriptions. Announcement data is sent using IP multicast and the User Datagram Protocol (UDP).
Under SAP, senders periodically transmit SDP descriptions to a well-known multicast address and port number (9875). A listening application constructs a guide of all advertised multicast sessions.
SAP was published by the IETF as RFC 2974.
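A minimal listener sketch in Python: the well-known announcement port is given above, while the multicast group address 224.2.127.254 is assumed here as the conventional global-scope SAP group. The fixed SAP header and the optional payload-type string are skipped heuristically by searching for the start of the SDP body.

import socket
import struct

SAP_GROUP = "224.2.127.254"   # assumed conventional global-scope SAP group
SAP_PORT = 9875

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SAP_PORT))

# Join the multicast group on the default interface.
mreq = struct.pack("4s4s", socket.inet_aton(SAP_GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, addr = sock.recvfrom(65535)
    # Crude header skip: the SDP payload of an announcement starts at "v=0".
    start = packet.find(b"v=0")
    if start != -1:
        sdp = packet[start:].decode("utf-8", errors="replace")
        print(f"Announcement from {addr[0]}:")
        print("\n".join(sdp.splitlines()[:3]))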
Announcement interval
The announcement interval is cooperatively modulated such that all SAP announcements in the multicast delivery scope, by default, consume 4000 bits per second. Regardless, the minimum announce interval is 300 seconds (5 minutes). Announcements automatically expire after 10 times the announcement interval or one hour, whichever is greater. Announcements may also be explicitly withdrawn by the original issuer.
Authentication, encryption and compression
SAP features separate methods for authenticating and encrypting announcements. Use of encryption is not recommended. Authentication prevents unauthorized modification and other DoS attacks. Authentication is optional. Two authentication schemes are supported:
Pretty Good Privacy as defined in RFC 2440
Cryptographic Message Syntax as defined in RFC 5652
The message body may optionally be compressed using the zlib format as defined in RFC 1950.
Applications and implementations
VLC media player monitors SAP announcements and presents the user a list of available streams.
SAP is one of the optional discovery and connection management techniques described in the AES67 audio-over-Ethernet interoperability standard.
References
External links
Session Announcement Protocol (SAP)
SAP/SDP Listener
Internet protocols
Internet Standards

Steganography

Steganography is the practice of concealing a message within another message or a physical object. In computing/electronic contexts, a computer file, message, image, or video is concealed within another file, message, image, or video. The word steganography comes from Greek steganographia, which combines the words steganós, meaning "covered or concealed", and -graphia, meaning "writing".
The first recorded use of the term was in 1499 by Johannes Trithemius in his Steganographia, a treatise on cryptography and steganography, disguised as a book on magic. Generally, the hidden messages appear to be (or to be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a shared secret are forms of security through obscurity, and key-dependent steganographic schemes adhere to Kerckhoffs's principle.
The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages, no matter how unbreakable they are, arouse interest and may in themselves be incriminating in countries in which encryption is illegal.
Whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing the fact that a secret message is being sent and its contents.
Steganography includes the concealment of information within computer files. In digital steganography, electronic communications may include steganographic coding inside of a transport layer, such as a document file, image file, program, or protocol. Media files are ideal for steganographic transmission because of their large size. For example, a sender might start with an innocuous image file and adjust the color of every hundredth pixel to correspond to a letter in the alphabet. The change is so subtle that someone who is not specifically looking for it is unlikely to notice the change.
History
The first recorded uses of steganography can be traced back to 440 BC in Greece, when Herodotus mentions two examples in his Histories. Histiaeus sent a message to his vassal, Aristagoras, by shaving the head of his most trusted servant, "marking" the message onto his scalp, then sending him on his way once his hair had regrown, with the instruction, "When thou art come to Miletus, bid Aristagoras shave thy head, and look thereon." Additionally, Demaratus sent a warning about a forthcoming attack to Greece by writing it directly on the wooden backing of a wax tablet before applying its beeswax surface. Wax tablets were in common use then as reusable writing surfaces, sometimes used for shorthand.
In his work Polygraphiae, Johannes Trithemius developed his so-called "Ave-Maria-Cipher" that can hide information in a Latin praise of God. "Auctor Sapientissimus Conseruans Angelica Deferat Nobis Charitas Potentissimi Creatoris" for example contains the concealed word VICIPEDIA.
Techniques
Physical
Steganography has been widely used for centuries. Some examples include:
Hidden messages on a paper written in secret inks.
Hidden messages distributed, according to a certain rule or key, as smaller parts (e.g. words or letters) among other words of a less suspicious cover text. This particular form of steganography is called a null cipher.
Messages written in Morse code on yarn and then knitted into a piece of clothing worn by a courier.
Messages written on envelopes in the area covered by postage stamps.
In the early days of the printing press, it was common to mix different typefaces on a printed page because the printer did not have enough copies of some letters in one typeface. Thus, a message could be hidden by using two or more different typefaces, such as normal or italic.
During and after World War II, espionage agents used photographically-produced microdots to send information back and forth. Microdots were typically minute (less than the size of the period produced by a typewriter). World War II microdots were embedded in the paper and covered with an adhesive, such as collodion that was reflective and so was detectable by viewing against glancing light. Alternative techniques included inserting microdots into slits cut into the edge of postcards.
During World War II, Velvalee Dickinson, a spy for Japan in New York City, sent information to accommodation addresses in neutral South America. She was a dealer in dolls, and her letters discussed the quantity and type of doll to ship. The stegotext was the doll orders, and the concealed "plaintext" was itself encoded and gave information about ship movements, etc. Her case became somewhat famous and she became known as the Doll Woman.
During World War II, photosensitive glass was declared secret, and used for transmitting information to Allied armies.
Jeremiah Denton repeatedly blinked his eyes in Morse code during the 1966 televised press conference that he was forced into as an American prisoner-of-war by his North Vietnamese captors, spelling out "T-O-R-T-U-R-E". That confirmed for the first time to the US Naval Intelligence and other Americans that the North Vietnamese were torturing American prisoners-of-war.
In 1968, crew members of the USS Pueblo intelligence ship, held as prisoners by North Korea, communicated in sign language during staged photo opportunities, to inform the United States that they were not defectors but captives of the North Koreans. In other photos presented to the US, crew members gave "the finger" to the unsuspecting North Koreans, in an attempt to discredit photos that showed them smiling and comfortable.
Digital messages
Modern steganography entered the world in 1985, when personal computers began to be applied to classical steganography problems. Development following that was very slow, but has since taken off, judging by the large number of steganography software packages available:
Concealing messages within the lowest bits of noisy images or sound files. Surveys and evaluations of digital image steganography techniques are available in the research literature.
Concealing data within encrypted data or within random data. The message to conceal is encrypted, then used to overwrite part of a much larger block of encrypted data or a block of random data (an unbreakable cipher like the one-time pad generates ciphertexts that look perfectly random without the private key).
Chaffing and winnowing.
Mimic functions convert one file to have the statistical profile of another. This can thwart statistical methods that help brute-force attacks identify the right solution in a ciphertext-only attack.
Concealed messages in tampered executable files, exploiting redundancy in the targeted instruction set.
Pictures embedded in video material (optionally played at a slower or faster speed).
Injecting imperceptible delays to packets sent over the network from the keyboard. Delays in keypresses in some applications (telnet or remote desktop software) can mean a delay in packets, and the delays in the packets can be used to encode data.
Changing the order of elements in a set.
Content-Aware Steganography hides information in the semantics a human user assigns to a datagram. These systems offer security against a nonhuman adversary/warden.
Blog-Steganography. Messages are fractionalized and the (encrypted) pieces are added as comments of orphaned web-logs (or pin boards on social network platforms). In this case, the selection of blogs is the symmetric key that sender and recipient are using; the carrier of the hidden message is the whole blogosphere.
Modifying the echo of a sound file (Echo Steganography).
Steganography for audio signals.
Image bit-plane complexity segmentation steganography
Including data in ignored sections of a file, such as after the logical end of the carrier file.
Adaptive steganography: Skin tone based steganography using a secret embedding angle.
Embedding data within the control-flow diagram of a program subjected to control flow analysis
Digital text
Using non-printing Unicode characters such as the Zero-Width Joiner (ZWJ) and Zero-Width Non-Joiner (ZWNJ). These characters are used for joining and disjoining letters in Arabic and Persian, but can be used in Roman alphabets for hiding information because they have no meaning there: being "zero-width", they are not displayed. ZWJ and ZWNJ can represent "1" and "0". This may also be done with en space, figure space and whitespace characters (a minimal encoding sketch follows this list).
Embedding a secret message in the pattern of deliberate errors and marked corrections in a word processing document, using the word processor's change tracking feature.
In 2020, Zhongliang Yang et al discovered that for text generative steganography, when the quality of the generated steganographic text is optimized to a certain extent, it may make the overall statistical distribution characteristics of the generated steganographic text more different from the normal text, making it easier to be recognized. They named this phenomenon Perceptual-Statistical Imperceptibility Conflict Effect (Psic Effect).
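As referenced in the list above, a minimal sketch of the zero-width-character technique follows, assuming ZWNJ encodes a 0 bit and ZWJ encodes a 1 bit; real schemes typically interleave the invisible characters throughout the cover text and encrypt the payload first.

```python
ZWNJ, ZWJ = "\u200c", "\u200d"   # zero-width non-joiner -> 0, zero-width joiner -> 1

def hide(cover: str, secret: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in secret)
    invisible = "".join(ZWJ if b == "1" else ZWNJ for b in bits)
    return cover + invisible               # appended here; could also be interleaved

def reveal(text: str) -> bytes:
    bits = "".join("1" if ch == ZWJ else "0" for ch in text if ch in (ZWJ, ZWNJ))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

stego = hide("An ordinary looking sentence.", b"hi")
print(stego)            # renders identically to the cover text
print(reveal(stego))    # b'hi'
```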
Hiding an image within a soundfile
An image or a text can be converted into a soundfile, which is then analysed with a spectrogram to reveal the image. Various artists have used this method to conceal hidden pictures in their songs, such as Aphex Twin in "Windowlicker" or Nine Inch Nails in their album Year Zero.
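A simplified sketch of this idea, assuming NumPy and SciPy are available: each image column becomes a short time slice and each row drives a sine wave at a fixed frequency, so a spectrogram view of the resulting WAV file shows the picture.

```python
import numpy as np
from scipy.io import wavfile

def image_to_audio(img, out_path="hidden_image.wav", rate=44100,
                   col_seconds=0.05, f_lo=200.0, f_hi=8000.0):
    """img: 2-D array of intensities in [0, 1]; rows map to frequencies,
    columns map to time, so a spectrogram of the output shows the image."""
    rows, cols = img.shape
    freqs = np.linspace(f_hi, f_lo, rows)            # top row -> highest frequency
    t = np.arange(int(rate * col_seconds)) / rate
    audio = np.concatenate([
        (img[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        for c in range(cols)
    ])
    audio /= np.abs(audio).max() + 1e-12             # normalise to [-1, 1]
    wavfile.write(out_path, rate, audio.astype(np.float32))

# Example: a coarse diagonal line, visible when the WAV is viewed as a spectrogram.
image_to_audio(np.eye(64))
```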
Social steganography
In communities with social or government taboos or censorship, people use cultural steganography—hiding messages in idiom, pop culture references, and other messages they share publicly and assume are monitored. This relies on social context to make the underlying messages visible only to certain readers. Examples include:
Hiding a message in the title and context of a shared video or image.
Misspelling names or words that are popular in the media in a given week, to suggest an alternate meaning.
Hiding a picture that can be traced by using Paint or any other drawing tool.
Steganography in streaming media
With the rise of networked applications, steganography research has shifted from image steganography to steganography in streaming media such as Voice over Internet Protocol (VoIP).
In 2003, Giannoula et al. developed a data hiding technique leading to compressed forms of source video signals on a frame-by-frame basis.
In 2005, Dittmann et al. studied steganography and watermarking of multimedia contents such as VoIP.
In 2008, Yongfeng Huang and Shanyu Tang presented a novel approach to information hiding in low bit-rate VoIP speech stream, and their published work on steganography is the first-ever effort to improve the codebook partition by using Graph theory along with Quantization Index Modulation in low bit-rate streaming media.
In 2011 and 2012, Yongfeng Huang and Shanyu Tang devised new steganographic algorithms that use codec parameters as cover object to realise real-time covert VoIP steganography. Their findings were published in IEEE Transactions on Information Forensics and Security.
Cyber-physical systems/Internet of Things
Academic work since 2012 demonstrated the feasibility of steganography for cyber-physical systems (CPS)/the Internet of Things (IoT). Some techniques of CPS/IoT steganography overlap with network steganography, i.e. hiding data in communication protocols used in CPS/the IoT. However, specific techniques hide data in CPS components. For instance, data can be stored in unused registers of IoT/CPS components and in the states of IoT/CPS actuators.
Printed
Digital steganography output may be in the form of printed documents. A message, the plaintext, may be first encrypted by traditional means, producing a ciphertext. Then, an innocuous cover text is modified in some way so as to contain the ciphertext, resulting in the stegotext. For example, the letter size, spacing, typeface, or other characteristics of a cover text can be manipulated to carry the hidden message. Only a recipient who knows the technique used can recover the message and then decrypt it. Francis Bacon developed Bacon's cipher as such a technique.
The ciphertext produced by most digital steganography methods, however, is not printable. Traditional digital methods rely on perturbing noise in the channel file to hide the message, and as such, the channel file must be transmitted to the recipient with no additional noise from the transmission. Printing introduces much noise in the ciphertext, generally rendering the message unrecoverable. There are techniques that address this limitation, one notable example being ASCII Art Steganography.
Although not classic steganography, some types of modern color laser printers encode the printer model, serial number, and timestamps on each printout for traceability reasons, using a dot-matrix code made of small, yellow dots not recognizable to the naked eye; see printer steganography for details.
Using puzzles
The art of concealing data in a puzzle can take advantage of the degrees of freedom in stating the puzzle, using the starting information to encode a key within the puzzle/puzzle image.
For instance, steganography using sudoku puzzles has as many keys as there are possible solutions of a sudoku puzzle, which is about 6.67×10^21.
Network
In 1977, Kent concisely described the potential for covert channel signaling in general network communication protocols, even if the traffic is encrypted, in a footnote of "Encryption-Based Protection for Interactive User/Computer Communication," Proceedings of the Fifth Data Communications Symposium, September 1977.
In 1987, Girling first studied covert channels on a local area network (LAN), identifying and realising three obvious covert channels (two storage channels and one timing channel); his research paper "Covert channels in LAN's" was published in IEEE Transactions on Software Engineering, vol. SE-13, no. 2, February 1987.
In 1989, Wolf implemented covert channels in LAN protocols, e.g. using the reserved fields, pad fields, and undefined fields in the TCP/IP protocol.
In 1997, Rowland used the IP identification field, the TCP initial sequence number and acknowledge sequence number fields in TCP/IP headers to build covert channels.
In 2002, Kamran Ahsan made an excellent summary of research on network steganography.
In 2005, Steven J. Murdoch and Stephen Lewis contributed a chapter entitled "Embedding Covert Channels into TCP/IP" in the "Information Hiding" book published by Springer.
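As an illustration of the header-field channels described above (for example, Rowland's use of the IP Identification field), the following hedged sketch uses the third-party Scapy library, which must be installed and requires raw-socket privileges; the destination address is a documentation placeholder, and a cooperating receiver would sniff the packets and read the low byte of each IP ID.

```python
from scapy.all import IP, TCP, send   # requires Scapy and raw-socket privileges

def send_covert(message: bytes, dst="198.51.100.7", dport=80):
    """Leak one byte per packet through the 16-bit IP Identification field."""
    for seq, byte in enumerate(message):
        pkt = IP(dst=dst, id=(seq << 8) | byte) / TCP(dport=dport, flags="S")
        send(pkt, verbose=False)

send_covert(b"hello")
```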
All information hiding techniques that may be used to exchange steganograms in telecommunication networks can be classified under the general term of network steganography. This nomenclature was originally introduced by Krzysztof Szczypiorski in 2003. Contrary to typical steganographic methods that use digital media (images, audio and video files) to hide data, network steganography uses communication protocols' control elements and their intrinsic functionality. As a result, such methods can be harder to detect and eliminate.
Typical network steganography methods involve modification of the properties of a single network protocol. Such modification can be applied to the PDU (Protocol Data Unit), to the time relations between the exchanged PDUs, or both (hybrid methods).
Moreover, it is feasible to utilize the relation between two or more different network protocols to enable secret communication. These applications fall under the term inter-protocol steganography. Alternatively, multiple network protocols can be used simultaneously to transfer hidden information and so-called control protocols can be embedded into steganographic communications to extend their capabilities, e.g. to allow dynamic overlay routing or the switching of utilized hiding methods and network protocols.
Network steganography covers a broad spectrum of techniques, which include, among others:
Steganophony – the concealment of messages in Voice-over-IP conversations, e.g. the employment of delayed or corrupted packets that would normally be ignored by the receiver (this method is called LACK – Lost Audio Packets Steganography), or, alternatively, hiding information in unused header fields.
WLAN Steganography – transmission of steganograms in Wireless Local Area Networks. A practical example of WLAN Steganography is the HICCUPS system (Hidden Communication System for Corrupted Networks)
Terminology and taxonomy
In 2015, a taxonomy of 109 network hiding methods was presented by Steffen Wendzel, Sebastian Zander et al. that summarized core concepts used in network steganography research. The taxonomy was developed further in recent years by several publications and authors and adjusted to new domains, such as CPS steganography.
Additional terminology
Discussions of steganography generally use terminology analogous to and consistent with conventional radio and communications technology. However, some terms appear specifically in software and are easily confused. These are the most relevant ones to digital steganographic systems:
The payload is the data covertly communicated. The carrier is the signal, stream, or data file that hides the payload, which differs from the channel, which typically means the type of input, such as a JPEG image. The resulting signal, stream, or data file with the encoded payload is sometimes called the package, stego file, or covert message. The proportion of bytes, samples, or other signal elements modified to encode the payload is called the encoding density and is typically expressed as a number between 0 and 1.
In a set of files, the files that are considered likely to contain a payload are suspects. A suspect identified through some type of statistical analysis can be referred to as a candidate.
Countermeasures and detection
Detecting physical steganography requires a careful physical examination, including the use of magnification, developer chemicals, and ultraviolet light. It is a time-consuming process with obvious resource implications, even in countries that employ many people to spy on their fellow nationals. However, it is feasible to screen mail of certain suspected individuals or institutions, such as prisons or prisoner-of-war (POW) camps.
During World War II, prisoner of war camps gave prisoners specially-treated paper that would reveal invisible ink. An article in the 24 June 1948 issue of Paper Trade Journal by Morris S. Kantrowitz, Technical Director of the United States Government Printing Office, describes in general terms the development of this paper. Three prototype papers (Sensicoat, Anilith, and Coatalith) were used to manufacture postcards and stationery provided to German prisoners of war in the US and Canada. If POWs tried to write a hidden message, the special paper rendered it visible. The US granted at least two patents related to the technology: one to Kantrowitz, "Water-Detecting Paper and Water-Detecting Coating Composition Therefor," patented 18 July 1950, and an earlier one, "Moisture-Sensitive Paper and the Manufacture Thereof," patented 20 July 1948. A similar strategy issues prisoners with writing paper ruled with a water-soluble ink that runs in contact with water-based invisible ink.
In computing, the detection of steganographically encoded packages is called steganalysis. The simplest method to detect modified files, however, is to compare them to known originals. For example, to detect information being moved through the graphics on a website, an analyst can maintain known clean copies of the materials and then compare them against the current contents of the site. The differences, if the carrier is the same, comprise the payload. In general, using extremely high compression rates makes steganography difficult but not impossible. Compression errors provide a hiding place for data, but high compression reduces the amount of data available to hold the payload, raising the encoding density, which facilitates easier detection (in extreme cases, even by casual observation).
There are a variety of basic tests that can be done to identify whether or not a secret message exists. This process is not concerned with the extraction of the message, which is a different process and a separate step. The most basic approaches of steganalysis are visual or aural attacks, structural attacks, and statistical attacks. These approaches attempt to detect the steganographic algorithms that were used. These algorithms range from unsophisticated to very sophisticated, with early algorithms being much easier to detect due to the statistical anomalies they left behind. The size of the hidden message is a factor in how difficult it is to detect, and so is the overall size of the cover object. If the cover object is small and the message is large, this can distort the statistics and make the message easier to detect. A larger cover object with a small message distorts the statistics less and gives the message a better chance of going unnoticed.
Steganalysis that targets a particular algorithm has much better success, as it can key in on the anomalies that are left behind: the analysis can perform a targeted search because it is aware of the behaviors the algorithm commonly exhibits. In practice, the least significant bits of many images are not actually random. The camera sensor, especially in lower-end devices, can introduce some random bits, and file compression can affect them further. Secret messages can be introduced into the least significant bits of an image and hidden there, but a steganography tool used to camouflage the secret message in this way can introduce a region that is too perfectly random. This area of perfect randomization stands out and can be detected by comparing the least significant bits to the next-to-least significant bits in an image that has not been compressed.
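A crude sketch of the bit-plane comparison just described, assuming Pillow and NumPy are available; production steganalysis relies on far more rigorous statistics (for example chi-square or RS analysis), so this only illustrates the idea, and the file name is a placeholder.

```python
import numpy as np
from PIL import Image

def bitplane_stats(path):
    """Compare the least significant bit plane with the next bit plane.
    A stego-filled LSB plane tends to look 'too random' relative to plane 1."""
    pixels = np.array(Image.open(path).convert("L"), dtype=np.uint8).ravel()
    for plane in (0, 1):
        bits = (pixels >> plane) & 1
        ones = bits.mean()                         # fraction of 1-bits in the plane
        flips = np.mean(bits[1:] != bits[:-1])     # adjacent-bit transition rate
        print(f"plane {plane}: ones={ones:.3f}, transition rate={flips:.3f}")

bitplane_stats("suspect.png")   # hypothetical file name
```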
Generally, though, there are many techniques known to be able to hide messages in data using steganographic techniques. None are, by definition, obvious when users employ standard applications, but some can be detected by specialist tools. Others, however, are resistant to detection—or rather it is not possible to reliably distinguish data containing a hidden message from data containing just noise—even when the most sophisticated analysis is performed. Steganography is being used to conceal and deliver more effective cyber attacks, referred to as Stegware. The term Stegware was first introduced in 2017 to describe any malicious operation involving steganography as a vehicle to conceal an attack. Detection of steganography is challenging, and because of that, not an adequate defence. Therefore, the only way of defeating the threat is to transform data in a way that destroys any hidden messages, a process called Content Threat Removal.
Applications
Use in modern printers
Some modern computer printers use steganography, including Hewlett-Packard and Xerox brand color laser printers. The printers add tiny yellow dots to each page. The barely-visible dots contain encoded printer serial numbers and date and time stamps.
Example from modern practice
The larger the cover message (in binary data, the number of bits) relative to the hidden message, the easier it is to hide the hidden message (as an analogy, the larger the "haystack", the easier it is to hide a "needle"). So digital pictures, which contain much data, are sometimes used to hide messages on the Internet and on other digital communication media. It is not clear how common this practice actually is.
For example, a 24-bit bitmap uses 8 bits to represent each of the three color values (red, green, and blue) of each pixel. The blue alone has 2^8 (256) different levels of blue intensity. The difference between 11111111 and 11111110 in the value for blue intensity is likely to be undetectable by the human eye. Therefore, the least significant bit can be used more or less undetectably for something else other than color information. If that is repeated for the green and the red elements of each pixel as well, it is possible to encode one letter of ASCII text for every three pixels.
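A minimal sketch of this least-significant-bit scheme, assuming NumPy; practical tools add a length header, encryption, and pseudo-random selection of which samples to modify.

```python
import numpy as np

def embed_lsb(pixels: np.ndarray, message: bytes) -> np.ndarray:
    """Hide message bits in the least significant bit of each colour sample.
    pixels: uint8 array of shape (height, width, 3) from a 24-bit image."""
    flat = pixels.reshape(-1).copy()
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("cover image too small for this message")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite only the LSB
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bytes: int) -> bytes:
    bits = (pixels.reshape(-1)[:n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

cover = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
stego = embed_lsb(cover, b"secret")
assert extract_lsb(stego, 6) == b"secret"
```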
Stated somewhat more formally, the objective for making steganographic encoding difficult to detect is to ensure that the changes to the carrier (the original signal) because of the injection of the payload (the signal to covertly embed) are visually (and ideally, statistically) negligible. The changes are indistinguishable from the noise floor of the carrier. All media can be a carrier, but media with a large amount of redundant or compressible information is better suited.
From an information theoretical point of view, that means that the channel must have more capacity than the "surface" signal requires. There must be redundancy. For a digital image, it may be noise from the imaging element; for digital audio, it may be noise from recording techniques or amplification equipment. In general, electronics that digitize an analog signal suffer from several noise sources, such as thermal noise, flicker noise, and shot noise. The noise provides enough variation in the captured digital information that it can be exploited as a noise cover for hidden data. In addition, lossy compression schemes (such as JPEG) always introduce some error to the decompressed data, and it is possible to exploit that for steganographic use, as well.
Although steganography and digital watermarking seem similar, they are not. In steganography, the hidden message should remain intact until it reaches its destination. Steganography can be used for digital watermarking in which a message (being simply an identifier) is hidden in an image so that its source can be tracked or verified (for example, Coded Anti-Piracy) or even just to identify an image (as in the EURion constellation). In such a case, the technique of hiding the message (here, the watermark) must be robust to prevent tampering. However, digital watermarking sometimes requires a brittle watermark, which can be modified easily, to check whether the image has been tampered with. That is the key difference between steganography and digital watermarking.
Alleged use by intelligence services
In 2010, the Federal Bureau of Investigation alleged that the Russian foreign intelligence service uses customized steganography software for embedding encrypted text messages inside image files for certain communications with "illegal agents" (agents without diplomatic cover) stationed abroad.
On April 23, 2019 the U.S. Department of Justice unsealed an indictment charging Xiaoqing Zheng, a Chinese businessman and former Principal Engineer at General Electric, with 14 counts of conspiring to steal intellectual property and trade secrets from General Electric. Zheng had allegedly used steganography to exfiltrate 20,000 documents from General Electric to Tianyi Aviation Technology Co. in Nanjing, China, a company the FBI accused him of starting with backing from the Chinese government.
Distributed steganography
There are distributed steganography methods, including methodologies that distribute the payload through multiple carrier files in diverse locations to make detection more difficult, for example those developed by cryptographer William Easttom (Chuck Easttom).
Online challenge
The puzzles presented by Cicada 3301 have incorporated steganography with cryptography and other solving techniques since 2012. Puzzles involving steganography have also been featured in other alternate reality games.
The communications of The Mayday Mystery have incorporated steganography and other solving techniques since 1981.
See also
References
Sources
External links
An overview of digital steganography, particularly within images, for the computationally curious by Chris League, Long Island University, 2015
Examples showing images hidden in other images
Information Hiding: Steganography & Digital Watermarking. Papers and information about steganography and steganalysis research from 1995 to the present. Includes Steganography Software Wiki list. Dr. Neil F. Johnson.
Detecting Steganographic Content on the Internet. 2002 paper by Niels Provos and Peter Honeyman published in Proceedings of the Network and Distributed System Security Symposium (San Diego, CA, February 6–8, 2002). NDSS 2002. Internet Society, Washington, D.C.
Covert Channels in the TCP/IP Suite. 1996 paper by Craig Rowland detailing the hiding of data in TCP/IP packets.
Network Steganography Centre Tutorials. How-to articles on the subject of network steganography (Wireless LANs, VoIP – Steganophony, TCP/IP protocols and mechanisms, Steganographic Router, Inter-protocol steganography). By Krzysztof Szczypiorski and Wojciech Mazurczyk from Network Security Group.
Invitation to BPCS-Steganography.
Steganography by Michael T. Raggo, DefCon 12 (1 August 2004)
File Format Extension Through Steganography by Blake W. Ford and Khosrow Kaikhah
Computer steganography. Theory and practice with Mathcad (Rus) 2006 paper by Konakhovich G. F., Puzyrenko A. Yu. published in MK-Press Kyiv, Ukraine
Espionage techniques |
28814 | https://en.wikipedia.org/wiki/Secure%20Shell | Secure Shell | The Secure Shell Protocol (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. Its most notable applications are remote login and command-line execution.
SSH applications are based on a client–server architecture, connecting an SSH client instance with an SSH server. SSH operates as a layered protocol suite comprising three principal hierarchical components: the transport layer provides server authentication, confidentiality, and integrity; the user authentication protocol validates the user to the server; and the connection protocol multiplexes the encrypted tunnel into multiple logical communication channels.
SSH was designed on Unix-like operating systems, as a replacement for Telnet and for unsecured remote Unix shell protocols, such as the Berkeley Remote Shell (rsh) and the related rlogin and rexec protocols, which all use insecure, plaintext transmission of authentication tokens.
SSH was first designed in 1995 by Finnish computer scientist Tatu Ylönen. Subsequent development of the protocol suite proceeded in several developer groups, producing several variants of implementation. The protocol specification distinguishes two major versions, referred to as SSH-1 and SSH-2. The most commonly implemented software track is OpenSSH, released in 1999 as open-source software by the OpenBSD developers. Implementations are distributed for all types of operating systems in common use, including embedded systems.
Definition
SSH uses public-key cryptography to authenticate the remote computer and allow it to authenticate the user, if necessary.
SSH may be used in several methodologies. In the simplest manner, both ends of a communication channel use automatically generated public-private key pairs to encrypt a network connection, and then use a password to authenticate the user.
When the public-private key pair is generated by the user manually, the authentication is essentially performed when the key pair is created, and a session may then be opened automatically without a password prompt. In this scenario, the public key is placed on all computers that must allow access to the owner of the matching private key, which the owner keeps private. While authentication is based on the private key, the key is never transferred through the network during authentication. SSH only verifies that the same person offering the public key also owns the matching private key.
In all versions of SSH it is important to verify unknown public keys, i.e. associate the public keys with identities, before accepting them as valid. Accepting an attacker's public key without validation will authorize an unauthorized attacker as a valid user.
Authentication: OpenSSH key management
On Unix-like systems, the list of authorized public keys is typically stored in the home directory of the user that is allowed to log in remotely, in the file ~/.ssh/authorized_keys. This file is respected by SSH only if it is not writable by anything apart from the owner and root. When the public key is present on the remote end and the matching private key is present on the local end, typing in the password is no longer required. However, for additional security the private key itself can be locked with a passphrase.
SSH also looks for the private key in standard locations, and its full path can be specified as a command line setting (the option -i for ssh). The ssh-keygen utility produces the public and private keys, always in pairs.
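A hedged sketch of this key-pair workflow using the third-party paramiko library (rather than the ssh-keygen utility itself); the file name and trailing comment are placeholders.

```python
import paramiko

# Generate an RSA key pair; ssh-keygen performs the equivalent job on the command line.
key = paramiko.RSAKey.generate(bits=2048)
key.write_private_key_file("my_ssh_key")            # keep this file private
# One line in this format, appended to ~/.ssh/authorized_keys on the remote host,
# authorizes the holder of the matching private key to log in.
authorized_keys_line = f"{key.get_name()} {key.get_base64()} demo@example"
print(authorized_keys_line)
```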
SSH also supports password-based authentication that is encrypted by automatically generated keys. In this case, the attacker could imitate the legitimate server side, ask for the password, and obtain it (man-in-the-middle attack). However, this is possible only if the two sides have never authenticated before, as SSH remembers the key that the server side previously used. The SSH client raises a warning before accepting the key of a new, previously unknown server. Password authentication can be disabled from the server side.
Usage
SSH is typically used to log into a remote machine and execute commands, but it also supports tunneling, forwarding TCP ports and X11 connections; it can transfer files using the associated SSH file transfer (SFTP) or secure copy (SCP) protocols. SSH uses the client–server model.
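A minimal sketch of the log-in-and-run-a-command use case with the third-party paramiko library (assumed installed); the host name, user name, and key path are placeholders.

```python
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                       # verify the server against known host keys
client.connect("server.example.org", username="alice",
               key_filename="my_ssh_key")            # placeholder host, user and key
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```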
An SSH client program is typically used for establishing connections to an SSH daemon accepting remote connections. Both are commonly present on most modern operating systems, including macOS, most distributions of Linux, OpenBSD, FreeBSD, NetBSD, Solaris and OpenVMS. Notably, versions of Windows prior to Windows 10 version 1709 do not include SSH by default. Proprietary, freeware and open source (e.g. PuTTY, and the version of OpenSSH which is part of Cygwin) versions of various levels of complexity and completeness exist. File managers for UNIX-like systems (e.g. Konqueror) can use the FISH protocol to provide a split-pane GUI with drag-and-drop. The open source Windows program WinSCP provides similar file management (synchronization, copy, remote delete) capability using PuTTY as a back-end. Both WinSCP and PuTTY are available packaged to run directly off a USB drive, without requiring installation on the client machine. Setting up an SSH server in Windows typically involves enabling a feature in the Settings app. In Windows 10 version 1709, an official Win32 port of OpenSSH is available.
SSH is important in cloud computing to solve connectivity problems, avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path over the Internet, through a firewall to a virtual machine.
The IANA has assigned TCP port 22, UDP port 22 and SCTP port 22 for this protocol. IANA had listed the standard TCP port 22 for SSH servers as one of the well-known ports as early as 2001. SSH can also be run using SCTP rather than TCP as the connection-oriented transport layer protocol.
Historical development
Version 1
In 1995, Tatu Ylönen, a researcher at Helsinki University of Technology, Finland, designed the first version of the protocol (now called SSH-1) prompted by a password-sniffing attack at his university network. The goal of SSH was to replace the earlier rlogin, TELNET, FTP and rsh protocols, which did not provide strong authentication nor guarantee confidentiality. Ylönen released his implementation as freeware in July 1995, and the tool quickly gained in popularity. Towards the end of 1995, the SSH user base had grown to 20,000 users in fifty countries.
In December 1995, Ylönen founded SSH Communications Security to market and develop SSH. The original version of the SSH software used various pieces of free software, such as GNU libgmp, but later versions released by SSH Communications Security evolved into increasingly proprietary software.
It was estimated that by the year 2000 the number of users had grown to 2 million.
Version 2
"Secsh" was the official Internet Engineering Task Force's (IETF) name for the IETF working group responsible for version 2 of the SSH protocol. In 2006, a revised version of the protocol, SSH-2, was adopted as a standard. This version is incompatible with SSH-1. SSH-2 features both security and feature improvements over SSH-1. Better security, for example, comes through Diffie–Hellman key exchange and strong integrity checking via message authentication codes. New features of SSH-2 include the ability to run any number of shell sessions over a single SSH connection. Due to SSH-2's superiority and popularity over SSH-1, some implementations such as libssh (v0.8.0+), Lsh and Dropbear support only the SSH-2 protocol.
Version 1.99
In January 2006, well after version 2.1 was established, RFC 4253 specified that an SSH server supporting 2.0 as well as prior versions should identify its protocol version as 1.99. This version number does not reflect a historical software revision, but a method to identify backward compatibility.
OpenSSH and OSSH
In 1999, developers, desiring availability of a free software version, restarted software development from the 1.2.12 release of the original SSH program, which was the last released under an open source license. This served as a code base for Björn Grönvall's OSSH software. Shortly thereafter, OpenBSD developers forked Grönvall's code and created OpenSSH, which shipped with Release 2.6 of OpenBSD. From this version, a "portability" branch was formed to port OpenSSH to other operating systems.
OpenSSH became the single most popular SSH implementation, being the default version in a large number of operating system distributions. OSSH meanwhile has become obsolete. OpenSSH continues to be maintained and supports the SSH-2 protocol, having expunged SSH-1 support from the codebase in the OpenSSH 7.6 release.
Uses
SSH is a protocol that can be used for many applications across many platforms including most Unix variants (Linux, the BSDs including Apple's macOS, and Solaris), as well as Microsoft Windows. Some of the applications below may require features that are only available or compatible with specific SSH clients or servers. For example, using the SSH protocol to implement a VPN is possible, but presently only with the OpenSSH server and client implementation.
For login to a shell on a remote host (replacing Telnet and rlogin)
For executing a single command on a remote host (replacing rsh)
For setting up automatic (passwordless) login to a remote server (for example, using OpenSSH)
In combination with rsync to back up, copy and mirror files efficiently and securely
For forwarding a port
For tunneling (not to be confused with a VPN, which routes packets between different networks, or bridges two broadcast domains into one).
For using as a full-fledged encrypted VPN. Note that only OpenSSH server and client supports this feature.
For forwarding X from a remote host (possible through multiple intermediate hosts)
For browsing the web through an encrypted proxy connection with SSH clients that support the SOCKS protocol.
For securely mounting a directory on a remote server as a filesystem on a local computer using SSHFS.
For automated remote monitoring and management of servers through one or more of the mechanisms discussed above.
For development on a mobile or embedded device that supports SSH.
For securing file transfer protocols.
File transfer protocols
The Secure Shell protocols are used in several file transfer mechanisms.
Secure copy (SCP), which evolved from RCP protocol over SSH
rsync, intended to be more efficient than SCP. Generally runs over an SSH connection.
SSH File Transfer Protocol (SFTP), a secure alternative to FTP (not to be confused with FTP over SSH or FTPS); a short usage sketch follows this list
Files transferred over shell protocol (a.k.a. FISH), released in 1998, which evolved from Unix shell commands over SSH
Fast and Secure Protocol (FASP), aka Aspera, uses SSH for control and UDP ports for data transfer.
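As noted in the SFTP item above, file transfer runs over an established SSH session; a hedged paramiko sketch, with placeholder host, user, key, and file names, is shown below.

```python
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect("server.example.org", username="alice", key_filename="my_ssh_key")

sftp = client.open_sftp()                            # SFTP runs as a subsystem of the session
sftp.put("report.pdf", "/home/alice/report.pdf")     # upload
sftp.get("/var/log/syslog", "syslog.copy")           # download
sftp.close()
client.close()
```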
Architecture
The SSH protocol has a layered architecture that separates three components:
The transport layer (RFC 4253) typically uses the Transmission Control Protocol (TCP) of TCP/IP, reserving port number 22 as a server listening port. This layer handles initial key exchange as well as server authentication, and sets up encryption, compression, and integrity verification. It exposes to the upper layer an interface for sending and receiving plaintext packets with a size of up to 32,768 bytes each, but more can be allowed by each implementation. The transport layer also arranges for key re-exchange, usually after 1 GB of data has been transferred or after one hour has passed, whichever occurs first.
The user authentication layer (RFC 4252) handles client authentication, and provides a suite of authentication algorithms. Authentication is client-driven: when one is prompted for a password, it may be the SSH client prompting, not the server. The server merely responds to the client's authentication requests. Widely used user-authentication methods include the following:
password: a method for straightforward password authentication, including a facility allowing a password to be changed. Not all programs implement this method.
publickey: a method for public-key-based authentication, usually supporting at least DSA, ECDSA or RSA keypairs, with other implementations also supporting X.509 certificates.
keyboard-interactive (RFC 4256): a versatile method where the server sends one or more prompts to enter information and the client displays them and sends back responses keyed-in by the user. Used to provide one-time password authentication such as S/Key or SecurID. Used by some OpenSSH configurations when PAM is the underlying host-authentication provider to effectively provide password authentication, sometimes leading to inability to log in with a client that supports just the plain password authentication method.
GSSAPI authentication methods which provide an extensible scheme to perform SSH authentication using external mechanisms such as Kerberos 5 or NTLM, providing single sign-on capability to SSH sessions. These methods are usually implemented by commercial SSH implementations for use in organizations, though OpenSSH does have a working GSSAPI implementation.
The connection layer (RFC 4254) defines the concept of channels, channel requests, and global requests, which define the SSH services provided. A single SSH connection can be multiplexed into multiple logical channels simultaneously, each transferring data bidirectionally. Channel requests are used to relay out-of-band channel-specific data, such as the changed size of a terminal window, or the exit code of a server-side process. Additionally, each channel performs its own flow control using the receive window size. The SSH client requests a server-side port to be forwarded using a global request. Standard channel types include:
shell for terminal shells, SFTP and exec requests (including SCP transfers)
direct-tcpip for client-to-server forwarded connections
forwarded-tcpip for server-to-client forwarded connections
The SSHFP DNS record (RFC 4255) provides the public host key fingerprints in order to aid in verifying the authenticity of the host.
This open architecture provides considerable flexibility, allowing the use of SSH for a variety of purposes beyond a secure shell. The functionality of the transport layer alone is comparable to Transport Layer Security (TLS); the user-authentication layer is highly extensible with custom authentication methods; and the connection layer provides the ability to multiplex many secondary sessions into a single SSH connection, a feature comparable to BEEP and not available in TLS.
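The channel multiplexing of the connection layer can be sketched with paramiko as follows: an authenticated transport is asked to open a direct-tcpip channel (a client-to-server forwarded connection). All host names are placeholders, and this is an illustration rather than a recommended tunneling setup.

```python
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect("server.example.org", username="alice", key_filename="my_ssh_key")

transport = client.get_transport()
# The server opens a TCP connection to the internal host and relays it over one
# logical channel of the existing SSH connection.
channel = transport.open_channel("direct-tcpip",
                                 dest_addr=("intranet.example.org", 80),
                                 src_addr=("127.0.0.1", 0))
channel.send(b"HEAD / HTTP/1.0\r\nHost: intranet.example.org\r\n\r\n")
print(channel.recv(4096).decode(errors="replace"))
channel.close()
client.close()
```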
Algorithms
EdDSA, ECDSA, RSA and DSA for public-key cryptography.
ECDH and Diffie–Hellman for key exchange.
HMAC, AEAD and UMAC for MAC.
AES (and deprecated RC4, 3DES, DES) for symmetric encryption.
AES-GCM and ChaCha20-Poly1305 for AEAD encryption.
SHA (and deprecated MD5) for key fingerprint.
Vulnerabilities
SSH-1
In 1998, a vulnerability was described in SSH 1.5 which allowed the unauthorized insertion of content into an encrypted SSH stream due to insufficient data integrity protection from CRC-32 used in this version of the protocol. A fix known as SSH Compensation Attack Detector was introduced into most implementations. Many of these updated implementations contained a new integer overflow vulnerability that allowed attackers to execute arbitrary code with the privileges of the SSH daemon, typically root.
In January 2001 a vulnerability was discovered that allows attackers to modify the last block of an IDEA-encrypted session. The same month, another vulnerability was discovered that allowed a malicious server to forward a client authentication to another server.
Since SSH-1 has inherent design flaws which make it vulnerable, it is now generally considered obsolete and should be avoided by explicitly disabling fallback to SSH-1. Most modern servers and clients support SSH-2.
CBC plaintext recovery
In November 2008, a theoretical vulnerability was discovered for all versions of SSH which allowed recovery of up to 32 bits of plaintext from a block of ciphertext that was encrypted using what was then the standard default encryption mode, CBC. The most straightforward solution is to use CTR, counter mode, instead of CBC mode, since this renders SSH resistant to the attack.
Suspected decryption by NSA
On December 28, 2014 Der Spiegel published classified information leaked by whistleblower Edward Snowden which suggests that the National Security Agency may be able to decrypt some SSH traffic. The technical details associated with such a process were not disclosed. A 2017 analysis of the CIA hacking tools BothanSpy and Gyrfalcon suggested that the SSH protocol was not compromised.
Standards documentation
The following RFC publications by the IETF "secsh" working group document SSH-2 as a proposed Internet standard.
RFC 4250 – The Secure Shell (SSH) Protocol Assigned Numbers
RFC 4251 – The Secure Shell (SSH) Protocol Architecture
RFC 4252 – The Secure Shell (SSH) Authentication Protocol
RFC 4253 – The Secure Shell (SSH) Transport Layer Protocol
RFC 4254 – The Secure Shell (SSH) Connection Protocol
RFC 4255 – Using DNS to Securely Publish Secure Shell (SSH) Key Fingerprints
RFC 4256 – Generic Message Exchange Authentication for the Secure Shell Protocol (SSH)
RFC 4335 – The Secure Shell (SSH) Session Channel Break Extension
RFC 4344 – The Secure Shell (SSH) Transport Layer Encryption Modes
RFC 4345 – Improved Arcfour Modes for the Secure Shell (SSH) Transport Layer Protocol
The protocol specifications were later updated by the following publications:
RFC 4419 – Diffie-Hellman Group Exchange for the Secure Shell (SSH) Transport Layer Protocol (March 2006)
RFC 4432 – RSA Key Exchange for the Secure Shell (SSH) Transport Layer Protocol (March 2006)
RFC 4462 – Generic Security Service Application Program Interface (GSS-API) Authentication and Key Exchange for the Secure Shell (SSH) Protocol (May 2006)
RFC 4716 – The Secure Shell (SSH) Public Key File Format (November 2006)
RFC 4819 – Secure Shell Public Key Subsystem (March 2007)
RFC 5647 – AES Galois Counter Mode for the Secure Shell Transport Layer Protocol (August 2009)
RFC 5656 – Elliptic Curve Algorithm Integration in the Secure Shell Transport Layer (December 2009)
RFC 6187 – X.509v3 Certificates for Secure Shell Authentication (March 2011)
RFC 6239 – Suite B Cryptographic Suites for Secure Shell (SSH) (May 2011)
RFC 6594 – Use of the SHA-256 Algorithm with RSA, Digital Signature Algorithm (DSA), and Elliptic Curve DSA (ECDSA) in SSHFP Resource Records (April 2012)
RFC 6668 – SHA-2 Data Integrity Verification for the Secure Shell (SSH) Transport Layer Protocol (July 2012)
RFC 7479 – Ed25519 SSHFP Resource Records (March 2015)
RFC 5592 – Secure Shell Transport Model for the Simple Network Management Protocol (SNMP) (June 2009)
RFC 6242 – Using the NETCONF Protocol over Secure Shell (SSH) (June 2011)
draft-gerhards-syslog-transport-ssh-00 – SSH transport mapping for SYSLOG (July 2006)
draft-ietf-secsh-filexfer-13 – SSH File Transfer Protocol (July 2006)
In addition, the OpenSSH project includes several vendor protocol specifications/extensions:
OpenSSH PROTOCOL overview
OpenSSH certificate/key overview
draft-miller-ssh-agent-04 - SSH Agent Protocol (December 2019)
See also
Brute-force attack
Comparison of SSH clients
Comparison of SSH servers
Corkscrew
Ident
OpenSSH
Secure Shell tunneling
Web-based SSH
References
Further reading
Original announcement of Ssh
External links
SSH Protocols
Application layer protocols
Finnish inventions |
29048 | https://en.wikipedia.org/wiki/Single-sideband%20modulation | Single-sideband modulation | In radio communications, single-sideband modulation (SSB) or single-sideband suppressed-carrier modulation (SSB-SC) is a type of modulation used to transmit information, such as an audio signal, by radio waves. A refinement of amplitude modulation, it uses transmitter power and bandwidth more efficiently. Amplitude modulation produces an output signal the bandwidth of which is twice the maximum frequency of the original baseband signal. Single-sideband modulation avoids this bandwidth increase, and the power wasted on a carrier, at the cost of increased device complexity and more difficult tuning at the receiver.
Basic concept
Radio transmitters work by mixing a radio frequency (RF) signal of a specific frequency, the carrier wave, with the audio signal to be broadcast. In AM transmitters this mixing usually takes place in the final RF amplifier (high level modulation). It is less common and much less efficient to do the mixing at low power and then amplify it in a linear amplifier. Either method produces a set of frequencies with a strong signal at the carrier frequency and with weaker signals at frequencies extending above and below the carrier frequency by the maximum frequency of the input signal. Thus the resulting signal has a spectrum whose bandwidth is twice the maximum frequency of the original input audio signal.
SSB takes advantage of the fact that the entire original signal is encoded in each of these "sidebands". It is not necessary to transmit both sidebands plus the carrier, as a suitable receiver can extract the entire original signal from either the upper or lower sideband. There are several methods for eliminating the carrier and one sideband from the transmitted signal. Producing this single sideband signal is too complicated to be done in the final amplifier stage as with AM. SSB Modulation must be done at a low level and amplified in a linear amplifier where lower efficiency partially offsets the power advantage gained by eliminating the carrier and one sideband. Nevertheless, SSB transmissions use the available amplifier energy considerably more efficiently, providing longer-range transmission for the same power output. In addition, the occupied spectrum is less than half that of a full carrier AM signal.
SSB reception requires frequency stability and selectivity well beyond that of inexpensive AM receivers, which is why broadcasters have seldom used it. In point-to-point communications, where expensive receivers are already in common use, they can successfully be adjusted to receive whichever sideband is being transmitted.
History
The first U.S. patent application for SSB modulation was filed on December 1, 1915 by John Renshaw Carson. The U.S. Navy experimented with SSB over its radio circuits before World War I. SSB first entered commercial service on January 7, 1927, on the longwave transatlantic public radiotelephone circuit between New York and London. The high-power SSB transmitters were located at Rocky Point, New York, and Rugby, England. The receivers were in very quiet locations in Houlton, Maine, and Cupar, Scotland.
SSB was also used over long distance telephone lines, as part of a technique known as frequency-division multiplexing (FDM). FDM was pioneered by telephone companies in the 1930s. With this technology, many simultaneous voice channels could be transmitted on a single physical circuit, for example in L-carrier. With SSB, channels could be spaced (usually) only 4,000 Hz apart, while offering a speech bandwidth of nominally 300 Hz to 3,400 Hz.
Amateur radio operators began serious experimentation with SSB after World War II. The Strategic Air Command established SSB as the radio standard for its aircraft in 1957. It has become a de facto standard for long-distance voice radio transmissions since then.
Mathematical formulation
Single-sideband has the mathematical form of quadrature amplitude modulation (QAM) in the special case where one of the baseband waveforms is derived from the other, instead of being independent messages:

$$s_\text{ssb}(t) = s(t)\cos(2\pi f_0 t) - \hat{s}(t)\sin(2\pi f_0 t),$$

where $s(t)$ is the message (real-valued), $\hat{s}(t)$ is its Hilbert transform, and $f_0$ is the radio carrier frequency.

To understand this formula, we may express $s_\text{ssb}(t)$ as the real part of a complex-valued function, with no loss of information:

$$s_\text{ssb}(t) = \operatorname{Re}\left\{ s_\mathrm{a}(t)\, e^{j2\pi f_0 t} \right\}, \qquad s_\mathrm{a}(t) \triangleq s(t) + j\hat{s}(t),$$

where $j$ represents the imaginary unit. $s_\mathrm{a}(t)$ is the analytic representation of $s(t)$, which means that it comprises only the positive-frequency components of $s(t)$:

$$\tfrac{1}{2} S_\mathrm{a}(f) = \begin{cases} S(f), & f > 0 \\ 0, & f < 0, \end{cases}$$

where $S_\mathrm{a}(f)$ and $S(f)$ are the respective Fourier transforms of $s_\mathrm{a}(t)$ and $s(t)$. Therefore, the frequency-translated function $S_\mathrm{a}(f - f_0)$ contains only one side of $S(f)$. Since it also has only positive-frequency components, its inverse Fourier transform $s_\mathrm{a}(t)\, e^{j2\pi f_0 t}$ is the analytic representation of $s_\text{ssb}(t)$:

$$s_\text{ssb}(t) + j\hat{s}_\text{ssb}(t) = s_\mathrm{a}(t)\, e^{j2\pi f_0 t},$$

and again the real part of this expression causes no loss of information. With Euler's formula to expand $e^{j2\pi f_0 t}$, we obtain:

$$s_\text{ssb}(t) = \operatorname{Re}\left\{ \left[ s(t) + j\hat{s}(t) \right]\left[ \cos(2\pi f_0 t) + j\sin(2\pi f_0 t) \right] \right\} = s(t)\cos(2\pi f_0 t) - \hat{s}(t)\sin(2\pi f_0 t).$$

Coherent demodulation of $s_\text{ssb}(t)$ to recover $s(t)$ is the same as AM: multiply by $\cos(2\pi f_0 t)$ and lowpass filter to remove the "double-frequency" components around frequency $2f_0$. If the demodulating carrier is not in the correct phase (cosine phase here), then the demodulated signal will be some linear combination of $s(t)$ and $\hat{s}(t)$, which is usually acceptable in voice communications (if the demodulation carrier frequency is not quite right, the phase will be drifting cyclically, which again is usually acceptable in voice communications if the frequency error is small enough, and amateur radio operators are sometimes tolerant of even larger frequency errors that cause unnatural-sounding pitch shifting effects).
Lower sideband
$s(t)$ can also be recovered as the real part of the complex conjugate, $s_\mathrm{a}^*(t) = s(t) - j\hat{s}(t)$, which represents the negative-frequency portion of $S(f)$. When $f_0$ is large enough that $S_\mathrm{a}^*(f - f_0)$ has no negative frequencies, the product $s_\mathrm{a}^*(t)\, e^{j2\pi f_0 t}$ is another analytic signal, whose real part is the actual lower-sideband transmission:

$$s_\text{lsb}(t) = \operatorname{Re}\left\{ s_\mathrm{a}^*(t)\, e^{j2\pi f_0 t} \right\} = s(t)\cos(2\pi f_0 t) + \hat{s}(t)\sin(2\pi f_0 t).$$

The sum of the two sideband signals is:

$$s_\text{usb}(t) + s_\text{lsb}(t) = 2 s(t)\cos(2\pi f_0 t),$$
which is the classic model of suppressed-carrier double sideband AM.
Practical implementations
Bandpass filtering
One method of producing an SSB signal is to remove one of the sidebands via filtering, leaving only either the upper sideband (USB), the sideband with the higher frequency, or less commonly the lower sideband (LSB), the sideband with the lower frequency. Most often, the carrier is reduced or removed entirely (suppressed), being referred to in full as single sideband suppressed carrier (SSBSC). Assuming both sidebands are symmetric, which is the case for a normal AM signal, no information is lost in the process. Since the final RF amplification is now concentrated in a single sideband, the effective power output is greater than in normal AM (the carrier and redundant sideband account for well over half of the power output of an AM transmitter). Though SSB uses substantially less bandwidth and power, it cannot be demodulated by a simple envelope detector like standard AM.
Hartley modulator
An alternate method of generation known as a Hartley modulator, named after R. V. L. Hartley, uses phasing to suppress the unwanted sideband. To generate an SSB signal with this method, two versions of the original signal are generated, mutually 90° out of phase for any single frequency within the operating bandwidth. Each one of these signals then modulates carrier waves (of one frequency) that are also 90° out of phase with each other. By either adding or subtracting the resulting signals, a lower or upper sideband signal results. A benefit of this approach is to allow an analytical expression for SSB signals, which can be used to understand effects such as synchronous detection of SSB.
Shifting the baseband signal 90° out of phase cannot be done simply by delaying it, as it contains a large range of frequencies. In analog circuits, a wideband 90-degree phase-difference network is used. The method was popular in the days of vacuum tube radios, but later gained a bad reputation due to poorly adjusted commercial implementations. Modulation using this method is again gaining popularity in the homebrew and DSP fields. This method, utilizing the Hilbert transform to phase shift the baseband audio, can be done at low cost with digital circuitry.
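A NumPy/SciPy sketch of this phasing approach follows: the Hilbert transform supplies the 90°-shifted copy of the baseband signal, and the sign of the second term selects the upper or lower sideband. This is an illustration, not broadcast-grade signal processing.

```python
import numpy as np
from scipy.signal import hilbert

def ssb_modulate(message, fs, f_carrier, sideband="usb"):
    """Phasing-method SSB: s*cos(2*pi*fc*t) -/+ hilbert(s)*sin(2*pi*fc*t)."""
    t = np.arange(len(message)) / fs
    analytic = hilbert(message)                  # message + j * (its Hilbert transform)
    s, s_hat = np.real(analytic), np.imag(analytic)
    sign = -1.0 if sideband == "usb" else 1.0    # minus selects the upper sideband
    return s * np.cos(2 * np.pi * f_carrier * t) + sign * s_hat * np.sin(2 * np.pi * f_carrier * t)

# A 1 kHz tone on a 20 kHz carrier appears only at 21 kHz (upper sideband).
fs = 96_000
tone = np.sin(2 * np.pi * 1_000 * np.arange(fs) / fs)
usb = ssb_modulate(tone, fs, 20_000, "usb")
```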
Weaver modulator
Another variation, the Weaver modulator, uses only lowpass filters and quadrature mixers, and is a favored method in digital implementations.
In Weaver's method, the band of interest is first translated to be centered at zero, conceptually by modulating a complex exponential with frequency in the middle of the voiceband, but implemented by a quadrature pair of sine and cosine modulators at that frequency (e.g. 2 kHz). This complex signal or pair of real signals is then lowpass filtered to remove the undesired sideband that is not centered at zero. Then, the single-sideband complex signal centered at zero is upconverted to a real signal, by another pair of quadrature mixers, to the desired center frequency.
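A hedged sketch of the Weaver structure, assuming NumPy and SciPy and a voiceband of roughly 300–3400 Hz; the sub-carrier frequency, filter order, and cutoff below are illustrative choices, not values mandated by the method.

```python
import numpy as np
from scipy.signal import butter, lfilter

def weaver_usb(message, fs, f_carrier, f_split=1850.0, cutoff=1700.0):
    """Weaver method: translate the voiceband down around 0 Hz with a quadrature
    oscillator at f_split, lowpass to keep one sideband, then upconvert with a
    second quadrature oscillator so the result sits above f_carrier (USB)."""
    t = np.arange(len(message)) / fs
    i = message * np.cos(2 * np.pi * f_split * t)
    q = message * np.sin(2 * np.pi * f_split * t)
    b, a = butter(6, cutoff / (fs / 2))          # lowpass removes the unwanted sideband
    i, q = lfilter(b, a, i), lfilter(b, a, q)
    f_up = f_carrier + f_split
    return i * np.cos(2 * np.pi * f_up * t) + q * np.sin(2 * np.pi * f_up * t)
```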
Full, reduced, and suppressed-carrier SSB
Conventional amplitude-modulated signals can be considered wasteful of power and bandwidth because they contain a carrier signal and two identical sidebands. Therefore, SSB transmitters are generally designed to minimize the amplitude of the carrier signal. When the carrier is removed from the transmitted signal, it is called suppressed-carrier SSB.
However, in order for a receiver to reproduce the transmitted audio without distortion, it must be tuned to exactly the same frequency as the transmitter. Since this is difficult to achieve in practice, SSB transmissions can sound unnatural, and if the error in frequency is great enough, it can cause poor intelligibility. In order to correct this, a small amount of the original carrier signal can be transmitted so that receivers with the necessary circuitry to synchronize with the transmitted carrier can correctly demodulate the audio. This mode of transmission is called reduced-carrier single-sideband.
In other cases, it may be desirable to maintain some degree of compatibility with simple AM receivers, while still reducing the signal's bandwidth. This can be accomplished by transmitting single-sideband with a normal or slightly reduced carrier. This mode is called compatible (or full-carrier) SSB or amplitude modulation equivalent (AME). In typical AME systems, harmonic distortion can reach 25%, and intermodulation distortion can be much higher than normal, but minimizing distortion in receivers with envelope detectors is generally considered less important than allowing them to produce intelligible audio.
A second, and perhaps more correct, definition of "compatible single sideband" (CSSB) refers to a form of amplitude and phase modulation in which the carrier is transmitted along with a series of sidebands that are predominantly above or below the carrier term. Since phase modulation is present in the generation of the signal, energy is removed from the carrier term and redistributed into the sideband structure similar to that which occurs in analog frequency modulation. The signals feeding the phase modulator and the envelope modulator are further phase-shifted by 90° with respect to each other. This places the information terms in quadrature with each other; the Hilbert transform of information to be transmitted is utilized to cause constructive addition of one sideband and cancellation of the opposite primary sideband. Since phase modulation is employed, higher-order terms are also generated. Several methods have been employed to reduce the impact (amplitude) of most of these higher-order terms.
In one system, the phase-modulated term is actually the log of the value of the carrier level plus the phase-shifted audio/information term. This produces an ideal CSSB signal, where at low modulation levels only a first-order term on one side of the carrier is predominant. As the modulation level is increased, the carrier level is reduced while a second-order term increases substantially in amplitude. At the point of 100% envelope modulation, 6 dB of power is removed from the carrier term, and the second-order term is identical in amplitude to the carrier term. The first-order sideband has increased in level until it is now at the same level as the formerly unmodulated carrier. At the point of 100% modulation, the spectrum appears identical to a normal double-sideband AM transmission, with the center term (now the primary audio term) at a 0 dB reference level, and both terms on either side of the primary sideband at −6 dB. The difference is that what appears to be the carrier has shifted by the audio-frequency term towards the "sideband in use". At levels below 100% modulation, the sideband structure appears quite asymmetric.
When voice is conveyed by a CSSB source of this type, low-frequency components are dominant, while higher-frequency terms are lower by as much as 20 dB at 3 kHz. The result is that the signal occupies approximately 1/2 the normal bandwidth of a full-carrier, DSB signal. There is one catch: the audio term utilized to phase-modulate the carrier is generated based on a log function that is biased by the carrier level. At negative 100% modulation, the term is driven to zero (0), and the modulator becomes undefined. Strict modulation control must be employed to maintain stability of the system and avoid splatter. This system is of Russian origin and was described in the late 1950s. It is uncertain whether it was ever deployed.
A second series of approaches was designed and patented by Leonard R. Kahn. The various Kahn systems removed the hard limit imposed by the use of the strict log function in the generation of the signal. Earlier Kahn systems utilized various methods to reduce the second-order term through the insertion of a predistortion component. One example of this method was also used to generate one of the Kahn independent-sideband (ISB) AM stereo signals. It was known as the STR-77 exciter method, having been introduced in 1977. Later, the system was further improved by use of an arcsine-based modulator that included a 1-0.52E term in the denominator of the arcsin generator equation. E represents the envelope term; roughly half the modulation term applied to the envelope modulator is utilized to reduce the second-order term of the arcsin "phase"-modulated path; thus reducing the second-order term in the undesired sideband. A multi-loop modulator/demodulator feedback approach was used to generate an accurate arcsin signal. This approach was introduced in 1984 and became known as the STR-84 method. It was sold by Kahn Research Laboratories; later, Kahn Communications, Inc. of NY. An additional audio processing device further improved the sideband structure by selectively applying pre-emphasis to the modulating signals. Since the envelope of all the signals described remains an exact copy of the information applied to the modulator, it can be demodulated without distortion by an envelope detector such as a simple diode. In a practical receiver, some distortion may be present, usually at a low level (in AM broadcast, always below 5%), due to sharp filtering and nonlinear group delay in the IF filters of the receiver, which act to truncate the compatibility sideband – those terms that are not the result of a linear process of simply envelope modulating the signal as would be the case in full-carrier DSB-AM – and rotation of phase of these compatibility terms such that they no longer cancel the quadrature distortion term caused by a first-order SSB term along with the carrier. The small amount of distortion caused by this effect is generally quite low and acceptable.
The Kahn CSSB method was also briefly used by Airphone as the modulation method employed for early consumer telephone calls that could be placed from an aircraft to ground. This was quickly supplanted by digital modulation methods to achieve even greater spectral efficiency.
While CSSB is seldom used today in the AM/MW broadcast bands worldwide, some amateur radio operators still experiment with it.
Demodulation
The front end of an SSB receiver is similar to that of an AM or FM receiver, consisting of a superheterodyne RF front end that produces a frequency-shifted version of the radio frequency (RF) signal within a standard intermediate frequency (IF) band.
To recover the original signal from the IF SSB signal, the single sideband must be frequency-shifted down to its original range of baseband frequencies, by using a product detector which mixes it with the output of a beat frequency oscillator (BFO). In other words, it is just another stage of heterodyning. For this to work, the BFO frequency must be exactly adjusted.
If the BFO frequency is off, the output signal will be frequency-shifted (up or down), making speech sound strange and "Donald Duck"-like, or unintelligible.
For audio communications, there is a common agreement on a BFO oscillator shift of 1.7 kHz. A voice signal is sensitive to a shift of about 50 Hz, with up to 100 Hz still bearable. Some receivers use a carrier recovery system, which attempts to lock automatically onto the exact IF frequency. Carrier recovery does not eliminate the frequency shift, but it gives a better signal-to-noise ratio at the detector output.
As an example, consider an IF SSB signal centered at frequency F_if = 45000 Hz. The baseband frequency it needs to be shifted to is F_b = 2000 Hz. The BFO output waveform is sin(2π·F_bfo·t). When the signal is multiplied by (aka heterodyned with) the BFO waveform, it shifts the signal to F_if − F_bfo, and to F_if + F_bfo, which is known as the beat frequency or image frequency. The objective is to choose an F_bfo that results in |F_if − F_bfo| = 2000 Hz. (The unwanted components at F_if + F_bfo can be removed by a lowpass filter, for which an output transducer or the human ear may serve.)
There are two choices for F_bfo: 43000 Hz and 47000 Hz, called low-side and high-side injection. With high-side injection, the spectral components that were distributed around 45000 Hz will be distributed around 2000 Hz in the reverse order, also known as an inverted spectrum. That is in fact desirable when the IF spectrum is also inverted, because the BFO inversion restores the proper relationships. One reason for that is when the IF spectrum is the output of an inverting stage in the receiver. Another reason is when the SSB signal is actually a lower sideband, instead of an upper sideband. But if both reasons are true, then the IF spectrum is not inverted, and the non-inverting BFO (43000 Hz) should be used.
If F_bfo is off by a small amount, then the beat frequency is not exactly F_b, which can lead to the speech distortion mentioned earlier.
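The heterodyning arithmetic above can be illustrated with a short numerical sketch. The following Python example (assuming NumPy and SciPy; the sample rate, filter order, and single-tone test signal are illustrative choices, not part of any particular receiver design) mixes a 45 kHz IF component with a 43 kHz BFO and low-pass filters the result to recover the 2 kHz baseband component:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200_000                                  # sample rate (Hz), assumed
t = np.arange(fs) / fs                        # one second of samples
if_signal = np.cos(2 * np.pi * 45_000 * t)    # one spectral component of the IF SSB signal

f_bfo = 43_000                                # low-side injection; 47000 Hz would invert the spectrum
mixed = if_signal * np.cos(2 * np.pi * f_bfo * t)
# "mixed" now contains components at 45000 - 43000 = 2000 Hz and 45000 + 43000 = 88000 Hz.

b, a = butter(5, 5_000 / (fs / 2))            # lowpass filter to discard the 88 kHz component
audio = filtfilt(b, a, mixed)                 # recovered 2 kHz baseband tone
```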
SSB as a speech-scrambling technique
SSB techniques can also be adapted to frequency-shift and frequency-invert baseband waveforms (voice inversion). This voice scrambling method was implemented by running audio modulated onto one sideband through equipment set to the opposite sideband (e.g. running an LSB-modulated audio sample through a radio running USB modulation).
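A baseband-equivalent sketch of the resulting frequency inversion, assuming NumPy and SciPy (the 3.3 kHz inversion carrier, filter order, and test tone are illustrative assumptions; a real scrambler works on the RF sidebands as described above), is:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 48_000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 500 * t)            # stand-in for speech in a 300-3000 Hz band

f_inv = 3_300                                  # inversion carrier just above the top of the voice band
mixed = audio * np.cos(2 * np.pi * f_inv * t)
# "mixed" contains f_inv - f (the inverted band) and f_inv + f (to be discarded).

b, a = butter(6, 3_000 / (fs / 2))             # a sharper filter would be used in practice
scrambled = filtfilt(b, a, mixed)              # 500 Hz becomes 2800 Hz; 3000 Hz would become 300 Hz
# Applying the same mix-and-filter operation again restores the original band (descrambling).
```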
These effects were used, in conjunction with other filtering techniques, during World War II as a simple method for speech encryption. Radiotelephone conversations between the US and Britain were intercepted and "decrypted" by the Germans; they included some early conversations between Franklin D. Roosevelt and Churchill. In fact, the signals could be understood directly by trained operators. Largely to allow secure communications between Roosevelt and Churchill, the SIGSALY system of digital encryption was devised.
Today, such simple inversion-based speech encryption techniques are easily decrypted using simple techniques and are no longer regarded as secure.
Vestigial sideband (VSB)
Since single-sideband modulation is practical for voice signals but not for video/TV signals, vestigial sideband modulation is used for television instead. A vestigial sideband (in radio communication) is a sideband that has been only partly cut off or suppressed. Television broadcasts (in analog video formats) use this method when the video is transmitted in AM, because of the large bandwidth required. It may also be used in digital transmission, such as the ATSC standardized 8VSB.
The broadcast or transport channel for TV in countries that use NTSC or ATSC has a bandwidth of 6 MHz. To conserve bandwidth, SSB would be desirable, but the video signal has significant low-frequency content (average brightness) and has rectangular synchronising pulses. The engineering compromise is vestigial-sideband transmission. In vestigial sideband, the full upper sideband of bandwidth W2 = 4.0 MHz is transmitted, but only W1 = 0.75 MHz of the lower sideband is transmitted, along with a carrier. The carrier frequency is 1.25 MHz above the lower edge of the 6 MHz-wide channel. This effectively makes the system AM at low modulation frequencies and SSB at high modulation frequencies. The absence of the lower sideband components at high frequencies must be compensated for, and this is done in the IF amplifier.
Frequencies for LSB and USB in amateur radio voice communication
When single-sideband is used in amateur radio voice communications, it is common practice that for frequencies below 10 MHz, lower sideband (LSB) is used and for frequencies of 10 MHz and above, upper sideband (USB) is used. For example, on the 40 m band, voice communications often take place around 7.100 MHz using LSB mode. On the 20 m band at 14.200 MHz, USB mode would be used.
An exception to this rule applies to the five discrete amateur channels on the 60-meter band (near 5.3 MHz) where FCC rules specifically require USB.
Extended single sideband (eSSB)
Extended single sideband is any J3E (SSB-SC) mode that exceeds the audio bandwidth of standard or traditional 2.9 kHz SSB J3E modes (ITU 2K90J3E) to support higher-quality sound.
Amplitude-companded single-sideband modulation (ACSSB)
Amplitude-companded single sideband (ACSSB) is a narrowband modulation method using a single sideband with a pilot tone, allowing an expander in the receiver to restore the amplitude that was severely compressed by the transmitter. It offers improved effective range over standard SSB modulation while simultaneously retaining backwards compatibility with standard SSB radios. ACSSB also offers reduced bandwidth and improved range for a given power level compared with narrow band FM modulation.
Controlled-envelope single-sideband modulation (CESSB)
The generation of standard SSB modulation results in large envelope overshoots well above the average envelope level for a sinusoidal tone (even when the audio signal is peak-limited). The standard SSB envelope peaks are due to truncation of the spectrum and nonlinear phase distortion from the approximation errors of the practical implementation of the required Hilbert transform. It was recently shown that suitable overshoot compensation (so-called controlled-envelope single-sideband modulation or CESSB) achieves about 3.8 dB of peak reduction for speech transmission. This results in an effective average power increase of about 140%.
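The overshoot itself is easy to observe numerically. In the sketch below (assuming NumPy and SciPy; the two-tone test signal and clipping level are illustrative assumptions), the envelope of the corresponding SSB signal is obtained as the magnitude of the analytic signal of the peak-limited audio and compared with the audio peak:

```python
import numpy as np
from scipy.signal import hilbert

fs = 48_000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 700 * t) + np.sin(2 * np.pi * 1_900 * t)
audio = np.clip(audio, -1.0, 1.0)            # peak-limited audio, |audio| <= 1

envelope = np.abs(hilbert(audio))            # envelope of the SSB signal generated from this audio

overshoot_db = 20 * np.log10(envelope.max() / np.abs(audio).max())
print(f"envelope overshoot: {overshoot_db:.1f} dB")   # prints a positive value of a few dB
```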
Although the generation of the CESSB signal can be integrated into the SSB modulator, it is feasible to separate the generation of the CESSB signal (e.g. in form of an external speech preprocessor) from a standard SSB radio. This requires that the standard SSB radio's modulator be linear-phase and have a sufficient bandwidth to pass the CESSB signal. If a standard SSB modulator meets these requirements, then the envelope control by the CESSB process is preserved.
ITU designations
In 1982, the International Telecommunication Union (ITU) designated the types of amplitude modulation.
See also
ACSSB, amplitude-companded single sideband
Independent sideband
Modulation for other examples of modulation techniques
Sideband for more general information about a sideband
References
Sources
Partly from Federal Standard 1037C in support of MIL-STD-188
Further reading
Sgrignoli, G., W. Bretl, and R. Citta (1995). "VSB modulation used for terrestrial and cable broadcasts." IEEE Transactions on Consumer Electronics, vol. 41, issue 3, pp. 367–382.
Brittain, J. (1992). "Scanning the past: Ralph V. L. Hartley", Proc. IEEE, vol. 80, p. 463.
eSSB - Extended Single Sideband
Radio modulation modes
Electronic design |
29087 | https://en.wikipedia.org/wiki/Security%20through%20obscurity | Security through obscurity | Security through obscurity (or security by obscurity) is the reliance in security engineering on design or implementation secrecy as the main method of providing security to a system or component. Security experts have rejected this view as far back as 1851, and advise that obscurity should never be the only security mechanism.
History
An early opponent of security through obscurity was the locksmith Alfred Charles Hobbs, who in 1851 demonstrated to the public how state-of-the-art locks could be picked. In response to concerns that exposing security flaws in the design of locks could make them more vulnerable to criminals, he said: "Rogues are very keen in their profession, and know already much more than we can teach them."
There is scant formal literature on the issue of security through obscurity. Books on security engineering cite Kerckhoffs' doctrine from 1883, if they cite anything at all. For example, in a discussion about secrecy and openness in Nuclear Command and Control:
[T]he benefits of reducing the likelihood of an accidental war were considered to outweigh the possible benefits of secrecy. This is a modern reincarnation of Kerckhoffs' doctrine, first put forward in the nineteenth century, that the security of a system should depend on its key, not on its design remaining obscure.
In the field of legal academia, Peter Swire has written about the trade-off between the notion that "security through obscurity is an illusion" and the military notion that "loose lips sink ships" as well as how competition affects the incentives to disclose.
The principle of security through obscurity was more generally accepted in cryptographic work in the days when essentially all well-informed cryptographers were employed by national intelligence agencies, such as the National Security Agency. Now that cryptographers often work at universities, where researchers publish many or even all of their results, and publicly test others' designs, or in private industry, where results are more often controlled by patents and copyrights than by secrecy, the argument has lost some of its former popularity. An early example was PGP, whose source code is publicly available to anyone. The security technology in some of the best commercial browsers (such as Chromium or Mozilla Firefox) is also considered highly secure despite being open source.
There are conflicting stories about the origin of this term. Fans of MIT's Incompatible Timesharing System (ITS) say it was coined in opposition to Multics users down the hall, for whom security was far more an issue than on ITS. Within the ITS culture the term referred, self-mockingly, to the poor coverage of the documentation and obscurity of many commands, and to the attitude that by the time a tourist figured out how to make trouble he'd generally got over the urge to make it, because he felt part of the community. One instance of deliberate security through obscurity on ITS has been noted: the command to allow patching the running ITS system (altmode altmode control-R) echoed as $$^D. Typing Alt Alt Control-D set a flag that would prevent patching the system even if the user later got it right.
In January 2020, NPR reported that Democratic party officials in Iowa declined to share information regarding the security of its caucus app, to "make sure we are not relaying information that could be used against us." Cybersecurity experts replied that "to withhold the technical details of its app doesn't do much to protect the system."
Criticism
Security by obscurity alone is discouraged and not recommended by standards bodies. The National Institute of Standards and Technology (NIST) in the United States sometimes recommends against this practice: "System security should not depend on the secrecy of the implementation or its components."
The technique stands in contrast with security by design and open security, although many real-world projects include elements of all strategies.
Obscurity in architecture vs. technique
Knowledge of how the system is built differs from concealment and camouflage. The effectiveness of obscurity in operations security depends on whether the obscurity lives on top of other good security practices, or whether it is used alone. When used as an independent layer, obscurity is considered a valid security tool.
In recent years, security through obscurity has gained support as a methodology in cybersecurity through Moving Target Defense and cyber deception. NIST's cyber resiliency framework, 800-160 Volume 2, recommends the usage of security through obscurity as a complementary part of a resilient and secure computing environment.
See also
Steganography
Code morphing
Kerckhoffs' principle
Need to know
Obfuscated code
Presumed security
Secure by design
AACS encryption key controversy
Zero-day (computing)
Code talker
Obfuscation
References
External links
Eric Raymond on Cisco's IOS source code 'release' v Open Source
Computer Security Publications: Information Economics, Shifting Liability and the First Amendment by Ethan M. Preston and John Lofton
by Jay Beale
Secrecy, Security and Obscurity & The Non-Security of Secrecy by Bruce Schneier
"Security through obsolescence", Robin Miller, linux.com, June 6, 2002
Computer security procedures
Cryptography
Secrecy |
29122 | https://en.wikipedia.org/wiki/Signals%20intelligence | Signals intelligence | Signals intelligence (SIGINT) is intelligence-gathering by interception of signals, whether communications between people (communications intelligence—abbreviated to COMINT) or from electronic signals not directly used in communication (electronic intelligence—abbreviated to ELINT). Signals intelligence is a subset of intelligence collection management.
As classified and sensitive information is usually encrypted, signals intelligence in turn involves the use of cryptanalysis to decipher the messages. Traffic analysis—the study of who is signaling whom and in what quantity—is also used to integrate information again.
History
Origins
Electronic interceptions appeared as early as 1900, during the Boer War of 1899–1902. The British Royal Navy had installed wireless sets produced by Marconi on board their ships in the late 1890s and the British Army used some limited wireless signalling. The Boers captured some wireless sets and used them to make vital transmissions. Since the British were the only people transmitting at the time, no special interpretation of the signals that were intercepted by the British was necessary.
The birth of signals intelligence in a modern sense dates from the Russo-Japanese War of 1904–1905. As the Russian fleet prepared for conflict with Japan in 1904, the British ship HMS Diana stationed in the Suez Canal intercepted Russian naval wireless signals being sent out for the mobilization of the fleet, for the first time in history.
Development in World War I
Over the course of the First World War, the new method of signals intelligence reached maturity. Failure to properly protect its communications fatally compromised the Russian Army in its advance early in World War I and led to their disastrous defeat by the Germans under Ludendorff and Hindenburg at the Battle of Tannenberg. In 1918, French intercept personnel captured a message written in the new ADFGVX cipher, which was cryptanalyzed by Georges Painvin. This gave the Allies advance warning of the German 1918 Spring Offensive.
The British in particular built up great expertise in the newly emerging field of signals intelligence and codebreaking (synonymous with cryptanalysis). On the declaration of war, Britain cut all German undersea cables. This forced the Germans to use either a telegraph line that connected through the British network and could be tapped, or radio, which the British could then intercept. Rear-Admiral Henry Oliver appointed Sir Alfred Ewing to establish an interception and decryption service at the Admiralty, Room 40. An interception service known as the 'Y' service, together with the post office and Marconi stations, grew rapidly to the point where the British could intercept almost all official German messages.
The German fleet was in the habit each day of wirelessing the exact position of each ship and giving regular position reports when at sea. It was possible to build up a precise picture of the normal operation of the High Seas Fleet, to infer from the routes they chose where defensive minefields had been placed and where it was safe for ships to operate. Whenever a change to the normal pattern was seen, it immediately signalled that some operation was about to take place and a warning could be given. Detailed information about submarine movements was also available.
The use of radio receiving equipment to pinpoint the location of the transmitter was also developed during the war.
Captain H.J. Round, working for Marconi, began carrying out experiments with direction finding radio equipment for the army in France in 1915. By May 1915, the Admiralty was able to track German submarines crossing the North Sea. Some of these stations also acted as 'Y' stations to collect German messages, but a new section was created within Room 40 to plot the positions of ships from the directional reports.
Room 40 played an important role in several naval engagements during the war, notably in detecting major German sorties into the North Sea. The battle of Dogger Bank was won in no small part due to the intercepts that allowed the Navy to position its ships in the right place. It played a vital role in subsequent naval clashes, including at the Battle of Jutland as the British fleet was sent out to intercept them. The direction-finding capability allowed for the tracking and location of German ships, submarines and Zeppelins. The system was so successful that by the end of the war over 80 million words, comprising the totality of German wireless transmissions over the course of the war, had been intercepted by the operators of the Y-stations and decrypted. However, its most astonishing success was in decrypting the Zimmermann Telegram, a telegram from the German Foreign Office sent via Washington to its ambassador Heinrich von Eckardt in Mexico.
Postwar consolidation
With the importance of interception and decryption firmly established by the wartime experience, countries established permanent agencies dedicated to this task in the interwar period. In 1919, the British Cabinet's Secret Service Committee, chaired by Lord Curzon, recommended that a peace-time codebreaking agency should be created. The Government Code and Cypher School (GC&CS) was the first peace-time codebreaking agency, with a public function "to advise as to the security of codes and cyphers used by all Government departments and to assist in their provision", but also with a secret directive to "study the methods of cypher communications used by foreign powers". GC&CS officially formed on 1 November 1919, and produced its first decrypt on 19 October. By 1940, GC&CS was working on the diplomatic codes and ciphers of 26 countries, tackling over 150 diplomatic cryptosystems.
The US Cipher Bureau was established in 1919 and achieved some success at the Washington Naval Conference in 1921, through cryptanalysis by Herbert Yardley. Secretary of War Henry L. Stimson closed the US Cipher Bureau in 1929 with the words "Gentlemen do not read each other's mail."
World War II
The use of SIGINT had even greater implications during World War II. The combined effort of intercepts and cryptanalysis for the whole of the British forces in World War II came under the code name "Ultra" managed from Government Code and Cypher School at Bletchley Park. Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities which made Bletchley's attacks feasible.
Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". "Ultra" decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942. Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions.
Winston Churchill was reported to have told King George VI: "It is thanks to the secret weapon of General Menzies, put into use on all the fronts, that we won the war!" Supreme Allied Commander, Dwight D. Eisenhower, at the end of the war, described Ultra as having been "decisive" to Allied victory. Official historian of British Intelligence in World War II Sir Harry Hinsley, argued that Ultra shortened the war "by not less than two years and probably by four years"; and that, in the absence of Ultra, it is uncertain how the war would have ended.
Technical definitions
The United States Department of Defense has defined the term "signals intelligence" as:
A category of intelligence comprising either individually or in combination all communications intelligence (COMINT), electronic intelligence (ELINT), and foreign instrumentation signals intelligence (FISINT), however transmitted.
Intelligence derived from communications, electronic, and foreign instrumentation signals.
Being a broad field, SIGINT has many sub-disciplines. The two main ones are communications intelligence (COMINT) and electronic intelligence (ELINT).
Disciplines shared across the branches
Targeting
A collection system has to know to look for a particular signal. "System", in this context, has several nuances. Targeting is an output of the process of developing collection requirements:
"1. An intelligence need considered in the allocation of intelligence resources. Within the Department of Defense, these collection requirements fulfill the essential elements of information and other intelligence needs of a commander, or an agency.
"2. An established intelligence need, validated against the appropriate allocation of intelligence resources (as a requirement) to fulfill the essential elements of information and other intelligence needs of an intelligence consumer."
Need for multiple, coordinated receivers
First, atmospheric conditions, sunspots, the target's transmission schedule and antenna characteristics, and other factors create uncertainty that a given signal intercept sensor will be able to "hear" the signal of interest, even with a geographically fixed target and an opponent making no attempt to evade interception. Basic countermeasures against interception include frequent changing of radio frequency, polarization, and other transmission characteristics. An intercept aircraft could not get off the ground if it had to carry antennas and receivers for every possible frequency and signal type to deal with such countermeasures.
Second, locating the transmitter's position is usually part of SIGINT. Triangulation and more sophisticated radio location techniques, such as time of arrival methods, require multiple receiving points at different locations. These receivers send location-relevant information to a central point, or perhaps to a distributed system in which all participate, such that the information can be correlated and a location computed.
Intercept management
Modern SIGINT systems, therefore, have substantial communications among intercept platforms. Even if some platforms are clandestine, there is still a broadcast of information telling them where and how to look for signals. A United States targeting system under development in the late 1990s, PSTS, constantly sends out information that helps the interceptors properly aim their antennas and tune their receivers. Larger intercept aircraft, such as the EP-3 or RC-135, have the on-board capability to do some target analysis and planning, but others, such as the RC-12 GUARDRAIL, are completely under ground direction. GUARDRAIL aircraft are fairly small, and usually work in units of three to cover a tactical SIGINT requirement, where the larger aircraft tend to be assigned strategic/national missions.
Before the detailed process of targeting begins, someone has to decide there is a value in collecting information about something. While it would be possible to direct signals intelligence collection at a major sports event, the systems would capture a great deal of noise, news signals, and perhaps announcements in the stadium. If, however, an anti-terrorist organization believed that a small group would be trying to coordinate their efforts, using short-range unlicensed radios, at the event, SIGINT targeting of radios of that type would be reasonable. Targeting would not know where in the stadium the radios might be located, or the exact frequency they are using; those are the functions of subsequent steps such as signal detection and direction finding.
Once the decision to target is made, the various interception points need to cooperate, since resources are limited.
Knowing what interception equipment to use becomes easier when a target country buys its radars and radios from known manufacturers, or is given them as military aid. National intelligence services keep libraries of devices manufactured by their own country and others, and then use a variety of techniques to learn what equipment is acquired by a given country.
Knowledge of physics and electronic engineering further narrows the problem of what types of equipment might be in use. An intelligence aircraft flying well outside the borders of another country will listen for long-range search radars, not short-range fire control radars that would be used by a mobile air defense. Soldiers scouting the front lines of another army know that the other side will be using radios that must be portable and not have huge antennas.
Signal detection
Even if a signal is human communications (e.g., a radio), the intelligence collection specialists have to know it exists. If the targeting function described above learns that a country has a radar that operates in a certain frequency range, the first step is to use a sensitive receiver, with one or more antennas that listen in every direction, to find an area where such a radar is operating. Once the radar is known to be in the area, the next step is to find its location.
If operators know the probable frequencies of transmissions of interest, they may use a set of receivers, preset to the frequencies of interest. Received energy on a particular frequency may start a recorder, and alert a human to listen to the signals if they are intelligible (i.e., COMINT). If the frequency is not known, the operators may look for power on primary or sideband frequencies using a spectrum analyzer, which plots the power of the received signal (vertical axis) against frequency (horizontal axis). Information from the spectrum analyzer is then used to tune receivers to signals of interest. For example, in a simplified spectrum of this kind, the actual information might appear as peaks at 800 kHz and 1.2 MHz.
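As a simplified illustration of that kind of search, the following Python sketch (assuming NumPy and SciPy; the sample rate, the two test carriers at 800 kHz and 1.2 MHz echoing the example above, and the detection threshold are illustrative assumptions) locates active frequencies in a wideband capture by looking for peaks in its spectrum:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 4_000_000                                # 4 MHz sample rate, assumed
t = np.arange(100_000) / fs
capture = (np.sin(2 * np.pi * 800_000 * t)
           + 0.5 * np.sin(2 * np.pi * 1_200_000 * t)
           + 0.05 * np.random.randn(t.size))  # two carriers plus background noise

spectrum = np.abs(np.fft.rfft(capture))
freqs = np.fft.rfftfreq(capture.size, d=1 / fs)

peaks, _ = find_peaks(spectrum, height=0.2 * spectrum.max())
print(freqs[peaks])                           # approximately [800000., 1200000.]
```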
Real-world transmitters and receivers usually are directional, so in practice each spectrum analyzer is connected to a directional antenna aimed in a particular direction.
Countermeasures to interception
Spread-spectrum communications is an electronic counter-countermeasures (ECCM) technique to defeat searches for particular frequencies. Spectrum analysis can be used in a different ECCM way to identify frequencies not being jammed or not in use.
Direction-finding
The earliest, and still common, means of direction finding is to use directional antennas as goniometers, so that a line can be drawn from the receiver through the position of the signal of interest. (See HF/DF.) Knowing the compass bearing, from a single point, to the transmitter does not locate it. Where the bearings from multiple points, using goniometry, are plotted on a map, the transmitter will be located at the point where the bearings intersect. This is the simplest case; a target may try to confuse listeners by having multiple transmitters, giving the same signal from different locations, switching on and off in a pattern known to their user but apparently random to the listener.
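On a small scale, the geometry of such a fix reduces to intersecting bearing lines. The following flat-earth Python sketch (assuming NumPy; the station positions and bearings are invented for illustration, and real systems must handle measurement error, Earth curvature, and near-parallel bearings) computes a transmitter position from two bearings:

```python
import numpy as np

def bearing_fix(p1, brg1, p2, brg2):
    """Intersect two bearing lines; positions are (east, north) in km, bearings in degrees from north."""
    d1 = np.array([np.sin(np.radians(brg1)), np.cos(np.radians(brg1))])
    d2 = np.array([np.sin(np.radians(brg2)), np.cos(np.radians(brg2))])
    # Solve p1 + a*d1 == p2 + b*d2 for the scalars a and b.
    a, b = np.linalg.solve(np.column_stack([d1, -d2]), np.array(p2) - np.array(p1))
    return np.array(p1) + a * d1

# Two intercept stations 40 km apart both take a bearing on the same transmitter.
print(bearing_fix((0.0, 0.0), 45.0, (40.0, 0.0), 315.0))   # -> [20. 20.]
```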
Individual directional antennas have to be manually or automatically turned to find the signal direction, which may be too slow when the signal is of short duration. One alternative is the Wullenweber array technique. In this method, several concentric rings of antenna elements simultaneously receive the signal, so that the best bearing will ideally be clearly on a single antenna or a small set. Wullenweber arrays for high-frequency signals are enormous, referred to as "elephant cages" by their users.
An alternative to tunable directional antennas, or large omnidirectional arrays such as the Wullenweber, is to measure the time of arrival of the signal at multiple points, using GPS or a similar method to have precise time synchronization. Receivers can be on ground stations, ships, aircraft, or satellites, giving great flexibility.
Modern anti-radiation missiles can home in on and attack transmitters; military antennas are rarely a safe distance from the user of the transmitter.
Traffic analysis
When locations are known, usage patterns may emerge, from which inferences may be drawn. Traffic analysis is the discipline of drawing patterns from information flow among a set of senders and receivers, whether those senders and receivers are designated by location determined through direction finding, by addressee and sender identifications in the message, or even MASINT techniques for "fingerprinting" transmitters or operators. Message content, other than the sender and receiver, is not necessary to do traffic analysis, although more information can be helpful.
For example, if a certain type of radio is known to be used only by tank units, even if the position is not precisely determined by direction finding, it may be assumed that a tank unit is in the general area of the signal. The owner of the transmitter can assume someone is listening, so might set up tank radios in an area where he wants the other side to believe he has actual tanks. As part of Operation Quicksilver, part of the deception plan for the invasion of Europe at the Battle of Normandy, radio transmissions simulated the headquarters and subordinate units of the fictitious First United States Army Group (FUSAG), commanded by George S. Patton, to make the German defense think that the main invasion was to come at another location. In like manner, fake radio transmissions from Japanese aircraft carriers, before the Battle of Pearl Harbor, were made from Japanese local waters, while the attacking ships moved under strict radio silence.
Traffic analysis need not focus on human communications. For example, a sequence consisting of a radar signal, followed by an exchange of targeting data and a confirmation, followed by observation of artillery fire, may identify an automated counterbattery system. A radio signal that triggers navigational beacons could be a landing aid system for an airstrip or helicopter pad that is intended to be low-profile.
Patterns do emerge. Knowing a radio signal, with certain characteristics, originating from a fixed headquarters may be strongly suggestive that a particular unit will soon move out of its regular base. The contents of the message need not be known to infer the movement.
There is an art as well as science of traffic analysis. Expert analysts develop a sense for what is real and what is deceptive. Harry Kidder, for example, was one of the star cryptanalysts of World War II, a star hidden behind the secret curtain of SIGINT.
Electronic order of battle
Generating an electronic order of battle (EOB) requires identifying SIGINT emitters in an area of interest, determining their geographic location or range of mobility, characterizing their signals, and, where possible, determining their role in the broader organizational order of battle. EOB covers both COMINT and ELINT. The Defense Intelligence Agency maintains an EOB by location. The Joint Spectrum Center (JSC) of the Defense Information Systems Agency supplements this location database with five more technical databases:
FRRS: Frequency Resource Record System
BEI: Background Environment Information
SCS: Spectrum Certification System
EC/S: Equipment Characteristics/Space
TACDB: platform lists, sorted by nomenclature, which contain links to the C-E equipment complement of each platform, with links to the parametric data for each piece of equipment, military unit lists and their subordinate units with equipment used by each unit.
For example, several voice transmitters might be identified as the command net (i.e., top commander and direct reports) in a tank battalion or tank-heavy task force. Another set of transmitters might identify the logistic net for that same unit. An inventory of ELINT sources might identify the medium- and long-range counter-artillery radars in a given area.
Signals intelligence units will identify changes in the EOB, which might indicate enemy unit movement, changes in command relationships, and increases or decreases in capability.
Using the COMINT gathering method enables the intelligence officer to produce an electronic order of battle by traffic analysis and content analysis among several enemy units. For example, if the following messages were intercepted:
U1 to U2, requesting permission to proceed to checkpoint X.
U2 to U1, approved. please report at arrival.
(20 minutes later) U1 to U2, all vehicles have arrived to checkpoint X.
This sequence shows that there are two units in the battlefield: unit 1 is mobile, while unit 2 is at a higher hierarchical level, perhaps a command post. One can also infer that unit 1 moved from one point to another that are about 20 minutes apart by vehicle. If these are regular reports over a period of time, they might reveal a patrol pattern. Direction-finding and radio frequency MASINT could help confirm that the traffic is not deception.
The EOB buildup process is divided as follows:
Signal separation
Measurements optimization
Data fusion
Network build-up
Separation of the intercepted spectrum and the signals intercepted from each sensor must take place in an extremely small period of time, in order to separate the different signals to different transmitters in the battlefield. The complexity of the separation process depends on the complexity of the transmission methods (e.g., hopping or time-division multiple access (TDMA)).
By gathering and clustering data from each sensor, the measurements of the direction of signals can be optimized and get much more accurate than the basic measurements of a standard direction finding sensor. By calculating larger samples of the sensor's output data in near real-time, together with historical information of signals, better results are achieved.
Data fusion correlates data samples from different frequencies from the same sensor, "same" being confirmed by direction finding or radiofrequency MASINT. If an emitter is mobile, direction finding, other than discovering a repetitive pattern of movement, is of limited value in determining if a sensor is unique. MASINT then becomes more informative, as individual transmitters and antennas may have unique side lobes, unintentional radiation, pulse timing, etc.
Network build-up, or analysis of emitters (communication transmitters) in a target region over a sufficient period of time, enables creation of the communications flows of a battlefield.
Communications intelligence
COMINT (communications intelligence) is a sub-category of signals intelligence that deals with messages or voice information derived from the interception of foreign communications. COMINT is commonly referred to as SIGINT, which can cause confusion when talking about the broader intelligence disciplines. The US Joint Chiefs of Staff defines it as "Technical information and intelligence derived from foreign communications by other than the intended recipients".
COMINT, which is defined to be communications among people, will reveal some or all of the following:
Who is transmitting
Where they are located, and, if the transmitter is moving, the report may give a plot of the signal against location
If known, the organizational function of the transmitter
The time and duration of transmission, and the schedule if it is a periodic transmission
The frequencies and other technical characteristics of their transmission
Whether the transmission is encrypted, and whether it can be decrypted. If it is possible to intercept an originally transmitted cleartext or to obtain it through cryptanalysis, the language of the communication and a translation (when needed).
The addresses, if the signal is not a general broadcast and if addresses are retrievable from the message. These stations may also be COMINT (e.g., a confirmation of the message or a response message), ELINT (e.g., a navigation beacon being activated) or both. Rather than, or in addition to, an address or other identifier, there may be information on the location and signal characteristics of the responder.
Voice interception
A basic COMINT technique is to listen for voice communications, usually over radio but possibly "leaking" from telephones or from wiretaps. If the voice communications are encrypted, traffic analysis may still give information.
In the Second World War, for security the United States used Native American volunteer communicators known as code talkers, who used languages such as Navajo, Comanche and Choctaw, which would be understood by few people, even in the U.S. Even within these uncommon languages, the code talkers used specialized codes, so a "butterfly" might be a specific Japanese aircraft. British forces made limited use of Welsh speakers for the same reason.
While modern electronic encryption does away with the need for armies to use obscure languages, it is likely that some groups might use rare dialects that few outside their ethnic group would understand.
Text interception
Morse code interception was once very important, but Morse code telegraphy is now obsolete in the western world, although possibly used by special operations forces. Such forces, however, now have portable cryptographic equipment.
Specialists scan radio frequencies for character sequences (e.g., electronic mail) and fax.
Signaling channel interception
A given digital communications link can carry thousands or millions of voice communications, especially in developed countries. Without addressing the legality of such actions, the problem of identifying which channel contains which conversation becomes much simpler when the first thing intercepted is the signaling channel that carries information to set up telephone calls. In civilian and many military use, this channel will carry messages in Signaling System 7 protocols.
Retrospective analysis of telephone calls can be made from call detail records (CDRs) used for billing the calls.
Monitoring friendly communications
More a part of communications security than true intelligence collection, SIGINT units still may have the responsibility of monitoring one's own communications or other electronic emissions, to avoid providing intelligence to the enemy. For example, a security monitor may hear an individual transmitting inappropriate information over an unencrypted radio network, or simply one that is not authorized for the type of information being given. If immediately calling attention to the violation would not create an even greater security risk, the monitor will call out one of the BEADWINDOW codes used by Australia, Canada, New Zealand, the United Kingdom, the United States, and other nations working under their procedures. Standard BEADWINDOW codes (e.g., "BEADWINDOW 2") include:
Position: (e.g., disclosing, in an insecure or inappropriate way), "Friendly or enemy position, movement or intended movement, position, course, speed, altitude or destination or any air, sea or ground element, unit or force."
Capabilities: "Friendly or enemy capabilities or limitations. Force compositions or significant casualties to special equipment, weapons systems, sensors, units or personnel. Percentages of fuel or ammunition remaining."
Operations: "Friendly or enemy operation – intentions progress, or results. Operational or logistic intentions; mission participants flying programmes; mission situation reports; results of friendly or enemy operations; assault objectives."
Electronic warfare (EW): "Friendly or enemy electronic warfare (EW) or emanations control (EMCON) intentions, progress, or results. Intention to employ electronic countermeasures (ECM); results of friendly or enemy ECM; ECM objectives; results of friendly or enemy electronic counter-countermeasures (ECCM); results of electronic support measures/tactical SIGINT (ESM); present or intended EMCON policy; equipment affected by EMCON policy."
Friendly or enemy key personnel: "Movement or identity of friendly or enemy officers, visitors, commanders; movement of key maintenance personnel indicating equipment limitations."
Communications security (COMSEC): "Friendly or enemy COMSEC breaches. Linkage of codes or codewords with plain language; compromise of changing frequencies or linkage with line number/circuit designators; linkage of changing call signs with previous call signs or units; compromise of encrypted/classified call signs; incorrect authentication procedure."
Wrong circuit: "Inappropriate transmission. Information requested, transmitted or about to be transmitted which should not be passed on the subject circuit because it either requires greater security protection or it is not appropriate to the purpose for which the circuit is provided."
Other codes as appropriate for the situation may be defined by the commander.
In WWII, for example, the Japanese Navy, by poor practice, identified a key person's movement over a low-security cryptosystem. This made possible Operation Vengeance, the interception and death of the Combined Fleet commander, Admiral Isoroku Yamamoto.
Electronic signals intelligence
Electronic signals intelligence (ELINT) refers to intelligence-gathering by use of electronic sensors. Its primary focus lies on non-communications signals intelligence. The Joint Chiefs of Staff define it as "Technical and geolocation intelligence derived from foreign noncommunications electromagnetic radiations emanating from sources other than nuclear detonations or radioactive sources."
Signal identification is performed by analyzing the collected parameters of a specific signal, and either matching it to known criteria, or recording it as a possible new emitter. ELINT data are usually highly classified, and are protected as such.
The data gathered are typically pertinent to the electronics of an opponent's defense network, especially the electronic parts such as radars, surface-to-air missile systems, aircraft, etc. ELINT can be used to detect ships and aircraft by their radar and other electromagnetic radiation; commanders have to make choices between not using radar (EMCON), intermittently using it, or using it and expecting to avoid defenses. ELINT can be collected from ground stations near the opponent's territory, ships off their coast, aircraft near or in their airspace, or by satellite.
Complementary relationship to COMINT
Combining other sources of information and ELINT allows traffic analysis to be performed on electronic emissions which contain human encoded messages. The method of analysis differs from SIGINT in that any human encoded message which is in the electronic transmission is not analyzed during ELINT. What is of interest is the type of electronic transmission and its location. For example, during the Battle of the Atlantic in World War II, Ultra COMINT was not always available because Bletchley Park was not always able to read the U-boat Enigma traffic. But high-frequency direction finding ("huff-duff") was still able to detect U-boats by analysis of radio transmissions and the positions through triangulation from the direction located by two or more huff-duff systems. The Admiralty was able to use this information to plot courses which took convoys away from high concentrations of U-boats.
Other ELINT disciplines include intercepting and analyzing enemy weapons control signals, or the identification, friend or foe responses from transponders in aircraft used to distinguish enemy craft from friendly ones.
Role in air warfare
A very common area of ELINT is intercepting radars and learning their locations and operating procedures. Attacking forces may be able to avoid the coverage of certain radars, or, knowing their characteristics, electronic warfare units may jam radars or send them deceptive signals. Confusing a radar electronically is called a "soft kill", but military units will also send specialized missiles at radars, or bomb them, to get a "hard kill". Some modern air-to-air missiles also have radar homing guidance systems, particularly for use against large airborne radars.
Knowing where each surface-to-air missile and anti-aircraft artillery system is and its type means that air raids can be plotted to avoid the most heavily defended areas and to fly on a flight profile which will give the aircraft the best chance of evading ground fire and fighter patrols. It also allows for the jamming or spoofing of the enemy's defense network (see electronic warfare). Good electronic intelligence can be very important to stealth operations; stealth aircraft are not totally undetectable and need to know which areas to avoid. Similarly, conventional aircraft need to know where fixed or semi-mobile air defense systems are so that they can shut them down or fly around them.
ELINT and ESM
Electronic support measures (ESM) or electronic surveillance measures are ELINT techniques using various electronic surveillance systems, but the term is used in the specific context of tactical warfare. ESM give the information needed for electronic attack (EA) such as jamming, or directional bearings (compass angle) to a target in signals intercept such as in the huff-duff radio direction finding (RDF) systems so critically important during the World War II Battle of the Atlantic. After WWII, the RDF, originally applied only in communications, was broadened into systems to also take in ELINT from radar bandwidths and lower frequency communications systems, giving birth to a family of NATO ESM systems, such as the shipboard US AN/WLR-1—AN/WLR-6 systems and comparable airborne units. EA is also called electronic counter-measures (ECM). ESM provides information needed for electronic counter-counter measures (ECCM), such as understanding a spoofing or jamming mode so one can change one's radar characteristics to avoid them.
ELINT for meaconing
Meaconing is the combined intelligence and electronic-warfare practice of learning the characteristics of enemy navigation aids, such as radio beacons, and retransmitting them with incorrect information.
Foreign instrumentation signals intelligence
FISINT (Foreign instrumentation signals intelligence) is a sub-category of SIGINT, monitoring primarily non-human communication. Foreign instrumentation signals include (but are not limited to) telemetry (TELINT), tracking systems, and video data links. TELINT is an important part of national means of technical verification for arms control.
Counter-ELINT
Still at the research level are techniques that can only be described as counter-ELINT, which would be part of a SEAD campaign. It may be informative to compare and contrast counter-ELINT with ECCM.
SIGINT versus MASINT
Signals intelligence and measurement and signature intelligence (MASINT) are closely, and sometimes confusingly, related.
The signals intelligence disciplines of communications and electronic intelligence focus on the information in those signals themselves, as with COMINT detecting the speech in a voice communication or ELINT measuring the frequency, pulse repetition rate, and other characteristics of a radar.
MASINT also works with collected signals, but is more of an analysis discipline. There are, however, unique MASINT sensors, typically working in different regions or domains of the electromagnetic spectrum, such as infrared or magnetic fields. While NSA and other agencies have MASINT groups, the Central MASINT Office is in the Defense Intelligence Agency (DIA).
Where COMINT and ELINT focus on the intentionally transmitted part of the signal, MASINT focuses on unintentionally transmitted information. For example, a given radar antenna will have sidelobes emanating from a direction other than that in which the main antenna is aimed. The RADINT (radar intelligence) discipline involves learning to recognize a radar both by its primary signal, captured by ELINT, and its sidelobes, perhaps captured by the main ELINT sensor, or, more likely, a sensor aimed at the sides of the radio antenna.
MASINT associated with COMINT might involve the detection of common background sounds expected with human voice communications. For example, if a given radio signal comes from a radio used in a tank, but the interceptor does not hear engine noise or a higher voice frequency than the voice modulation usually uses, then even though the voice conversation is meaningful, MASINT might suggest it is a deception, not coming from a real tank.
See HF/DF for a discussion of SIGINT-captured information with a MASINT flavor, such as determining the frequency to which a receiver is tuned, from detecting the frequency of the beat frequency oscillator of the superheterodyne receiver.
Legality
Since the invention of the radio, the international consensus has been that radio waves are no one's property, and thus the interception itself is not illegal. There can, however, be national laws on who is allowed to collect, store and process radio traffic, and for what purposes.
Monitoring traffic in cables (i.e. telephone and Internet) is far more controversial, since it usually requires physical access to the cable, thereby violating ownership and the expectation of privacy.
See also
Central Intelligence Agency Directorate of Science & Technology
COINTELPRO
ECHELON
Foreign Intelligence Surveillance Act of 1978 Amendments Act of 2008
Geospatial intelligence
Human intelligence (espionage)
Imagery intelligence
Intelligence Branch (Canadian Forces)
List of intelligence gathering disciplines
Listening station
Open-source intelligence
Radio Reconnaissance Platoon
RAF Intelligence
Signals intelligence by alliances, nations and industries
Signals intelligence operational platforms by nation for current collection systems
TEMPEST
US signals intelligence in the Cold War
Venona
Zircon satellite
References
Further reading
Bamford, James, Body of Secrets: How America's NSA and Britain's GCHQ Eavesdrop on the World (Century, London, 2001)
West, Nigel, The SIGINT Secrets: The Signals Intelligence War, 1900 to Today (William Morrow, New York, 1988)
External links
Part I of IV Articles On Evolution of Army Signal Corps COMINT and SIGINT into NSA
NSA's overview of SIGINT
USAF Pamphlet on sources of intelligence
German WWII SIGINT/COMINT
Intelligence Programs and Systems
The U.S. Intelligence Community by Jeffrey T. Richelson
Secrets of Signals Intelligence During the Cold War and Beyond by Matthew Aid et al.
Maritime SIGINT Architecture Technical Standards Handbook
Command and control
Cryptography
Cyberwarfare
Intelligence gathering disciplines
Military intelligence |
29213 | https://en.wikipedia.org/wiki/Software%20cracking | Software cracking | Software cracking (known as "breaking" mostly in the 1980s) is the modification of software to remove or disable features which are considered undesirable by the person cracking the software, especially copy protection features (including protection against the manipulation of software, serial number, hardware key, date checks and disc check) or software annoyances like nag screens and adware.
A crack refers to the means of achieving software cracking, for example a stolen serial number or a tool that performs that act of cracking. Some of these tools are called keygens, patches, or loaders. A keygen is a handmade serial number generator for a product, often offering the ability to generate working serial numbers in the user's own name. A patch is a small computer program that modifies the machine code of another program. This has the advantage for a cracker of not having to include a large executable in a release when only a few bytes are changed. A loader modifies the startup flow of a program and does not remove the protection but circumvents it. A well-known example of a loader is a trainer used to cheat in games. Fairlight pointed out in one of their .nfo files that these types of cracks are not allowed for warez scene game releases. A nukewar has shown that the protection may not kick in at any point for it to be a valid crack.
The distribution of cracked copies is illegal in most countries. There have been lawsuits over cracking software. It might be legal to use cracked software in certain circumstances. Educational resources for reverse engineering and software cracking are, however, legal and available in the form of Crackme programs.
History
The first software copy protection was applied to software for the Apple II, Atari 8-bit family, and Commodore 64 computers. Software publishers have implemented increasingly complex methods in an effort to stop unauthorized copying of software.
On the Apple II, the operating system directly controls the step motor that moves the floppy drive head, and also directly interprets the raw data, called nibbles, read from each track to identify the data sectors. This allowed complex disk-based software copy protection, by storing data on half tracks (0, 1, 2.5, 3.5, 5, 6...), quarter tracks (0, 1, 2.25, 3.75, 5, 6...), and any combination thereof. In addition, tracks did not need to be perfect rings, but could be sectioned so that sectors could be staggered across overlapping offset tracks, the most extreme version being known as spiral tracking. It was also discovered that many floppy drives did not have a fixed upper limit to head movement, and it was sometimes possible to write an additional 36th track above the normal 35 tracks. The standard Apple II copy programs could not read such protected floppy disks, since the standard DOS assumed that all disks had a uniform 35-track, 13- or 16-sector layout. Special nibble-copy programs such as Locksmith and Copy II Plus could sometimes duplicate these disks by using a reference library of known protection methods; when protected programs were cracked they would be completely stripped of the copy protection system, and transferred onto a standard format disk that any normal Apple II copy program could read.
One of the primary routes to hacking these early copy protections was to run a program that simulates the normal CPU operation. The CPU simulator provides a number of extra features to the hacker, such as the ability to single-step through each processor instruction and to examine the CPU registers and modified memory spaces as the simulation runs (any modern disassembler/debugger can do this). The Apple II provided a built-in opcode disassembler, allowing raw memory to be decoded into CPU opcodes, and this would be utilized to examine what the copy-protection was about to do next. Generally there was little to no defense available to the copy protection system, since all its secrets are made visible through the simulation. However, because the simulation itself must run on the original CPU, in addition to the software being hacked, the simulation would often run extremely slowly even at maximum speed.
On Atari 8-bit computers, the most common protection method was via "bad sectors". These were sectors on the disk that were intentionally unreadable by the disk drive. The software would look for these sectors when the program was loading and would stop loading if an error code was not returned when accessing these sectors. Special copy programs were available that would copy the disk and remember any bad sectors. The user could then use an application to spin the drive by constantly reading a single sector and display the drive RPM. With the disk drive top removed a small screwdriver could be used to slow the drive RPM below a certain point. Once the drive was slowed down the application could then go and write "bad sectors" where needed. When done the drive RPM was sped up back to normal and an uncracked copy was made. Of course cracking the software to expect good sectors made for readily copied disks without the need to meddle with the disk drive. As time went on more sophisticated methods were developed, but almost all involved some form of malformed disk data, such as a sector that might return different data on separate accesses due to bad data alignment. Products became available (from companies such as Happy Computers) which replaced the controller BIOS in Atari's "smart" drives. These upgraded drives allowed the user to make exact copies of the original program with copy protections in place on the new disk.
On the Commodore 64, several methods were used to protect software. For software distributed on ROM cartridges, subroutines were included which attempted to write over the program code. If the software was on ROM, nothing would happen, but if the software had been moved to RAM, the software would be disabled. Because of the operation of Commodore floppy drives, one write protection scheme would cause the floppy drive head to bang against the end of its rail, which could cause the drive head to become misaligned. In some cases, cracked versions of software were desirable to avoid this result. A misaligned drive head was rare, usually fixing itself by smashing against the rail stops. Another brutal protection scheme was grinding from track 1 to 40 and back a few times.
Most of the early software crackers were computer hobbyists who often formed groups that competed against each other in the cracking and spreading of software. Breaking a new copy protection scheme as quickly as possible was often regarded as an opportunity to demonstrate one's technical superiority rather than a possibility of money-making. Some low-skilled hobbyists would take already cracked software and edit various unencrypted strings of text in it to change the messages a game would show its players, often to something considered vulgar. Uploading the altered copies on file sharing networks provided a source of laughs for adult users. The cracker groups of the 1980s started to advertise themselves and their skills by attaching animated screens known as crack intros in the software programs they cracked and released. Once the technical competition had expanded from the challenges of cracking to the challenges of creating visually stunning intros, the foundations for a new subculture known as demoscene were established. Demoscene started to separate itself from the illegal "warez scene" during the 1990s and is now regarded as a completely different subculture. Many software crackers have later grown into extremely capable software reverse engineers; the deep knowledge of assembly required in order to crack protections enables them to reverse engineer drivers in order to port them from binary-only drivers for Windows to drivers with source code for Linux and other free operating systems. Also, because music and game intros were such an integral part of gaming, the associated music formats and graphics became very popular when hardware became affordable for the home user.
With the rise of the Internet, software crackers developed secretive online organizations. In the latter half of the nineties, one of the most respected sources of information about "software protection reversing" was Fravia's website.
+HCU
The High Cracking University (+HCU) was founded by Old Red Cracker (+ORC), considered a genius of reverse engineering and a legendary figure in the field, to advance research into Reverse Code Engineering (RCE). He also taught and authored many papers on the subject, and his texts are considered classics in the field and are mandatory reading for students of RCE.
The addition of the "+" sign in front of the nickname of a reverser signified membership in the +HCU. Amongst the students of +HCU were the top of the elite Windows reversers worldwide. +HCU published a new reverse engineering problem annually and a small number of respondents with the best replies qualified for an undergraduate position at the university.
+Fravia was a professor at +HCU. Fravia's website was known as "+Fravia's Pages of Reverse Engineering" and he used it to challenge programmers as well as the wider society to "reverse engineer" the "brainwashing of a corrupt and rampant materialism". In its heyday, his website received millions of visitors per year and its influence was "widespread".
Nowadays most of the graduates of +HCU have migrated to Linux and few have remained as Windows reversers. The information at the university has been rediscovered by a new generation of researchers and practitioners of RCE who have started new research projects in the field.
Methods
The most common software crack is the modification of an application's binary to cause or prevent a specific key branch in the program's execution. This is accomplished by reverse engineering the compiled program code using a debugger such as SoftICE, OllyDbg, GDB, or MacsBug until the software cracker reaches the subroutine that contains the primary method of protecting the software (or by disassembling an executable file with a program such as IDA). The binary is then modified using the debugger or a hex editor or monitor in a manner that replaces a prior branching opcode with its complement or a NOP opcode so the key branch will either always execute a specific subroutine or skip over it. Almost all common software cracks are a variation of this type. Proprietary software developers are constantly developing techniques such as code obfuscation, encryption, and self-modifying code to make this modification increasingly difficult. Even with these measures in place, developers struggle to combat software cracking, because it is very common for an experienced cracker to publicly release a simple cracked executable or installer for download, eliminating the need for inexperienced users to crack the software themselves.
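As a minimal sketch of the branch-patching idea described above, the following Python fragment patches a single opcode in a hypothetical crackme binary (the kind of legal exercise mentioned earlier); the file name, offset, and byte values are assumptions chosen for illustration (on x86, 0x74 is the JE conditional jump, 0xEB an unconditional short jump, and 0x90 a NOP), not details of any real program.

```python
# Sketch: replace a guarding conditional jump in a hypothetical crackme
# binary. Offset and opcodes are illustrative assumptions.

PATCH_OFFSET = 0x1A3C          # hypothetical offset of the key branch
EXPECTED_OPCODE = 0x74         # JE (jump if equal) guarding the check
REPLACEMENT_OPCODE = 0xEB      # JMP short: make the branch unconditional
# (alternatively, two 0x90 NOP bytes would remove the jump entirely)

with open("crackme.bin", "rb") as f:
    data = bytearray(f.read())

if data[PATCH_OFFSET] != EXPECTED_OPCODE:
    raise SystemExit("unexpected byte at offset; refusing to patch")

data[PATCH_OFFSET] = REPLACEMENT_OPCODE

with open("crackme_patched.bin", "wb") as f:
    f.write(data)
```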
A specific example of this technique is a crack that removes the expiration period from a time-limited trial of an application. These cracks are usually programs that alter the program executable and sometimes the .dll or .so linked to the application. Similar cracks are available for software that requires a hardware dongle. A company can also break the copy protection of programs that they have legally purchased but that are licensed to particular hardware, so that there is no risk of downtime due to hardware failure (and, of course, no need to restrict oneself to running the software on bought hardware only).
Another method is the use of special software such as CloneCD to scan for the use of a commercial copy protection application. After discovering the software used to protect the application, another tool may be used to remove the copy protection from the software on the CD or DVD. This may enable another program such as Alcohol 120%, CloneDVD, Game Jackal, or Daemon Tools to copy the protected software to a user's hard disk. Popular commercial copy protection applications which may be scanned for include SafeDisc and StarForce.
In other cases, it might be possible to decompile a program in order to get access to the original source code or code on a level higher than machine code. This is often possible with scripting languages and languages utilizing JIT compilation. An example is cracking (or debugging) on the .NET platform, where one might consider manipulating CIL to achieve one's needs. Java's bytecode works in a similar fashion: there is an intermediate language before the program is compiled to run on platform-dependent machine code.
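As a small illustration of working at an intermediate-language level rather than on machine code, the sketch below uses Python's standard dis module to disassemble a hypothetical license-check function; the function and its hard-coded key are invented for the example, but the same inspection idea applies to CIL and Java bytecode viewers.

```python
# Disassembling an intermediate representation instead of machine code.
# check_license is a hypothetical stand-in; dis is in the standard library.
import dis

def check_license(key: str) -> bool:
    # Hypothetical "protection": compare against a hard-coded key.
    return key == "SECRET-1234"

# Prints the bytecode, making the LOAD_CONST of 'SECRET-1234' and the
# COMPARE_OP plainly visible -- the higher-level analogue of finding a
# key branch in a compiled executable.
dis.dis(check_license)
```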
Advanced reverse engineering for protections such as SecuROM, SafeDisc, StarForce, or Denuvo requires a cracker, or many crackers to spend much more time studying the protection, eventually finding every flaw within the protection code, and then coding their own tools to "unwrap" the protection automatically from executable (.EXE) and library (.DLL) files.
There are a number of sites on the Internet that let users download cracks produced by warez groups for popular games and applications (although at the danger of acquiring malicious software that is sometimes distributed via such sites). Although these cracks are used by legal buyers of software, they can also be used by people who have downloaded or otherwise obtained unauthorized copies (often through P2P networks).
See also
Reverse engineering
References
Hacker culture
Copyright infringement
Copyright infringement of software
Warez |
29580 | https://en.wikipedia.org/wiki/Set-top%20box | Set-top box | A set-top box (STB), also colloquially known as a cable box and historically as a television decoder, is an information appliance device that generally contains a TV-tuner input and connects an external signal source to a television set, turning the source signal into content in a form that can then be displayed on the television screen or other display device. Set-top boxes are used in cable television, satellite television, and over-the-air television systems, as well as in other applications.
According to the Los Angeles Times, the cost to a cable provider in the United States for a set-top box is between $150 for a basic box to $250 for a more sophisticated box. In 2016, the average pay-TV subscriber paid $231 per year to lease their set-top box from a cable service provider.
TV signal sources
The signal source might be an Ethernet cable, a satellite dish, a coaxial cable (see cable television), a telephone line (including DSL connections), broadband over power lines (BPL), or even an ordinary VHF or UHF antenna. Content, in this context, could mean any or all of video, audio, Internet web pages, interactive video games, or other possibilities. Satellite and microwave-based services also require specific external receiver hardware, so the use of set-top boxes of various formats has never completely disappeared. Set-top boxes can also enhance source signal quality.
UHF converter
Before the All-Channel Receiver Act of 1962 required US television receivers to be able to tune the entire VHF and UHF range (which in North America was NTSC-M channels 2 through 83 on 54 to 890 MHz), a set-top box known as a UHF converter would be installed at the receiver to shift a portion of the UHF-TV spectrum onto low-VHF channels for viewing. As some 1960s-era 12-channel TV sets remained in use for many years, and Canada and Mexico were slower than the US to require UHF tuners to be factory-installed in new TVs, a market for these converters continued to exist for much of the 1970s.
Cable converter
Cable television represented a possible alternative to deployment of UHF converters as broadcasts could be frequency-shifted to VHF channels at the cable head-end instead of the final viewing location. However, most cable systems could not accommodate the full 54-890 MHz VHF/UHF frequency range and the twelve channels of VHF space were quickly exhausted on most systems. Adding any additional channels therefore needed to be done by inserting the extra signals into cable systems on nonstandard frequencies, typically either below VHF channel 7 (midband) or directly above VHF channel 13 (superband).
These frequencies corresponded to non-television services (such as two-way radio) over the air and were therefore not tunable on standard TV receivers. Before cable-ready TV sets became common in the late 1980s, an electronic tuning device called a cable converter box was needed to receive the additional analog cable TV channels and transpose or convert the selected channel to an analog radio frequency (RF) signal for viewing on a regular TV set on a single channel, usually VHF channel 3 or 4. The box allowed an analog non-cable-ready television set to receive analog encrypted cable channels and was a forerunner of later digital encryption devices. Newer televisions were then made analog scrambled-cable-ready, with the standard converter built in, supporting the sale of premium television (pay-per-view) channels. Several years later, the advent of digital cable further increased the need for various forms of these devices. Block conversion of the entire affected frequency band onto UHF, while less common, was used by some models to provide full VCR compatibility and the ability to drive multiple TV sets, albeit with a somewhat nonstandard channel numbering scheme.
Newer television receivers greatly reduced the need for external set-top boxes, although cable converter boxes continue to be used to descramble premium cable channels according to carrier-controlled access restrictions, and to receive digital cable channels, along with using interactive services like video on demand, pay per view, and home shopping through television.
Closed captioning box
Set-top boxes were also made to enable closed captioning on older sets in North America, before this became a mandated inclusion in new TV sets. Some have also been produced to mute the audio (or replace it with noise) when profanity is detected in the captioning, where the offensive word is also blocked. Some also include a V-chip that allows only programs of some television content ratings. A function that limits children's time watching TV or playing video games may also be built in, though some of these work on main electricity rather than the video signal.
Digital television adapter
The transition to digital terrestrial television after the turn of the millennium left many existing television receivers unable to tune and display the new signal directly. In the United States, where analog shutdown was completed in 2009 for full-service broadcasters, a federal subsidy was offered for coupon-eligible converter boxes with deliberately limited capability which would restore signals lost to digital transition.
Professional set-top box
Professional set-top boxes are referred to as IRDs or integrated receiver/decoders in the professional broadcast audio/video industry. They are designed for more robust field handling and rack mounting environments. IRDs are capable of outputting uncompressed serial digital interface signals, unlike consumer STBs which usually don't, mostly because of copyright reasons.
Hybrid box
Hybrid set-top boxes, such as those used for Smart TV programming, enable viewers to access multiple TV delivery methods (including terrestrial, cable, internet, and satellite); like IPTV boxes, they include video on demand, time-shifting TV, Internet applications, video telephony, surveillance, gaming, shopping, TV-centric electronic program guides, and e-government. By integrating varying delivery streams, hybrids (sometimes known as "TV-centric") enable pay-TV operators more flexible application deployment, which decreases the cost of launching new services, increases speed to market, and limits disruption for consumers.
As examples, Hybrid Broadcast Broadband TV (HbbTV) set-top boxes allow traditional TV broadcasts, whether from terrestrial (DTT), satellite, or cable providers, to be brought together with video delivered over the Internet and personal multimedia content. Advanced Digital Broadcast (ADB) launched its first hybrid DTT/IPTV set-top box in 2005, which provided Telefónica with the digital TV platform for its Movistar TV service by the end of that year. In 2009, ADB provided Europe's first three-way hybrid digital TV platform to Polish digital satellite operator n, which enables subscribers to view integrated content whether delivered via satellite, terrestrial, or internet.
UK-based Inview Technology has over 8 million STBs deployed in the UK for Teletext and an original push VOD service for Top Up TV.
IPTV receiver
In IPTV networks, the set-top box is a small computer providing two-way communications on an IP network and decoding the video streaming media. IP set-top boxes have a built-in home network interface that can be Ethernet, wireless (802.11g/n/ac), or one of the existing wired home networking technologies such as HomePNA or the ITU-T G.hn standard, which provides a way to create a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines, and coaxial cables).
In the US and Europe, telephone companies use IPTV (often on ADSL or optical fiber networks) as a means to compete with traditional local cable television monopolies.
This type of service is distinct from Internet television, which involves third-party content over the public Internet not controlled by the local system operator.
Features
Programming features
Electronic program guide
Electronic program guides and interactive program guides provide users of television, radio, and other media applications with continuously updated menus displaying broadcast programming or scheduling information for current and upcoming programming. Some guides, such as ITV, also feature backward scrolling to promote their catch-up content.
Favorites
This feature allows the user to choose preferred channels, making them easier and quicker to access; this is handy with the wide range of digital channels on offer. The concept of favourite channels is superficially similar to that of the "bookmark" function offered in many Web browsers.
Timer
The timer allows the user to program and enable the box to switch between channels at certain times: this is handy to record from more than one channel while the user is out. The user still needs to program the VCR or DVD recorder.
Convenience features
Controls on the box
Some models have controls on the box, as well as on the remote control. This is useful should the user lose the remote or if the batteries age.
Remote controls that work with other TVs
Some remote controls can also control some basic functions of various brands of TVs. This allows the user to use just one remote to turn the TV on and off, adjust volume, or switch between digital and analog TV channels or between terrestrial and internet channels.
Parental locks
The parental lock or content filters allow users over 18 years old to block access to channels that are not appropriate for children, using a personal identification number. Some boxes simply block all channels, while others allow the user to restrict access to chosen channels not suitable for children below certain ages.
Software alternatives
As the complexity and potential programming faults of set-top boxes increase, software such as MythTV, Select-TV, and Microsoft's Media Center has developed features comparable to those of set-top boxes, ranging from basic DVR-like functionality to DVD copying, home automation, and house-wide music or video playback.
Firmware update features
Almost all modern set-top boxes feature automatic firmware update processes. The firmware update is typically provided by the service provider.
Ambiguities in the definition
With the advent of flat-panel televisions, set-top boxes are now deeper in profile than the tops of most modern TV sets. Because of this, set-top boxes are often placed beneath televisions, and the term set-top box has become something of a misnomer, possibly helping the adoption of the term digibox. Additionally, newer set-top boxes that sit at the edge of IP-based distribution networks are often called net-top boxes or NTBs, to differentiate between IP and RF inputs. The Roku LT is around the size of a pack of cards and delivers Smart TV to conventional sets.
The distinction between external tuner or demodulator boxes (traditionally considered to be "set-top boxes") and storage devices (such as VCR, DVD, or disc-based PVR units) is also blurred by the increasing deployment of satellite and cable tuner boxes with hard disk, network or USB interfaces built-in.
Devices with the capabilities of computer terminals, such as the WebTV thin client, also fall into the grey area that could invite the term "NTB".
Europe
In Europe, a set-top box does not necessarily contain a tuner of its own. A box connected to a television (or VCR) SCART connector is fed with the baseband television signal from the set's tuner, and can have the television display the returned processed signal instead.
This SCART feature had been used for connection to analogue decoding equipment by pay TV operators in Europe, and in the past was used for connection to teletext equipment before the decoders became built-in. The outgoing signal could be of the same nature as the incoming signal, or RGB component video, or even an "insert" over the original signal, due to the "fast switching" feature of SCART.
In case of analogue pay-TV, this approach avoided the need for a second remote control. The use of digital television signals in more modern pay-TV schemes requires that decoding take place before the digital-to-analogue conversion step, rendering the video outputs of an analogue SCART connector no longer suitable for interconnection to decryption hardware. Standards such as DVB's Common Interface and ATSC's CableCARD therefore use a PCMCIA-like card inserted as part of the digital signal path as their alternative to a tuner-equipped set-top box.
Energy use
In June 2011 a report from the American Natural Resources Defense Council brought attention to the energy efficiency of set-top boxes, and the US Department of Energy announced plans to consider the adoption of energy efficiency standards for set-top boxes. In November 2011, the National Cable & Telecommunications Association announced a new energy efficiency initiative that commits the largest American cable operators to the purchase of set-top boxes that meet Energy Star standards and the development of sleep modes that will use less energy when the set-top box is not being used to watch or record video.
See also
AllVid
CableCARD
Comparison of digital media players
DTV receiver
Digital media player
Microconsoles
Over-the-top media services
References
External links
"What Is a Set Top Box or STB Working and Architecture?" at Headendinfo.com
Cable television technology
Consumer electronics
Satellite broadcasting
Television terminology |
30403 | https://en.wikipedia.org/wiki/Turing%20machine | Turing machine | A Turing machine is a mathematical model of computation that defines an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, given any computer algorithm, a Turing machine capable of implementing that algorithm's logic can be constructed.
The machine operates on an infinite memory tape divided into discrete "cells". The machine positions its "head" over a cell and "reads" or "scans" the symbol there. Then, based on the symbol and the machine's own present state in a "finite table" of user-specified instructions, the machine first writes a symbol (e.g., a digit or a letter from a finite alphabet) in the cell (some models allow symbol erasure or no writing), then either moves the tape one cell left or right (some models allow no motion, some models move the head), then, based on the observed symbol and the machine's own state in the table, either proceeds to another instruction or halts computation.
The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). With this model, Turing was able to answer two questions in the negative:
Does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g., freezes, or fails to continue its computational task)?
Does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol?
Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem ('decision problem').
Turing machines proved the existence of fundamental limitations on the power of mechanical computation. While they can express arbitrary computations, their minimalist design makes them unsuitable for computation in practice: real-world computers are based on different designs that, unlike Turing machines, use random-access memory.
Turing completeness is the ability for a system of instructions to simulate a Turing machine. A programming language that is Turing complete is theoretically capable of expressing all tasks accomplishable by computers; nearly all programming languages are Turing complete if the limitations of finite memory are ignored.
Overview
A Turing machine is a general example of a central processing unit (CPU) that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine (automaton) capable of enumerating some arbitrary subset of valid strings of an alphabet; these strings are part of a recursively enumerable set. A Turing machine has a tape of infinite length on which it can perform read and write operations.
Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program. This is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing.
The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus.
A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine (UTM, or simply a universal machine). A more mathematically oriented definition with a similar "universal" nature was introduced by Alonzo Church, whose work on lambda calculus intertwined with Turing's in a formal theory of computation known as the Church–Turing thesis. The thesis states that Turing machines indeed capture the informal notion of effective methods in logic and mathematics, and provide a precise definition of an algorithm or "mechanical procedure". Studying their abstract properties yields many insights into computer science and complexity theory.
Physical description
In his 1948 essay, "Intelligent Machinery", Turing wrote that his machine consisted of:
Description
The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6;" etc. In the original article ("On Computable Numbers, with an Application to the Entscheidungsproblem", see also references below), Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly (or as Turing puts it, "in a desultory manner").
More explicitly, a Turing machine consists of:
A tape divided into cells, one next to the other. Each cell contains a symbol from some finite alphabet. The alphabet contains a special blank symbol (here written as '0') and one or more other symbols. The tape is assumed to be arbitrarily extendable to the left and to the right, so that the Turing machine is always supplied with as much tape as it needs for its computation. Cells that have not been written before are assumed to be filled with the blank symbol. In some models the tape has a left end marked with a special symbol; the tape extends or is indefinitely extensible to the right.
A head that can read and write symbols on the tape and move the tape left and right one (and only one) cell at a time. In some models the head moves and the tape is stationary.
A state register that stores the state of the Turing machine, one of finitely many. Among these is the special start state with which the state register is initialized. These states, writes Turing, replace the "state of mind" a person performing computations would ordinarily be in.
A finite table of instructions that, given the state (qi) the machine is currently in and the symbol (aj) it is reading on the tape (symbol currently under the head), tells the machine to do the following in sequence (for the 5-tuple models):
Either erase or write a symbol (replacing aj with aj1).
Move the head (which is described by dk and can have values: 'L' for one step left or 'R' for one step right or 'N' for staying in the same place).
Assume the same or a new state as prescribed (go to state qi1).
In the 4-tuple models, erasing or writing a symbol (aj1) and moving the head left or right (dk) are specified as separate instructions. The table tells the machine to (ia) erase or write a symbol or (ib) move the head left or right, and then (ii) assume the same or a new state as prescribed, but not both actions (ia) and (ib) in the same instruction. In some models, if there is no entry in the table for the current combination of symbol and state, then the machine will halt; other models require all entries to be filled.
Every part of the machine (i.e. its state, symbol-collections, and used tape at any given time) and its actions (such as printing, erasing and tape motion) is finite, discrete and distinguishable; it is the unlimited amount of tape and runtime that gives it an unbounded amount of storage space.
Formal definition
Following Hopcroft & Ullman (1979, p. 148), a (one-tape) Turing machine can be formally defined as a 7-tuple M = ⟨Q, Γ, b, Σ, δ, q0, F⟩ where
Γ is a finite, non-empty set of tape alphabet symbols;
b ∈ Γ is the blank symbol (the only symbol allowed to occur on the tape infinitely often at any step during the computation);
Σ ⊆ Γ \ {b} is the set of input symbols, that is, the set of symbols allowed to appear in the initial tape contents;
Q is a finite, non-empty set of states;
q0 ∈ Q is the initial state;
F ⊆ Q is the set of final states or accepting states. The initial tape contents is said to be accepted by M if it eventually halts in a state from F.
δ : (Q \ F) × Γ → Q × Γ × {L, R} is a partial function called the transition function, where L is left shift, R is right shift. If δ is not defined on the current state and the current tape symbol, then the machine halts; intuitively, the transition function specifies the next state transited from the current state, which symbol to overwrite the current symbol pointed by the head, and the next head movement.
In addition, the Turing machine can also have a reject state to make rejection more explicit. In that case there are three possibilities: accepting, rejecting, and running forever. Another possibility is to regard the final values on the tape as the output. However, if the only output is the final state the machine ends up in (or never halting), the machine can still effectively output a longer string by taking in an integer that tells it which bit of the string to output.
A relatively uncommon variant allows "no shift", say N, as a third element of the set of directions {L, R}.
The 7-tuple for the 3-state busy beaver looks like this (see more about this busy beaver at Turing machine examples):
Q = {A, B, C, HALT} (states);
Γ = {0, 1} (tape alphabet symbols);
b = 0 (blank symbol);
Σ = {1} (input symbols);
q0 = A (initial state);
F = {HALT} (final states);
δ = see state-table below (transition function).
Initially all tape cells are marked with 0.
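A minimal sketch of this 7-tuple in executable form is given below, assuming the conventional transition table for the 3-state, 2-symbol busy beaver; the dictionary-based tape and the run helper are illustrative implementation choices rather than part of the formal definition.

```python
# Sketch of the 3-state, 2-symbol busy beaver. The tape is a dict from
# cell index to symbol so it can grow in both directions, standing in
# for the unbounded tape of the formal definition.

from collections import defaultdict

BLANK = 0
# delta: (state, scanned symbol) -> (symbol to write, head move, next state)
DELTA = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "C"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "B"), ("C", 1): (1, +1, "HALT"),
}

def run(delta, start="A", final=frozenset({"HALT"})):
    tape = defaultdict(lambda: BLANK)   # initially every cell holds the blank 0
    head, state, steps = 0, start, 0
    while state not in final:
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1
    return tape, steps

tape, steps = run(DELTA)
# This machine halts leaving six 1s on the tape.
print(sum(1 for s in tape.values() if s == 1), "ones after", steps, "steps")
```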
Additional details required to visualize or implement Turing machines
In the words of van Emde Boas (1990), p. 6: "The set-theoretical object [his formal seven-tuple description similar to the above] provides only partial information on how the machine will behave and what its computations will look like."
For instance,
There will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
The shift left and shift right operations may shift the tape head across the tape, but when actually building a Turing machine it is more practical to make the tape slide back and forth under the head instead.
The tape can be finite, and automatically extended with blanks as needed (which is closest to the mathematical definition), but it is more common to think of it as stretching infinitely at one or both ends and being pre-filled with blanks except on the explicitly given finite fragment the tape head is on. (This is, of course, not implementable in practice.) The tape cannot be fixed in length, since that would not correspond to the given definition and would seriously limit the range of computations the machine can perform to those of a linear bounded automaton if the tape was proportional to the input size, or finite-state machine if it was strictly fixed-length.
Alternative definitions
Definitions in literature sometimes differ slightly, to make arguments or proofs easier or clearer, but this is always done in such a way that the resulting machine has the same computational power. For example, the set {L, R} could be changed to {L, R, N}, where N ("None" or "No-operation") would allow the machine to stay on the same tape cell instead of moving left or right. This would not increase the machine's computational power.
The most common convention represents each "Turing instruction" in a "Turing table" by one of nine 5-tuples, per the convention of Turing/Davis (Turing (1936) in The Undecidable, p. 126-127 and Davis (2000) p. 152):
(definition 1): (qi, Sj, Sk/E/N, L/R/N, qm)
( current state qi , symbol scanned Sj , print symbol Sk/erase E/none N , move_tape_one_square left L/right R/none N , new state qm )
Other authors (Minsky (1967) p. 119, Hopcroft and Ullman (1979) p. 158, Stone (1972) p. 9) adopt a different convention, with new state qm listed immediately after the scanned symbol Sj:
(definition 2): (qi, Sj, qm, Sk/E/N, L/R/N)
( current state qi , symbol scanned Sj , new state qm , print symbol Sk/erase E/none N , move_tape_one_square left L/right R/none N )
For the remainder of this article "definition 1" (the Turing/Davis convention) will be used.
In the following table, Turing's original model allowed only the first three lines that he called N1, N2, N3 (cf. Turing in The Undecidable, p. 126). He allowed for erasure of the "scanned square" by naming a 0th symbol S0 = "erase" or "blank", etc. However, he did not allow for non-printing, so every instruction-line includes "print symbol Sk" or "erase" (cf. footnote 12 in Post (1947), The Undecidable, p. 300). The abbreviations are Turing's (The Undecidable, p. 119). Subsequent to Turing's original paper in 1936–1937, machine-models have allowed all nine possible types of five-tuples:
Any Turing table (list of instructions) can be constructed from the above nine 5-tuples. For technical reasons, the three non-printing or "N" instructions (4, 5, 6) can usually be dispensed with. For examples see Turing machine examples.
Less frequently the use of 4-tuples are encountered: these represent a further atomization of the Turing instructions (cf. Post (1947), Boolos & Jeffrey (1974, 1999), Davis-Sigal-Weyuker (1994)); also see more at Post–Turing machine.
The "state"
The word "state" used in context of Turing machines can be a source of confusion, as it can mean two things. Most commentators after Turing have used "state" to mean the name/designator of the current instruction to be performed—i.e. the contents of the state register. But Turing (1936) made a strong distinction between a record of what he called the machine's "m-configuration", and the machine's (or person's) "state of progress" through the computation—the current state of the total system. What Turing called "the state formula" includes both the current instruction and all the symbols on the tape:
Earlier in his paper Turing carried this even further: he gives an example where he placed a symbol of the current "m-configuration"—the instruction's label—beneath the scanned square, together with all the symbols on the tape (The Undecidable, p. 121); this he calls "the complete configuration" (The Undecidable, p. 118). To print the "complete configuration" on one line, he places the state-label/m-configuration to the left of the scanned symbol.
A variant of this is seen in Kleene (1952) where Kleene shows how to write the Gödel number of a machine's "situation": he places the "m-configuration" symbol q4 over the scanned square in roughly the center of the 6 non-blank squares on the tape (see the Turing-tape figure in this article) and puts it to the right of the scanned square. But Kleene refers to "q4" itself as "the machine state" (Kleene, p. 374-375). Hopcroft and Ullman call this composite the "instantaneous description" and follow the Turing convention of putting the "current state" (instruction-label, m-configuration) to the left of the scanned symbol (p. 149), that is, the instantaneous description is the composite of non-blank symbols to the left, state of the machine, the current symbol scanned by the head, and the non-blank symbols to the right.
Example: total state of 3-state 2-symbol busy beaver after 3 "moves" (taken from example "run" in the figure below):
1A1
This means: after three moves the tape has ... 000110000 ... on it, the head is scanning the right-most 1, and the state is A. Blanks (in this case represented by "0"s) can be part of the total state as shown here: B01; the tape has a single 1 on it, but the head is scanning the 0 ("blank") to its left and the state is B.
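A small helper, sketched below, formats such a "complete configuration" (Hopcroft–Ullman "instantaneous description") by writing the state label immediately to the left of the scanned symbol; the dictionary tape representation is an illustrative assumption, and the two sample calls reproduce the configurations 1A1 and B01 quoted above.

```python
# Sketch: format a Turing "complete configuration" -- non-blank symbols
# left of the head, then the state label, then the scanned symbol and
# the symbols to its right.

def complete_configuration(tape, head, state, blank="0"):
    # tape: dict mapping cell index -> symbol string, e.g. {0: "1", 1: "1"}
    used = [i for i, s in tape.items() if s != blank]
    lo = min(used + [head])
    hi = max(used + [head])
    left = "".join(tape.get(i, blank) for i in range(lo, head))
    rest = "".join(tape.get(i, blank) for i in range(head, hi + 1))
    return left + state + rest

print(complete_configuration({0: "1", 1: "1"}, head=1, state="A"))  # -> 1A1
print(complete_configuration({1: "1"}, head=0, state="B"))          # -> B01
```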
"State" in the context of Turing machines should be clarified as to which is being described: the current instruction, or the list of symbols on the tape together with the current instruction, or the list of symbols on the tape together with the current instruction placed to the left of the scanned symbol or to the right of the scanned symbol.
Turing's biographer Andrew Hodges (1983: 107) has noted and discussed this confusion.
"State" diagrams
To the right: the above table as expressed as a "state transition" diagram.
Usually large tables are better left as tables (Booth, p. 74). They are more readily simulated by computer in tabular form (Booth, p. 74). However, certain concepts—e.g. machines with "reset" states and machines with repeating patterns (cf. Hill and Peterson p. 244ff)—can be more readily seen when viewed as a drawing.
Whether a drawing represents an improvement on its table must be decided by the reader for the particular context.
The reader should again be cautioned that such diagrams represent a snapshot of their table frozen in time, not the course ("trajectory") of a computation through time and space. While every time the busy beaver machine "runs" it will always follow the same state-trajectory, this is not true for the "copy" machine that can be provided with variable input "parameters".
The diagram "progress of the computation" shows the three-state busy beaver's "state" (instruction) progress through its computation from start to finish. On the far right is the Turing "complete configuration" (Kleene "situation", Hopcroft–Ullman "instantaneous description") at each step. If the machine were to be stopped and cleared to blank both the "state register" and entire tape, these "configurations" could be used to rekindle a computation anywhere in its progress (cf. Turing (1936) The Undecidable, pp. 139–140).
Equivalent models
Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power (Hopcroft and Ullman p. 159, cf. Minsky (1967)). They might compute faster, perhaps, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (The Church–Turing thesis hypothesizes this to be true for any kind of machine: that anything that can be "computed" can be computed by some Turing machine.)
A Turing machine is equivalent to a single-stack pushdown automaton (PDA) that has been made more flexible and concise by relaxing the last-in-first-out (LIFO) requirement of its stack. In addition, a Turing machine is also equivalent to a two-stack PDA with standard LIFO semantics, by using one stack to model the tape left of the head and the other stack for the tape to the right.
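The two-stack construction mentioned above can be sketched directly: one stack holds the cells to the left of the head and the other holds the scanned cell together with everything to its right, so moving the head is just popping from one stack and pushing onto the other. The class below is an illustrative sketch of that tape encoding, not a full PDA formalization.

```python
# Sketch of a Turing-machine tape modelled by two stacks, as in the
# two-stack PDA equivalence: `left` holds cells to the left of the head
# (top = nearest cell); `right` holds the scanned cell (on top) and the
# cells to its right.

class TwoStackTape:
    def __init__(self, blank="0"):
        self.blank = blank
        self.left = []            # cells to the left of the head
        self.right = [blank]      # scanned cell on top, then cells to the right

    def read(self):
        return self.right[-1]

    def write(self, symbol):
        self.right[-1] = symbol

    def move_right(self):
        self.left.append(self.right.pop())   # scanned cell joins the left stack
        if not self.right:                   # extend the tape with a blank
            self.right.append(self.blank)

    def move_left(self):
        self.right.append(self.left.pop() if self.left else self.blank)

tape = TwoStackTape()
tape.write("1"); tape.move_right(); tape.write("1"); tape.move_left()
print(tape.read())  # -> 1 (the head is back over the first cell written)
```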
At the other extreme, some very simple models turn out to be Turing-equivalent, i.e. to have the same computational power as the Turing machine model.
Common equivalent models are the multi-tape Turing machine, multi-track Turing machine, machines with input and output, and the non-deterministic Turing machine (NDTM) as opposed to the deterministic Turing machine (DTM) for which the action table has at most one entry for each combination of symbol and state.
Read-only, right-moving Turing machines are equivalent to DFAs (as well as NFAs by conversion using the NDFA to DFA conversion algorithm).
For practical and didactical intentions the equivalent register machine can be used as a usual assembly programming language.
A relevant question is whether or not the computation model represented by concrete programming languages is Turing equivalent. While the computation of a real computer is based on finite states and is thus not capable of simulating a Turing machine, programming languages themselves do not necessarily have this limitation. Kirner et al. (2009) have shown that among the general-purpose programming languages some are Turing complete while others are not. For example, ANSI C is not Turing-equivalent, as all instantiations of ANSI C (different instantiations are possible as the standard deliberately leaves certain behaviour undefined for legacy reasons) imply a finite-space memory. This is because the size of memory reference data types, called pointers, is accessible inside the language. However, other programming languages like Pascal do not have this feature, which allows them to be Turing complete in principle.
It is just Turing complete in principle, as memory allocation in a programming language is allowed to fail, which means the programming language can be Turing complete when ignoring failed memory allocations, but the compiled programs executable on a real computer cannot.
Choice c-machines, oracle o-machines
Early in his paper (1936) Turing makes a distinction between an "automatic machine"—its "motion ... completely determined by the configuration" and a "choice machine":
Turing (1936) does not elaborate further except in a footnote in which he describes how to use an a-machine to "find all the provable formulae of the [Hilbert] calculus" rather than use a choice machine. He "suppose[s] that the choices are always between two possibilities 0 and 1. Each proof will then be determined by a sequence of choices i1, i2, ..., in (i1 = 0 or 1, i2 = 0 or 1, ..., in = 0 or 1), and hence the number 2^n + i1·2^(n−1) + i2·2^(n−2) + ... + in completely determines the proof. The automatic machine carries out successively proof 1, proof 2, proof 3, ..." (Footnote ‡, The Undecidable, p. 138)
This is indeed the technique by which a deterministic (i.e., a-) Turing machine can be used to mimic the action of a nondeterministic Turing machine; Turing solved the matter in a footnote and appears to dismiss it from further consideration.
An oracle machine or o-machine is a Turing a-machine that pauses its computation at state "o" while, to complete its calculation, it "awaits the decision" of "the oracle"—an unspecified entity "apart from saying that it cannot be a machine" (Turing (1939), The Undecidable, p. 166–168).
Universal Turing machines
As Turing wrote in The Undecidable, p. 128 (italics added):
This finding is now taken for granted, but at the time (1936) it was considered astonishing. The model of computation that Turing called his "universal machine"—"U" for short—is considered by some (cf. Davis (2000)) to have been the fundamental theoretical breakthrough that led to the notion of the stored-program computer.
In terms of computational complexity, a multi-tape universal Turing machine need only be slower by logarithmic factor compared to the machines it simulates. This result was obtained in 1966 by F. C. Hennie and R. E. Stearns. (Arora and Barak, 2009, theorem 1.9)
Comparison with real machines
It is often believed that Turing machines, unlike simpler automata, are as powerful as real machines, and are able to execute any operation that a real program can. What is neglected in this statement is that, because a real machine can only have a finite number of configurations, it is nothing but a finite-state machine, whereas a Turing machine has an unlimited amount of storage space available for its computations.
There are a number of ways to explain why Turing machines are useful models of real computers:
Anything a real computer can compute, a Turing machine can also compute. For example: "A Turing machine can simulate any type of subroutine found in programming languages, including recursive procedures and any of the known parameter-passing mechanisms" (Hopcroft and Ullman p. 157). A large enough FSA can also model any real computer, disregarding IO. Thus, a statement about the limitations of Turing machines will also apply to real computers.
The difference lies only with the ability of a Turing machine to manipulate an unbounded amount of data. However, given a finite amount of time, a Turing machine (like a real machine) can only manipulate a finite amount of data.
Like a Turing machine, a real machine can have its storage space enlarged as needed, by acquiring more disks or other storage media.
Descriptions of real machine programs using simpler abstract models are often much more complex than descriptions using Turing machines. For example, a Turing machine describing an algorithm may have a few hundred states, while the equivalent deterministic finite automaton (DFA) on a given real machine has quadrillions. This makes the DFA representation infeasible to analyze.
Turing machines describe algorithms independent of how much memory they use. There is a limit to the memory possessed by any current machine, but this limit can rise arbitrarily in time. Turing machines allow us to make statements about algorithms which will (theoretically) hold forever, regardless of advances in conventional computing machine architecture.
Turing machines simplify the statement of algorithms. Algorithms running on Turing-equivalent abstract machines are usually more general than their counterparts running on real machines, because they have arbitrary-precision data types available and never have to deal with unexpected conditions (including, but not limited to, running out of memory).
Limitations
Computational complexity theory
A limitation of Turing machines is that they do not model the strengths of a particular arrangement well. For instance, modern stored-program computers are actually instances of a more specific form of abstract machine known as the random-access stored-program machine or RASP machine model. Like the universal Turing machine, the RASP stores its "program" in "memory" external to its finite-state machine's "instructions". Unlike the universal Turing machine, the RASP has an infinite number of distinguishable, numbered but unbounded "registers"—memory "cells" that can contain any integer (cf. Elgot and Robinson (1964), Hartmanis (1971), and in particular Cook-Rechow (1973); references at random-access machine). The RASP's finite-state machine is equipped with the capability for indirect addressing (e.g., the contents of one register can be used as an address to specify another register); thus the RASP's "program" can address any register in the register-sequence. The upshot of this distinction is that there are computational optimizations that can be performed based on the memory indices, which are not possible in a general Turing machine; thus when Turing machines are used as the basis for bounding running times, a "false lower bound" can be proven on certain algorithms' running times (due to the false simplifying assumption of a Turing machine). An example of this is binary search, an algorithm that can be shown to perform more quickly when using the RASP model of computation rather than the Turing machine model.
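As an illustrative sketch of the binary-search point, each iteration below performs one random access (the a[mid] lookup); a RASP-style model charges constant time for that access, whereas a one-tape Turing machine must move its head across the tape to reach the cell, which is where the two models' running-time analyses diverge.

```python
# Binary search over a sorted list: O(log n) comparisons, each relying on
# one random access a[mid]. On a random-access model that access is O(1);
# on a one-tape Turing machine the head must physically travel to the cell.

def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([2, 3, 5, 7, 11, 13, 17], 11))  # -> 4
```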
Concurrency
Another limitation of Turing machines is that they do not model concurrency well. For example, there is a bound on the size of integer that can be computed by an always-halting nondeterministic Turing machine starting on a blank tape. (See article on unbounded nondeterminism.) By contrast, there are always-halting concurrent systems with no inputs that can compute an integer of unbounded size. (A process can be created with local storage that is initialized with a count of 0 that concurrently sends itself both a stop and a go message. When it receives a go message, it increments its count by 1 and sends itself a go message. When it receives a stop message, it stops with an unbounded number in its local storage.)
Interaction
In the early days of computing, computer use was typically limited to batch processing, i.e., non-interactive tasks, each producing output data from given input data. Computability theory, which studies computability of functions from inputs to outputs, and for which Turing machines were invented, reflects this practice.
Since the 1970s, interactive use of computers became much more common. In principle, it is possible to model this by having an external agent read from the tape and write to it at the same time as a Turing machine, but this rarely matches how interaction actually happens; therefore, when describing interactivity, alternatives such as I/O automata are usually preferred.
History
Turing machines were described in 1936 by Alan Turing.
Historical background: computational machinery
Robin Gandy (1919–1995)—a student of Alan Turing (1912–1954), and his lifelong friend—traces the lineage of the notion of "calculating machine" back to Charles Babbage (circa 1834) and actually proposes "Babbage's Thesis":
Gandy's analysis of Babbage's analytical engine describes the following five operations (cf. p. 52–53):
The arithmetic functions +, −, ×, where − indicates "proper" subtraction, defined so that x − y = 0 if y ≥ x.
Any sequence of operations is an operation.
Iteration of an operation (repeating n times an operation P).
Conditional iteration (repeating n times an operation P conditional on the "success" of test T).
Conditional transfer (i.e., conditional "goto").
Gandy states that "the functions which can be calculated by (1), (2), and (4) are precisely those which are Turing computable." (p. 53). He cites other proposals for "universal calculating machines" including those of Percy Ludgate (1909), Leonardo Torres y Quevedo (1914), Maurice d'Ocagne (1922), Louis Couffignal (1933), Vannevar Bush (1936), Howard Aiken (1937). However:
The Entscheidungsproblem (the "decision problem"): Hilbert's tenth question of 1900
With regard to Hilbert's problems posed by the famous mathematician David Hilbert in 1900, an aspect of problem #10 had been floating about for almost 30 years before it was framed precisely. Hilbert's original expression for No. 10 is as follows:
By 1922, this notion of "Entscheidungsproblem" had developed a bit, and H. Behmann stated that
By the 1928 international congress of mathematicians, Hilbert "made his questions quite precise. First, was mathematics complete ... Second, was mathematics consistent ... And thirdly, was mathematics decidable?" (Hodges p. 91, Hawking p. 1121). The first two questions were answered in 1930 by Kurt Gödel at the very same meeting where Hilbert delivered his retirement speech (much to the chagrin of Hilbert); the third—the Entscheidungsproblem—had to wait until the mid-1930s.
The problem was that an answer first required a precise definition of "definite general applicable prescription", which Princeton professor Alonzo Church would come to call "effective calculability", and in 1928 no such definition existed. But over the next 6–7 years Emil Post developed his definition of a worker moving from room to room writing and erasing marks per a list of instructions (Post 1936), as did Church and his two students Stephen Kleene and J. B. Rosser by use of Church's lambda-calculus and Gödel's recursion theory (1934). Church's paper (published 15 April 1936) showed that the Entscheidungsproblem was indeed "undecidable" and beat Turing to the punch by almost a year (Turing's paper submitted 28 May 1936, published January 1937). In the meantime, Emil Post submitted a brief paper in the fall of 1936, so Turing at least had priority over Post. While Church refereed Turing's paper, Turing had time to study Church's paper and add an Appendix where he sketched a proof that Church's lambda-calculus and his machines would compute the same functions.
And Post had only proposed a definition of calculability and criticized Church's "definition", but had proved nothing.
Alan Turing's a-machine
In the spring of 1935, Turing as a young Master's student at King's College, Cambridge, took on the challenge; he had been stimulated by the lectures of the logician M. H. A. Newman "and learned from them of Gödel's work and the Entscheidungsproblem ... Newman used the word 'mechanical' ... In his obituary of Turing 1955 Newman writes:
Gandy states that:
While Gandy believed that Newman's statement above is "misleading", this opinion is not shared by all. Turing had a lifelong interest in machines: "Alan had dreamt of inventing typewriters as a boy; [his mother] Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'" (Hodges p. 96). While at Princeton pursuing his PhD, Turing built a Boolean-logic multiplier (see below). His PhD thesis, titled "Systems of Logic Based on Ordinals", contains the following definition of "a computable function":
When Turing returned to the UK he ultimately became jointly responsible for breaking the German secret codes created by encryption machines called "The Enigma"; he also became involved in the design of the ACE (Automatic Computing Engine), "[Turing's] ACE proposal was effectively self-contained, and its roots lay not in the EDVAC [the USA's initiative], but in his own universal machine" (Hodges p. 318). Arguments still continue concerning the origin and nature of what has been named by Kleene (1952) Turing's Thesis. But what Turing did prove with his computational-machine model appears in his paper "On Computable Numbers, with an Application to the Entscheidungsproblem" (1937):
Turing's example (his second proof): If one is to ask for a general procedure to tell us: "Does this machine ever print 0", the question is "undecidable".
1937–1970: The "digital computer", the birth of "computer science"
In 1937, while at Princeton working on his PhD thesis, Turing built a digital (Boolean-logic) multiplier from scratch, making his own electromechanical relays (Hodges p. 138). "Alan's task was to embody the logical design of a Turing machine in a network of relay-operated switches ..." (Hodges p. 138). While Turing might have been just initially curious and experimenting, quite-earnest work in the same direction was going on in Germany (Konrad Zuse (1938)) and in the United States (Howard Aiken and George Stibitz (1937)); the fruits of their labors were used by both the Axis and Allied militaries in World War II (cf. Hodges p. 298–299). In the early to mid-1950s Hao Wang and Marvin Minsky reduced the Turing machine to a simpler form (a precursor to the Post–Turing machine of Martin Davis); simultaneously European researchers were reducing the new-fangled electronic computer to a computer-like theoretical object equivalent to what was now being called a "Turing machine". In the late 1950s and early 1960s, the coincidentally parallel developments of Melzak and Lambek (1961), Minsky (1961), and Shepherdson and Sturgis (1961) carried the European work further and reduced the Turing machine to a more friendly, computer-like abstract model called the counter machine; Elgot and Robinson (1964), Hartmanis (1971), and Cook and Reckhow (1973) carried this work even further with the register machine and random-access machine models—but basically all are just multi-tape Turing machines with an arithmetic-like instruction set.
1970–present: as a model of computation
Today, the counter, register and random-access machines and their sire the Turing machine continue to be the models of choice for theorists investigating questions in the theory of computation. In particular, computational complexity theory makes use of the Turing machine:
See also
Arithmetical hierarchy
Bekenstein bound, showing the impossibility of infinite-tape Turing machines of finite size and bounded energy
BlooP and FlooP
Chaitin's constant or Omega (computer science) for information relating to the halting problem
Chinese room
Conway's Game of Life, a Turing-complete cellular automaton
Digital infinity
The Emperor's New Mind
Enumerator (in theoretical computer science)
Genetix
Gödel, Escher, Bach: An Eternal Golden Braid, a famous book that discusses, among other topics, the Church–Turing thesis
Halting problem, for more references
Harvard architecture
Imperative programming
Langton's ant and Turmites, simple two-dimensional analogues of the Turing machine
List of things named after Alan Turing
Modified Harvard architecture
Quantum Turing machine
Claude Shannon, another leading thinker in information theory
Turing machine examples
Turing switch
Turing tarpit, any computing system or language that, despite being Turing complete, is generally considered useless for practical computing
Unorganized machine, for Turing's very early ideas on neural networks
Von Neumann architecture
Notes
References
Primary literature, reprints, and compilations
B. Jack Copeland ed. (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma, Clarendon Press (Oxford University Press), Oxford UK. Contains the Turing papers plus a draft letter to Emil Post re his criticism of "Turing's convention", and Donald W. Davies' Corrections to Turing's Universal Computing Machine
Martin Davis (ed.) (1965), The Undecidable, Raven Press, Hewlett, NY.
Emil Post (1936), "Finite Combinatory Processes—Formulation 1", Journal of Symbolic Logic, 1, 103–105, 1936. Reprinted in The Undecidable, pp. 289ff.
Emil Post (1947), "Recursive Unsolvability of a Problem of Thue", Journal of Symbolic Logic, vol. 12, pp. 1–11. Reprinted in The Undecidable, pp. 293ff. In the Appendix of this paper Post comments on and gives corrections to Turing's paper of 1936–1937. In particular see the footnotes 11 with corrections to the universal computing machine coding and footnote 14 with comments on Turing's first and second proofs.
Alan Turing (1936–1937), "On Computable Numbers, with an Application to the Entscheidungsproblem" (and its 1937 correction). Reprinted in many collections, e.g. in The Undecidable, pp. 115–154; available on the web in many places.
Alan Turing, 1948, "Intelligent Machinery." Reprinted in "Cybernetics: Key Papers." Ed. C.R. Evans and A.D.J. Robertson. Baltimore: University Park Press, 1968. p. 31. Reprinted in
F. C. Hennie and R. E. Stearns. Two-tape simulation of multitape Turing machines. JACM, 13(4):533–546, 1966.
Computability theory
Some parts have been significantly rewritten by Burgess. Presentation of Turing machines in context of Lambek "abacus machines" (cf. Register machine) and recursive functions, showing their equivalence.
Taylor L. Booth (1967), Sequential Machines and Automata Theory, John Wiley and Sons, Inc., New York. Graduate level engineering text; ranges over a wide variety of topics, Chapter IX Turing Machines includes some recursion theory.
. On pages 12–20 he gives examples of 5-tuple tables for Addition, The Successor Function, Subtraction (x ≥ y), Proper Subtraction (0 if x < y), The Identity Function and various identity functions, and Multiplication.
. On pages 90–103 Hennie discusses the UTM with examples and flow-charts, but no actual 'code'.
Centered around the issues of machine-interpretation of "languages", NP-completeness, etc.
Stephen Kleene (1952), Introduction to Metamathematics, North–Holland Publishing Company, Amsterdam Netherlands, 10th impression (with corrections of 6th reprint 1971). Graduate level text; most of Chapter XIII Computable functions is on Turing machine proofs of computability of recursive functions, etc.
. With reference to the role of Turing machines in the development of computation (both hardware and software) see 1.4.5 History and Bibliography pp. 225ff and 2.6 History and Bibliography pp. 456ff.
Zohar Manna, 1974, Mathematical Theory of Computation. Reprinted, Dover, 2003.
Marvin Minsky, Computation: Finite and Infinite Machines, Prentice–Hall, Inc., N.J., 1967. See Chapter 8, Section 8.2 "Unsolvability of the Halting Problem."
Chapter 2: Turing machines, pp. 19–56.
Hartley Rogers, Jr., Theory of Recursive Functions and Effective Computability, The MIT Press, Cambridge MA, paperback edition 1987, original McGraw-Hill edition 1967.
Chapter 3: The Church–Turing Thesis, pp. 125–149.
Peter van Emde Boas 1990, Machine Models and Simulations, pp. 3–66, in Jan van Leeuwen, ed., Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, The MIT Press/Elsevier, [place?], (Volume A). QA76.H279 1990.
Church's thesis
Small Turing machines
Rogozhin, Yurii, 1998, "A Universal Turing Machine with 22 States and 2 Symbols", Romanian Journal of Information Science and Technology, 1(3), 259–265, 1998. (surveys known results about small universal Turing machines)
Stephen Wolfram, 2002, A New Kind of Science, Wolfram Media.
Brunfiel, Geoff, Student snags maths prize, Nature, October 24, 2007.
Jim Giles (2007), Simplest 'universal computer' wins student $25,000, New Scientist, October 24, 2007.
Alex Smith, Universality of Wolfram’s 2, 3 Turing Machine, Submission for the Wolfram 2, 3 Turing Machine Research Prize.
Vaughan Pratt, 2007, "Simple Turing machines, Universality, Encodings, etc.", FOM email list. October 29, 2007.
Martin Davis, 2007, "Smallest universal machine", and Definition of universal Turing machine FOM email list. October 26–27, 2007.
Alasdair Urquhart, 2007 "Smallest universal machine", FOM email list. October 26, 2007.
Hector Zenil (Wolfram Research), 2007 "smallest universal machine", FOM email list. October 29, 2007.
Todd Rowland, 2007, "Confusion on FOM", Wolfram Science message board, October 30, 2007.
Olivier and Marc Raynaud, 2014, "A programmable prototype to achieve Turing machines", LIMOS Laboratory of Blaise Pascal University (Clermont-Ferrand, France).
Other
Robin Gandy, "The Confluence of Ideas in 1936", pp. 51–102 in Rolf Herken, see below.
Stephen Hawking (editor), 2005, God Created the Integers: The Mathematical Breakthroughs that Changed History, Running Press, Philadelphia. Includes Turing's 1936–1937 paper, with brief commentary and biography of Turing as written by Hawking.
Andrew Hodges, Alan Turing: The Enigma, Simon and Schuster, New York. Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
Roger Penrose, The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford University Press, Oxford and New York, 1989 (1990 corrections).
Hao Wang, "A variant to Turing's theory of computing machines", Journal of the Association for Computing Machinery (JACM) 4, 63–92 (1957).
Charles Petzold, The Annotated Turing, John Wiley & Sons, Inc.
Arora, Sanjeev; Barak, Boaz, "Complexity Theory: A Modern Approach", Cambridge University Press, 2009, section 1.4, "Machines as strings and the universal Turing machine" and 1.7, "Proof of theorem 1.9"
Kirner, Raimund; Zimmermann, Wolf; Richter, Dirk: "On Undecidability Results of Real Programming Languages", In 15. Kolloquium Programmiersprachen und Grundlagen der Programmierung (KPS'09), Maria Taferl, Austria, Oct. 2009.
External links
Turing Machine in the Stanford Encyclopedia of Philosophy
Turing Machine Causal Networks by Enrique Zeleny as part of the Wolfram Demonstrations Project.
Transmission Control Protocol
The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, and file transfer rely on TCP, which is part of the Transport Layer of the TCP/IP suite. SSL/TLS often runs on top of TCP.
TCP is connection-oriented, and a connection between client and server is established before data can be sent. The server must be listening (passive open) for connection requests from clients before a connection is established. Three-way handshake (active open), retransmission, and error detection add to reliability but lengthen latency. Applications that do not require reliable data stream service may use the User Datagram Protocol (UDP), which provides a connectionless datagram service that prioritizes time over reliability. TCP employs network congestion avoidance. However, TCP has vulnerabilities, including denial of service, connection hijacking, TCP veto, and reset attacks.
Historical origin
In May 1974, Vint Cerf and Bob Kahn described an internetworking protocol for sharing resources using packet switching among network nodes. The authors had been working with Gérard Le Lann to incorporate concepts from the French CYCLADES project into the new network. The specification of the resulting protocol, (Specification of Internet Transmission Control Program), was written by Vint Cerf, Yogen Dalal, and Carl Sunshine, and published in December 1974. It contains the first attested use of the term internet, as a shorthand for internetwork.
A central control component of this model was the Transmission Control Program that incorporated both connection-oriented links and datagram services between hosts. The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol and the Internet Protocol. This resulted in a networking model that became known informally as TCP/IP, although formally it was variously referred to as the Department of Defense (DOD) model, and ARPANET model, and eventually also as the Internet Protocol Suite.
In 2004, Vint Cerf and Bob Kahn received the Turing Award for their foundational work on TCP/IP.
Network function
The Transmission Control Protocol provides a communication service at an intermediate level between an application program and the Internet Protocol. It provides host-to-host connectivity at the transport layer of the Internet model. An application does not need to know the particular mechanisms for sending data via a link to another host, such as the required IP fragmentation to accommodate the maximum transmission unit of the transmission medium. At the transport layer, TCP handles all handshaking and transmission details and presents an abstraction of the network connection to the application typically through a network socket interface.
At the lower levels of the protocol stack, due to network congestion, traffic load balancing, or unpredictable network behaviour, IP packets may be lost, duplicated, or delivered out of order. TCP detects these problems, requests re-transmission of lost data, rearranges out-of-order data and even helps minimize network congestion to reduce the occurrence of the other problems. If the data still remains undelivered, the source is notified of this failure. Once the TCP receiver has reassembled the sequence of octets originally transmitted, it passes them to the receiving application. Thus, TCP abstracts the application's communication from the underlying networking details.
TCP is used extensively by many internet applications, including the World Wide Web (WWW), email, File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and streaming media.
TCP is optimized for accurate delivery rather than timely delivery and can incur relatively long delays (on the order of seconds) while waiting for out-of-order messages or re-transmissions of lost messages. Therefore, it is not particularly suitable for real-time applications such as voice over IP. For such applications, protocols like the Real-time Transport Protocol (RTP) operating over the User Datagram Protocol (UDP) are usually recommended instead.
TCP is a reliable stream delivery service which guarantees that all bytes received will be identical and in the same order as those sent. Since packet transfer by many networks is not reliable, TCP achieves this using a technique known as positive acknowledgement with re-transmission. This requires the receiver to respond with an acknowledgement message as it receives the data. The sender keeps a record of each packet it sends and maintains a timer from when the packet was sent. The sender re-transmits a packet if the timer expires before receiving the acknowledgement. The timer is needed in case a packet gets lost or corrupted.
While IP handles actual delivery of the data, TCP keeps track of segments, the individual units of data transmission that a message is divided into for efficient routing through the network. For example, when an HTML file is sent from a web server, the TCP software layer of that server divides the file into segments and forwards them individually to the internet layer in the network stack. The internet layer software encapsulates each TCP segment into an IP packet by adding a header that includes (among other data) the destination IP address. When the client program on the destination computer receives them, the TCP software in the transport layer re-assembles the segments and ensures they are correctly ordered and error-free as it streams the file contents to the receiving application.
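As a concrete illustration of this stream abstraction, the following Python sketch (the port number, loopback address, and buffer sizes are arbitrary choices for the example, not part of the original text) opens a listening socket, accepts one connection, and echoes the bytes back; segmentation, acknowledgement, reordering, and retransmission are all handled invisibly by the operating system's TCP implementation.

<syntaxhighlight lang="python">
import socket
import threading

# Passive open: bind and listen before any client connects.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 9000))   # 9000 is an arbitrary example port
server.listen(1)

def echo_once():
    conn, _addr = server.accept()          # returns once the handshake completes
    with conn:
        while data := conn.recv(4096):     # read from the ordered byte stream
            conn.sendall(data)             # write the same bytes back

threading.Thread(target=echo_once, daemon=True).start()

# Active open: the client sees only a reliable byte stream; segmentation,
# acknowledgement and retransmission happen inside the kernel.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
    client.connect(("127.0.0.1", 9000))
    client.sendall(b"hello, TCP")
    print(client.recv(4096))               # b'hello, TCP'
server.close()
</syntaxhighlight>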
TCP segment structure
Transmission Control Protocol accepts data from a data stream, divides it into chunks, and adds a TCP header creating a TCP segment. The TCP segment is then encapsulated into an Internet Protocol (IP) datagram, and exchanged with peers.
The term TCP packet appears in both informal and formal usage, whereas in more precise terminology segment refers to the TCP protocol data unit (PDU), datagram to the IP PDU, and frame to the data link layer PDU:
Processes transmit data by calling on the TCP and passing buffers of data as arguments. The TCP packages the data from these buffers into segments and calls on the internet module [e.g. IP] to transmit each segment to the destination TCP.
A TCP segment consists of a segment header and a data section. The segment header contains 10 mandatory fields, and an optional extension field (Options, pink background in table). The data section follows the header and is the payload data carried for the application. The length of the data section is not specified in the segment header; it can be calculated by subtracting the combined length of the segment header and IP header from the total IP datagram length specified in the IP header.
Source port (16 bits) Identifies the sending port.
Destination port (16 bits) Identifies the receiving port.
Sequence number (32 bits) Has a dual role:
If the SYN flag is set (1), then this is the initial sequence number. The sequence number of the actual first data byte and the acknowledged number in the corresponding ACK are then this sequence number plus 1.
If the SYN flag is clear (0), then this is the accumulated sequence number of the first data byte of this segment for the current session.
Acknowledgment number (32 bits) If the ACK flag is set then the value of this field is the next sequence number that the sender of the ACK is expecting. This acknowledges receipt of all prior bytes (if any). The first ACK sent by each end acknowledges the other end's initial sequence number itself, but no data.
Data offset (4 bits) Specifies the size of the TCP header in 32-bit words. The minimum size header is 5 words and the maximum is 15 words thus giving the minimum size of 20 bytes and maximum of 60 bytes, allowing for up to 40 bytes of options in the header. This field gets its name from the fact that it is also the offset from the start of the TCP segment to the actual data.
Reserved (3 bits) For future use and should be set to zero.
Flags (9 bits) Contains 9 1-bit flags (control bits) as follows:
NS (1 bit): ECN-nonce - concealment protection
CWR (1 bit): Congestion window reduced (CWR) flag is set by the sending host to indicate that it received a TCP segment with the ECE flag set and had responded in congestion control mechanism.
ECE (1 bit): ECN-Echo has a dual role, depending on the value of the SYN flag. It indicates:
If the SYN flag is set (1), that the TCP peer is ECN capable.
If the SYN flag is clear (0), that a packet with Congestion Experienced flag set (ECN=11) in the IP header was received during normal transmission. This serves as an indication of network congestion (or impending congestion) to the TCP sender.
URG (1 bit): Indicates that the Urgent pointer field is significant
ACK (1 bit): Indicates that the Acknowledgment field is significant. All packets after the initial SYN packet sent by the client should have this flag set.
PSH (1 bit): Push function. Asks to push the buffered data to the receiving application.
RST (1 bit): Reset the connection
SYN (1 bit): Synchronize sequence numbers. Only the first packet sent from each end should have this flag set. Some other flags and fields change meaning based on this flag, and some are only valid when it is set, and others when it is clear.
FIN (1 bit): Last packet from sender
Window size (16 bits) The size of the receive window, which specifies the number of window size units that the sender of this segment is currently willing to receive. (See the flow control and window scaling sections below.)
Checksum (16 bits) The 16-bit checksum field is used for error-checking of the TCP header, the payload and an IP pseudo-header. The pseudo-header consists of the source IP address, the destination IP address, the protocol number for the TCP protocol (6) and the length of the TCP headers and payload (in bytes). A sketch of this computation appears at the end of this section.
Urgent pointer (16 bits) If the URG flag is set, then this 16-bit field is an offset from the sequence number indicating the last urgent data byte.
Options (Variable 0–320 bits, in units of 32 bits) The length of this field is determined by the data offset field. Options have up to three fields: Option-Kind (1 byte), Option-Length (1 byte), Option-Data (variable). The Option-Kind field indicates the type of option and is the only field that is not optional. Depending on the Option-Kind value, the next two fields may be set. Option-Length indicates the total length of the option, and Option-Data contains data associated with the option, if applicable. For example, an Option-Kind byte of 1 indicates that this is a no operation option used only for padding, and does not have Option-Length or Option-Data fields following it. An Option-Kind byte of 0 marks the end of options, and is also only one byte. An Option-Kind byte of 2 is used to indicate the Maximum Segment Size option, and will be followed by an Option-Length byte specifying the length of the MSS field. Option-Length is the total length of the given option field, including the Option-Kind and Option-Length fields. So while the MSS value is typically expressed in two bytes, Option-Length will be 4. As an example, an MSS option field with a value of 0x05B4 is coded as (0x02 0x04 0x05B4) in the TCP options section.
Some options may only be sent when SYN is set; they are indicated below as [SYN]. Option-Kind and standard lengths given as (Option-Kind, Option-Length).
{| class="wikitable"
|-
! Option-Kind
! Option-Length
! Option-Data
! Purpose
! Notes
|-
|0
|
|
|End of options list
|
|-
|1
|
|
|No operation
|This may be used to align option fields on 32-bit boundaries for better performance.
|-
|2
|4
|SS
|Maximum segment size
|See [SYN]
|-
|3
|3
|S
|Window scale
|See for details [SYN]
|-
|4
|2
|
|Selective Acknowledgement permitted
|See for details [SYN]
|-
|5
|N (10, 18, 26, or 34)
|BBBB, EEEE, ...
|Selective ACKnowledgement (SACK)
|These first two bytes are followed by a list of 1–4 blocks being selectively acknowledged, specified as 32-bit begin/end pointers.
|-
|8
|10
|TTTT, EEEE
|Timestamp and echo of previous timestamp
|See for details
|}
The remaining Option-Kind values are historical, obsolete, experimental, not yet standardized, or unassigned. Option number assignments are maintained by the IANA.
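The kind/length/data encoding described above can be decoded with a short loop. The sketch below is illustrative only (the helper name is hypothetical); it walks the option bytes and decodes the MSS example given earlier, 0x02 0x04 0x05B4.

<syntaxhighlight lang="python">
import struct

def parse_tcp_options(data: bytes):
    """Yield (kind, value_bytes) pairs from a TCP options field."""
    i = 0
    while i < len(data):
        kind = data[i]
        if kind == 0:              # End of options list
            return
        if kind == 1:              # No-operation (padding), a single byte
            i += 1
            continue
        length = data[i + 1]       # total length, including kind and length bytes
        yield kind, data[i + 2:i + length]
        i += length

# The MSS example from the text: kind 2, length 4, value 0x05B4 (= 1460).
options = bytes([0x02, 0x04, 0x05, 0xB4])
for kind, value in parse_tcp_options(options):
    if kind == 2:
        (mss,) = struct.unpack("!H", value)
        print("MSS =", mss)        # MSS = 1460
</syntaxhighlight>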
Padding The TCP header padding is used to ensure that the TCP header ends, and data begins, on a 32-bit boundary. The padding is composed of zeros.
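The checksum computation described for the Checksum field can be sketched as follows for the IPv4 case (an illustrative sketch, not a validated implementation): the pseudo-header, the TCP header with its checksum field zeroed, and the payload are summed as 16-bit words in ones' complement arithmetic, and the result is complemented.

<syntaxhighlight lang="python">
import socket
import struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return total

def tcp_checksum_ipv4(src_ip: str, dst_ip: str, tcp_segment: bytes) -> int:
    # Pseudo-header: source IP, destination IP, zero byte, protocol (6), TCP length.
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, 6, len(tcp_segment))
    # The checksum field (bytes 16-17 of the TCP header) is treated as zero while summing.
    zeroed = tcp_segment[:16] + b"\x00\x00" + tcp_segment[18:]
    return (~ones_complement_sum16(pseudo + zeroed)) & 0xFFFF
</syntaxhighlight>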
Protocol operation
TCP protocol operations may be divided into three phases. Connection establishment is a multi-step handshake process that establishes a connection before entering the data transfer phase. After data transfer is completed, the connection termination closes the connection and releases all allocated resources.
A TCP connection is managed by an operating system through a resource that represents the local end-point for communications, the Internet socket. During the lifetime of a TCP connection, the local end-point undergoes a series of state changes:
Connection establishment
Before a client attempts to connect with a server, the server must first bind to and listen at a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may establish a connection by initiating an active open using the three-way (or 3-step) handshake:
SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment's sequence number to a random value A.
SYN-ACK: In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number i.e. A+1, and the sequence number that the server chooses for the packet is another random number, B.
ACK: Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgment value i.e. A+1, and the acknowledgment number is set to one more than the received sequence number i.e. B+1.
Steps 1 and 2 establish and acknowledge the sequence number for one direction. Steps 2 and 3 establish and acknowledge the sequence number for the other direction. Following the completion of these steps, both the client and server have received acknowledgments and a full-duplex communication is established.
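The sequence-number arithmetic of these three steps can be traced in a few lines of Python (the dictionaries and example values are purely illustrative; real initial sequence numbers are chosen unpredictably by each end).

<syntaxhighlight lang="python">
import random

client_isn = random.getrandbits(32)            # A: client's initial sequence number
server_isn = random.getrandbits(32)            # B: server's initial sequence number

# 1. SYN:      client -> server, seq = A
syn = {"flags": "SYN", "seq": client_isn}

# 2. SYN-ACK:  server -> client, seq = B, ack = A + 1
syn_ack = {"flags": "SYN-ACK", "seq": server_isn,
           "ack": (syn["seq"] + 1) % 2**32}

# 3. ACK:      client -> server, seq = A + 1, ack = B + 1
ack = {"flags": "ACK",
       "seq": (client_isn + 1) % 2**32,
       "ack": (syn_ack["seq"] + 1) % 2**32}

assert syn_ack["ack"] == (client_isn + 1) % 2**32
assert ack["ack"] == (server_isn + 1) % 2**32   # both directions now synchronized
</syntaxhighlight>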
Connection termination
The connection termination phase uses a four-way handshake, with each side of the connection terminating independently. When an endpoint wishes to stop its half of the connection, it transmits a FIN packet, which the other end acknowledges with an ACK. Therefore, a typical tear-down requires a pair of FIN and ACK segments from each TCP endpoint. After the side that sent the first FIN has responded with the final ACK, it waits for a timeout before finally closing the connection, during which time the local port is unavailable for new connections; this prevents possible confusion that can occur if delayed packets associated with a previous connection are delivered during a subsequent connection.
It is also possible to terminate the connection by a 3-way handshake, when host A sends a FIN and host B replies with a FIN & ACK (combining two steps into one) and host A replies with an ACK.
Some operating systems, such as Linux and HP-UX, implement a half-duplex close sequence. If the host actively closes a connection while still having unread incoming data available, the host sends the signal RST (losing any received data) instead of FIN. This assures the TCP application that there was data loss.
A connection can be in a half-open state, in which case one side has terminated the connection, but the other has not. The side that has terminated can no longer send any data into the connection, but the other side can. The terminating side should continue reading the data until the other side terminates as well.
Resource usage
Most implementations allocate an entry in a table that maps a session to a running operating system process. Because TCP packets do not include a session identifier, both endpoints identify the session using the client's address and port. Whenever a packet is received, the TCP implementation must perform a lookup on this table to find the destination process. Each entry in the table is known as a Transmission Control Block or TCB. It contains information about the endpoints (IP and port), status of the connection, running data about the packets that are being exchanged and buffers for sending and receiving data.
The number of sessions in the server side is limited only by memory and can grow as new connections arrive, but the client must allocate an ephemeral port before sending the first SYN to the server. This port remains allocated during the whole conversation and effectively limits the number of outgoing connections from each of the client's IP addresses. If an application fails to properly close unrequired connections, a client can run out of resources and become unable to establish new TCP connections, even from other applications.
Both endpoints must also allocate space for unacknowledged packets and received (but unread) data.
Data transfer
The Transmission Control Protocol differs from the User Datagram Protocol in several key features:
Ordered data transfer: the destination host rearranges segments according to a sequence number
Retransmission of lost packets: any cumulative stream not acknowledged is retransmitted
Error-free data transfer: corrupted packets are treated as lost and are retransmitted
Flow control: limits the rate a sender transfers data to guarantee reliable delivery. The receiver continually hints the sender on how much data can be received. When the receiving host's buffer fills, the next acknowledgment suspends the transfer and allows the data in the buffer to be processed.
Congestion control: lost packets (presumed due to congestion) trigger a reduction in data delivery rate
Reliable transmission
TCP uses a sequence number to identify each byte of data. The sequence number identifies the order of the bytes sent from each computer so that the data can be reconstructed in order, regardless of any out-of-order delivery that may occur. The sequence number of the first byte is chosen by the transmitter for the first packet, which is flagged SYN. This number can be arbitrary, and should, in fact, be unpredictable to defend against TCP sequence prediction attacks.
Acknowledgements (ACKs) are sent with a sequence number by the receiver of data to tell the sender that data has been received to the specified byte. ACKs do not imply that the data has been delivered to the application; they merely signify that it is now the receiver's responsibility to deliver the data.
Reliability is achieved by the sender detecting lost data and retransmitting it. TCP uses two primary techniques to identify loss: retransmission timeout (RTO) and duplicate cumulative acknowledgements (DupAcks).
Dupack-based retransmission
If a single segment (say segment number 100) in a stream is lost, then the receiver cannot acknowledge packets above that segment number (100) because it uses cumulative ACKs. Hence the receiver acknowledges packet 99 again on the receipt of another data packet. This duplicate acknowledgement is used as a signal for packet loss. That is, if the sender receives three duplicate acknowledgements, it retransmits the first unacknowledged segment. A threshold of three is used because the network may reorder segments, causing duplicate acknowledgements. This threshold has been demonstrated to avoid spurious retransmissions due to reordering. Some TCP implementations use selective acknowledgements (SACKs) to provide explicit feedback about the segments that have been received. This greatly improves TCP's ability to retransmit the right segments.
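A sender-side sketch of this rule (the helper names and the callback are hypothetical): the sender counts acknowledgements that repeat the same cumulative ACK value and retransmits the first unacknowledged segment when the third duplicate arrives.

<syntaxhighlight lang="python">
DUPACK_THRESHOLD = 3

def on_ack_received(ack_no, state, retransmit):
    """state holds 'last_ack' and 'dup_count'; retransmit(seq) resends one segment."""
    if ack_no == state["last_ack"]:
        state["dup_count"] += 1
        if state["dup_count"] == DUPACK_THRESHOLD:
            retransmit(ack_no)         # resend the segment the receiver is still waiting for
    else:                              # new data acknowledged; reset the counter
        state["last_ack"] = ack_no
        state["dup_count"] = 0

state = {"last_ack": None, "dup_count": 0}
for ack in (1000, 1000, 1000, 1000):   # the original ACK plus three duplicates
    on_ack_received(ack, state,
                    retransmit=lambda seq: print("fast retransmit from byte", seq))
</syntaxhighlight>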
Timeout-based retransmission
When a sender transmits a segment, it initializes a timer with a conservative estimate of the arrival time of the acknowledgement. The segment is retransmitted if the timer expires, with a new timeout threshold of twice the previous value, resulting in exponential backoff behavior. Typically, the initial timer value is smoothed RTT + max(G, 4 × RTT variation), where G is the clock granularity. This guards against excessive transmission traffic due to faulty or malicious actors, such as man-in-the-middle denial of service attackers.
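The doubling behaviour can be sketched as follows; the two callables that actually send a segment and wait for its acknowledgement are assumptions of the example and are abstracted away.

<syntaxhighlight lang="python">
def retransmit_with_backoff(send_segment, wait_for_ack, initial_rto, max_tries=6):
    """Resend a segment with an exponentially growing timeout until it is ACKed."""
    rto = initial_rto
    for _attempt in range(max_tries):
        send_segment()
        if wait_for_ack(timeout=rto):   # acknowledgement arrived in time
            return True
        rto *= 2                        # exponential backoff after each timeout
    return False                        # give up; the connection may be reported broken

# Toy usage: an ACK never arrives, so the segment is sent three times.
ok = retransmit_with_backoff(send_segment=lambda: print("send segment"),
                             wait_for_ack=lambda timeout: False,
                             initial_rto=1.0, max_tries=3)
print(ok)  # False
</syntaxhighlight>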
Error detection
Sequence numbers allow receivers to discard duplicate packets and properly sequence out-of-order packets. Acknowledgments allow senders to determine when to retransmit lost packets.
To assure correctness a checksum field is included; see the checksum description in the TCP segment structure section for details. The TCP checksum is a weak check by modern standards and is normally paired with a CRC integrity check at layer 2, below both TCP and IP, such as is used in PPP or the Ethernet frame. However, introduction of errors in packets between CRC-protected hops is common and the 16-bit TCP checksum catches most of these.
Flow control
TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast for the TCP receiver to receive and process it reliably. Having a mechanism for flow control is essential in an environment where machines of diverse network speeds communicate. For example, if a PC sends data to a smartphone that is slowly processing received data, the smartphone must be able to regulate the data flow so as not to be overwhelmed.
TCP uses a sliding window flow control protocol. In each TCP segment, the receiver specifies in the receive window field the amount of additionally received data (in bytes) that it is willing to buffer for the connection. The sending host can send only up to that amount of data before it must wait for an acknowledgement and receive window update from the receiving host.
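A minimal sketch of this rule, assuming the sender tracks the highest byte sent and the highest byte acknowledged (the helper name and values are hypothetical): the amount of unacknowledged data in flight may never exceed the most recently advertised receive window.

<syntaxhighlight lang="python">
def usable_window(advertised_window, last_byte_sent, last_byte_acked):
    """Bytes the sender may still transmit before it must wait for a new ACK."""
    in_flight = last_byte_sent - last_byte_acked
    return max(0, advertised_window - in_flight)

# The receiver advertised 65,535 bytes; 50,000 bytes are in flight and unacknowledged.
print(usable_window(65535, last_byte_sent=150000, last_byte_acked=100000))  # 15535
</syntaxhighlight>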
When a receiver advertises a window size of 0, the sender stops sending data and starts the persist timer. The persist timer is used to protect TCP from a deadlock situation that could arise if a subsequent window size update from the receiver is lost, and the sender cannot send more data until receiving a new window size update from the receiver. When the persist timer expires, the TCP sender attempts recovery by sending a small packet so that the receiver responds by sending another acknowledgement containing the new window size.
If a receiver is processing incoming data in small increments, it may repeatedly advertise a small receive window. This is referred to as the silly window syndrome, since it is inefficient to send only a few bytes of data in a TCP segment, given the relatively large overhead of the TCP header.
Congestion control
The final main aspect of TCP is congestion control. TCP uses a number of mechanisms to achieve high performance and avoid congestion collapse, where network performance can fall by several orders of magnitude. These mechanisms control the rate of data entering the network, keeping the data flow below a rate that would trigger collapse. They also yield an approximately max-min fair allocation between flows.
Acknowledgments for data sent, or lack of acknowledgments, are used by senders to infer network conditions between the TCP sender and receiver. Coupled with timers, TCP senders and receivers can alter the behavior of the flow of data. This is more generally referred to as congestion control and/or network congestion avoidance.
Modern implementations of TCP contain four intertwined algorithms: slow-start, congestion avoidance, fast retransmit, and fast recovery (RFC 5681).
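The interaction of slow start and congestion avoidance can be sketched in segment-sized units. This is a simplified model in the spirit of RFC 5681, not a complete implementation: fast retransmit and fast recovery are omitted, and every loss is treated like a timeout.

<syntaxhighlight lang="python">
def next_cwnd(cwnd, ssthresh, loss_detected):
    """Return updated (cwnd, ssthresh), both measured in segments."""
    if loss_detected:
        ssthresh = max(cwnd // 2, 2)     # halve the threshold on loss
        return 1, ssthresh               # restart from slow start (timeout case)
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh        # slow start: roughly doubles each RTT
    return cwnd + 1, ssthresh            # congestion avoidance: additive increase

cwnd, ssthresh = 1, 16
for rtt in range(8):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss_detected=(rtt == 5))
    print(f"RTT {rtt}: cwnd={cwnd}, ssthresh={ssthresh}")
</syntaxhighlight>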
In addition, senders employ a retransmission timeout (RTO) that is based on the estimated round-trip time (or RTT) between the sender and receiver, as well as the variance in this round trip time. The behavior of this timer is specified in RFC 6298. There are subtleties in the estimation of RTT. For example, senders must be careful when calculating RTT samples for retransmitted packets; typically they use Karn's Algorithm or TCP timestamps (see RFC 1323). These individual RTT samples are then averaged over time to create a Smoothed Round Trip Time (SRTT) using Jacobson's algorithm. This SRTT value is what is finally used as the round-trip time estimate.
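A sketch of this estimator using the constants from RFC 6298 (alpha = 1/8, beta = 1/4, K = 4); the clock-granularity term is omitted and the recommended one-second floor is applied, so this is a simplification rather than a complete implementation.

<syntaxhighlight lang="python">
class RttEstimator:
    ALPHA, BETA, K = 1 / 8, 1 / 4, 4

    def __init__(self):
        self.srtt = None        # smoothed round-trip time
        self.rttvar = None      # round-trip time variation

    def update(self, sample):
        """Feed one RTT measurement (in seconds) and return the new RTO."""
        if self.srtt is None:   # first measurement
            self.srtt = sample
            self.rttvar = sample / 2
        else:                   # RTTVAR is updated before SRTT, using the old SRTT
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - sample)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * sample
        return max(1.0, self.srtt + self.K * self.rttvar)  # RFC 6298 recommends a 1 s floor

est = RttEstimator()
for rtt_sample in (0.100, 0.120, 0.095, 0.200):
    print(round(est.update(rtt_sample), 3))
</syntaxhighlight>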
Enhancing TCP to reliably handle loss, minimize errors, manage congestion and go fast in very high-speed environments is an ongoing area of research and standards development. As a result, there are a number of TCP congestion avoidance algorithm variations.
Maximum segment size
The maximum segment size (MSS) is the largest amount of data, specified in bytes, that TCP is willing to receive in a single segment. For best performance, the MSS should be set small enough to avoid IP fragmentation, which can lead to packet loss and excessive retransmissions. To try to accomplish this, typically the MSS is announced by each side using the MSS option when the TCP connection is established, in which case it is derived from the maximum transmission unit (MTU) size of the data link layer of the networks to which the sender and receiver are directly attached. Furthermore, TCP senders can use path MTU discovery to infer the minimum MTU along the network path between the sender and receiver, and use this to dynamically adjust the MSS to avoid IP fragmentation within the network.
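For example, assuming the minimum header sizes (a 20-byte IPv4 header and a 20-byte TCP header with no options), the announced MSS is simply the link MTU minus 40 bytes; the helper below is a small illustration under that assumption.

<syntaxhighlight lang="python">
def mss_for_mtu(mtu, ip_header=20, tcp_header=20):
    """MSS that avoids IP fragmentation, assuming no IP or TCP options."""
    return mtu - ip_header - tcp_header

print(mss_for_mtu(1500))   # 1460 for standard Ethernet
print(mss_for_mtu(9000))   # 8960 for jumbo frames
</syntaxhighlight>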
MSS announcement is also often called "MSS negotiation". Strictly speaking, the MSS is not "negotiated" between the originator and the receiver, because that would imply that both originator and receiver will negotiate and agree upon a single, unified MSS that applies to all communication in both directions of the connection. In fact, two completely independent values of MSS are permitted for the two directions of data flow in a TCP connection. This situation may arise, for example, if one of the devices participating in a connection has an extremely limited amount of memory reserved (perhaps even smaller than the overall discovered Path MTU) for processing incoming TCP segments.
Selective acknowledgments
Relying purely on the cumulative acknowledgment scheme employed by the original TCP protocol can lead to inefficiencies when packets are lost. For example, suppose bytes with sequence number 1,000 to 10,999 are sent in 10 different TCP segments of equal size, and the second segment (sequence numbers 2,000 to 2,999) is lost during transmission. In a pure cumulative acknowledgment protocol, the receiver can only send a cumulative ACK value of 2,000 (the sequence number immediately following the last sequence number of the received data) and cannot say that it received bytes 3,000 to 10,999 successfully. Thus the sender may then have to resend all data starting with sequence number 2,000.
To alleviate this issue TCP employs the selective acknowledgment (SACK) option, defined in 1996 in RFC 2018, which allows the receiver to acknowledge discontinuous blocks of packets which were received correctly, in addition to the sequence number immediately following the last sequence number of the last contiguous byte received successively, as in the basic TCP acknowledgment. The acknowledgement can specify a number of SACK blocks, where each SACK block is conveyed by the Left Edge of Block (the first sequence number of the block) and the Right Edge of Block (the sequence number immediately following the last sequence number of the block), with a Block being a contiguous range that the receiver correctly received. In the example above, the receiver would send an ACK segment with a cumulative ACK value of 2,000 and a SACK option header with sequence numbers 3,000 and 11,000. The sender would accordingly retransmit only the second segment with sequence numbers 2,000 to 2,999.
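The bookkeeping in the example above can be sketched with a hypothetical helper: given the cumulative ACK and the SACK blocks, the sender retransmits only the ranges that fall in the gaps.

<syntaxhighlight lang="python">
def ranges_to_retransmit(cumulative_ack, sack_blocks, highest_sent):
    """Return byte ranges [start, end) that the receiver has not acknowledged."""
    missing = []
    next_needed = cumulative_ack
    for left, right in sorted(sack_blocks):        # each block covers [left, right)
        if left > next_needed:
            missing.append((next_needed, left))    # a gap before this block
        next_needed = max(next_needed, right)
    if next_needed < highest_sent:
        missing.append((next_needed, highest_sent))
    return missing

# The example from the text: cumulative ACK 2,000, SACK block 3,000-11,000.
print(ranges_to_retransmit(2000, [(3000, 11000)], highest_sent=11000))
# [(2000, 3000)]  -> only the lost second segment needs to be resent
</syntaxhighlight>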
A TCP sender can interpret an out-of-order segment delivery as a lost segment. If it does so, the TCP sender will retransmit the segment previous to the out-of-order packet and slow its data delivery rate for that connection. The duplicate-SACK option, an extension to the SACK option that was defined in May 2000 in RFC 2883, solves this problem. The TCP receiver sends a D-SACK to indicate that no segments were lost, and the TCP sender can then reinstate the higher transmission rate.
The SACK option is not mandatory, and comes into operation only if both parties support it. This is negotiated when a connection is established. SACK uses a TCP header option (see TCP segment structure for details). The use of SACK has become widespread—all popular TCP stacks support it. Selective acknowledgment is also used in Stream Control Transmission Protocol (SCTP).
Window scaling
For more efficient use of high-bandwidth networks, a larger TCP window size may be used. The TCP window size field controls the flow of data and, being 16 bits wide, its value is limited to 65,535 bytes.
Since the size field cannot be expanded, a scaling factor is used. The TCP window scale option, as defined in RFC 1323, is an option used to increase the maximum window size from 65,535 bytes to 1 gigabyte. Scaling up to larger window sizes is a part of what is necessary for TCP tuning.
The window scale option is used only during the TCP 3-way handshake. The window scale value represents the number of bits to left-shift the 16-bit window size field. The window scale value can be set from 0 (no shift) to 14 for each direction independently. Both sides must send the option in their SYN segments to enable window scaling in either direction.
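The effect of the shift count is easy to compute; the helper below is a small illustration, not part of any standard API.

<syntaxhighlight lang="python">
def effective_window(window_field, scale_shift):
    """Receive window in bytes after applying the negotiated scale option."""
    assert 0 <= scale_shift <= 14        # RFC 1323/7323 limit the shift to 14
    return window_field << scale_shift

print(effective_window(65535, 0))    # 65535 bytes, no scaling
print(effective_window(65535, 14))   # 1073725440 bytes, close to 1 GiB
</syntaxhighlight>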
Some routers and packet firewalls rewrite the window scaling factor during a transmission. This causes sending and receiving sides to assume different TCP window sizes. The result is non-stable traffic that may be very slow. The problem is visible on some sites behind a defective router.
TCP timestamps
TCP timestamps, defined in RFC 1323 in 1992, can help TCP determine in which order packets were sent.
TCP timestamps are not normally aligned to the system clock and start at some random value. Many operating systems will increment the timestamp for every elapsed millisecond; however the RFC only states that the ticks should be proportional.
There are two timestamp fields:
a 4-byte sender timestamp value (my timestamp)
a 4-byte echo reply timestamp value (the most recent timestamp received from you).
TCP timestamps are used in an algorithm known as Protection Against Wrapped Sequence numbers, or PAWS (see RFC 1323 for details). PAWS is used when the receive window crosses the sequence number wraparound boundary. In the case where a packet was potentially retransmitted it answers the question: "Is this sequence number in the first 4 GB or the second?" And the timestamp is used to break the tie.
Also, the Eifel detection algorithm (RFC 3522) uses TCP timestamps to determine if retransmissions are occurring because packets are lost or simply out of order.
Recent statistics show that the level of TCP timestamp adoption has stagnated at around 40%, owing to Windows Server dropping support since Windows Server 2008.
TCP timestamps are enabled by default in the Linux kernel, and disabled by default in Windows Server 2008, 2012 and 2016.
Out-of-band data
It is possible to interrupt or abort the queued stream instead of waiting for the stream to finish. This is done by specifying the data as urgent. This tells the receiving program to process it immediately, along with the rest of the urgent data. When finished, TCP informs the application and returns to processing the stream queue. For example, when TCP is used for a remote login session, the user can send a keyboard sequence that interrupts or aborts the program at the other end. These signals are most often needed when a program on the remote machine fails to operate correctly. The signals must be sent without waiting for the program to finish its current transfer.
TCP out-of-band data was not designed for the modern Internet. The urgent pointer only alters the processing on the remote host and doesn't expedite any processing on the network itself. When it gets to the remote host there are two slightly different interpretations of the protocol, which means only single bytes of OOB data are reliable. This is assuming it is reliable at all as it is one of the least commonly used protocol elements and tends to be poorly implemented.
Forcing data delivery
Normally, TCP waits for 200 ms for a full packet of data to send (Nagle's Algorithm tries to group small messages into a single packet). This wait creates small but potentially serious delays if repeated constantly during a file transfer. For example, a typical send block would be 4 KB and a typical MSS is 1460 bytes, so 2 packets go out on a 10 Mbit/s Ethernet link taking ~1.2 ms each, followed by a third carrying the remaining 1176 bytes after a 197 ms pause because TCP is waiting for a full buffer.
In the case of telnet, each user keystroke is echoed back by the server before the user can see it on the screen. This delay would become very annoying.
Setting the socket option TCP_NODELAY overrides the default 200 ms send delay. Application programs use this socket option to force output to be sent after writing a character or line of characters.
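In the Berkeley sockets API, shown here from Python, the option is set on a per-socket basis; the destination host and the request bytes are placeholders for the example.

<syntaxhighlight lang="python">
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm so small writes are sent immediately
# instead of being coalesced while earlier data is still unacknowledged.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("example.com", 80))          # placeholder destination
sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(sock.recv(1024))
sock.close()
</syntaxhighlight>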
The RFC defines the PSH push bit as "a message to the receiving TCP stack to send this data immediately up to the receiving application". There is no way to indicate or control it in user space using Berkeley sockets; it is controlled by the protocol stack only.
Vulnerabilities
TCP may be attacked in a variety of ways. The results of a thorough security assessment of TCP, along with possible mitigations for the identified issues, were published in 2009; this work is currently being pursued within the IETF.
Denial of service
By using a spoofed IP address and repeatedly sending purposely assembled SYN packets, followed by many ACK packets, attackers can cause the server to consume large amounts of resources keeping track of the bogus connections. This is known as a SYN flood attack. Proposed solutions to this problem include SYN cookies and cryptographic puzzles, though SYN cookies come with their own set of vulnerabilities. Sockstress is a similar attack that might be mitigated with system resource management. An advanced DoS attack involving the exploitation of the TCP Persist Timer was analyzed in Phrack #66. PUSH and ACK floods are other variants.
Connection hijacking
An attacker who is able to eavesdrop a TCP session and redirect packets can hijack a TCP connection. To do so, the attacker learns the sequence number from the ongoing communication and forges a false segment that looks like the next segment in the stream. Such a simple hijack can result in one packet being erroneously accepted at one end. When the receiving host acknowledges the extra segment to the other side of the connection, synchronization is lost. Hijacking might be combined with Address Resolution Protocol (ARP) or routing attacks that allow taking control of the packet flow, so as to get permanent control of the hijacked TCP connection.
Impersonating a different IP address was not difficult prior to RFC 1948, when the initial sequence number was easily guessable. That allowed an attacker to blindly send a sequence of packets that the receiver would believe to come from a different IP address, without the need to deploy ARP or routing attacks: it is enough to ensure that the legitimate host of the impersonated IP address is down, or bring it to that condition using denial-of-service attacks. This is why the initial sequence number is now chosen at random.
TCP veto
An attacker who can eavesdrop and predict the size of the next packet to be sent can cause the receiver to accept a malicious payload without disrupting the existing connection. The attacker injects a malicious packet with the sequence number and a payload size of the next expected packet. When the legitimate packet is ultimately received, it is found to have the same sequence number and length as a packet already received and is silently dropped as a normal duplicate packet—the legitimate packet is "vetoed" by the malicious packet. Unlike in connection hijacking, the connection is never desynchronized and communication continues as normal after the malicious payload is accepted. TCP veto gives the attacker less control over the communication, but makes the attack particularly resistant to detection. The large increase in network traffic from the ACK storm is avoided. The only evidence to the receiver that something is amiss is a single duplicate packet, a normal occurrence in an IP network. The sender of the vetoed packet never sees any evidence of an attack.
Another vulnerability is the TCP reset attack.
TCP ports
TCP and UDP use port numbers to identify sending and receiving application end-points on a host, often called Internet sockets. Each side of a TCP connection has an associated 16-bit unsigned port number (0-65535) reserved by the sending or receiving application. Arriving TCP packets are identified as belonging to a specific TCP connection by its sockets, that is, the combination of source host address, source port, destination host address, and destination port. This means that a server computer can provide several clients with several services simultaneously, as long as a client takes care of initiating any simultaneous connections to one destination port from different source ports.
Port numbers are categorized into three basic categories: well-known, registered, and dynamic/private. The well-known ports are assigned by the Internet Assigned Numbers Authority (IANA) and are typically used by system-level or root processes. Well-known applications running as servers and passively listening for connections typically use these ports. Some examples include: FTP (20 and 21), SSH (22), TELNET (23), SMTP (25), HTTP over SSL/TLS (443), and HTTP (80). Note that in the latest HTTP standard, HTTP/3, QUIC is used as the transport instead of TCP. Registered ports are typically used by end-user applications as ephemeral source ports when contacting servers, but they can also identify named services that have been registered by a third party. Dynamic/private ports can also be used by end-user applications, though this is less common. Dynamic/private ports do not contain any meaning outside of any particular TCP connection.
Network Address Translation (NAT), typically uses dynamic port numbers, on the ("Internet-facing") public side, to disambiguate the flow of traffic that is passing between a public network and a private subnetwork, thereby allowing many IP addresses (and their ports) on the subnet to be serviced by a single public-facing address.
Development
TCP is a complex protocol. However, while significant enhancements have been made and proposed over the years, its most basic operation has not changed significantly since its first specification RFC 675 in 1974, and the v4 specification RFC 793, published in September 1981. RFC 1122, Host Requirements for Internet Hosts, clarified a number of TCP protocol implementation requirements. A list of the 8 required specifications and over 20 strongly encouraged enhancements is available in RFC 7414. Among this list is RFC 2581, TCP Congestion Control, one of the most important TCP-related RFCs in recent years, which describes updated algorithms that avoid undue congestion. In 2001, RFC 3168 was written to describe Explicit Congestion Notification (ECN), a congestion avoidance signaling mechanism.
The original TCP congestion avoidance algorithm was known as "TCP Tahoe", but many alternative algorithms have since been proposed (including TCP Reno, TCP Vegas, FAST TCP, TCP New Reno, and TCP Hybla).
TCP Interactive (iTCP) is a research effort into TCP extensions that allows applications to subscribe to TCP events and register handler components that can launch applications for various purposes, including application-assisted congestion control.
Multipath TCP (MPTCP) is an ongoing effort within the IETF that aims at allowing a TCP connection to use multiple paths to maximize resource usage and increase redundancy. The redundancy offered by Multipath TCP in the context of wireless networks enables the simultaneous utilization of different networks, which brings higher throughput and better handover capabilities. Multipath TCP also brings performance benefits in datacenter environments. The reference implementation of Multipath TCP is being developed in the Linux kernel. Multipath TCP is used to support the Siri voice recognition application on iPhones, iPads and Macs.
tcpcrypt is an extension proposed in July 2010 to provide transport-level encryption directly in TCP itself. It is designed to work transparently and not require any configuration. Unlike TLS (SSL), tcpcrypt itself does not provide authentication, but provides simple primitives down to the application to do that. Since then, the first tcpcrypt IETF draft has been published and implementations exist for several major platforms.
TCP Fast Open is an extension to speed up the opening of successive TCP connections between two endpoints. It works by skipping the three-way handshake using a cryptographic "cookie". It is similar to an earlier proposal called T/TCP, which was not widely adopted due to security issues. TCP Fast Open was published as RFC 7413 in 2014.
Proposed in May 2013, Proportional Rate Reduction (PRR) is a TCP extension developed by Google engineers. PRR ensures that the TCP window size after recovery is as close to the Slow-start threshold as possible. The algorithm is designed to improve the speed of recovery and is the default congestion control algorithm in Linux 3.2+ kernels.
Deprecated proposals
TCP Cookie Transactions (TCPCT) is an extension proposed in December 2009 to secure servers against denial-of-service attacks. Unlike SYN cookies, TCPCT does not conflict with other TCP extensions such as window scaling. TCPCT was designed due to necessities of DNSSEC, where servers have to handle large numbers of short-lived TCP connections. In 2016, TCPCT was deprecated in favor of TCP Fast Open. Status of the original RFC was changed to "historic".
TCP over wireless networks
TCP was originally designed for wired networks. Packet loss is considered to be the result of network congestion and the congestion window size is reduced dramatically as a precaution. However, wireless links are known to experience sporadic and usually temporary losses due to fading, shadowing, hand off, interference, and other radio effects, that are not strictly congestion. After the (erroneous) back-off of the congestion window size, due to wireless packet loss, there may be a congestion avoidance phase with a conservative decrease in window size. This causes the radio link to be underutilized. Extensive research on combating these harmful effects has been conducted. Suggested solutions can be categorized as end-to-end solutions, which require modifications at the client or server, link layer solutions, such as Radio Link Protocol (RLP) in cellular networks, or proxy-based solutions which require some changes in the network without modifying end nodes.
A number of alternative congestion control algorithms, such as Vegas, Westwood, Veno, and Santa Cruz, have been proposed to help solve the wireless problem.
Hardware implementations
One way to overcome the processing power requirements of TCP is to build hardware implementations of it, widely known as TCP offload engines (TOE). The main problem of TOEs is that they are hard to integrate into computing systems, requiring extensive changes in the operating system of the computer or device. One company to develop such a device was Alacritech.
Debugging
A packet sniffer, which intercepts TCP traffic on a network link, can be useful in debugging networks, network stacks, and applications that use TCP by showing the user what packets are passing through a link. Some networking stacks support the SO_DEBUG socket option, which can be enabled on the socket using setsockopt. That option dumps all the packets, TCP states, and events on that socket, which is helpful in debugging. Netstat is another utility that can be used for debugging.
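As an illustration of the SO_DEBUG option mentioned above, the minimal Python sketch below enables it on a socket. Whether the stack actually records anything is platform-dependent, and on Linux setting the option usually requires elevated privileges (CAP_NET_ADMIN), so treat this as illustrative rather than portable.

import socket

# Minimal sketch: ask the stack to record debugging information for this socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_DEBUG, 1)
    # 1 if the option was accepted by the stack
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_DEBUG))
except PermissionError:
    print("SO_DEBUG requires elevated privileges on this system")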
Alternatives
For many applications TCP is not appropriate. One problem (at least with normal implementations) is that the application cannot access the packets coming after a lost packet until the retransmitted copy of the lost packet is received, a phenomenon known as head-of-line blocking. This causes problems for real-time applications such as streaming media, real-time multiplayer games and voice over IP (VoIP) where it is generally more useful to get most of the data in a timely fashion than it is to get all of the data in order.
For historical and performance reasons, most storage area networks (SANs) use Fibre Channel Protocol (FCP) over Fibre Channel connections.
Also, for embedded systems, network booting, and servers that serve simple requests from huge numbers of clients (e.g. DNS servers) the complexity of TCP can be a problem. Finally, some tricks such as transmitting data between two hosts that are both behind NAT (using STUN or similar systems) are far simpler without a relatively complex protocol like TCP in the way.
Generally, where TCP is unsuitable, the User Datagram Protocol (UDP) is used. UDP provides application multiplexing and checksums, as TCP does, but does not handle streams or retransmission, leaving the application developer free to implement them in a way suited to the situation or to replace them with other methods such as forward error correction or interpolation.
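To illustrate that division of labour, the Python sketch below sends a datagram over UDP and retransmits it if no acknowledgment arrives. The address, port, payload format and simple stop-and-wait scheme are illustrative assumptions, not part of any standard.

import socket

# Minimal sketch: stop-and-wait reliability implemented by the application
# on top of UDP.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(0.5)                      # retransmission timeout in seconds
payload = b"0001 hello"                   # application-level sequence number + data
for attempt in range(5):                  # bounded number of retransmissions
    sock.sendto(payload, ("127.0.0.1", 9999))
    try:
        ack, _ = sock.recvfrom(1024)      # wait for the peer's acknowledgment
        if ack.startswith(b"ACK 0001"):
            break                         # delivered; stop retransmitting
    except socket.timeout:
        continue                          # lost or delayed; send again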
Stream Control Transmission Protocol (SCTP) is another protocol that provides reliable stream oriented services similar to TCP. It is newer and considerably more complex than TCP, and has not yet seen widespread deployment. However, it is especially designed to be used in situations where reliability and near-real-time considerations are important.
Venturi Transport Protocol (VTP) is a patented proprietary protocol that is designed to replace TCP transparently to overcome perceived inefficiencies related to wireless data transport.
TCP also has issues in high-bandwidth environments. The TCP congestion avoidance algorithm works very well for ad-hoc environments where the data sender is not known in advance. If the environment is predictable, a timing-based protocol such as Asynchronous Transfer Mode (ATM) can avoid TCP's retransmission overhead.
UDP-based Data Transfer Protocol (UDT) has better efficiency and fairness than TCP in networks that have high bandwidth-delay product.
Multipurpose Transaction Protocol (MTP/IP) is patented proprietary software that is designed to adaptively achieve high throughput and transaction performance in a wide variety of network conditions, particularly those where TCP is perceived to be inefficient.
Checksum computation
TCP checksum for IPv4
When TCP runs over IPv4, the method used to compute the checksum is defined in RFC 793:
The checksum field is the 16 bit one's complement of the one's complement sum of all 16-bit words in the header and text. If a segment contains an odd number of header and text octets to be checksummed, the last octet is padded on the right with zeros to form a 16-bit word for checksum purposes. The pad is not transmitted as part of the segment. While computing the checksum, the checksum field itself is replaced with zeros.
In other words, after appropriate padding, all 16-bit words are added using one's complement arithmetic. The sum is then bitwise complemented and inserted as the checksum field. A pseudo-header that mimics the IPv4 packet header used in the checksum computation is shown in the table below.
The source and destination addresses are those of the IPv4 header. The protocol value is 6 for TCP (cf. List of IP protocol numbers). The TCP length field is the length of the TCP header and data (measured in octets).
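The Python sketch below implements the computation just described: a 16-bit one's complement sum over the pseudo-header and the TCP segment, with odd-length data padded with a zero byte. The sample addresses and the all-zero placeholder segment are illustrative assumptions; in a real segment the checksum field must be zero while the checksum is being computed.

import struct

def ones_complement_checksum(data: bytes) -> int:
    # Pad odd-length data with a zero byte, as required by RFC 793.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)     # fold carries back in
    return ~total & 0xFFFF                           # one's complement of the sum

def tcp_checksum_ipv4(src_ip: bytes, dst_ip: bytes, tcp_segment: bytes) -> int:
    # Pseudo-header: source address, destination address, a zero byte,
    # the protocol number (6 for TCP), and the TCP length (header + data).
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(tcp_segment))
    return ones_complement_checksum(pseudo + tcp_segment)

# Example with illustrative addresses and an all-zero 20-byte TCP header.
src = bytes([192, 0, 2, 1])
dst = bytes([198, 51, 100, 2])
segment = bytes(20)
print(hex(tcp_checksum_ipv4(src, dst, segment)))

The same one's complement sum is used for IPv6, only with the IPv6 pseudo-header described in the next subsection.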
TCP checksum for IPv6
When TCP runs over IPv6, the method used to compute the checksum is changed, as per RFC 2460:
Any transport or other upper-layer protocol that includes the addresses from the IP header in its checksum computation must be modified for use over IPv6, to include the 128-bit IPv6 addresses instead of 32-bit IPv4 addresses.
A pseudo-header that mimics the IPv6 header for computation of the checksum is shown below.
Source address: the one in the IPv6 header
Destination address: the final destination; if the IPv6 packet doesn't contain a Routing header, TCP uses the destination address in the IPv6 header; otherwise, at the originating node, it uses the address in the last element of the Routing header, and, at the receiving node, it uses the destination address in the IPv6 header.
TCP length: the length of the TCP header and data
Next Header: the protocol value for TCP
Checksum offload
Many TCP/IP software stack implementations provide options to use hardware assistance to automatically compute the checksum in the network adapter prior to transmission onto the network or upon reception from the network for validation. This relieves the operating system of spending CPU cycles calculating the checksum, increasing overall network performance.
This feature may cause packet analyzers that are unaware or uncertain about the use of checksum offload to report invalid checksums in outbound packets that have not yet reached the network adapter. This will only occur for packets that are intercepted before being transmitted by the network adapter; all packets transmitted by the network adaptor on the wire will have valid checksums. This issue can also occur when monitoring packets being transmitted between virtual machines on the same host, where a virtual device driver may omit the checksum calculation (as an optimization), knowing that the checksum will be calculated later by the VM host kernel or its physical hardware.
RFC documents
– Specification of Internet Transmission Control Program, December 1974 Version
– TCP v4
STD 7 – Transmission Control Protocol, Protocol specification
– includes some error corrections for TCP
– TCP Extensions for High Performance [Obsoleted by RFC 7323]
– Extending TCP for Transactions—Concepts [Obsoleted by RFC 6247]
– Defending Against Sequence Number Attacks
– TCP Selective Acknowledgment Options
– TCP Congestion Control
– Moving the Undeployed TCP Extensions RFC 1072, RFC 1106, RFC 1110, RFC 1145, RFC 1146, RFC 1379, RFC 1644, and RFC 1693 to Historic Status
– Computing TCP's Retransmission Timer
– TCP Extensions for Multipath Operation with Multiple Addresses
– TCP Extensions for High Performance
– A Roadmap for TCP Specification Documents
See also
Connection-oriented communication
List of TCP and UDP port numbers (a long list of ports and services)
Micro-bursting (networking)
T/TCP variant of TCP
TCP global synchronization
TCP pacing
WTCP a proxy-based modification of TCP for wireless networks
Notes
References
Further reading
External links
Oral history interview with Robert E. Kahn
IANA Port Assignments
IANA TCP Parameters
John Kristoff's Overview of TCP (Fundamental concepts behind TCP and how it is used to transport data between two endpoints)
Checksum example
TCP tutorial
Transport layer protocols |
31062 | https://en.wikipedia.org/wiki/Telnet | Telnet | Telnet is an application protocol used on the Internet or local area network to provide a bidirectional interactive text-oriented communication facility using a virtual terminal connection. User data is interspersed in-band with Telnet control information in an 8-bit byte oriented data connection over the Transmission Control Protocol (TCP).
Telnet was developed in 1969 and standardized as Internet Engineering Task Force (IETF) Internet Standard STD 8, one of the first Internet standards. The name stands for "teletype network".
Historically, Telnet provided access to a command-line interface on a remote host. However, because of serious security concerns when using Telnet over an open network such as the Internet, its use for this purpose has waned significantly in favor of SSH.
The term telnet is also used to refer to the software that implements the client part of the protocol. Telnet client applications are available for virtually all computer platforms. Telnet is also used as a verb. To telnet means to establish a connection using the Telnet protocol, either with a command line client or with a graphical interface. For example, a common directive might be: "To change your password, telnet into the server, log in and run the passwd command." In most cases, a user would be telnetting into a Unix-like server system or a network device (such as a router).
History and standards
Telnet is a client-server protocol, based on a reliable connection-oriented transport. Typically, this protocol is used to establish a connection to Transmission Control Protocol (TCP) port number 23, where a Telnet server application (telnetd) is listening. Telnet, however, predates TCP/IP and was originally run over Network Control Program (NCP) protocols.
Even though Telnet was an ad hoc protocol with no official definition until March 5, 1973, the name actually referred to Teletype Over Network Protocol, as RFC 206 (NIC 7176) on Telnet makes clear.
Essentially, it used an 8-bit channel to exchange 7-bit ASCII data. Any byte with the high bit set was a special Telnet character. On March 5, 1973, a Telnet protocol standard was defined at UCLA with the publication of two NIC documents: Telnet Protocol Specification, NIC 15372, and Telnet Option Specifications, NIC 15373.
Many extensions were made for Telnet because of its negotiable options protocol architecture. Some of these extensions have been adopted as Internet standards, IETF documents STD 27 through STD 32. Some extensions have been widely implemented and others are proposed standards on the IETF standards track (see below).
Telnet is best understood in the context of a user with a simple terminal using the local Telnet program (known as the client program) to run a logon session on a remote computer where the user's communications needs are handled by a Telnet server program.
Security
When Telnet was initially developed in 1969, most users of networked computers were in the computer departments of academic institutions, or at large private and government research facilities. In this environment, security was not nearly as much a concern as it became after the bandwidth explosion of the 1990s. The rise in the number of people with access to the Internet, and by extension the number of people attempting to hack other people's servers, made encrypted alternatives necessary.
Experts in computer security, such as SANS Institute, recommend that the use of Telnet for remote logins should be discontinued under all normal circumstances, for the following reasons:
Telnet, by default, does not encrypt any data sent over the connection (including passwords), so it is often feasible to eavesdrop on the communications and use the password later for malicious purposes; anybody who has access to a router, switch, hub or gateway located on the network between the two hosts where Telnet is being used can intercept the packets passing by and, with a packet analyzer, obtain the login, the password and whatever else is typed.
Most implementations of Telnet have no authentication that would ensure communication is carried out between the two desired hosts and not intercepted in the middle.
Several vulnerabilities have been discovered over the years in commonly used Telnet daemons.
These security-related shortcomings have seen the usage of the Telnet protocol drop rapidly, especially on the public Internet, in favor of the Secure Shell (SSH) protocol, first released in 1995. SSH has practically replaced Telnet, and the older protocol is used these days only in rare cases to access decades-old legacy equipment that does not support more modern protocols. SSH provides much of the functionality of telnet, with the addition of strong encryption to prevent sensitive data such as passwords from being intercepted, and public key authentication, to ensure that the remote computer is actually who it claims to be. As has happened with other early Internet protocols, extensions to the Telnet protocol provide Transport Layer Security (TLS) security and Simple Authentication and Security Layer (SASL) authentication that address the above concerns. However, most Telnet implementations do not support these extensions; and there has been relatively little interest in implementing these as SSH is adequate for most purposes.
Notably, a large number of industrial and scientific devices have only Telnet available as a communication option. Some are built with only a standard RS-232 port and use a serial server hardware appliance to provide the translation between the TCP/Telnet data and the RS-232 serial data. In such cases, SSH is not an option unless the interface appliance can be configured for SSH (or is replaced with one supporting SSH).
Telnet is still used by hobbyists, especially among amateur radio operators. The Winlink protocol supports packet radio via a Telnet connection.
Telnet 5250
IBM 5250 or 3270 workstation emulation is supported via custom telnet clients, TN5250/TN3270, and IBM i systems. Clients and servers designed to pass IBM 5250 data streams over Telnet generally do support SSL encryption, as SSH does not include 5250 emulation. Under IBM i (also known as OS/400), port 992 is the default port for secured telnet.
Telnet data
All data octets except 0xff are transmitted over Telnet as is.
(0xff, or 255 in decimal, is the IAC byte (Interpret As Command) which signals that the next byte is a telnet command. The command to insert 0xff into the stream is 0xff, so 0xff must be escaped by doubling it when sending data over the telnet protocol.)
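A minimal Python sketch of this escaping rule follows, assuming the sender is writing raw application bytes into a Telnet data stream and that any IAC command sequences have already been handled separately.

IAC = 0xFF  # Interpret As Command

def telnet_escape(data: bytes) -> bytes:
    # Double every 0xff byte so the receiver does not treat it as a command.
    return data.replace(bytes([IAC]), bytes([IAC, IAC]))

def telnet_unescape(data: bytes) -> bytes:
    # Reverse the escaping on the receiving side (commands already stripped).
    return data.replace(bytes([IAC, IAC]), bytes([IAC]))

print(telnet_escape(b"\x01\xff\x02").hex())  # 01ffff02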
Telnet client applications can establish an interactive TCP session to a port other than the Telnet server port. Connections to such ports do not use IAC and all octets are sent to the server without interpretation. For example, a command line telnet client could make an HTTP request to a web server on TCP port 80 as follows:
$ telnet www.example.com 80
GET /path/to/file.html HTTP/1.1
Host: www.example.com
Connection: close
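The same exchange can be performed with an ordinary TCP socket which, like a Telnet client talking to a non-Telnet port, sends the octets without any IAC interpretation. The host name and request line below are taken from the example above; the rest is an illustrative sketch.

import socket

# Minimal sketch: the same HTTP request over a plain TCP socket.
with socket.create_connection(("www.example.com", 80)) as s:
    s.sendall(
        b"GET /path/to/file.html HTTP/1.1\r\n"
        b"Host: www.example.com\r\n"
        b"Connection: close\r\n"
        b"\r\n"                      # blank line ends the request headers
    )
    response = b""
    while chunk := s.recv(4096):     # read until the server closes the connection
        response += chunk
print(response.decode(errors="replace")[:200])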
There are other TCP terminal clients, such as netcat or socat on UNIX and PuTTY on Windows, which handle such requirements. Nevertheless, Telnet may still be used in debugging network services such as SMTP, IRC, HTTP, FTP or POP3, to issue commands to a server and examine the responses.
Another difference between Telnet and other TCP terminal clients is that Telnet is not 8-bit clean by default. 8-bit mode may be negotiated, but octets with the high bit set may be garbled until this mode is requested, as 7-bit is the default mode. The 8-bit mode (the so-called binary option) is intended to transmit binary data, not ASCII characters. The standard suggests the interpretation of codes 0000–0176 as ASCII, but does not offer any meaning for high-bit-set data octets. There was an attempt to introduce switchable character encoding support like HTTP has, but nothing is known about its actual software support.
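As a concrete illustration of negotiating the binary option, the byte sequences below use the standard Telnet command values (IAC = 255, WILL = 251, DO = 253) with Binary Transmission as option 0; how these bytes are written to and read from the connection is left out and would be an implementation detail.

# Telnet command bytes and the Binary Transmission option (option 0).
IAC, WILL, DO = 255, 251, 253
BINARY = 0

# Offer to transmit binary and ask the peer to do the same.
offer_binary   = bytes([IAC, WILL, BINARY])   # "I will use binary transmission"
request_binary = bytes([IAC, DO, BINARY])     # "Please use binary transmission"

# A peer that agrees answers WILL with DO (and DO with WILL); once both
# directions are agreed, high-bit-set octets pass through unmangled.
print(offer_binary.hex(), request_binary.hex())  # fffb00 fffd00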
Related RFCs
Internet Standards
, Telnet Protocol Specification
, Telnet Option Specifications
, Telnet Binary Transmission
, Telnet Echo Option
, Telnet Suppress Go Ahead Option
, Telnet Status Option
, Telnet Timing Mark Option
, Telnet Extended Options: List Option
Proposed Standards
, Telnet End of Record Option
, Telnet Window Size Option
, Telnet Terminal Speed Option
, Telnet Terminal-Type Option
, Telnet X Display Location Option
, Requirements for Internet Hosts - Application and Support
, Telnet Linemode Option
, Telnet Remote Flow Control Option
, Telnet Environment Option
, Telnet Authentication Option
, Telnet Authentication: Kerberos Version 5
, TELNET Authentication Using DSA
, Telnet Authentication: SRP
, Telnet Data Encryption Option
, The telnet URI Scheme
Informational/experimental
, The Q Method of Implementing TELNET Option Negotiation
, Telnet Environment Option Interoperability Issues
Other RFCs
, Telnet 3270 Regime Option
, 5250 Telnet Interface
, Telnet Com Port Control Option
, IBM's iSeries Telnet Enhancements
Telnet clients
PuTTY and the plink command-line tool are free, open-source SSH, Telnet, rlogin, and raw TCP clients for Windows, Linux, and Unix.
AbsoluteTelnet is a telnet client for Windows. It also supports SSH and SFTP.
RUMBA (Terminal Emulator)
Line Mode Browser, a command line web browser
NCSA Telnet
TeraTerm
SecureCRT from Van Dyke Software
ZOC Terminal
SyncTERM BBS terminal program supporting Telnet, SSHv2, RLogin, Serial, Windows, *nix, and Mac OS X platforms, X/Y/ZMODEM and various BBS terminal emulations
Rtelnet is a SOCKS client version of Telnet, providing functionality similar to telnet for hosts that are behind a firewall and NAT.
Inetutils includes a telnet client and server and is installed by default on many Linux distributions.
telnet.exe command line utility included in default installation of many versions of Microsoft Windows.
See also
List of terminal emulators
Banner grabbing
Virtual terminal
Reverse telnet
HyTelnet
Kermit
SSH
References
External links
Telnet Options — the official list of assigned option numbers at iana.org
Telnet Interactions Described as a Sequence Diagram
Telnet configuration
Telnet protocol description, with NVT reference
Microsoft TechNet:Telnet commands
TELNET: The Mother of All (Application) Protocols
Troubleshoot Telnet Errors in Windows Operating System
Contains a list of telnet addresses and list of telnet clients
Application layer protocols
History of the Internet
Internet Protocol based network software
Internet protocols
Internet Standards
Remote administration software
Unix network-related software
URI schemes |
31748 | https://en.wikipedia.org/wiki/Ultra | Ultra | Ultra was the designation adopted by British military intelligence in June 1941 for wartime signals intelligence obtained by breaking high-level encrypted enemy radio and teleprinter communications at the Government Code and Cypher School (GC&CS) at Bletchley Park. Ultra eventually became the standard designation among the western Allies for all such intelligence. The name arose because the intelligence obtained was considered more important than that designated by the highest British security classification then used (Most Secret) and so was regarded as being Ultra Secret. Several other cryptonyms had been used for such intelligence.
The code name Boniface was used as a cover name for Ultra. In order to ensure that the successful code-breaking did not become apparent to the Germans, British intelligence created a fictional MI6 master spy, Boniface, who controlled a fictional series of agents throughout Germany. Information obtained through code-breaking was often attributed to the human intelligence from the Boniface network. The U.S. used the codename Magic for its decrypts from Japanese sources, including the "Purple" cipher.
Much of the German cipher traffic was encrypted on the Enigma machine. Used properly, the German military Enigma would have been virtually unbreakable; in practice, shortcomings in operation allowed it to be broken. The term "Ultra" has often been used almost synonymously with "Enigma decrypts". However, Ultra also encompassed decrypts of the German Lorenz SZ 40/42 machines that were used by the German High Command, and the Hagelin machine.
Many observers, at the time and later, regarded Ultra as immensely valuable to the Allies. Winston Churchill was reported to have told King George VI, when presenting to him Stewart Menzies (head of the Secret Intelligence Service and the person who controlled distribution of Ultra decrypts to the government): "It is thanks to the secret weapon of General Menzies, put into use on all the fronts, that we won the war!" F. W. Winterbotham quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at war's end describing Ultra as having been "decisive" to Allied victory. Sir Harry Hinsley, Bletchley Park veteran and official historian of British Intelligence in World War II, made a similar assessment of Ultra, saying that while the Allies would have won the war without it, "the war would have been something like two years longer, perhaps three years longer, possibly four years longer than it was." However, Hinsley and others have emphasized the difficulties of counterfactual history in attempting such conclusions, and some historians, such as Keegan, have said the shortening might have been as little as the three months it took the United States to deploy the atomic bomb.
The existence of Ultra was kept secret for many years after the war. Since the Ultra story was widely disseminated by Winterbotham in 1974, historians have altered the historiography of World War II. For example, Andrew Roberts, writing in the 21st century, states, "Because he had the invaluable advantage of being able to read Field Marshal Erwin Rommel's Enigma communications, General Bernard Montgomery knew how short the Germans were of men, ammunition, food and above all fuel. When he put Rommel's picture up in his caravan he wanted to be seen to be almost reading his opponent's mind. In fact he was reading his mail." Over time, Ultra has become embedded in the public consciousness and Bletchley Park has become a significant visitor attraction. As stated by historian Thomas Haigh, "The British code-breaking effort of the Second World War, formerly secret, is now one of the most celebrated aspects of modern British history, an inspiring story in which a free society mobilized its intellectual resources against a terrible enemy."
Sources of intelligence
Most Ultra intelligence was derived from reading radio messages that had been encrypted with cipher machines, complemented by material from radio communications using traffic analysis and direction finding. In the early phases of the war, particularly during the eight-month Phoney War, the Germans could transmit most of their messages using land lines and so had no need to use radio. This meant that those at Bletchley Park had some time to build up experience of collecting and starting to decrypt messages on the various radio networks. German Enigma messages were the main source, with those of the Luftwaffe predominating, as they used radio more and their operators were particularly ill-disciplined.
German
Enigma
"Enigma" refers to a family of electro-mechanical rotor cipher machines. These produced a polyalphabetic substitution cipher and were widely thought to be unbreakable in the 1920s, when a variant of the commercial Model D was first used by the Reichswehr. The German Army, Navy, Air Force, Nazi party, Gestapo and German diplomats used Enigma machines in several variants. Abwehr (German military intelligence) used a four-rotor machine without a plugboard and Naval Enigma used different key management from that of the army or air force, making its traffic far more difficult to cryptanalyse; each variant required different cryptanalytic treatment. The commercial versions were not as secure and Dilly Knox of GC&CS is said to have broken one before the war.
German military Enigma was first broken in December 1932 by the Polish Cipher Bureau, using a combination of brilliant mathematics, the services of a spy in the German office responsible for administering encrypted communications, and good luck. The Poles read Enigma until the outbreak of World War II and beyond, in France. At the turn of 1939, the Germans made the systems ten times more complex, which would have required a tenfold increase in Polish decryption equipment, an expansion the Poles could not afford. On 25 July 1939, the Polish Cipher Bureau handed reconstructed Enigma machines and their techniques for decrypting ciphers to the French and British.
At Bletchley Park, some of the key people responsible for success against Enigma included mathematicians Alan Turing and Hugh Alexander and, at the British Tabulating Machine Company, chief engineer Harold Keen.
After the war, interrogation of German cryptographic personnel led to the conclusion that German cryptanalysts understood that cryptanalytic attacks against Enigma were possible but were thought to require impracticable amounts of effort and investment. The Poles' early start at breaking Enigma and the continuity of their success gave the Allies an advantage when World War II began.
Lorenz cipher
In June 1941, the Germans started to introduce on-line stream cipher teleprinter systems for strategic point-to-point radio links, to which the British gave the code-name Fish. Several systems were used, principally the Lorenz SZ 40/42 (Tunny) and Geheimfernschreiber (Sturgeon). These cipher systems were cryptanalysed, particularly Tunny, which the British thoroughly penetrated. It was eventually attacked using Colossus machines, which were the first digital programme-controlled electronic computers. In many respects the Tunny work was more difficult than for the Enigma, since the British codebreakers had no knowledge of the machine producing it and no head-start such as that the Poles had given them against Enigma.
Although the volume of intelligence derived from this system was much smaller than that from Enigma, its importance was often far higher because it produced primarily high-level, strategic intelligence sent between the Wehrmacht High Command (OKW) and commands in the field. The eventual bulk decryption of Lorenz-enciphered messages contributed significantly, and perhaps decisively, to the defeat of Nazi Germany. Nevertheless, the Tunny story has become much less well known among the public than the Enigma one. At Bletchley Park, some of the key people responsible for success in the Tunny effort included mathematicians W. T. "Bill" Tutte and Max Newman and electrical engineer Tommy Flowers.
Italian
In June 1940, the Italians were using book codes for most of their military messages, except for the Italian Navy, which in early 1941 had started using a version of the Hagelin rotor-based cipher machine C-38. This was broken from June 1941 onwards by the Italian subsection of GC&CS at Bletchley Park.
Japanese
In the Pacific theatre, a Japanese cipher machine, called "Purple" by the Americans, was used for highest-level Japanese diplomatic traffic. It produced a polyalphabetic substitution cipher, but unlike Enigma, was not a rotor machine, being built around electrical stepping switches. It was broken by the US Army Signal Intelligence Service and disseminated as Magic. Detailed reports by the Japanese ambassador to Germany were encrypted on the Purple machine. His reports included reviews of German assessments of the military situation, reviews of strategy and intentions, reports on direct inspections by the ambassador (in one case, of Normandy beach defences), and reports of long interviews with Hitler. The Japanese are said to have obtained an Enigma machine in 1937, although it is debated whether they were given it by the Germans or bought a commercial version, which, apart from the plugboard and internal wiring, was the German Heer/Luftwaffe machine. Having developed a similar machine, the Japanese did not use the Enigma machine for their most secret communications.
The chief fleet communications code system used by the Imperial Japanese Navy was called JN-25 by the Americans, and by early 1942 the US Navy had made considerable progress in decrypting Japanese naval messages. The US Army also made progress on the Japanese Army's codes in 1943, including codes used by supply ships, resulting in heavy losses to their shipping.
Distribution
Army- and air force-related intelligence derived from signals intelligence (SIGINT) sources—mainly Enigma decrypts in Hut 6—was compiled in summaries at GC&CS (Bletchley Park) Hut 3 and distributed initially under the codeword "BONIFACE", implying that it was acquired from a well placed agent in Berlin. The volume of the intelligence reports going out to commanders in the field built up gradually. Naval Enigma decrypted in Hut 8 was forwarded from Hut 4 to the Admiralty's Operational Intelligence Centre (OIC), which distributed it initially under the codeword "HYDRO". The codeword "ULTRA" was adopted in June 1941. This codeword was reportedly suggested by Commander Geoffrey Colpoys, RN, who served in the RN OIC.
Army and air force
The distribution of Ultra information to Allied commanders and units in the field involved considerable risk of discovery by the Germans, and great care was taken to control both the information and knowledge of how it was obtained. Liaison officers were appointed for each field command to manage and control dissemination.
Dissemination of Ultra intelligence to field commanders was carried out by MI6, which operated Special Liaison Units (SLU) attached to major army and air force commands. The activity was organized and supervised on behalf of MI6 by Group Captain F. W. Winterbotham. Each SLU included intelligence, communications, and cryptographic elements. It was headed by a British Army or RAF officer, usually a major, known as "Special Liaison Officer". The main function of the liaison officer or his deputy was to pass Ultra intelligence bulletins to the commander of the command he was attached to, or to other indoctrinated staff officers. In order to safeguard Ultra, special precautions were taken. The standard procedure was for the liaison officer to present the intelligence summary to the recipient, stay with him while he studied it, then take it back and destroy it.
By the end of the war, there were about 40 SLUs serving commands around the world. Fixed SLUs existed at the Admiralty, the War Office, the Air Ministry, RAF Fighter Command, the US Strategic Air Forces in Europe (Wycombe Abbey) and other fixed headquarters in the UK. An SLU was operating at the War HQ in Valletta, Malta. These units had permanent teleprinter links to Bletchley Park.
Mobile SLUs were attached to field army and air force headquarters and depended on radio communications to receive intelligence summaries. The first mobile SLUs appeared during the French campaign of 1940. An SLU supported the British Expeditionary Force (BEF) headed by General Lord Gort. The first liaison officers were Robert Gore-Browne and Humphrey Plowden. A second SLU of the 1940 period was attached to the RAF Advanced Air Striking Force at Meaux commanded by Air Vice-Marshal P H Lyon Playfair. This SLU was commanded by Squadron Leader F.W. "Tubby" Long.
Intelligence agencies
In 1940, special arrangements were made within the British intelligence services for handling BONIFACE and later Ultra intelligence. The Security Service started "Special Research Unit B1(b)" under Herbert Hart. In the SIS this intelligence was handled by "Section V" based at St Albans.
Radio and cryptography
The communications system was founded by Brigadier Sir Richard Gambier-Parry, who from 1938 to 1946 was head of MI6 Section VIII, based at Whaddon Hall in Buckinghamshire, UK. Ultra summaries from Bletchley Park were sent over landline to the Section VIII radio transmitter at Windy Ridge. From there they were transmitted to the destination SLUs.
The communications element of each SLU was called a "Special Communications Unit" or SCU. Radio transmitters were constructed at Whaddon Hall workshops, while receivers were the National HRO, made in the USA. The SCUs were highly mobile and the first such units used civilian Packard cars. The following SCUs are listed: SCU1 (Whaddon Hall), SCU2 (France before 1940, India), SCU3 (RSS Hanslope Park), SCU5, SCU6 (possibly Algiers and Italy), SCU7 (training unit in the UK), SCU8 (Europe after D-day), SCU9 (Europe after D-day), SCU11 (Palestine and India), SCU12 (India), SCU13 and SCU14.
The cryptographic element of each SLU was supplied by the RAF and was based on the TYPEX cryptographic machine and one-time pad systems.
RN Ultra messages from the OIC to ships at sea were necessarily transmitted over normal naval radio circuits and were protected by one-time pad encryption.
Lucy
An intriguing question concerns the alleged use of Ultra information by the "Lucy" spy ring, headquartered in Switzerland and apparently operated by one man, Rudolf Roessler. This was an extremely well informed, responsive ring that was able to get information "directly from German General Staff Headquarters" – often on specific request. It has been alleged that "Lucy" was in major part a conduit for the British to feed Ultra intelligence to the Soviets in a way that made it appear to have come from highly placed espionage rather than from cryptanalysis of German radio traffic. The Soviets, however, through an agent at Bletchley, John Cairncross, knew that Britain had broken Enigma. The "Lucy" ring was initially treated with suspicion by the Soviets. The information it provided was accurate and timely, however, and Soviet agents in Switzerland (including their chief, Alexander Radó) eventually learned to take it seriously. However, the theory that the Lucy ring was a cover for Britain to pass Enigma intelligence to the Soviets has not gained traction. Among others who have rejected the theory, Harry Hinsley, the official historian for the British Secret Services in World War II, stated that "there is no truth in the much-publicized claim that the British authorities made use of the 'Lucy' ring ... to forward intelligence to Moscow".
Use of intelligence
Most deciphered messages, often about relative trivia, were insufficient as intelligence reports for military strategists or field commanders. The organisation, interpretation and distribution of decrypted Enigma message traffic and other sources into usable intelligence was a subtle task.
At Bletchley Park, extensive indices were kept of the information in the messages decrypted. For each message the traffic analysis recorded the radio frequency, the date and time of intercept, and the preamble—which contained the network-identifying discriminant, the time of origin of the message, the callsign of the originating and receiving stations, and the indicator setting. This allowed cross referencing of a new message with a previous one. The indices included message preambles, every person, every ship, every unit, every weapon, every technical term and of repeated phrases such as forms of address and other German military jargon that might be usable as cribs.
The first decryption of a wartime Enigma message, albeit one that had been transmitted three months earlier, was achieved by the Poles at PC Bruno on 17 January 1940. Little had been achieved by the start of the Allied campaign in Norway in April. At the start of the Battle of France on 10 May 1940, the Germans made a very significant change in the indicator procedures for Enigma messages. However, the Bletchley Park cryptanalysts had anticipated this, and were able — jointly with PC Bruno — to resume breaking messages from 22 May, although often with some delay. The intelligence that these messages yielded was of little operational use in the fast-moving situation of the German advance.
Decryption of Enigma traffic built up gradually during 1940, with the first two prototype bombes being delivered in March and August. The traffic was almost entirely limited to Luftwaffe messages. By the peak of the Battle of the Mediterranean in 1941, however, Bletchley Park was deciphering daily 2,000 Italian Hagelin messages. By the second half of 1941 30,000 Enigma messages a month were being deciphered, rising to 90,000 a month of Enigma and Fish decrypts combined later in the war.
Some of the contributions that Ultra intelligence made to the Allied successes are given below.
In April 1940, Ultra information provided a detailed picture of the disposition of the German forces, and then their movement orders for the attack on the Low Countries prior to the Battle of France in May.
An Ultra decrypt of June 1940 read KNICKEBEIN KLEVE IST AUF PUNKT 53 GRAD 24 MINUTEN NORD UND EIN GRAD WEST EINGERICHTET ("The Cleves Knickebein is directed at position 53 degrees 24 minutes north and 1 degree west"). This was the definitive piece of evidence that Dr R V Jones of scientific intelligence in the Air Ministry needed to show that the Germans were developing a radio guidance system for their bombers. Ultra intelligence then continued to play a vital role in the so-called Battle of the Beams.
During the Battle of Britain, Air Chief Marshal Sir Hugh Dowding, Commander-in-Chief of RAF Fighter Command, had a teleprinter link from Bletchley Park to his headquarters at RAF Bentley Priory, for Ultra reports. Ultra intelligence kept him informed of German strategy, and of the strength and location of various Luftwaffe units, and often provided advance warning of bombing raids (but not of their specific targets). These contributed to the British success. Dowding was bitterly and sometimes unfairly criticized by others who did not see Ultra, but he did not disclose his source.
Decryption of traffic from Luftwaffe radio networks provided a great deal of indirect intelligence about the Germans' planned Operation Sea Lion to invade England in 1940.
On 17 September 1940 an Ultra message reported that equipment at German airfields in Belgium for loading planes with paratroops and their gear, was to be dismantled. This was taken as a clear signal that Sea Lion had been cancelled.
Ultra revealed that a major German air raid was planned for the night of 14 November 1940, and indicated three possible targets, including London and Coventry. However, the specific target was not determined until late on the afternoon of 14 November, by detection of the German radio guidance signals. Unfortunately, countermeasures failed to prevent the devastating Coventry Blitz. F. W. Winterbotham claimed that Churchill had advance warning, but intentionally did nothing about the raid, to safeguard Ultra. This claim has been comprehensively refuted by R V Jones, Sir David Hunt, Ralph Bennett and Peter Calvocoressi. Ultra warned of a raid but did not reveal the target. Churchill, who had been en route to Ditchley Park, was told that London might be bombed and returned to 10 Downing Street so that he could observe the raid from the Air Ministry roof.
Ultra intelligence considerably aided the British Army's Operation Compass victory over the much larger Italian army in Libya in December 1940 – February 1941.
Ultra intelligence greatly aided the Royal Navy's victory over the Italian navy in the Battle of Cape Matapan in March 1941.
Although the Allies lost the Battle of Crete in May 1941, the Ultra intelligence that a parachute landing was planned, and the exact day of the invasion, meant that heavy losses were inflicted on the Germans and that fewer British troops were captured.
Ultra intelligence fully revealed the preparations for Operation Barbarossa, the German invasion of the USSR. Although this information was passed to the Soviet government, Stalin refused to believe it. The information did, however, help British planning, knowing that substantial German forces were to be deployed to the East.
Ultra intelligence made a very significant contribution in the Battle of the Atlantic. Winston Churchill wrote "The only thing that ever really frightened me during the war was the U-boat peril." The decryption of Enigma signals to the U-boats was much more difficult than those of the Luftwaffe. It was not until June 1941 that Bletchley Park was able to read a significant amount of this traffic currently. Transatlantic convoys were then diverted away from the U-boat "wolfpacks", and the U-boat supply vessels were sunk. On 1 February 1942, Enigma U-boat traffic became unreadable because of the introduction of a different 4-rotor Enigma machine. This situation persisted until December 1942, although other German naval Enigma messages were still being deciphered, such as those of the U-boat training command at Kiel. From December 1942 to the end of the war, Ultra allowed Allied convoys to evade U-boat patrol lines, and guided Allied anti-submarine forces to the location of U-boats at sea.
In the Western Desert Campaign, Ultra intelligence helped Wavell and Auchinleck to prevent Rommel's forces from reaching Cairo in the autumn of 1941.
Ultra intelligence from Hagelin decrypts, and from Luftwaffe and German naval Enigma decrypts, helped sink about half of the ships supplying the Axis forces in North Africa.
Ultra intelligence from Abwehr transmissions confirmed that Britain's Security Service (MI5) had captured all of the German agents in Britain, and that the Abwehr still believed in the many double agents which MI5 controlled under the Double Cross System. This enabled major deception operations.
Deciphered JN-25 messages allowed the U.S. to turn back a Japanese offensive in the Battle of the Coral Sea in April 1942 and set up the decisive American victory at the Battle of Midway in June 1942.
Ultra contributed very significantly to the monitoring of German developments at Peenemünde and the collection of V-1 and V-2 Intelligence from 1942 onwards.
Ultra contributed to Montgomery's victory at the Battle of Alam el Halfa by providing warning of Rommel's planned attack.
Ultra also contributed to the success of Montgomery's offensive in the Second Battle of El Alamein, by providing him (before the battle) with a complete picture of Axis forces, and (during the battle) with Rommel's own action reports to Germany.
Ultra provided evidence that the Allied landings in French North Africa (Operation Torch) were not anticipated.
A JN-25 decrypt of 14 April 1943 provided details of Admiral Yamamoto's forthcoming visit to Balalae Island, and on 18 April, a year to the day following the Doolittle Raid, his aircraft was shot down, killing this man who was regarded as irreplaceable.
Ship position reports in the Japanese Army’s "2468" water transport code, decrypted by the SIS starting in July 1943, helped U.S. submarines and aircraft sink two-thirds of the Japanese merchant marine.
The part played by Ultra intelligence in the preparation for the Allied invasion of Sicily was of unprecedented importance. It provided information as to where the enemy's forces were strongest and that the elaborate strategic deceptions had convinced Hitler and the German high command.
The success of the Battle of North Cape, in which HMS Duke of York sank the German battleship Scharnhorst, was entirely built on prompt deciphering of German naval signals.
US Army Lieutenant Arthur J. Levenson, who worked on both Enigma and Tunny at Bletchley Park, discussed the value of the intelligence from Tunny in a 1980 interview.
Both Enigma and Tunny decrypts showed Germany had been taken in by Operation Bodyguard, the deception operation to protect Operation Overlord. They revealed the Germans did not anticipate the Normandy landings and even after D-Day still believed Normandy was only a feint, with the main invasion to be in the Pas de Calais.
Information that there was a German Panzergrenadier division in the planned dropping zone for the US 101st Airborne Division in Operation Overlord led to a change of location.
It assisted greatly in Operation Cobra.
It warned of the major German counterattack at Mortain, and allowed the Allies to surround the forces at Falaise.
During the Allied advance to Germany, Ultra often provided detailed tactical information, and showed how Hitler ignored the advice of his generals and insisted on German troops fighting in place "to the last man".
Arthur "Bomber" Harris, officer commanding RAF Bomber Command, was not cleared for Ultra. After D-Day, with the resumption of the strategic bomber campaign over Germany, Harris remained wedded to area bombardment. Historian Frederick Taylor argues that, as Harris was not cleared for access to Ultra, he was given some information gleaned from Enigma but not the information's source. This affected his attitude about post-D-Day directives to target oil installations, since he did not know that senior Allied commanders were using high-level German sources to assess just how much this was hurting the German war effort; thus Harris tended to see the directives to bomb specific oil and munitions targets as a "panacea" (his word) and a distraction from the real task of making the rubble bounce.
Safeguarding of sources
The Allies were seriously concerned with the prospect of the Axis command finding out that they had broken into the Enigma traffic. The British were more disciplined about such measures than the Americans, and this difference was a source of friction between them. It has been noted with some irony that in Delhi, the British Ultra unit was based in a large wooden hut in the grounds of Government House. Security consisted of a wooden table flat across the door with a bell on it and a sergeant sitting there. This hut was ignored by all. The American unit was in a large brick building, surrounded by barbed wire and armed patrols. People may not have known what was in there, but they surely knew it was something important and secret.
To disguise the source of the intelligence for the Allied attacks on Axis supply ships bound for North Africa, "spotter" submarines and aircraft were sent to search for Axis ships. These searchers or their radio transmissions were observed by the Axis forces, who concluded their ships were being found by conventional reconnaissance. They suspected that there were some 400 Allied submarines in the Mediterranean and a huge fleet of reconnaissance aircraft on Malta. In fact, there were only 25 submarines and at times as few as three aircraft.
This procedure also helped conceal the intelligence source from Allied personnel, who might give away the secret by careless talk, or under interrogation if captured. Along with the search mission that would find the Axis ships, two or three additional search missions would be sent out to other areas, so that crews would not begin to wonder why a single mission found the Axis ships every time.
Other deceptive means were used. On one occasion, a convoy of five ships sailed from Naples to North Africa with essential supplies at a critical moment in the North African fighting. There was no time to have the ships properly spotted beforehand. The decision to attack solely on Ultra intelligence went directly to Churchill. The ships were all sunk by an attack "out of the blue", arousing German suspicions of a security breach. To distract the Germans from the idea of a signals breach (such as Ultra), the Allies sent a radio message to a fictitious spy in Naples, congratulating him for this success. According to some sources the Germans decrypted this message and believed it.
In the Battle of the Atlantic, the precautions were taken to the extreme. In most cases where the Allies knew from intercepts the location of a U-boat in mid-Atlantic, the U-boat was not attacked immediately, until a "cover story" could be arranged. For example, a search plane might be "fortunate enough" to sight the U-boat, thus explaining the Allied attack.
Some Germans had suspicions that all was not right with Enigma. Admiral Karl Dönitz received reports of "impossible" encounters between U-boats and enemy vessels which made him suspect some compromise of his communications. In one instance, three U-boats met at a tiny island in the Caribbean Sea, and a British destroyer promptly showed up. The U-boats escaped and reported what had happened. Dönitz immediately asked for a review of Enigma's security. The analysis suggested that the signals problem, if there was one, was not due to the Enigma itself. Dönitz had the settings book changed anyway, blacking out Bletchley Park for a period. However, the evidence was never enough to truly convince him that Naval Enigma was being read by the Allies. The more so, since B-Dienst, his own codebreaking group, had partially broken Royal Navy traffic (including its convoy codes early in the war), and supplied enough information to support the idea that the Allies were unable to read Naval Enigma.
By 1945, most German Enigma traffic could be decrypted within a day or two, yet the Germans remained confident of its security.
Role of women in Allied codebreaking
After encryption systems were "broken", there was a large volume of cryptologic work needed to recover daily key settings and keep up with changes in enemy security procedures, plus the more mundane work of processing, translating, indexing, analyzing and distributing tens of thousands of intercepted messages daily. The more successful the code breakers were, the more labor was required. Some 8,000 women worked at Bletchley Park, about three quarters of the work force. Before the attack on Pearl Harbor, the US Navy sent letters to top women's colleges seeking introductions to their best seniors; the Army soon followed suit. By the end of the war, some 7000 workers in the Army Signal Intelligence service, out of a total 10,500, were female. By contrast, the Germans and Japanese had strong ideological objections to women engaging in war work. The Nazis even created a Cross of Honour of the German Mother to encourage women to stay at home and have babies.
Effect on the war
The exact influence of Ultra on the course of the war is debated; an oft-repeated assessment is that decryption of German ciphers advanced the end of the European war by no less than two years. Hinsley, who first made this claim, is typically cited as an authority for the two-year estimate.
Winterbotham's quoting of Eisenhower's "decisive" verdict is part of a letter sent by Eisenhower to Menzies after the conclusion of the European war and later found among his papers at the Eisenhower Presidential Library. It provides a contemporary, documentary view of a wartime leader's assessment of Ultra's importance.
There is wide disagreement about the importance of codebreaking in winning the crucial Battle of the Atlantic. To cite just one example, the historian Max Hastings states that "In 1941 alone, ultra saved between 1.5 and two million tons of Allied ships from destruction." This would represent a 40 percent to 53 percent reduction, though it is not clear how this extrapolation was made.
Another view is from a history based on the German naval archives written after the war for the British Admiralty by a former U-boat commander and son-in-law of his commander, Grand Admiral Karl Dönitz. His book reports that several times during the war they undertook detailed investigations to see whether their operations were being compromised by broken Enigma ciphers. These investigations were spurred because the Germans had broken the British naval code and found the information useful. Their investigations were negative, and the conclusion was that their defeat "was due firstly to outstanding developments in enemy radar..." The great advance was centimetric radar, developed in a joint British-American venture, which became operational in the spring of 1943. Earlier radar was unable to distinguish U-boat conning towers from the surface of the sea, so it could not even locate U-boats attacking convoys on the surface on moonless nights; thus the surfaced U-boats were almost invisible, while having the additional advantage of being swifter than their prey. The new higher-frequency radar could spot conning towers, and periscopes could even be detected from airplanes. Some idea of the relative effect of cipher-breaking and radar improvement can be obtained from graphs showing the tonnage of merchantmen sunk and the number of U-boats sunk in each month of the Battle of the Atlantic. The graphs cannot be interpreted unambiguously, because it is challenging to factor in many variables such as improvements in cipher-breaking and the numerous other advances in equipment and techniques used to combat U-boats. Nonetheless, the data seem to favor the view of the former U-boat commander—that radar was crucial.
While Ultra certainly affected the course of the Western Front during the war, two factors often argued against Ultra having shortened the overall war by a measure of years are the relatively small role it played in the Eastern Front conflict between Germany and the Soviet Union, and the completely independent development of the U.S.-led Manhattan Project to create the atomic bomb. Author Jeffrey T. Richelson mentions Hinsley's estimate of at least two years, and concludes that "It might be more accurate to say that Ultra helped shorten the war by three months – the interval between the actual end of the war in Europe and the time the United States would have been able to drop an atomic bomb on Hamburg or Berlin – and might have shortened the war by as much as two years had the U.S. atomic bomb program been unsuccessful." Military historian Guy Hartcup analyzes aspects of the question but then simply says, "It is impossible to calculate in terms of months or years how much Ultra shortened the war."
Postwar disclosures
While it is obvious why Britain and the U.S. went to considerable pains to keep Ultra a secret until the end of the war, it has been a matter of some conjecture why Ultra was kept officially secret for 29 years thereafter, until 1974. During that period, the important contributions to the war effort of a great many people remained unknown, and they were unable to share in the glory of what is now recognised as one of the chief reasons the Allies won the war – or, at least, as quickly as they did.
At least three versions exist as to why Ultra was kept secret so long. Each has plausibility, and all may be true. First, as David Kahn pointed out in his 1974 New York Times review of Winterbotham's The Ultra Secret, after the war, surplus Enigmas and Enigma-like machines were sold to Third World countries, which remained convinced of the security of the remarkable cipher machines. Their traffic was not as secure as they believed, however, which is one reason the British made the machines available.
By the 1970s, newer computer-based ciphers were becoming popular as the world increasingly turned to computerised communications, and the usefulness of Enigma copies (and rotor machines generally) rapidly decreased. Switzerland developed its own version of Enigma, known as NEMA, and used it into the late 1970s, while the United States National Security Agency (NSA) retired the last of its rotor-based encryption systems, the KL-7 series, in the 1980s.
A second explanation relates to a misadventure of Churchill's between the World Wars, when he publicly disclosed information from decrypted Soviet communications. This had prompted the Soviets to change their ciphers, leading to a blackout.
The third explanation is given by Winterbotham, who recounts that two weeks after V-E Day, on 25 May 1945, Churchill requested former recipients of Ultra intelligence not to divulge the source or the information that they had received from it, in order that there be neither damage to the future operations of the Secret Service nor any cause for the Axis to blame Ultra for their defeat.
Since it was British and, later, American message-breaking which had been the most extensive, the importance of Enigma decrypts to the prosecution of the war remained unknown despite revelations by the Poles and the French of their early work on breaking the Enigma cipher. This work, which was carried out in the 1930s and continued into the early part of the war, was necessarily uninformed regarding further breakthroughs achieved by the Allies during the balance of the war. In 1967, Polish military historian Władysław Kozaczuk in his book Bitwa o tajemnice ("Battle for Secrets") first revealed Enigma had been broken by Polish cryptologists before World War II. Later the 1973 public disclosure of Enigma decryption in the book Enigma by French intelligence officer Gustave Bertrand generated pressure to discuss the rest of the Enigma–Ultra story.
In 1967, David Kahn in The Codebreakers described the 1944 capture of a Naval Enigma machine from U-505 and gave the first published hint about the scale, mechanisation and operational importance of the Anglo-American Enigma-breaking operation.
Ladislas Farago's 1971 best-seller The Game of the Foxes gave an early garbled version of the myth of the purloined Enigma. According to Farago, it was thanks to a "Polish-Swedish ring [that] the British obtained a working model of the 'Enigma' machine, which the Germans used to encipher their top-secret messages." "It was to pick up one of these machines that Commander Denniston went clandestinely to a secluded Polish castle [!] on the eve of the war. Dilly Knox later solved its keying, exposing all Abwehr signals encoded by this system." "In 1941 [t]he brilliant cryptologist Dillwyn Knox, working at the Government Code & Cypher School at the Bletchley centre of British code-cracking, solved the keying of the Abwehr's Enigma machine."
The British ban was finally lifted in 1974, the year that a key participant on the distribution side of the Ultra project, F. W. Winterbotham, published The Ultra Secret. A succession of books by former participants and others followed. The official history of British intelligence in World War II was published in five volumes from 1979 to 1988, and included further details from official sources concerning the availability and employment of Ultra intelligence. It was chiefly edited by Harry Hinsley, with one volume by Michael Howard. There is also a one-volume collection of reminiscences by Ultra veterans, Codebreakers (1993), edited by Hinsley and Alan Stripp.
A 2012 London Science Museum exhibit, "Code Breaker: Alan Turing's Life and Legacy", marking the centenary of his birth, includes a short film of statements by half a dozen participants and historians of the World War II Bletchley Park Ultra operations. John Agar, a historian of science and technology, states that by war's end 8,995 people worked at Bletchley Park. Iain Standen, Chief Executive of the Bletchley Park Trust, says of the work done there: "It was crucial to the survival of Britain, and indeed of the West." The Departmental Historian at GCHQ (the Government Communications Headquarters), who identifies himself only as "Tony" but seems to speak authoritatively, says that Ultra was a "major force multiplier. It was the first time that quantities of real-time intelligence became available to the British military." He further states that it is only in 2012 that Alan Turing's last two papers on Enigma decryption have been released to Britain's National Archives; the seven decades' delay had been due to their "continuing sensitivity... It wouldn't have been safe to release [them earlier]."
Holocaust intelligence
Historians and Holocaust researchers have tried to establish when the Allies realized the full extent of Nazi-era extermination of Jews, and specifically, the extermination-camp system. In 1999, the U.S. Government passed the Nazi War Crimes Disclosure Act (P.L. 105-246), making it policy to declassify all Nazi war crime documents in their files; this was later amended to include the Japanese Imperial Government. As a result, more than 600 decrypts and translations of intercepted messages were disclosed; NSA historian Robert Hanyok would conclude that Allied communications intelligence, "by itself, could not have provided an early warning to Allied leaders regarding the nature and scope of the Holocaust."
Following Operation Barbarossa, decrypts in August 1941 alerted British authorities to the many massacres in occupied zones of the Soviet Union, including those of Jews, but specifics were not made public for security reasons. Revelations about the concentration camps were gleaned from other sources, and were publicly reported by the Polish government-in-exile, Jan Karski and the World Jewish Congress (WJC) offices in Switzerland a year or more later. A decrypted message referring to "Einsatz Reinhard" (the Höfle Telegram), from January 11, 1943, may have outlined the system and listed the number of Jews and others gassed at four death camps the previous year, but codebreakers did not understand the meaning of the message. In summer 1944, Arthur Schlesinger, an OSS analyst, interpreted the intelligence as an "incremental increase in persecution rather than... extermination."
Postwar consequences
There has been controversy about the influence of Allied Enigma decryption on the course of World War II. It has also been suggested that the question should be broadened to include Ultra's influence not only on the war itself, but also on the post-war period.
F. W. Winterbotham, the first author to outline the influence of Enigma decryption on the course of World War II, likewise made the earliest contribution to an appreciation of Ultra's postwar influence, which now continues into the 21st century – and not only in the postwar establishment of Britain's GCHQ (Government Communications Headquarters) and America's NSA. "Let no one be fooled," Winterbotham admonishes in chapter 3, "by the spate of television films and propaganda which has made the war seem like some great triumphant epic. It was, in fact, a very narrow shave, and the reader may like to ponder [...] whether [...] we might have won [without] Ultra."
Debate continues on whether, had postwar political and military leaders been aware of Ultra's role in Allied victory in World War II, these leaders might have been less optimistic about post-World War II military involvements.
Knightley suggests that Ultra may have contributed to the development of the Cold War. The Soviets received disguised Ultra information, but the existence of Ultra itself was not disclosed by the western Allies. The Soviets, who had clues to Ultra's existence, possibly through Kim Philby, John Cairncross and Anthony Blunt, may thus have felt still more distrustful of their wartime partners.
The mystery surrounding the wreck of the German submarine U-869, discovered off the coast of New Jersey by divers Richie Kohler and John Chatterton, was unravelled in part through the analysis of Ultra intercepts. These demonstrated that, although U-869 had been ordered by U-boat Command to change course and proceed to North Africa, near Rabat, the submarine had missed the messages changing her assignment and had continued to the eastern coast of the U.S., her original destination.
In 1953, the CIA's Project ARTICHOKE, a series of experiments on human subjects to develop drugs for use in interrogations, was renamed Project MKUltra. MK was the CIA's designation for its Technical Services Division and Ultra was in reference to the Ultra project.
See also
Hut 6
Hut 8
Magic (cryptography)
Military intelligence
Signals intelligence in modern history
The Imitation Game
Notes
References
Bibliography
Telecommunications-related introductions in 1941
1941 establishments in the United Kingdom
Military intelligence
Signals intelligence of World War II
Secret Intelligence Service
Bletchley Park |
32494 | https://en.wikipedia.org/wiki/Vi | Vi | vi (pronounced as distinct letters) is a screen-oriented text editor originally created for the Unix operating system. The portable subset of the behavior of vi and programs based on it, and the ex editor language supported within these programs, is described by (and thus standardized by) the Single Unix Specification and POSIX.
The original code for vi was written by Bill Joy in 1976, as the visual mode for a line editor called ex that Joy had written with Chuck Haley. Bill Joy's ex 1.1 was released as part of the first Berkeley Software Distribution (BSD) Unix release in March 1978. It was not until version 2.0 of ex, released as part of Second BSD in May 1979, that the editor was installed under the name "vi" (which took users straight into ex's visual mode), the name by which it is known today. Some current implementations of vi can trace their source code ancestry to Bill Joy; others are completely new, largely compatible reimplementations.
The name "vi" is derived from the shortest unambiguous abbreviation for the ex command visual, which switches the ex line editor to its full-screen mode. The name is pronounced (the English letters v and i).
In addition to various non–free software variants of vi distributed with proprietary implementations of Unix, vi was open-sourced with OpenSolaris, and several free and open source software vi clones exist. A 2009 survey of Linux Journal readers found that vi was the most widely used text editor among respondents, beating gedit, the second most widely used editor, by nearly a factor of two (36% to 19%).
History
Creation
vi was derived from a sequence of UNIX command line editors, starting with ed, which was a line editor designed to work well on teleprinters, rather than display terminals. Within AT&T Corporation, where ed originated, people seemed to be happy with an editor as basic and unfriendly as ed, as George Coulouris recalls:
[...] for many years, they had no suitable terminals. They carried on with TTYs and other printing terminals for a long time, and when they did buy screens for everyone, they got Tektronix 4014s. These were large storage tube displays. You can't run a screen editor on a storage-tube display as the picture can't be updated. Thus it had to fall to someone else to pioneer screen editing for Unix, and that was us initially, and we continued to do so for many years.
Coulouris considered the cryptic commands of ed to be only suitable for "immortals", and thus in February 1976, he enhanced ed (using Ken Thompson's ed source as a starting point) to make em (the "editor for mortals") while acting as a lecturer at Queen Mary College. The em editor was designed for display terminals and was a single-line-at-a-time visual editor. It was one of the first programs on Unix to make heavy use of "raw terminal input mode", in which the running program, rather than the terminal device driver, handled all keystrokes. When Coulouris visited UC Berkeley in the summer of 1976, he brought a DECtape containing em, and showed the editor to various people. Some people considered this new kind of editor to be a potential resource hog, but others, including Bill Joy, were impressed.
Inspired by em, and by their own tweaks to ed, Bill Joy and Chuck Haley, both graduate students at UC Berkeley, took code from em to make en, and then "extended" en to create ex version 0.1. After Haley's departure, Bruce Englar encouraged Joy to redesign the editor, which he did from June through October 1977, adding a full-screen visual mode to ex, which came to be vi.
vi and ex share their code; vi is the ex binary launched with the capability to render the text being edited onto a computer terminal; it is ex's visual mode. The name vi comes from the abbreviated ex command (vi) used to enter the visual mode from within it. The long-form command to do the same was visual, and the name vi is explained as a contraction of visual in later literature. vi is also the shell command to launch ex/vi in visual mode directly, from within a shell.
According to Joy, many of the ideas in this visual mode were taken from Bravo, the bimodal text editor developed at Xerox PARC for the Alto. In an interview about vi's origins, Joy said:
A lot of the ideas for the screen editing mode were stolen from a Bravo manual I surreptitiously looked at and copied. Dot is really the double-escape from Bravo, the redo command. Most of the stuff was stolen. There were some things stolen from ed—we got a manual page for the Toronto version of ed, which I think Rob Pike had something to do with. We took some of the regular expression extensions out of that.
Joy used a Lear Siegler ADM-3A terminal. On this terminal, the Escape key was at the location now occupied by the Tab key on the widely used IBM PC keyboard (on the left side of the alphabetic part of the keyboard, one row above the middle row). This made it a convenient choice for switching vi modes. Also, the keys h, j, k, l served double duty as cursor movement keys and were inscribed with arrows, which is why vi uses them in that way. The ADM-3A had no other cursor keys. Joy explained that the terse, single character commands and the ability to type ahead of the display were a result of the slow 300 baud modem he used when developing the software and that he wanted to be productive when the screen was painting slower than he could think.
Distribution
Joy was responsible for creating the first BSD Unix release in March, 1978, and included ex 1.1 (dated 1 February 1978) in the distribution, thereby exposing his editor to an audience beyond UC Berkeley. From that release of BSD Unix onwards, the only editors that came with the Unix system were ed and ex. In a 1984 interview, Joy attributed much of the success of vi to the fact that it was bundled for free, whereas other editors, such as Emacs, could cost hundreds of dollars.
Eventually it was observed that most ex users were spending all their time in visual mode, and thus in ex 2.0 (released as part of Second Berkeley Software Distribution in May, 1979), Joy created vi as a hard link to ex, such that when invoked as vi, ex would automatically start up in its visual mode. Thus, vi is not the evolution of ex, vi is ex.
Joy described ex 2.0 (vi) as a very large program, barely able to fit in the memory of a PDP-11/70; thus, although vi may be regarded as a small, lightweight program today, it was not seen that way early in its history. By version 3.1, shipped with 3BSD in December 1979, the full version of vi was no longer able to fit in the memory of a PDP-11; the editor was also too big to run on PC/IX for the IBM PC in 1984.
Joy continued to be lead developer for vi until version 2.7 in June 1979, and made occasional contributions to vi's development until at least version 3.5 in August 1980. In discussing the origins of vi and why he discontinued development, Joy said:
I wish we hadn't used all the keys on the keyboard. I think one of the interesting things is that vi is really a mode-based editor. I think as mode-based editors go, it's pretty good. One of the good things about EMACS, though, is its programmability and the modelessness. Those are two ideas which never occurred to me. I also wasn't very good at optimizing code when I wrote vi. I think the redisplay module of the editor is almost intractable. It does a really good job for what it does, but when you're writing programs as you're learning... That's why I stopped working on it.
What actually happened was that I was in the process of adding multiwindows to vi when we installed our VAX, which would have been in December of '78. We didn't have any backups and the tape drive broke. I continued to work even without being able to do backups. And then the source code got scrunched and I didn't have a complete listing. I had almost rewritten all of the display code for windows, and that was when I gave up. After that, I went back to the previous version and just documented the code, finished the manual and closed it off. If that scrunch had not happened, vi would have multiple windows, and I might have put in some programmability—but I don't know.
The fundamental problem with vi is that it doesn't have a mouse and therefore you've got all these commands. In some sense, it's backwards from the kind of thing you'd get from a mouse-oriented thing. I think multiple levels of undo would be wonderful, too. But fundamentally, vi is still ed inside. You can't really fool it.
It's like one of those pinatas—things that have candy inside but has layer after layer of paper mache on top. It doesn't really have a unified concept. I think if I were going to go back—I wouldn't go back, but start over again.
In 1979, Mary Ann Horton took on responsibility for vi. Horton added support for arrow and function keys, macros, and improved performance by replacing termcap with terminfo.
Ports and clones
Up to version 3.7 of vi, created in October 1981, UC Berkeley was the development home for vi, but with Bill Joy's departure in early 1982 to join Sun Microsystems, and AT&T's UNIX System V (January 1983) adopting vi, changes to the vi codebase happened more slowly and in a more dispersed and mutually incompatible way. At UC Berkeley, changes were made but the version number was never updated beyond 3.7. Commercial Unix vendors, such as Sun, HP, DEC, and IBM each received copies of the vi source, and their operating systems, Solaris, HP-UX, Tru64 UNIX, and AIX, today continue to maintain versions of vi directly descended from the 3.7 release, but with added features, such as adjustable key mappings, encryption, and wide character support.
While commercial vendors could work with Bill Joy's codebase (and continue to use it today), many people could not. Because Joy had begun with Ken Thompson's ed editor, ex and vi were derivative works and could not be distributed except to people who had an AT&T source license. People looking for a free Unix-style editor would have to look elsewhere. By 1985, a version of Emacs (MicroEMACS) was available for a variety of platforms, but it was not until June 1987 that STEVIE (ST Editor for VI Enthusiasts), a limited vi clone, appeared. In early January 1990, Steve Kirkendall posted a new clone of vi, Elvis, to the Usenet newsgroup comp.os.minix, aiming for a more complete and more faithful clone of vi than STEVIE. It quickly attracted considerable interest in a number of enthusiast communities. Andrew Tanenbaum quickly asked the community to decide on one of these two editors to be the vi clone in Minix; Elvis was chosen, and remains the vi clone for Minix today.
In 1989 Lynne Jolitz and William Jolitz began porting BSD Unix to run on 386 class processors, but to create a free distribution they needed to avoid any AT&T-contaminated code, including Joy's vi. To fill the void left by removing vi, their 1992 386BSD distribution adopted Elvis as its vi replacement. 386BSD's descendants, FreeBSD and NetBSD, followed suit. But at UC Berkeley, Keith Bostic wanted a "bug for bug compatible" replacement for Joy's vi for BSD 4.4 Lite. Using Kirkendall's Elvis (version 1.8) as a starting point, Bostic created nvi, releasing it in the northern spring of 1994. When FreeBSD and NetBSD resynchronized the 4.4-Lite2 codebase, they too switched over to Bostic's nvi, which they continue to use today.
Despite the existence of vi clones with enhanced feature sets, sometime before June 2000, Gunnar Ritter ported Joy's vi codebase (taken from 2.11BSD, February 1992) to modern Unix-based operating systems, such as Linux and FreeBSD. Initially, his work was technically illegal to distribute without an AT&T source license, but, in January 2002, those licensing rules were relaxed, allowing legal distribution as an open-source project. Ritter continued to make small enhancements to the vi codebase similar to those done by commercial Unix vendors still using Joy's codebase, including changes required by the POSIX.2 standard for vi. His work is available as Traditional Vi, and runs today on a variety of systems.
But although Joy's vi was now once again available for BSD Unix, it arrived after the various BSD flavors had committed themselves to nvi, which provides a number of enhancements over traditional vi and drops some of its legacy features (such as open mode for editing one line at a time). It is, in some sense, a strange inversion that BSD Unix, where Joy's vi codebase began, no longer uses it, and the AT&T-derived Unixes, which in the early days lacked Joy's editor, are the ones that now use and maintain modified versions of his code.
Impact
Over the years since its creation, vi became the de facto standard Unix editor and a hacker favorite outside of MIT until the rise of Emacs after about 1984. The Single UNIX Specification specifies vi, so every conforming system must have it.
vi is still widely used by users of the Unix family of operating systems. About half the respondents in a 1991 USENET poll preferred vi. In 1999, Tim O'Reilly, founder of the eponymous computer book publishing company, stated that his company sold more copies of its vi book than its emacs book.
Interface
vi is a modal editor: it operates in either insert mode (where typed text becomes part of the document) or command mode (where keystrokes are interpreted as commands that control the edit session). For example, typing i while in command mode switches the editor to insert mode, but typing i again at this point places an "i" character in the document. From insert mode, pressing Esc switches the editor back to command mode. A perceived advantage of vi's separation of text entry and command modes is that both text editing and command operations can be performed without requiring the removal of the user's hands from the home row. As non-modal editors usually have to reserve all keys with letters and symbols for the printing of characters, any special commands for actions other than adding text to the buffer must be assigned to keys that do not produce characters, such as function keys, or combinations of modifier keys such as Ctrl and Alt with regular keys. Vi has the property that most ordinary keys are connected to some kind of command for positioning, altering text, searching and so forth, either singly or in key combinations. Many commands can be touch typed without the use of Ctrl or Alt. Other types of editors generally require the user to move their hands from the home row when touch typing:
To use a mouse to select text, commands, or menu items in a GUI editor.
To reach the arrow keys or editing functions (Home / End or function keys).
To invoke commands using modifier keys in conjunction with the standard typewriter keys.
For instance, in vi, replacing a word is cw followed by the replacement text and then Esc, which is a combination of two independent commands (change and word-motion) together with a transition into and out of insert mode. Text between the cursor position and the end of the word is overwritten by the replacement text. The operation can be repeated at some other location by typing . (the repeat command); the effect is that the word starting at that location will be replaced with the same replacement text.
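A brief sketch of a few more standard command-mode keystrokes may help illustrate this style of editing (the selection below is only illustrative; each command is part of ordinary vi behaviour):
i – enter insert mode before the cursor (Esc returns to command mode)
cw – change (replace) text from the cursor to the end of the current word
dd – delete the current line
x – delete the character under the cursor
u – undo the last change
. – repeat the last change at the current cursor position
:wq – write the file and quit (an ex command entered from command mode)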
A human–computer interaction textbook notes on its first page that "One of the classic UI foibles—told and re-told by HCI educators around the world—is the vi editor's lack of feedback when switching between modes. Many a user made the mistake of providing input while in command mode or entering a command while in input mode."
Contemporary derivatives and clones
Vim "Vi IMproved" has many additional features compared to vi, including (scriptable) syntax highlighting, mouse support, graphical versions, visual mode, many new editing commands and a large amount of extension in the area of ex commands. Vim is included with almost every Linux distribution (and is also shipped with every copy of Apple macOS). Vim also has a vi compatibility mode, in which Vim is more compatible with vi than it would be otherwise, although some vi features, such as open mode, are missing in Vim, even in compatibility mode. This mode is controlled by the :set compatible option. It is automatically turned on by Vim when it is started in a situation that looks as if the software might be expected to be vi compatible. Vim features that do not conflict with vi compatibility are always available, regardless of the setting. Vim was derived from a port of STEVIE to the Amiga.
Elvis is a free vi clone for Unix and other operating systems written by Steve Kirkendall. Elvis introduced a number of features now present in other vi clones, including allowing the cursor keys to work in input mode. It was the first to provide color syntax highlighting (and to generalize syntax highlighting to multiple filetypes). Elvis 1.x was used as the starting point for nvi, but Elvis 2.0 added numerous features, including multiple buffers, windows, display modes, and file access schemes. Elvis is the standard version of vi shipped on Slackware Linux, Kate OS and MINIX. The most recent version of Elvis is 2.2, released in October 2003.
nvi is an implementation of the ex/vi text editor originally distributed as part of the final official Berkeley Software Distribution (4.4 BSD-Lite). This is the version of vi that is shipped with all BSD-based open source distributions. It adds command history and editing, filename completions, multiple edit buffers, and multi-windowing (including multiple windows on the same edit buffer). Beyond 1.79, from October, 1996, which is the recommended stable version, there have been "development releases" of nvi, the most recent of which is 1.81.6, from November, 2007.
vile was initially derived from an early version of Microemacs in an attempt to bring the Emacs multi-window/multi-buffer editing paradigm to vi users, and was first published on Usenet's alt.sources in 1991. It provides infinite undo, UTF-8 compatibility, multi-window/multi-buffer operation, a macro expansion language, syntax highlighting, file read and write hooks, and more.
BusyBox, a set of standard Linux utilities in a single executable, includes a tiny vi clone.
Neovim, a refactor of Vim, which it strives to supersede.
See also
List of text editors
Comparison of text editors
visudo
List of Unix commands
References
Further reading
External links
The original Vi version, adapted to more modern standards
An Introduction to Display Editing with Vi, by Mark Horton and Bill Joy
vi lovers home page
Explanation of modal editing with vi – "Why, oh WHY, do those #?@! nutheads use vi?"
The original source code of ex (aka vi) versions 1.1, 2.2, 3.2, 3.6 and 3.7 ported to current UNIX
Computer-related introductions in 1976
Free text editors
Software using the BSD license
Unix SUS2008 utilities
Unix text editors
Console applications |
32496 | https://en.wikipedia.org/wiki/Vacuum%20tube | Vacuum tube | A vacuum tube, electron tube, valve (British usage), or tube (North America), is a device that controls electric current flow in a high vacuum between electrodes to which an electric potential difference has been applied.
The type known as a thermionic tube or thermionic valve utilizes thermionic emission of electrons from a hot cathode for fundamental electronic functions such as signal amplification and current rectification. Non-thermionic types such as a vacuum phototube, however, achieve electron emission through the photoelectric effect, and are used for such purposes as the detection of light intensities. In both types, the electrons are accelerated from the cathode to the anode by the electric field in the tube.
The simplest vacuum tube, the diode, invented in 1904 by John Ambrose Fleming, contains only a heated electron-emitting cathode and an anode. Electrons can only flow in one direction through the device—from the cathode to the anode. Adding one or more control grids within the tube allows the current between the cathode and anode to be controlled by the voltage on the grids.
These devices became a key component of electronic circuits for the first half of the twentieth century. They were crucial to the development of radio, television, radar, sound recording and reproduction, long-distance telephone networks, and analog and early digital computers. Although some applications had used earlier technologies such as the spark gap transmitter for radio or mechanical computers for computing, it was the invention of the thermionic vacuum tube that made these technologies widespread and practical, and created the discipline of electronics.
In the 1940s, the invention of semiconductor devices made it possible to produce solid-state devices, which are smaller, more efficient, reliable, durable, safer, and more economical than thermionic tubes. Beginning in the mid-1960s, thermionic tubes were being replaced by the transistor. However, the cathode-ray tube (CRT) remained the basis for television monitors and oscilloscopes until the early 21st century. Thermionic tubes are still used in some applications, such as the magnetron used in microwave ovens, certain high-frequency amplifiers, and amplifiers that audio enthusiasts prefer for their "warmer" tube sound.
Not all electronic circuit valves/electron tubes are vacuum tubes. Gas-filled tubes are similar devices, but containing a gas, typically at low pressure, which exploit phenomena related to electric discharge in gases, usually without a heater.
Classifications
One classification of thermionic vacuum tubes is by the number of active electrodes. A device with two active elements is a diode, usually used for rectification. Devices with three elements are triodes used for amplification and switching. Additional electrodes create tetrodes, pentodes, and so forth, which have multiple additional functions made possible by the additional controllable electrodes.
Other classifications are:
by frequency range (audio, radio, VHF, UHF, microwave)
by power rating (small-signal, audio power, high-power radio transmitting)
by cathode/filament type (indirectly heated, directly heated) and warm-up time (including "bright-emitter" or "dull-emitter")
by characteristic curves design (e.g., sharp- versus remote-cutoff in some pentodes)
by application (receiving tubes, transmitting tubes, amplifying or switching, rectification, mixing)
specialized parameters (long life, very low microphonic sensitivity and low-noise audio amplification, rugged or military versions)
specialized functions (light or radiation detectors, video imaging tubes)
tubes used to display information ("magic eye" tubes, vacuum fluorescent displays, CRTs)
Tubes have different functions, such as cathode ray tubes which create a beam of electrons for display purposes (such as the television picture tube) in addition to more specialized functions such as electron microscopy and electron beam lithography. X-ray tubes are also vacuum tubes. Phototubes and photomultipliers rely on electron flow through a vacuum, though in those cases electron emission from the cathode depends on energy from photons rather than thermionic emission. Since these sorts of "vacuum tubes" have functions other than electronic amplification and rectification they are described elsewhere.
Description
A vacuum tube consists of two or more electrodes in a vacuum inside an airtight envelope. Most tubes have glass envelopes with a glass-to-metal seal based on kovar sealable borosilicate glasses, though ceramic and metal envelopes (atop insulating bases) have been used. The electrodes are attached to leads which pass through the envelope via an airtight seal. Most vacuum tubes have a limited lifetime, due to the filament or heater burning out or other failure modes, so they are made as replaceable units; the electrode leads connect to pins on the tube's base which plug into a tube socket. Tubes were a frequent cause of failure in electronic equipment, and consumers were expected to be able to replace tubes themselves. In addition to the base terminals, some tubes had an electrode terminating at a top cap. The principal reason for doing this was to avoid leakage resistance through the tube base, particularly for the high impedance grid input. The bases were commonly made with phenolic insulation which performs poorly as an insulator in humid conditions. Other reasons for using a top cap include improving stability by reducing grid-to-anode capacitance, improved high-frequency performance, keeping a very high plate voltage away from lower voltages, and accommodating one more electrode than allowed by the base. There was even an occasional design that had two top cap connections.
The earliest vacuum tubes evolved from incandescent light bulbs, containing a filament sealed in an evacuated glass envelope. When hot, the filament releases electrons into the vacuum, a process called thermionic emission, originally known as the Edison effect. A second electrode, the anode or plate, will attract those electrons if it is at a more positive voltage. The result is a net flow of electrons from the filament to plate. However, electrons cannot flow in the reverse direction because the plate is not heated and does not emit electrons. The filament (cathode) has a dual function: it emits electrons when heated; and, together with the plate, it creates an electric field due to the potential difference between them. Such a tube with only two electrodes is termed a diode, and is used for rectification. Since current can only pass in one direction, such a diode (or rectifier) will convert alternating current (AC) to pulsating DC. Diodes can therefore be used in a DC power supply, as a demodulator of amplitude modulated (AM) radio signals and for similar functions.
Early tubes used the filament as the cathode; this is called a "directly heated" tube. Most modern tubes are "indirectly heated" by a "heater" element inside a metal tube that is the cathode. The heater is electrically isolated from the surrounding cathode and simply serves to heat the cathode sufficiently for thermionic emission of electrons. The electrical isolation allows all the tubes' heaters to be supplied from a common circuit (which can be AC without inducing hum) while allowing the cathodes in different tubes to operate at different voltages. H. J. Round invented the indirectly heated tube around 1913.
The filaments require constant and often considerable power, even when amplifying signals at the microwatt level. Power is also dissipated when the electrons from the cathode slam into the anode (plate) and heat it; this can occur even in an idle amplifier due to quiescent currents necessary to ensure linearity and low distortion. In a power amplifier, this heating can be considerable and can destroy the tube if driven beyond its safe limits. Since the tube contains a vacuum, the anodes in most small and medium power tubes are cooled by radiation through the glass envelope. In some special high power applications, the anode forms part of the vacuum envelope to conduct heat to an external heat sink, usually cooled by a blower, or water-jacket.
Klystrons and magnetrons often operate their anodes (called collectors in klystrons) at ground potential to facilitate cooling, particularly with water, without high-voltage insulation. These tubes instead operate with high negative voltages on the filament and cathode.
Except for diodes, additional electrodes are positioned between the cathode and the plate (anode). These electrodes are referred to as grids as they are not solid electrodes but sparse elements through which electrons can pass on their way to the plate. The vacuum tube is then known as a triode, tetrode, pentode, etc., depending on the number of grids. A triode has three electrodes: the anode, cathode, and one grid, and so on. The first grid, known as the control grid, (and sometimes other grids) transforms the diode into a voltage-controlled device: the voltage applied to the control grid affects the current between the cathode and the plate. When held negative with respect to the cathode, the control grid creates an electric field that repels electrons emitted by the cathode, thus reducing or even stopping the current between cathode and anode. As long as the control grid is negative relative to the cathode, essentially no current flows into it, yet a change of several volts on the control grid is sufficient to make a large difference in the plate current, possibly changing the output by hundreds of volts (depending on the circuit). The solid-state device which operates most like the pentode tube is the junction field-effect transistor (JFET), although vacuum tubes typically operate at over a hundred volts, unlike most semiconductors in most applications.
History and development
The 19th century saw increasing research with evacuated tubes, such as the Geissler and Crookes tubes. The many scientists and inventors who experimented with such tubes include Thomas Edison, Eugen Goldstein, Nikola Tesla, and Johann Wilhelm Hittorf. With the exception of early light bulbs, such tubes were only used in scientific research or as novelties. The groundwork laid by these scientists and inventors, however, was critical to the development of subsequent vacuum tube technology.
Although thermionic emission was originally reported in 1873 by Frederick Guthrie, it was Thomas Edison's apparently independent discovery of the phenomenon in 1883 that became well known. Although Edison was aware of the unidirectional property of current flow between the filament and the anode, his interest (and patent) concentrated on the sensitivity of the anode current to the current through the filament (and thus filament temperature). It was years later that John Ambrose Fleming applied the rectifying property of the Edison effect to detection of radio signals, as an improvement over the magnetic detector.
Amplification by vacuum tube became practical only with Lee de Forest's 1907 invention of the three-terminal "audion" tube, a crude form of what was to become the triode. Being essentially the first electronic amplifier, such tubes were instrumental in long-distance telephony (such as the first coast-to-coast telephone line in the US) and public address systems, and introduced a far superior and versatile technology for use in radio transmitters and receivers. The electronics revolution of the 20th century arguably began with the invention of the triode vacuum tube.
Diodes
At the end of the 19th century, radio or wireless technology was in an early stage of development and the Marconi Company was engaged in development and construction of radio communication systems. Marconi appointed English physicist John Ambrose Fleming as scientific advisor in 1899. Fleming had been engaged as scientific advisor to Edison Telephone (1879), as scientific advisor at Edison Electric Light (1882), and was also technical consultant to Edison-Swan. One of Marconi's needs was for improvement of the detector. Marconi had developed a magnetic detector, which was less responsive to natural sources of radio frequency interference than the coherer, but the magnetic detector only provided an audio frequency signal to a telephone receiver. A reliable detector that could drive a printing instrument was needed. As a result of experiments conducted on Edison effect bulbs, Fleming developed a vacuum tube that he termed the oscillation valve because it passed current in only one direction. The cathode was a carbon lamp filament, heated by passing current through it, that produced thermionic emission of electrons. Electrons that had been emitted from the cathode were attracted to the plate (anode) when the plate was at a positive voltage with respect to the cathode. Electrons could not pass in the reverse direction because the plate was not heated and not capable of thermionic emission of electrons. Fleming filed a patent for these tubes, assigned to the Marconi company, in the UK in November 1904 and this patent was issued in September 1905. Later known as the Fleming valve, the oscillation valve was developed for the purpose of rectifying radio frequency current as the detector component of radio receiver circuits.
While offering no advantage over the electrical sensitivity of crystal detectors, the Fleming valve offered advantage, particularly in shipboard use, over the difficulty of adjustment of the crystal detector and the susceptibility of the crystal detector to being dislodged from adjustment by vibration or bumping.
The first vacuum tube diodes designed for rectifier application in power supply circuits were introduced in April 1915 by Saul Dushman of General Electric.
Triodes
Originally, the only use for tubes in radio circuits was for rectification, not amplification. In 1906, Robert von Lieben filed for a patent for a cathode ray tube which included magnetic deflection. This could be used for amplifying audio signals and was intended for use in telephony equipment. He would later help refine the triode vacuum tube.
However, Lee de Forest is credited with inventing the triode tube in 1907 while experimenting to improve his original (diode) Audion. By placing an additional electrode between the filament (cathode) and plate (anode), he discovered the ability of the resulting device to amplify signals. As the voltage applied to the control grid (or simply "grid") was lowered from the cathode's voltage to somewhat more negative voltages, the amount of current from the filament to the plate would be reduced. The negative electrostatic field created by the grid in the vicinity of the cathode would inhibit the passage of emitted electrons and reduce the current to the plate. With the voltage of the grid less than that of the cathode, no direct current could pass from the cathode to the grid.
Thus a change of voltage applied to the grid, requiring very little power input to the grid, could make a change in the plate current and could lead to a much larger voltage change at the plate; the result was voltage and power amplification. In 1908, de Forest was granted a patent for such a three-electrode version of his original Audion for use as an electronic amplifier in radio communications. This eventually became known as the triode.
de Forest's original device was made with conventional vacuum technology. The vacuum was not a "hard vacuum" but rather left a very small amount of residual gas. The physics behind the device's operation was also not settled. The residual gas would cause a blue glow (visible ionization) when the plate voltage was high (above about 60 volts). In 1912, de Forest brought the Audion to Harold Arnold in AT&T's engineering department. Arnold recommended that AT&T purchase the patent, and AT&T followed his recommendation. Arnold developed high-vacuum tubes which were tested in the summer of 1913 on AT&T's long-distance network. The high-vacuum tubes could operate at high plate voltages without a blue glow.
Finnish inventor Eric Tigerstedt significantly improved on the original triode design in 1914, while working on his sound-on-film process in Berlin, Germany. Tigerstedt's innovation was to make the electrodes concentric cylinders with the cathode at the centre, thus greatly increasing the collection of emitted electrons at the anode.
Irving Langmuir at the General Electric research laboratory (Schenectady, New York) had improved Wolfgang Gaede's high-vacuum diffusion pump and used it to settle the question of thermionic emission and conduction in a vacuum. Consequently, General Electric started producing hard vacuum triodes (which were branded Pliotrons) in 1915. Langmuir patented the hard vacuum triode, but de Forest and AT&T successfully asserted priority and invalidated the patent.
Pliotrons were closely followed by the French type 'TM' and later the English type 'R', which were in widespread use by the Allied military by 1916. Historically, vacuum levels in production vacuum tubes typically ranged from 10 µPa down to 10 nPa (roughly one ten-billionth down to one ten-trillionth of standard atmospheric pressure).
The triode and its derivatives (tetrodes and pentodes) are transconductance devices, in which the controlling signal applied to the grid is a voltage, and the resulting amplified signal appearing at the anode is a current. Compare this to the behavior of the bipolar junction transistor, in which the controlling signal is a current and the output is also a current.
For vacuum tubes, transconductance or mutual conductance (gm) is defined as the change in the plate (anode) to cathode current divided by the corresponding change in the grid to cathode voltage, with a constant plate (anode) to cathode voltage. Typical values of gm for a small-signal vacuum tube are 1 to 10 millisiemens. It is one of the three 'constants' of a vacuum tube, the other two being its gain μ and plate resistance rp (also written ra). The Van der Bijl equation defines their relationship as μ = gm × rp.
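As a worked illustration of this relationship (the figures are rounded values typical of published data for a high-mu small-signal triode such as the 12AX7 discussed later, not values taken from this article): a tube with gm ≈ 1.6 mA/V (1.6 mS) and rp ≈ 62.5 kΩ has an amplification factor of μ = gm × rp ≈ 0.0016 S × 62,500 Ω = 100, consistent with the figure of around 100 quoted below for typical triode amplification factors.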
The non-linear operating characteristic of the triode caused early tube audio amplifiers to exhibit harmonic distortion at low volumes. Plotting plate current as a function of applied grid voltage, it was seen that there was a range of grid voltages for which the transfer characteristics were approximately linear.
To use this range, a negative bias voltage had to be applied to the grid to position the DC operating point in the linear region. This was called the idle condition, and the plate current at this point the "idle current". The controlling voltage was superimposed onto the bias voltage, resulting in a linear variation of plate current in response to positive and negative variation of the input voltage around that point.
This concept is called grid bias. Many early radio sets had a third battery called the "C battery" (unrelated to the present-day C cell, for which the letter denotes its size and shape). The C battery's positive terminal was connected to the cathode of the tubes (or "ground" in most circuits) and whose negative terminal supplied this bias voltage to the grids of the tubes.
Later circuits, after tubes were made with heaters isolated from their cathodes, used cathode biasing, avoiding the need for a separate negative power supply. For cathode biasing, a relatively low-value resistor is connected between the cathode and ground. This makes the cathode positive with respect to the grid, which is at ground potential for DC.
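A minimal worked sketch of cathode biasing (the operating point is hypothetical, chosen only to show the arithmetic): if a small-signal stage is to idle at about 1 mA of cathode current with the grid 1.5 V negative with respect to the cathode, Ohm's law gives the required cathode resistor as Rk = 1.5 V / 1 mA = 1.5 kΩ. The cathode then sits about 1.5 V above ground, while the grid, returned to ground through its grid resistor, is effectively 1.5 V negative with respect to the cathode. A bypass capacitor is often placed across Rk so that the bias is set by the DC cathode current while the stage's gain is not reduced at signal frequencies.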
However, C batteries continued to be included in some equipment even when the "A" and "B" batteries had been replaced by power from the AC mains. That was possible because there was essentially no current draw on these batteries; they could thus last for many years (often longer than all the tubes) without requiring replacement.
When triodes were first used in radio transmitters and receivers, it was found that tuned amplification stages had a tendency to oscillate unless their gain was very limited. This was due to the parasitic capacitance between the plate (the amplifier's output) and the control grid (the amplifier's input), known as the Miller capacitance.
Eventually the technique of neutralization was developed whereby the RF transformer connected to the plate (anode) would include an additional winding in the opposite phase. This winding would be connected back to the grid through a small capacitor, and when properly adjusted would cancel the Miller capacitance. This technique was employed and led to the success of the Neutrodyne radio during the 1920s.
However, neutralization required careful adjustment and proved unsatisfactory when used over a wide range of frequencies.
Tetrodes and pentodes
To combat the stability problems of the triode as a radio frequency amplifier due to grid-to-plate capacitance, the physicist Walter H. Schottky invented the tetrode or screen grid tube in 1919. He showed that the addition of an electrostatic shield between the control grid and the plate could solve the problem. This design was refined by Hull and Williams. The added grid became known as the screen grid or shield grid. The screen grid is operated at a positive voltage significantly less than the plate voltage and it is bypassed to ground with a capacitor of low impedance at the frequencies to be amplified.
This arrangement substantially decouples the plate and the control grid, eliminating the need for neutralizing circuitry at medium wave broadcast frequencies. The screen grid also largely reduces the influence of the plate voltage on the space charge near the cathode, permitting the tetrode to produce greater voltage gain than the triode in amplifier circuits. While the amplification factors of typical triodes commonly range from below ten to around 100, tetrode amplification factors of 500 are common. Consequently, higher voltage gains from a single tube amplification stage became possible, reducing the number of tubes required. Screen grid tubes were put on the market in late 1927.
However, the useful region of operation of the screen grid tube as an amplifier was limited to plate voltages greater than the screen grid voltage, due to secondary emission from the plate. In any tube, electrons strike the plate with sufficient energy to cause the emission of electrons from its surface. In a triode this secondary emission of electrons is not important since they are simply re-captured by the plate. But in a tetrode they can be captured by the screen grid since it is also at a positive voltage, robbing them from the plate current and reducing the amplification of the tube. Since secondary electrons can outnumber the primary electrons over a certain range of plate voltages, the plate current can decrease with increasing plate voltage. This is the dynatron region or tetrode kink and is an example of negative resistance which can itself cause instability. Another undesirable consequence of secondary emission is that screen current is increased, which may cause the screen to exceed its power rating.
The otherwise undesirable negative resistance region of the plate characteristic was exploited with the dynatron oscillator circuit to produce a simple oscillator only requiring connection of the plate to a resonant LC circuit to oscillate. The dynatron oscillator operated on the same principle of negative resistance as the tunnel diode oscillator many years later.
The dynatron region of the screen grid tube was eliminated by adding a grid between the screen grid and the plate to create the pentode. The suppressor grid of the pentode was usually connected to the cathode and its negative voltage relative to the anode repelled secondary electrons so that they would be collected by the anode instead of the screen grid. The term pentode means the tube has five electrodes. The pentode was invented in 1926 by Bernard D. H. Tellegen and became generally favored over the simple tetrode. Pentodes are made in two classes: those with the suppressor grid wired internally to the cathode (e.g. EL84/6BQ5) and those with the suppressor grid wired to a separate pin for user access (e.g. 803, 837). An alternative solution for power applications is the beam tetrode or beam power tube, discussed below.
Multifunction and multisection tubes
Superheterodyne receivers require a local oscillator and mixer, combined in the function of a single pentagrid converter tube. Various alternatives such as using a combination of a triode with a hexode and even an octode have been used for this purpose. The additional grids include control grids (at a low potential) and screen grids (at a high voltage). Many designs use such a screen grid as an additional anode to provide feedback for the oscillator function, whose current adds to that of the incoming radio frequency signal. The pentagrid converter thus became widely used in AM receivers, including the miniature tube version of the "All American Five". Octodes, such as the 7A8, were rarely used in the United States, but much more common in Europe, particularly in battery operated radios where the lower power consumption was an advantage.
To further reduce the cost and complexity of radio equipment, two separate structures (triode and pentode for instance) can be combined in the bulb of a single multisection tube. An early example is the Loewe 3NF. This 1920s device has three triodes in a single glass envelope together with all the fixed capacitors and resistors required to make a complete radio receiver. As the Loewe set had only one tube socket, it was able to substantially undercut the competition, since, in Germany, state tax was levied by the number of sockets. However, reliability was compromised, and production costs for the tube were much greater. In a sense, these were akin to integrated circuits. In the United States, Cleartron briefly produced the "Multivalve" triple triode for use in the Emerson Baby Grand receiver. This Emerson set also has a single tube socket, but because it uses a four-pin base, the additional element connections are made on a "mezzanine" platform at the top of the tube base.
By 1940 multisection tubes had become commonplace. There were constraints, however, due to patents and other licensing considerations (see British Valve Association). Constraints due to the number of external pins (leads) often forced the functions to share some of those external connections such as their cathode connections (in addition to the heater connection). The RCA Type 55 is a double diode triode used as a detector, automatic gain control rectifier and audio preamplifier in early AC powered radios. These sets often include the 53 Dual Triode Audio Output. Another early type of multi-section tube, the 6SN7, is a "dual triode" which performs the functions of two triode tubes while taking up half as much space and costing less.
The 12AX7 is a dual "high mu" (high voltage gain) triode in a miniature enclosure, and became widely used in audio signal amplifiers, instruments, and guitar amplifiers.
The introduction of the miniature tube base (see below) which can have 9 pins, more than previously available, allowed other multi-section tubes to be introduced, such as the 6GH8/ECF82 triode-pentode, quite popular in television receivers. The desire to include even more functions in one envelope resulted in the General Electric Compactron which has 12 pins. A typical example, the 6AG11, contains two triodes and two diodes.
Some otherwise conventional tubes do not fall into standard categories; the 6AR8, 6JH8 and 6ME8 have several common grids, followed by a pair of beam deflection electrodes which deflected the current towards either of two anodes. They were sometimes known as the 'sheet beam' tubes and used in some color TV sets for color demodulation. The similar 7360 was popular as a balanced SSB (de)modulator.
Beam power tubes
A beam power tube forms the electron stream from the cathode into multiple partially collimated beams to produce a low potential space charge region between the anode and screen grid to return anode secondary emission electrons to the anode when the anode potential is less than that of the screen grid. Formation of beams also reduces screen grid current. In some cylindrically symmetrical beam power tubes, the cathode is formed of narrow strips of emitting material that are aligned with the apertures of the control grid, reducing control grid current. This design helps to overcome some of the practical barriers to designing high-power, high-efficiency power tubes.
Manufacturer's data sheets often use the terms beam pentode or beam power pentode instead of beam power tube, and use a pentode graphic symbol instead of a graphic symbol showing beam forming plates.
Beam power tubes offer the advantages of a longer load line, less screen current, higher transconductance and lower third harmonic distortion than comparable power pentodes. Beam power tubes can be connected as triodes for improved audio tonal quality but in triode mode deliver significantly reduced power output.
Gas-filled tubes
Gas-filled tubes such as discharge tubes and cold cathode tubes are not hard vacuum tubes, though they are always filled with gas at less than sea-level atmospheric pressure. Types such as the voltage-regulator tube and thyratron resemble hard vacuum tubes and fit in sockets designed for vacuum tubes. Their distinctive orange, red, or purple glow during operation indicates the presence of gas; electrons flowing in a vacuum do not produce light within that region. These types may still be referred to as "electron tubes" as they do perform electronic functions. High-power rectifiers use mercury vapor to achieve a lower forward voltage drop than high-vacuum tubes.
Miniature tubes
Early tubes used a metal or glass envelope atop an insulating bakelite base. In 1938 a technique was developed to use an all-glass construction with the pins fused in the glass base of the envelope. This was used in the design of a much smaller tube outline, known as the miniature tube, having seven or nine pins. Making tubes smaller reduced the voltage where they could safely operate, and also reduced the power dissipation of the filament. Miniature tubes became predominant in consumer applications such as radio receivers and hi-fi amplifiers. However, the larger older styles continued to be used especially as higher-power rectifiers, in higher-power audio output stages and as transmitting tubes.
Sub-miniature tubes
Sub-miniature tubes with a size roughly that of half a cigarette were used in consumer applications such as hearing-aid amplifiers. These tubes did not have pins plugging into a socket but were soldered in place. The "acorn tube" (named for its shape) was also very small, as was the metal-cased RCA nuvistor from 1959, about the size of a thimble. The nuvistor was developed to compete with the early transistors and operated at higher frequencies than those early transistors could. The small size supported especially high-frequency operation; nuvistors were used in aircraft radio transceivers, UHF television tuners, and some hi-fi FM radio tuners (Sansui 500A) until they were replaced by high-frequency-capable transistors.
Improvements in construction and performance
The earliest vacuum tubes strongly resembled incandescent light bulbs and were made by lamp manufacturers, who had the equipment needed to manufacture glass envelopes and the vacuum pumps required to evacuate the enclosures. De Forest used Heinrich Geissler's mercury displacement pump, which left behind a partial vacuum. The development of the diffusion pump in 1915 and its improvement by Irving Langmuir led to the development of high-vacuum tubes. After World War I, specialized manufacturers using more economical construction methods were set up to fill the growing demand for broadcast receivers. Bare tungsten filaments operated at a temperature of around 2200 °C. The development of oxide-coated filaments in the mid-1920s reduced filament operating temperature to a dull red heat (around 700 °C), which in turn reduced thermal distortion of the tube structure and allowed closer spacing of tube elements. This in turn improved tube gain, since the gain of a triode is inversely proportional to the spacing between grid and cathode. Bare tungsten filaments remain in use in small transmitting tubes but are brittle and tend to fracture if handled roughly—e.g. in the postal services. These tubes are best suited to stationary equipment where impact and vibration are not present.
Indirectly heated cathodes
The desire to power electronic equipment using AC mains power faced a difficulty with respect to the powering of the tubes' filaments, as these were also the cathode of each tube. Powering the filaments directly from a power transformer introduced mains-frequency (50 or 60 Hz) hum into audio stages. The invention of the "equipotential cathode" reduced this problem, with the filaments being powered by a balanced AC power transformer winding having a grounded center tap.
A superior solution, and one which allowed each cathode to "float" at a different voltage, was that of the indirectly heated cathode: a cylinder of oxide-coated nickel acted as an electron-emitting cathode and was electrically isolated from the filament inside it. Indirectly heated cathodes enable the cathode circuit to be separated from the heater circuit. The filament, no longer electrically connected to the tube's electrodes, became simply known as a "heater", and could as well be powered by AC without any introduction of hum. In the 1930s, indirectly heated cathode tubes became widespread in equipment using AC power. Directly heated cathode tubes continued to be widely used in battery-powered equipment as their filaments required considerably less power than the heaters required with indirectly heated cathodes.
Tubes designed for high gain audio applications may have twisted heater wires to cancel out stray electric fields, fields that could induce objectionable hum into the program material.
Heaters may be energized with either alternating current (AC) or direct current (DC). DC is often used where low hum is required.
Use in electronic computers
Vacuum tubes used as switches made electronic computing possible for the first time, but the cost and relatively short mean time to failure of tubes were limiting factors. "The common wisdom was that valves—which, like light bulbs, contained a hot glowing filament—could never be used satisfactorily in large numbers, for they were unreliable, and in a large installation too many would fail in too short a time". Tommy Flowers, who later designed Colossus, "discovered that, so long as valves were switched on and left on, they could operate reliably for very long periods, especially if their 'heaters' were run on a reduced current". In 1934 Flowers built a successful experimental installation using over 3,000 tubes in small independent modules; when a tube failed, it was possible to switch off one module and keep the others going, thereby reducing the risk of another tube failure being caused; this installation was accepted by the Post Office (which operated telephone exchanges). Flowers was also a pioneer of using tubes as very fast (compared to electromechanical devices) electronic switches. Later work confirmed that tube unreliability was not as serious an issue as generally believed; the 1946 ENIAC, with over 17,000 tubes, had a tube failure (which took 15 minutes to locate) on average every two days. The quality of the tubes was a factor, and the diversion of skilled people during the Second World War lowered the general quality of tubes. During the war Colossus was instrumental in breaking German codes. After the war, development continued with tube-based computers, including the military computers ENIAC and Whirlwind, the Ferranti Mark 1 (one of the first commercially available electronic computers), and UNIVAC I, also available commercially.
Advances using subminiature tubes included the Jaincomp series of machines produced by the Jacobs Instrument Company of Bethesda, Maryland. Models such as its Jaincomp-B employed just 300 such tubes in a desktop-sized unit that offered performance to rival many of the then room-sized machines.
Colossus
Flowers's Colossus and its successor Colossus Mk2 were built by the British during World War II to substantially speed up the task of breaking the German high level Lorenz encryption. Using about 1,500 vacuum tubes (2,400 for Mk2), Colossus replaced an earlier machine based on relay and switch logic (the Heath Robinson). Colossus was able to break in a matter of hours messages that had previously taken several weeks; it was also much more reliable. Colossus was the first use of vacuum tubes working in concert on such a large scale for a single machine.
Whirlwind and "special-quality" tubes
To meet the reliability requirements of the 1951 US digital computer Whirlwind, "special-quality" tubes with extended life, and a long-lasting cathode in particular, were produced. The problem of short lifetime was traced largely to evaporation of silicon, used in the tungsten alloy to make the heater wire easier to draw. The silicon forms barium orthosilicate at the interface between the nickel sleeve and the cathode barium oxide coating. This "cathode interface" is a high-resistance layer (with some parallel capacitance) which greatly reduces the cathode current when the tube is switched into conduction mode. Elimination of silicon from the heater wire alloy (and more frequent replacement of the wire drawing dies) allowed the production of tubes that were reliable enough for the Whirlwind project. High-purity nickel tubing and cathode coatings free of materials such as silicates and aluminum that can reduce emissivity also contribute to long cathode life.
The first such "computer tube" was Sylvania's 7AK7 pentode of 1948 (these replaced the 7AD7, which was supposed to be better quality than the standard 6AG7 but proved too unreliable). Computers were the first tube devices to run tubes at cutoff (enough negative grid voltage to make them cease conduction) for quite-extended periods of time. Running in cutoff with the heater on accelerates cathode poisoning and the output current of the tube will be greatly reduced when switched into conduction mode. The 7AK7 tubes improved the cathode poisoning problem, but that alone was insufficient to achieve the required reliability. Further measures included switching off the heater voltage when the tubes were not required to conduct for extended periods, turning on and off the heater voltage with a slow ramp to avoid thermal shock on the heater element, and stress testing the tubes during offline maintenance periods to bring on early failure of weak units.
The tubes developed for Whirlwind were later used in the giant SAGE air-defense computer system. By the late 1950s, it was routine for special-quality small-signal tubes to last for hundreds of thousands of hours if operated conservatively. This increased reliability also made mid-cable amplifiers in submarine cables possible.
Heat generation and cooling
A considerable amount of heat is produced when tubes operate, from both the filament (heater) and the stream of electrons bombarding the plate. In power amplifiers, the heat from electron bombardment of the plate is greater than that from cathode heating. A few types of tube permit operation with the anodes at a dull red heat; in other types, red heat indicates severe overload.
The requirements for heat removal can significantly change the appearance of high-power vacuum tubes. High power audio amplifiers and rectifiers required larger envelopes to dissipate heat. Transmitting tubes could be much larger still.
Heat escapes the device by black-body radiation from the anode (plate) as infrared radiation, and by convection of air over the tube envelope. Convection is not possible inside most tubes since the anode is surrounded by vacuum.
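As a rough illustration of the radiation path just described, the Stefan–Boltzmann law gives the order of magnitude of the power a hot anode sheds. This is only a sketch; the anode area, emissivity and temperatures below are assumed values, not data for any particular tube.

```python
# Rough estimate of power radiated by a hot anode (Stefan-Boltzmann law).
# Area, emissivity and temperatures are illustrative assumptions only.
SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(area_m2, emissivity, t_anode_k, t_envelope_k):
    """Net power radiated from the anode to its cooler surroundings, in watts."""
    return emissivity * SIGMA * area_m2 * (t_anode_k**4 - t_envelope_k**4)

# A blackened anode of ~20 cm^2 at a dull red heat (~900 K), radiating
# toward a glass envelope assumed to sit at ~400 K.
p = radiated_power(area_m2=20e-4, emissivity=0.9, t_anode_k=900.0, t_envelope_k=400.0)
print(f"Approximate radiated power: {p:.0f} W")   # on the order of 60 W
```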
Tubes which generate relatively little heat, such as the 1.4-volt filament directly heated tubes designed for use in battery-powered equipment, often have shiny metal anodes. 1T4, 1R5 and 1A7 are examples. Gas-filled tubes such as thyratrons may also use a shiny metal anode since the gas present inside the tube allows for heat convection from the anode to the glass enclosure.
The anode is often treated to make its surface emit more infrared energy. High-power amplifier tubes are designed with external anodes that can be cooled by convection, forced air or circulating water. The water-cooled 80 kg, 1.25 MW 8974 is among the largest commercial tubes available today.
In a water-cooled tube, the anode voltage appears directly on the cooling water surface, thus requiring the water to be an electrical insulator to prevent high voltage leakage through the cooling water to the radiator system. Water as usually supplied has ions that conduct electricity; deionized water, a good insulator, is required. Such systems usually have a built-in water-conductance monitor which will shut down the high-tension supply if the conductance becomes too high.
The screen grid may also generate considerable heat. Limits to screen grid dissipation, in addition to plate dissipation, are listed for power devices. If these are exceeded then tube failure is likely.
Tube packages
Most modern tubes have glass envelopes, but metal, fused quartz (silica) and ceramic have also been used. A first version of the 6L6 used a metal envelope sealed with glass beads, while a glass disk fused to the metal was used in later versions. Metal and ceramic are used almost exclusively for power tubes above 2 kW dissipation. The nuvistor was a modern receiving tube using a very small metal and ceramic package.
The internal elements of tubes have always been connected to external circuitry via pins at their base which plug into a socket. Subminiature tubes were produced using wire leads rather than sockets; however, these were restricted to rather specialized applications. In addition to the connections at the base of the tube, many early triodes connected the grid using a metal cap at the top of the tube; this reduces stray capacitance between the grid and the plate leads. Tube caps were also used for the plate (anode) connection, particularly in transmitting tubes and tubes using a very high plate voltage.
High-power tubes such as transmitting tubes have packages designed more to enhance heat transfer. In some tubes, the metal envelope is also the anode. The 4CX1000A is an external anode tube of this sort. Air is blown through an array of fins attached to the anode, thus cooling it. Power tubes using this cooling scheme are available up to 150 kW dissipation. Above that level, water or water-vapor cooling are used. The highest-power tube currently available is the Eimac , a forced water-cooled power tetrode capable of dissipating 2.5 megawatts. By comparison, the largest power transistor can only dissipate about 1 kilowatt.
Names
The generic name "[thermionic] valve" used in the UK derives from the unidirectional current flow allowed by the earliest device, the thermionic diode emitting electrons from a heated filament, by analogy with a non-return valve in a water pipe. The US names "vacuum tube", "electron tube", and "thermionic tube" all simply describe a tubular envelope which has been evacuated ("vacuum"), has a heater and controls electron flow.
In many cases, manufacturers and the military gave tubes designations that said nothing about their purpose (e.g., 1614). In the early days some manufacturers used proprietary names which might convey some information, but only about their products; the KT66 and KT88 were "kinkless tetrodes". Later, consumer tubes were given names that conveyed some information, with the same name often used generically by several manufacturers. In the US, Radio Electronics Television Manufacturers' Association (RETMA) designations comprise a number, followed by one or two letters, and a number. The first number is the (rounded) heater voltage; the letters designate a particular tube but say nothing about its structure; and the final number is the total number of electrodes (without distinguishing between, say, a tube with many electrodes, or two sets of electrodes in a single envelope—a double triode, for example). For example, the 12AX7 is a double triode (two sets of three electrodes plus heater) with a 12.6V heater (which, as it happens, can also be connected to run from 6.3V). The "AX" has no meaning other than to designate this particular tube according to its characteristics. Similar, but not identical, tubes are the 12AD7, 12AE7...12AT7, 12AU7, 12AV7, 12AW7 (rare!), 12AY7, and the 12AZ7.
A system widely used in Europe known as the Mullard–Philips tube designation, also extended to transistors, uses a letter, followed by one or more further letters, and a number. The type designator specifies the heater voltage or current (one letter), the functions of all sections of the tube (one letter per section), the socket type (first digit), and the particular tube (remaining digits). For example, the ECC83 (equivalent to the 12AX7) is a 6.3V (E) double triode (CC) with a miniature base (8). In this system special-quality tubes (e.g., for long-life computer use) are indicated by moving the number immediately after the first letter: the E83CC is a special-quality equivalent of the ECC83, the E55L a power pentode with no consumer equivalent.
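As an illustration of the two naming schemes just described, the sketch below splits a designation into its parts. The helper functions are hypothetical and the letter-to-meaning tables are deliberately partial; they are written only to mirror the structure of the RETMA and Mullard–Philips systems, not to serve as a complete catalogue.

```python
import re

def parse_retma(name: str):
    """Split a US RETMA designation such as '12AX7' into its parts."""
    m = re.fullmatch(r"(\d+)([A-Z]+)(\d+)", name)
    if not m:
        raise ValueError(f"Not a RETMA-style designation: {name}")
    heater_volts, ident, electrodes = m.groups()
    return {
        "heater_voltage_approx": int(heater_volts),   # rounded heater voltage
        "type_letters": ident,                        # arbitrary identifier
        "electrode_count": int(electrodes),           # total electrode count
    }

def parse_mullard_philips(name: str):
    """Split a European designation such as 'ECC83' (equivalent to the 12AX7)."""
    m = re.fullmatch(r"([A-Z])([A-Z]+)(\d+)", name)
    if not m:
        raise ValueError(f"Not a Mullard-Philips-style designation: {name}")
    heater, sections, rest = m.groups()
    heater_map = {"E": "6.3 V heater", "D": "1.4 V battery heater"}          # partial, illustrative
    section_map = {"C": "triode", "F": "pentode", "L": "output pentode",
                   "Y": "half-wave rectifier"}                               # partial, illustrative
    return {
        "heater": heater_map.get(heater, "not covered in this sketch"),
        "sections": [section_map.get(c, "not covered in this sketch") for c in sections],
        "base_and_serial": rest,   # first digit encodes the base, e.g. 8 = miniature
    }

print(parse_retma("12AX7"))
print(parse_mullard_philips("ECC83"))
```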
Special-purpose tubes
Some special-purpose tubes are constructed with particular gases in the envelope. For instance, voltage-regulator tubes contain various inert gases such as argon, helium or neon, which will ionize at predictable voltages. The thyratron is a special-purpose tube filled with low-pressure gas or mercury vapor. Like vacuum tubes, it contains a hot cathode and an anode, but also a control electrode which behaves somewhat like the grid of a triode. When the control electrode starts conduction, the gas ionizes, after which the control electrode can no longer stop the current; the tube "latches" into conduction. Removing anode (plate) voltage lets the gas de-ionize, restoring its non-conductive state.
Some thyratrons can carry large currents for their physical size. One example is the miniature type 2D21, often seen in 1950s jukeboxes as control switches for relays. A cold-cathode version of the thyratron, which uses a pool of mercury for its cathode, is called an ignitron; some can switch thousands of amperes. Thyratrons containing hydrogen have a very consistent time delay between their turn-on pulse and full conduction; they behave much like modern silicon-controlled rectifiers, also called thyristors due to their functional similarity to thyratrons. Hydrogen thyratrons have long been used in radar transmitters.
A specialized tube is the krytron, which is used for rapid high-voltage switching. Krytrons are used to initiate the detonations used to set off a nuclear weapon; krytrons are heavily controlled at an international level.
X-ray tubes are used in medical imaging among other uses. X-ray tubes used for continuous-duty operation in fluoroscopy and CT imaging equipment may use a focused cathode and a rotating anode to dissipate the large amounts of heat thereby generated. These are housed in an oil-filled aluminum housing to provide cooling.
The photomultiplier tube is an extremely sensitive detector of light, which uses the photoelectric effect and secondary emission, rather than thermionic emission, to generate and amplify electrical signals. Nuclear medicine imaging equipment and liquid scintillation counters use photomultiplier tube arrays to detect low-intensity scintillation due to ionizing radiation.
The ignitron tube was used in resistance welding equipment in the early 1970s. The ignitron had a cathode, anode and an igniter. The tube base was filled with mercury and the tube was used as a very high current switch. A large potential was placed between the anode and cathode of the tube, but the tube was only permitted to conduct when the igniter in contact with the mercury carried enough current to vaporize the mercury and complete the circuit. Because this was used in resistance welding, there were two ignitrons for the two phases of an AC circuit. Because of the mercury at the bottom of the tube, they were extremely difficult to ship. These tubes were eventually replaced by SCRs (silicon controlled rectifiers).
Powering the tube
Batteries
Batteries provided the voltages required by tubes in early radio sets. Three different voltages were generally required, using three different batteries designated as the A, B, and C battery. The "A" battery or LT (low-tension) battery provided the filament voltage. Tube heaters were designed for single, double or triple-cell lead-acid batteries, giving nominal heater voltages of 2 V, 4 V or 6 V. In portable radios, dry batteries were sometimes used with 1.5 or 1 V heaters. Reducing filament consumption improved the life span of batteries. By 1955, towards the end of the tube era, tubes using only 50 mA, and some as little as 10 mA, for their heaters had been developed.
The high voltage applied to the anode (plate) was provided by the "B" battery or the HT (high-tension) supply or battery. These were generally of dry cell construction and typically came in 22.5-, 45-, 67.5-, 90-, 120- or 135-volt versions. After the use of B-batteries was phased out and rectified line-power was employed to produce the high voltage needed by tubes' plates, the term "B+" persisted in the US when referring to the high voltage source. Most of the rest of the English speaking world refers to this supply as just HT (high tension).
Early sets used a grid bias battery or "C" battery which was connected to provide a negative voltage. Since no current flows through a tube's grid connection, these batteries had no current drain and lasted the longest, usually limited by their own shelf life. The supply from the grid bias battery was rarely, if ever, disconnected when the radio was otherwise switched off. Even after AC power supplies became commonplace, some radio sets continued to be built with C batteries, as they would almost never need replacing. However more modern circuits were designed using cathode biasing, eliminating the need for a third power supply voltage; this became practical with tubes using indirect heating of the cathode along with the development of resistor/capacitor coupling which replaced earlier interstage transformers.
The "C battery" for bias is a designation having no relation to the "C cell" battery size.
AC power
Battery replacement was a major operating cost for early radio receiver users. The development of the battery eliminator, and, in 1925, batteryless receivers operated by household power, reduced operating costs and contributed to the growing popularity of radio. A power supply using a transformer with several windings, one or more rectifiers (which may themselves be vacuum tubes), and large filter capacitors provided the required direct current voltages from the alternating current source.
As a cost reduction measure, especially in high-volume consumer receivers, all the tube heaters could be connected in series across the AC supply using heaters requiring the same current and with a similar warm-up time. In one such design, a tap on the tube heater string supplied the 6 volts needed for the dial light. By deriving the high voltage from a half-wave rectifier directly connected to the AC mains, the heavy and costly power transformer was eliminated. This also allowed such receivers to operate on direct current, a so-called AC/DC receiver design. Many different US consumer AM radio manufacturers of the era used a virtually identical circuit, given the nickname All American Five.
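The series-heater arithmetic can be checked by summing the heater voltages encoded in the tube names. The five-tube complement below is an assumed, commonly cited line-up used only for illustration; the point is simply that the total comes close to the nominal line voltage.

```python
# Series heater string of an "All American Five" style receiver.
# The tube complement is an illustrative assumption; the heater voltages
# (encoded in the RETMA names) sum to roughly the AC line voltage.
heaters = {
    "12BE6": 12.6,   # converter
    "12BA6": 12.6,   # IF amplifier
    "12AV6": 12.6,   # detector / first audio
    "50C5": 50.0,    # audio output
    "35W4": 35.0,    # rectifier
}
total = sum(heaters.values())
print(f"Total heater string voltage: {total:.1f} V")   # ~122.8 V, close to a 117-120 V line
```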
Where the mains voltage was in the 100–120 V range, this limited voltage proved suitable only for low-power receivers. Television receivers either required a transformer or could use a voltage doubling circuit. Where 230 V nominal mains voltage was used, television receivers as well could dispense with a power transformer.
Transformer-less power supplies required safety precautions in their design to limit the shock hazard to users, such as electrically insulated cabinets and an interlock tying the power cord to the cabinet back, so the line cord was necessarily disconnected if the user or service person opened the cabinet. A cheater cord was a power cord ending in the special socket used by the safety interlock; servicers could then power the device with the hazardous voltages exposed.
To avoid the warm-up delay, "instant on" television receivers passed a small heating current through their tubes even when the set was nominally off. At switch on, full heating current was provided and the set would play almost immediately.
Reliability
One reliability problem of tubes with oxide cathodes is the possibility that the cathode may slowly become "poisoned" by gas molecules from other elements in the tube, which reduce its ability to emit electrons. Trapped gases or slow gas leaks can also damage the cathode or cause plate (anode) current runaway due to ionization of free gas molecules. Vacuum hardness and proper selection of construction materials are the major influences on tube lifetime. Depending on the material, temperature and construction, the surface material of the cathode may also diffuse onto other elements. The resistive heaters that heat the cathodes may break in a manner similar to incandescent lamp filaments, but rarely do, since they operate at much lower temperatures than lamps.
The heater's failure mode is typically a stress-related fracture of the tungsten wire or at a weld point and generally occurs after accruing many thermal (power on-off) cycles. Tungsten wire has a very low resistance when at room temperature. A negative temperature coefficient device, such as a thermistor, may be incorporated in the equipment's heater supply or a ramp-up circuit may be employed to allow the heater or filaments to reach operating temperature more gradually than if powered-up in a step-function. Low-cost radios had tubes with heaters connected in series, with a total voltage equal to that of the line (mains). Some receivers made before World War II had series-string heaters with total voltage less than that of the mains. Some had a resistance wire running the length of the power cord to drop the voltage to the tubes. Others had series resistors made like regular tubes; they were called ballast tubes.
Following World War II, tubes intended to be used in series heater strings were redesigned to all have the same ("controlled") warm-up time. Earlier designs had quite-different thermal time constants. The audio output stage, for instance, had a larger cathode and warmed up more slowly than lower-powered tubes. The result was that heaters that warmed up faster also temporarily had higher resistance, because of their positive temperature coefficient. This disproportionate resistance caused them to temporarily operate with heater voltages well above their ratings, and shortened their life.
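A small worked example of the warm-up problem, assuming the rough rule of thumb that a tungsten heater's cold resistance is about one tenth of its hot resistance; the supply voltage and string resistance used here are likewise assumed values.

```python
# Illustrative estimate of heater inrush current in a series string.
# Assumes cold resistance is roughly one tenth of hot resistance and a
# 120 V supply across the whole string; both are assumptions.
line_voltage = 120.0
hot_string_resistance = 800.0          # ohms, assumed total at operating temperature
cold_fraction = 0.1                    # cold resistance / hot resistance, approximate

i_hot = line_voltage / hot_string_resistance
i_cold = line_voltage / (hot_string_resistance * cold_fraction)
print(f"Steady-state heater current: {i_hot * 1000:.0f} mA")       # 150 mA
print(f"Inrush with all heaters cold: {i_cold * 1000:.0f} mA, about {i_cold / i_hot:.0f}x")
```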
Another important reliability problem is caused by air leakage into the tube. Usually oxygen in the air reacts chemically with the hot filament or cathode, quickly ruining it. Designers therefore developed tube designs that sealed reliably, which is why most tubes were constructed of glass. Metal alloys (such as Cunife and Fernico) and glasses had been developed for light bulbs that expanded and contracted by similar amounts as temperature changed. These made it easy to construct an insulating envelope of glass, while passing connection wires through the glass to the electrodes.
When a vacuum tube is overloaded or operated past its design dissipation, its anode (plate) may glow red. In consumer equipment, a glowing plate is universally a sign of an overloaded tube. However, some large transmitting tubes are designed to operate with their anodes at red, orange, or in rare cases, white heat.
"Special quality" versions of standard tubes were often made, designed for improved performance in some respect, such as a longer life cathode, low noise construction, mechanical ruggedness via ruggedized filaments, low microphony, for applications where the tube will spend much of its time cut off, etc. The only way to know the particular features of a special quality part is by reading the datasheet. Names may reflect the standard name (12AU7==>12AU7A, its equivalent ECC82==>E82CC, etc.), or be absolutely anything (standard and special-quality equivalents of the same tube include 12AU7, ECC82, B329, CV491, E2163, E812CC, M8136, CV4003, 6067, VX7058, 5814A and 12AU7A).
The longest recorded valve life was earned by a Mazda AC/P pentode valve (serial No. 4418) in operation at the BBC's main Northern Ireland transmitter at Lisnagarvey. The valve was in service from 1935 until 1961 and had a recorded life of 232,592 hours. The BBC maintained meticulous records of their valves' lives with periodic returns to their central valve stores.
Vacuum
A vacuum tube needs an extremely high vacuum (or hard vacuum, from X-ray terminology) to avoid the consequences of generating positive ions within the tube. Residual gas atoms ionize when struck by an electron and can adversely affect the cathode, reducing emission. Larger amounts of residual gas can create a visible glow discharge between the tube electrodes and cause overheating of the electrodes, producing more gas, damaging the tube and possibly other components due to excess current. To avoid these effects, the residual pressure within the tube must be low enough that the mean free path of an electron is much longer than the size of the tube (so an electron is unlikely to strike a residual atom and very few ionized atoms will be present). Commercial vacuum tubes are evacuated at manufacture to about .
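The mean-free-path condition above can be checked with the standard kinetic-theory formula. The residual pressure, temperature and molecular diameter below are assumptions chosen only to show the scale involved.

```python
# Mean free path of residual gas molecules at an assumed residual pressure.
# Pressure and molecular diameter are illustrative assumptions.
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K

def mean_free_path(pressure_pa, temperature_k=300.0, molecule_diameter_m=3.7e-10):
    """Kinetic-theory mean free path: lambda = kT / (sqrt(2) * pi * d^2 * p)."""
    return K_B * temperature_k / (math.sqrt(2) * math.pi * molecule_diameter_m**2 * pressure_pa)

# At an assumed residual pressure of 1e-4 Pa (roughly 1e-6 torr):
lam = mean_free_path(1e-4)
print(f"Mean free path: {lam:.0f} m")   # tens of metres, far larger than any tube
```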
To prevent gases from compromising the tube's vacuum, modern tubes are constructed with getters, which are usually metals that oxidize quickly, barium being the most common. For glass tubes, while the tube envelope is being evacuated, the internal parts except the getter are heated by RF induction heating to evolve any remaining gas from the metal parts. The tube is then sealed and the getter trough or pan, for flash getters, is heated to a high temperature, again by radio frequency induction heating, which causes the getter material to vaporize and react with any residual gas. The vapor is deposited on the inside of the glass envelope, leaving a silver-colored metallic patch that continues to absorb small amounts of gas that may leak into the tube during its working life. Great care is taken with the valve design to ensure this material is not deposited on any of the working electrodes. If a tube develops a serious leak in the envelope, this deposit turns a white color as it reacts with atmospheric oxygen. Large transmitting and specialized tubes often use more exotic getter materials, such as zirconium. Early gettered tubes used phosphorus-based getters, and these tubes are easily identifiable, as the phosphorus leaves a characteristic orange or rainbow deposit on the glass. The use of phosphorus was short-lived and was quickly replaced by the superior barium getters. Unlike the barium getters, the phosphorus did not absorb any further gases once it had fired.
Getters act by chemically combining with residual or infiltrating gases, but are unable to counteract (non-reactive) inert gases. A known problem, mostly affecting valves with large envelopes such as cathode ray tubes and camera tubes such as iconoscopes, orthicons, and image orthicons, comes from helium infiltration. The effect appears as impaired or absent functioning, and as a diffuse glow along the electron stream inside the tube. This effect cannot be rectified (short of re-evacuation and resealing), and is responsible for working examples of such tubes becoming rarer and rarer. Unused ("New Old Stock") tubes can also exhibit inert gas infiltration, so there is no long-term guarantee of these tube types surviving into the future.
Transmitting tubes
Large transmitting tubes have carbonized tungsten filaments containing a small trace (1% to 2%) of thorium. An extremely thin (molecular) layer of thorium atoms forms on the outside of the wire's carbonized layer and, when heated, serves as an efficient source of electrons. The thorium slowly evaporates from the wire surface, while new thorium atoms diffuse to the surface to replace them. Such thoriated tungsten cathodes usually deliver lifetimes in the tens of thousands of hours. The end-of-life scenario for a thoriated-tungsten filament is when the carbonized layer has mostly been converted back into another form of tungsten carbide and emission begins to drop off rapidly; a complete loss of thorium has never been found to be a factor in the end-of-life of a tube with this type of emitter.
WAAY-TV in Huntsville, Alabama achieved 163,000 hours (18.6 years) of service from an Eimac external cavity klystron in the visual circuit of its transmitter; this is the highest documented service life for this type of tube.
It has been said that transmitters with vacuum tubes are better able to survive lightning strikes than transistor transmitters. While it was commonly believed that vacuum tubes were more efficient than solid-state circuits at RF power levels above approximately 20 kilowatts, this is no longer the case, especially in medium wave (AM broadcast) service, where solid-state transmitters at nearly all power levels have measurably higher efficiency. FM broadcast transmitters with solid-state power amplifiers up to approximately 15 kW also show better overall power efficiency than tube-based power amplifiers.
Receiving tubes
Cathodes in small "receiving" tubes are coated with a mixture of barium oxide and strontium oxide, sometimes with addition of calcium oxide or aluminium oxide. An electric heater is inserted into the cathode sleeve and insulated from it electrically by a coating of aluminum oxide. This complex construction causes barium and strontium atoms to diffuse to the surface of the cathode and emit electrons when heated to about 780 degrees Celsius.
Failure modes
Catastrophic failures
A catastrophic failure is one that suddenly makes the vacuum tube unusable. A crack in the glass envelope will allow air into the tube and destroy it. Cracks may result from stress in the glass, bent pins or impacts; tube sockets must allow for thermal expansion, to prevent stress in the glass at the pins. Stress may accumulate if a metal shield or other object presses on the tube envelope and causes differential heating of the glass. Glass may also be damaged by high-voltage arcing.
Tube heaters may also fail without warning, especially if exposed to overvoltage or as a result of manufacturing defects. Tube heaters do not normally fail by evaporation like lamp filaments, since they operate at much lower temperatures. The surge of inrush current when the heater is first energized causes stress in the heater; it can be avoided by warming the heaters slowly, gradually increasing the current with an NTC thermistor included in the circuit. Tubes intended for series-string operation of the heaters across the supply have a specified controlled warm-up time to avoid excess voltage on some heaters as others warm up. Directly heated filament-type cathodes, as used in battery-operated tubes or some rectifiers, may fail if the filament sags, causing internal arcing. Excess heater-to-cathode voltage in indirectly heated cathodes can break down the insulation between elements and destroy the heater.
Arcing between tube elements can destroy the tube. An arc can be caused by applying voltage to the anode (plate) before the cathode has come up to operating temperature, or by drawing excess current through a rectifier, which damages the emission coating. Arcs can also be initiated by any loose material inside the tube, or by excess screen voltage. An arc inside the tube allows gas to evolve from the tube materials, and may deposit conductive material on internal insulating spacers.
Tube rectifiers have limited current capability and exceeding ratings will eventually destroy a tube.
Degenerative failures
Degenerative failures are those caused by the slow deterioration of performance over time.
Overheating of internal parts, such as control grids or mica spacer insulators, can result in trapped gas escaping into the tube; this can reduce performance. A getter is used to absorb gases evolved during tube operation but has only a limited ability to combine with gas. Control of the envelope temperature prevents some types of gassing. A tube with an unusually high level of internal gas may exhibit a visible blue glow when plate voltage is applied. The getter (being a highly reactive metal) is effective against many atmospheric gases but has no (or very limited) chemical reactivity to inert gases such as helium. One progressive type of failure, especially with physically large envelopes such as those used by camera tubes and cathode-ray tubes, comes from helium infiltration. The exact mechanism is not clear: the metal-to-glass lead-in seals are one possible infiltration site.
Gas and ions within the tube contribute to grid current which can disturb operation of a vacuum-tube circuit. Another effect of overheating is the slow deposit of metallic vapors on internal spacers, resulting in inter-element leakage.
Tubes on standby for long periods, with heater voltage applied, may develop high cathode interface resistance and display poor emission characteristics. This effect occurred especially in pulse and digital circuits, where tubes had no plate current flowing for extended times. Tubes designed specifically for this mode of operation were made.
Cathode depletion is the loss of emission after thousands of hours of normal use. Sometimes emission can be restored for a time by raising heater voltage, either for a short time or a permanent increase of a few percent. Cathode depletion was uncommon in signal tubes but was a frequent cause of failure of monochrome television cathode-ray tubes. Usable life of this expensive component was sometimes extended by fitting a boost transformer to increase heater voltage.
Other failures
Vacuum tubes may develop defects in operation that make an individual tube unsuitable in a given device, although it may perform satisfactorily in another application. Microphonics refers to internal vibrations of tube elements which modulate the tube's signal in an undesirable way; sound or vibration pick-up may affect the signals, or even cause uncontrolled howling if a feedback path (with greater than unity gain) develops between a microphonic tube and, for example, a loudspeaker. Leakage current between AC heaters and the cathode may couple into the circuit, or electrons emitted directly from the ends of the heater may also inject hum into the signal. Leakage current due to internal contamination may also inject noise. Some of these effects make tubes unsuitable for small-signal audio use, although unobjectionable for other purposes. Selecting the best of a batch of nominally identical tubes for critical applications can produce better results.
Tube pins can develop non-conducting or high resistance surface films due to heat or dirt. Pins can be cleaned to restore conductance.
Testing
Vacuum tubes can be tested outside of their circuitry using a vacuum tube tester.
Other vacuum tube devices
Most small signal vacuum tube devices have been superseded by semiconductors, but some vacuum tube electronic devices are still in common use. The magnetron is the type of tube used in all microwave ovens. In spite of the advancing state of the art in power semiconductor technology, the vacuum tube still has reliability and cost advantages for high-frequency RF power generation.
Some tubes, such as magnetrons, traveling-wave tubes, Carcinotrons, and klystrons, combine magnetic and electrostatic effects. These are efficient (usually narrow-band) RF generators and still find use in radar, microwave ovens and industrial heating. Traveling-wave tubes (TWTs) are very good amplifiers and are even used in some communications satellites. High-powered klystron amplifier tubes can provide hundreds of kilowatts in the UHF range.
Cathode ray tubes
The cathode ray tube (CRT) is a vacuum tube used particularly for display purposes. Although there are still many televisions and computer monitors using cathode ray tubes, they are rapidly being replaced by flat panel displays whose quality has greatly improved even as their prices drop. This is also true of digital oscilloscopes (based on internal computers and analog-to-digital converters), although traditional analog scopes (dependent upon CRTs) continue to be produced, are economical, and preferred by many technicians. At one time many radios used "magic eye tubes", a specialized sort of CRT used in place of a meter movement to indicate signal strength or input level in a tape recorder. A modern indicator device, the vacuum fluorescent display (VFD) is also a sort of cathode ray tube.
The X-ray tube is a type of cathode ray tube that generates X-rays when high voltage electrons hit the anode.
Gyrotrons or vacuum masers, used to generate high-power millimeter band waves, are magnetic vacuum tubes in which a small relativistic effect, due to the high voltage, is used for bunching the electrons. Gyrotrons can generate very high powers (hundreds of kilowatts).
Free-electron lasers, used to generate high-power coherent light and even X-rays, are highly relativistic vacuum tubes driven by high-energy particle accelerators. Thus, these are sorts of cathode ray tubes.
Electron multipliers
A photomultiplier is a phototube whose sensitivity is greatly increased through the use of electron multiplication. This works on the principle of secondary emission, whereby a single electron emitted by the photocathode strikes a special sort of anode known as a dynode causing more electrons to be released from that dynode. Those electrons are accelerated toward another dynode at a higher voltage, releasing more secondary electrons; as many as 15 such stages provide a huge amplification. Despite great advances in solid-state photodetectors, the single-photon detection capability of photomultiplier tubes makes this vacuum tube device excel in certain applications. Such a tube can also be used for detection of ionizing radiation as an alternative to the Geiger–Müller tube (itself not an actual vacuum tube). Historically, the image orthicon TV camera tube widely used in television studios prior to the development of modern CCD arrays also used multistage electron multiplication.
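The multiplication compounds geometrically: with an average of δ secondary electrons per dynode and n dynodes, the overall gain is roughly δ raised to the power n. The values used below are illustrative assumptions.

```python
# Overall photomultiplier gain modeled as delta ** n_stages.
# delta (secondary electrons per incident electron) and the stage count are assumptions.
def pmt_gain(delta: float, n_stages: int) -> float:
    return delta ** n_stages

print(f"{pmt_gain(4, 12):.2e}")   # ~1.7e+07 for 12 dynodes at delta = 4
```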
For decades, electron-tube designers tried to augment amplifying tubes with electron multipliers in order to increase gain, but these suffered from short life because the material used for the dynodes "poisoned" the tube's hot cathode. (For instance, the interesting RCA 1630 secondary-emission tube was marketed, but did not last.) However, eventually, Philips of the Netherlands developed the EFP60 tube that had a satisfactory lifetime and was used in at least one product, a laboratory pulse generator. By that time, however, transistors were rapidly improving, making such developments superfluous.
One variant called a "channel electron multiplier" does not use individual dynodes but consists of a curved tube, such as a helix, coated on the inside with material with good secondary emission. One type had a funnel of sorts to capture the secondary electrons. The continuous dynode was resistive, and its ends were connected to enough voltage to create repeated cascades of electrons. The microchannel plate consists of an array of single stage electron multipliers over an image plane; several of these can then be stacked. This can be used, for instance, as an image intensifier in which the discrete channels substitute for focussing.
Tektronix made a high-performance wideband oscilloscope CRT with a channel electron multiplier plate behind the phosphor layer. This plate was a bundled array of a huge number of short individual c.e.m. tubes that accepted a low-current beam and intensified it to provide a display of practical brightness. (The electron optics of the wideband electron gun could not provide enough current to directly excite the phosphor.)
Vacuum tubes in the 21st century
Niche applications
Although vacuum tubes have been largely replaced by solid-state devices in most amplifying, switching, and rectifying applications, there are certain exceptions. In addition to the special functions noted above, tubes have some niche applications.
In general, vacuum tubes are much less susceptible than corresponding solid-state components to transient overvoltages, such as mains voltage surges or lightning, the electromagnetic pulse effect of nuclear explosions, or geomagnetic storms produced by giant solar flares. This property kept them in use for certain military applications long after more practical and less expensive solid-state technology was available for the same applications, as for example with the MiG-25.
Vacuum tubes are still practical alternatives to solid-state devices in generating high power at radio frequencies in applications such as industrial radio frequency heating, particle accelerators, and broadcast transmitters. This is particularly true at microwave frequencies where such devices as the klystron and traveling-wave tube provide amplification at power levels unattainable using semiconductor devices. The household microwave oven uses a magnetron tube to efficiently generate hundreds of watts of microwave power. Solid-state devices such as gallium nitride are promising replacements, but are very expensive and still in development.
In military applications, a high-power vacuum tube can generate a 10–100 megawatt signal that can burn out an unprotected receiver's frontend. Such devices are considered non-nuclear electromagnetic weapons; they were introduced in the late 1990s by both the U.S. and Russia.
Audiophiles
Enough people prefer tube sound to make tube amplifiers commercially viable in three areas: musical instrument (e.g., guitar) amplifiers, devices used in recording studios, and audiophile equipment.
Many guitarists prefer using valve amplifiers to solid-state models, often due to the way they tend to distort when overdriven. Any amplifier can only accurately amplify a signal to a certain volume; past this limit, the amplifier will begin to distort the signal. Different circuits will distort the signal in different ways; some guitarists prefer the distortion characteristics of vacuum tubes. Most popular vintage models use vacuum tubes.
Displays
Cathode ray tube
The cathode ray tube was the dominant display technology for televisions and computer monitors at the start of the 21st century. However, rapid advances and falling prices of LCD flat panel technology soon took the place of CRTs in these devices. By 2010, most CRT production had ended.
Vacuum tubes using field electron emitters
In the early years of the 21st century there has been renewed interest in vacuum tubes, this time with the electron emitter formed on a flat silicon substrate, as in integrated circuit technology. This subject is now called vacuum nanoelectronics. The most common design uses a cold cathode in the form of a large-area field electron source (for example a field emitter array). With these devices, electrons are field-emitted from a large number of closely spaced individual emission sites.
Such integrated microtubes may find application in microwave devices including mobile phones, for Bluetooth and Wi-Fi transmission, and in radar and satellite communication. They have also been studied for possible applications in field emission display technology, but there were significant production problems.
As of 2014, NASA's Ames Research Center was reported to be working on vacuum-channel transistors produced using CMOS techniques.
Characteristics
Space charge of a vacuum tube
When a cathode is heated and reaches an operating temperature of around 1050 K (777 °C), free electrons are driven from its surface. These free electrons form a cloud in the empty space between the cathode and the anode, known as the space charge. This cloud supplies the electrons that create the current flow from the cathode to the anode. As electrons are drawn to the anode during operation of the circuit, new electrons boil off the cathode to replenish the space charge. The negative space charge produces an electric field of its own, which limits further emission from the cathode.
Voltage - Current characteristics of vacuum tube
All tubes with one or more control grids are controlled by an AC (alternating current) input voltage applied to the control grid, while the resulting amplified signal appears at the anode as a current. Because of the high voltage placed on the anode, even a relatively small anode current can represent a considerable increase in power over that of the original signal. The space charge electrons driven off the heated cathode are strongly attracted to the positive anode. The control grid(s) in a tube mediate this current flow: the small AC signal voltage is superimposed on the grid's slightly negative bias, so that as the signal sine wave swings, the grid is driven alternately more and less negative, varying the anode current accordingly.
This relationship is shown with a set of plate characteristic curves, which visually display how the output current from the anode (I_a) is affected by a small input voltage applied to the grid (V_g), for any given voltage on the plate (anode) (V_a).

Every tube has a unique set of such characteristic curves. The curves graphically relate the changes in instantaneous plate current to a much smaller change in the grid-to-cathode voltage (V_gk) as the input signal varies.
The V-I (voltage-current) characteristic depends upon the size and material of the plate and cathode, and expresses the relationship between plate voltage and plate current. Quantities commonly specified include:
V-I curve (voltage across the filaments versus plate current)
Plate current versus plate voltage characteristics
DC plate resistance—the resistance of the path between anode and cathode to direct current
AC (dynamic) plate resistance—the resistance of the path between anode and cathode to alternating current, i.e. the slope of the plate characteristic at the operating point
Size of the electrostatic field, i.e. the spacing between two or more plates (electrodes) within the tube
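As a minimal numerical sketch of these quantities, a simplified three-halves-power triode model, I_a = K(V_g + V_a/μ)^(3/2), can be used to compute a DC plate resistance (V_a/I_a) and an AC (dynamic) plate resistance (the local slope of the characteristic). The perveance K, amplification factor μ and operating point below are assumptions, not data for any real tube.

```python
# Simplified three-halves-power triode model used to illustrate DC and AC
# (dynamic) plate resistance. K, MU and the operating point are assumptions.
K = 1.5e-4        # "perveance", in A / V^1.5 (illustrative)
MU = 20.0         # amplification factor (illustrative)

def plate_current(v_grid, v_anode):
    drive = v_grid + v_anode / MU
    return K * max(drive, 0.0) ** 1.5      # no current when the net drive is negative

v_g, v_a = -2.0, 250.0                     # assumed operating point
i_a = plate_current(v_g, v_a)

r_dc = v_a / i_a                           # DC plate resistance: V_a / I_a
dv = 1.0
r_ac = dv / (plate_current(v_g, v_a + dv) - plate_current(v_g, v_a))   # local slope

print(f"I_a  = {i_a * 1000:.2f} mA")       # ~5 mA with these assumptions
print(f"R_dc = {r_dc / 1000:.1f} kOhm, r_p (AC) = {r_ac / 1000:.1f} kOhm")
```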
Patents
—Instrument for converting alternating electric currents into continuous currents (Fleming valve patent)
—Device for amplifying feeble electrical currents
—de Forest's three electrode Audion
See also
Bogey value—close to manufacturer's stated parameter values
Fetron—a solid-state, plug-compatible, replacement for vacuum tubes
List of vacuum tubes—a list of type numbers.
List of vacuum-tube computers
Mullard–Philips tube designation
Nixie tube—a gas-filled display device sometimes misidentified as a vacuum tube
RETMA tube designation
RMA tube designation
Russian tube designations
Tube caddy
Tube tester
Valve amplifier
Zetatron
Explanatory notes
References
Further reading
Eastman, Austin V., Fundamentals of Vacuum Tubes, McGraw-Hill, 1949
Millman, J. & Seely, S. Electronics, 2nd ed. McGraw-Hill, 1951.
Philips Technical Library. Books published in the UK in the 1940s and 1950s by Cleaver Hume Press on design and application of vacuum tubes.
RCA. Radiotron Designer's Handbook, 1953 (4th Edition). Contains chapters on the design and application of receiving tubes.
RCA. Receiving Tube Manual, RC15, RC26 (1947, 1968) Issued every two years, contains details of the technical specs of the tubes that RCA sold.
Shiers, George, "The First Electron Tube", Scientific American, March 1969, p. 104.
Stokes, John, 70 Years of Radio Tubes and Valves, Vestal Press, New York, 1982, pp. 3–9.
Thrower, Keith, History of the British Radio Valve to 1940, MMA International, 1982, pp 9–13.
Tyne, Gerald, Saga of the Vacuum Tube, Ziff Publishing, 1943, (reprint 1994 Prompt Publications), pp. 30–83.
Basic Electronics: Volumes 1–5; Van Valkenburgh, Nooger & Neville Inc.; John F. Rider Publisher; 1955.
Wireless World. Radio Designer's Handbook. UK reprint of the above.
"Vacuum Tube Design"; 1940; RCA.
External links
The Vacuum Tube FAQ—FAQ from rec.audio
The invention of the thermionic valve. Fleming discovers the thermionic (or oscillation) valve, or 'diode'.
"Tubes Vs. Transistors: Is There an Audible Difference?"—1972 AES paper on audible differences in sound quality between vacuum tubes and transistors.
The Virtual Valve Museum
The cathode ray tube site
O'Neill's Electronic museum—vacuum tube museum
Vacuum tubes for beginners—Japanese Version
NJ7P Tube Database—Data manual for tubes used in North America.
Vacuum tube data sheet locator
Characteristics and datasheets
Tuning eye tubes
Vocoder

A vocoder (a contraction of voice and encoder) is a category of speech coding that analyzes and synthesizes the human voice signal for audio data compression, multiplexing, voice encryption or voice transformation.
The vocoder was invented in 1938 by Homer Dudley at Bell Labs as a means of synthesizing human speech. This work was developed into the channel vocoder, which was used as a voice codec for telecommunications, encoding speech to conserve bandwidth in transmission.
By encrypting the control signals, voice transmission can be secured against interception. Its primary use in this fashion is for secure radio communication. The advantage of this method of encryption is that none of the original signal is sent, only envelopes of the bandpass filters. The receiving unit needs to be set up in the same filter configuration to re-synthesize a version of the original signal spectrum.
The vocoder has also been used extensively as an electronic musical instrument. The decoder portion of the vocoder, called a voder, can be used independently for speech synthesis.
Theory
The human voice consists of sounds generated by the opening and closing of the glottis by the vocal cords, which produces a periodic waveform with many harmonics. This basic sound is then filtered by the nose and throat (a complicated resonant piping system) to produce differences in harmonic content (formants) in a controlled way, creating the wide variety of sounds used in speech. There is another set of sounds, known as the unvoiced and plosive sounds, which are created or modified by the mouth in different fashions.
The vocoder examines speech by measuring how its spectral characteristics change over time. This results in a series of signals representing these modified frequencies at any particular time as the user speaks. In simple terms, the signal is split into a number of frequency bands (the larger this number, the more accurate the analysis) and the level of signal present at each frequency band gives the instantaneous representation of the spectral energy content. To recreate speech, the vocoder simply reverses the process, processing a broadband noise source by passing it through a stage that filters the frequency content based on the originally recorded series of numbers.
Specifically, in the encoder, the input is passed through a multiband filter, then each band is passed through an envelope follower, and the control signals from the envelope followers are transmitted to the decoder. The decoder applies these (amplitude) control signals to corresponding amplifiers of the filter channels for re-synthesis.
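A minimal channel-vocoder sketch of this encode/decode idea is shown below, assuming NumPy and SciPy are available. The band count, band edges, envelope cutoff and white-noise carrier are illustrative choices, not taken from any particular historical design.

```python
# A minimal channel-vocoder sketch: impose the band envelopes of a modulator
# (speech) on a broadband noise carrier. All parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def channel_vocoder(modulator, fs, n_bands=16, f_lo=100.0, f_hi=3400.0):
    rng = np.random.default_rng(0)
    carrier = rng.standard_normal(len(modulator))        # broadband noise carrier
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)        # log-spaced band edges
    out = np.zeros(len(modulator))
    env_sos = butter(2, 30.0, btype="lowpass", fs=fs, output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band_mod = sosfilt(sos, modulator)                # analysis filter
        band_car = sosfilt(sos, carrier)                  # matching synthesis filter
        # Envelope follower: rectify, then low-pass; this slowly varying control
        # signal is what a real channel vocoder would actually transmit.
        envelope = sosfilt(env_sos, np.abs(band_mod))
        out += band_car * np.clip(envelope, 0.0, None)    # amplitude-control the carrier
    return out / (np.max(np.abs(out)) + 1e-12)            # normalize

# Example: a synthetic "voice-like" buzz, just to exercise the code.
fs = 8000
t = np.arange(fs) / fs
fake_speech = np.sign(np.sin(2 * np.pi * 120 * t)) * np.exp(-3 * t)
robot = channel_vocoder(fake_speech, fs)
```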
Information about the instantaneous frequency of the original voice signal (as distinct from its spectral characteristic) is discarded; it was not important to preserve this for the vocoder's original use as an encryption aid. It is this "dehumanizing" aspect of the vocoding process that has made it useful in creating special voice effects in popular music and audio entertainment.
The vocoder process sends only the parameters of the vocal model over the communication link, instead of a point-by-point recreation of the waveform. Since the parameters change slowly compared to the original speech waveform, the bandwidth required to transmit speech can be reduced. This allows more speech channels to utilize a given communication channel, such as a radio channel or a submarine cable.
Analog vocoders typically analyze an incoming signal by splitting the signal into multiple tuned frequency bands or ranges. A modulator and carrier signal are sent through a series of these tuned bandpass filters. In the example of a typical robot voice, the modulator is a microphone and the carrier is noise or a sawtooth waveform. There are usually between eight and 20 bands.
The amplitude of the modulator for each of the individual analysis bands generates a voltage that is used to control amplifiers for each of the corresponding carrier bands. The result is that frequency components of the modulating signal are mapped onto the carrier signal as discrete amplitude changes in each of the frequency bands.
Often there is an unvoiced band or sibilance channel. This is for frequencies that are outside the analysis bands for typical speech but are still important in speech. Examples are words that start with the letters s, f, ch or any other sibilant sound. These can be mixed with the carrier output to increase clarity. The result is recognizable speech, although somewhat "mechanical" sounding. Vocoders often include a second system for generating unvoiced sounds, using a noise generator instead of the fundamental frequency.
In the channel vocoder algorithm, of the two components of an analytic signal, only the amplitude component is used; simply ignoring the phase component tends to result in an unclear voice. For methods of rectifying this, see phase vocoder.
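To make the amplitude-versus-phase distinction concrete, the analytic signal of a band can be formed with a Hilbert transform: its magnitude is the envelope a channel vocoder keeps, while its angle is the phase information it discards. The short sketch below assumes NumPy and SciPy and uses a synthetic test tone.

```python
# Envelope (kept by a channel vocoder) vs. phase (discarded) of one band,
# obtained from the analytic signal via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

fs = 8000
t = np.arange(fs) / fs
band = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))  # AM test tone

analytic = hilbert(band)                 # complex analytic signal
envelope = np.abs(analytic)              # amplitude component (transmitted)
phase = np.unwrap(np.angle(analytic))    # phase component (ignored by a channel vocoder)

print(envelope[:5], phase[:5])
```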
History
The development of a vocoder was started in 1928 by Bell Labs engineer Homer Dudley, who was granted patents for it on November 16, 1937, and March 21, 1939.
To demonstrate the speech synthesis ability of its decoder part, the Voder (Voice Operating Demonstrator) was introduced to the public at the AT&T building at the 1939–1940 New York World's Fair. The Voder consisted of a switchable pair of sound sources, an electronic oscillator producing a pitched tone and a noise generator producing hiss; 10-band resonator filters with variable-gain amplifiers acting as a vocal tract; and manual controls, including a set of pressure-sensitive keys for filter control and a foot pedal for pitch control of the tone. The filters, controlled by the keys, convert the tone and the hiss into vowels, consonants, and inflections. This was a complex machine to operate, but a skilled operator could produce recognizable speech.
Dudley's vocoder was used in the SIGSALY system, which was built by Bell Labs engineers in 1943. SIGSALY was used for encrypted high-level voice communications during World War II. The KO-6 voice coder was released in 1949 in limited quantities; it was a close approximation to the SIGSALY at 1200 bit/s. In 1953, the KY-9 THESEUS 1650 bit/s voice coder used solid-state logic to reduce the weight to from SIGSALY's 55 tons, and in 1961 the HY-2 voice coder, a 16-channel 2400 bit/s system, weighed and was the last implementation of a channel vocoder in a secure speech system.
Work in this field has since used digital speech coding. The most widely used speech coding technique is linear predictive coding (LPC), which was first proposed by Fumitada Itakura of Nagoya University and Shuzo Saito of Nippon Telegraph and Telephone (NTT) in 1966. Another speech coding technique, adaptive differential pulse-code modulation (ADPCM), was developed by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973.
Applications
Terminal equipment for Digital Mobile Radio (DMR) based systems.
Digital Trunking
DMR TDMA
Digital Voice Scrambling and Encryption
Digital WLL
Voice Storage and Playback Systems
Messaging Systems
VoIP Systems
Voice Pagers
Regenerative Digital Voice Repeaters
Cochlear implants: noise and tone vocoding is used to simulate the effects of cochlear implants.
Musical and other artistic effects
Modern implementations
Even with the need to record several frequency bands and the additional unvoiced sounds, the compression of vocoder systems is impressive. Standard speech-recording systems capture frequencies from about 500 Hz to 3,400 Hz, where most of the frequencies used in speech lie, typically using a sampling rate of 8 kHz (slightly greater than the Nyquist rate). The sampling resolution is typically 12 or more bits per sample (16 is standard), for a final data rate in the range of 96–128 kbit/s, but a good vocoder can provide a reasonably good simulation of voice with as little as 2.4 kbit/s of data.
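The arithmetic behind these figures is straightforward; the snippet below simply restates it for the sample rates and resolutions quoted above.

```python
# Back-of-the-envelope comparison of raw sampled speech versus a low-rate vocoder,
# using the figures quoted above (8 kHz sampling, 12- or 16-bit samples, 2.4 kbit/s coder).
sample_rate_hz = 8_000
for bits_per_sample in (12, 16):
    pcm_bitrate = sample_rate_hz * bits_per_sample                      # bit/s of plain PCM
    print(f"{bits_per_sample}-bit PCM: {pcm_bitrate / 1000:.0f} kbit/s")  # 96 and 128 kbit/s

vocoder_bitrate = 2_400                                                  # a good low-rate vocoder
print(f"compression vs 16-bit PCM: {(8_000 * 16) / vocoder_bitrate:.1f}x")  # about 53x
```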
"Toll quality" voice coders, such as ITU G.729, are used in many telephone networks. G.729 in particular has a final data rate of 8 kbit/s with superb voice quality. G.723 achieves slightly worse quality at data rates of 5.3 kbit/s and 6.4 kbit/s. Many voice vocoder systems use lower data rates, but below 5 kbit/s voice quality begins to drop rapidly.
Several vocoder systems are used in NSA encryption systems:
LPC-10, FIPS Pub 137, 2400 bit/s, which uses linear predictive coding
Code-excited linear prediction (CELP), 2400 and 4800 bit/s, Federal Standard 1016, used in STU-III
Continuously variable slope delta modulation (CVSD), 16 kbit/s, used in wide band encryptors such as the KY-57.
Mixed-excitation linear prediction (MELP), MIL STD 3005, 2400 bit/s, used in the Future Narrowband Digital Terminal FNBDT, NSA's 21st century secure telephone.
Adaptive Differential Pulse Code Modulation (ADPCM), former ITU-T G.721, 32 kbit/s used in STE secure telephone
(ADPCM is not a proper vocoder but rather a waveform codec. ITU has gathered G.721 along with some other ADPCM codecs into G.726.)
Vocoders are also currently used in developing psychophysics, linguistics, computational neuroscience and cochlear implant research.
Modern vocoders that are used in communication equipment and in voice storage devices today are based on the following algorithms:
Algebraic code-excited linear prediction (ACELP 4.7 kbit/s – 24 kbit/s)
Mixed-excitation linear prediction (MELPe 2400, 1200 and 600 bit/s)
Multi-band excitation (AMBE 2000 bit/s – 9600 bit/s)
Sinusoidal-Pulsed Representation (SPR 600 bit/s – 4800 bit/s)
Robust Advanced Low-complexity Waveform Interpolation (RALCWI 2050 bit/s, 2400 bit/s and 2750 bit/s)
Tri-Wave Excited Linear Prediction (TWELP 600 bit/s – 9600 bit/s)
Noise Robust Vocoder (NRV 300 bit/s and 800 bit/s)
Linear prediction-based
Since the late 1970s, most non-musical vocoders have been implemented using linear prediction, whereby the target signal's spectral envelope (formant) is estimated by an all-pole IIR filter. In linear prediction coding, the all-pole filter replaces the bandpass filter bank of its predecessor and is used at the encoder to whiten the signal (i.e., flatten the spectrum) and again at the decoder to re-apply the spectral shape of the target speech signal.
One advantage of this type of filtering is that the location of the linear predictor's spectral peaks is entirely determined by the target signal, and can be as precise as allowed by the time period to be filtered. This is in contrast with vocoders realized using fixed-width filter banks, where spectral peaks can generally only be determined to be within the scope of a given frequency band. LP filtering also has disadvantages in that signals with a large number of constituent frequencies may exceed the number of frequencies that can be represented by the linear prediction filter. This restriction is the primary reason that LP coding is almost always used in tandem with other methods in high-compression voice coders.
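To make the whitening step concrete, here is a minimal Python sketch of LPC analysis for a single speech frame, using the autocorrelation method and the Levinson-Durbin recursion. The window, frame length, and model order are illustrative assumptions, not the parameters of any particular standard.

```python
# Minimal LPC analysis sketch: fit an all-pole model 1/A(z) to one speech frame.
import numpy as np

def lpc(frame, order=10):
    """Return prediction-error filter coefficients a[0..order] (a[0] = 1) and residual energy."""
    frame = frame * np.hamming(len(frame))                          # taper the frame
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]    # autocorrelation r[0..]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] + 1e-12                                              # guard against silent frames
    for i in range(1, order + 1):                                   # Levinson-Durbin recursion
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err           # reflection coefficient
        a[1:i + 1] += k * a[i - 1::-1][:i]                          # update a[1..i]
        err *= (1.0 - k * k)
    return a, err

# Usage sketch (speech_frame: a 1-D NumPy array of, say, 240 samples at 8 kHz):
#   a, err = lpc(speech_frame, order=10)
#   Encoder "whitens" the frame:      residual = scipy.signal.lfilter(a, [1.0], speech_frame)
#   Decoder re-applies the envelope:  synth    = scipy.signal.lfilter([1.0], a, excitation)
```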
Waveform-interpolative
The waveform-interpolative (WI) vocoder was developed at AT&T Bell Laboratories around 1995 by W. B. Kleijn, and a low-complexity version was subsequently developed by AT&T for the DoD secure vocoder competition. Notable enhancements to the WI coder were made at the University of California, Santa Barbara. AT&T holds the core patents related to WI, and other institutes hold additional patents.
Artistic effects
Uses in music
For musical applications, a source of musical sounds is used as the carrier, instead of extracting the fundamental frequency. For instance, one could use the sound of a synthesizer as the input to the filter bank, a technique that became popular in the 1970s.
History
Werner Meyer-Eppler, a German scientist with a special interest in electronic voice synthesis, published a thesis in 1948 on electronic music and speech synthesis from the viewpoint of sound synthesis. Later he was instrumental in the founding of the Studio for Electronic Music of WDR in Cologne, in 1951.
One of the first attempts to use a vocoder in creating music was the "Siemens Synthesizer" at the Siemens Studio for Electronic Music, developed between 1956 and 1959.
In 1968, Robert Moog developed one of the first solid-state musical vocoders for the electronic music studio of the University at Buffalo.
In 1968, Bruce Haack built a prototype vocoder, named "Farad" after Michael Faraday. It was first featured on "The Electronic Record For Children" released in 1969 and then on his rock album The Electric Lucifer released in 1970.
In 1970, Wendy Carlos and Robert Moog built another musical vocoder, a ten-band device inspired by the vocoder designs of Homer Dudley. It was originally called a spectrum encoder-decoder and later referred to simply as a vocoder. The carrier signal came from a Moog modular synthesizer, and the modulator from a microphone input. The output of the ten-band vocoder was fairly intelligible but relied on specially articulated speech. Some vocoders use a high-pass filter to let some sibilance through from the microphone; this ruins the device for its original speech-coding application, but it makes the talking synthesizer effect much more intelligible.
In 1972, Isao Tomita's first electronic music album Electric Samurai: Switched on Rock was an early attempt at applying speech synthesis techniques to electronic rock and pop music. The album featured electronic renditions of contemporary rock and pop songs, while utilizing synthesized voices in place of human voices. In 1974, he utilized synthesized voices in his popular classical music album Snowflakes are Dancing, which became a worldwide success and helped to popularize electronic music.
In 1973, the British band Emerson, Lake and Palmer used a vocoder on their album Brain Salad Surgery, for the song "Karn Evil 9: 3rd Impression".
The 1975 song "The Raven" from the album Tales of Mystery and Imagination by The Alan Parsons Project features Alan Parsons performing vocals through an EMI vocoder. According to the album's liner notes, "The Raven" was the first rock song to feature a digital vocoder.
Pink Floyd also used a vocoder on three of their albums, first on their 1977 album Animals for the songs "Sheep" and "Pigs (Three Different Ones)", then on A Momentary Lapse of Reason on "A New Machine Part 1" and "A New Machine Part 2" (1987), and finally on 1994's The Division Bell, on "Keep Talking".
The Electric Light Orchestra was a famous user of the vocoder, being among the first to use it in a commercial context with their 1977 album Out of the Blue. The band extensively uses it on the album, including on the hits "Sweet Talkin' Woman" and "Mr. Blue Sky". On following albums, the band made sporadic use of it, notably on their hits "The Diary of Horace Wimp" and "Confusion" from their 1979 album Discovery, the tracks "Prologue", "Yours Truly, 2095", and "Epilogue" on their 1981 album Time, and "Calling America" from their 1986 album Balance of Power.
In the late 1970s, French duo Space Art used a vocoder during the recording of their second album, Trip in the Centre Head.
Phil Collins used a vocoder to provide a vocal effect for his 1981 international hit single "In the Air Tonight".
Vocoders have appeared on pop recordings from time to time, most often simply as a special effect rather than a featured aspect of the work. However, many experimental electronic artists of the new-age music genre utilize the vocoder in a more comprehensive manner in specific works, such as Jean-Michel Jarre (on Zoolook, 1984) and Mike Oldfield (on QE2, 1980 and Five Miles Out, 1982).
Mike Oldfield's vocoder module and his use of it can be clearly seen on his Live at Montreux 1981 DVD (track "Sheba").
There are also some artists who have made vocoders an essential part of their music, overall or during an extended phase. Examples include the German synthpop group Kraftwerk, the Japanese new wave group Polysics, Stevie Wonder ("Send One Your Love", "A Seed's a Star") and jazz/fusion keyboardist Herbie Hancock during his late 1970s period. In 1982, Neil Young used a Sennheiser Vocoder VSM201 on six of the nine tracks on Trans. The chorus and bridge of Michael Jackson's "P.Y.T. (Pretty Young Thing)" features a vocoder ("Pretty young thing/You make me sing"), courtesy of session musician Michael Boddicker.
Coldplay have used a vocoder in some of their songs. For example, in "Major Minus" and "Hurts Like Heaven", both from the album Mylo Xyloto (2011), Chris Martin's vocals are mostly vocoder-processed. "Midnight", from Ghost Stories (2014), also features Martin singing through a vocoder. The hidden track "X Marks The Spot" from A Head Full of Dreams was also recorded through a vocoder.
Noisecore band Atari Teenage Riot have used vocoders in a variety of their songs and live performances, such as Live at the Brixton Academy (2002), alongside other digital audio technology both old and new.
The Red Hot Chili Peppers song "By the Way" uses a vocoder effect on Anthony Kiedis' vocals.
Among the most consistent uses of vocoder in emulating the human voice are Daft Punk, who have used this instrument from their first album Homework (1997) to their latest work Random Access Memories (2013) and consider the convergence of technological and human voice "the identity of their musical project". For instance, the lyrics of "Around the World" (1997) are integrally vocoder-processed, "Get Lucky" (2013) features a mix of natural and processed human voices, and "Instant Crush" (2013) features Julian Casablancas singing into a vocoder.
Producer Zedd, American country singer Maren Morris and American musical duo Grey made a song titled "The Middle", which featured a vocoder and reached the top ten of the charts in 2018.
Voice effects in other arts
"Robot voices" became a recurring element in popular music during the 20th century. Apart from vocoders, several other methods of producing variations on this effect include: the Sonovox, Talk box, and Auto-Tune, linear prediction vocoders, speech synthesis, ring modulation and comb filter.
Vocoders are used in television production, filmmaking and games, usually for robots or talking computers. The robot voices of the Cylons in Battlestar Galactica were created with an EMS Vocoder 2000. The 1980 version of the Doctor Who theme, as arranged and recorded by Peter Howell, has a section of the main melody generated by a Roland SVC-350 vocoder. A similar Roland VP-330 vocoder was used to create the voice of Soundwave, a character from the Transformers series.
In the 1967 Supermarionation series Captain Scarlet and the Mysterons, a vocoder was used in the closing credits theme of the first 14 episodes to provide the repetition of the words "Captain Scarlet".
See also
Audio timescale-pitch modification
Auto-Tune
Homer Dudley
List of vocoders
Phase vocoder
Silent speech interface
Talk box
Werner Meyer-Eppler
References
Multimedia references
External links
Description, photographs, and diagram for the vocoder at 120years.net
Description of a modern Vocoder.
GPL implementation of a vocoder, as a LADSPA plugin
O'Reilly Article on Vocoders
Object of Interest: The Vocoder The New Yorker Magazine mini documentary
Audio effects
Electronic musical instruments
Music hardware
Lossy compression algorithms
Speech codecs
Robotics |
33090 | https://en.wikipedia.org/wiki/Double-Cross%20System | Double-Cross System | The Double-Cross System or XX System was a World War II counter-espionage and deception operation of the British Security Service (a civilian organisation usually referred to by its cover title MI5). Nazi agents in Britain – real and false – were captured, turned themselves in or simply announced themselves, and were then used by the British to broadcast mainly disinformation to their Nazi controllers. Its operations were overseen by the Twenty Committee under the chairmanship of John Cecil Masterman; the name of the committee comes from the number 20 in Roman numerals: "XX" (i.e. a double cross).
The policy of MI5 during the war was initially to use the system for counter-espionage. It was only later that its potential for deception purposes was realised. Of the agents from the German intelligence services, Abwehr and Sicherheitsdienst (SD), some were apprehended, while many of the agents who reached British shores turned themselves in to the authorities; others were apprehended after they made elementary mistakes during their operations. In addition, some were false agents who had tricked the Germans into believing they would spy for them if they helped them reach England (e.g., Treasure, Fido). Later agents were instructed to contact agents who, unknown to the Abwehr, were controlled by the British. The Abwehr and SD sent agents over by parachute drop, submarine, or travel via neutral countries. The last route was most commonly used, with agents often impersonating refugees. After the war, it was discovered that all the agents Germany sent to Britain had given themselves up or had been captured, with the possible exception of one who committed suicide.
Early agents
Following a July 1940 conference in Kiel, the Abwehr (German intelligence) began an espionage campaign against Britain involving intelligence gathering and sabotage. Spies were sent over from Europe in various ways; some parachuted or came off a submarine. Others entered the country on false passports or posing as refugees. Public perception in Britain was that the country was full of well-trained German spies, who were deeply integrated into society. There was widespread "spy-mania", as Churchill put it. The truth was that between September and November 1940 fewer than 25 agents arrived in the country; mostly of Eastern European extraction, they were badly trained and poorly motivated.
The agents were not difficult to spot, and it became easier still when the German Enigma machine encryption was broken. MI5, with advance warning of infiltration, had no trouble picking up almost all of the spies sent to the country. Writing in 1972, John C. Masterman (who had, later in the war, headed the Twenty Committee) said that by 1941, MI5 "actively ran and controlled the German espionage system in [the United Kingdom]." It was not an idle boast; post-war records confirmed that none of the Abwehr agents, bar one who committed suicide, went unnoticed.
Once caught, the spies were deposited in the care of Lieutenant Colonel Robin Stephens at Camp 020 (Latchmere House, Richmond). After Stephens, a notorious and brilliant interrogator, had picked apart their life history, the agents were either spirited away (to be imprisoned or killed) or, if judged acceptable, offered the chance to turn double agent on the Germans.
Control of the new double agents fell to Thomas Argyll Robertson (usually called Tar, from his initials), a charismatic MI5 agent. A Scot and something of a playboy, Robertson had some early experience with double agents; just prior to the war he had been case officer to Arthur Owens (code name Snow). Owens was an oddity and it became apparent that he was playing off the Germans and British, although to what end Robertson was unable to uncover. Robertson dispatched an ex-RNAS officer called Walter Dicketts (code name Celery) to neutral Lisbon in early 1941 to meet Owens' German spymaster, Nikolaus Ritter from the Abwehr, to establish Owens' bona fides. Unknown to Dicketts, Owens had betrayed him to the Germans before Dicketts entered Germany to be interrogated by experts from the Abwehr in Hamburg. Although Dicketts managed to get himself recruited as a German agent (while continuing to report to MI5), Owens claimed that Dicketts' survival meant he had been 'turned' by the Germans. When both agents returned to England, Robertson and his team spent countless hours trying to establish which agent was telling the truth. In the end Owens was interned for endangering Dicketts' life and for revealing the important information that his German radio transmitter was controlled by MI5. The whole affair resulted in the collapse of the entire Snow network comprising the double agents Owens, GW, Biscuit, Charlie, Summer and Celery. The experiment had not appeared to be a success but MI5 had learned lessons about how Abwehr operated and how double agents might be useful.
Robertson believed that turning German spies would have numerous benefits: it would disclose what information the Abwehr wanted and allow the British to mislead them as part of a military deception. It would also discourage them from sending more agents, if they believed an operational network existed. Section B1A (a subordinate of B section, under Guy Liddell) was formed and Robertson was put in charge of handling the double-agent program.
Robertson's first agents were not a success. Giraffe (George Graf) was never really used, and Gander (Kurt Goose; MI5 had a penchant for amusingly relevant code names) had been sent to Britain with a radio that could only transmit; both were quickly decommissioned. The next two attempts were even more farcical; Gösta Caroli and Wulf Schmidt (a Danish citizen) landed, via parachute, in September 1940. The two were genuine Nazis, had trained together and were friends. Caroli was coerced into turning double in return for Schmidt's life being spared, whilst Schmidt was told that Caroli had sold him out and in anger swapped sides.
Caroli quickly became a problem; he attempted to strangle his MI5 handler before making an escape, carrying a canoe on a motorcycle. He vaguely planned to row to Holland but came unstuck after falling off the bike in front of a policeman. He was eventually recaptured and judged too much trouble to be used. Schmidt was more of a success; codenamed 'Tate', he continued to contact Germany until May 1945. These eccentric spies made Robertson aware that handling double agents was going to be a difficult task.
Methods of operation
The main form of communication that agents used with their handlers was secret writing. Letters were intercepted by the postal censorship authorities and some agents were caught. Later in the war, wireless sets were provided by the Germans. Eventually transmissions purporting to be from one double agent were facilitated by transferring the operation of the set to the main headquarters of MI5. On the British side, the fight against the Abwehr and SD was made much easier by the breaking of German ciphers. Abwehr hand ciphers were cracked early in the war and SD hand ciphers and Abwehr Enigma ciphers followed. The signals intelligence allowed an accurate assessment of whether the double agents were really trusted by the Germans and what effect their information had.
A crucial aspect of the system was the need for genuine information to be sent along with the deception material. This need caused problems early in the war, with those who controlled the release of information being reluctant to provide even a small amount of relatively innocuous genuine material. Later in the war, as the system became better organised, genuine information was integrated into the deception system. It was used to disguise the development of "Gee", the Allies' navigation aid for bombers. One of the agents sent genuine information about Operation Torch to the Germans. It was postmarked before the landing but due to delays deliberately introduced by the British authorities, the information did not reach the Germans until after the Allied troops were ashore. The information impressed the Germans as it appeared to date from before the attack, but it was militarily useless to them.
Operation outside the United Kingdom
It was not only in the United Kingdom that the system was operated. A number of agents connected with the system were run in neutral Spain and Portugal. Some even had direct contact with the Germans in occupied Europe. One of the most famous of the agents who operated outside of the UK was Dušan Popov (Tricycle). There was even a case in which an agent started running deception operations independently from Portugal using little more than guidebooks, maps, and a very vivid imagination to convince his Abwehr handlers that he was spying in the UK. This agent, Juan Pujol García (Garbo), created a network of phantom sub-agents and eventually convinced the British authorities that he could be useful. He and his fictitious network were absorbed into the main double-cross system and he became so respected by Abwehr that they stopped landing agents in Britain after 1942. The Germans became dependent on the spurious information that was fed to them by Garbo's network and the other double-cross agents.
Operation Fortitude and D-Day landings
The British put their double-agent network to work in support of Operation Fortitude, a plan to deceive the Germans about the location of the Normandy Landings in France. Allowing one of the double agents to claim to have stolen documents describing the invasion plans might have aroused suspicion. Instead, agents were allowed to report minutiae, such as insignia on soldiers' uniforms and unit markings on vehicles. The observations in the south-central areas largely gave accurate information about the units located there. Reports from south-west England indicated few troop sightings, when in reality many units were housed there. Reports from the south-east depicted the real and the notional Operation Quicksilver forces. Any military planner would know that to mount an invasion of Europe from England, Allied units had to be staged around the country, with those that would land first placed nearest to the invasion point. German intelligence used the agent reports to construct an order of battle for the Allied forces, that placed the centre of gravity of the invasion force opposite Pas de Calais, the point on the French coast closest to England and therefore a likely invasion site. The deception was so effective that the Germans kept 15 divisions in reserve near Calais even after the invasion had begun, lest it prove to be a diversion from the main invasion at Calais. Early battle reports of insignia on Allied units only confirmed the information the double agents had sent, increasing the Germans' trust in their network. Agent Garbo was informed in radio messages from Germany after the invasion that he had been awarded the Iron Cross.
V-weapons deception
The British noticed that, during the V-1 flying bomb attacks of 1944, the weapons were falling short of Trafalgar Square, the actual Luftwaffe aiming points such as Tower Bridge being unknown to the British. Duncan Sandys was told to get MI5-controlled German agents such as Zig Zag and Tate to report the V-1 impacts back to Germany. To make the Germans aim short, the British used these double agents to exaggerate the number of V-1s falling in the north and west of London and to under-report those falling in the south and east. Around 22 June, only one of seven impacts was reported south of the Thames, when in reality a considerably larger proportion of the V-1s had fallen there. Although the Germans plotted a sample of V-1s which had radio transmitters, showing that they had fallen short, the telemetry was ignored in favour of the agents' reports.
When the Germans received a false double cross V-1 report that there was considerable damage in Southampton—which had not been a target—the V-1s were temporarily aimed at the south coast ports. The double cross deception had caused a "re-targeting" from London, not just inaccurate aiming. When V-1s launched from Heinkel He 111s on 7 July at Southampton were inaccurate, British advisor Frederick Lindemann recommended that the agents report heavy losses, to save hundreds of Londoners each week at the expense of only a few lives in the ports. When the Cabinet learned of the deception on 15 August, Herbert Morrison ruled against it, saying that they had no right to decide that one man should die while another should survive. However, R. V. Jones refused to call off the plan in the absence of written orders, which never came, and the deception continued.
When the V-2 rocket "blitz" began, with only a few minutes from launch to impact, the deception was enhanced by providing locations of bomb damage in central London, verifiable by aerial reconnaissance, each "time-tagged" to an earlier impact that had actually fallen short of central London. From mid-January to mid-February 1945, the mean point of V-2 impacts edged eastward at the rate of a couple of miles a week, with more and more V-2s falling short of central London. Of the V-2s aimed at London, more than half landed outside the London Civil Defence Region.
List of agents
Artist – Johnny Jebsen
Balloon – Dickie Metcalf
Basket – Joseph Lenihan
Beetle – Petur Thomsen, based in Iceland
Biscuit – Sam McCarthy
Bootle – jointly handled by SIS and the French Deuxième Bureau
Bronx – Elvira Chaudoir
Brutus – Roman Czerniawski
Careless – Clark Korab
Carrot – (real name unknown), a Polish airman
Celery – Walter Dicketts
Charlie – Kiener, a German born in Britain
Cheese – Renato Levi, Italian Servizio Informazioni Militare agent
Cobweb – Ib Arnason Riis, based in Iceland
Dreadnought – Ivan Popov, brother of Dušan Popov, Tricycle
Dragonfly – Hans George
Father – Henri Arents
Fido – Roger Grosjean
Forest – Lucien G. Herviou, French, SS 1943; collaborated with the OSS (Office of Strategic Services) in 1944; German codename LUC; also codenamed Fidelino (possibly Italian); collaborated with Monoplane on Operation Jessica.
Freak – Marquis Frano de Bona
Gander – Hans Reysen
Garbo – Juan Pujol García
Gelatine – Gerda Sullivan
Gilbert – André Latham, jointly handled by SIS and the French Deuxième Bureau
Giraffe – Georges Graf
GW – Gwilym Williams
Hamlet – Dr Koestler, an Austrian
Hatchet – Albert de Jaeger
Jacobs
Josef – Yuri Smelkov
La Chatte – Mathilde Carré
Lambert – Nikitov, a Russian
Lipstick – Josef Terradellas, a Spaniard
Meteor – Eugen Sostaric
Monoplane – Paul Jeannin, 6th Army Group, French; prior codenames Jacques and Twit; German codename Normandie. Former radio operator on the French liner Normandie.
Moonbeam – based in Canada
Mullett – Thornton, a Briton born in Belgium
Mutt and Jeff – Helge Moe and Tor Glad, two Norwegians
Peppermint – José Brugada
Puppet – Mr Fanto, a Briton
Rainbow – Günther Schütz
Rover
Scruffy – Alphonse Timmerman
Shepherd
The Snark – Maritza Mihailovic, a Yugoslavian
Sniper
Snow – Arthur Owens
Spanehl – Ivan Španiel
Spider – based in Iceland
Springbok – Hans von Kotze
Stephan – Klein
Summer – Gösta Caroli
Sweet William – William Jackson
Tate – Wulf Schmidt
Teapot
Treasure – Nathalie Sergueiew (Lily Sergeyev)
Tricycle – Dušan Popov
Washout – Ernesto Simoes
Watchdog – Werner von Janowski
Weasel – A doctor, Belgian
The Worm – Stefan Zeiss
Zigzag – Eddie Chapman
Notes
References
Bibliography
Note: Ordway/Sharpe cite Masterman
Further reading
Hinsley, F. H., and C. A. G. Simpkins. British Intelligence in the Second World War, Volume 4, Security and Counter-Intelligence. London: H.M. Stationery Office, 1990. .
Howard, Michael British Intelligence in the Second World War, Volume 5, Strategic Deception London: H.M. Stationery Office, 1990. .
John C. Campbell, "A Retrospective on John Masterman's The Double-Cross System", International Journal of Intelligence and CounterIntelligence 18: 320–353, 2005.
Jon Latimer, Deception in War, London: John Murray, 2001.
Public Record Office Secret History Files, Camp 020: MI5 and the Nazi Spies, Oliver Hoare, 2000.
Tommy Jonason & Simon Olsson, "Agent Tate: The Wartime Story of Double Agent Harry Williamson", London: Amberley Publishing, 2011. .
Benton, Kenneth. "The ISOS Years: Madrid 1941-3". Journal of Contemporary History 30 (3): 359–410, 1995.
Fiction: Overlord, Underhand (2013), by the American author Robert P. Wells, is a fictionalized retelling of the Juan Pujol (Garbo) double-agent story from the Spanish Civil War through 1944, examining his role in MI5's Double-Cross System.
Espionage techniques |
33091 | https://en.wikipedia.org/wiki/Juan%20Pujol%20Garc%C3%ADa | Juan Pujol García | Juan Pujol García (14 February 1912 – 10 October 1988), also known as Joan Pujol García, was a Spanish spy who acted as a double agent loyal to Great Britain against Nazi Germany during World War II, when he relocated to Britain to carry out fictitious spying activities for the Germans. He was given the codename Garbo by the British; their German counterparts codenamed him Alaric and referred to his non-existent spy network as "Arabal".
After developing a loathing of political extremism of all sorts during the Spanish Civil War, Pujol decided to become a spy for Britain as a way to do something "for the good of humanity". Pujol and his wife contacted the British Embassy in Madrid, which rejected his offers.
Undeterred, he created a false identity as a fanatically pro-Nazi Spanish government official and successfully became a German agent. He was instructed to travel to Britain and recruit additional agents; instead he moved to Lisbon and created bogus reports about Britain from a variety of public sources, including a tourist guide to Britain, train timetables, cinema newsreels, and magazine advertisements.
Although the information would not have withstood close examination, Pujol soon established himself as a trustworthy agent. He began inventing fictitious sub-agents who could be blamed for false information and mistakes. The Allies finally accepted Pujol when the Germans spent considerable resources attempting to hunt down a fictitious convoy. Following interviews by Desmond Bristow of Section V MI6 Iberian Section, Juan Pujol was taken on. The family were moved to Britain and Pujol was given the code name "Garbo". Pujol and his handler Tomás Harris spent the rest of the war expanding the fictitious network, communicating to the German handlers at first by letters, and later by radio. Eventually the Germans were funding a network of 27 agents, all fictitious.
Pujol had a key role in the success of Operation Fortitude, the deception operation intended to mislead the Germans about the timing, location and scale of the invasion of Normandy in 1944. The false information Pujol supplied helped persuade the Germans that the main attack would be in the Pas de Calais, so that they kept large forces there before and even after the invasion. Pujol had the distinction of receiving military decorations from both sides of the war—being awarded the Iron Cross and becoming a Member of the Order of the British Empire.
Early life
Pujol was born in Barcelona to Joan Pujol, a Catalan who owned a cotton factory, and Mercedes García Guijarro, from the Andalusian town of Motril in the Province of Granada. The third of four children, Pujol was sent at age seven to the Valldemia boarding school run by the Marist Brothers in Mataró, near Barcelona; he remained there for the next four years. The students were only allowed out of the school on Sundays if they had a visitor, so his father made the trip every week.
His mother came from a strict Roman Catholic family and took Communion every day, but his father was much more secular and had liberal political beliefs. At age thirteen, he was transferred to a school in Barcelona run by his father's card-playing friend Monsignor Josep, where he remained for three years. After an argument with a teacher, he decided that he no longer wished to remain at the school, and became an apprentice at a hardware store.
Pujol engaged in a variety of occupations prior to and after the Spanish Civil War, such as studying animal husbandry at the Royal Poultry School in Arenys de Mar and managing various businesses, including a cinema.
His father died a few months after the Second Republic's establishment in 1931, while Pujol was completing his education as a poultry farmer. Pujol's father left his family well provided for, until the factory was taken over by its workers in the early stages of the Spanish Civil War.
Spanish Civil War
In 1931, Pujol did his six months of compulsory military service in a cavalry unit, the 7th Regiment of Light Artillery. He knew he was unsuited for a military career, hating horse-riding and claiming to lack the "essential qualities of loyalty, generosity, and honor". Pujol was managing a poultry farm north of Barcelona in 1936 when the Spanish Civil War began. His sister Elena's fiancé was taken by Republican forces, and later she and his mother were arrested and charged with being counter-revolutionaries. A relative in a trade union was able to rescue them from captivity.
He was called up for military service on the Republican side (in opposition to Francisco Franco's Nationalists), but opposed the Republican government due to their treatment of his family. He hid at his girlfriend's home until he was captured in a police raid and imprisoned for a week, before being freed via the Traditionalist resistance group Socorro Blanco. They hid him until they could produce fake identity papers that showed him to be too old for military service.
He started managing a poultry farm that had been requisitioned by the local Republican government, but it was not economically viable. The experience with rule by committee intensified his antipathy towards Communism.
He re-joined the Republican military using his false papers, with the intention to desert as soon as possible, volunteering to lay telegraph cables near the front lines. He managed to desert to the Nationalist side during the Battle of the Ebro in September 1938. However, he was equally ill-treated by the Nationalist side, disliking their fascist influences and being struck and imprisoned by his colonel upon Pujol's expressing sympathy with the monarchy.
His experience with both sides left him with a deep loathing of both fascism and Communism, and by extension Nazi Germany and the Soviet Union. He was proud that he had managed to serve both sides without firing a single bullet for either. After his discharge from the Nationalist army, he met Araceli Gonzalez in Burgos and married her in Madrid; they had one child, Joan Fernando.
World War II
Independent spying
In 1940, during the early stages of World War II, Pujol decided that he must make a contribution "for the good of humanity" by helping Britain, which was at the time Germany's only adversary.
Starting in January 1941, he approached the British Embassy in Madrid three different times, including through his wife (though Pujol edited her participation out of his memoirs), but they showed no interest in employing him as a spy. Therefore, he resolved to establish himself as a German agent before approaching the British again to offer his services as a double-agent.
Pujol created an identity as a fanatically pro-Nazi Spanish government official who could travel to London on official business; he also obtained a fake Spanish diplomatic passport by fooling a printer into thinking Pujol worked for the Spanish embassy in Lisbon. He contacted Friedrich Knappe-Ratey, an Abwehr agent in Madrid codenamed "Frederico". The Abwehr accepted Pujol and gave him a crash course in espionage (including secret writing), a bottle of invisible ink, a codebook, and £600 for expenses. His instructions were to move to Britain and recruit a network of British agents.
He moved instead to Lisbon and, using a tourist's guide to Britain, reference books and magazines from the Lisbon Public Library, and newsreel reports he saw in cinemas, created seemingly credible reports that appeared to come from London. During his time in Portugal, he stayed in Estoril, at the Hotel Palácio. He claimed to be travelling around Britain and submitted his travel expenses based on fares listed in a British railway guide. Pujol's unfamiliarity with the non-decimal system of currency used in Britain at the time was a slight difficulty. At this time Great Britain's unit of currency, the pound sterling, was subdivided into 20 shillings, each having twelve pence. Pujol was unable to total his expenses in this complex system, so simply itemised them, and said that he would send the total later.
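For readers unused to pre-decimal currency, the bookkeeping that defeated Pujol looks like the following; the itemised amounts are made-up examples for illustration, not figures from his actual expense claims.

```python
# Pre-decimal sterling arithmetic: 1 pound = 20 shillings, 1 shilling = 12 pence,
# so 1 pound = 240 pence. Totals only come out neatly if worked in pence throughout.
PENCE_PER_SHILLING = 12
SHILLINGS_PER_POUND = 20
PENCE_PER_POUND = PENCE_PER_SHILLING * SHILLINGS_PER_POUND   # 240

def to_pence(pounds, shillings, pence):
    return pounds * PENCE_PER_POUND + shillings * PENCE_PER_SHILLING + pence

def from_pence(total):
    pounds, rem = divmod(total, PENCE_PER_POUND)
    shillings, pence = divmod(rem, PENCE_PER_SHILLING)
    return pounds, shillings, pence

# Hypothetical itemised fares: £1 2s 6d, 15s 9d and £2 0s 11d.
items = [(1, 2, 6), (0, 15, 9), (2, 0, 11)]
print(from_pence(sum(to_pence(*i) for i in items)))          # (3, 19, 2) -> £3 19s 2d
```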
During this time he created an extensive network of fictitious sub-agents living in different parts of Britain. Because he had never actually visited the UK, he made several mistakes, such as claiming that his alleged contact in Glasgow "would do anything for a litre of wine", unaware of Scottish drinking habits or that the UK did not use the metric system. His reports were intercepted by the British Ultra communications interceptions programme, and seemed so credible that the British counter-intelligence service MI5 launched a full-scale spy hunt.
In February 1942, either he or his wife (accounts differ) approached the United States after it had entered the war, contacting U.S. Navy Lieutenant Patrick Demorest in the naval attache's office in Lisbon, who recognised Pujol's potential. Demorest contacted his British counterparts.
Work with MI5
The British had become aware that someone had been misinforming the Germans, and realised the value of this after the wasted resources attempting to hunt down a non-existent convoy reported to them by Pujol. He was moved to Britain on 24 April 1942 and given the code name "Bovril", after the drink concentrate. However, after he passed the security check conducted by MI6 Officer Desmond Bristow, Bristow suggested that he be accompanied by MI5 officer Tomás Harris (a fluent Spanish speaker) to brief Pujol on how he and Harris should work together. Pujol's wife and child were later moved to Britain.
Pujol operated as a double agent under the XX Committee's aegis; Cyril Mills was initially Bovril's case officer, but he spoke no Spanish and quickly dropped out of the picture. His main contribution was to suggest, after the truly extraordinary dimensions of Pujol's imagination and accomplishments had become apparent, that the code name should be changed as befitted 'the best actor in the world'; Bovril thus became "Garbo", after Greta Garbo. Mills passed the case over to the Spanish-speaking officer Harris.
Together, Harris and Pujol wrote 315 letters, averaging 2,000 words, addressed to a post-office box in Lisbon supplied by the Germans. His fictitious spy network was so efficient and verbose that his German handlers were overwhelmed and made no further attempts to recruit any additional spies in the UK, according to the Official History of British Intelligence in World War II.
The information supplied to German intelligence was a mixture of complete fiction, genuine information of little military value, and valuable military intelligence artificially delayed. In November 1942, just before the Operation Torch landings in North Africa, Garbo's agent on the River Clyde reported that a convoy of troopships and warships had left port, painted in Mediterranean camouflage. While the letter was sent by airmail and postmarked before the landings, it was deliberately delayed by British Intelligence in order to arrive too late to be useful. Pujol received a reply stating "we are sorry they arrived too late but your last reports were magnificent."
Pujol had been supposedly communicating with the Germans via a courier, a Royal Dutch Airlines (KLM) pilot willing to carry messages to and from Lisbon for cash. This meant that message deliveries were limited to the KLM flight schedule. In 1943, responding to German requests for speedier communication, Pujol and Harris created a fictitious radio operator. From August 1943 radio became the preferred method of communication.
On occasion, he had to invent reasons why his agents had failed to report easily available information that the Germans would eventually know about. For example, he reported that his (fabricated) Liverpool agent had fallen ill just before a major fleet movement from that port, and so was unable to report the event. To support this story, the agent eventually 'died' and an obituary was placed in the local newspaper as further evidence to convince the Germans. The Germans were also persuaded to pay a pension to the agent's widow.
For radio communication, "Alaric" needed the strongest hand encryption the Germans had. The Germans provided Garbo with this system, which was in turn supplied to the codebreakers at Bletchley Park. Garbo's encrypted messages were to be received in Madrid, manually decrypted, and re-encrypted with an Enigma machine for retransmission to Berlin. Having both the original text and the Enigma-encoded intercept of it, the codebreakers had the best possible source material for a chosen-plaintext attack on the Germans' Enigma key.
Operation Fortitude
In January 1944, the Germans told Pujol that they believed a large-scale invasion in Europe was imminent and asked to be kept informed. This invasion was Operation Overlord, and Pujol played a leading role in Operation Fortitude, the deception campaign to conceal Overlord. He sent over 500 radio messages between January 1944 and D-Day, at times more than twenty messages per day. During planning for the Normandy beach invasion, the Allies decided that it was vitally important that the German leaders be misled into believing that the landing would happen at the Strait of Dover.
In order to maintain his credibility, it was decided that Garbo (or one of his agents) should forewarn the Germans of the timing and some details of the actual invasion of Normandy, although sending it too late for them to take effective action. Special arrangements were made with the German radio operators to be listening to Garbo through the night of 5/6 June 1944, using the story that a sub-agent was about to arrive with important information. However, when the call was made at 3 AM, no reply was received from the German operators until 8 AM. This enabled Garbo to add more, genuine but now out-of-date, operational details to the message when finally received, and thus increase his standing with the Germans. Garbo told his German contacts that he was disgusted that his first message was missed, saying "I cannot accept excuses or negligence. Were it not for my ideals I would abandon the work."
On 9 June—three days after D-Day—Garbo sent a message to German intelligence that was passed to Adolf Hitler and the Oberkommando der Wehrmacht (OKW; German High Command). Garbo said that he had conferred with his top agents and developed an order of battle showing 75 divisions in Britain; in reality, there were only about 50. Part of the "Fortitude" plan was to convince the Germans that a fictitious formation—First U.S. Army Group, comprising 11 divisions (150,000 men), commanded by General George Patton—was stationed in southeast Britain.
The deception was supported by fake planes, inflatable tanks, and vans travelling about the area transmitting bogus radio chatter. Garbo's message pointed out that units from this formation had not participated in the invasion, and therefore the first landing should be considered a diversion. A German message to Madrid sent two days later said "all reports received in the last week from Arabel [spy network codename] undertaking have been confirmed without exception and are to be described as especially valuable." A post-war examination of German records found that, during Operation Fortitude, no fewer than sixty-two of Pujol's reports were included in OKW intelligence summaries.
OKW accepted Garbo's reports so completely that they kept two armoured divisions and 19 infantry divisions in the Pas de Calais waiting for a second invasion through July and August 1944. The German Commander-in-Chief in the west, Field Marshal Gerd von Rundstedt, refused to allow General Erwin Rommel to move these divisions to Normandy. There were more German troops in the Pas de Calais region two months after the Normandy invasion than there had been on D-Day.
In late June, Garbo was instructed by the Germans to report on the falling of V-1 flying bombs. Finding no way of giving false information without arousing suspicion, and being unwilling to give correct information, Harris arranged for Garbo to be "arrested". He returned to duty a few days later, now having a "need" to avoid London, and forwarded an "official" letter of apology from the Home Secretary for his unlawful detention.
The Germans paid Pujol US$340,000 over the course of the war to support his network of agents, which at one point totalled 27 fabricated characters.
Honours
As Alaric, he was awarded the Iron Cross Second Class on 29 July 1944, for his services to the German war effort. The award was normally reserved for front-line fighting men and required Hitler's personal authorisation. The Iron Cross was presented via radio.
As Garbo, he received an MBE from King George VI on 25 November 1944. The Nazis never realised they had been fooled, and thus Pujol earned the distinction of being one of the few, if not the only one, to receive decorations from both sides during World War II.
After the war
After the Second World War, Pujol feared reprisals from surviving Nazis. With the help of MI5, Pujol travelled to Angola and faked his death from malaria in 1949. He then moved to Lagunillas, Venezuela, where he lived in relative anonymity running a bookstore and gift shop.
Pujol divorced his first wife and married Carmen Cilia, with whom he had two sons, Carlos Miguel and Joan Carlos, and a daughter who died in 1975 at the age of 20. By 1984, Pujol had moved to his son Carlos Miguel's house in La Trinidad, Caracas.
In 1971, the British politician Rupert Allason, writing under the pen name Nigel West, became interested in Garbo. For several years, he interviewed various former intelligence officers, but none knew Garbo's real name. Eventually, Tomás Harris' friend Anthony Blunt, the Soviet spy who had penetrated MI5, said that he had met Garbo, and knew him as "either Juan or José García". Allason's investigation was stalled from that point until March 1984, when a former MI5 officer who had served in Spain supplied Pujol's full name. Allason hired a research assistant to call every J. García—an extremely common name in Spain—in the Barcelona phone book, eventually contacting Pujol's nephew. Pujol and Allason finally met in New Orleans on 20 May 1984.
At Allason's urging, Pujol travelled to London and was received by Prince Philip at Buckingham Palace, in an unusually long audience. After that he visited the Special Forces Club and was reunited with a group of his former colleagues, including T. A. Robertson, Roger Fleetwood Hesketh, Cyril Mills and Desmond Bristow.
On the 40th anniversary of D-Day, 6 June 1984, Pujol travelled to Normandy to tour the beaches and pay his respects to the dead.
Pujol died in Caracas in 1988 and is buried in Choroní, a town inside Henri Pittier National Park by the Caribbean Sea.
Network of fictitious agents
Each of Pujol's fictitious agents was tasked with recruiting additional sub-agents.
Popular culture
Literature and music
The Counterfeit Spy (1971), by the British journalist Sefton Delmer; Pujol's name was changed to "Jorge Antonio" in order to protect his surviving family.
The Eldorado Network (1979), by the British novelist Derek Robinson published six years before Nigel West's non-fiction account.
Overlord, Underhand (2013), by the American author Robert P. Wells is a fictionalised retelling of the story of Juan Pujol (Agent Garbo), double agent with MI5, from the Spanish Civil War to 1944;
Quicksand (1971), a song by David Bowie on the Hunky Dory album makes reference to him ("I'm the twisted name on Garbo's eyes").
Film and television
Garbo: The Spy. Documentary film, directed by Edmon Roch. Production: Ikiru Films, Colose Producciones, Centuria Films, Spain 2009.
The Man Who Fooled the Nazis. The 90-minute Spanish documentary, retitled and narrated in English, shown as part of the Storyville series, first broadcast on BBC Four, 22 February 2011.
Secret D-Day (US television, 1998), with Pujol portrayed by French actor Sam Spiegel.
Garbo – Master of Deception (1992), a 30-minute Columbia House and A&E documentary.
Garbo feature films have been attempted on several occasions, but none have reached production to date.
See also
Spain in World War II
References
Bibliography
External links
1912 births
1988 deaths
Double agents
Double-Cross System
Honorary Members of the Order of the British Empire
People from Barcelona
People who faked their own death
Recipients of the Iron Cross (1939), 2nd class
Spanish emigrants to Venezuela
Spanish spies
World War II spies for Germany
World War II spies for the United Kingdom
Spanish soldiers
Spanish people of World War II
Spanish anti-fascists
Spanish military personnel of the Spanish Civil War (Republican faction)
Spanish anti-communists
Spanish military personnel of the Spanish Civil War (National faction)
Deserters |
33139 | https://en.wikipedia.org/wiki/World%20Wide%20Web | World Wide Web | The World Wide Web (WWW), commonly known as the Web, is an information system where documents and other web resources are identified by Uniform Resource Locators (URLs), which may be interlinked by hyperlinks, and are accessible over the Internet. The resources of the Web are transferred via the Hypertext Transfer Protocol (HTTP), may be accessed by users by a software application called a web browser, and are published by a software application called a web server. The World Wide Web is built on top of the Internet, which pre-dated the Web by over two decades.
English scientist Tim Berners-Lee co-invented the World Wide Web in 1989 along with Robert Cailliau. He wrote the first web browser in 1990 while employed at CERN near Geneva, Switzerland. The browser was released outside CERN to other research institutions starting in January 1991, and then to the general public in August 1991. The Web began to enter everyday use in 1993–1994, when websites for general use started to become available. The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet.
Web resources may be any type of downloaded media, but web pages are hypertext documents formatted in Hypertext Markup Language (HTML). Special HTML syntax displays embedded hyperlinks with URLs, which permits users to navigate to other web resources. In addition to text, web pages may contain references to images, video, audio, and software components, which are either displayed or internally executed in the user's web browser to render pages or streams of multimedia content.
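As a small illustration of how these pieces fit together, the Python sketch below fetches a resource by its URL over HTTP and lists the hyperlink targets embedded in the returned HTML. The URL shown is only a placeholder example; any publicly reachable page would serve.

```python
# Fetch a web page by URL over HTTP, then extract the hyperlink URLs from its HTML.
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> (anchor/hyperlink) element."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                                    # <a href="..."> is HTML's hyperlink
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

with urlopen("https://example.com/") as response:         # HTTP GET of the URL
    html = response.read().decode("utf-8", errors="replace")

parser = LinkExtractor()
parser.feed(html)                                          # parse the HTML document
print(parser.links)                                        # URLs this page links to
```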
Multiple web resources with a common theme and usually a common domain name make up a website. Websites are stored on computers that are running a web server, which is a program that responds to requests made over the Internet from web browsers running on a user's computer. Website content can be provided by a publisher or interactively from user-generated content. Websites are provided for a myriad of informative, entertainment, commercial, and governmental reasons.
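On the other side of the exchange, a web server in its simplest form is just a program that answers HTTP requests with documents. The sketch below uses only Python's standard library to serve one small HTML page; the port number and page content are arbitrary choices for illustration.

```python
# Minimal web server sketch: answer every HTTP GET with a single HTML page.
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<html><body><h1>Hello, Web</h1><a href='/other'>a hyperlink</a></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):                                     # called once per HTTP GET request
        self.send_response(200)                           # status line: 200 OK
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)                            # response body: an HTML document

if __name__ == "__main__":
    # While this runs, a browser pointed at http://localhost:8000/ will render the page.
    HTTPServer(("", 8000), Handler).serve_forever()
```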
History
The underlying concept of hypertext originated in previous projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University, Ted Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based memex, which was described in the 1945 essay "As We May Think". Tim Berners-Lee's vision of a global hyperlinked information system became a possibility by the second half of the 1980s. By 1985, the global Internet began to proliferate in Europe and the Domain Name System (upon which the Uniform Resource Locator is built) came into being. In 1988 the first direct IP connection between Europe and North America was made and Berners-Lee began to openly discuss the possibility of a web-like system at CERN.
While working at CERN, Berners-Lee became frustrated with the inefficiencies and difficulties posed by finding information stored on different computers. On 12 March 1989, he submitted a memorandum, titled "Information Management: A Proposal", to the management at CERN for a system called "Mesh" that referenced ENQUIRE, a database and software project he had built in 1980, which used the term "web" and described a more elaborate information management system based on links embedded as text: "Imagine, then, the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document, you could skip to them with a click of the mouse." Such a system, he explained, could be referred to using one of the existing meanings of the word hypertext, a term that he says was coined in the 1950s. There is no reason, the proposal continues, why such hypertext links could not encompass multimedia documents including graphics, speech and video, so that Berners-Lee goes on to use the term hypermedia.
With help from his colleague and fellow hypertext enthusiast Robert Cailliau he published a more formal proposal on 12 November 1990 to build a "Hypertext project" called "WorldWideWeb" (one word, abbreviated "W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture. At this point HTML and HTTP had already been in development for about two months and the first Web server was about a month from completing its first successful test. This proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, [so that] authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available". While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, WebDAV, blogs, Web 2.0 and RSS/Atom.
The proposal was modelled after the SGML reader Dynatext by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext system, licensed by CERN, was a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration. A NeXT Computer was used by Berners-Lee as the world's first web server and also to write the first web browser in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first web browser (WorldWideWeb, which was a web editor as well) and the first web server. The first website, which described the project itself, was published on 20 December 1990.
The first web page may be lost, but Paul Jones of UNC-Chapel Hill in North Carolina announced in May 2013 that Berners-Lee gave him what he says is the oldest known web page during a visit to UNC in 1991. Jones stored it on a magneto-optical drive and his NeXT computer. On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on the newsgroup alt.hypertext. This date is sometimes confused with the public availability of the first web servers, which had occurred months earlier. As another example of such confusion, some news media reported that the first photo on the Web was published by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes taken by Silvano de Gennaro; Gennaro has disclaimed this story, writing that media were "totally distorting our words for the sake of cheap sensationalism".
The first server outside Europe was installed in December 1991 at the Stanford Linear Accelerator Center (SLAC) in Palo Alto, California, to host the SPIRES-HEP database.
Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested to members of both technical communities that a marriage between the two technologies was possible. But, when no one took up his invitation, he finally assumed the project himself. In the process, he developed three essential technologies:
a system of globally unique identifiers for resources on the Web and elsewhere, the universal document identifier (UDI), later known as uniform resource locator (URL) and uniform resource identifier (URI);
the publishing language Hypertext Markup Language (HTML);
the Hypertext Transfer Protocol (HTTP).
The World Wide Web had several differences from other hypertext systems available at the time. The Web required only unidirectional links rather than bidirectional ones, making it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn, presented the chronic problem of link rot. Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions. On 30 April 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due. Coming two months after the announcement that the server implementation of the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and toward the Web. An early popular web browser was ViolaWWW for Unix and the X Window System.
The Web began to enter general use in 1993–1994, when websites for everyday use started to become available. Historians generally agree that a turning point for the Web began with the 1993 introduction of Mosaic, a graphical web browser developed at the National Center for Supercomputing Applications at the University of Illinois at Urbana–Champaign (NCSA-UIUC). The development was led by Marc Andreessen, while funding came from the US High-Performance Computing and Communications Initiative and the High Performance Computing Act of 1991, one of several computing developments initiated by US Senator Al Gore. Before the release of Mosaic, graphics were not commonly mixed with text in web pages, and the Web was less popular than older protocols such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the Web to become by far the most popular protocol on the Internet. The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet; a year later, a second site was founded at INRIA (a French national computer research lab) with support from the European Commission DG InfSo; and in 1996, a third continental site was created in Japan at Keio University. By the end of 1994, the total number of websites was still relatively small, but many notable websites were already active that foreshadowed or inspired today's most popular services.
Connected by the Internet, other websites were created around the world. This motivated international standards development for protocols and formatting. Berners-Lee continued to stay involved in guiding the development of web standards, such as the markup languages to compose web pages and he advocated his vision of a Semantic Web. The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularising use of the Internet. Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet. The Web is an information space containing hyperlinked documents and other resources, identified by their URIs. It is implemented as both client and server software using Internet protocols such as TCP/IP and HTTP.
Berners-Lee was knighted in 2004 by Queen Elizabeth II for "services to the global development of the Internet". He never patented his invention.
Function
The terms Internet and World Wide Web are often used without much distinction. However, the two terms do not mean the same thing. The Internet is a global system of computer networks interconnected through telecommunications and optical networking. In contrast, the World Wide Web is a global collection of documents and other resources, linked by hyperlinks and URIs. Web resources are accessed using HTTP or HTTPS, which are application-level Internet protocols that use the Internet's transport protocols.
Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser or by following a hyperlink to that page or resource. The web browser then initiates a series of background communication messages to fetch and display the requested page. In the 1990s, using a browser to view web pages—and to move from one web page to another through hyperlinks—came to be known as 'browsing,' 'web surfing' (after channel surfing), or 'navigating the Web'. Early studies of this new behavior investigated user patterns in using web browsers. One study, for example, found five user patterns: exploratory surfing, window surfing, evolved surfing, bounded navigation and targeted navigation.
The following example demonstrates the functioning of a web browser when accessing a page at the URL http://example.org/home.html. The browser resolves the server name of the URL (example.org) into an Internet Protocol address using the globally distributed Domain Name System (DNS). This lookup returns an IP address such as 203.0.113.4 or 2001:db8:2e::7334. The browser then requests the resource by sending an HTTP request across the Internet to the computer at that address. It requests service from a specific TCP port number that is well known for the HTTP service, so that the receiving host can distinguish an HTTP request from other network protocols it may be servicing. HTTP normally uses port number 80 and HTTPS normally uses port number 443. The content of the HTTP request can be as simple as two lines of text:
GET /home.html HTTP/1.1
Host: example.org
The computer receiving the HTTP request delivers it to web server software listening for requests on port 80. If the webserver can fulfill the request it sends an HTTP response back to the browser indicating success:
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
followed by the content of the requested page. Hypertext Markup Language (HTML) for a basic web page might look like this:
<html>
<head>
<title>Example.org – The World Wide Web</title>
</head>
<body>
<p>The World Wide Web, abbreviated as WWW and commonly known ...</p>
</body>
</html>
The web browser parses the HTML and interprets the markup (<title>, <p> for paragraph, and such) that surrounds the words to format the text on the screen. Many web pages use HTML to reference the URLs of other resources such as images, other embedded media, scripts that affect page behaviour, and Cascading Style Sheets that affect page layout. The browser makes additional HTTP requests to the web server for these other Internet media types. As it receives their content from the web server, the browser progressively renders the page onto the screen as specified by its HTML and these additional resources.
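As an illustrative sketch (not part of any specification), the lookup-and-request sequence described above can be reproduced with Python's standard library; the host name example.org and the path /home.html mirror the example, and the addresses returned depend on the DNS resolver in use:

import socket
from http.client import HTTPConnection

# Resolve the server name to an IP address via the system's DNS resolver.
addresses = socket.getaddrinfo("example.org", 80, proto=socket.IPPROTO_TCP)
ip_address = addresses[0][4][0]          # e.g. an address like 203.0.113.4

# Send an HTTP GET request for /home.html to TCP port 80, as a browser would.
connection = HTTPConnection("example.org", 80)
connection.request("GET", "/home.html", headers={"Host": "example.org"})
response = connection.getresponse()

print(response.status, response.reason)               # e.g. 200 OK
print(response.getheader("Content-Type"))             # e.g. text/html; charset=UTF-8
html = response.read().decode("utf-8", errors="replace")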
HTML
Hypertext Markup Language (HTML) is the standard markup language for creating web pages and web applications. With Cascading Style Sheets (CSS) and JavaScript, it forms a triad of cornerstone technologies for the World Wide Web.
Web browsers receive HTML documents from a web server or from local storage and render the documents into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document.
HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects such as interactive forms may be embedded into the rendered page. HTML provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets. Tags such as <img /> and <input /> directly introduce content into the page. Other tags such as <p> surround and provide information about document text and may include other tags as sub-elements. Browsers do not display the HTML tags, but use them to interpret the content of the page.
HTML can embed programs written in a scripting language such as JavaScript, which affects the behavior and content of web pages. Inclusion of CSS defines the look and layout of content. The World Wide Web Consortium (W3C), maintainer of both the HTML and the CSS standards, has encouraged the use of CSS over explicit presentational HTML.
Linking
Most web pages contain hyperlinks to other related pages and perhaps to downloadable files, source documents, definitions and other web resources. In the underlying HTML, a hyperlink looks like this:
<a href="http://example.org/home.html">Example.org Homepage</a>
Such a collection of useful, related resources, interconnected via hypertext links is dubbed a web of information. Publication on the Internet created what Tim Berners-Lee first called the WorldWideWeb (in its original CamelCase, which was subsequently discarded) in November 1990.
The hyperlink structure of the web is described by the webgraph: the nodes of the web graph correspond to the web pages (or URLs) and the directed edges between them correspond to the hyperlinks. Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link rot, and the hyperlinks affected by it are often called dead links. The ephemeral nature of the Web has prompted many efforts to archive websites. The Internet Archive, active since 1996, is the best known of such efforts.
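As a small illustrative sketch (the page URLs are hypothetical), the webgraph can be represented as an adjacency structure in Python, and link rot then shows up as edges whose target can no longer be fetched:

import urllib.request
import urllib.error

# Nodes are pages (URLs); the directed edges are the hyperlinks each page contains.
webgraph = {
    "http://example.org/home.html": ["http://example.org/about.html",
                                     "http://example.org/old-page.html"],
    "http://example.org/about.html": ["http://example.org/home.html"],
}

def dead_links(graph):
    # Return the hyperlinks whose targets no longer resolve or respond (link rot).
    dead = []
    for page, links in graph.items():
        for target in links:
            try:
                urllib.request.urlopen(target, timeout=5)
            except urllib.error.URLError:
                dead.append((page, target))
    return dead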
WWW prefix
Many hostnames used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts according to the services they provide. The hostname of a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a Usenet news server. These hostnames appear as Domain Name System (DNS) or subdomain names, as in www.example.com. The use of www is not required by any technical or policy standard and many web sites do not use it; the first web server was nxoc01.cern.ch. According to Paolo Palazzi, who worked at CERN along with Tim Berners-Lee, the popular use of www as subdomain was accidental; the World Wide Web project page was intended to be published at www.cern.ch while info.cern.ch was intended to be the CERN home page; however the DNS records were never switched, and the practice of prepending www to an institution's website domain name was subsequently copied. Many established websites still use the prefix, or they employ other subdomain names such as www2, secure or en for special purposes. Many such web servers are set up so that both the main domain name (e.g., example.com) and the www subdomain (e.g., www.example.com) refer to the same site; others require one form or the other, or they may map to different web sites. The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since, currently, only a subdomain can be used in a CNAME, the same result cannot be achieved by using the bare domain root.
When a user submits an incomplete domain name to a web browser in its address bar input field, some web browsers automatically try adding the prefix "www" to the beginning of it and possibly ".com", ".org" and ".net" at the end, depending on what might be missing. For example, entering "microsoft" may be transformed to http://www.microsoft.com/ and "openoffice" to http://www.openoffice.org. This feature started appearing in early versions of Firefox, when it still had the working title 'Firebird' in early 2003, from an earlier practice in browsers such as Lynx. It is reported that Microsoft was granted a US patent for the same idea in 2008, but only for mobile devices.
In English, www is usually read as double-u double-u double-u. Some users pronounce it dub-dub-dub, particularly in New Zealand. Stephen Fry, in his "Podgrams" series of podcasts, pronounces it wuh wuh wuh. The English writer Douglas Adams once quipped in The Independent on Sunday (1999): "The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for". In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng (万维网), which satisfies www and literally means "myriad-dimensional net", a translation that reflects the design concept and proliferation of the World Wide Web. Tim Berners-Lee's web-space states that World Wide Web is officially spelled as three separate words, each capitalised, with no intervening hyphens. Use of the www prefix has been declining, especially when Web 2.0 web applications sought to brand their domain names and make them easily pronounceable.
As the mobile Web grew in popularity, services like Gmail.com, Outlook.com, Myspace.com, Facebook.com and Twitter.com came to be mentioned most often without adding "www." (or, indeed, ".com") to the domain.
Scheme specifiers
The scheme specifiers http:// and https:// at the start of a web URI refer to Hypertext Transfer Protocol or HTTP Secure, respectively. They specify the communication protocol to use for the request and response. The HTTP protocol is fundamental to the operation of the World Wide Web, and the added encryption layer in HTTPS is essential when browsers send or retrieve confidential data, such as passwords or banking information. Web browsers usually automatically prepend http:// to user-entered URIs, if omitted.
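A minimal sketch of this scheme handling, assuming http:// as the fallback (modern browsers increasingly default to https:// instead); the function name is illustrative:

from urllib.parse import urlparse

def normalise_address(typed):
    # Prepend a default scheme when the user omits http:// or https://.
    if urlparse(typed).scheme in ("http", "https"):
        return typed
    return "http://" + typed

print(normalise_address("example.org/home.html"))    # http://example.org/home.html
print(normalise_address("https://example.org/"))     # left unchanged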
Pages
A web page (also written as webpage) is a document that is suitable for the World Wide Web and web browsers. A web browser displays a web page on a monitor or mobile device.
The term web page usually refers to what is visible, but may also refer to the contents of the computer file itself, which is usually a text file containing hypertext written in HTML or a comparable markup language. Typical web pages provide hypertext for browsing to other web pages via hyperlinks, often referred to as links. Web browsers will frequently have to access multiple web resource elements, such as reading style sheets, scripts, and images, while presenting each web page.
On a network, a web browser can retrieve a web page from a remote web server. The web server may restrict access to a private network such as a corporate intranet. The web browser uses the Hypertext Transfer Protocol (HTTP) to make such requests to the web server.
A static web page is delivered exactly as stored, as web content in the web server's file system. In contrast, a dynamic web page is generated by a web application, usually driven by server-side software. Dynamic web pages are used when each user may require completely different information, for example, bank websites, web email etc.
Static page
A static web page (sometimes called a flat page/stationary page) is a web page that is delivered to the user exactly as stored, in contrast to dynamic web pages which are generated by a web application.
Consequently, a static web page displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so.
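Because a static page is delivered exactly as stored, serving one requires no page-generation logic at all; a minimal sketch using Python's built-in HTTP server, which delivers the files in the current directory as-is:

from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve ./index.html, ./style.css and so on at http://localhost:8000/,
# exactly as the files are stored on disk.
HTTPServer(("localhost", 8000), SimpleHTTPRequestHandler).serve_forever()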
Dynamic pages
A server-side dynamic web page is a web page whose construction is controlled by an application server processing server-side scripts. In server-side scripting, parameters determine how the assembly of every new web page proceeds, including the setting up of more client-side processing.
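A minimal sketch of a server-side dynamic page in Python: the HTML is assembled anew for every request, here varying only with the current time (a real application server would also read request parameters, consult databases, and so on):

import time
from http.server import HTTPServer, BaseHTTPRequestHandler

class DynamicPageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Assemble the HTML afresh for every request.
        body = ("<html><body><p>Page generated at "
                + time.strftime("%H:%M:%S") + "</p></body></html>").encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=UTF-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8080), DynamicPageHandler).serve_forever()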
A client-side dynamic web page processes the web page using JavaScript running in the browser. JavaScript programs can interact with the document via the Document Object Model (DOM) to query page state and alter it; the browser then re-renders the page to reflect those changes.
A dynamic web page is then reloaded by the user or by a computer program to change some variable content. The updating information could come from the server, or from changes made to that page's DOM. This may or may not truncate the browsing history or create a saved version to go back to, but a dynamic web page update using Ajax technologies will neither create a page to go back to nor truncate the web browsing history forward of the displayed page. Using Ajax technologies the end user gets one dynamic page managed as a single page in the web browser while the actual web content rendered on that page can vary. The Ajax engine runs in the browser and requests from an application server only the parts of the DOM that need updating.
Dynamic HTML, or DHTML, is the umbrella term for technologies and methods used to create web pages that are not static web pages, though it has fallen out of common use since the popularization of AJAX, a term which is now itself rarely used. Client-side scripting, server-side scripting, or a combination of these make for the dynamic web experience in a browser.
JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages. The standardised version is ECMAScript. To make web pages more interactive, some web applications also use JavaScript techniques such as Ajax (asynchronous JavaScript and XML). Client-side script is delivered with the page that can make additional HTTP requests to the server, either in response to user actions such as mouse movements or clicks, or based on elapsed time. The server's responses are used to modify the current page rather than creating a new page with each response, so the server needs only to provide limited, incremental information. Multiple Ajax requests can be handled at the same time, and users can interact with the page while data is retrieved. Web pages may also regularly poll the server to check whether new information is available.
Website
A website is a collection of related web resources including web pages, multimedia content, typically identified with a common domain name, and published on at least one web server. Notable examples are wikipedia.org, google.com, and amazon.com.
A website may be accessible via a public Internet Protocol (IP) network, such as the Internet, or a private local area network (LAN), by referencing a uniform resource locator (URL) that identifies the site.
Websites can have many functions and can be used in various fashions; a website can be a personal website, a corporate website for a company, a government website, an organization website, etc. Websites are typically dedicated to a particular topic or purpose, ranging from entertainment and social networking to providing news and education. All publicly accessible websites collectively constitute the World Wide Web, while private websites, such as a company's website for its employees, are typically a part of an intranet.
Web pages, which are the building blocks of websites, are documents, typically composed in plain text interspersed with formatting instructions of Hypertext Markup Language (HTML, XHTML). They may incorporate elements from other websites with suitable markup anchors. Web pages are accessed and transported with the Hypertext Transfer Protocol (HTTP), which may optionally employ encryption (HTTP Secure, HTTPS) to provide security and privacy for the user. The user's application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal.
Hyperlinking between web pages conveys to the reader the site structure and guides the navigation of the site, which often starts with a home page containing a directory of the site web content. Some websites require user registration or subscription to access content. Examples of subscription websites include many business sites, news websites, academic journal websites, gaming websites, file-sharing websites, message boards, web-based email, social networking websites, websites providing real-time stock market data, as well as sites providing various other services. End users can access websites on a range of devices, including desktop and laptop computers, tablet computers, smartphones and smart TVs.
Browser
A web browser (commonly referred to as a browser) is a software user agent for accessing information on the World Wide Web. To connect to a website's server and display its pages, a user needs to have a web browser program. This is the program that the user runs to download, format, and display a web page on the user's computer.
In addition to allowing users to find, display, and move between web pages, a web browser will usually have features like keeping bookmarks, recording history, managing cookies (see below), and home pages and may have facilities for recording passwords for logging into web sites.
The most popular browsers are Chrome, Firefox, Safari, Internet Explorer, and Edge.
Server
A Web server is server software, or hardware dedicated to running said software, that can satisfy World Wide Web client requests. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several other related protocols.
The primary function of a web server is to store, process and deliver web pages to clients. The communication between client and server takes place using the Hypertext Transfer Protocol (HTTP). Pages delivered are most frequently HTML documents, which may include images, style sheets and scripts in addition to the text content.
A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the webserver is implemented.
While the primary function is to serve content, full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files.
Many generic web servers also support server-side scripting using Active Server Pages (ASP), PHP (Hypertext Preprocessor), or other scripting languages. This means that the behavior of the webserver can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to generate HTML documents dynamically ("on-the-fly") as opposed to returning static documents. The former is primarily used for retrieving or modifying information from databases. The latter is typically much faster and more easily cached but cannot deliver dynamic content.
Web servers can also frequently be found embedded in devices such as printers, routers, webcams and serving only a local network. The web server may then be used as a part of a system for monitoring or administering the device in question. This usually means that no additional software has to be installed on the client computer since only a web browser is required (which now is included with most operating systems).
Cookie
An HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie) is a small piece of data sent from a website and stored on the user's computer by the user's web browser while the user is browsing. Cookies were designed to be a reliable mechanism for websites to remember stateful information (such as items added in the shopping cart in an online store) or to record the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past). They can also be used to remember arbitrary pieces of information that the user previously entered into form fields such as names, addresses, passwords, and credit card numbers.
Cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with. Without such a mechanism, the site would not know whether to send a page containing sensitive information or require the user to authenticate themselves by logging in. The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by a hacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples).
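A sketch of how a server might issue and later check an authentication cookie, using Python's standard http.cookies module; the cookie name session_id and the token store are illustrative, and a real implementation would add secure token generation, expiry, and the Secure attribute:

from http.cookies import SimpleCookie

# Illustrative store of tokens issued at login; a real site would persist and expire these.
valid_sessions = {"3f9a1c7e": "alice"}

def set_login_cookie(token):
    # Build the Set-Cookie header value sent to the browser after a successful login.
    cookie = SimpleCookie()
    cookie["session_id"] = token
    cookie["session_id"]["httponly"] = True
    return cookie["session_id"].OutputString()       # e.g. session_id=3f9a1c7e; HttpOnly

def logged_in_user(cookie_header):
    # Parse the Cookie request header and look up which account, if any, is logged in.
    cookie = SimpleCookie()
    cookie.load(cookie_header)
    morsel = cookie.get("session_id")
    return valid_sessions.get(morsel.value) if morsel else None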
Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories, a potential privacy concern that prompted European and U.S. lawmakers to take action in 2011. European law requires that all websites targeting European Union member states gain "informed consent" from users before storing non-essential cookies on their device.
Google Project Zero researcher Jann Horn describes ways cookies can be read by intermediaries, like Wi-Fi hotspot providers. He recommends using the browser in incognito mode in such circumstances.
Search engine
A web search engine or Internet search engine is a software system that is designed to carry out web search (Internet search), which means to search the World Wide Web in a systematic way for particular information specified in a web search query. The search results are generally presented in a line of results, often referred to as search engine results pages (SERPs). The information may be a mix of web pages, images, videos, infographics, articles, research papers, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler.
Internet content that is not capable of being searched by a web search engine is generally described as the deep web.
Deep web
The deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not indexed by standard web search engines. The opposite term to the deep web is the surface web, which is accessible to anyone using the Internet. Computer scientist Michael K. Bergman is credited with coining the term deep web in 2001 as a search indexing term.
The content of the deep web is hidden behind HTTP forms and includes many very common uses such as web mail, online banking, and services that users must pay for and which are protected by a paywall, such as video on demand and some online magazines and newspapers.
The content of the deep web can be located and accessed by a direct URL or IP address, and may require a password or other security access past the public website page.
Caching
A web cache is a server computer located either on the public Internet or within an enterprise that stores recently accessed web pages to improve response time for users when the same content is requested within a certain time after the original request. Most web browsers also implement a browser cache by writing recently obtained data to a local data storage device. HTTP requests by a browser may ask only for data that has changed since the last access. Web pages and resources may contain expiration information to control caching to secure sensitive data, such as in online banking, or to facilitate frequently updated sites, such as news media. Even sites with highly dynamic content may permit basic resources to be refreshed only occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that they can be cached efficiently. Enterprise firewalls often cache Web resources requested by one user for the benefit of many users. Some search engines store cached content of frequently accessed websites.
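A minimal sketch of the caching idea in Python: fetched responses are kept with the time they were obtained and reused until an expiry interval passes, after which the origin server is contacted again (real web caches follow the much richer HTTP Cache-Control and validation rules):

import time
import urllib.request

CACHE_SECONDS = 300      # how long a stored copy is treated as fresh
_cache = {}              # url -> (time fetched, response body)

def fetch(url):
    # Return the cached body if it is still fresh; otherwise refetch and store it.
    cached = _cache.get(url)
    if cached and time.time() - cached[0] < CACHE_SECONDS:
        return cached[1]                                # served from the cache
    body = urllib.request.urlopen(url, timeout=10).read()
    _cache[url] = (time.time(), body)
    return body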
Security
For criminals, the Web has become a venue to spread malware and engage in a range of cybercrimes, including (but not limited to) identity theft, fraud, espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns, and as measured by Google, about one in ten web pages may contain malicious code. Most web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia. The most common of all malware threats are SQL injection attacks against websites. Through HTML and URIs, the Web was vulnerable to attacks like cross-site scripting (XSS) that came with the introduction of JavaScript and were exacerbated to some degree by Web 2.0 and Ajax web design that favours the use of scripts. Today, by one estimate, 70% of all websites are open to XSS attacks on their users. Phishing is another common threat to the Web. In February 2013, RSA (the security division of EMC) estimated the global losses from phishing at $1.5 billion in 2012. Two of the well-known phishing methods are Covert Redirect and Open Redirect.
Proposed solutions vary. Large security companies like McAfee already design governance and compliance suites to meet post-9/11 regulations, and some, like Finjan, have recommended active real-time inspection of programming code and all content regardless of its source. Some have argued that enterprises should see Web security as a business opportunity rather than a cost centre, while others call for "ubiquitous, always-on digital rights management" enforced in the infrastructure to replace the hundreds of companies that secure data and networks. Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet.
Privacy
Every time a client requests a web page, the server can identify the request's IP address. Web servers usually log IP addresses in a log file. Also, unless set not to do so, most web browsers record requested web pages in a viewable history feature, and usually cache much of the content locally. Unless the server-browser communication uses HTTPS encryption, web requests and responses travel in plain text across the Internet and can be viewed, recorded, and cached by intermediate systems. Another way to hide personally identifiable information is by using a virtual private network. A VPN encrypts online traffic and masks the original IP address, lowering the chance of user identification.
When a web page asks for, and the user supplies, personally identifiable information (such as their real name, address, or e-mail address), web-based entities can associate current web traffic with that individual. If the website uses HTTP cookies, username and password authentication, or other tracking techniques, it can relate other web visits, before and after, to the identifiable information provided. In this way, a web-based organization can develop and build a profile of the individual people who use its site or sites. It may be able to build a record for an individual that includes information about their leisure activities, their shopping interests, their profession, and other aspects of their demographic profile. These profiles are of potential interest to marketers, advertisers, and others. Depending on the website's terms and conditions and the local laws that apply, information from these profiles may be sold, shared, or passed to other organizations without the user being informed. For many ordinary people, this means little more than some unexpected e-mails in their in-box or some uncannily relevant advertising on a future web page. For others, it can mean that time spent indulging an unusual interest can result in a deluge of further targeted marketing that may be unwelcome. Law enforcement, counter-terrorism, and espionage agencies can also identify, target, and track individuals based on their interests or proclivities on the Web.
Social networking sites usually try to get users to use their real names, interests, and locations, rather than pseudonyms, as their executives believe that this makes the social networking experience more engaging for users. On the other hand, uploaded photographs or unguarded statements can be identified to an individual, who may regret this exposure. Employers, schools, parents, and other relatives may be influenced by aspects of social networking profiles, such as text posts or digital photos, that the posting individual did not intend for these audiences. Online bullies may make use of personal information to harass or stalk users. Modern social networking websites allow fine-grained control of the privacy settings for each posting, but these can be complex and not easy to find or use, especially for beginners. Photographs and videos posted onto websites have caused particular problems, as they can add a person's face to an online profile. With modern and potential facial recognition technology, it may then be possible to relate that face with other, previously anonymous, images, events, and scenarios that have been imaged elsewhere. Due to image caching, mirroring, and copying, it is difficult to remove an image from the World Wide Web.
Standards
Web standards include many interdependent standards and specifications, some of which govern aspects of the Internet, not just the World Wide Web. Even when not web-focused, such standards directly or indirectly affect the development and administration of websites and web services. Considerations include the interoperability, accessibility and usability of web pages and web sites.
Web standards, in the broader sense, consist of the following:
Recommendations published by the World Wide Web Consortium (W3C)
"Living Standard" made by the Web Hypertext Application Technology Working Group (WHATWG)
Request for Comments (RFC) documents published by the Internet Engineering Task Force (IETF)
Standards published by the International Organization for Standardization (ISO)
Standards published by Ecma International (formerly ECMA)
The Unicode Standard and various Unicode Technical Reports (UTRs) published by the Unicode Consortium
Name and number registries maintained by the Internet Assigned Numbers Authority (IANA)
Web standards are not fixed sets of rules but are constantly evolving sets of finalized technical specifications of web technologies. Web standards are developed by standards organizations—groups of interested and often competing parties chartered with the task of standardization—not technologies developed and declared to be a standard by a single individual or company. It is crucial to distinguish those specifications that are under development from the ones that already reached the final development status (in the case of W3C specifications, the highest maturity level).
Accessibility
There are methods for accessing the Web in alternative mediums and formats to facilitate use by individuals with disabilities. These disabilities may be visual, auditory, physical, speech-related, cognitive, neurological, or some combination. Accessibility features also help people with temporary disabilities, like a broken arm, or ageing users as their abilities change. The Web is used for receiving information as well as for providing information and interacting with society. The World Wide Web Consortium claims that it is essential that the Web be accessible, so it can provide equal access and equal opportunity to people with disabilities. Tim Berners-Lee once noted, "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect." Many countries regulate web accessibility as a requirement for websites. International co-operation in the W3C Web Accessibility Initiative led to simple guidelines that web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology.
Internationalisation
The W3C Internationalisation Activity assures that web technology works in all languages, scripts, and cultures. Beginning in 2004 or 2005, Unicode gained ground and eventually in December 2007 surpassed both ASCII and Western European as the Web's most frequently used character encoding. Originally, RFC 3986 allowed resources to be identified by URI in a subset of US-ASCII. RFC 3987 allows more characters (any character in the Universal Character Set), and now a resource can be identified by IRI in any language.
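A brief illustration in Python of how a non-ASCII IRI path maps onto a URI restricted to US-ASCII: the characters are encoded as UTF-8 and then percent-encoded (the path /万维网 is only an example):

from urllib.parse import quote, unquote

iri_path = "/万维网"              # an IRI path may contain any Unicode characters
uri_path = quote(iri_path)        # '/%E4%B8%87%E7%BB%B4%E7%BD%91' in the URI form
print(uri_path)
print(unquote(uri_path))          # decodes back to '/万维网'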
See also
Electronic publishing
Internet metaphors
Internet security
Lists of websites
Prestel
Streaming media
Web development tools
Web literacy
References
Further reading
Brügger, Niels, ed., Web25: Histories from the first 25 years of the World Wide Web (Peter Lang, 2017).
Niels Brügger, ed. Web History (2010) 362 pages; Historical perspective on the World Wide Web, including issues of culture, content, and preservation.
Skau, H.O. (March 1990). "The World Wide Web and Health Information". New Devices.
External links
The first website
Early archive of the first Web site
Internet Statistics: Growth and Usage of the Web and the Internet
Living Internet: a comprehensive history of the Internet, including the World Wide Web
World Wide Web Consortium (W3C)
W3C Recommendations Reduce "World Wide Wait"
World Wide Web Size: daily estimated size of the World Wide Web
Antonio A. Casilli, Some Elements for a Sociology of Online Interactions
The Erdős Webgraph Server offers a weekly updated graph representation of a constantly increasing fraction of the WWW
The 25th Anniversary of the World Wide Web is an animated video produced by USAID and TechChange which explores the role of the WWW in addressing extreme poverty
Computer-related introductions in 1989
English inventions
British inventions
Human–computer interaction
Information Age
CERN
Tim Berners-Lee
Web technology
20th-century inventions |
33143 | https://en.wikipedia.org/wiki/Wireless%20LAN | Wireless LAN | A wireless LAN (WLAN) is a wireless computer network that links two or more devices using wireless communication to form a local area network (LAN) within a limited area such as a home, school, computer laboratory, campus, or office building. This gives users the ability to move around within the area and remain connected to the network. Through a gateway, a WLAN can also provide a connection to the wider Internet.
Wireless LANs based on the IEEE 802.11 standards are the most widely used computer networks in the world. These are commonly called Wi-Fi, which is a trademark belonging to the Wi-Fi Alliance. They are used for home and small office networks that link together laptop computers, printers, smartphones, Web TVs and gaming devices with a wireless router, which links them to the internet. Hotspots provided by routers at restaurants, coffee shops, hotels, libraries, and airports allow consumers to access the internet with portable wireless devices.
History
Norman Abramson, a professor at the University of Hawaii, developed the world's first wireless computer communication network, ALOHAnet. The system became operational in 1971 and included seven computers deployed over four islands to communicate with the central computer on the Oahu island without using phone lines.
Wireless LAN hardware initially cost so much that it was only used as an alternative to cabled LAN in places where cabling was difficult or impossible. Early development included industry-specific solutions and proprietary protocols, but at the end of the 1990s these were replaced by technical standards, primarily the various versions of IEEE 802.11 (in products using the Wi-Fi brand name).
Beginning in 1991, a European alternative known as HiperLAN/1 was pursued by the European Telecommunications Standards Institute (ETSI) with a first version approved in 1996. This was followed by a HiperLAN/2 functional specification with ATM influences accomplished February 2000. Neither European standard achieved the commercial success of 802.11, although much of the work on HiperLAN/2 has survived in the physical specification (PHY) for IEEE 802.11a, which is nearly identical to the PHY of HiperLAN/2.
In 2009, 802.11n was added to 802.11. It operates in both the 2.4 GHz and 5 GHz bands at a maximum data transfer rate of 600 Mbit/s. Most newer routers are dual-band and able to utilize both wireless bands. This allows data communications to avoid the crowded 2.4 GHz band, which is also shared with Bluetooth devices and microwave ovens. The 5 GHz band also has more channels than the 2.4 GHz band, permitting a greater number of devices to share the space. Not all channels are available in all regions.
A HomeRF group formed in 1997 to promote a technology aimed for residential use, but it disbanded in January 2003.
Architecture
Stations
All components that can connect into a wireless medium in a network are referred to as stations. All stations are equipped with wireless network interface controllers. Wireless stations fall into two categories: wireless access points (WAPs), and clients. WAPs are base stations for the wireless network. They transmit and receive radio frequencies for wireless-enabled devices to communicate with. Wireless clients can be mobile devices such as laptops, personal digital assistants, VoIP phones and other smartphones, or non-portable devices such as desktop computers, printers, and workstations that are equipped with a wireless network interface.
Service set
The basic service set (BSS) is the set of all stations that can communicate with each other at the PHY layer. Every BSS has an identification (ID) called the BSSID, which is the MAC address of the access point servicing the BSS.
There are two types of BSS: the independent BSS (IBSS) and the infrastructure BSS. An independent BSS is an ad hoc network that contains no access points, which means its stations cannot connect to any other basic service set. In an IBSS, the stations (STAs) are configured in ad hoc (peer-to-peer) mode.
An extended service set (ESS) is a set of connected BSSs. Access points in an ESS are connected by a distribution system. Each ESS has an ID called the SSID which is a 32-byte (maximum) character string.
A distribution system (DS) connects access points in an extended service set. The concept of a DS can be used to increase network coverage through roaming between cells. DS can be wired or wireless. Current wireless distribution systems are mostly based on WDS or MESH protocols, though other systems are in use.
Types of wireless LANs
The IEEE 802.11 has two basic modes of operation: infrastructure and ad hoc mode. In ad hoc mode, mobile units communicate directly peer-to-peer. In infrastructure mode, mobile units communicate through a wireless access point (WAP) that also serves as a bridge to other networks such as a local area network or the Internet.
Since wireless communication uses a more open medium for communication in comparison to wired LANs, the 802.11 designers also included encryption mechanisms to secure wireless computer networks: Wired Equivalent Privacy (WEP), which is no longer considered secure, and Wi-Fi Protected Access (WPA, WPA2, WPA3). Many access points will also offer Wi-Fi Protected Setup, a quick, but no longer considered secure, method of joining a new device to an encrypted network.
Infrastructure
Most Wi-Fi networks are deployed in infrastructure mode. In infrastructure mode, wireless clients, such as laptops and smartphones, connect to the WAP to join the network. The WAP usually has a wired network connection and may have permanent wireless connections to other WAPs.
WAPs are usually fixed, and provide service to their client nodes within range. Some networks will have multiple WAPs, using the same SSID and security arrangement. In that case, connecting to any WAP on that network joins the client to the network and the client software will try to choose the WAP that gives the best service, such as the WAP with the strongest signal.
Peer-to-peer
An ad hoc network is a network where stations communicate only peer-to-peer (P2P). There is no base station, and no one gives permission to talk. This is accomplished using the Independent Basic Service Set (IBSS). A Wi-Fi Direct network is a different type of wireless network where stations communicate peer-to-peer.
In a Wi-Fi P2P group, the group owner operates as an access point and all other devices are clients. There are two main methods to establish a group owner in the Wi-Fi Direct group. In one approach, the user sets up a P2P group owner manually. This method is also known as autonomous group owner (autonomous GO). In the second method, called negotiation-based group creation, two devices compete based on the group owner intent value. The device with higher intent value becomes a group owner and the second device becomes a client. Group owner intent value can depend on whether the wireless device performs a cross-connection between an infrastructure WLAN service and a P2P group, available power in the wireless device, whether the wireless device is already a group owner in another group or a received signal strength of the first wireless device.
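A sketch of the negotiation-based group creation reduced to its intent comparison; the intent range 0-15 and the tie-breaker bit follow common descriptions of Wi-Fi Direct, while the example values are hypothetical:

def choose_group_owner(intent_a, intent_b, tie_breaker_a):
    # The device with the higher group owner intent value (0-15) becomes group owner;
    # on a tie, the tie-breaker bit sent by device A decides.
    if intent_a != intent_b:
        return "A" if intent_a > intent_b else "B"
    return "A" if tie_breaker_a else "B"

# Device A advertises a higher intent (for example, it has mains power and already
# cross-connects to an infrastructure WLAN), so it becomes the group owner.
print(choose_group_owner(10, 4, tie_breaker_a=False))    # prints A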
A peer-to-peer network allows wireless devices to communicate directly with each other. Wireless devices within range of each other can discover and communicate directly without involving central access points. This method is typically used by two computers so that they can connect to each other to form a network. It generally occurs between devices within close range of each other.
If a signal strength meter is used in this situation, it may not read the strength accurately and can be misleading, because it registers the strength of the strongest signal, which may be the closest computer.
IEEE 802.11 defines the physical layer (PHY) and MAC (Media Access Control) layers based on CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance). This is in contrast to Ethernet, which uses CSMA/CD (Carrier Sense Multiple Access with Collision Detection). The 802.11 specification includes provisions designed to minimize collisions, because two mobile units may both be in range of a common access point, but out of range of each other.
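A deliberately simplified sketch of the CSMA/CA idea in Python: a station senses the medium and, while it is busy, waits a random backoff before trying again (real 802.11 adds inter-frame spacing, acknowledgements, retransmission rules and optional RTS/CTS, none of which are modelled here; the 9 microsecond slot time is the value used by OFDM-based 802.11 variants):

import random
import time

def send_with_csma_ca(frame, channel_is_busy, transmit, max_attempts=7):
    # Carrier-sense, then transmit; back off for a random number of slots while busy.
    contention_window = 15                        # initial contention window, in slots
    for _ in range(max_attempts):
        if not channel_is_busy():
            transmit(frame)                       # medium idle: send the frame
            return True
        slots = random.randint(0, contention_window)
        time.sleep(slots * 9e-6)                  # wait a random number of 9 µs slots
        contention_window = min(2 * contention_window + 1, 1023)
    return False                                  # gave up; the medium stayed busy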
Bridge
A bridge can be used to connect networks, typically of different types. A wireless Ethernet bridge allows the connection of devices on a wired Ethernet network to a wireless network. The bridge acts as the connection point to the Wireless LAN.
Wireless distribution system
A wireless distribution system (WDS) enables the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the need for a wired backbone to link them, as is traditionally required. The notable advantage of a WDS over other solutions is that it preserves the MAC addresses of client packets across links between access points.
An access point can be either a main, relay or remote base station. A main base station is typically connected to the wired Ethernet. A relay base station relays data between remote base stations, wireless clients or other relay stations to either a main or another relay base station. A remote base station accepts connections from wireless clients and passes them to relay or main stations. Connections between clients are made using MAC addresses rather than by specifying IP assignments.
All base stations in a WDS must be configured to use the same radio channel, and share WEP keys or WPA keys if they are used. They can be configured to use different service set identifiers. WDS also requires that every base station be configured to forward to others in the system as mentioned above.
WDS capability may also be referred to as repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging). Throughput in this method is halved for all clients connected wirelessly.
When it is difficult to connect all of the access points in a network by wires, it is also possible to put up access points as repeaters.
Roaming
There are two definitions for wireless LAN roaming:
Internal roaming: The Mobile Station (MS) moves from one access point (AP) to another AP within a home network if the signal strength is too weak. An authentication server (RADIUS) performs the re-authentication of MS via 802.1x (e.g. with PEAP). The billing of QoS is in the home network. A Mobile Station roaming from one access point to another often interrupts the flow of data among the Mobile Station and an application connected to the network. The Mobile Station, for instance, periodically monitors the presence of alternative access points (ones that will provide a better connection). At some point, based on proprietary mechanisms, the Mobile Station decides to re-associate with an access point having a stronger wireless signal. The Mobile Station, however, may lose a connection with an access point before associating with another access point. In order to provide reliable connections with applications, the Mobile Station must generally include software that provides session persistence.
External roaming: The MS (client) moves into a WLAN of another Wireless Internet Service Provider (WISP) and takes their services (Hotspot). The user can use a foreign network independently from their home network, provided that the foreign network allows visiting users on their network. There must be special authentication and billing systems for mobile services in a foreign network.
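The internal roaming decision above hinges on comparing signal strengths; a sketch in Python of such a re-association choice, where the hysteresis margin is an assumption (real client implementations use proprietary thresholds):

def access_point_to_join(scan_results, current_ap, hysteresis_db=5):
    # scan_results maps access point identifiers to signal strength in dBm
    # (for instance, -58 dBm is stronger than -72 dBm).
    best = max(scan_results, key=scan_results.get)
    if current_ap is None:
        return best
    # Re-associate only if the best candidate is clearly stronger than the current AP.
    if scan_results[best] >= scan_results.get(current_ap, -100) + hysteresis_db:
        return best
    return current_ap

print(access_point_to_join({"ap-1": -72, "ap-2": -58}, current_ap="ap-1"))   # ap-2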
Applications
Wireless LANs have many applications. Modern implementations of WLANs range from small in-home networks to large, campus-sized ones to completely mobile networks on airplanes and trains.
Users can access the Internet from WLAN hotspots in restaurants, hotels, and now with portable devices that connect to 3G or 4G networks. Often, these types of public access points require no registration or password to join the network. Others can be accessed once registration has occurred or a fee is paid.
Existing Wireless LAN infrastructures can also be used to work as indoor positioning systems with no modification to the existing hardware.
See also
Wireless WAN
Wi-Fi Direct
References
Wireless networking
American inventions
Local area networks |
33821 | https://en.wikipedia.org/wiki/Whistleblower | Whistleblower | A whistleblower (also written as whistle-blower or whistle blower) is a person, usually an employee, who exposes information or activity within a private, public, or government organization that is deemed illegal, immoral, illicit, unsafe, fraudulent, or an abuse of taxpayer funds. Those who become whistleblowers can choose to bring information or allegations to the surface either internally or externally. Over 83% of whistleblowers report internally to a supervisor, human resources, compliance, or a neutral third party within the company, with the thought that the company will address and correct the issues. Externally, a whistleblower can bring allegations to light by contacting a third party outside of the organization such as the media, government, or law enforcement. The most common type of retaliation reported is being abruptly terminated. However, there are several other activities that are considered retaliatory, such as a sudden extreme increase in workload, having hours cut drastically, making task completion impossible, or other bullying measures. Because of this, a number of laws exist to protect whistleblowers. Some third-party groups even offer protection to whistleblowers, but that protection can only go so far. Two other classifications of whistleblowing are private and public. The classifications relate to the type of organization the whistleblower works in: the private sector or the public sector. Depending on many factors, both can have varying results. About 20% of whistleblowers are successful in stopping the illegal behaviors, usually through the legal system, with the help of a whistleblower attorney. For the whistleblower's claims to be credible and successful, the whistleblower must have compelling evidence to support their claims that the government or regulating body can use or investigate to "prove" such claims and hold corrupt companies and/or government agencies accountable.
Deeper questions and theories of whistleblowing and why people choose to do so can be studied through an ethical approach. Whistleblowing is a topic of several myths and inaccurate definitions. Leading arguments in the ideological camp maintain that whistleblowing is the most basic of ethical acts: simply telling the truth to stop illegal, harmful activities or fraud against the government and taxpayers. In the opposite camp, many corporations and corporate or government leaders see whistleblowing as being disloyal for breaching confidentiality, especially in industries that handle sensitive client or patient information. Legal counteractive measures exist to protect whistleblowers, but that protection is subject to many stipulations. Hundreds of laws grant protection to whistleblowers, but stipulations can easily cloud that protection and leave whistleblowers vulnerable to retaliation, sometimes even threats and physical harm. However, the decision and action have become far more complicated with recent advancements in technology and communication.
Overview
Origin of term
U.S. civic activist Ralph Nader is said to have coined the phrase, but he in fact put a positive spin on the term in the early 1970s to avoid the negative connotations found in other words such as "informer" and "snitch". However, the origins of the word date back to the 19th century.
The word is linked to the use of a whistle to alert the public or a crowd about a bad situation, such as the committing of a crime or the breaking of rules during a game. The phrase whistle blower attached itself to law enforcement officials in the 19th century because they used a whistle to alert the public or fellow police. Sports referees, who use a whistle to indicate an illegal or foul play, also were called whistle blowers.
An 1883 story in the Janesville Gazette called a policeman who used his whistle to alert citizens about a riot a whistle blower, without the hyphen. By the year 1963, the phrase had become a hyphenated word, whistle-blower. The word began to be used by journalists in the 1960s for people who revealed wrongdoing, such as Nader. It eventually evolved into the compound word whistleblower.
Internal
Most whistleblowers are internal whistleblowers, who report misconduct by a fellow employee or superior within their company through anonymous reporting mechanisms often called hotlines. One of the most interesting questions with respect to internal whistleblowers is why and under what circumstances people either act on the spot to stop illegal and otherwise unacceptable behavior or report it. There are some reasons to believe that people are more likely to take action with respect to unacceptable behavior within an organization if there are complaint systems that offer not just options dictated by the planning and control organization, but a choice of options for absolute confidentiality.
Anonymous reporting mechanisms, as mentioned previously, help foster a climate whereby employees are more likely to report or seek guidance regarding potential or actual wrongdoing without fear of retaliation. The anti-bribery management systems standard ISO 37001, published in 2016, includes anonymous reporting as one of the criteria for the standard.
External
External whistleblowers, however, report misconduct to outside people or entities. In these cases, depending on the information's severity and nature, whistleblowers may report the misconduct to lawyers, the media, law enforcement or watchdog agencies, or other local, state, or federal agencies. In some cases, external whistleblowing is encouraged by offering monetary reward.
Third party
Sometimes it is beneficial for an organization to use an external agency to create a secure and anonymous reporting channel for its employees, often referred to as a whistleblowing hotline. As well as protecting the identity of the whistleblower, these services are designed to inform the individuals at the top of the organizational pyramid of misconduct, usually via integration with specialised case management software.
Implementing a third-party solution is often the easiest way for an organization to ensure compliance, or to offer a whistleblowing policy where one did not previously exist. An increasing number of companies and authorities use third-party services in which the whistleblower remains anonymous even to the third-party service provider, which is made possible via toll-free phone numbers and/or web- or app-based solutions that apply asymmetric encryption.
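As a rough illustration of how asymmetric encryption can keep a report confidential even from the hotline operator, the sketch below uses the Python cryptography package; the key handling, message, and recipient are simplified assumptions, not a description of any particular provider's system.

```python
# A report is encrypted with the receiving body's public key, so neither the
# hotline operator nor a network eavesdropper can read it; only the holder of
# the private key (e.g. the organization's compliance function) can decrypt it.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# In practice the key pair is generated once and only the public key is published.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

report = b"Invoices in unit X appear to be double-billed."  # hypothetical report text
ciphertext = public_key.encrypt(report, oaep)                # what the channel stores/forwards

print(private_key.decrypt(ciphertext, oaep).decode())        # readable only to the key holder
```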
Private sector whistleblowing
Private sector whistleblowing, though not as high-profile as public sector whistleblowing, is arguably more prevalent and more suppressed in society today, in part because private corporations often have stricter internal rules that discourage potential whistleblowers. An example of private sector whistleblowing is when an employee reports to someone in a higher position, such as a manager, or to a third party isolated from the individual branch, such as a lawyer or the police. In the private sector, corporate groups can easily hide wrongdoing by individual branches; it is often not until the wrongdoing reaches top officials that it becomes visible to the public. Situations in which a person may blow the whistle include violations of law or company policy, such as sexual harassment or theft. These instances are nonetheless small compared with money laundering or fraud charges on the stock market. Whistleblowing in the private sector is typically not as high-profile or as openly discussed in major news outlets, though occasionally third parties expose human rights violations and exploitation of workers. While there are organizations such as the United States Department of Labor (DOL), and laws such as the Sarbanes–Oxley Act and the United States Federal Sentencing Guidelines for Organizations (FSGO), which protect whistleblowers in the private sector, many employees still fear for their jobs because of direct or indirect threats from their employers or the other parties involved. In the United States, the Department of Labor's Whistleblower Protection Program can take many types of retaliation claims based on legal actions an employee took, or was perceived to take, in the course of their employment. Conversely, if the retaliatory conduct occurred because of who the employee is perceived to be as a person, the Equal Employment Opportunity Commission may be able to accept a complaint of retaliation. In an effort to overcome those fears, the Dodd–Frank Wall Street Reform and Consumer Protection Act of 2010 provides a substantial incentive to whistleblowers: for example, a whistleblower who gives information that leads to the legal recovery of over one million dollars may receive ten to thirty percent of it.
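To make the incentive concrete, the following sketch applies the ten-to-thirty-percent range and the one-million-dollar threshold described above to a hypothetical recovery; the figures are illustrative only.

```python
# Award range under the 10-30% rule for recoveries above the $1 million threshold.
def award_range(recovery: float, minimum: float = 1_000_000) -> tuple:
    if recovery <= minimum:
        return (0.0, 0.0)  # below the threshold, no award under this provision
    return (0.10 * recovery, 0.30 * recovery)

low, high = award_range(5_000_000)      # hypothetical $5 million recovery
print(f"${low:,.0f} to ${high:,.0f}")   # $500,000 to $1,500,000
```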
Whistleblowers have risen within the technology industry as it has expanded in recent years. They are vital for publicizing ethical breaches within private companies. Protection for these whistleblowers falls short; they often end up unemployed or, worse, in jail. The Dodd–Frank Wall Street Reform and Consumer Protection Act offers an incentive for private sector whistleblowers, but only if they go to the SEC with information. If a whistleblower acts internally, as they often do in the technology industry, they are not protected by the law. Scandals such as the Dragonfly search engine scandal and the Pompliano lawsuit against Snapchat have drawn attention to whistleblowers in technology.
Despite government efforts to regulate the private sector, employees must still weigh their options: exposing the company lets them claim the moral and ethical high ground, but it can also cost them their job, their reputation, and potentially their ability to be employed again. According to a study at the University of Pennsylvania, of three hundred whistleblowers studied, sixty-nine percent faced exactly that situation and were either fired or forced to retire after taking the ethical high ground. Outcomes like these make it that much harder to track accurately how prevalent whistleblowing is in the private sector.
Public sector whistleblowing
Recognition of the public value of whistleblowing has been increasing over the last 50 years. In the United States, both state and federal statutes have been put in place to protect whistleblowers from retaliation. The United States Supreme Court has ruled that public sector whistleblowers are protected under the First Amendment from job retaliation when they raise flags over alleged corruption. Exposing misconduct or illegal or dishonest activity is a big fear for public employees because they feel they are going against their government and country. Private sector whistleblower protection laws were in place long before those for the public sector. After many federal whistleblowers were scrutinized in high-profile media cases, laws were finally introduced to protect government whistleblowers. These laws were enacted to help prevent corruption and to encourage people to expose misconduct and illegal or dishonest activity for the good of society. People who choose to act as whistleblowers often suffer retaliation from their employer. They are often fired because they are at-will employees, meaning they can be dismissed without a reason, though exceptions exist for whistleblowers who are at-will employees. Even without a statute, numerous decisions encourage and protect whistleblowing on grounds of public policy. Statutes state that an employer shall not take any adverse employment action against any employee in retaliation for a good-faith report of whistleblowing or for cooperating in any way in an investigation, proceeding, or lawsuit arising from such a report. Federal whistleblower legislation includes a statute protecting all government employees. In the federal civil service, the government is prohibited from taking, or threatening to take, any personnel action against an employee because the employee disclosed information that they reasonably believed showed a violation of law, gross mismanagement, a gross waste of funds, an abuse of authority, or a substantial and specific danger to public safety or health. To prevail on a claim, a federal employee must show that a protected disclosure was made, that the accused official knew of the disclosure, that retaliation resulted, and that there was a genuine connection between the retaliation and the employee's action.
Risk
Individual harm, damage to public trust, and threats to national security are three categories of harm that may result from whistleblowing. Revealing a whistleblower's identity can automatically put their life in danger. Some media outlets associate words like "traitor" and "treason" with whistleblowers, and in many countries around the world the punishment for treason is the death penalty, even if the person who allegedly committed treason caused no one physical harm. A primary argument in favor of the death penalty for treason is the potential endangerment of an entire people: the perpetrator is perceived as being responsible for any harm that befalls the country or its citizens as a result of their actions. In some instances, whistleblowers must flee their country to avoid public scrutiny, threats of death or physical harm, and in some cases criminal charges.
In a few cases, the whistleblower harms innocent people. Whistleblowers can make unintentional mistakes, and investigations can be tainted by the fear of negative publicity. One such case involved claims made within a Canadian provincial health ministry by a new employee who believed that nearly every research contract she saw in 2012 involved malfeasance. The end result was the sudden firing of seven people, false and public threats of a criminal investigation, and the death of one researcher by suicide. The government ultimately paid the innocent victims millions of dollars for lost pay, slander, and other harms, in addition to CA$2.41 million spent on the subsequent 2015 investigation into the false charges.
Common reactions
Whistleblowers are sometimes seen as selfless martyrs for public interest and organizational accountability; others view them as "traitors" or "defectors". Some even accuse them of solely pursuing personal glory and fame, or view their behavior as motivated by greed in qui tam cases. Some academics (such as Thomas Faunce) feel that whistleblowers should at least be entitled to a rebuttable presumption that they are attempting to apply ethical principles in the face of obstacles and that whistleblowing would be more respected in governance systems if it had a firmer academic basis in virtue ethics.
It is probable that many people do not even consider blowing the whistle, not only because of fear of retaliation, but also because of fear of losing their relationships at work and outside work.
Persecution of whistleblowers has become a serious issue in many parts of the world:
Employees in academia, business or government might become aware of serious risks to health and the environment, but internal policies might pose threats of retaliation to those who report these early warnings. Private company employees in particular might be at risk of being fired, demoted, denied raises and so on for bringing environmental risks to the attention of appropriate authorities. Government employees could be at a similar risk for bringing threats to health or the environment to public attention, although perhaps this is less likely.
There are examples of "early warning scientists" being harassed for bringing inconvenient truths about impending harm to the notice of the public and authorities. There have also been cases of young scientists being discouraged from entering controversial scientific fields for fear of harassment.
Whistleblowers are often protected under law from employer retaliation, but in many cases punishment has occurred, such as termination, suspension, demotion, wage garnishment, or harsh mistreatment by other employees. A 2009 study found that up to 38% of whistleblowers experienced professional retaliation in some form, including wrongful termination. In the United States, for example, most whistleblower protection laws provide only limited "make whole" remedies or damages for employment losses if whistleblower retaliation is proven. However, many whistleblowers report that a widespread "shoot the messenger" mentality exists among corporations or government agencies accused of misconduct, and in some cases whistleblowers have been subjected to criminal prosecution in reprisal for reporting wrongdoing.
As a reaction to this, many private organizations have formed whistleblower legal defense funds or support groups to assist whistleblowers; three such examples are the National Whistleblowers Center in the United States, and Whistleblowers UK and Public Concern at Work (PCaW) in the United Kingdom. Depending on the circumstances, it is not uncommon for whistleblowers to be ostracized by their co-workers, discriminated against by future potential employers, or even fired from their organization. A campaign directed at whistleblowers with the goal of eliminating them from the organization is referred to as mobbing, an extreme form of workplace bullying in which the group is set against the targeted individual.
Psychological impact
There is limited research on the psychological impact of whistleblowing. However, poor experiences of whistleblowing can cause a prolonged and prominent assault upon staff well-being. As workers attempt to address concerns, they are often met with a wall of silence and hostility by management. Some whistleblowers speak of overwhelming and persistent distress, drug and alcohol problems, paranoid behaviour at work, acute anxiety, nightmares, flashbacks and intrusive thoughts. Depression is often reported by whistleblowers, and suicidal thoughts may occur in up to about 10%. General deterioration in health and self-care has been described. The range of symptoms shares many features with posttraumatic stress disorder, though there is debate about whether the trauma experienced by whistleblowers meets diagnostic thresholds. Increased stress-related physical illness has also been described in whistleblowers.

The stresses involved in whistleblowing can be huge. As such, workers remain afraid to blow the whistle, fearing that they will not be believed or having lost faith that anything will happen if they do speak out. This fear may indeed be justified, because an individual who feels threatened by whistleblowing may plan the career destruction of the 'complainant' by reporting fictitious errors or rumours. This technique, labelled "gaslighting", is a common, unconventional approach used by organizations to manage employees who cause difficulty by raising concerns. In extreme cases, it involves the organization or manager proposing that the complainant's mental health is unstable. Organizations also often attempt to ostracise and isolate whistleblowers by undermining their concerns, suggesting that these are groundless, carrying out inadequate investigations or ignoring them altogether. Whistleblowers may also be disciplined, suspended and reported to professional bodies upon manufactured pretexts. Where whistleblowers persist in raising their concerns, they increasingly risk detriments such as dismissal.

Following dismissal, whistleblowers may struggle to find further employment due to damaged reputations, poor references and blacklisting. The social impact of whistleblowing through loss of livelihood (and sometimes pension), and the strain on family life, may also affect whistleblowers' psychological well-being. Whistleblowers may also experience immense stress as a result of litigation regarding detriments such as unfair dismissal, which they often face with imperfect support or no support at all from unions. Whistleblowers who continue to pursue their concerns may also face long battles with official bodies such as regulators and government departments. Such bodies may reproduce the "institutional silence" of employers, adding to whistleblowers' stress and difficulties. In all, some whistleblowers suffer great injustice that may never be acknowledged or rectified. Such extreme experiences of threat and loss inevitably cause severe distress and sometimes mental illness, sometimes lasting for years afterwards. This mistreatment also deters others from coming forward with concerns. Thus, poor practices remain hidden behind a wall of silence and prevent any organization from experiencing the improvements that may be afforded by intelligent failure.
Some whistleblowers who break ranks with their organizations have had their mental stability questioned, such as Adrian Schoolcraft, the NYPD veteran who alleged falsified crime statistics in his department and was forcibly committed to a mental institution. On the other hand, the emotional strain of a whistleblower investigation can be devastating to the accused's family.
Ethics
The definition of ethics is the set of moral principles that govern a person's or group's behavior. The ethical implications of whistleblowing can be negative as well as positive. Some have argued that public sector whistleblowing plays an important role in the democratic process by resolving principal–agent problems. However, sometimes employees may blow the whistle as an act of revenge. Rosemary O'Leary explains this in her short volume on the topic of guerrilla government: "Rather than acting openly, guerrillas often choose to remain 'in the closet', moving clandestinely behind the scenes, salmon swimming upstream against the current of power. Over the years, I have learned that the motivations driving guerrillas are diverse. The reasons for acting range from the altruistic (doing the right thing) to the seemingly petty (I was passed over for that promotion). Taken as a whole, their acts are as awe inspiring as saving human lives out of a love of humanity and as trifling as slowing the issuance of a report out of spite or anger." For example, of the more than 1,000 whistleblower complaints that are filed each year with the Pentagon's Inspector General, about 97 percent are not substantiated. It is widely believed in the professional world that an individual is bound to secrecy within their work sector. Discussions of whistleblowing and employee loyalty usually assume that the concept of loyalty is irrelevant to the issue or, more commonly, that whistleblowing involves a moral choice that pits the loyalty an employee owes an employer against the employee's responsibility to serve the public interest. Robert A. Larmer describes the standard view of whistleblowing in the Journal of Business Ethics by explaining that an employee possesses prima facie (based on the first impression; accepted as correct until proved otherwise) duties of loyalty and confidentiality to their employers and that whistleblowing cannot be justified except on the basis of a higher duty to the public good. It is important to recognize that in any relationship which demands loyalty, the relationship works both ways and involves mutual enrichment.
The ethics of Edward Snowden's actions have been widely discussed and debated in news media and academia worldwide. Snowden released classified intelligence to the American people in an attempt to let them see the inner workings of the government. A person in such a position faces the conundrum of choosing between loyalty to the company and blowing the whistle on the company's wrongdoing. Discussions of whistleblowing generally revolve around three topics: attempts to define whistleblowing more precisely, debates about whether and when whistleblowing is permissible, and debates about whether and when one has an obligation to blow the whistle.
Motivations
Many whistleblowers have stated that they were motivated to take action to put an end to unethical practices, after witnessing injustices in their businesses or organizations. A 2009 study found that whistleblowers are often motivated to take action when they notice a sharp decline in ethical practices, as opposed to a gradual worsening. There are generally two metrics by which whistleblowers determine if a practice is unethical. The first metric involves a violation of the organization's bylaws or written ethical policies. These violations allow individuals to concretize and rationalize blowing the whistle. On the other hand, "value-driven" whistleblowers are influenced by their personal codes of ethics. In these cases, whistleblowers have been criticized for being driven by personal biases.
In addition to ethics, social and organizational pressures are motivating forces. A 2012 study found that individuals are more likely to blow the whistle when several others know about the wrongdoing, because they would otherwise fear consequences for keeping silent. When a single person is causing an injustice, the individual who notices it may file a formal report rather than confronting the wrongdoer, because confrontation would be more emotionally and psychologically stressful. Furthermore, individuals may be motivated to report unethical behavior when they believe their organizations will support them. Professionals in management roles may feel a responsibility to blow the whistle in order to uphold the values and rules of their organizations.
Legal protection for whistleblowers
Legal protection for whistleblowers varies from country to country and may depend on the country of the original activity, where and how secrets were revealed, and how they eventually became published or publicized. Over a dozen countries have now adopted comprehensive whistleblower protection laws that create mechanisms for reporting wrongdoing and provide legal protections to whistleblowers. Over 50 countries have adopted more limited protections as part of their anti-corruption, freedom of information, or employment laws. This section emphasizes the English-speaking world and covers other regimes only insofar as they represent exceptionally greater or lesser protections.
Australia
Whistleblower protection laws exist in a number of Australian states. The former NSW Police Commissioner Tony Lauer summed up official government and police attitudes as: "Nobody in Australia much likes whistleblowers, particularly in an organization like the police or the government." The former Australian intelligence officer known as Witness K, who provided evidence of Australia's controversial spying operation against the government of East Timor in 2004, faces the possibility of jail if convicted.
Whistleblowers Australia is an association for those who have exposed corruption or any form of malpractice, especially if they were then hindered or abused.
Canada
The Public Sector Integrity Commissioner (PSIC) provides a safe and confidential mechanism enabling public servants and the general public to disclose wrongdoings committed in the public sector. It also protects from reprisal public servants who have disclosed wrongdoing and those who have cooperated in investigations. The office's goal is to enhance public confidence in Canada's federal public institutions and in the integrity of public servants.
Mandated by the Public Servants Disclosure Protection Act, PSIC is a permanent and independent agent of Parliament. The act, which came into force in 2007, applies to most of the federal public sector, approximately 400,000 public servants. This includes government departments and agencies, parent Crown corporations, the Royal Canadian Mounted Police and other federal public sector bodies.
Not all disclosures lead to an investigation as the act sets out the jurisdiction of the commissioner and gives the option not to investigate under certain circumstances. On the other hand, if PSIC conducts an investigation and finds no wrongdoing was committed, the commissioner must report his findings to the discloser and to the organization's chief executive. Also, reports of founded wrongdoing are presented before the House of Commons and the Senate in accordance with the act.
The act also established the Public Servants Disclosure Protection Tribunal (PSDPT) to protect public servants by hearing reprisal complaints referred by the Public Sector Integrity Commissioner. The tribunal can grant remedies in favour of complainants and order disciplinary action against persons who take reprisals.
European Union
The European Parliament approved a "Whistleblower Protection Directive" containing broad free speech protections for whistleblowers in both the public and the private sectors, including for journalists, in all member states of the European Union. The Directive prohibits direct or indirect retaliation against employees, current and former, in the public sector and the private sector. The Directive's protections apply to employees, to volunteers, and to those who assist them, including civil society organizations and journalists who report on their evidence. It provides equal rights for whistleblowers in the national security sector who challenge denial or removal of their security clearances. The Directive also protects whistleblowers from criminal prosecution and from corporate lawsuits for damages resulting from their whistleblowing, and provides for psychological support in dealing with harassment stress.
Good government observers have hailed the EU directive as setting "the global standard for best practice rights protecting freedom of speech where it counts the most—challenging abuses of power that betray the public trust," according to the U.S.-based Government Accountability Project. They have noted, however, that ambiguities remain in the Directive regarding application in some areas, such as "duty speech," that is, when employees report the same information in the course of a job assignment, for example, to a supervisor, instead of whistleblowing as formal dissent. In fact, duty speech is how the overwhelming majority of whistleblowing information gets communicated, and where the free flow of information is needed for proper functioning of organizations. However it is in response to such "duty speech" employee communication that the vast majority of retaliation against employees occurs. These observers have noted that the Directive must be understood as applying to protection against retaliation for such duty speech because without such an understanding the Directive will "miss the iceberg of what's needed".
Jamaica
In Jamaica, the Protected Disclosures Act, 2011 received assent in March 2011. It creates a comprehensive system for the protection of whistleblowers in the public and private sector. It is based on the Public Interest Disclosure Act 1998.
India
The Government of India had been considering adopting a whistleblower protection law for several years. In 2003, the Law Commission of India recommended the adoption of the Public Interest Disclosure (Protection of Informers) Act, 2002. In August 2010, the Public Interest Disclosure and Protection of Persons Making the Disclosures Bill, 2010 was introduced into the Lok Sabha, the lower house of the Parliament of India, and was approved by the cabinet in June 2011. The bill was renamed the Whistleblowers' Protection Bill, 2011 by the Standing Committee on Personnel, Public Grievances, Law and Justice, and was passed by the Lok Sabha on 28 December 2011 and by the Rajya Sabha on 21 February 2014. The Whistle Blowers Protection Act, 2011 received Presidential assent on 9 May 2014 and was published the same day in the official gazette of the Government of India by the Ministry of Law and Justice.
Ireland
The government of Ireland committed to adopting a comprehensive whistleblower protection law in January 2012. The Protected Disclosures Act (PDA) was passed in 2014. The law covers workers in the public and private sectors, and also includes contractors, trainees, agency staff, former employees and job seekers. A range of different types of misconduct may be reported under the law, which protects workers from a range of adverse employment actions and also protects whistleblowers' identities.
Netherlands
The Netherlands has measures in place to mitigate the risks of whistleblowing: the House for Whistleblowers (Huis voor klokkenluiders) offers advice and support to whistleblowers, and Parliament passed a proposal in 2016 to establish this house in order to protect whistleblowers from the severe negative consequences they might endure (Kamerstuk, 2013). Dutch media organizations also provide whistleblower support; on 9 September 2013 a number of major Dutch media outlets supported the launch of Publeaks, which provides a secure website for people to leak documents to the media. Publeaks is designed to protect whistleblowers. It operates on the GlobaLeaks software developed by the Hermes Center for Transparency and Digital Human Rights, which supports whistleblower-oriented technologies internationally.
Switzerland
The Swiss Council of States agreed on a draft amendment of the Swiss Code of Obligations in September 2014. The draft introduces articles 321a bis to 321a septies, 328(3) and 336(2)(d). An amendment of article 362(1) adds articles 321a bis to 321a septies to the list of provisions that may not be overruled by labour and bargaining agreements.
Article 321a ter introduces an obligation on employees to report irregularities to their employer before reporting to an authority. An employee will, however, not breach his duty of good faith if he reports an irregularity to an authority and
a period set by the employer and no longer than 60 days has lapsed since the employee has reported the incident to his employer, and
the employer has not addressed the irregularity or it is obvious that the employer has insufficiently addressed the irregularity.
Article 321a quater provides that an employee may exceptionally report directly to an authority. Exceptions apply in cases
where the employee is in a position to objectively demonstrate that a report to his employer will prove ineffective,
where the employee has to anticipate dismissal,
where the employee must assume that the competent authority will be hindered in investigating the irregularity, or
where there is a direct and serious hazard to life, to health, to safety, or to the environment.
The draft does not improve on protection against dismissal for employees who report irregularities to their employer. The amendment does not provide for employees anonymously filing their observations of irregularities.
United Kingdom
Whistleblowing in the United Kingdom is protected by the Public Interest Disclosure Act 1998 (PIDA). Amongst other things, under the Act protected disclosures are permitted even if a non-disclosure agreement has been signed between the employer and the former or current employee; a consultation on further restricting confidentiality clauses was held in 2019.
The Freedom to Speak Up Review set out 20 principles to bring about improvements to help whistleblowers in the NHS, including:
Culture of raising concerns – to make raising issues a part of normal routine business of a well-led NHS organization.
Culture free from bullying – freedom of staff to speak out relies on staff being able to work in a culture which is free from bullying.
Training – every member of staff should receive training in their trust's approach to raising concerns and in receiving and acting on them.
Support – all NHS trusts should ensure there is a dedicated person to whom concerns can be reported easily and without formality, a "speak up guardian".
Support to find alternative employment in the NHS – where a worker who has raised a concern cannot, as a result, continue their role, the NHS should help them seek an alternative job.
Monitor produced a whistleblowing policy in November 2015 that all NHS organizations in England are obliged to follow. It explicitly says that anyone bullying or acting against a whistleblower could be potentially liable to disciplinary action.
United States
The whistleblowing tradition in what would soon become the United States began in 1773, when Benjamin Franklin leaked a few letters in the Hutchinson affair. The release of the communications from royal governor Thomas Hutchinson to Thomas Whately led to a firing, a duel, and arguably, through both the leak's broad impact and its role in convincing Franklin to join the radicals' cause, another important step toward the American Revolution.
The first act of the Continental Congress in favor of what later came to be called whistleblowing came in the 1777–78 case of Samuel Shaw and Richard Marven, two seamen who accused Commander in Chief of the Continental Navy Esek Hopkins of torturing British prisoners of war. The Congress dismissed Hopkins and, after he filed a libel suit against the pair under which they were imprisoned, agreed to cover their defense costs. Shaw and Marven were subsequently cleared in a jury trial.
To be considered a whistleblower in the United States, most federal whistleblower statutes require that federal employees have reason to believe their employer violated some law, rule, or regulation; testify or commence a legal proceeding on the legally protected matter; or refuse to violate the law.
In cases where whistleblowing on a specified topic is protected by statute, U.S. courts have generally held that such whistleblowers are protected from retaliation. However, a closely divided U.S. Supreme Court decision, Garcetti v. Ceballos (2006) held that the First Amendment free speech guarantees for government employees do not protect disclosures made within the scope of the employees' duties.
In the United States, legal protections vary according to the subject matter of the whistleblowing, and sometimes the state where the case arises. In passing the 2002 Sarbanes–Oxley Act, the Senate Judiciary Committee found that whistleblower protections were dependent on the "patchwork and vagaries" of varying state statutes. Still, a wide variety of federal and state laws protect employees who call attention to violations, help with enforcement proceedings, or refuse to obey unlawful directions. While this patchwork approach has often been criticized, it is also responsible for the United States having more dedicated whistleblowing laws than any other country.
The first US law adopted specifically to protect whistleblowers was the 1863 United States False Claims Act (revised in 1986), which tried to combat fraud by suppliers of the United States government during the American Civil War. The Act encourages whistleblowers by promising them a percentage of the money recovered by the government and by protecting them from employment retaliation.
Another US law that specifically protects whistleblowers is the Lloyd–La Follette Act of 1912. It guaranteed the right of federal employees to furnish information to the United States Congress. The first US environmental law to include an employee protection was the Clean Water Act of 1972. Similar protections were included in subsequent federal environmental laws, including the Safe Drinking Water Act (1974), Resource Conservation and Recovery Act (1976), Toxic Substances Control Act of 1976, Energy Reorganization Act of 1974 (through a 1978 amendment to protect nuclear whistleblowers), Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA, or the Superfund Law) (1980), and the Clean Air Act (1990). Similar employee protections enforced through OSHA are included in the Surface Transportation Assistance Act (1982) to protect truck drivers, the Pipeline Safety Improvement Act (PSIA) of 2002, the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century ("AIR 21"), and the Sarbanes–Oxley Act, enacted on 30 July 2002 (for corporate fraud whistleblowers). More recent laws with some whistleblower protection include the Patient Protection and Affordable Care Act ("ACA"), the Consumer Product Safety Improvement Act ("CPSIA"), the Seaman's Protection Act as amended by the Coast Guard Authorization Act of 2010 ("SPA"), the Consumer Financial Protection Act ("CFPA"), the FDA Food Safety Modernization Act ("FSMA"), the Moving Ahead for Progress in the 21st Century Act ("MAP-21"), and the Taxpayer First Act ("TFA").
Investigation of retaliation against whistleblowers under 23 federal statutes falls under the jurisdiction of Directorate of Whistleblower Protection Program of the United States Department of Labor's Occupational Safety and Health Administration (OSHA). New whistleblower statutes enacted by Congress, which are to be enforced by the Secretary of Labor, are generally delegated by a Secretary's Order to OSHA's Directorate of Whistleblower Protection Program (DWPP).
The patchwork of laws means that victims of retaliation need to be aware of the laws at issue to determine the deadlines and means for making proper complaints. Some deadlines are as short as 10 days (Arizona State Employees have 10 days to file a "Prohibited Personnel Practice" Complaint before the Arizona State Personnel Board), while others are up to 300 days.
Those who report a false claim against the federal government, and suffer adverse employment actions as a result, may have up to six years (depending on state law) to file a civil suit for remedies under the US False Claims Act (FCA). Under a qui tam provision, the "original source" for the report may be entitled to a percentage of what the government recovers from the offenders. However, the "original source" must also be the first to file a federal civil complaint for recovery of the federal funds fraudulently obtained, and must avoid publicizing the claim of fraud until the US Justice Department decides whether to prosecute the claim itself. Such qui tam lawsuits must be filed under seal, using special procedures to keep the claim from becoming public until the federal government makes its decision on direct prosecution.
The Espionage Act of 1917 has been used to prosecute whistleblowers in the United States including Edward Snowden and Chelsea Manning. In 2013, Manning was convicted of violating the Espionage Act and sentenced to 35 years in prison for leaking sensitive military documents to WikiLeaks. The same year, Snowden was charged with violating the Espionage Act for releasing confidential documents belonging to the NSA.
Section 922 of the Dodd–Frank Wall Street Reform and Consumer Protection Act (Dodd–Frank) incentivizes and protects whistleblowers in the United States. Under Dodd–Frank, the U.S. Securities and Exchange Commission (SEC) financially rewards whistleblowers for providing original information about violations of federal securities laws that results in sanctions of at least $1 million. Additionally, Dodd–Frank offers job security to whistleblowers by prohibiting termination or discrimination due to whistleblowing. The whistleblower provision has proven successful: after the enactment of Dodd–Frank, the SEC charged KBR and BlueLinx Holdings Inc. with violating whistleblower protection Rule 21F-17 by having employees sign confidentiality agreements that threatened repercussions for discussing internal matters with outside parties. President Donald Trump announced plans to dismantle Dodd–Frank in 2016. He created the Office of Accountability and Whistleblower Protection as part of the Department of Veterans Affairs, which reportedly instead punished whistleblowers.
The federally recognized National Whistleblower Appreciation Day is observed annually on 30 July, on the anniversary of the country's original 1778 whistleblower protection law.
Other countries
There are comprehensive laws in New Zealand and South Africa. A number of other countries have recently adopted comprehensive whistleblower laws, including Ghana, South Korea, and Uganda, and such laws are also being considered in Kenya and Rwanda. The European Court of Human Rights ruled in 2008 that whistleblowing is protected as freedom of expression. Nigeria formulated a whistleblowing policy in 2016, but it has not yet been established as law, and the Whistle-blower Protection Bill is still pending at the National Assembly. In February 2017, Nigeria also adopted a whistleblowing policy against corruption and other ills in the country.
Advocacy for whistleblower rights and protections
Many NGOs advocate for stronger and more comprehensive legal rights and protections for whistleblowers. Among them are the Government Accountability Project (GAP), Blueprint for Free Speech, Public Concern at Work (PCaW) and the Open Democracy Advice Centre.
Frank Serpico, an NYPD whistleblower, prefers to use the term "lamp-lighter" to describe the whistleblower's role as a watchman. The Lamplighter Project, which aims to encourage law enforcement officers to report corruption and abuse of power and helps them do so, is named based on Serpico's usage of the term.
Modern methods used for whistleblower protection
Whistleblowers who may be at risk from those they are exposing now use encryption methods and anonymous content-sharing software to protect their identity. Tor, a highly accessible anonymity network, is frequently used by whistleblowers around the world and has undergone a number of large security updates to protect the identities of potential whistleblowers who may wish to leak information anonymously.
More recently, specialized whistleblowing software such as SecureDrop and GlobaLeaks has been built on top of the Tor technology to encourage and simplify its adoption for secure whistleblowing.
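As a minimal sketch of the underlying idea, the example below routes a submission through Tor's local SOCKS proxy (port 9050 by default when the Tor service is running); the submission URL is a hypothetical placeholder rather than a real SecureDrop or GlobaLeaks endpoint, and real platforms add many further protections.

```python
# Route an HTTP request through the local Tor SOCKS proxy so the server (and any
# network observer) sees a Tor exit node rather than the sender's own IP address.
# Requires a running Tor service and `pip install requests[socks]`.
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is also resolved through Tor
    "https": "socks5h://127.0.0.1:9050",
}

SUBMISSION_URL = "http://exampleleaksite.onion/submit"  # hypothetical endpoint

response = requests.post(
    SUBMISSION_URL,
    files={"document": ("report.pdf", b"...leaked material...")},
    proxies=TOR_PROXY,
    timeout=120,
)
print(response.status_code)
```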
Whistleblowing hotline
In business, whistleblowing hotlines are usually deployed as a way of mitigating risk, with the intention of providing secure, anonymous reporting for employees or third-party suppliers who may otherwise fear reprisals from their employer. As such, implementing a corporate whistleblowing hotline is often seen as a step towards compliance, and can also highlight an organization's stance on ethics. It is widely agreed that implementing a dedicated service for whistleblowers has a positive effect on organizational culture.
A whistleblowing hotline is sometimes also referred to as an ethics hotline or 'Speak Up' hotline and is often facilitated by an outsourced service provider to encourage potential disclosers to come forward. Navex Global and Expolink are examples of global third party whistleblower services.
In 2018, the Harvard Business Review published findings to support the idea that whistleblowing hotlines are crucial to keeping companies healthy, stating "More whistles blown are a sign of health, not illness."
In popular culture
One of the subplots of season 6 of the popular American TV show The Office focused on Andy Bernard, a salesman, discovering that his company's printers catch on fire, struggling with how to deal with the news, and the company's response to the whistleblower going public.
The 1998 film Star Trek: Insurrection involved Picard and the NCC-1701-E Enterprise crew risking their Starfleet careers to blow the whistle on a Federation conspiracy with the Son’a to forcibly relocate the Ba’ku from their planet.
In 2014, the rock/industrial band Laibach released a song titled "The Whistleblowers" on its eighth studio album Spectre. The album was released on 3 March 2014 under Mute Records.
In 2016, the rock band Thrice released a song titled "Whistleblower" on the album To Be Everywhere Is to Be Nowhere. The song is written from the perspective of Snowden.
In July 2018, CBS debuted a new reality television show entitled Whistleblower, hosted by lawyer, former judge and police officer Alex Ferrer which covers qui tam suits under the False Claims Act against companies that have allegedly defrauded the federal government.
See also
Notes and references
External links
Public Interest Disclosure Act 1998 from Her Majesty's Stationery Office
National Security Whistleblowers, a Congressional Research Service (CRS) Report
Survey of Federal Whistleblower and Anti-Retaliation Laws, a Congressional Research Service (CRS) Report
Whistleblower Protection Program & information at U.S. Department of Labor
Read v. Canada (Attorney General) Canadian legal framework regarding whistleblowing defence
Patients First
Whistleblowers UK
Why be a whistleblower?
Author Eyal Press discusses whistleblowers and heroism on Conversations from Penn State
"Digital Dissidents: What it Means to be a Whistleblower". Al Jazeera English.
Whistle Blower Protection Authority
Anti-corporate activism
Dissent
Freedom of expression
Freedom of speech
Grounds for termination of employment
Labour law
Political terminology
United States federal labor legislation
Workplace bullying
1970s neologisms |
33879 | https://en.wikipedia.org/wiki/Windows%20XP | Windows XP | Windows XP is a major release of Microsoft's Windows NT operating system. It is the direct successor to Windows 2000 for professional users and Windows Me for home users. It was released to manufacturing on August 24, 2001, and later to retail on October 25, 2001.
Development of Windows XP began in the late 1990s under the codename "Neptune", built on the Windows NT kernel and explicitly intended for mainstream consumer use. An updated version of Windows 2000 was also initially planned for the business market. However, in January 2000, both projects were scrapped in favor of a single OS codenamed "Whistler", which would serve as a single platform for both consumer and business markets. As a result, Windows XP is the first consumer edition of Windows not based on the Windows 95 kernel and MS-DOS.
Upon its release, Windows XP received critical acclaim, with reviewers noting increased performance and stability (especially compared to Windows Me), a more intuitive user interface, improved hardware support, and expanded multimedia capabilities. However, some industry reviewers were concerned by the new licensing model and product activation system. Windows XP and Windows Server 2003 were succeeded by Windows Vista and Windows Server 2008, released in 2007 and 2008, respectively. Market share of Windows XP fell below 1% by the end of 2021, several years after Windows 10 was released.
Mainstream support for Windows XP ended on April 14, 2009, and extended support ended on April 8, 2014. After that, the operating system ceased receiving further support. Windows Embedded POSReady 2009, based on Windows XP Professional, received security updates until April 2019. After that, unofficial methods were made available to apply the updates to other editions of Windows XP. Still, Microsoft discouraged this practice, citing incompatibility issues. About 0.5% of Windows PCs run Windows XP (on all continents, the share is below 1%), and 0.18% of all devices across all platforms run Windows XP. Windows XP is still very prevalent in many countries, such as Armenia, where 50–60% of computers use it.
Development
In the late 1990s, initial development of what would become Windows XP was focused on two individual products: "Odyssey", which was reportedly intended to succeed the future Windows 2000; and "Neptune", which was reportedly a consumer-oriented operating system using the Windows NT architecture, succeeding the MS-DOS-based Windows 98.
However, the projects proved to be too ambitious. In January 2000, shortly prior to the official release of Windows 2000, technology writer Paul Thurrott reported that Microsoft had shelved both Neptune and Odyssey in favor of a new product codenamed "Whistler", named after Whistler, British Columbia, as many Microsoft employees skied at the Whistler-Blackcomb ski resort. The goal of Whistler was to unify both the consumer and business-oriented Windows lines under a single, Windows NT platform: Thurrott stated that Neptune had become "a black hole when all the features that were cut from Windows Me were simply re-tagged as Neptune features. And since Neptune and Odyssey would be based on the same code-base anyway, it made sense to combine them into a single project".
At PDC on July 13, 2000, Microsoft announced that Whistler would be released during the second half of 2001, and also unveiled the first preview build, 2250, which featured an early implementation of Windows XP's visual styles system and interface changes to Windows Explorer and the Control Panel.
Microsoft released the first public beta build of Whistler, build 2296, on October 31, 2000. Subsequent builds gradually introduced features that users of the release version of Windows XP would recognize, such as Internet Explorer 6.0, the Microsoft Product Activation system and the Bliss desktop background.
Whistler was officially unveiled during a media event on February 5, 2001, under the name Windows XP, where XP stands for "eXPerience".
Release
In June 2001, Microsoft indicated that it was planning to, in conjunction with Intel and other PC makers, spend at least 1 billion US dollars on marketing and promoting Windows XP. The theme of the campaign, "Yes You Can", was designed to emphasize the platform's overall capabilities. Microsoft had originally planned to use the slogan "Prepare to Fly", but it was replaced because of sensitivity issues in the wake of the September 11 attacks.
On August 24, 2001, Windows XP build 2600 was released to manufacturing (RTM). During a ceremonial media event at Microsoft Redmond Campus, copies of the RTM build were given to representatives of several major PC manufacturers in briefcases, who then flew off on decorated helicopters. While PC manufacturers would be able to release devices running XP beginning on September 24, 2001, XP was expected to reach general, retail availability on October 25, 2001. On the same day, Microsoft also announced the final retail pricing of XP's two main editions, "Home" (as a replacement for Windows Me for home computing) and "Professional" (as a replacement for Windows 2000 for high-end users).
New and updated features
User interface
While retaining some similarities to previous versions, Windows XP's interface was overhauled with a new visual appearance, with an increased use of alpha compositing effects, drop shadows, and "visual styles", which completely changed the appearance of the operating system. The number of effects enabled is determined by the operating system based on the computer's processing power, and they can be enabled or disabled on a case-by-case basis. XP also added ClearType, a new subpixel rendering system designed to improve the appearance of fonts on liquid-crystal displays. A new set of system icons was also introduced. The default wallpaper, Bliss, is a photo of a landscape in the Napa Valley outside Napa, California, with rolling green hills and a blue sky with stratocumulus and cirrus clouds.
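The idea behind subpixel rendering such as ClearType is to treat the red, green, and blue stripes of each LCD pixel as separate horizontal samples. The sketch below is only a conceptual illustration of that mapping, assuming black text on a white background; the real ClearType pipeline additionally filters the result to limit color fringing.

```python
# Map a glyph row sampled at 3x horizontal resolution onto RGB subpixels.
def subpixel_downsample(hi_res_row):
    """hi_res_row: ink-coverage values (0.0-1.0) at three times the pixel width."""
    assert len(hi_res_row) % 3 == 0
    pixels = []
    for i in range(0, len(hi_res_row), 3):
        coverage = hi_res_row[i:i + 3]
        # Black text on white: higher coverage darkens that subpixel's channel.
        pixels.append(tuple(round(255 * (1.0 - c)) for c in coverage))
    return pixels

# One row along a glyph edge, sampled at triple horizontal resolution.
row = [0.0, 0.0, 0.3, 0.9, 1.0, 1.0, 0.6, 0.1, 0.0]
print(subpixel_downsample(row))  # three output pixels with per-channel shading
```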
The Start menu received its first major overhaul in XP, switching to a two-column layout with the ability to list, pin, and display frequently used applications, recently opened documents, and the traditional cascading "All Programs" menu. The taskbar can now group windows opened by a single application into one taskbar button, with a popup menu listing the individual windows. The notification area also hides "inactive" icons by default. A "common tasks" list was added, and Windows Explorer's sidebar was updated to use a new task-based design with lists of common actions; the tasks displayed are contextually relevant to the type of content in a folder (e.g. a folder containing music offers options to play all the files in the folder or burn them to a CD).
Fast user switching allows additional users to log into a Windows XP machine without existing users having to close their programs and log out. Although only one user at a time can use the console (i.e. monitor, keyboard, and mouse), previous users can resume their session once they regain control of the console.
Infrastructure
Windows XP uses prefetching to improve startup and application launch times. It also became possible to revert the installation of an updated device driver, should the updated driver produce undesirable results.
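Conceptually, the prefetcher records which files an application touches during start-up (Windows XP keeps such traces as .pf files in its Prefetch folder) and reads them ahead of the next launch so they are already in the file cache. The toy sketch below illustrates only that general idea and is not Windows' actual implementation.

```python
# Toy prefetcher: remember which files an application opened last time, then
# pull them through the OS file cache before the next launch.
import os

TRACE = {}  # application name -> list of files it opened during its last run

def record_access(app, path):
    TRACE.setdefault(app, []).append(path)

def prefetch(app):
    for path in TRACE.get(app, []):
        try:
            with open(path, "rb") as f:
                while f.read(1 << 20):  # read in 1 MB chunks to warm the cache
                    pass
        except OSError:
            pass  # the file moved or was deleted since the trace was recorded

# usage sketch
record_access("editor", os.path.realpath(__file__))
prefetch("editor")
```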
A copy protection system known as Windows Product Activation was introduced with Windows XP and its server counterpart, Windows Server 2003. All Windows licenses must be tied to a unique ID generated using information from the computer hardware, transmitted either via the internet or a telephone hotline. If Windows is not activated within 30 days of installation, the OS will cease to function until it is activated. Windows also periodically verifies the hardware to check for changes. If significant hardware changes are detected, the activation is voided, and Windows must be re-activated.
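The exact hardware components, hashing scheme, and tolerance rules used by Windows Product Activation are Microsoft's own and are not reproduced here; the sketch below only illustrates the general shape of deriving a hardware-bound identifier and checking how much of it still matches after a hardware change.

```python
# Illustrative only: hash several hardware identifiers into short per-component
# codes, then count how many components still match after an upgrade.
import hashlib

def component_digest(value: str) -> int:
    return hashlib.sha256(value.encode()).digest()[0]  # one byte per component

def hardware_id(components: dict) -> dict:
    return {name: component_digest(val) for name, val in components.items()}

original = hardware_id({
    "volume_serial": "1A2B-3C4D",       # hypothetical values throughout
    "nic_mac": "00:11:22:33:44:55",
    "cpu_string": "Intel Pentium 4",
    "ram_size": "256MB",
})

current = hardware_id({
    "volume_serial": "1A2B-3C4D",
    "nic_mac": "66:77:88:99:AA:BB",     # network card replaced
    "cpu_string": "Intel Pentium 4",
    "ram_size": "512MB",                # memory upgraded
})

matching = sum(original[k] == current[k] for k in original)
print("components still matching:", matching, "of", len(original))
# a real scheme would require re-activation once too few components match
```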
Networking and internet functionality
Windows XP was originally bundled with Internet Explorer 6, Outlook Express 6, Windows Messenger, and MSN Explorer. New networking features were also added, including Internet Connection Firewall, Internet Connection Sharing integration with UPnP, NAT traversal APIs, Quality of Service features, IPv6 and Teredo tunneling, Background Intelligent Transfer Service, extended fax features, network bridging, peer to peer networking, support for most DSL modems, IEEE 802.11 (Wi-Fi) connections with auto configuration and roaming, TAPI 3.1, and networking over FireWire. Remote Assistance and Remote Desktop were also added, which allow users to connect to a computer running Windows XP from across a network or the Internet and access their applications, files, printers, and devices or request help. Improvements were also made to IntelliMirror features such as Offline Files, Roaming user profiles and Folder redirection.
Backwards compatibility
To enable running software that targets or locks out specific versions of Windows, a "Compatibility mode" has been added. The feature allows presenting a selected earlier version of Windows, going back to Windows 95, to an application.
While this ability was first introduced in Windows 2000 Service Pack 2, it had to be activated through the "register server" and was only available to administrator users, whereas Windows XP has it activated out of the box and also grants it to regular users.
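Besides the Compatibility tab in an executable's Properties dialog, the application-compatibility shim engine is commonly driven through the __COMPAT_LAYER environment variable; the sketch below treats that variable and the "WIN95" layer name as assumptions about the interface rather than documented guarantees, and the program path is hypothetical.

```python
# Launch a legacy program with a Windows 95 compatibility layer applied to its
# process tree. __COMPAT_LAYER and the "WIN95" layer name are assumptions about
# the shim engine; the GUI equivalent is the executable's Compatibility tab.
import os
import subprocess

env = os.environ.copy()
env["__COMPAT_LAYER"] = "WIN95"  # report an earlier Windows version to the program

subprocess.run([r"C:\OldGames\legacy_app.exe"], env=env)  # hypothetical path
```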
Other features
Improved application compatibility and shims compared to Windows 2000.
DirectX 8.1, upgradeable to DirectX 9.0c.
A number of new features in Windows Explorer including task panes, thumbnails, and the option to view photos as a slideshow.
Improved imaging features such as Windows Picture and Fax Viewer.
Faster start-up (because of improved Prefetch functions), logon, logoff, hibernation, and application launch sequences.
Numerous improvements to increase the system reliability such as improved System Restore, Automated System Recovery, and driver reliability improvements through Device Driver Rollback.
Hardware support improvements such as FireWire 800, and improvements to multi-monitor support under the name "DualView".
Fast user switching.
The ClearType font rendering mechanism, which is designed to improve text readability on liquid-crystal display (LCD) and similar monitors, especially laptops.
Side-by-side assemblies and registration-free COM.
General improvements to international support such as more locales, languages and scripts, MUI support in Terminal Services, improved Input Method Editors, and National Language Support.
Removed features
Some of the programs and features that were part of the previous versions of Windows did not make it to Windows XP. Various MS-DOS commands available in its Windows 9x predecessor were removed, as were the POSIX and OS/2 subsystems.
In networking, NetBEUI, NWLink and NetDDE were deprecated and not installed by default. Plug-and-play–incompatible communication devices (like modems and network interface cards) were no longer supported.
Service Pack 2 and Service Pack 3 also removed features from Windows XP, but to a less noticeable extent. For instance, support for TCP half-open connections was removed in Service Pack 2, and the address bar on the taskbar was removed in Service Pack 3.
Editions
Windows XP was released in two major editions on launch: Home Edition and Professional Edition. Both editions were made available at retail as pre-loaded software on new computers and as boxed copies. Boxed copies were sold as "Upgrade" or "Full" licenses; the "Upgrade" versions were slightly cheaper, but require an existing version of Windows to install. The "Full" version can be installed on systems without an operating system or existing version of Windows. The two editions of XP were aimed at different markets: Home Edition is explicitly intended for consumer use and disables or removes certain advanced and enterprise-oriented features present on Professional, such as the ability to join a Windows domain, Internet Information Services, and Multilingual User Interface. Windows 98 or Me can be upgraded to either edition, but Windows NT 4.0 and Windows 2000 can only be upgraded to Professional. Windows' software license agreement for pre-loaded licenses allows the software to be "returned" to the OEM for a refund if the user does not wish to use it. Despite the refusal of some manufacturers to honor the entitlement, it has been enforced by courts in some countries.
Two specialized variants of XP were introduced in 2002 for certain types of hardware, exclusively through OEM channels as pre-loaded software. Windows XP Media Center Edition was initially designed for high-end home theater PCs with TV tuners (marketed under the term "Media Center PC"), offering expanded multimedia functionality, an electronic program guide, and digital video recorder (DVR) support through the Windows Media Center application. Microsoft also unveiled Windows XP Tablet PC Edition, which contains additional pen input features, and is optimized for mobile devices meeting its Tablet PC specifications. Two different 64-bit editions of XP were made available. The first, Windows XP 64-Bit Edition, was intended for IA-64 (Itanium) systems; as IA-64 usage declined on workstations in favor of AMD's x86-64 architecture, the Itanium edition was discontinued in January 2005. A new 64-bit edition supporting the x86-64 architecture, called Windows XP Professional x64 Edition, was released in April of the same year.
Microsoft also targeted emerging markets with the 2004 introduction of Windows XP Starter Edition, a special variant of Home Edition intended for low-cost PCs. The OS is primarily aimed at first-time computer owners, containing heavy localization (including wallpapers and screen savers incorporating images of local landmarks), and a "My Support" area which contains video tutorials on basic computing tasks. It also removes certain "complex" features, and does not allow users to run more than three applications at a time. After a pilot program in India and Thailand, Starter was released in other emerging markets throughout 2005. In 2006, Microsoft also unveiled the FlexGo initiative, which would also target emerging markets with subsidized PCs on a pre-paid, subscription basis.
As a result of unfair competition lawsuits in Europe and South Korea, which both alleged that Microsoft had improperly leveraged its status in the PC market to favor its own bundled software, Microsoft was ordered to release special editions of XP in these markets that excluded certain applications. In March 2004, after the European Commission fined Microsoft €497 million (US$603 million), Microsoft was ordered to release "N" editions of XP that excluded Windows Media Player, encouraging users to pick and download their own media player software. As it was sold at the same price as the edition with Windows Media Player included, certain OEMs (such as Dell, who offered it for a short period, along with Hewlett-Packard, Lenovo and Fujitsu Siemens) chose not to offer it. Consumer interest was minuscule, with roughly 1,500 units shipped to OEMs, and no reported sales to consumers. In December 2005, the Korean Fair Trade Commission ordered Microsoft to make available editions of Windows XP and Windows Server 2003 that do not contain Windows Media Player or Windows Messenger. The "K" and "KN" editions of Windows XP were released in August 2006, and are only available in English and Korean, and also contain links to third-party instant messenger and media player software.
Service packs
A service pack is a cumulative update package that is a superset of all updates, and even service packs, that have been released before it. Three service packs have been released for Windows XP. Service Pack 3 is slightly different in that it requires at least Service Pack 1 to already be installed in order to update a live OS. However, Service Pack 3 can still be embedded into a Windows installation disc; SP1 is not reported as a prerequisite for doing so.
Service Pack 1
Service Pack 1 (SP1) for Windows XP was released on September 9, 2002. It contained over 300 minor, post-RTM bug fixes, along with all security patches released since the original release of XP. SP1 also added USB 2.0 support, the Microsoft Java Virtual Machine, .NET Framework support, and support for technologies used by the then-upcoming Media Center and Tablet PC editions of XP. The most significant change on SP1 was the addition of Set Program Access and Defaults, a settings page which allows programs to be set as default for certain types of activities (such as media players or web browsers) and for access to bundled, Microsoft programs (such as Internet Explorer or Windows Media Player) to be disabled. This feature was added to comply with the settlement of United States v. Microsoft Corp., which required Microsoft to offer the ability for OEMs to bundle third-party competitors to software it bundles with Windows (such as Internet Explorer and Windows Media Player), and give them the same level of prominence as those normally bundled with the OS.
On February 3, 2003, Microsoft released Service Pack 1a (SP1a). It was identical to SP1 except that the Microsoft Java Virtual Machine was excluded.
Service Pack 2
Service Pack 2 (SP2) for Windows XP Home edition and Professional edition was released on August 25, 2004. Headline features included WPA encryption compatibility for Wi-Fi and usability improvements to the Wi-Fi networking user interface, partial Bluetooth support, and various improvements to security systems.
The security improvements (codenamed "Springboard", as these features were intended to underpin additional changes in Longhorn) included a major revision to the included firewall (renamed Windows Firewall, and now enabled by default), and an update to Data Execution Prevention, which gained hardware support in the NX bit that can stop some forms of buffer overflow attacks. Raw socket support is removed (which supposedly limits the damage done by zombie machines) and the Windows Messenger service (which had been abused to cause pop-up advertisements to be displayed as system messages without a web browser or any additional software) became disabled by default. Additionally, security-related improvements were made to e-mail and web browsing. Service Pack 2 also added Security Center, an interface that provides a general overview of the system's security status, including the state of the firewall and automatic updates. Third-party firewall and antivirus software can also be monitored from Security Center.
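The firewall introduced with SP2 can also be inspected programmatically through its COM automation object. The sketch below assumes the third-party pywin32 package and queries the current profile's state; the object and property names are those of the SP2-era firewall automation interface.

```python
# Sketch assuming the pywin32 package: queries the SP2-era Windows Firewall
# COM object ("HNetCfg.FwMgr") for the current profile's on/off state.
import win32com.client

fw_mgr = win32com.client.Dispatch("HNetCfg.FwMgr")
profile = fw_mgr.LocalPolicy.CurrentProfile
print("Firewall enabled:", bool(profile.FirewallEnabled))
print("Exceptions allowed:", not profile.ExceptionsNotAllowed)
```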
This service pack removed the unique boot screens that identified the edition of Windows XP currently running (a green progress bar for Home Edition and a blue progress bar for other editions), replacing them with a generic "Windows XP" boot screen with a blue progress bar.
In August 2006, Microsoft released updated installation media for Windows XP and Windows Server 2003 SP2 (SP2b), in order to incorporate a patch requiring ActiveX controls in Internet Explorer to be manually activated before a user may interact with them. This was done so that the browser would not violate a patent owned by Eolas. Microsoft has since licensed the patent, and released a patch reverting the change in April 2008. In September 2007, another minor revision known as SP2c was released for XP Professional, extending the number of available product keys for the operating system to "support the continued availability of Windows XP Professional through the scheduled system builder channel end-of-life (EOL) date of January 31, 2009."
Service Pack 3
The third and final Service Pack, SP3, was released through different channels between April and June 2008, about a year after the release of Windows Vista, and about a year before the release of Windows 7. Service Pack 3 was not available for Windows XP x64 Edition, which was based on the Windows Server 2003 kernel and, as a result, used its service packs rather than the ones for the other editions.
It began being automatically pushed out to Automatic Updates users on July 10, 2008. A feature set overview which detailed new features available separately as stand-alone updates to Windows XP, as well as backported features from Windows Vista, was posted by Microsoft. A total of 1,174 fixes are included in SP3. Service Pack 3 could be installed on systems with Internet Explorer versions 6, 7, or 8; Internet Explorer 7 was not included as part of SP3.
Service Pack 3 included security enhancements over and above those of SP2, including APIs allowing developers to enable Data Execution Prevention for their code, independent of system-wide compatibility enforcement settings, the Security Support Provider Interface, improvements to WPA2 security, and an updated version of the Microsoft Enhanced Cryptographic Provider Module that is FIPS 140-2 certified.
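The per-process opt-in API referred to above is generally identified as SetProcessDEPPolicy in kernel32, added with SP3; a minimal ctypes sketch, assuming a Windows XP SP3 or later system, follows.

```python
# Sketch using ctypes to call the XP SP3-era kernel32 functions for per-process
# DEP control. Constants are from the Windows SDK; requires a Windows system.
import ctypes

PROCESS_DEP_ENABLE = 0x00000001
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

# Permanently opt the current process in to DEP, independent of the
# system-wide OptIn/OptOut compatibility enforcement setting.
if not kernel32.SetProcessDEPPolicy(PROCESS_DEP_ENABLE):
    print("SetProcessDEPPolicy failed, error", ctypes.get_last_error())
else:
    # 0=AlwaysOff, 1=AlwaysOn, 2=OptIn, 3=OptOut
    print("System-wide DEP policy:", kernel32.GetSystemDEPPolicy())
```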
In incorporating all previously released updates not included in SP2, Service Pack 3 included many other key features. Windows Imaging Component allowed camera vendors to integrate their own proprietary image codecs with the operating system's features, such as thumbnails and slideshows. In enterprise features, Remote Desktop Protocol 6.1 included support for ClearType and 32-bit color depth over RDP, while improvements made to Windows Management Instrumentation in Windows Vista to reduce the possibility of corruption of the WMI repository were backported to XP SP3.
In addition, SP3 contains updates to the operating system components of Windows XP Media Center Edition (MCE) and Windows XP Tablet PC Edition, and security updates for .NET Framework version 1.0, which is included in these editions. However, it does not include update rollups for the Windows Media Center application in Windows XP MCE 2005. SP3 also omits security updates for Windows Media Player 10, although the player is included in Windows XP MCE 2005. The Address Bar DeskBand on the Taskbar is no longer included because of antitrust violation concerns.
Unofficial SP3 ZIP download packages were released on a now-defunct website called The Hotfix from 2005 to 2007. The owner of the website, Ethan C. Allen, was a former Microsoft employee in Software Quality Assurance and would comb through the Microsoft Knowledge Base articles daily and download new hotfixes Microsoft would put online within the articles. The articles would have a "kbwinxppresp3fix" and/or "kbwinxpsp3fix" tag, thus allowing Allen to easily find and determine which fixes were planned for the official SP3 release to come. Microsoft publicly stated at the time that the SP3 pack was unofficial and users should not install it. Allen also released a Vista SP1 package in 2007, for which Allen received a cease-and-desist email from Microsoft.
System requirements
Windows XP's officially published minimum requirements include a 233 MHz processor (300 MHz recommended), 64 MB of RAM (128 MB recommended; 64 MB may limit performance and some features), 1.5 GB of available hard disk space, and Super VGA (800×600) or higher-resolution video.
Physical memory limits
The maximum amount of RAM that Windows XP can support varies depending on the product edition and the processor architecture. Most 32-bit editions of XP support up to 4 GB, with the exception of Windows XP Starter, which is limited to 512 MB. 64-bit editions support up to 128 GB.
Processor limits
Windows XP Professional supports up to two physical processors, while Windows XP Home Edition is limited to one. However, XP supports a greater number of logical processors: 32-bit editions support up to 32 logical processors, whereas 64-bit editions support up to 64.
Support lifecycle
Support for the original release of Windows XP (without a service pack) ended on August 30, 2005. Both Windows XP Service Pack 1 and 1a were retired on October 10, 2006, and both Windows 2000 and Windows XP SP2 reached their end of support on July 13, 2010, about 24 months after the launch of Windows XP Service Pack 3. The company stopped general licensing of Windows XP to OEMs and terminated retail sales of the operating system on June 30, 2008, 17 months after the release of Windows Vista. However, an exception was announced on April 3, 2008, for OEMs producing what it defined as "ultra low-cost personal computers", particularly netbooks, until one year after the availability of Windows 7 on October 22, 2010. Analysts felt that the move was primarily intended to compete against Linux-based netbooks, although Microsoft's Kevin Hutz stated that the decision was due to apparent market demand for low-end computers with Windows.
Variants of Windows XP for embedded systems have different support policies: Windows XP Embedded SP3 and Windows Embedded for Point of Service SP3 were supported until January and April 2016, respectively. Windows Embedded Standard 2009, which was succeeded by Windows Embedded Standard 7, and Windows Embedded POSReady 2009, which was succeeded by Windows Embedded POSReady 7, were supported until January and April 2019, respectively. These updates, while intended for the embedded editions, could also be downloaded on standard Windows XP with a registry hack, which enabled unofficial patches until April 2019. However, Microsoft advised Windows XP users against installing these fixes, citing incompatibility issues.
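The registry change in question is widely reported to consist of a single value that makes a 32-bit XP installation identify itself to Windows Update as POSReady 2009. A Python sketch of that change is shown below, only to illustrate the mechanism; as noted above, Microsoft advised against using it.

```python
# The widely reported (and Microsoft-discouraged) change: a 32-bit Windows XP
# install identifies itself as POSReady 2009 to Windows Update once this value
# exists. Shown only to illustrate the mechanism described above.
import winreg

key_path = r"SYSTEM\WPA\PosReady"
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    winreg.SetValueEx(key, "Installed", 0, winreg.REG_DWORD, 1)
```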
End of support
On April 14, 2009, Windows XP exited mainstream support and entered the extended support phase; Microsoft continued to provide security updates every month for Windows XP, however, free technical support, warranty claims, and design changes were no longer being offered. Extended support ended on April 8, 2014, over 12 years after the release of Windows XP; normally Microsoft products have a support life cycle of only 10 years. Beyond the final security updates released on April 8, no more security patches or support information are provided for XP free-of-charge; "critical patches" will still be created, and made available only to customers subscribing to a paid "Custom Support" plan. As it is a Windows component, all versions of Internet Explorer for Windows XP also became unsupported.
In January 2014, it was estimated that more than 95% of the 3 million automated teller machines in the world were still running Windows XP (which largely replaced IBM's OS/2 as the predominant operating system on ATMs); ATMs have an average lifecycle of between seven and ten years, but some have had lifecycles as long as 15. Plans were being made by several ATM vendors and their customers to migrate to Windows 7-based systems over the course of 2014, while vendors have also considered the possibility of using Linux-based platforms in the future to give them more flexibility for support lifecycles, and the ATM Industry Association (ATMIA) has since endorsed Windows 10 as a further replacement. However, ATMs typically run the embedded variant of Windows XP, which was supported through January 2016. As of May 2017, around 60% of the 220,000 ATMs in India still run Windows XP.
Furthermore, at least 49% of all computers in China still ran XP at the beginning of 2014. These holdouts were influenced by several factors; prices of genuine copies of later versions of Windows in the country are high, while Ni Guangnan of the Chinese Academy of Sciences warned that Windows 8 could allegedly expose users to surveillance by the United States government, and the Chinese government would ban the purchase of Windows 8 products for government use in May 2014 in protest of Microsoft's inability to provide "guaranteed" support. The government also had concerns that the impending end of support could affect their anti-piracy initiatives with Microsoft, as users would simply pirate newer versions rather than purchasing them legally. As such, government officials formally requested that Microsoft extend the support period for XP for these reasons. While Microsoft did not comply with their requests, a number of major Chinese software developers, such as Lenovo, Kingsoft and Tencent, will provide free support and resources for Chinese users migrating from XP. Several governments, in particular those of the Netherlands and the United Kingdom, elected to negotiate "Custom Support" plans with Microsoft for their continued, internal use of Windows XP; the British government's deal lasted for a year, and also covered support for Office 2003 (which reached end-of-life the same day) and cost £5.5 million.
On March 8, 2014, Microsoft deployed an update for XP that, on the 8th of each month, displays a pop-up notification to remind users about the end of support; however, these notifications may be disabled by the user. Microsoft also partnered with Laplink to provide a special "express" version of its PCmover software to help users migrate files and settings from XP to a computer with a newer version of Windows.
Despite the approaching end of support, there were still notable holdouts that had not migrated past XP; many users elected to remain on XP because of the poor reception of Windows Vista, sales of newer PCs with newer versions of Windows declined because of the Great Recession and the effects of Vista, and deployments of new versions of Windows in enterprise environments require a large amount of planning, which includes testing applications for compatibility (especially those that are dependent on Internet Explorer 6, which is not compatible with newer versions of Windows). Major security software vendors (including Microsoft itself) planned to continue offering support and definitions for Windows XP past the end of support to varying extents, along with the developers of Google Chrome, Mozilla Firefox, and Opera web browsers; despite these measures, critics similarly argued that users should eventually migrate from XP to a supported platform. The United States' Computer Emergency Readiness Team released an alert in March 2014 advising users of the impending end of support, and informing them that using XP after April 8 may prevent them from meeting US government information security requirements.
Microsoft continued to provide Security Essentials virus definitions and updates for its Malicious Software Removal Tool (MSRT) for XP until July 14, 2015. As the end of extended support approached, Microsoft began to increasingly urge XP customers to migrate to newer versions such as Windows 7 or 8 in the interest of security, suggesting that attackers could reverse engineer security patches for newer versions of Windows and use them to target equivalent vulnerabilities in XP. Windows XP is remotely exploitable by numerous security holes that were discovered after Microsoft stopped supporting it.
Similarly, specialized devices that run XP, particularly medical devices, must have any revisions to their software—even security updates for the underlying operating system—approved by relevant regulators before they can be released. For this reason, manufacturers often did not allow any updates to devices' operating systems, leaving them open to security exploits and malware.
Despite the end of support for Windows XP, Microsoft has released three emergency security updates for the operating system to patch major security vulnerabilities:
A patch released in May 2014 to address recently discovered vulnerabilities in Internet Explorer 6 through 11 on all versions of Windows.
A patch released in May 2017 to address a vulnerability that was being leveraged by the WannaCry ransomware attack.
A patch released in May 2019 to address a critical code execution vulnerability in Remote Desktop Services which can be exploited in a similar way as the WannaCry vulnerability.
Researchers reported in August 2019 that Windows 10 users may be at risk for "critical" system compromise because of design flaws of hardware device drivers from multiple providers. In the same month, computer experts reported that the BlueKeep security vulnerability, which potentially affects older unpatched Microsoft Windows versions via the program's Remote Desktop Protocol and allows for the possibility of remote code execution, may now include related flaws, collectively named DejaBlue, affecting newer Windows versions (i.e., Windows 7 and all recent versions) as well. In addition, experts reported a Microsoft security vulnerability, based on legacy code involving Microsoft CTF and ctfmon (ctfmon.exe), that affects all Windows versions from the older Windows XP version to the most recent Windows 10 versions; a patch to correct the flaw is available.
Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows XP and Windows Me would end on July 31, 2019 (and for Windows 7 on January 22, 2020). Others, such as Steam, had done the same, ending support for Windows XP and Windows Vista in January 2019.
In 2020, Microsoft announced that it would disable the Windows Update service for SHA-1 endpoints; since Windows XP did not get an update for SHA-2, Windows Update Services are no longer available on the OS as of late July 2020. However, as of October 2021, the old updates for Windows XP are still available on the Microsoft Update Catalog.
Reception
On release, Windows XP received critical acclaim. CNET described the operating system as being "worth the hype", considering the new interface to be "spiffier" and more intuitive than previous versions, but feeling that it may "annoy" experienced users with its "hand-holding". XP's expanded multimedia support and CD burning functionality were also noted, along with its streamlined networking tools. The performance improvements of XP in comparison to 2000 and Me were also praised, along with its increased number of built-in device drivers in comparison to 2000. The software compatibility tools were also praised, although it was noted that some programs, particularly older MS-DOS software, may not work correctly on XP because of its differing architecture. They panned Windows XP's new licensing model and product activation system, considering it to be a "slightly annoying roadblock", but acknowledged Microsoft's intent for the changes. PC Magazine provided similar praise, although noting that a number of its online features were designed to promote Microsoft-owned services, and that aside from quicker boot times, XP's overall performance showed little difference over Windows 2000. Windows XP's default theme, Luna, was criticized by some users for its childish look.
Despite extended support for Windows XP ending in 2014, many users – including some enterprises – were reluctant to move away from an operating system they viewed as a stable known quantity despite the many security and functionality improvements in subsequent releases of Windows. Windows XP's longevity was viewed as testament to its stability and Microsoft's successful attempts to keep it up to date, but also as an indictment of its direct successor's perceived failings.
Market share
According to web analytics data generated by Net Applications, Windows XP was the most widely used operating system until August 2012, when Windows 7 overtook it (Windows 7 was itself later overtaken by Windows 10); StatCounter data indicates that this happened almost a year earlier. In January 2014, Net Applications reported a market share of 29.23% of "desktop operating systems" for XP (when XP was introduced there was not a separate mobile category to track), while W3Schools reported a share of 11.0%.
In most regions or continents, Windows XP's market share on PCs, as a fraction of the total Windows share, has since fallen to around 1% or below (reported figures for Africa have ranged from roughly 0.8% to 1.7%). XP retained a double-digit market share in a few countries, most notably Armenia, where it was reported at over 50% (at one point 57%): after Windows 7, previously the highest-ranked version there, was displaced by Windows 10, Windows XP became the highest-ranked version for an extended period and exceeded a 60% share on some weekends in the summer of 2019.
Source code leak
On September 23, 2020, source code for Windows XP with Service Pack 1 and Windows Server 2003 was leaked onto the imageboard 4chan by an unknown user. Anonymous users managed to compile the code, and a Twitter user posted videos of the process on YouTube, demonstrating that the code was genuine; the videos were later removed by Microsoft on copyright grounds. The leak was incomplete, as it was missing the Winlogon source code and some other components. It was originally spread using magnet links and torrent files whose payload initially included Server 2003 and XP source code and was later updated with additional files, among them previous leaks of Microsoft products, its patents, media about conspiracy theories on Bill Gates by anti-vaccination movements, and an assortment of PDF files on different topics.
Microsoft issued a statement stating that it was investigating the leaks.
See also
BlueKeep (security vulnerability)
Comparison of operating systems
History of operating systems
List of operating systems
References
Further reading
External links
Windows XP End of Support
Security Update for Windows XP SP3 (KB4012598)
2001 software
Products and services discontinued in 2014
XP
IA-32 operating systems
Obsolete technologies
33941 | https://en.wikipedia.org/wiki/Windows%202000 | Windows 2000 | Windows 2000 is a major release of the Windows NT operating system developed by Microsoft and oriented towards businesses. It was the direct successor to Windows NT 4.0, and was released to manufacturing on December 15, 1999, and was officially released to retail on February 17, 2000. It was Microsoft's business operating system until the introduction of Windows XP in 2001.
Windows 2000 introduced NTFS 3.0, Encrypting File System, as well as basic and dynamic disk storage. Support for people with disabilities was improved over Windows NT 4.0 with a number of new assistive technologies, and Microsoft increased support for different languages and locale information. The Windows 2000 Server family has additional features, most notably the introduction of Active Directory, which in the years following became a widely used directory service in business environments.
Four editions of Windows 2000 were released: Professional, Server, Advanced Server, and Datacenter Server; the latter was both released to manufacturing and launched months after the other editions. While each edition of Windows 2000 was targeted at a different market, they shared a core set of features, including many system utilities such as the Microsoft Management Console and standard system administration applications.
Microsoft marketed Windows 2000 as the most secure Windows version ever at the time; however, it became the target of a number of high-profile virus attacks such as Code Red and Nimda. For ten years after its release, it continued to receive patches for security vulnerabilities nearly every month until reaching the end of its lifecycle on July 13, 2010.
Windows 2000 and Windows 2000 Server were succeeded by Windows XP and Windows Server 2003, released in 2001 and 2003, respectively. Windows 2000's successor, Windows XP, became the minimum supported OS for most Windows programs up until Windows 7 replaced it, and unofficial methods were developed to run these programs on Windows 2000.
Windows 2000 is the final version of Windows that supports PC-98, i486 and SGI Visual Workstation 320 and 540 systems, as well as the Alpha, MIPS and PowerPC architectures in alpha, beta, and release candidate versions. Its successor, Windows XP, requires a processor from one of its supported architectures: IA-32 for 32-bit CPUs, or x86-64 and Itanium for 64-bit CPUs.
History
Windows 2000 is a continuation of the Microsoft Windows NT family of operating systems, replacing Windows NT 4.0. The original name for the operating system was Windows NT 5.0; the preparatory beta builds, compiled between March and August 1997, were essentially identical to Windows NT 4.0. The first official beta was released in September 1997, followed by Beta 2 in August 1998. On October 27, 1998, Microsoft announced that the name of the final version of the operating system would be Windows 2000, a name which referred to its projected release date. Windows 2000 Beta 3 was released in May 1999. NT 5.0 Beta 1 was similar to NT 4.0, including a very similarly themed logo. NT 5.0 Beta 2 introduced a new 'mini' boot screen, and removed the 'dark space' theme in the logo. The NT 5.0 betas had very long startup and shutdown sounds; these were changed in the early Windows 2000 betas, and during Beta 3 new piano-based startup and shutdown sounds were introduced, which were featured in the final version as well as in Windows Me. The new login prompt from the final version made its first appearance in Beta 3 build 1946 (the first build of Beta 3). The new, updated icons (for My Computer, Recycle Bin etc.) first appeared in Beta 3 build 1964. The Windows 2000 boot screen in the final version first appeared in Beta 3 build 1983. Windows 2000 did not have an actual codename because, according to Dave Thompson of the Windows NT team, "Jim Allchin didn't like codenames".
Windows 2000 Service Pack 1 was codenamed "Asteroid" and Windows 2000 64-bit was codenamed "Janus." During development, there was a build for the Alpha which was abandoned in the final stages of development (between RC1 and RC2) after Compaq announced they had dropped support for Windows NT on Alpha. From here, Microsoft issued three release candidates between July and November 1999, and finally released the operating system to partners on December 12, 1999, followed by manufacturing three days later on December 15. The public could buy the full version of Windows 2000 on February 17, 2000. Three days before this event, which Microsoft advertised as "a standard in reliability," a leaked memo from Microsoft reported on by Mary Jo Foley revealed that Windows 2000 had "over 63,000 potential known defects." After Foley's article was published, she claimed that Microsoft blacklisted her for a considerable time. However, Abraham Silberschatz et al. claim in their computer science textbook that "Windows 2000 was the most reliable, stable operating system Microsoft had ever shipped to that point. Much of this reliability came from maturity in the source code, extensive stress testing of the system, and automatic detection of many serious errors in drivers." InformationWeek summarized the release "our tests show the successor to NT 4.0 is everything we hoped it would be. Of course, it isn't perfect either." Wired News later described the results of the February launch as "lackluster." Novell criticized Microsoft's Active Directory, the new directory service architecture, as less scalable or reliable than its own Novell Directory Services (NDS) alternative.
Windows 2000 was initially planned to replace both Windows 98 and Windows NT 4.0. However, this changed later, as an updated version of Windows 98 called Windows 98 SE was released in 1999.
On or shortly before February 12, 2004, "portions of the Microsoft Windows 2000 and Windows NT 4.0 source code were illegally made available on the Internet." The source of the leak was later traced to Mainsoft, a Windows Interface Source Environment partner. Microsoft issued the following statement:
"Microsoft source code is both copyrighted and protected as a trade secret. As such, it is illegal to post it, make it available to others, download it or use it."
Despite the warnings, the archive containing the leaked code spread widely on the file-sharing networks. On February 16, 2004, an exploit "allegedly discovered by an individual studying the leaked source code" for certain versions of Microsoft Internet Explorer was reported. On April 15, 2015, GitHub took down a repository containing a copy of the Windows NT 4.0 source code that originated from the leak.
Microsoft planned to release a 64-bit version of Windows 2000, which would run on 64-bit Intel Itanium microprocessors, in 2000. However, the first officially released 64-bit version of Windows was Windows XP 64-Bit Edition, released alongside the 32-bit editions of Windows XP on October 25, 2001, followed by the server versions Windows Datacenter Server Limited Edition and later Windows Advanced Server Limited Edition, which were based on the pre-release Windows Server 2003 (then known as Windows .NET Server) codebase. These editions were released in 2002, were shortly available through the OEM channel and then were superseded by the final versions of Server 2003.
New and updated features
Windows 2000 introduced many of the new features of Windows 98 and 98 SE into the NT line, such as the Windows Desktop Update, Internet Explorer 5 (Internet Explorer 6, which followed in 2001, is also available for Windows 2000), Outlook Express, NetMeeting, FAT32 support, Windows Driver Model, Internet Connection Sharing, Windows Media Player, WebDAV support etc. Certain new features are common across all editions of Windows 2000, among them NTFS 3.0, the Microsoft Management Console (MMC), UDF support, the Encrypting File System (EFS), Logical Disk Manager, Image Color Management 2.0, support for PostScript 3-based printers, OpenType (.OTF) and Type 1 PostScript (.PFB) font support (including a new font—Palatino Linotype—to showcase some OpenType features), the Data protection API (DPAPI), an LDAP/Active Directory-enabled Address Book, usability enhancements and multi-language and locale support. Windows 2000 also introduced USB device class drivers for USB printers, Mass storage class devices, and improved FireWire SBP-2 support for printers and scanners, along with a Safe removal applet for storage devices. Windows 2000 SP4 has added the native USB 2.0 support. Windows 2000 is also the first Windows version to support hibernation at the operating system level (OS-controlled ACPI S4 sleep state) unlike Windows 98 which required special drivers from the hardware manufacturer or driver developer.
A new capability designed to protect critical system files called Windows File Protection was introduced. This protects critical Windows system files by preventing programs other than Microsoft's operating system update mechanisms such as the Package Installer, Windows Installer and other update components from modifying them. The System File Checker utility provides users the ability to perform a manual scan of the integrity of all protected system files, and optionally repair them, either by restoring from a cache stored in a separate "DLLCACHE" directory, or from the original install media.
Microsoft recognized that a serious error (a Blue Screen of Death or stop error) could cause problems for servers that needed to be constantly running and so provided a system setting that would allow the server to automatically reboot when a stop error occurred. Also included is an option to dump any of the first 64 KB of memory to disk (the smallest amount of memory that is useful for debugging purposes, also known as a minidump), a dump of only the kernel's memory, or a dump of the entire contents of memory to disk, as well as write that this event happened to the Windows 2000 event log. In order to improve performance on servers running Windows 2000, Microsoft gave administrators the choice of optimizing the operating system's memory and processor usage patterns for background services or for applications. Windows 2000 also introduced core system administration and management features as the Windows Installer, Windows Management Instrumentation and Event Tracing for Windows (ETW) into the operating system.
Plug and Play and hardware support improvements
The most notable improvement from Windows NT 4.0 is the addition of Plug and Play with full ACPI and Windows Driver Model support. Similar to Windows 9x, Windows 2000 supports automatic recognition of installed hardware, hardware resource allocation, loading of appropriate drivers, PnP APIs and device notification events. The addition of the kernel PnP Manager along with the Power Manager are two significant subsystems added in Windows 2000.
Windows 2000 introduced version 3 print drivers (user mode printer drivers) based on Unidrv, which made it easier for printer manufacturers to write device drivers for printers. Generic support for 5-button mice is also included as standard and installing IntelliPoint allows reassigning the programmable buttons. Windows 98 lacked generic support. Driver Verifier was introduced to stress test and catch device driver bugs.
Shell
Windows 2000 introduces layered windows that allow for transparency, translucency and various transition effects like shadows, gradient fills and alpha-blended GUI elements to top-level windows. Menus support a new Fade transition effect.
The Start menu in Windows 2000 introduces personalized menus, expandable special folders and the ability to launch multiple programs without closing the menu by holding down the SHIFT key. A Re-sort button forces the entire Start Menu to be sorted by name. The Taskbar introduces support for balloon notifications which can also be used by application developers. Windows 2000 Explorer introduces customizable Windows Explorer toolbars, auto-complete in Windows Explorer address bar and Run box, advanced file type association features, displaying comments in shortcuts as tooltips, extensible columns in Details view (IColumnProvider interface), icon overlays, integrated search pane in Windows Explorer, sort by name function for menus, and Places bar in common dialogs for Open and Save.
Windows Explorer has been enhanced in several ways in Windows 2000. It is the first Windows NT release to include Active Desktop, first introduced as a part of Internet Explorer 4.0 (specifically Windows Desktop Update), and only pre-installed in Windows 98 by that time. It allowed users to customize the way folders look and behave by using HTML templates, having the file extension HTT. This feature was abused by computer viruses that employed malicious scripts, Java applets, or ActiveX controls in folder template files as their infection vector. Two such viruses are VBS/Roor-C and VBS.Redlof.a.
The "Web-style" folders view, with the left Explorer pane displaying details for the object currently selected, is turned on by default in Windows 2000. For certain file types, such as pictures and media files, the preview is also displayed in the left pane. Until the dedicated interactive preview pane appeared in Windows Vista, Windows 2000 had been the only Windows release to feature an interactive media player as the previewer for sound and video files, enabled by default. However, such a previewer can be enabled in previous versions of Windows with the Windows Desktop Update installed through the use of folder customization templates. The default file tooltip displays file title, author, subject and comments; this metadata may be read from a special NTFS stream, if the file is on an NTFS volume, or from an OLE structured storage stream, if the file is a structured storage document. All Microsoft Office documents since Office 4.0 make use of structured storage, so their metadata is displayable in the Windows 2000 Explorer default tooltip. File shortcuts can also store comments which are displayed as a tooltip when the mouse hovers over the shortcut. The shell introduces extensibility support through metadata handlers, icon overlay handlers and column handlers in Explorer Details view.
The right pane of Windows 2000 Explorer, which usually just lists files and folders, can also be customized. For example, the contents of the system folders aren't displayed by default, instead showing in the right pane a warning to the user that modifying the contents of the system folders could harm their computer. It's possible to define additional Explorer panes by using DIV elements in folder template files. This degree of customizability is new to Windows 2000; neither Windows 98 nor the Desktop Update could provide it. The new DHTML-based search pane is integrated into Windows 2000 Explorer, unlike the separate search dialog found in all previous Explorer versions. The Indexing Service has also been integrated into the operating system and the search pane built into Explorer allows searching files indexed by its database.
NTFS 3.0
Microsoft released version 3.0 of NTFS (sometimes incorrectly called "NTFS 5" in relation to the kernel version number) as part of Windows 2000; this introduced disk quotas (provided by QuotaAdvisor), file-system-level encryption, sparse files and reparse points. Sparse files allow for the efficient storage of data sets that are very large yet contain many areas that only have zeros. Reparse points allow the object manager to reset a file namespace lookup and let file system drivers implement changed functionality in a transparent manner. Reparse points are used to implement volume mount points, junctions, Hierarchical Storage Management, Native Structured Storage and Single Instance Storage. Volume mount points and directory junctions allow for a file to be transparently redirected from one file or directory location to another.
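Whether a given volume exposes these NTFS 3.0 capabilities can be checked through the Win32 volume-information API; below is a small ctypes sketch using flag constants from the Windows SDK.

```python
# Sketch: query a volume's capability flags with ctypes. The flag constants
# are the Windows SDK (winnt.h) values.
import ctypes

FILE_VOLUME_QUOTAS           = 0x00000020
FILE_SUPPORTS_SPARSE_FILES   = 0x00000040
FILE_SUPPORTS_REPARSE_POINTS = 0x00000080
FILE_SUPPORTS_ENCRYPTION     = 0x00020000

fs_flags = ctypes.c_uint32(0)
fs_name = ctypes.create_unicode_buffer(32)
ctypes.windll.kernel32.GetVolumeInformationW(
    "C:\\", None, 0, None, None, ctypes.byref(fs_flags), fs_name, 32)

print("File system:", fs_name.value)          # e.g. "NTFS"
for label, bit in [("quotas", FILE_VOLUME_QUOTAS),
                   ("sparse files", FILE_SUPPORTS_SPARSE_FILES),
                   ("reparse points", FILE_SUPPORTS_REPARSE_POINTS),
                   ("EFS encryption", FILE_SUPPORTS_ENCRYPTION)]:
    print(f"{label}: {'yes' if fs_flags.value & bit else 'no'}")
```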
Windows 2000 also introduces a Distributed Link Tracking service to ensure file shortcuts remain working even if the target is moved or renamed. The target object's unique identifier is stored in the shortcut file on NTFS 3.0 and Windows can use the Distributed Link Tracking service for tracking the targets of shortcuts, so that the shortcut file may be silently updated if the target moves, even to another hard drive.
Encrypting File System
The Encrypting File System (EFS) introduced strong file system-level encryption to Windows. It allows any folder or drive on an NTFS volume to be encrypted transparently by the user. EFS works together with the EFS service, Microsoft's CryptoAPI and the EFS File System Runtime Library (FSRTL). To date, its encryption has not been compromised.
EFS works by encrypting a file with a bulk symmetric key (also known as the File Encryption Key, or FEK), which is used because it takes less time to encrypt and decrypt large amounts of data than if an asymmetric key cipher were used. The symmetric key used to encrypt the file is then encrypted with a public key associated with the user who encrypted the file, and this encrypted data is stored in the header of the encrypted file. To decrypt the file, the file system uses the private key of the user to decrypt the symmetric key stored in the file header. It then uses the symmetric key to decrypt the file. Because this is done at the file system level, it is transparent to the user.
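The hybrid scheme described above can be illustrated with a short sketch; this is not Microsoft's EFS code, the algorithm choices are arbitrary, and it relies on the third-party cryptography package.

```python
# Conceptual sketch of the hybrid scheme described above (not Microsoft's EFS
# code): a random symmetric key encrypts the file, and that key is itself
# encrypted with the user's RSA public key and stored alongside the ciphertext.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# "Encrypt" a file: the bulk symmetric key plays the role of the FEK.
fek = Fernet.generate_key()
ciphertext = Fernet(fek).encrypt(b"contents of the file")
encrypted_fek = user_key.public_key().encrypt(fek, oaep)   # stored in the file header

# "Decrypt": recover the FEK with the private key, then decrypt the data.
recovered_fek = user_key.decrypt(encrypted_fek, oaep)
assert Fernet(recovered_fek).decrypt(ciphertext) == b"contents of the file"
```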
In case a user loses access to their key, EFS includes support for recovery agents that can decrypt the user's files. A recovery agent is a user who is authorized by a public key recovery certificate to decrypt files belonging to other users using a special private key. By default, local administrators are recovery agents; this can be customized using Group Policy.
Basic and dynamic disk storage
Windows 2000 introduced the Logical Disk Manager and the diskpart command line tool for dynamic storage. All versions of Windows 2000 support three types of dynamic disk volumes (along with basic disks): simple volumes, spanned volumes and striped volumes:
Simple volume, a volume with disk space from one disk.
Spanned volumes, where up to 32 disks show up as one, increasing the volume's size but not enhancing performance. When one disk fails, the array is destroyed, though some data may be recoverable. This corresponds to JBOD and not to RAID-1.
Striped volumes, also known as RAID-0, store all their data across several disks in stripes. This allows better performance because disk reads and writes are balanced across multiple disks. Like spanned volumes, when one disk in the array fails, the entire array is destroyed (some data may be recoverable).
In addition to these disk volumes, Windows 2000 Server, Windows 2000 Advanced Server, and Windows 2000 Datacenter Server support mirrored volumes and striped volumes with parity:
Mirrored volumes, also known as RAID-1, store identical copies of their data on 2 or more identical disks (mirrored). This allows for fault tolerance; in the event one disk fails, the other disk(s) can keep the server operational until the server can be shut down for replacement of the failed disk.
Striped volumes with parity, also known as RAID-5, function similarly to striped volumes/RAID-0, except that "parity data" is written out across each of the disks in addition to the data. This allows the data to be "rebuilt" in the event a disk in the array needs replacement; a worked example of parity-based rebuilding follows this list.
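The "parity data" used by such volumes is, in the usual implementation, the bitwise XOR of the data blocks in each stripe, which is what allows a single lost block to be recomputed; a minimal sketch:

```python
# Minimal sketch of XOR parity as commonly used for striped-with-parity (RAID-5)
# volumes: the parity block is the XOR of the data blocks in the same stripe, so
# any single missing block can be recomputed from the survivors.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

stripe = [b"disk0 data..", b"disk1 data..", b"disk2 data.."]
parity = xor_blocks(stripe)                      # written to the parity disk

# Suppose disk 1 fails: rebuild its block from the remaining blocks plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```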
Accessibility
With Windows 2000, Microsoft introduced the Windows 9x accessibility features for people with visual and auditory impairments and other disabilities into the NT-line of operating systems. These included:
StickyKeys: makes modifier keys (ALT, CTRL and SHIFT) become "sticky": a user can press the modifier key, and then release it before pressing the combination key. (Activated by pressing Shift five times quickly.)
FilterKeys: a group of keyboard-related features for people with typing issues, including:
Slow Keys: Ignore any keystroke not held down for a certain period.
Bounce Keys: Ignore repeated keystrokes pressed in quick succession.
Repeat Keys: lets users slow down the rate at which keys are repeated via the keyboard's key-repeat feature.
Toggle Keys: when turned on, Windows will play a sound when the CAPS LOCK, NUM LOCK or SCROLL LOCK key is pressed.
SoundSentry: designed to help users with auditory impairments, Windows 2000 shows a visual effect when a sound is played through the sound system.
MouseKeys: lets users move the cursor around the screen via the numeric keypad.
SerialKeys: lets Windows 2000 support speech augmentation devices.
High contrast theme: to assist users with visual impairments.
Microsoft Magnifier: a screen magnifier that enlarges a part of the screen the cursor is over.
Additionally, Windows 2000 introduced the following new accessibility features:
On-screen keyboard: displays a virtual keyboard on the screen and allows users to press its keys using a mouse or a joystick.
Microsoft Narrator: introduced in Windows 2000, this is a screen reader that utilizes the Speech API 4, which would later be updated to Speech API 5 in Windows XP.
Utility Manager: an application designed to start, stop, and manage when accessibility features start. This was eventually replaced by the Ease of Access Center in Windows Vista.
Accessibility Wizard: a control panel applet that helps users set up their computer for people with disabilities.
Languages and locales
Windows 2000 introduced the Multilingual User Interface (MUI). Besides English, Windows 2000 incorporates support for Arabic, Armenian, Baltic, Central European, Cyrillic, Georgian, Greek, Hebrew, Indic, Japanese, Korean, Simplified Chinese, Thai, Traditional Chinese, Turkic, Vietnamese and Western European languages. It also has support for many different locales.
Games
Windows 2000 included version 7.0 of the DirectX API, commonly used by game developers on Windows 98. The last version of DirectX that was released for Windows 2000 was DirectX 9.0c (Shader Model 3.0), which shipped with Windows XP Service Pack 2. Microsoft published quarterly updates to DirectX 9.0c through the February 2010 release after which support was dropped in the June 2010 SDK. These updates contain bug fixes to the core runtime and some additional libraries such as D3DX, XAudio 2, XInput and Managed DirectX components. The majority of games written for versions of DirectX 9.0c (up to the February 2010 release) can therefore run on Windows 2000.
Windows 2000 included the same games as Windows NT 4.0 did: FreeCell, Minesweeper, Pinball, and Solitaire.
System utilities
Windows 2000 introduced the Microsoft Management Console (MMC), which is used to create, save, and open administrative tools. Each of these is called a console, and most allow an administrator to administer other Windows 2000 computers from one centralised computer. Each console can contain one or many specific administrative tools, called snap-ins. These can be either standalone (with one function), or an extension (adding functions to an existing snap-in). In order to provide the ability to control what snap-ins can be seen in a console, the MMC allows consoles to be created in author mode or user mode. Author mode allows snap-ins to be added, new windows to be created, all portions of the console tree to be displayed and consoles to be saved. User mode allows consoles to be distributed with restrictions applied. User mode consoles can grant full access to the user for any change, or they can grant limited access, preventing users from adding snapins to the console though they can view multiple windows in a console. Alternatively users can be granted limited access, preventing them from adding to the console and stopping them from viewing multiple windows in a single console.
The main tools that come with Windows 2000 can be found in the Computer Management console (in Administrative Tools in the Control Panel). This contains the Event Viewer—a means of seeing events and the Windows equivalent of a log file, a system information utility, a backup utility, Task Scheduler and management consoles to view open shared folders and shared folder sessions, configure and manage COM+ applications, configure Group Policy, manage all the local users and user groups, and a device manager. It contains Disk Management and Removable Storage snap-ins, a disk defragmenter as well as a performance diagnostic console, which displays graphs of system performance and configures data logs and alerts. It also contains a service configuration console, which allows users to view all installed services and to stop and start them, as well as configure what those services should do when the computer starts. CHKDSK has significant performance improvements.
Windows 2000 comes with two utilities to edit the Windows registry, REGEDIT.EXE and REGEDT32.EXE. REGEDIT has been directly ported from Windows 98, and therefore does not support editing registry permissions. REGEDT32 has the older multiple document interface (MDI) and can edit registry permissions in the same manner that Windows NT's REGEDT32 program could. REGEDIT has a left-side tree view of the Windows registry, lists all loaded hives and represents the three components of a value (its name, type, and data) as separate columns of a table. REGEDT32 has a left-side tree view, but each hive has its own window, so the tree displays only keys and it represents values as a list of strings. REGEDIT supports right-clicking of entries in a tree view to adjust properties and other settings. REGEDT32 requires all actions to be performed from the top menu bar. Windows XP is the first system to integrate these two programs into a single utility, adopting the REGEDIT behavior with the additional NT features.
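The name, type and data columns that REGEDIT displays correspond directly to what the registry API returns for each value; a small sketch using Python's winreg module is shown below (the key used exists on NT-family systems, though its individual value names vary by version).

```python
# Sketch: the (name, data, type) triple REGEDIT shows as columns is exactly what
# the registry API hands back for each value under a key.
import winreg

key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    i = 0
    while True:
        try:
            name, data, vtype = winreg.EnumValue(key, i)
        except OSError:          # no more values under this key
            break
        print(f"{name!r:30} type={vtype:2} data={data!r}")
        i += 1
```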
The System File Checker (SFC) also comes with Windows 2000. It is a command line utility that scans system files and verifies whether they were signed by Microsoft and works in conjunction with the Windows File Protection mechanism. It can also repopulate and repair all the files in the Dllcache folder.
Recovery Console
The Recovery Console is run from outside the installed copy of Windows to perform maintenance tasks that can neither be run from within it nor feasibly be run from another computer or copy of Windows 2000. It is usually used to recover the system from problems that cause booting to fail, which would render other tools useless, like Safe Mode or Last Known Good Configuration, or chkdsk. It includes commands like fixmbr, which are not present in MS-DOS.
It has a simple command-line interface, used to check and repair the hard drive(s), repair boot information (including NTLDR), replace corrupted system files with fresh copies from the CD, or enable/disable services and drivers for the next boot.
The console can be accessed in either of the two ways:
Booting from the Windows 2000 CD, and choosing to start the Recovery Console from the CD itself instead of continuing with setup. The Recovery Console is accessible as long as the installation CD is available.
Preinstalling the Recovery Console on the hard disk as a startup option in Boot.ini, via WinNT32.exe, with the /cmdcons switch. In this case, it can only be started as long as NTLDR can boot from the system partition.
Windows Scripting Host 2.0
Windows 2000 introduced Windows Script Host 2.0 which included an expanded object model and support for logon and logoff scripts.
Networking
Starting with Windows 2000, the Server Message Block (SMB) protocol directly interfaces with TCP/IP. In Windows NT 4.0, SMB requires the NetBIOS over TCP/IP (NBT) protocol to work on a TCP/IP network.
Windows 2000 introduces a client-side DNS caching service. When the Windows DNS resolver receives a query response, the DNS resource record is added to a cache. When it queries the same resource record name again and it is found in the cache, then the resolver does not query the DNS server. This speeds up DNS query time and reduces network traffic.
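A positive cache with time-based expiry, as described above, can be sketched in a few lines; this is a conceptual illustration rather than the Windows resolver's implementation, which also caches negative answers and honours per-record TTLs from the server.

```python
# Conceptual sketch of a client-side positive DNS cache with TTL expiry.
import socket, time

_cache = {}   # hostname -> (expiry_timestamp, address)

def resolve(hostname, ttl=300):
    now = time.monotonic()
    hit = _cache.get(hostname)
    if hit and hit[0] > now:
        return hit[1]                          # served from cache, no network query
    address = socket.gethostbyname(hostname)   # query the configured DNS server
    _cache[hostname] = (now + ttl, address)
    return address

print(resolve("example.com"))
print(resolve("example.com"))   # second call is answered from the cache
```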
Server family features
The Windows 2000 Server family consists of Windows 2000 Server, Windows 2000 Advanced Server, Windows 2000 Small Business Server, and Windows 2000 Datacenter Server.
All editions of Windows 2000 Server have the following services and features built in:
Routing and Remote Access Service (RRAS) support, facilitating dial-up and VPN connections using IPsec, L2TP or L2TP/IPsec, support for RADIUS authentication in Internet Authentication Service, network connection sharing, Network Address Translation, unicast and multicast routing schemes.
Remote access security features: Remote Access Policies for setup, verify Caller ID (IP address for VPNs), callback and Remote access account lockout
Autodial by location feature using the Remote Access Auto Connection Manager service
Extensible Authentication Protocol support in IAS (EAP-MD5 and EAP-TLS) later upgraded to PEAPv0/EAP-MSCHAPv2 and PEAP-EAP-TLS in Windows 2000 SP4
DNS server, including support for Dynamic DNS. Active Directory relies heavily on DNS.
IPsec support and TCP/IP filtering
Smart card support
Microsoft Connection Manager Administration Kit (CMAK) and Connection Point Services
Support for distributed file systems (DFS)
Hierarchical Storage Management support including remote storage, a service that runs with NTFS and automatically transfers files that are not used for some time to less expensive storage media
Fault tolerant volumes, namely Mirrored and RAID-5
Group Policy (part of Active Directory)
IntelliMirror, a collection of technologies for fine-grained management of Windows 2000 Professional clients that duplicates users' data, applications, files, and settings in a centralized location on the network. IntelliMirror employs technologies such as Group Policy, Windows Installer, Roaming profiles, Folder Redirection, Offline Files (also known as Client Side Caching or CSC), File Replication Service (FRS), Remote Installation Services (RIS) to address desktop management scenarios such as user data management, user settings management, software installation and maintenance.
COM+, Microsoft Transaction Server and Distributed Transaction Coordinator
MSMQ 2.0
TAPI 3.0
Integrated Windows Authentication (including Kerberos, Secure channel and SPNEGO (Negotiate) SSP packages for Security Support Provider Interface (SSPI)).
MS-CHAP v2 protocol
Public Key Infrastructure (PKI) and Enterprise Certificate Authority support
Terminal Services and support for the Remote Desktop Protocol (RDP)
Internet Information Services (IIS) 5.0 and Windows Media Services 4.1
Network quality of service features
A new Windows Time service which is an implementation of Simple Network Time Protocol (SNTP) as detailed in IETF . The Windows Time service synchronizes the date and time of computers in a domain running on Windows 2000 Server or later. Windows 2000 Professional includes an SNTP client.
The Server editions include more features and components, including the Microsoft Distributed File System (DFS), Active Directory support and fault-tolerant storage.
Distributed File System
The Distributed File System (DFS) allows shares in multiple different locations to be logically grouped under one folder, or DFS root. When a user tries to access a network share under the DFS root, the user is really looking at a DFS link and the DFS server transparently redirects them to the correct file server and share. A DFS root can only exist on a Windows 2000 version that is part of the server family, and only one DFS root can exist on that server.
There are two ways of implementing a DFS namespace on Windows 2000: either through a standalone DFS root or a domain-based DFS root. A standalone DFS root allows for DFS roots only on the local computer, and thus does not use Active Directory. Domain-based DFS roots exist within Active Directory and can have their information distributed to other domain controllers within the domain – this provides fault tolerance to DFS. DFS roots that exist on a domain must be hosted on a domain controller or on a domain member server. The file and root information is replicated via the Microsoft File Replication Service (FRS).
Active Directory
A new way of organizing Windows network domains, or groups of resources, called Active Directory, is introduced with Windows 2000 to replace Windows NT's earlier domain model. Active Directory's hierarchical nature allowed administrators a built-in way to manage user and computer policies and user accounts, and to automatically deploy programs and updates with a greater degree of scalability and centralization than provided in previous Windows versions. User information stored in Active Directory also provided a convenient phone book-like function to end users. Active Directory domains can vary from small installations with a few hundred objects, to large installations with millions. Active Directory can organise and link groups of domains into a contiguous domain name space to form trees. Groups of trees outside of the same namespace can be linked together to form forests.
Active Directory services can be installed on any Windows 2000 Server, Advanced Server, or Datacenter Server computer, but cannot be installed on a Windows 2000 Professional computer. However, Windows 2000 Professional is the first client operating system able to exploit Active Directory's new features. As part of an organization's migration, Windows NT clients continued to function until all clients were upgraded to Windows 2000 Professional, at which point the Active Directory domain could be switched to native mode and maximum functionality achieved.
Active Directory requires a DNS server that supports SRV resource records, or that an organization's existing DNS infrastructure be upgraded to support this. There should be one or more domain controllers to hold the Active Directory database and provide Active Directory directory services.
Volume fault tolerance
Along with support for simple, spanned and striped volumes, the Windows 2000 Server family also supports fault-tolerant volume types. The types supported are mirrored volumes and RAID-5 volumes:
Mirrored volumes: the volume contains several disks, and when data is written to one it is also written to the other disks. This means that if one disk fails, the data can be totally recovered from the other disk. Mirrored volumes are also known as RAID-1.
RAID-5 volumes: a RAID-5 volume consists of multiple disks, and it uses block-level striping with parity data distributed across all member disks. Should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data on the failed drive "on-the-fly."
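As a rough illustration of the parity idea only (not the actual on-disk layout used by Windows 2000 dynamic volumes), the following Python sketch shows how XOR parity lets a missing block be rebuilt from the surviving blocks:

def xor_parity(blocks):
    # XOR the blocks byte by byte; the result is the parity block.
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

disk1, disk2, disk3 = b"AAAA", b"BBBB", b"CCCC"   # illustrative data blocks
parity = xor_parity([disk1, disk2, disk3])        # parity stored on another disk

# If disk2 fails, XOR-ing the survivors with the parity block rebuilds it.
assert xor_parity([disk1, disk3, parity]) == disk2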
Deployment
Windows 2000 can be deployed to a site via various methods. It can be installed onto servers via traditional media (such as CD) or via distribution folders that reside on a shared folder. Installations can be attended or unattended. During a manual installation, the administrator must specify configuration options. Unattended installations are scripted via an answer file, or a predefined script in the form of an INI file that has all the options filled in. An answer file can be created manually or using the graphical Setup manager. The Winnt.exe or Winnt32.exe program then uses that answer file to automate the installation. Unattended installations can be performed via a bootable CD, using Microsoft Systems Management Server (SMS), via the System Preparation Tool (Sysprep), via the Winnt32.exe program using the /syspart switch or via Remote Installation Services (RIS). The ability to slipstream a service pack into the original operating system setup files is also introduced in Windows 2000.
The Sysprep method is started on a standardized reference computer – though the hardware need not be similar – and it copies the required installation files from the reference computer to the target computers. The hard drive does not need to be in the target computer and may be swapped out to it at any time, with the hardware configured later. The Winnt.exe program must also be passed a /unattend switch that points to a valid answer file and a /s switch that points to one or more valid installation sources.
Sysprep allows the duplication of a disk image on an existing Windows 2000 Server installation to multiple servers. This means that all applications and system configuration settings will be copied across to the new installations, and thus, the reference and target computers must have the same HALs, ACPI support, and mass storage devices – though Windows 2000 automatically detects "plug and play" devices. The primary reason for using Sysprep is to quickly deploy Windows 2000 to a site that has multiple computers with standard hardware. (If a system had different HALs, mass storage devices or ACPI support, then multiple images would need to be maintained.)
Systems Management Server can be used to upgrade multiple computers to Windows 2000. These must be running Windows NT 3.51, Windows NT 4.0, Windows 98 or Windows 95 OSR2.x along with the SMS client agent that can receive software installation operations. Using SMS allows installations over a wide area and provides centralised control over upgrades to systems.
Remote Installation Services (RIS) are a means to automatically install Windows 2000 Professional (and not Windows 2000 Server) to a local computer over a network from a central server. Images do not have to support specific hardware configurations and the security settings can be configured after the computer reboots as the service generates a new unique security ID (SID) for the machine. This is required so that local accounts are given the right identifier and do not clash with other Windows 2000 Professional computers on a network.
RIS requires that client computers be able to boot over the network, either via a network interface card with a Pre-Boot Execution Environment (PXE) boot ROM or via a network card supported by the remote boot disk generator. The remote computer must also meet the Net PC specification. The server that RIS runs on must be Windows 2000 Server, and it must be able to access a DNS service, a DHCP service and the Active Directory services.
Editions
Microsoft released various editions of Windows 2000 for different markets and business needs: Professional, Server, Advanced Server and Datacenter Server. Each was packaged separately.
Windows 2000 Professional was designed as the desktop operating system for businesses and power users. It is the client version of Windows 2000. It offers greater security and stability than many of the previous Windows desktop operating systems. It supports up to two processors, and can address up to 4GB of RAM. The system requirements are a Pentium processor (or equivalent) of 133MHz or greater, at least 32MB of RAM, 650MB of hard drive space, and a CD-ROM drive (recommended: Pentium II, 128MB of RAM, 2GB of hard drive space, and CD-ROM drive). However, despite the official minimum processor requirements, it is still possible to install Windows 2000 on 4th-generation x86 CPUs such as the 80486.
Windows 2000 Server shares the same user interface with Windows 2000 Professional, but contains additional components for the computer to perform server roles and run infrastructure and application software. A significant new component introduced in the server versions is Active Directory, which is an enterprise-wide directory service based on LDAP (Lightweight Directory Access Protocol). Additionally, Microsoft integrated Kerberos network authentication, replacing the often-criticised NTLM (NT LAN Manager) authentication system used in previous versions. This also provided a purely transitive-trust relationship between Windows 2000 Server domains in a forest (a collection of one or more Windows 2000 domains that share a common schema, configuration, and global catalog, being linked with two-way transitive trusts). Furthermore, Windows 2000 introduced a Domain Name Server which allows dynamic registration of IP addresses. Windows 2000 Server supports up to 4 processors and 4GB of RAM, with a minimum requirement of 128MB of RAM and 1GB hard disk space, however requirements may be higher depending on installed components.
Windows 2000 Advanced Server is a variant of the Windows 2000 Server operating system designed for medium-to-large businesses. It offers server clustering, 8-way SMP support (up to eight CPUs), and up to 8GB of main memory on Physical Address Extension (PAE) systems. It supports TCP/IP load balancing and builds on Microsoft Cluster Server (MSCS) in Windows NT Enterprise Server 4.0, adding enhanced functionality for two-node clusters. System requirements are similar to those of Windows 2000 Server, but may need to be higher to scale to larger infrastructure.
Windows 2000 Datacenter Server is a variant of Windows 2000 Server designed for large businesses that move large quantities of confidential or sensitive data frequently via a central server. Like Advanced Server, it supports clustering, failover and load balancing. Its minimum system requirements are normal, but it was designed to be capable of handling advanced, fault-tolerant and scalable hardware, for instance computers with up to 32 CPUs and 32GB of RAM, with rigorous system testing and qualification, hardware partitioning, coordinated maintenance and change control. System requirements are similar to those of Windows 2000 Advanced Server, but may need to be higher to scale to larger infrastructure. Windows 2000 Datacenter Server was released to manufacturing on August 11, 2000 and launched on September 26, 2000. This edition was based on Windows 2000 with Service Pack 1 and was not available at retail.
Service packs
Windows 2000 has received four full service packs and one rollup update package following SP4, which is the last service pack. Microsoft phased out all development of its Java Virtual Machine (JVM) from Windows 2000 in SP3. Internet Explorer 5.01 has also been upgraded to the corresponding service pack level.
Service Pack 4 with Update Rollup was released on September 13, 2005, nearly four years following the release of Windows XP and sixteen months prior to the release of Windows Vista.
Microsoft had originally intended to release a fifth service pack for Windows 2000, but cancelled this project early in its development and instead released Update Rollup 1 for SP4, a collection of all the security-related hotfixes and fixes for some other significant issues. The Update Rollup does not include all non-security related hotfixes and was not subjected to the same extensive regression testing as a full service pack. Microsoft stated that this update would meet customers' needs better than a whole new service pack, and would still help Windows 2000 customers secure their PCs, reduce support costs, and support existing computer hardware.
Upgradeability
Several Windows 2000 components can be upgraded to later versions, including versions of components introduced in later releases of Windows, and later versions of other major Microsoft applications are also available. These latest versions for Windows 2000 include:
ActiveSync 4.5
DirectX 9.0c (5 February 2010 Redistributable)
Internet Explorer 6 SP1 and Outlook Express 6 SP1
Microsoft Agent 2.0
Microsoft Data Access Components 2.81
Microsoft NetMeeting 3.01 and Microsoft Office 2003 on Windows 2000 SP3 and SP4 (and Microsoft Office XP on Windows 2000 versions below SP3.)
MSN Messenger 7.0 (Windows Messenger)
MSXML 6.0 SP2
.NET Framework 2.0 SP2
Tweak UI 1.33
Visual C++ 2008
Visual Studio 2005
Windows Desktop Search 2.66
Windows Script Host 5.7
Windows Installer 3.1
Windows Media Format Runtime and Windows Media Player 9 Series (including Windows Media Encoder 7.1 and the Windows Media 8 Encoding Utility)
Security
During the Windows 2000 period, the nature of attacks on Windows servers changed: more attacks came from remote sources via the Internet. This has led to an overwhelming number of malicious programs exploiting the IIS services – specifically a notorious buffer overflow tendency. This tendency is not operating-system-version specific, but rather configuration-specific: it depends on the services that are enabled. Following this, a common complaint is that "by default, Windows 2000 installations contain numerous potential security problems. Many unneeded services are installed and enabled, and there is no active local security policy." In addition to insecure defaults, according to the SANS Institute, the most common flaws discovered are remotely exploitable buffer overflow vulnerabilities. Other criticized flaws include the use of vulnerable encryption techniques.
Code Red and Code Red II were famous (and much discussed) worms that exploited vulnerabilities of the Windows Indexing Service of Windows 2000's Internet Information Services (IIS). In August 2003, security researchers estimated that two major worms called Sobig and Blaster infected more than half a million Microsoft Windows computers. The 2005 Zotob worm was blamed for security compromises on Windows 2000 machines at ABC, CNN, the New York Times Company, and the United States Department of Homeland Security.
On September 8, 2009, Microsoft skipped patching two of the five security flaws that were addressed in the monthly security update, saying that patching one of the critical security flaws was "infeasible." According to Microsoft Security Bulletin MS09-048: "The architecture to properly support TCP/IP protection does not exist on Microsoft Windows 2000 systems, making it infeasible to build the fix for Microsoft Windows 2000 Service Pack 4 to eliminate the vulnerability. To do so would require re-architecting a very significant amount of the Microsoft Windows 2000 Service Pack 4 operating system, there would be no assurance that applications designed to run on Microsoft Windows 2000 Service Pack 4 would continue to operate on the updated system." No patches for this flaw were released for the newer Windows XP (32-bit) and Windows XP Professional x64 Edition either, despite both also being affected; Microsoft suggested turning on Windows Firewall in those versions.
Support lifecycle
Windows 2000 and Windows 2000 Server were superseded by newer Microsoft operating systems: Windows 2000 Server products by Windows Server 2003, and Windows 2000 Professional by Windows XP Professional.
The Windows 2000 family of operating systems moved from mainstream support to the extended support phase on June 30, 2005. Microsoft says that this marks the progression of Windows 2000 through the Windows lifecycle policy. Under mainstream support, Microsoft freely provides design changes if any, service packs and non-security related updates in addition to security updates, whereas in extended support, service packs are not provided and non-security updates require contacting the support personnel by e-mail or phone. Under the extended support phase, Microsoft continued to provide critical security updates every month for all components of Windows 2000 (including Internet Explorer 5.0 SP4) and paid per-incident support for technical issues. Because of Windows 2000's age, updated versions of components such as Windows Media Player 11 and Internet Explorer 7 have not been released for it. In the case of Internet Explorer, Microsoft said in 2005 that, "some of the security work in IE 7 relies on operating system functionality in XP SP2 that is non-trivial to port back to Windows 2000."
While users of Windows 2000 Professional and Server were eligible to purchase the upgrade license for Windows Vista Business or Windows Server 2008, neither of these operating systems can directly perform an upgrade installation from Windows 2000; a clean installation must be performed instead, or a two-step upgrade through Windows XP or Windows Server 2003. Microsoft has dropped the upgrade path from Windows 2000 (and earlier) to Windows 7. Users of Windows 2000 must buy a full Windows 7 license.
Although Windows 2000 is the last NT-based version of Microsoft Windows which does not include product activation, Microsoft has introduced Windows Genuine Advantage for certain downloads and non-critical updates from the Download Center for Windows 2000.
Windows 2000 reached the end of its lifecycle on July 13, 2010 (alongside Service Pack 2 of Windows XP). It no longer receives new security updates or security-related hotfixes after this date. In Japan, over 130,000 servers and 500,000 PCs in local governments were affected; many local governments said that they would not update as they did not have the funds to cover a replacement.
As of 2011, Windows Update still serves the Windows 2000 updates that were available as of Patch Tuesday in July 2010, e.g., if older optional Windows 2000 features are enabled later. Microsoft Office products under Windows 2000 have their own product lifecycles. While Internet Explorer 6 for Windows XP did receive security patches up until it lost support, this is not the case for IE6 under Windows 2000. The Windows Malicious Software Removal Tool, installed monthly by Windows Update for XP and later versions, can still be downloaded manually for Windows 2000.
In 2020, Microsoft announced that it would disable the Windows Update service for SHA-1 endpoints; since Windows 2000 never received an update for SHA-2, Windows Update services are no longer available for the OS as of late July 2020. However, as of April 2021, the old updates for Windows 2000 are still available on the Microsoft Update Catalog.
Total cost of ownership
In October 2002, Microsoft commissioned IDC to determine the total cost of ownership (TCO) for enterprise applications on Windows 2000 versus the TCO of the same applications on Linux. IDC's report is based on telephone interviews of IT executives and managers of 104 North American companies in which they determined what they were using for a specific workload for file, print, security and networking services.
IDC determined that the four areas where Windows 2000 had a better TCO than Linux – over a period of five years for an average organization of 100 employees – were file, print, network infrastructure and security infrastructure. They determined, however, that Linux had a better TCO than Windows 2000 for web serving. The report also found that the greatest cost was not in the procurement of software and hardware, but in staffing costs and downtime. While the report applied a 40% productivity factor during IT infrastructure downtime, recognizing that employees are not entirely unproductive, it did not consider the impact of downtime on the profitability of the business. The report stated that Linux servers had less unplanned downtime than Windows 2000 servers. It found that most Linux servers ran less workload per server than Windows 2000 servers and also that none of the businesses interviewed used 4-way SMP Linux computers. The report also did not take into account specific application servers – servers that need low maintenance and are provided by a specific vendor. The report did emphasize that TCO was only one factor in considering whether to use a particular IT platform, and also noted that as management and server software improved and became better packaged the overall picture shown could change.
See also
Architecture of Windows NT
BlueKeep (security vulnerability)
Comparison of operating systems
DEC Multia, one of the DEC Alpha computers capable of running Windows 2000 beta
Microsoft Servers, Microsoft's network server software brand
Windows Neptune, a cancelled consumer edition based on Windows 2000
References
Further reading
Bolosky, William J.; Corbin, Scott; Goebel, David; & Douceur, John R. "Single Instance Storage in Windows 2000." Microsoft Research & Balder Technology Group, Inc. (white paper).
Bozman, Jean; Gillen, Al; Kolodgy, Charles; Kusnetzky, Dan; Perry, Randy; & Shiang, David (October 2002). "Windows 2000 Versus Linux in Enterprise Computing: An assessment of business value for selected workloads." IDC, sponsored by Microsoft Corporation. White paper.
Finnel, Lynn (2000). MCSE Exam 70–215, Microsoft Windows 2000 Server. Microsoft Press.
Microsoft. Running Nonnative Applications in Windows 2000 Professional. Windows 2000 Resource Kit. Retrieved May 4, 2005.
Microsoft. "Active Directory Data Storage." Retrieved May 9, 2005.
Minasi, Mark (1999). Installing Windows 2000 of Mastering Windows 2000 Server. Sybex. Chapter 3 – Installing Windows 2000 On Workstations with Remote Installation Services.
Russinovich, Mark (October 1997). "Inside NT's Object Manager." Windows IT Pro.
Russinovich, Mark (2002). "Inside Win2K NTFS, Part 1." Windows IT Pro (formerly Windows 2000 Magazine).
Saville, John (January 9, 2000). "What is Native Structure Storage?." Windows IT Pro (formerly Windows 2000 Magazine).
Siyan, Kanajit S. (2000). Windows 2000 Professional Reference. New Riders.
Solomon, David; & Russinovich, Mark E. (2000). Inside Microsoft Windows 2000 (Third Edition). Microsoft Press.
Tanenbaum, Andrew S. (2001), Modern Operating Systems (2nd Edition), Prentice-Hall
Trott, Bob (October 27, 1998). "It's official: NT 5.0 becomes Windows 2000." InfoWorld.
Wallace, Rick (2000). MCSE Exam 70–210, Microsoft Windows 2000 Professional. Microsoft Press. .
External links
Windows 2000 End-of-Life
Windows 2000 Service Pack 4
Windows 2000 Update Rollup 1 Version 2
1999 software
2000 software
Products and services discontinued in 2010
IA-32 operating systems
2000
Wake-on-LAN
Wake-on-LAN (WoL or WOL) is an Ethernet or Token Ring computer networking standard that allows a computer to be turned on or awakened by a network message.
The message is usually sent to the target computer by a program executed on a device connected to the same local area network. It is also possible to initiate the message from another network by using subnet directed broadcasts or a WoL gateway service.
Equivalent terms include wake on WAN, remote wake-up, power on by LAN, power up by LAN, resume by LAN, resume on LAN and wake up on LAN. If the computer being awakened is communicating via Wi-Fi, a supplementary standard called Wake on Wireless LAN (WoWLAN) must be employed.
The WoL and WoWLAN standards are often supplemented by vendors to provide protocol-transparent on-demand services, for example in the Apple Bonjour wake-on-demand (Sleep Proxy) feature.
History
In October 1996, Intel and IBM formed the Advanced Manageability Alliance (AMA). In April 1997, this alliance introduced the Wake-on-LAN technology.
Principle of operation
Ethernet connections, including home and work networks, wireless data networks and the Internet itself, are based on frames sent between computers. WoL is implemented using a specially designed frame called a magic packet, which is sent to all computers in a network, among them the computer to be awakened. The magic packet contains the MAC address of the destination computer, an identifying number built into each network interface card ("NIC") or other ethernet device in a computer, that enables it to be uniquely recognized and addressed on a network. Powered-down or turned off computers capable of Wake-on-LAN will contain network devices able to "listen" to incoming packets in low-power mode while the system is powered down. If a magic packet is received that is directed to the device's MAC address, the NIC signals the computer's power supply or motherboard to initiate system wake-up, in the same way that pressing the power button would do.
The magic packet is sent on the data link layer (layer 2 in the OSI model) and when sent, is broadcast to all attached devices on a given network, using the network broadcast address; the IP-address (layer 3 in the OSI model) is not used.
Because Wake-on-LAN is built upon broadcast technology, it can generally only be used within the current network subnet. There are some exceptions, though, and Wake-on-LAN can operate across any network in practice, given appropriate configuration and hardware, including remote wake-up across the Internet.
In order for Wake-on-LAN to work, parts of the network interface need to stay on. This consumes a small amount of standby power, much less than normal operating power. The link speed is usually reduced to the lowest possible speed to not waste power (e.g. a Gigabit Ethernet NIC maintains only a 10 Mbit/s link). Disabling wake-on-LAN when not needed can very slightly reduce power consumption on computers that are switched off but still plugged into a power socket. The power drain becomes a consideration on battery powered devices such as laptops as this can deplete the battery even when the device is completely shut down.
Magic packet
The magic packet is a frame that is most often sent as a broadcast and that contains anywhere within its payload 6 bytes of all 255 (FF FF FF FF FF FF in hexadecimal), followed by sixteen repetitions of the target computer's 48-bit MAC address, for a total of 102 bytes.
Since the magic packet is only scanned for the string above, and not actually parsed by a full protocol stack, it could be sent as payload of any network- and transport-layer protocol, although it is typically sent as a UDP datagram to port 0 (reserved port number), 7 (Echo Protocol) or 9 (Discard Protocol), or directly over Ethernet as EtherType 0x0842. A connection-oriented transport-layer protocol like TCP is less suited for this task as it requires establishing an active connection before sending user data.
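As a minimal sketch (assuming a placeholder MAC address and the conventional UDP port 9), a magic packet can be built and broadcast in a few lines of Python:

import socket

mac = "00:11:22:33:44:55"                      # placeholder target MAC address
mac_bytes = bytes.fromhex(mac.replace(":", ""))
packet = b"\xff" * 6 + mac_bytes * 16          # 6 x 0xFF + 16 repetitions = 102 bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(packet, ("255.255.255.255", 9))    # send as a UDP broadcast datagram
sock.close()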
A standard magic packet has the following basic limitations:
Requires destination computer MAC address (also may require a SecureOn password)
Does not provide a delivery confirmation
May not work outside of the local network
Requires hardware support of Wake-on-LAN on destination computer
Most 802.11 wireless interfaces do not maintain a link in low power states and cannot receive a magic packet
The Wake-on-LAN implementation is designed to be very simple and to be quickly processed by the circuitry present on the network interface card with minimal power requirement. Because Wake-on-LAN operates below the IP protocol layer, IP addresses and DNS names are meaningless and so the MAC address is required.
Subnet directed broadcasts
A principal limitation of standard broadcast wake-on-LAN is that broadcast packets are generally not routed. This prevents the technique being used in larger networks or over the Internet. Subnet directed broadcasts (SDB) may be used to overcome this limitation. SDB may require changes to intermediate router configuration. Subnet directed broadcasts are treated like unicast network packets until processed by the final (local) router. This router then broadcasts the packet using layer 2 broadcast. This technique allows a broadcast to be initiated on a remote network but requires all intervening routers to forward the SDB. When preparing a network to forward SDB packets, care must be taken to filter packets so that only desired (e.g. WoL) SDB packets are permitted; otherwise the network may become a participant in DDoS attacks such as the Smurf attack.
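For illustration, the directed broadcast address of a remote subnet (here the documentation-only prefix 192.0.2.0/24) can be computed with Python's ipaddress module; the magic packet is then sent as an ordinary routed datagram to that address:

import ipaddress

subnet = ipaddress.ip_network("192.0.2.0/24")
print(subnet.broadcast_address)    # 192.0.2.255, the subnet directed broadcast address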
Troubleshooting magic packets
Wake-on-LAN can be a difficult technology to implement, because it requires appropriate BIOS/UEFI, network card and, sometimes, operating system and router support to function reliably. In some cases, hardware may wake from one low power state but not from others. This means that due to hardware issues the computer may be waking up from the "soft off state" (S5) but doesn't wake from sleep or hibernation or vice versa. Also, it is not always clear what kind of magic packet a NIC expects to see.
In that case, software tools like a packet analyzer can help with Wake-on-LAN troubleshooting as they allow confirming (while the PC is still on) that the magic packet is indeed visible to a particular computer's NIC. The same magic packet can then be used to find out if the computer powers up from an offline state. This allows networking issues to be isolated from other hardware issues. In some cases they also confirm that the packet was destined for a specific PC or sent to a broadcast address and they can additionally show the packet's internals.
Starting with Windows Vista, the operating system logs all wake sources in the "System" event log. The Event Viewer and the powercfg.exe /lastwake command can retrieve them.
Security considerations
Unauthorized access
Magic packets are sent via the data link or OSI-2 layer, which can be used or abused by anyone on the same LAN, unless the L2 LAN equipment is capable of (and configured for) filtering such traffic to match site-wide security requirements.
Firewalls may be used to prevent clients among the public WAN from accessing the broadcast addresses of inside LAN segments, or routers may be configured to ignore subnet-directed broadcasts (see above).
Certain NICs support a security feature called "SecureOn". It allows users to store within the NIC a hexadecimal password of 6 bytes. Clients have to append this password to the magic packet. The NIC wakes the system only if the MAC address and password are correct. This security measure significantly decreases the risk of successful brute force attacks, by increasing the search space by 48 bits (6 bytes), up to 2^96 combinations if the MAC address is entirely unknown. However any network eavesdropping will expose the cleartext password. Still, only a few NIC and router manufacturers support such security features.
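Where SecureOn is supported, the sender simply appends the 6-byte password to the standard 102-byte payload; continuing the earlier sketch (with a made-up password value):

secureon = bytes.fromhex("aabbccddeeff")   # hypothetical 6-byte SecureOn password
packet_with_password = packet + secureon   # packet built as in the earlier sketch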
Abuse of the Wake-on-LAN feature only allows computers to be switched on; it does not in itself bypass password and other forms of security, and is unable to power off the machine once on. However, many client computers attempt booting from a PXE server when powered up by WoL. Therefore, a combination of DHCP and PXE servers on the network can sometimes be used to start a computer with an attacker's boot image, bypassing any security of the installed operating system and granting access to unprotected, local disks over the network.
Interactions with network access control
The use of Wake-on-LAN technology on enterprise networks can sometimes conflict with network access control solutions such as 802.1X or MAC-based authentication, which may prevent magic packet delivery if a machine's WoL hardware has not been designed to maintain a live authentication session while in a sleep state. Configuration of these two features in tandem often requires tuning of timing parameters and thorough testing.
Data privacy
Some PCs include technology built into the chipset to improve security for Wake-on-LAN. For example, Intel AMT (a component of Intel vPro technology), includes Transport Layer Security (TLS), an industry-standard protocol that strengthens encryption.
AMT uses TLS encryption to secure an out-of-band communication tunnel to an AMT-based PC for remote management commands such as Wake-on-LAN. AMT secures the communication tunnel with Advanced Encryption Standard (AES) 128-bit encryption and RSA keys with modulus lengths of 2,048 bits. Because the encrypted communication is out-of-band, the PC's hardware and firmware receive the magic packet before network traffic reaches the software stack for the operating system (OS). Since the encrypted communication occurs "below" the OS level, it is less vulnerable to attacks by viruses, worms, and other threats that typically target the OS level.
IT shops using Wake-on-LAN through the Intel AMT implementation can wake an AMT PC over network environments that require TLS-based security, such as IEEE 802.1X, Cisco Self Defending Network (SDN), and Microsoft Network Access Protection (NAP) environments. The Intel implementation also works for wireless networks.
Hardware requirements
Wake-on-LAN support is implemented on the motherboard of a computer and the network interface card, and is consequently not dependent on the operating system running on the hardware. Some operating systems can control Wake-on-LAN behaviour via NIC drivers. With older motherboards, if the network interface is a plug-in card rather than being integrated into the motherboard, the card may need to be connected to the motherboard by an additional cable. Motherboards with an embedded Ethernet controller which supports Wake-on-LAN do not need a cable. The power supply must meet ATX 2.01 specifications.
Hardware implementations
Older motherboards must have a WAKEUP-LINK header onboard connected to the network card via a special 3-pin cable; however, systems supporting the PCI 2.2 standard and with a PCI 2.2 compliant network adapter card do not usually require a Wake-on-LAN cable as the required standby power is relayed through the PCI bus.
PCI version 2.2 supports PME (Power Management Events). PCI cards send and receive PME signals via the PCI socket directly, without the need for a Wake-on-LAN cable.
Wake-on-LAN usually needs to be enabled in the Power Management section of a PC motherboard's BIOS/UEFI setup utility, although on some systems, such as Apple computers, it is enabled by default. On older systems the BIOS/UEFI setting may be referred to as WoL; on newer systems supporting PCI version 2.2, it may be referred to as PME (Power Management Events, which include WoL). It may also be necessary to configure the computer to reserve standby power for the network card when the system is shut down.
In addition, in order to get Wake-on-LAN to work, enabling this feature on the network interface card or on-board silicon is sometimes required. Details of how to do this depend upon the operating system and the device driver.
Laptops powered by the Intel Centrino Processor Technology or newer (with explicit BIOS/UEFI support) allow waking up the machine using wireless Wake on Wireless LAN (WoWLAN).
In most modern PCs, ACPI is notified of the "waking up" and takes control of the power-up. In ACPI, OSPM must record the "wake source", or the device that is causing the power-up; the device may be the "Soft" power switch, the NIC (via Wake-on-LAN), the cover being opened, a temperature change, etc.
The 3-pin WoL interface on the motherboard consists of pin-1 +5V DC (red), pin-2 Ground (black), and pin-3 the wake signal (green or yellow). By supplying the pin-3 wake signal with +5V DC the computer will be triggered to power up, provided WoL is enabled in the BIOS/UEFI configuration.
Software requirements
Software which sends a WoL magic packet is referred to in different circles as both a "client" and a "server", which can be a source of confusion. While WoL hardware or firmware is arguably performing the role of a "server", Web-based interfaces which act as a gateway through which users can issue WoL packets without downloading a local client often become known as "The Wake On LAN Server" to users. Additionally, software that administers WoL capabilities from the host OS side may be carelessly referred to as a "client" on occasion, and of course, machines running WoL generally tend to be end-user desktops, and as such, are "clients" in modern IT parlance.
Creating and sending the magic packet
Sending a magic packet requires knowledge of the target computer's MAC address. Software to send WoL magic packets is available for all modern platforms, including Windows, Macintosh and Linux, plus many smartphones. Examples include: Wake On LAN GUI, LAN Helper, Magic Packet Utility, NetWaker for Windows, Nirsoft WakeMeOnLAN, WakeOnLANx, EMCO WOL, Aquila Tech Wake on LAN, ManageEngine WOL utility, FusionFenix and SolarWinds WOL Tool. There are also web sites that allow a Magic Packet to be sent online without charge. Example source code for a developer to add Wake-on-LAN to a program is readily available in many computer languages.
Ensuring the magic packet travels from source to destination
If the sender is on the same subnet (local network, aka LAN) as the computer to be awakened there are generally no issues. When sending over the Internet, and in particular where a NAT (Network Address Translator) router, as typically deployed in most homes, is involved, special settings often need to be set. For example, in the router, the computer to be controlled needs to have a dedicated IP address assigned (aka a DHCP reservation). Also, since the controlled computer will be "sleeping" except for some electricity on to part of its LAN card, typically it will not be registered at the router as having an active IP lease.
Further, the WoL protocol operates on a "deeper level" in the multi-layer networking architecture. To ensure the magic packet gets from source to destination while the destination is sleeping, the ARP Binding (also known as IP & MAC binding) must typically be set in a NAT router. This allows the router to forward the magic packet to the sleeping computer's MAC adapter at a networking layer below typical IP usage. In the NAT router, ARP binding requires just a dedicated IP number and the MAC address of the destination computer. There are some security implications associated with ARP binding (see ARP spoofing); however, as long as none of the computers connected to the LAN are compromised, an attacker must use a computer that is connected directly to the target LAN (plugged into the LAN via cable, or by breaking through the Wi‑Fi connection security to gain access to the LAN).
Most home routers are able to send magic packets to the LAN; for example, routers with the DD-WRT, Tomato or PfSense firmware have a built-in Wake-on-LAN client. OpenWrt supports both Linux implementations for WoL, etherwake and WoLs.
Responding to the magic packet
Most WoL hardware functionality is typically blocked by default and needs to be enabled using the system BIOS/UEFI. Further configuration from the OS is required in some cases, for example via the Device Manager network card properties on Windows operating systems.
Microsoft Windows
Newer versions of Microsoft Windows integrate WoL functionality into the Device Manager. This is available in the Power Management tab of each network device's driver properties. For full support of a device's WoL capabilities (such as the ability to wake from an ACPI S5 power off state), installation of the full driver suite from the network device manufacturer may be necessary, rather than the bare driver provided by Microsoft or the computer manufacturer. In most cases correct BIOS/UEFI configuration is also required for WoL to function.
The ability to wake from a hybrid shutdown state (S4) (aka Fast Startup) or a soft powered-off state (S5) is unsupported in Windows 8 and above, and Windows Server 2012 and above. This is because of a change in the OS behavior which causes network adapters to be explicitly not armed for WoL when shutdown to these states occurs. WOL from a non-hybrid hibernation state (S4) (i.e. when a user explicitly requests hibernation) or a sleep state (S3) is supported. However, some hardware will enable WoL from states that are unsupported by Windows.
Mac hardware (OS X)
Modern Mac hardware supports WoL functionality when the computer is in a sleep state, but it is not possible to wake up a Mac computer from a powered-off state.
The feature is controlled via the OS X System Preferences Energy Saver panel, in the Options tab. Marking the Wake for network access checkbox enables Wake-on-LAN.
Apple's Apple Remote Desktop client management system can be used to send Wake-on-LAN packets, but there are also freeware and shareware Mac OS X applications available.
On Mac OS X Snow Leopard and later, the service is called Wake on Demand or Bonjour Sleep Proxy and is synonymous with the Sleep Proxy Service. It comes enabled out of the box, but in previous versions of the operating system, the service needs to be enabled under the Energy Saver pane of System Preferences. The network interface card may allow the service to function only on Wi‑Fi, only on Ethernet, or both.
Linux
Wake-on-LAN support may be changed using a subfunction of the ethtool command.
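For example, running ethtool eth0 reports the interface's supported and current Wake-on settings, and ethtool -s eth0 wol g enables waking on a magic packet (the "g" flag); the setting may need to be reapplied at boot, depending on the distribution.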
Other machine states and LAN wakeup signals
In the early days of Wake-on-LAN the situation was relatively simple: a machine was connected to power but switched off, and it was arranged that a special packet be sent to switch the machine on.
Since then many options have been added and standards agreed upon. A machine can be in seven power states from S0 (fully on) through S5 (powered down but plugged in) and disconnected from power (G3, Mechanical Off), with names such as "sleep", "standby", and "hibernate". In some reduced-power modes the system state is stored in RAM and the machine can wake up very quickly; in others the state is saved to disk and the motherboard powered down, taking at least several seconds to wake up. The machine can be awakened from a reduced-power state by a variety of signals.
The machine's BIOS/UEFI must be set to allow Wake-on-LAN. To allow wakeup from powered-down state S5, wakeup on PME (Power Management Event) is also required. The Intel adapter allows "Wake on Directed Packet", "Wake on Magic Packet", "Wake on Magic Packet from power off state", and "Wake on Link". Wake on Directed Packet is particularly useful as the machine will automatically come out of standby or hibernation when it is referenced, without the user or application needing to explicitly send a magic packet. Unfortunately in many networks waking on directed packet (any packet with the adapter's MAC address or IP address) or on link is likely to cause wakeup immediately after going to a low-power state. Details for any particular motherboard and network adapter are to be found in the relevant manuals; there is no general method. Knowledge of signals on the network may also be needed to prevent spurious wakening.
Unattended operation
For a machine which is normally unattended, precautions need to be taken to make the Wake-on-LAN function as reliable as possible. For a machine procured to work in this way, Wake-on-LAN functionality is an important part of the purchase procedure.
Some machines do not support Wake-on-LAN after they have been disconnected from power (e.g., when power is restored after a power failure). Use of an uninterruptible power supply (UPS) will give protection against a short period without power, although the battery will discharge during a prolonged power-cut.
Waking up without operator presence
If a machine that is not designed to support Wake-on-LAN is left powered down after power failure, it may be possible to set the BIOS/UEFI to start it up automatically on restoration of power, so that it is never left in an unresponsive state. A typical BIOS/UEFI setting is AC back function which may be on, off, or memory. On is the correct setting in this case; memory, which restores the machine to the state it was in when power was lost, may leave a machine which was hibernating in an unwakeable state.
Other problems can affect the ability to start or control the machine remotely: hardware failure of the machine or network, failure of the BIOS/UEFI settings battery (the machine will halt when started before the network connection is made, displaying an error message and requiring a keypress), loss of control of the machine due to software problems (machine hang, termination of remote control or networking software, etc.), and virus infection or hard disk corruption. Therefore, the use of a reliable server-class machine with RAID drives, redundant power supplies, etc., will help to maximize availability. Additionally, a device which can switch the machine off and on again, controlled perhaps by a remote signal, can force a reboot which will clear problems due to misbehaving software.
For a machine not in constant use, energy can be conserved by putting the machine into low-power RAM standby after a short timeout period. If a connection delay of a minute or two is acceptable, the machine can timeout into hibernation, powered off with its state saved to disk.
Wake on Internet
The originator of the wakeup signal (magic packet) does not have to be on the same local area network (LAN) as the computer being woken. It can be sent from anywhere using:
A virtual private network (VPN), which makes the originator appear to be a member of the LAN.
The Internet with local broadcasting: some routers permit a packet received from the Internet to be broadcast to the entire LAN; the default TCP or UDP ports preconfigured to relay WoL requests are usually ports 7 (Echo Protocol), 9 (Discard Protocol), or both. This proxy setting must be enabled in the router, and port forwarding rules may need to be configured in its embedded firewall in order to accept magic packets coming from the internet side to these restricted port numbers, and to allow rebroadcasting them on the local network (normally to the same ports and the same TCP or UDP protocol). Such routers may also be configurable to use different port numbers for this proxying service.
The Internet without local broadcasting: if (as is often the case) the firewall or router at the destination does not permit packets received from the Internet to be broadcast to the local network, Wake-on-Internet may still be achieved by sending the magic packet to any specified port of the destination's Internet address, having previously set the firewall or router to forward packets arriving at that port to the local IP address of the computer being woken. The router may require reservation of the local IP address of the computer being woken in order to forward packets to it when it is not live.
See also
Alert on LAN
Alert Standard Format
Desktop and mobile Architecture for System Hardware
RTC Alarm
Wake-on-Ring (telephone line ring event)
Conventional PCI pinout, which includes the Power Management Event (PME#) signal
Wired for Management
References
Computer-related introductions in 1997
Networking standards
BIOS
Unified Extensible Firmware Interface
Remote control
Ethernet
Xyzzy (computing)
In computing, Xyzzy is sometimes used as a metasyntactic variable or as a video game cheat code. Xyzzy comes from the Colossal Cave Adventure computer game, where it is the first "magic word" that most players encounter (others include "plugh" and "plover").
Origin
Modern usage is primarily from one of the earliest computer games, Colossal Cave Adventure, in which the idea is to explore a cave with many rooms, collecting the treasures found there. By typing "xyzzy" at the appropriate time, the player could move instantly between two otherwise distant points. As Colossal Cave Adventure was both one of the first adventure games and one of the first interactive fiction pieces, hundreds of later interactive fiction games included responses to the command "xyzzy" in tribute.
The origin of the word "xyzzy" has been the subject of debate. According to Ron Hunsinger, the sequence of letters "XYZZY" has been used as a mnemonic to remember the process for computing cross products. Will Crowther, the author of Colossal Cave Adventure, states that he was unaware of the mnemonic, and that he "made it up from whole cloth" when writing the game.
Usage
Operating systems
Xyzzy has been implemented as an undocumented no-op command on several operating systems; in the 16-bit version of Data General's AOS, for example, it would typically respond "Nothing happens", just as the game did if the magic was invoked at the wrong spot or before a player had performed the action that enabled the word. The 32-bit version, AOS/VS, would respond "Twice as much happens". On several computer systems from Sun Microsystems, the command "xyzzy" is used to enter the interactive shell of the U-Boot bootloader. Early versions of Zenith Z-DOS (a re-branded variant of MS-DOS 1.25) had the command "xyzzy" which took a parameter of "on" or "off". Xyzzy by itself would print the status of the last "xyzzy on" or "xyzzy off" command.
When booting a Cr-48 in developer mode, while the screen displays the "sad laptop" image, typing xyzzy produces a joke Blue Screen of Death.
According to Brantley Coile, the Cisco PIX firewall had a xyzzy command that simply said "Nothing happens." He also put the command into the Coraid VSX to escape the CLI and get into the shell; it would announce "Foof! You are in a directory. There are files here." The new California Coraid management made the developers change the string to "/exportmode" and get rid of the "Foof!" message. Since regaining ownership of the Coraid software, the command is being returned to the system and now, in VSX release 8, the response is ">>Foof!<< You are in a debris room."
Application programs
Within the low-traffic Usenet newsgroup alt.xyzzy, the word is used for test messages, to which other readers (if there are any) customarily respond, "Nothing happens" as a note that the test message was successfully received. In the Internet Relay Chat client mIRC and Pidgin, entering the undocumented command "/xyzzy" will display the response "Nothing happens". The string "xyzzy" is also used internally by mIRC as the hard-coded master encryption key that is used to decrypt over 20 sensitive strings from within the mirc.exe program file.
A "deluxe chatting program" for DIGITAL's VAX/VMS written by David Bolen in 1987 and distributed via BITNET took the name xyzzy. It enabled users on the same system or on linked DECnet nodes to communicate via text in real time. There was a compatible program with the same name for IBM's VM/CMS.
xYzZY is used as the default boundary marker by the Perl HTTP::Message module for multipart MIME messages, and was used in Apple's AtEase for workgroups as the default administrator password in the 1990s.
Gmail supports the command XYZZY when connected via IMAP before logging in. It takes no arguments, and responds with "OK Nothing happens."
The Hewlett-Packard 9836A computer with HPL 2.0 programming language has XYZZY built into the HPL language itself with the result of "I see no cave here." when used. The same message is returned from HP 3458A and HP 3245A instruments when queried with XYZZY via the HPIB bus.
In most versions of the Ingres dbms, "select xyzzy('')" returns "Nothing happens." However, "select xyzzy('wim')" returns "Nothing happens to Wim". The xyzzy() function has been part of the Ingres product since at least version 5 (late 1980s), but was removed from the main codeline sometime in the early 2000s. While talking to one of the members of the Ingres development team, Wim de Boer, at that time the secretary of the Ingres Users Group Nederland (IUGN), mentioned the removal of this Easter egg. This developer, who was a frequent speaker on the events organised by the IUGN, somehow managed to put the function back into the product and—especially for Wim—added handling for the 'wim' value of the parameter.
Other computer games and media
The popular Minesweeper game under older versions of Microsoft Windows had a cheat mode triggered by entering the command xyzzy, then pressing the key sequence shift and then enter, which turned a single pixel in the top-left corner of the entire screen into a small black or white dot depending on whether or not the mouse pointer is over a mine. This easter egg was present in all Windows versions through Windows XP Service Pack 3, but under Windows 95, 98 and NT 4.0 the pixel was visible only if the standard Explorer desktop was not running. The easter egg does not exist in versions after Windows XP SP3.
In the game Zork, typing xyzzy and pressing enter produces the response: "A hollow voice says 'Fool.'" The command commonly produces a humorous response in other Infocom games and text adventures, leading to its usage in the title of the interactive fiction competition, the XYZZY Awards.
In Hugo's House of Horrors, typing xyzzy gives the message "We are getting desperate, aren't we!".
In Dungeons and Dragons Online, Xy'zzy is the nigh-invulnerable raid boss in the Hound of Xoriat adventure.
In the PC version of the popular Electronic Arts game Road Rash, the cheat mode is enabled by typing the key string "xyzzy" in the middle of the race.
In Primordia, one is able to get a bonus short scene featuring a shout-out to Colossal Cave Adventure as a form of non-playable text-adventure, which is accessible by typing 'xyzzy' in Memorious's data-kiosk.
In the video game Deus Ex, protagonist JC Denton is trying to make contact with the Mole People, and when their representative, Curly, prompts for a password to reveal the Mole People's hideout, Denton tries "xyzzy" if he has not already obtained the password. Curly denies this attempt, as one would expect.
Andrew Sega released an album under the name XYZZY.
In Minecraft, xyzzy is one of the random words that appear under the enchantments on an enchanting table (written in the Standard Galactic Alphabet).
References
Cheating in video games
Magic words
Mnemonics
Words originating in fiction
Interactive fiction
2000s
The 2000s (pronounced "two-thousands"; shortened to the 00s and known as the aughts or noughties) was a decade of the Gregorian calendar that began on January 1, 2000, and ended on December 31, 2009.
The early part of the decade saw the long predicted breakthrough of economic giant China, which had double-digit growth during nearly the whole decade. To a lesser extent, India also benefited from an economic boom, which saw the two most populous countries becoming an increasingly dominant economic force. The rapid catching-up of emerging economies with developed countries sparked some protectionist tensions during the period and was partly responsible for an increase in energy and food prices at the end of the decade. The economic developments in the latter third of the decade were dominated by a worldwide economic downturn, which started with the crisis in housing and credit in the United States in late 2007 and led to the bankruptcy of major banks and other financial institutions. The outbreak of this global financial crisis sparked a global recession, beginning in the United States and affecting most of the industrialized world.
The growth of the Internet contributed to globalization during the decade, which allowed faster communication among people around the world; social networking sites arose as a new way for people to stay in touch from distant locations, as long as they had an internet connection. The first social networking sites were Friendster, Myspace, Facebook, and Twitter, established in 2002, 2003, 2004, and 2006, respectively. Myspace was the most popular social networking website until June 2009, when Facebook overtook it in number of American users. E-mail continued to be popular throughout the decade and began to replace "snail mail" as the primary way of sending letters and other messages to people in distant locations, though it had existed since 1971.
The War on Terror and War in Afghanistan began after the September 11 attacks in 2001. The International Criminal Court was formed in 2002. In 2003, a United States-led coalition invaded Iraq, and the Iraq War led to the end of Saddam Hussein's rule as Iraqi President and the Ba'ath Party in Iraq. Al-Qaeda and affiliated Islamist militant groups performed terrorist acts throughout the decade. The Second Congo War, the deadliest conflict since World War II, ended in July 2003. Further wars that ended included the Algerian Civil War, the Angolan Civil War, the Sierra Leone Civil War, the Second Liberian Civil War, the Nepalese Civil War, and the Sri Lankan Civil War. Wars that began included the conflict in the Niger Delta, the Houthi insurgency in Yemen, and the Mexican Drug War.
Climate change and global warming became common concerns in the 2000s. Prediction tools made significant progress during the decade, UN-sponsored organizations such as the IPCC gained influence, and studies such as the Stern report influenced public support for paying the political and economic costs of countering climate change. The global temperature kept climbing during the decade. In December 2009, the World Meteorological Organization (WMO) announced that the 2000s may have been the warmest decade since records began in 1850, with four of the five warmest years since 1850 having occurred in this decade. The WMO's findings were later echoed by NASA and NOAA.
Usage of computer-generated imagery became more widespread in films produced during the 2000s, especially with the success of 2001's Shrek. Anime films gained more exposure outside Japan with the release of Spirited Away. December 2009's Avatar became the highest-grossing film of all time. Documentary and mockumentary films, such as March of the Penguins, Super Size Me, Borat and Surf's Up, were popular in the 2000s. 2004's Fahrenheit 9/11 by Michael Moore became the highest-grossing documentary of all time. Online films became popular, and conversion to digital cinema started. Video game consoles released in this decade included the PlayStation 2, the Xbox, the GameCube, the Wii, the PlayStation 3 and the Xbox 360, while portable video game consoles included the Game Boy Advance, Nintendo DS and PlayStation Portable. Wii Sports was the decade's best-selling console video game, while New Super Mario Bros. was the decade's best-selling portable video game. J. K. Rowling was the best-selling author of the decade overall thanks to the Harry Potter book series, although the decade's best-selling single book was The Da Vinci Code. Eminem was named the music artist of the decade by Billboard.
Name for the decade
Orthographically, the decade can be written as the "2000s" or the "00s". In the English-speaking world, a name for the decade was not immediately accepted as it had been for other decades ('80s, '90s), but usage eventually settled on "aughts" (US) or "noughties" (UK). Other possibilities included "two-thousands", "ohs", "oh ohs", "double ohs", "zeros", and "double zeros". The years of the decade can be referred to as '01, '02, etc., pronounced oh-one, oh-two, etc.
Demographics
For the period after World War II, demographic data of reasonable accuracy become available for a significant number of countries, and population estimates are often given as grand totals of country-level figures of widely diverging accuracy.
Taking these numbers at face value would be false precision; despite being stated to four, seven or even ten digits, they should not be interpreted as accurate to more than three digits at best (estimates by the United States Census Bureau and by the United Nations differ by about 0.5–1.5%).
Politics and wars
The War on Terror and War in Afghanistan began after the September 11 attacks in 2001. The International Criminal Court was formed in 2002. In 2003 a United States-led coalition invaded Iraq, and the Iraq War led to the end of Saddam Hussein's rule as Iraqi President and the Ba'ath Party in Iraq. Al-Qaeda and affiliated Islamist militant groups performed terrorist acts throughout the decade. These acts included the 2004 Madrid train bombings, 7/7 London bombings in 2005, and the Mumbai attacks related to al-Qaeda in 2008. The European Union expanded its sanctions amid Iran's failure to comply with its transparency obligations under the Nuclear Non-Proliferation Treaty and United Nations resolutions.
The War on Terror generated extreme controversy around the world, with questions regarding the justification for certain U.S. actions leading to a loss of support for the American government, both in and outside the United States. Additional armed conflict occurred in the Middle East, including between Israel and Hezbollah, and later between Israel and Hamas. The most significant loss of life due to natural disasters came from the 2004 Indian Ocean earthquake, which caused a tsunami that killed around a quarter of a million people and displaced well over a million others.
Terrorist attacks
The most prominent terrorist attacks committed against the civilian population during the decade include:
September 11 attacks in New York City; Washington, D.C.; and Shanksville, Pennsylvania
2001 anthrax attacks in the United States
2002 Bali bombings in Bali, Indonesia
2003 Istanbul bombings in Istanbul, Turkey
2004 Madrid train bombings
2004 Beslan school hostage crisis
2005 London bombings
2007 Yazidi communities bombings
2008 Mumbai attacks
Wars
The most prominent armed conflicts of the decade include:
International wars
War on Terror (2001–present) – refers to several ideological, military, and diplomatic campaigns aimed at putting an end to international terrorism by preventing groups defined by the U.S. and its allies as terrorist (mostly Islamist groups such as al-Qaeda, Hezbollah, and Hamas) from posing a threat to the U.S. and its allies, and by putting an end to state sponsorship of terrorism. The campaigns were launched by the United States, with support from NATO and other allies, following the September 11 attacks that were carried out by al-Qaeda. Today the term has become mostly associated with Bush administration-led wars in Afghanistan and Iraq.
War in Afghanistan (2001–2021) – In 2001, the United States, the United Kingdom, Italy, Spain, Canada, and Australia invaded Afghanistan seeking to oust the Taliban and find al-Qaeda mastermind Osama bin Laden. In 2011, the US government announced that Navy SEALs had killed bin Laden and buried his body at sea. Fatalities of coalition troops: 1,553 (2001 to 2009).
Iraq War (2003–2011) – In 2003, the United States, the United Kingdom, Spain, Australia, and Poland invaded and occupied Iraq. Claims that Iraq had weapons of mass destruction at its disposal were later found to be unproven. The war, which ended the rule of Saddam Hussein's Ba'ath Party, also led to violence against the coalition forces and between many Sunni and Shia Iraqi groups, as well as al-Qaeda operations in Iraq. Casualties of the Iraq War: approximately 110,600 between March 2003 and April 2009. Hussein was eventually sentenced to death and hanged on December 30, 2006.
Arab–Israeli conflict (1948 – present)
2006 Lebanon War (summer 2006) – took place in southern Lebanon and northern Israel. The principal parties were Hezbollah paramilitary forces and the Israeli military. The war, which began as a military operation in response to the abduction of two Israeli reserve soldiers by Hezbollah, gradually escalated into a wider confrontation.
Israeli–Palestinian conflict (Early 20th century – present)
Second Intifada (2000–2005) – After the signing of the Oslo Accords failed to bring about a Palestinian state, the Second Intifada (uprising) broke out in September 2000, a period of intensified Palestinian–Israeli violence. As a result of the significant increase in suicide bombing attacks within Israeli population centers during the first years of the Al-Aqsa Intifada, in June 2002 Israel began the construction of the West Bank Fence along the Green Line, arguing that the barrier was necessary to protect Israeli civilians from Palestinian terrorism. The significantly reduced number of suicide bombings from 2002 to 2005 has been partly attributed to the barrier. The barrier's construction, which has been highly controversial, became a significant issue of contention between the two sides. The Second Intifada caused thousands of casualties on both sides, among combatants and civilians alike; the death toll, including both military and civilian, is estimated at 5,500 Palestinians and over 1,000 Israelis, as well as 64 foreign citizens. Many Palestinians consider the Second Intifada to be a legitimate war of national liberation against foreign occupation, whereas many Israelis consider it to be a terrorist campaign.
2008–2009 Israel–Gaza conflict – the frequent Hamas Qassam rocket and mortar fire launched from within civilian population centers in Gaza towards the Israeli southern civilian communities led to an Israeli military operation in Gaza, which had the stated aim of reducing the Hamas rocket attacks and stopping the arms smuggling into the Gaza Strip. Throughout the conflict, Hamas further intensified its rocket and mortar attacks against Israel, hitting civilian targets and reaching major Israeli cities Beersheba and Ashdod, for the first time. The intense urban warfare in densely populated Gaza combined with the use of massive firepower by the Israeli side and the intensified Hamas rocket attacks towards populated Israeli civilian targets led to a high toll on the Palestinian side and among civilians.
The Second Congo War (1998–2003) – took place mainly in the Democratic Republic of the Congo. The widest interstate war in modern African history, it directly involved nine African nations, as well as about twenty armed groups. It earned the epithet of "Africa's World War" and the "Great War of Africa." An estimated 3.8 million people died, mostly from starvation and disease brought about by the deadliest conflict since World War II. Millions more were displaced from their homes or sought asylum in neighboring countries.
2008 South Ossetia war – Russia invaded Georgia in response to a Georgian attack on South Ossetia and what Russia described as aggression towards civilians. Both Russia and Georgia were condemned internationally for their actions.
The Second Chechen War (1999–2000) – the war was launched by the Russian Federation on August 26, 1999, in response to the invasion of Dagestan and the Russian apartment bombings, which were blamed on the Chechens. During the war, Russian forces largely recaptured the separatist region of Chechnya. The campaign largely reversed the outcome of the First Chechen War, in which the region gained de facto independence as the Chechen Republic of Ichkeria.
The Eritrean–Ethiopian War came to a close in 2000.
Kivu conflict (2004–2009) – armed conflict between the military of the Democratic Republic of the Congo (FARDC) and the Hutu Power group Democratic Forces for the Liberation of Rwanda (FDLR).
2009 Nigerian sectarian violence – an armed conflict between Boko Haram, a militant Islamist group, and Nigerian security forces.
Civil wars and guerrilla wars
War in Darfur (2003–2009) – an armed conflict in the Darfur region of western Sudan. The conflict began when the Sudan Liberation Movement/Army (SLM/A) and Justice and Equality Movement (JEM) in Darfur took up arms, accusing the government of oppressing black Africans in favor of Arabs. One side was composed mainly of the Sudanese military and the Sudanese militia group Janjaweed, recruited mostly from the Afro-Arab Abbala tribes of the northern Rizeigat region in Sudan. The other side was made up of rebel groups, notably the Sudan Liberation Movement/Army and the Justice and Equality Movement, recruited primarily from the non-Arab Muslim Fur, Zaghawa, and Masalit ethnic groups. Millions of people were displaced from their homes during the conflict. There are various estimates on the number of human casualties – Sudanese authorities claim a death toll of roughly 19,500 civilians while certain non-governmental organizations, such as the Coalition for International Justice, claim that over 400,000 people have been killed during the conflict. Former U.S. President George W. Bush called the events in Darfur a genocide during his presidency. The United States Congress unanimously passed House Concurrent Resolution 467, which declared the situation in Darfur a state-sponsored genocide by the Janjaweed. In 2008, the International Criminal Court charged Omar al-Bashir with genocide for his role in the War in Darfur.
Mexican Drug War (2006–present) – an armed conflict fought between rival drug cartels and government forces in Mexico. Although Mexican drug cartels, or drug trafficking organizations, have existed for several decades, they have become more powerful since the demise of Colombia's Cali and Medellín cartels in the 1990s. Mexican drug cartels now dominate the wholesale illicit drug market in the United States. Arrests of key cartel leaders, particularly in the Tijuana and Gulf cartels, have led to increasing drug violence as cartels fight for control of the trafficking routes into the United States. In total, more than 16,851 people were killed between December 2006 and November 2009.
In India, the Naxalite–Maoist insurgency (1967–present) grew alarmingly, with attacks such as the April 2010 Maoist attack in Dantewada, the Jnaneswari Express train derailment, and the Rafiganj train disaster. Naxalites are a group of far-left radical communists, supportive of Maoist political sentiment and ideology, and theirs is presently the longest continuously active conflict worldwide. In 2006, Prime Minister Manmohan Singh called the Naxalites "the single biggest internal security challenge ever faced by our country"; in 2009, he said the country was "losing the battle against Maoist rebels". The Naxalite–Maoist insurgency is an ongoing conflict between Maoist groups, known as Naxalites or Naxals, and the Indian government. On April 6, 2010, Maoist rebels killed 75 security personnel in a jungle ambush in central India, the worst-ever massacre of security forces by the insurgents. On the same day, Gopal, a top Maoist leader, said the attack was a "direct consequence" of the government's Operation Green Hunt offensive. This prompted calls for the use of the Indian Air Force against the Naxalites, which were declined on the grounds that "we can't use oppressive force against our own people".
The Colombian armed conflict continued to cause deaths and terror in Colombia. The FARC and ELN narcoterrorist groups, active since 1964, had taken control of rural areas of the country by the beginning of the decade, while terrorist paramilitaries grew in other places as businesspeople and politicians thought the State would lose the war against the guerrillas. However, after the failure of the peace process and the activation of Plan Colombia, Álvaro Uribe Vélez was elected president in 2002, starting a massive offensive against terrorist groups with cooperation from the civilian population, foreign aid and the legal armed forces. The AUC paramilitary organization disbanded in 2006, while the ELN guerrillas were weakened. The Popular Liberation Army demobilized, while the country's biggest terrorist group, the FARC, was weakened, and most of its top commanders were killed or died during the decade. During the second half of the decade, a new criminal band calling itself the Aguilas Negras was formed by former members of the AUC who did not demobilize. Although the Colombian State has taken back control over most of the country, narcoterrorism remains a source of violence. Since 2008, the Internet has become a new field of battle: Facebook gained nationwide popularity and became the birthplace of many civil movements against narcoterrorism, such as "Colombia Soy Yo" (I Am Colombia) and "Fundación Un Millón de Voces" (One Million Voices Foundation), which organized international protests against illegal armed groups in recent years.
The Sierra Leone Civil War (1991–2002) came to an end when the Revolutionary United Front (RUF) finally laid down their arms. More than two million people were displaced from their homes because of the conflict (well over one-third of the population) many of whom became refugees in neighboring countries. Tens of thousands were killed during the conflict.
The Sri Lankan Civil War (1983–2009) came to an end after the government defeated the Liberation Tigers of Tamil Eelam. Over 80,000 people were killed during the course of the conflict.
War in North-West Pakistan (2004–present) – an armed conflict between the Pakistani Armed Forces and Islamic militants made up of local tribesmen, the Taliban, and foreign Mujahideen (Holy Warriors). It began in 2004 when tensions rooted in the Pakistani Army's search for al-Qaeda members in Pakistan's mountainous Waziristan area (in the Federally Administered Tribal Areas) escalated into armed resistance by local tribesmen. The violence has displaced 3.44 million civilians and led to more than 7,000 civilians being killed.
The Angolan Civil War (1975–2002), once a major proxy conflict of the Cold War, ended after the anti-Communist organization UNITA disbanded to become a political party. By the time the 27-year conflict was formally brought to an end, an estimated 500,000 people had been killed.
Shia insurgency in Yemen (2004–present) – a civil war in the Sada'a Governorate of Yemen. It began after the Shia Zaidiyyah sect launched an uprising against the Yemeni government. The Yemeni government has accused Iran of directing and financing the insurgency. Thousands of rebels and civilians have been killed during the conflict.
Somali Civil War (1991–present)
Somalia War (2006–2009) – involved largely Ethiopian and Somali Transitional Federal Government (TFG) forces who fought against the Somali Islamist umbrella group, the Islamic Court Union (ICU), and other affiliated militias for control of the country. The war spawned pirates who hijacked hundreds of ships off the coast of Somalia, holding ships and crew for ransom often for months (see also Piracy in Somalia). 1.9 million people were displaced from their homes during the conflict and the number of civilian casualties during the conflict is estimated at 16,724.
Somali Civil War (2009–present) – involved largely the forces of the Somali Transitional Federal Government (TFG), assisted by African Union peacekeeping troops, who fought against various militant Islamist factions for control of the country. The violence has displaced thousands of people residing in Mogadishu, the nation's capital. 1,739 people in total were killed between January 1, 2009, and January 1, 2010.
Conflict in the Niger Delta (2004–present) – an ongoing conflict in the Niger Delta region of Nigeria. The conflict arose from tensions between foreign oil corporations and a number of the Niger Delta's minority ethnic groups, particularly the Ogoni and the Ijaw, who felt they were being exploited. Competition for oil wealth has fueled a continuing cycle of violence between innumerable ethnic groups and has led to the militarization of nearly the entire region by militia groups as well as the Nigerian military and police forces.
Algerian Civil War (1991–2002) – the conflict effectively ended with a government victory, following the surrender of the Islamic Salvation Army and the 2002 defeat of the Armed Islamic Group. It is estimated that more than 100,000 people were killed during the course of the conflict.
Civil war in Chad (1998–present)
Chadian Civil War (1998–2002) – involved the Movement for Justice and Democracy in Chad (MDJT) rebels that skirmished periodically with government troops in the Tibesti region, resulting in hundreds of civilian, government, and rebel casualties.
Chadian Civil War (2005–2010) – involved Chadian government forces and several Chadian rebel groups. The Government of Chad estimated in January 2006 that 614 Chadian citizens had been killed in cross-border raids. The fighting still continues despite several attempts to reach agreements.
Nepalese Civil War (1996–2006) – the conflict ended when a peace agreement was reached between the government and the Maoist party, under which the Maoists would take part in the new government in return for surrendering their weapons to the UN. It is estimated that more than 12,700 people were killed during the course of the conflict.
Second Liberian Civil War (1999–2003) – The conflict began in 1999 when the rebel group Liberians United for Reconciliation and Democracy (LURD), with support from the Government of Guinea, launched an insurgency in northern Liberia. In early 2003, a different rebel group, the Movement for Democracy in Liberia, emerged in the south. As a result, by June–July 2003, president Charles Taylor's government controlled only a third of the country. The capital Monrovia was besieged by LURD, and that group's shelling of the city resulted in the deaths of many civilians. Thousands of people were displaced from their homes as a result of the conflict.
Insurgency in the Maghreb (2002–present) – Algeria has been the subject of an Islamic insurgency since 2002 waged by the Sunni Islamic Jihadist militant group Salafist Group for Preaching and Combat (GSPC). GSPC allied itself with the Al-Qaeda Organization in the Islamic Maghreb against the Algerian government. The conflict has since spread to other neighboring countries.
Ituri conflict (1999–2007) – a conflict fought between the Lendu and Hema ethnic groups in the Ituri region of northeastern Democratic Republic of Congo (DRC). While there have been many phases to the conflict, the most recent armed clashes ran from 1999 to 2003, with a low-level conflict continuing until 2007. More than 50,000 people have been killed in the conflict and hundreds of thousands forced from their homes.
Central African Republic Bush War (2004–2007) – began with the rebellion by the Union of Democratic Forces for Unity (UFDR) rebels, after the current president of the Central African Republic, François Bozizé, seized power in a 2003 coup. The violence has displaced around 10,000 civilians and has led to hundreds of civilians being killed.
Civil war in Afghanistan (1996–2001) – an armed conflict that continued after the capture of Kabul by the Taliban, in which the formation of the Afghan Northern Alliance attempted to oust the Taliban. It proved largely unsuccessful, as the Taliban continued to make gains and eliminated much of the Alliance's leadership.
Coups
The most prominent coups d'état of the decade include:
2000 overthrow of Slobodan Milošević in the Federal Republic of Yugoslavia – after Slobodan Milošević was accused by opposition figures of winning the 2000 election through electoral fraud, mass protests led by the opposition movement Otpor! pressured him to resign. Milošević was arrested in 2001 and sent to The Hague to face charges for his alleged involvement in war crimes during the Yugoslav Wars.
2002 Venezuelan coup d'état attempt – a failed military coup d'état on April 11, 2002, which aimed to overthrow the president of Venezuela Hugo Chávez. During the coup Hugo Chávez was arrested and Pedro Carmona became the interim President for 47 hours. The coup led to a pro-Chávez uprising that the Metropolitan Police attempted to suppress. The pro-Chávez Presidential Guard eventually retook the Miraflores presidential palace without firing a shot, leading to the collapse of the Carmona government.
2004 Haitian coup d'état – a conflict fought for several weeks in Haiti during February 2004 that resulted in the premature end of President Jean-Bertrand Aristide's second term, and the installment of an interim government led by Gérard Latortue.
2006 Thai coup d'état – on September 19, 2006, while the elected Thai Prime Minister Thaksin Shinawatra was in New York for a meeting of the UN, Army Commander-in-Chief Lieutenant General Sonthi Boonyaratglin launched a bloodless coup d'état.
Fatah–Hamas conflict (2006–2009) – an armed conflict fought between the two main Palestinian factions, Fatah and Hamas with each vying to assume political control of the Palestinian territories. In June 2007, Hamas took control of the entire Gaza Strip, and established a separate government while Fatah remained in control of the West Bank. This in practice divided the Palestinian Authority into two. Various forces affiliated with Fatah engaged in combat with Hamas, in numerous gun battles. Most Fatah leaders eventually escaped to Egypt and the West Bank, while some were captured and killed.
2009 Honduras coup d'état – The armed forces of the country entered the president's residence and overthrew president Manuel Zelaya. (see 2009 Honduran constitutional crisis).
Nuclear threats
Since 2005, Iran's nuclear program has become the subject of contention with the Western world due to suspicions that Iran could divert the civilian nuclear technology to a weapons program. This has led the UN Security Council to impose sanctions against Iran on select companies linked to this program, thus furthering its economic isolation on the international scene. The U.S. Director of National Intelligence said in February 2009 that Iran would not realistically be able to get a nuclear weapon until 2013, if it chose to develop one.
In 2003, the United States invaded Iraq over allegations that its leader Saddam Hussein was stockpiling weapons of mass destruction including chemical and biological weapons or was in the process of creating them. None were found, spawning multiple theories.
North Korea successfully performed two nuclear tests in 2006 and 2009.
Operation Orchard – during the operation, Israel bombed what was believed to be a Syrian nuclear reactor on September 6, 2007, which was thought to be built with the aid of North Korea. The White House and Central Intelligence Agency (CIA) later declared that American intelligence indicated the site was a nuclear facility with a military purpose, though Syria denies this.
The Doomsday Clock, the symbolic representation of the threat of nuclear annihilation, moved four minutes closer to midnight: two minutes in 2002 and two minutes in 2007 to 5 minutes to midnight.
Decolonization and independence
East Timor regains independence from Indonesia in 2002. Portugal granted independence to East Timor in 1975, but it was soon after invaded by Indonesia, which only recognized East Timorese independence in 2002.
Montenegro gains independence from Serbia in 2006, ending the state union with Serbia and dissolving the last remnant of Yugoslavia, 88 years after its creation. Although not an EU member, Montenegro uses the euro as its national currency.
Kosovo declares independence from Serbia in 2008, though its independence still remains unrecognized by many countries.
On August 23, 2005, Israel's unilateral disengagement from 25 Jewish settlements in the Gaza Strip and West Bank ends.
On August 26, 2008, Russia formally recognises the disputed Georgian regions of Abkhazia and South Ossetia as independent states. The vast majority of United Nations member states maintain that the areas belong to Georgia.
Democracy
During this decade, the peaceful transfer of power through elections first occurred in Mexico, Indonesia, Taiwan, Colombia, and several other countries. (See below.)
Prominent political events
The prominent political events of the decade include:
North America
Canada
Paul Martin replaces Jean Chrétien as Prime Minister of Canada in 2003 by becoming the new leader of the Liberal Party. Stephen Harper was elected prime minister in 2006 following the defeat of Paul Martin's government in a motion of no confidence.
Greenland
Greenland was granted further Self-governance (or "self-rule") within the Kingdom of Denmark on June 21, 2009.
Mexico
Vicente Fox was elected President of Mexico in the 2000 presidential election, making him the first president elected from an opposition party in 71 years, defeating the then-dominant Institutional Revolutionary Party (PRI).
United States
George W. Bush was sworn in succeeding Bill Clinton as the 43rd President of the United States on January 20, 2001, following a sharply contested election.
On October 26, 2001, U.S. President George W. Bush signed the USA PATRIOT Act into law.
On February 15, 2003, anti-war protests broke out around the world in opposition to the U.S. invasion of Iraq, in what the Guinness Book of World Records called the largest anti-war rally in human history. In reaction, The New York Times writer Patrick Tyler wrote in a February 17 article that "the huge anti-war demonstrations around the world this weekend are reminders that there may still be two superpowers on the planet: the United States and world public opinion."
On June 5, 2004, Ronald Reagan, the 40th President of the United States, died after having suffered from Alzheimer's disease for nearly a decade. His seven-day state funeral followed, spanning June 5–11. The general public stood in long lines waiting for a turn to view the casket. People passed by the casket at a rate of about 5,000 per hour (83.3 per minute, or 1.4 per second), and the wait time was about three hours. In all, 104,684 people passed through while Reagan lay in state.
Barack Obama was sworn in as the 44th President of the United States in 2009, becoming the nation's first African American president.
South America
November 19, 2000 – Peruvian dictator/president Alberto Fujimori resigns via fax; Valentín Paniagua is named interim President.
Álvaro Uribe is elected President of Colombia in 2002, the first political independent to do so in more than a century and a half, creating the right-wing political movement known as uribism. Uribe was re-elected in 2006.
In 2006, Michelle Bachelet is elected as the first female President of Chile.
Left-wing governments emerge in South American countries. These governments include those of Hugo Chávez in Venezuela since 1999, Fernando Lugo in Paraguay, Rafael Correa in Ecuador, and Evo Morales in Bolivia. With the creation of the ALBA, Fidel Castro—leader of Cuba between 1959 and 2008—and Hugo Chávez reaffirmed their opposition to the aggressive militarism and imperialism of the United States.
Luiz Inácio Lula da Silva was elected (2002) and reelected (2006) President of Brazil.
In 2003, Néstor Kirchner was elected President of Argentina. In 2007, he was succeeded by his wife, Cristina Fernández de Kirchner, who became the first directly elected female President of Argentina.
May 23, 2008 – The Union of South American Nations, a supranational union, is formed by joining the Andean Community and Mercosur.
Asia
On March 18, 2000, Chen Shui-bian was elected president of Taiwan, ending the half-century rule of the KMT on the island and becoming the first president from the DPP.
Israeli withdrawal from the Israeli security zone in southern Lebanon – on May 25, 2000, Israel withdrew IDF forces from southern Lebanon, ending a 22-year occupation.
In July 2000, the Camp David 2000 Summit was held, aimed at reaching a "final status" agreement between the Palestinians and the Israelis. The summit collapsed after Yasser Arafat would not accept a proposal drafted by American and Israeli negotiators. Israeli Prime Minister Ehud Barak was prepared to offer the entire Gaza Strip, a Palestinian capital in a part of East Jerusalem, 73% of the West Bank (excluding eastern Jerusalem) rising to 90–94% after 10–25 years, and financial reparations for Palestinian refugees, in exchange for peace. Arafat turned down the offer without making a counter-offer.
January 20, 2001 – June 30, 2010 – Gloria Macapagal Arroyo was the 14th president of the Republic of the Philippines.
2002 – Recep Tayyip Erdoğan was elected as Prime Minister of Turkey. Abdullah Gül was elected as President of Turkey.
March 15–16, 2003 – CPC General Secretary, President Hu Jintao and Premier Wen Jiabao, replaced former People's Republic of China leaders Jiang Zemin and Zhu Rongji.
2003 – the 12-year self-government in Iraqi Kurdistan ends, developed under the protection of the UN "No-fly zone" during the now-ousted Saddam Hussein regime.
2003 – Prime Minister of Malaysia Mahathir Mohamad resigns in October; he is succeeded by Abdullah bin Ahmad Badawi.
Manmohan Singh was elected (2004) and reelected (2009) Prime Minister in India. He is the only Prime Minister since Jawaharlal Nehru to return to power after completing a full five-year term. Singh previously carried out economic reforms in India in 1991, during his tenure as the Finance Minister.
January 9, 2005 – Mahmoud Abbas is elected to succeed Yasser Arafat as Palestinian Authority President.
August 1, 2005 – Fahd, the King of Saudi Arabia from 1982 to 2005, died and is replaced by King Abdullah.
January 4, 2006 – Powers are transferred from Israeli Prime Minister Ariel Sharon to his deputy, Vice Prime Minister Ehud Olmert, after Sharon suffers a massive hemorrhagic stroke.
December 30, 2006 – Former leader of Iraq Saddam Hussein is executed.
2007 – The King of Nepal is suspended from exercising his duties by the newly formed interim legislature on January 15, 2007.
2007 political crisis in Pakistan – Pervez Musharraf resigned in 2008, following the assassination of Benazir Bhutto.
2008 – Nepal becomes the world's youngest democracy by transforming from a constitutional monarchy to a federal democratic republic on May 28, 2008.
2008–2010 Thai political crisis and the 2010 Thai political protests by "red shirt" demonstrators.
2009 Iranian election protests – The 2009 Iranian presidential election sparked massive protests in Iran and around the world against alleged electoral fraud and in support of defeated candidate Mir-Hossein Mousavi. During the protests the Iranian authorities closed universities in Tehran, blocked web sites, blocked cell phone transmissions and text messaging, and banned rallies. Several demonstrators in Iran were killed or imprisoned during the protests. Dozens of human casualties were reported or confirmed.
Death and funeral of Corazon Aquino – Former President Corazon Aquino of the Philippines died of cardiorespiratory arrest on August 1, 2009, at the age of 76 after being in hospital from June 2009, and being diagnosed with colorectal cancer in March 2008.
Europe
The Mayor of London is an elected politician who, along with the London Assembly of 25 members, is accountable for the strategic government of Greater London. The role, created in 2000 after the London devolution referendum, was the first directly elected mayor in the United Kingdom.
The Netherlands becomes the first country in the world to fully legalize same-sex marriage on April 1, 2001.
Silvio Berlusconi becomes Prime Minister of Italy in 2001 and again in 2008, after two years of a government held by Romano Prodi, dominating the political scene for more than a decade and becoming the longest-serving post-war Prime Minister.
European integration makes progress with the definitive circulation of the euro in twelve countries in 2002 and the widening of the European Union to 27 countries in 2007. A European Constitution bill is rejected by French and Dutch voters in 2005, but a similar text, the Treaty of Lisbon, is drafted in 2007 and finally adopted by the 27 member countries.
June 1–4, 2002 – The Golden Jubilee of Queen Elizabeth II was the international celebration marking the 50th anniversary of the accession of Elizabeth II to the thrones of seven countries.
The Rose Revolution in Georgia leads to the ousting of Eduard Shevardnadze and the end of the Soviet era of leadership in the country.
José Luis Rodríguez Zapatero replaced José María Aznar as President of the Government of Spain in 2004.
The Orange Revolution in Ukraine occurs in the aftermath of the 2004 Ukrainian presidential election.
Pope John Paul II dies on April 2, 2005. Pope Benedict XVI is elected on April 19, 2005.
Angela Merkel becomes the first female Chancellor of Germany in 2005.
The St Andrews Agreement was signed in St Andrews, Fife, Scotland, in 2006 by all parties to restore the Northern Ireland Assembly and bring in the principle of policing by consent with the Police Service of Northern Ireland.
Nicolas Sarkozy is elected President of France in 2007 succeeding Jacques Chirac, who had held the position for 12 years.
Gordon Brown succeeds Tony Blair as Prime Minister of the United Kingdom in 2007.
Tony Blair was officially confirmed as Middle East envoy for the United Nations, European Union, United States, and Russia in 2007.
Dmitry Medvedev succeeded Vladimir Putin as the President of Russia in 2008.
Parties broadly characterised by political scientists as right-wing populist soared throughout the 2000s, in the wake of increasing anti-Islam and anti-immigration sentiment in most Western European countries. By 2010, such parties (albeit with often significant differences between them) were present in the national parliaments of Belgium, the Netherlands, Denmark, Norway, Sweden, Finland, Switzerland, Austria, Italy and Greece. In Austria, Italy and Switzerland, the Freedom Party of Austria, Lega Nord and Swiss People's Party, respectively, were at times also part of the national governments, and in Denmark, the Danish People's Party tolerated a right-liberal minority government from 2001 throughout the decade. While not present in the national parliaments of France and the United Kingdom, Jean-Marie Le Pen of the National Front came second in the first round of the 2002 French presidential election, and in the 2009 European Parliament election, the UK Independence Party came second, beating even the Labour Party, while the British National Party managed to win two seats for the first time.
Assassinations and attempts
Prominent assassinations, targeted killings, and assassination attempts include:
January 16, 2001 – Laurent-Désiré Kabila, the President of the Democratic Republic of the Congo was assassinated by a bodyguard. The motive remains unexplained.
October 17, 2001 – Israeli Minister of Tourism Rehavam Ze'evi was assassinated by three Palestinian assailants, members of the Popular Front for the Liberation of Palestine.
May 6, 2002 – Pim Fortuyn, Dutch politician, was assassinated by environmental activist Volkert van der Graaf.
March 12, 2003 – Zoran Đinđić, Serbian and Montenegrin Prime Minister, is assassinated by Zvezdan Jovanović, a soldier of Milorad Ulemek, the former commander of the Special Operations Unit of Yugoslavia's secret police.
September 10, 2003 – Anna Lindh, Swedish foreign minister, was assassinated after being stabbed in the chest, stomach, and arms by Serbian and Montenegrin national Mijailo Mijailović while shopping in a Stockholm department store.
March 22, 2004 – Ahmed Yassin, the founder and spiritual leader of the militant Islamist group Hamas, was assassinated in the Gaza Strip by the Israeli Air Force.
November 2, 2004 – Theo van Gogh, Dutch filmmaker and critic of Islamic culture, was assassinated in Amsterdam by Mohammed Bouyeri.
February 14, 2005 – Rafic Hariri, former Prime Minister of Lebanon, was assassinated when explosives equivalent to around 1,000 kg of TNT were detonated as his motorcade drove past the St. George Hotel in Beirut. The assassination attempt also killed at least 16 other people and injured 120 others.
December 27, 2007 – Benazir Bhutto, former Pakistani prime minister, was assassinated at an election rally in Rawalpindi by a bomb blast. The assassination attempt also killed at least 80 other people.
March 2, 2009 – João Bernardo Vieira, President of Guinea-Bissau, was assassinated during an armed attack on his residence in Bissau.
May 31, 2009 – George Tiller, pro-choice advocate and late-term abortion provider, was assassinated at his church in Wichita, Kansas, by anti-abortion extremist Scott Roeder.
Disasters
Natural disasters
The 2000s experienced some of the worst and most destructive natural disasters in history.
Earthquakes (including tsunamis)
On January 13, 2001, a 7.6-magnitude earthquake strikes El Salvador, killing 944 people.
On January 26, 2001, an earthquake hits Gujarat, India, killing more than 12,000.
On February 28, 2001, the Nisqually earthquake hits the Seattle metropolitan area, causing major damage to an aging elevated highway in central Seattle.
On February 13, 2001, a 6.6-magnitude earthquake hits El Salvador, killing at least 400.
On May 21, 2003, an earthquake in the Boumerdès region of northern Algeria kills 2,200.
On December 26, 2003, the massive 2003 Bam earthquake devastates southeastern Iran; over 40,000 people are reported killed in the city of Bam.
On December 26, 2004, one of the worst natural disasters in recorded history hits southeast Asia, when the largest earthquake in 40 years strikes the Indian Ocean region. The massive 9.3 magnitude earthquake, epicentered just off the west coast of the Indonesian island of Sumatra, generates enormous tsunami waves that crash into the coastal areas of a number of nations including Thailand, India, Sri Lanka, the Maldives, Malaysia, Myanmar, Bangladesh, and Indonesia. The official death toll from the Boxing Day tsunami in the affected countries is over 230,000 people.
On October 8, 2005, the 2005 Kashmir earthquake kills about 80,000 people.
On May 12, 2008, over 69,000 are killed in central south-west China by the Wenchuan quake, an earthquake measuring 7.9 on the moment magnitude scale. The epicenter was west-northwest of the provincial capital Chengdu, Sichuan province.
Tropical cyclones, other weather, and bushfires
July 7–11, 2005 – Hurricane Dennis caused damage in the Caribbean and southeastern United States. Dennis killed a total of 88 people and caused $3.71 billion in damages.
August 28–29, 2005 – Hurricane Katrina made landfall in Mississippi, devastating the city of New Orleans and nearby coastal areas. Katrina was recognized as the costliest natural disaster in the United States at the time, after causing a record $108 billion in damages (a record later surpassed by Hurricane Harvey in 2017). Katrina caused over 1,200 deaths.
November 30, 2006 – Typhoon Durian (known in the Philippines as Typhoon Reming) affected the Philippines’ Bicol Region, and together with a concurrent eruption of Mayon Volcano, caused mudflows and killed more than 1,200 people.
August 30, 2007 – A group of Croatian firefighters flown onto the island of Kornat as part of the 2007 coastal fires firefighting effort was caught by the flames; twelve of the thirteen men surrounded by the fire were killed, the biggest loss of life in the history of Croatian firefighting.
May 3, 2008 – Cyclone Nargis had an extreme impact in Myanmar, causing nearly 140,000 deaths and $10 billion in damages.
June 21, 2008 – Typhoon Fengshen passed over Visayas, Philippines, sinking the ship MV Princess of the Stars, and killing more than 800 passengers.
February 7 – March 14, 2009 – The Black Saturday bushfires, the deadliest bushfires in Australian history, took place across the Australian state of Victoria during extreme bushfire-weather conditions, killing 173 people, injuring more than 500, and leaving around 7,500 homeless. The fires came after Melbourne recorded the highest temperature ever measured in an Australian capital city. The majority of the fires were caused by either fallen or clashing power lines, or arson.
Winter of 2009–2010 – The winter of 2009–2010 saw abnormally cold temperatures in Europe, Asia, and America. A total of 21 people were reported to have died as a result of the cold in the British Isles. On December 26, 2009, Saint Petersburg, Russia, was covered by 35 cm of snow, the largest December snowfall recorded in the city since 1881.
September 25–26, 2009 – Typhoon Ketsana (known in the Philippines as Tropical Storm Ondoy) caused flooding in the Philippines, mostly in the Manila Metropolitan area, killing nearly 700 people in total. Flooding levels reached a record of 20 ft (6.1 m) in rural areas.
Epidemics
Antibiotic resistance is a serious and growing phenomenon in contemporary medicine and has emerged as one of the major public health concerns of the 21st century, particularly as it pertains to pathogenic organisms (the term is of little relevance to organisms that do not cause disease in humans).
The outbreak of foot-and-mouth disease in the United Kingdom in 2001 caused a crisis in British agriculture and tourism. This epizootic saw 2,000 cases of the disease in farms across most of the British countryside. Over 10 million sheep and cattle were killed.
Between November 2002 and July 2003, an outbreak of severe acute respiratory syndrome (SARS) occurred in Hong Kong, with 8,273 cases and 775 deaths worldwide (9.6% fatality) according to the World Health Organization (WHO). Within weeks, SARS spread from Hong Kong to infect individuals in 37 countries in early 2003.
Methicillin-resistant Staphylococcus aureus: the Office for National Statistics reported 1,629 MRSA-related deaths in England and Wales during 2005, indicating an MRSA-related mortality rate half of that in the United States for 2005, even though the figures from the British source were explained to be high because of "improved levels of reporting, possibly brought about by the continued high public profile of the disease" during the time of the 2005 United Kingdom general election. MRSA is thought to have caused 1,652 deaths in the UK in 2006, up from 51 in 1993.
The 2009 H1N1 (swine flu) flu pandemic was also considered a natural disaster. On October 25, 2009, U.S. President Barack Obama officially declared H1N1 a national emergency. Despite President Obama's concern, a Fairleigh Dickinson University PublicMind poll found in October 2009 that an overwhelming majority of New Jerseyans (74%) were not very worried or not at all worried about contracting the H1N1 flu virus.
A study conducted in coordination with the University of Michigan Health Service, scheduled for publication in the December 2009 American Journal of Roentgenology, warned that H1N1 flu can cause pulmonary embolism, surmised to be a leading cause of death in the pandemic. The study authors suggest physician evaluation via contrast-enhanced CT scans for the presence of pulmonary emboli when caring for patients diagnosed with respiratory complications from a "severe" case of the H1N1 flu.
As of May 30, 2010, as stated by the World Health Organization, more than 214 countries and overseas territories or communities have reported laboratory confirmed cases of pandemic influenza H1N1 2009, including over 18,138 deaths.
Footnote
The Walkerton Tragedy is a series of events that accompanied the contamination of the water supply of Walkerton, Ontario, Canada, by Escherichia coli bacteria in May 2000.
Starting May 11, 2000, many residents of the community of about 5,000 people began to simultaneously experience bloody diarrhea, gastrointestinal infections and other symptoms of E. coli infection.
Seven people died directly from drinking the E. coli-contaminated water and about 2,500 became ill; the deaths might have been prevented if the Walkerton Public Utilities Commission had acknowledged the contamination sooner.
In 2001 a similar outbreak in North Battleford, Saskatchewan caused by the protozoan Cryptosporidium affected at least 5,800 people.
Non-natural disasters
Vehicular wrecks
On July 25, 2000, Air France Flight 4590, a Concorde aircraft, crashed into a hotel in Gonesse just after takeoff from Paris, killing all 109 aboard and 4 in the hotel. This was the only Concorde accident in which fatalities occurred. It was the beginning of the end for Concorde as an airliner; the type was retired three years later.
On August 12, 2000, the Russian submarine K-141 Kursk sank in the Barents Sea, killing all 118 men on board.
On November 11, 2000, the Kaprun disaster occurred. 155 people perished in a fire that broke out on a train in the Austrian Alps.
On October 8, 2001, two aircraft collide on a runway at the Linate Airport in Milan, Italy, killing all 114 people aboard both aircraft and 4 people on the ground.
On November 12, 2001, American Airlines Flight 587 crashed into a neighborhood in Queens, New York City, killing all 260 aboard and 5 people on the ground.
On May 25, 2002, China Airlines Flight 611 broke up in mid-air and plunged into the Taiwan Strait, killing all 225 people on board.
On July 1, 2002, a Tupolev Tu-154 passenger airliner and a Boeing 757 cargo plane collided above the German town of Überlingen. All 71 people on both aircraft died.
On July 27, 2002, a Sukhoi Su-27 fighter jet crashed at an air show in Ukraine, killing 77 and injuring 543, making it the worst air show disaster in history.
On September 26, 2002, the ferry MV Le Joola sank off the coast of Gambia, killing at least 1,863 people.
On February 1, 2003, at the conclusion of the STS-107 mission, the Space Shuttle Columbia disintegrated during reentry over Texas, killing all seven astronauts on board.
On February 19, 2003, an Ilyushin Il-76 military aircraft crashed outside the Iranian city of Kerman, killing 275.
On August 14, 2005, Helios Airways Flight 522 crashed into a mountain north of Marathon, Greece, while flying from Larnaca, Cyprus, to Athens, Greece. All 115 passengers and six crew on board the aircraft were killed.
On August 16, 2005, West Caribbean Airways Flight 708 crashed in a remote region of Venezuela, killing 160.
On September 29, 2006, Gol Transportes Aéreos Flight 1907 collided with a new Embraer Legacy 600 business jet over the Brazilian Amazon and crashed, killing all 154 people on board. The Embraer aircraft made an emergency landing at a nearby military outpost with no harm to its seven occupants.
On December 30, 2006, the ferry MV Senopati Nusantara sank in a storm in the Java Sea, killing between 400 and 500 of the 628 people aboard. Three days later, Adam Air Flight 574 crashed in the same storm, killing all 102 people on board.
On July 17, 2007, TAM Airlines Flight 3054 skidded off the runway at Congonhas-São Paulo Airport and crashed into a nearby warehouse, leaving 199 people dead.
On February 12, 2009, Colgan Air Flight 3407 crashed on approach in Buffalo, New York, killing 50.
On June 1, 2009, Air France Flight 447 crashed into the southern Atlantic Ocean after instrument failure disoriented the crew. All 228 people on board perished.
On June 30, 2009, Yemenia Flight 626 crashed into the Indian Ocean near the Comoros islands. Of the 153 people on board, only 12-year-old Bahia Bakari survived.
Stampedes
The 2005 Baghdad bridge stampede occurred on August 31, 2005, when 953 people died following a stampede on Al-Aaimmah bridge, which crosses the Tigris river in the Iraqi capital of Baghdad.
Economics
The most significant evolution of the early 2000s in the economic landscape was the long-time predicted breakthrough of economic giant China, which had double-digit growth during nearly the whole decade. To a lesser extent, India also benefited from an economic boom which saw the two most populous countries becoming an increasingly dominant economic force. The rapid catching-up of emerging economies with developed countries sparked some protectionist tensions during the period and was partly responsible for an increase in energy and food prices at the end of the decade. The economic developments in the latter third of the decade were dominated by a worldwide economic downturn, which started with the crisis in housing and credit in the United States in late 2007, and led to the bankruptcy of major banks and other financial institutions. The outbreak of this global financial crisis sparked a global recession, beginning in the United States and affecting most of the industrialized world.
A study by the World Institute for Development Economics Research at United Nations University reports that the richest 1% of adults alone owned 40% of global assets in the year 2000. The three richest people possess more financial assets than the lowest 48 nations combined.
The combined wealth of the "10 million dollar millionaires" grew to nearly $41 trillion in 2008.
The sale of UK gold reserves, 1999–2002 was a policy pursued by HM Treasury when gold prices were at their lowest in 20 years, following an extended bear market. The period itself has been dubbed by some commentators as the Brown Bottom or Brown's Bottom.
The period takes its name from Gordon Brown, the then UK Chancellor of the Exchequer (who later became Prime Minister), who decided to sell approximately half of the UK's gold reserves in a series of auctions. At the time, the UK's gold reserves were worth about US$6.5 billion, accounting for about half of the UK's US$13 billion foreign currency net reserves.
The 2001 merger of AOL with Time Warner (a deal valued at $350 billion, the largest merger in American business history) was "the biggest mistake in corporate history", in the view of Time Warner chief Jeff Bewkes.
February 7, 2004 – The EuroMillions transnational lottery was launched by France's Française des Jeux, Spain's Loterías y Apuestas del Estado, and the United Kingdom's Camelot.
In 2007, it was reported that in the UK, one pound in every seven spent went to the Tesco grocery and general merchandise retailer.
On October 9, 2007, the Dow Jones Industrial Average closed at the record level of 14,164.53. Two days later, on October 11, the Dow traded at its highest intra-day level ever, at the 14,198.10 mark. Numerous reasons were cited for the Dow's extremely rapid rise, a climb from the 11,000 level in early 2006 to the 14,000 level in late 2007 that would normally take many years to accomplish. They included possible future takeovers and mergers, healthy earnings reports particularly in the tech sector, and moderate inflation numbers, fueling speculation that the Federal Reserve would not raise interest rates. Roughly on par with the 2000 record when adjusted for inflation, this represented the final high of the cyclical bull market. The index closed 2007 at 13,264.82, a level it would not surpass for nearly five years.
Economic growth in the world
Between 1999 and 2009, according to the World Bank statistics for GDP:
The world economy by nominal GDP almost doubled in size, from U.S. $30.21 trillion in 1999 to U.S. $58.23 trillion in 2009; this figure is not adjusted for inflation. By purchasing power parity (PPP), world GDP rose 78%, according to the IMF, but at constant (inflation-adjusted) prices it rose only 42%, according to IMF growth rates. The following figures are nominal GDP values not adjusted for inflation and should be interpreted with extreme caution (an illustrative calculation of the growth rates above follows the list below):
The United States (U.S. $14.26 trillion) retained its position of possessing the world's largest economy. However, the size of its contribution to the total global economy dropped from 28.8% to 24.5% by nominal price or a fall from 23.8% to 20.4% adjusted for purchasing power.
Japan (U.S. $5.07 trillion) retained its position of possessing the second largest economy in the world, but its contribution to the world economy also shrank significantly from 14.5% to 8.7% by nominal price or a fall from 7.8% to 6.0% adjusted for purchasing power.
China (U.S. $4.98 trillion) went from being the sixth largest to the third largest economy, and in 2009 contributed to 8.6% of the world's economy, up from 3.3% in 1999 by nominal price or a rise from 6.9% to 12.6% adjusted for purchasing power.
Germany (U.S. $3.35 trillion), France (U.S. $2.65 trillion), United Kingdom (U.S. $2.17 trillion) and Italy (U.S. $2.11 trillion) followed as the 4th, 5th, 6th and 7th largest economies, respectively in 2009.
Brazil (U.S. $1.57 trillion) retained its position as the 8th largest economy, followed by Spain (U.S. $1.46 trillion), which remained at 10th.
Other major economies included Canada (U.S. $1.34 trillion; 10th, down from 9th), India (U.S. $1.31 trillion; up to 11th from 12th), Russia (U.S. $1.23 trillion; from 16th to 12th), Australia (U.S. $925 billion; from 14th to 13th), Mexico (U.S. $875 billion; 14th, down from 11th) and South Korea (U.S. $832 billion; 15th, down from 13th).
In terms of purchasing power parity in 2009, the ten largest economies were the United States (U.S. $14.26 trillion), China (U.S. $9.10 trillion), Japan (U.S. $4.14 trillion), India (U.S. $3.75 trillion), Germany (U.S. $2.98 trillion), Russia (U.S. $2.69 trillion), United Kingdom (U.S. $2.26 trillion), France (U.S. $2.17 trillion), Brazil (U.S. $2.02 trillion), and Italy (U.S. $1.92 trillion).
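The growth figures quoted above can be reproduced with simple arithmetic. The following minimal Python sketch is purely illustrative and not part of the cited statistics: it uses only the nominal world-GDP totals and percentages already given in this section, and the helper and variable names are arbitrary.

```python
# Illustrative sketch only: recomputes the growth rates quoted above from the
# article's own figures. The numbers are the cited World Bank/IMF values; the
# function and variable names are arbitrary (not from any source).

def growth_pct(start: float, end: float) -> float:
    """Percentage growth from a starting value to an ending value."""
    return (end / start - 1) * 100

# Nominal world GDP in trillions of U.S. dollars, as cited in the text.
nominal_1999 = 30.21
nominal_2009 = 58.23
nominal_growth = growth_pct(nominal_1999, nominal_2009)
print(f"Nominal growth 1999-2009: {nominal_growth:.0f}%")  # ~93%, i.e. "almost doubled"

# The text also cites +78% growth by PPP and +42% at constant (inflation-adjusted)
# prices. The gap between nominal and constant-price growth reflects the cumulative
# effect of inflation and exchange-rate movements over the decade:
constant_price_growth = 42.0
implied_price_effect = (1 + nominal_growth / 100) / (1 + constant_price_growth / 100) - 1
print(f"Implied cumulative price/exchange-rate effect: {implied_price_effect:.0%}")  # ~36%
```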
The average house price in the UK increased by 132% between the fourth quarter of 2000 and its subsequent peak, and by 91% over the decade as a whole, while the average salary increased by only 40%.
Globalization and its discontents
The removal of trade and investment barriers, the growth of domestic markets, artificially low currencies, the proliferation of education, the rapid development of the high-tech and information systems industries, and the growth of the world economy led to significant growth of offshore outsourcing during the decade. Many multinational corporations significantly increased subcontracting of manufacturing (and, increasingly, services) across national boundaries to developing countries, particularly China and India, mainly because the two most populous countries in the world provide huge pools of talent and are low-cost sourcing countries. As a result of this growth, many of these developing countries accumulated capital and started investing abroad. Other countries, including the United Arab Emirates, Australia, Brazil and Russia, benefited from increased demand for their mineral and energy resources that global growth generated. The hollowing out of manufacturing was felt in Japan and in parts of the United States and Europe that had not been able to develop successful innovative industries. Opponents point out that the practice of offshore outsourcing by countries with higher wages leads to the reduction of their own domestic employment and domestic investment. As a result, many customer service jobs as well as jobs in the information technology sectors (data processing, computer programming, and technical support) in countries such as the United States and the United Kingdom have been or are potentially affected.
While global trade rose in the decade (partially driven by China's entry into the WTO in 2001), there was little progress in the multilateral trading system. International trade continued to expand as emerging and developing economies, in particular China and South Asian countries, benefited from low wage costs and often undervalued currencies. However, global negotiations to reduce tariffs did not make much progress, as the member countries of the World Trade Organization failed to reach agreements extending the scope of free trade. The Doha Round of negotiations, launched in 2001 by the WTO to promote development, was never completed because of growing tensions between regional blocs, and the Cancún Conference in 2003 likewise failed to reach a consensus on services trade and agricultural subsidies.
The comparative rise of China, India, and other developing countries also contributed to their growing clout in international fora. In 2009, it was determined that the G20, originally a forum of finance ministers and central bank governors, would replace the G8 as the main economic council.
2007 Chinese export recalls – In 2007, a series of product recalls and import bans were imposed by the product safety institutions of the United States, Canada, the European Union, Australia and New Zealand against products manufactured in and exported from the mainland of the People's Republic of China (PRC) because of numerous alleged consumer safety issues.
Events in the confidence crisis included recalls of consumer goods such as pet food, toys, toothpaste and lipstick, and a ban on certain types of seafood. Also included were reports on the poor crash safety of Chinese automobiles slated to enter the American and European markets in 2008. The episode damaged confidence in the safety and quality of mainland Chinese manufactured goods in the global economy.
The age of turbulence
The decade was marked by two financial and economic crises. In 2001, the dot-com bubble burst, causing turmoil in financial markets and a decline in economic activity in the developed economies, in particular in the United States. However, the impact of the crisis on economic activity was limited thanks to the intervention of central banks, notably the U.S. Federal Reserve System. Alan Greenspan, chairman of the Federal Reserve until 2006, cut interest rates several times to avoid a severe recession, allowing an economic revival in the U.S.
As the Federal Reserve maintained low interest rates to favor economic growth, a housing bubble began to appear in the United States. In 2007, rising interest rates and the collapse of the housing market caused a wave of loan defaults in the U.S. The subsequent mortgage crisis triggered a global financial crisis, because the subprime mortgages had been securitized and sold to international banks and investment funds. Despite extensive intervention by central banks and governments, including the partial and total nationalization of major European banks, the crisis of sovereign debt became particularly acute, first in Iceland, and, as events of the early 2010s would show, it was not an isolated European example. Economic activity was severely affected around the world in 2008 and 2009, with disastrous consequences for carmakers.
In 2007, the UK's Chancellor of the Exchequer, Gordon Brown, delivered his final Mansion House speech as Chancellor before he moved into Number 10. Addressing financiers, he declared that "a new world order has been created", that everyone needed to follow the City's "great example", and that this was "an era that history will record as the beginning of a new Golden Age".
Reactions of governments in developed and developing countries against the economic slowdown were largely inspired by Keynesian economics. The end of the decade was characterized by a Keynesian resurgence, while the influence and media popularity of left-leaning economists Joseph Stiglitz and Paul Krugman (Nobel Prize recipients in 2001 and 2008, respectively) kept growing throughout the decade. Several international summits were organized to find solutions to the economic crisis and to impose greater control on the financial markets. The G-20 became a major organization in 2008 and 2009, as leaders of the member countries held two major summits, in Washington in November 2008 and in London in April 2009, to regulate the banking and financial sectors, and also succeeded in coordinating their economic action and in avoiding protectionist reactions.
Energy crisis
From the mid-1980s to September 2003, the inflation-adjusted price of a barrel of crude oil on NYMEX was generally under $25/barrel. During 2003, the price rose above $30, reached $60 by August 11, 2005, and peaked at $147.30 in July 2008. Commentators attributed these price increases to many factors, including reports from the United States Department of Energy and others showing a decline in petroleum reserves, worries over peak oil, Middle East tension, and oil price speculation.
For a time, geopolitical events and natural disasters indirectly related to the global oil market had strong short-term effects on oil prices. These events and disasters included North Korean missile tests, the 2006 conflict between Israel and Lebanon, worries over Iranian nuclear plants in 2006 and Hurricane Katrina. By 2008, such pressures appeared to have an insignificant impact on oil prices given the onset of the global recession. The recession caused demand for energy to shrink in late 2008 and early 2009, and the price plunged as well. However, prices surged back in May 2009, returning to November 2008 levels.
Many fast-growing economies throughout the world, especially in Asia, were also a major factor in the rapidly increasing demand for fossil fuels, which, along with fewer new petroleum finds, greater extraction costs, and political turmoil, drove two other trends: a surge in the price of petroleum products and a push by governments and businesses to promote the development of environmentally friendly technology (known informally as "green" technology). However, a side effect of the push by some industrial nations to "go green" and utilize biofuels was a decrease in the food supply and a subsequent increase in food prices. This contributed to the 2007 food price crisis, which seriously affected the world's poorer nations with an even more severe shortage of food.
The rise of the euro
A common currency for most EU member states, the euro, was established electronically in 1999, officially tying all the currencies of each participating nation to each other. The new currency was put into circulation in 2002 and the old currencies were phased out. Only three countries of the then 15 member states decided not to join the euro (the United Kingdom, Denmark and Sweden). In 2004 the EU undertook a major eastward enlargement, admitting 10 new member states (eight of which were former communist states). Two more, Bulgaria and Romania, joined in 2007, establishing a union of 27 nations.
The euro has since become the second largest reserve currency and the second most traded currency in the world after the US$.
With more than €790 billion in circulation, the euro was the currency with the highest combined value of banknotes and coins in circulation in the world, having surpassed the US$.
Science and technology
Science
The most significant scientific advances of each year were recognized by the annual Breakthrough of the Year award made by the AAAS journal Science.
Scientific milestones by field
Archaeology
2003 – Fossils of a new dwarf species of human, Homo floresiensis, were discovered on the island of Flores, Indonesia (findings first published in October 2004).
2009 – Publication of the detailed description of Ardipithecus ramidus, a species of hominin classified as an australopithecine of the genus Ardipithecus. A. kadabba was considered to be a subspecies of A. ramidus until 2004.
Biology
2001 – The world's first self-contained artificial heart was implanted in Robert Tools.
2002 – The 2002–2004 SARS outbreak began in southern China and spread to Hong Kong and then to other countries.
2003 – The Human Genome Project was completed, with 99% of the human genome sequenced to 99.99% accuracy.
2005 – National Geographic Society and IBM established The Genographic Project, which aims to trace the ancestry of every living human down to a single male ancestor.
2005 – Surgeons in France carried out the first successful partial human face transplant.
2005 – Equipped with genome data and field observations of organisms from microbes to mammals, biologists made huge strides toward understanding the mechanisms by which living creatures evolve.
2006 – Australian scientist Ian Frazer developed a vaccine for the Human Papillomavirus, a common cause of cervical cancer.
2007 – RNA, long upstaged by its more glamorous sibling, DNA, is turning out to have star qualities of its own. Science hails these electrifying discoveries, which are prompting biologists to overhaul their vision of the cell and its evolution.
2008 – By inserting genes that turn back a cell's developmental clock, researchers are gaining insights into disease and the biology of how a cell decides its fate.
2008 – Launch of the 1000 Genomes Project, an international research effort to establish by far the most detailed catalogue of human genetic variation.
2009 – Launch of the Human Connectome Project to build a network map that will shed light on the anatomical and functional connectivity within the healthy human brain, as well as to produce a body of data that will facilitate research into brain disorders.
2009 – A new strain of the H1N1 virus, commonly known as swine flu, emerged in Mexico and the United States and spread worldwide, causing the 2009 swine flu pandemic.
Mathematics
2006 – Grigori Perelman is a Russian mathematician who has made landmark contributions to Riemannian geometry and geometric topology. In 2003, he proved Thurston's geometrization conjecture. This consequently solved in the affirmative the Poincaré conjecture, posed in 1904, which before its solution was viewed as one of the most important and difficult open problems in topology. In August 2006, Perelman was awarded the Fields Medal for "his contributions to geometry and his revolutionary insights into the analytical and geometric structure of the Ricci flow." Perelman declined to accept the award or to appear at the congress, stating: "I'm not interested in money or fame, I don't want to be on display like an animal in a zoo." On December 22, 2006, the journal Science recognized Perelman's proof of the Poincaré conjecture as the scientific "Breakthrough of the Year", the first such recognition in the area of mathematics. The Poincaré conjecture is one of the seven Millennium Problems and the first to be solved.
Physics
2001 – Scientists assembled molecules into basic circuits, raising hopes for a new world of nanoelectronics. If researchers can wire these circuits into intricate computer chip architectures, this new generation of molecular electronics will undoubtedly provide computing power to launch scientific breakthroughs for decades.
2008 – CERN's Large Hadron Collider, the largest and highest-energy particle accelerator ever built, was completed.
Space
2000 – Beginning on November 2, 2000, the International Space Station has remained continuously inhabited. The Space Shuttles helped make it the largest space station in history, despite one of the Shuttles disintegrating upon re-entry in 2003. By the end of 2009 the station was supporting 5 long-duration crew members.
2001 – Space tourism and private spaceflight began with American Dennis Tito paying Russia US$20 million for a week-long stay aboard the International Space Station.
2004 – The Mars Exploration Rover (MER) mission successfully reached the surface of Mars in 2004 and sent detailed data and images of the landscape there back to Earth. Opportunity discovered evidence that an area of Mars was once covered in water. Both rovers were expected to last only 90 days, but both far exceeded expectations and continued to explore through the end of the decade and beyond.
2004 – Scaled Composites' SpaceShipOne becomes the first privately built and operated manned spacecraft to achieve spaceflight.
2006 – As a result of the discovery of Eris, a Kuiper Belt object larger than Pluto, Pluto is demoted to a "dwarf planet" after being considered a planet for 76 years, redefining the solar system to have eight planets and three dwarf planets.
2009 – After analyzing data from the LCROSS lunar impact, NASA announced the discovery of a "significant" quantity of water in the Moon's Cabeus crater.
2009 – Astrophysicists studying the universe confirm its age at 13.7 billion years, discover that it will most likely expand forever without limit, and conclude that only 4% of the universe's contents are ordinary matter (the other 96% being still-mysterious dark matter, dark energy, and dark flow).
Technology
Automobiles
Automotive navigation systems became widely popular, making it possible to direct vehicles to any destination in real time, as well as to detect traffic and suggest alternate routes, with the use of GPS navigation devices.
Greater interest in future energy development due to global warming and the potential exhaustion of crude oil. Photovoltaics increase in popularity as a result.
The hybrid vehicle market, which became somewhat popular towards the middle of the decade, underwent major advances, typified by cars such as the Toyota Prius, Ford Escape and Honda Insight, though by December 2010 hybrids accounted for less than 0.5% of the world's cars.
Many more computers and other technologies were implemented in vehicles throughout the decade, such as xenon HID headlights, GPS, DVD players, self-diagnosing systems, memory systems for car settings, back-up sensors and cameras, in-car media systems, MP3 player compatibility, USB drive compatibility, keyless start and entry, satellite radio, voice activation, cellphone connectivity, head-up displays (HUD) and infrared cameras. In addition, more safety features were implemented in vehicles throughout the decade, such as advanced pre-collision safety systems, backup cameras, blind spot monitors, adaptive cruise control, adaptive headlamps, automatic parking, lane departure warning systems and the Advanced Automatic Collision Notification system OnStar (on all GM models).
The sale of Crossovers (CUVs), a type of car-based unibody sports utility vehicle, increased in the 2000s. By 2006, the segment came into strong visibility in the U.S., when crossover sales "made up more than 50% of the overall SUV market".
Communications
The popularity of mobile phones and text messaging surged in the 2000s in the Western world. The advent of text messaging made possible new forms of interaction that were not possible before, leading to positive implications such as having the ability to receive information on the move. Nevertheless, it also led to negative social implications such as "cyberbullying" and the rise of traffic collisions caused by drivers who were distracted as they were texting while driving.
Mobile internet, first launched in Japan with the i-mode in 1999, became increasingly popular with people in developed countries throughout the decade, thanks to improving cell phone capabilities and advances in mobile telecommunications technology, such as 3G.
E-mail continued to be popular throughout the decade. It began to replace "snail mail" (also known, more neutrally, as paper mail, postal mail, land mail, or simply mail or post) as the primary way of sending letters and other messages to people in faraway locations, though it has been available since 1971.
Social networking sites arose as a new way for people to stay in touch no matter where they are, as long as they have an internet connection. Prominent early social networking sites included Friendster, Myspace, Facebook, and Twitter, launched in 2002, 2003, 2004, and 2006, respectively. Myspace was the most popular social networking website until June 2009, when Facebook overtook it in the number of American users.
Smartphones, which combine mobile phones with the features of personal digital assistants and portable media players, first emerged in the 1990s but did not become very popular until the late 2000s. Smartphones are rich in features and often have high-resolution touchscreens and web browsers. The iPhone, widely regarded as the first modern smartphone, was released on June 29, 2007, in the United States, and in the United Kingdom, France, Germany, Portugal, the Republic of Ireland and Austria in November 2007. It dispensed with a physical keyboard, relying solely on a touch screen and a home button.
Due to the major success of broadband Internet connections, Voice over IP began to gain popularity as a replacement for traditional telephone lines.
Computing and Internet
In the 2000s, the Internet became a mainstay, strengthening its grip on Western society while becoming increasingly available in the developing world.
A huge jump in broadband internet usage globally – for example, from 6% of U.S. internet users in June 2000 to what one mid-decade study predicted would be 62% by 2010. By February 2007, over 80% of U.S. Internet users were connected via broadband, and broadband had become almost a required standard for quality internet browsing.
Wireless internet became prominent by the end of the decade, as well as internet access in devices besides computers, such as mobile phones and gaming consoles.
Email became a standard form of interpersonal written communication, with popular addresses available to the public on Hotmail (now Outlook.com), Gmail and Yahoo! Mail.
Normalisation became increasingly important as massive standardized corpora and lexicons of spoken and written language became widely available to laypeople, just as documents from the paperless office were archived and retrieved with increasing efficiency using XML-based markup.
Peer-to-peer technology gained massive popularity with file sharing systems enabling users to share any audio, video and data files or anything in digital format, as well as with applications which share real-time data, such as telephony traffic.
VPNs (virtual private networks) became likewise accessible to the general public, and data encryption remained a major issue for the stability of web commerce.
Boom in music downloading and the use of data compression to quickly transfer music over the Internet, with a corresponding rise of portable digital audio players. As a result, the entertainment industry struggled through the decade to find digital delivery systems for music, movies, and other media that reduce copyright infringement and preserve profit.
The USB flash drive replaced the floppy disk as the preferred form of low-capacity portable data storage.
In February 2003, Dell announced floppy drives would no longer be pre-installed on Dell Dimension home computers, although they were still available as a selectable option and purchasable as an aftermarket OEM add-on. On January 29, 2007, PC World stated that only 2% of the computers they sold contained built-in floppy disk drives; once present stocks were exhausted, no more standard floppies would be sold.
During the decade, Windows 2000, XP, Vista (and later Windows 7), together with Microsoft Office 2003 and Office 2007, became the ubiquitous industry standards in personal computer software, until Apple began to slowly gain market share toward the end of the decade. Windows ME and Microsoft Office XP were also released during the decade.
With the advent of Web 2.0, dynamic web technology became widely accessible, and by the mid-2000s PHP and MySQL had become (with Apache and nginx) the backbone of many sites, making programming knowledge unnecessary for publishing to the web. Blogs, portals, and wikis became common electronic dissemination methods for professionals, amateurs, and businesses to conduct knowledge management, typified by the success of the online encyclopedia Wikipedia, which launched on January 15, 2001, grew rapidly, and became the largest and most popular general reference work on the Internet, the best known wiki in the world and the largest encyclopedia in the world.
Open-source software, such as the Linux operating system, the Mozilla Firefox web browser and the VLC media player, gained ground.
Internet commerce became standard for reservations; stock trading; promotion of music, arts, literature, and film; shopping; and other activities.
During this decade certain websites and search engines became prominent worldwide as transmitters of goods, services and information. Some of the most popular and successful online sites or search engines of the 2000s included Google, Yahoo!, Wikipedia, Amazon, eBay, MySpace, Facebook, Twitter, and YouTube.
More and more businesses began providing paperless services, with clients accessing bills and bank statements directly through a web interface.
In 2007, the fast food chain McDonald's announced the introduction of free high-speed wireless internet access at most of its 1,200 UK restaurants by the end of the year, a move that would make it the UK's biggest provider of such a service.
Electronics
GPS (Global Positioning System) became very popular especially in the tracking of items or people, and the use in cars (see Automotive navigation systems). Games that utilized the system, such as geocaching, emerged and became popular.
Green laser pointers appeared on the market circa 2000, and are the most common type of DPSS lasers (also called DPSSFD for "diode pumped solid state frequency-doubled").
Late 2004 and early 2005 saw a significant increase in reported incidents linked to laser pointers (see Lasers and aviation safety). The wave of incidents may have been triggered in part by "copycats" who read press accounts of laser pointer incidents. In one case, David Banach of New Jersey was charged under federal Patriot Act anti-terrorism laws after he allegedly shone a laser pointer at aircraft.
Chip and PIN is the brand name adopted by the banking industries in the United Kingdom and Ireland for the rollout of the EMV smart card payment system for credit, debit and ATM cards.
Chip and PIN was trialled in Northampton, England from May 2003, and as a result was rolled out nationwide in the United Kingdom in 2004 with advertisements in the press and national television touting the "Safety in Numbers" slogan.
In 2009, Tesco (a British multinational grocery and general merchandise retailer) opened its first UK branch at which service robots were the only option at the checkout, in Kingsley, Northampton; its US chain, Fresh & Easy, already operated several branches like this.
On September 7, 2009, an EU watchdog warned of an "alarming increase" in cash machine fraud by organised criminal gangs across Europe using sophisticated skimming technology, together with an explosion in ram-raiding attacks on ATMs.
ATM crime in Europe jumped to €485m (£423m) in 2008 following a 149% rise in attacks on cash machines. Gangs turned to Bluetooth wireless technology to transmit card and personal identification number (PIN) details to nearby laptops, using increasingly sophisticated techniques to skim cards.
Laptops became increasingly popular during the late 2000s.
More conventional smash-and-grab attacks were also on the rise, according to ENISA, the European Network and Information Security Agency, which reported a 32% rise in physical attacks on ATMs, ranging from ram raids to the use of rotary saws, blowtorches and diamond drills, and blamed the increase on gangs from eastern Europe.
Robotics
The U.S. Army used increasingly effective unmanned aerial vehicles in war zones, such as Afghanistan.
Emerging use of robotics, especially telerobotics in medicine, particularly for surgery.
Home automation and home robotics advanced in North America; iRobot's Roomba was the most successful domestic robot, selling 1.5 million units.
Transportation
Competition between Airbus and Boeing, the two largest remaining airliner manufacturers, intensified, with pan-European Airbus outselling American Boeing for the first time during this decade.
Airbus launched the double-decker Airbus A380, the largest passenger aircraft ever to enter production.
The Boeing 787 Dreamliner, the first mass-production aircraft manufactured primarily with composite materials, had its maiden flight.
Production of the Boeing 757, Boeing's largest single-aisle airliner, ended with no replacement.
Concorde, a turbojet-powered supersonic passenger airliner or supersonic transport (SST), was retired in 2003 due to a general downturn in the aviation industry after the type's only crash in 2000, the 9/11 terrorist attacks in 2001 and a decision by Airbus, the successor firm of Aérospatiale and BAC, to discontinue maintenance support.
December 9, 2005 – The London Transport Executive's AEC Routemaster double-decker bus was officially withdrawn after 51 years of general service in the UK. In the 2008 London mayoral election campaign, candidate Boris Johnson made several commitments to change London Buses vehicle policy, namely to introduce a new Routemaster and remove the bendy buses.
High-speed rail projects opened across Asia and Europe, and rail services saw record passenger numbers.
The Acela Express, the first full high-speed service in North America, started on the Northeast Corridor in 2000.
The Qinhuangdao–Shenyang High-Speed Railway opened, becoming the first high-speed railway in China.
High Speed 1, the first true high-speed line in the United Kingdom, opened in stages between 2003 and 2007, cutting travel times between Paris, Brussels and London considerably.
Taiwan High Speed Rail opened in 2007, connecting cities down the island's west coast.
HSL-Zuid opened in 2009, linking Amsterdam to the European high-speed network for the first time.
Video
Digital cameras became widely popular due to rapid decreases in size and cost while photo resolution steadily increased. As a result, digital cameras largely supplanted analog film cameras, and their integration into mobile phones increased greatly. From 2007, digital cameras were increasingly manufactured with built-in face detection.
Flat panel displays started becoming widely popular in the second half of the decade displacing cathode ray tubes.
Handheld projectors enter the market and are then integrated into cellphones.
DVR devices such as TiVo became popular, making it possible to record television broadcasts to a hard drive-based digital storage medium and offering many additional features, including the option to fast-forward through commercials or to use an automatic commercial-skipping feature. This created controversy, with major television networks and movie studios claiming it violated copyright and should be banned. In response to commercial skipping, many television channels began placing advertisements at the bottom of the TV screen.
VOD technology became widely available among cable users worldwide, enabling users to select and watch video content from a large library stored on a central server, with the ability to pause, fast-forward and rewind the content.
DVDs, and subsequently Blu-ray Discs, replace VCR technology as the common standard in homes and at video stores.
Free Internet video portals like YouTube, Hulu, and Internet TV software solutions like Joost became new popular alternatives to TV broadcasts.
TV becomes available on the networks run by some mobile phone providers, such as Verizon Wireless's Vcast.
"High-definition television" becomes very popular towards the second half of the decade, with the increase of HD television channels and the conversion from analog to digital signals.
Miscellaneous
The e-cigarette was invented at the beginning of the decade.
Religion and irreligion
New Atheism is the name given to the ideas promoted by a collection of modern atheist writers who have advocated the view that "religion should not simply be tolerated but should be countered, criticized, and exposed by rational argument wherever its influence arises."
The term is commonly associated with individuals such as Richard Dawkins, Daniel Dennett, Sam Harris, and Christopher Hitchens (together called "the Four Horsemen of New Atheism" in a taped 2007 discussion they held on their criticisms of religion, a name that has stuck), along with Victor J. Stenger, Lawrence M. Krauss and A.C. Grayling. Several best-selling books by these authors, published between 2004 and 2007, form the basis for much of the discussion of New Atheism.
Several groups promoting no religious faith or opposing religious faith altogether – including the Freedom From Religion Foundation, American Atheists, Camp Quest, and the Rational Response Squad – have witnessed large increases in membership numbers in recent years, and the number of secularist student organizations at American colleges and universities increased during the 2000s.
David Bario of the Columbia News Service wrote:
Under the Bush administration, organizations that promote abstinence and encourage teens to sign virginity pledges or wear purity rings have received federal grants. The Silver Ring Thing, a subsidiary of a Pennsylvania evangelical church, has received more than $1 million from the government to promote abstinence and to sell its rings in the United States and abroad.
Prominent events and trends during the 2000s:
Increasing Islamophobia and Islamophobic incidents during the 2000s associated with the September 11 attacks or with the increased presence of Muslims in the Western world. (Achcar, Gilbert. The Arabs and the Holocaust: The Arab-Israeli War of Narratives, p. 283.)
In 2000, the Italian Supreme Court ruled that Scientology is a religion for legal purposes.
In 2001, lawsuits were filed in the United States and Ireland, alleging that some priests had sexually abused minors and that their superiors had conspired to conceal and otherwise abet their criminal misconduct. In 2004, the John Jay report tabulated a total of 4,392 priests and deacons in the U.S. against whom allegations of sexual abuse had been made.
The French law on secularity and conspicuous religious symbols in schools bans the wearing of conspicuous religious symbols in French public (i.e. government-operated) primary and secondary schools; it came into effect on September 2, 2004.
June 27, 2005 – In McCreary County v. American Civil Liberties Union, the Supreme Court of the United States ruled in a 5–4 decision that Ten Commandments displays at the McCreary County courthouse in Whitley City, Kentucky, and at the Pulaski County courthouse were unconstitutional.
In 2006, France created a parliamentary commission on cult activities, which produced a report listing a number of cults considered dangerous. Supporters of such movements criticized the report on the grounds of respect for religious freedom, while proponents of the measure contend that only dangerous cults were listed and that state secularism ensures religious freedom in France.
November 2009 – Minaret controversy in Switzerland: in a referendum, a constitutional amendment banning the construction of new mosque minarets was approved, sparking reactions from governments and political parties throughout the world.
2009 – In Pope Benedict XVI's third encyclical, Caritas in Veritate, he warns that a purely technocratic mindset, in which decisions are made only on grounds of efficiency, will not deliver true development; technical decisions must not be divorced from ethics. Benedict discusses bioethics and states that practices such as abortion, eugenics and euthanasia are morally hazardous and that accepting them can lead to greater tolerance for various forms of moral degradation. He turns to another consequence of the technocratic mindset, the viewing of people's personalities in purely psychological terms to the exclusion of the spiritual, which he says can lead to people feeling empty and abandoned even in prosperous societies.
Population and social issues
The decade saw further expansion of LGBT rights, with many European, Oceanic, and American countries recognizing civil unions and partnerships and a number of countries extending civil marriage to same-sex couples. The Netherlands was the first country in the world to legalize same-sex marriage in 2001. By the end of 2009, same-sex marriage was legal and performed in 10 countries worldwide, although only in some jurisdictions in Mexico and the United States.
Population continued to grow in most countries, in particular in developing countries, though the overall rate slowed. According to United Nations estimates, world population reached six billion in late 1999 and climbed to 6.8 billion by late 2009. In 2006 the population of the United States reached 300 million inhabitants, and Japan's population peaked at 127 million before going into decline.
In a 2003 memo to a staff member, Britain's Charles, Prince of Wales wrote:
Obesity is a leading preventable cause of death worldwide, with increasing prevalence in adults and children, and authorities view it as one of the most serious public health problems of the 21st century.
In 2001, 46.4% of people in sub-Saharan Africa were living in extreme poverty. Nearly half of all Indian children were undernourished; even among the wealthiest fifth, one third of children were malnourished.
5 A Day is the name of a number of programs in countries such as the United States, the United Kingdom and Germany that encourage the consumption of at least five portions of fruit and vegetables each day, following a recommendation by the World Health Organization that individuals consume at least 400 g of fruit and vegetables daily.
The programme was introduced by the UK Department of Health in the winter of 2002–2003, and received some adverse media attention because of the high and rising costs of fresh fruit and vegetables. After ten years, research suggested that few people were meeting the target.
The London congestion charge is a fee charged on most motor vehicles operating within the Congestion Charge Zone (CCZ) in central London between 07:00 and 18:00, Monday to Friday. It is not charged at weekends, on public holidays or between Christmas Day and New Year's Day (inclusive). The scheme, which was introduced on February 17, 2003, remains one of the largest congestion charge zones in the world.
On December 3, 2003, New Zealand passed legislation to progressively implement a smoking ban in schools, school grounds, and workplaces by December 2004. On March 29, 2004, Ireland implemented a nationwide ban on smoking in all workplaces. In Norway, similar legislation was put into force on June 1 the same year. Smoking was banned in all public places in the whole of the United Kingdom in 2007, when England became the final region to have the legislation come into effect (the age limit for buying tobacco was also raised from 16 to 18 on October 1, 2007). In 2004, the UK's Merseyside police conducted 1,389 section 60 stop and searches (without reasonable suspicion), a figure that rose to 23,138 within five years.
In 2005, alcohol dependence and abuse was estimated to cost the US economy approximately 220 billion dollars per year, more than cancer or obesity.
The number of antidepressants prescribed by the NHS in the United Kingdom almost doubled during one decade, authorities reported in 2010. In 2009, 39.1 million prescriptions for drugs to tackle depression were issued in England, compared with 20.1 million issued in 1999.
In the United States a 2005 independent report stated that 11% of women and 5% of men in the non-institutionalized population (2002) take antidepressants. The use of antidepressants in the United States doubled over one decade, from 1996 to 2005.
Antidepressant drugs were prescribed to 13 million in 1996 and to 27 million people by 2005. In 2008, more than 164 million prescriptions were written.
In the UK, the number of weddings in 2006 was the lowest for 110 years.
Jamie Oliver is a British chef, restaurateur and media personality known for his food-focused television shows and cookbooks. In 2006, Oliver began a formal campaign to ban unhealthy food in British schools and to get children eating nutritious food instead. His efforts to bring radical change to the school meals system, chronicled in the series Jamie's School Dinners, challenged the junk-food culture by showing schools they could serve healthy, cost-efficient meals that kids enjoyed eating. The campaign brought the subject of school dinners to the political forefront and changed the types of food served in schools.
In 2006, nearly 11 million plastic surgery procedures were performed in the United States alone. The number of cosmetic procedures performed in the United States had increased by over 50 percent since the start of the century.
In November 2006, the Office of Communications (Ofcom) announced that it would ban television advertisements for junk food before, during and after television programming aimed at under-16s in the United Kingdom. These regulations were originally outlined in a proposal earlier in the year. The move was criticized at both ends of the scale: while the Food and Drink Federation labelled the ban "over the top", others said the restrictions did not go far enough (particularly because soap operas would be exempt from the ban). On April 1, 2007, junk food advertisements were banned from programmes aimed at four to nine-year-olds. Such advertisements broadcast during programmes "aimed at, or which would appeal to" ten to fifteen-year-olds were phased out over the following months, with a full ban coming into effect on January 1, 2009.
November 10, 2006 – Referring to the UK's annual poppy appeal, British journalist and presenter Jon Snow condemned the attitude of those who insist that remembrance poppies be worn, claiming that "there is a rather unpleasant breed of poppy fascism out there".
In January 2007, the British Retail Consortium announced that major UK retailers, including Asda, Boots, Co-op, Iceland, Marks and Spencer, Sainsbury's, Tesco and Waitrose intended to cease adding trans fatty acids to their own products by the end of 2007.
In October 2008 AFP reported on the further expansion of killings of albinos to the Ruyigi region of Burundi. Body parts of the victims are then smuggled to Tanzania, where they are used for witch doctor rituals and potions. Albinos have become "a commercial good", commented Nicodeme Gahimbare in Ruyigi, who established a local safe haven in his fortified house.
A 2009 study found a 30% increase in diabetes prevalence in China over seven years.
AIDS continued to spread during the decade, mainly in sub-Saharan Africa. New diseases of animal origin appeared for a short time, such as bird flu in 2007. Swine flu was declared a pandemic by the World Health Organization in 2009.
Environment and climate change
Climate change and global warming became household words in the 2000s. Prediction tools made significant progress during the decade, UN-sponsored organisations such as the IPCC gained influence, and studies such as the Stern Review built public support for paying the political and economic costs of countering climate change.
The global temperature kept climbing during the decade. In December 2009, the World Meteorological Organization (WMO) announced that the 2000s might have been the warmest decade since records began in 1850, with four of the five warmest years since 1850 having occurred in it. NASA and NOAA later echoed the WMO's findings.
Major natural disasters became more frequent and helped change public opinion. One of the deadliest heat waves in human history happened during the 2000s, mostly in Europe, with the 2003 European heat wave killing 37,451 people over the summer months. In February 2009, a series of highly destructive bushfires started in Victoria, Australia, lasting into the next month. While the fires are believed to have been caused by arson, they were widely reported as having been fueled by an excessive heatwave due in part to climate change. It has also been alleged that climate change was a cause of increased storm intensity, notably in the case of Hurricane Katrina.
International actions
Climate change became a major issue for governments, populations and scientists. Debates on global warming and its causes made significant progress, as climate change denial was refuted by most scientific studies. Decisive reports such as the Stern Review and the 2007 IPCC Report came close to establishing a climate change consensus. The actions of NGOs and the commitment of political figures (such as former U.S. Vice President Al Gore) also pushed for international action against climate change, and the documentary films An Inconvenient Truth and Home may have had a decisive impact.
Under the auspices of the UN Framework Convention on Climate Change, the Kyoto Protocol (aimed at combating global warming) entered into force on February 16, 2005. As of November 2009, 187 states had signed and ratified the protocol. The Convention also helped coordinate the efforts of the international community to fight the potentially disastrous effects of human activity on the planet and launched negotiations to set an ambitious program of carbon emission reduction, beginning in 2007 with the Bali Road Map. However, the representatives of the then 192 member countries of the United Nations who gathered in December 2009 for the Copenhagen Conference failed to reach a binding agreement to reduce carbon emissions because of divisions between regional areas.
However, as environmental technologies represented a potential market, some countries made large investments in renewable energy, energy conservation and sustainable transport. Many governments launched national plans to promote sustainable energy. In 2003, the European Union members created an emissions trading scheme, and in 2007 they assembled a climate and energy package to further reduce their carbon emissions and improve their energy efficiency. In 2009, the Obama administration in the United States set up a green stimulus program, described as a "Green New Deal", intended to create millions of jobs in sectors related to environmentalism.
The Household Waste Recycling Act 2003 requires local authorities in England to provide every household with a separate collection of at least two types of recyclable materials by 2010.
Culture
Architecture
Commercialization and globalization resulted in mass migration of people from rural to urban areas, leading to high-profile skyscrapers in Asia and Europe. In Asia, skyscrapers were constructed in India, China, Thailand, South Korea, and Japan.
The Millennium Bridge, London, officially known as the London Millennium Footbridge, is a steel suspension bridge for pedestrians crossing the River Thames in London, England, linking Bankside with the City of London. Londoners nicknamed it the "Wobbly Bridge" after participants in a charity walk on behalf of Save the Children, held to open the bridge, felt an unexpected and, for some, uncomfortable swaying motion during the first two days after the bridge opened. The bridge was then closed, and after two days of limited access it remained shut for almost two years while modifications were made to eliminate the wobble entirely. It was reopened in 2002.
30 St Mary Axe (informally also known as "the Gherkin" and previously the Swiss Re Building) is a skyscraper in London's financial district, the City of London, completed in December 2003 and opened at the end of May 2004. The building has become an iconic symbol of London and is one of the city's most widely recognised examples of modern architecture.
Wembley Stadium is a football stadium located in Wembley Park, in the Borough of Brent, London, England. It opened in 2007 and was built on the site of the previous 1923 Wembley Stadium. The earlier Wembley stadium, originally called the Empire Stadium, was often referred to as "The Twin Towers" and was one of the world's most famous football stadia until its demolition in 2003.
A major redevelopment of London's Trafalgar Square led by WS Atkins with Foster and Partners as sub-consultants was completed in 2003. The work involved closing the main eastbound road along the north side, diverting the traffic around the other three sides of the square, demolishing the central section of the northern retaining wall and inserting a wide set of steps leading up to a pedestrianised terrace in front of the National Gallery. The construction includes two lifts for disabled access, public toilets, and a small café. Previously, access between the square and the Gallery was by two crossings at the northeast and northwest corners of the square.
Taipei 101 became the tallest building in the world after it officially opened on December 31, 2004, a record it held until the opening of the Burj Khalifa (formerly known as Burj Dubai) in January 2010.
Fine arts
Lucian Freud was a German-born British painter. Known chiefly for his thickly impastoed portrait and figure paintings, he was widely considered the pre-eminent British artist of his time.
During a period from May 2000 to December 2001, Freud painted Queen Elizabeth II. There was criticism of this portrayal of the Queen in some sections of the British media. The highest selling tabloid newspaper, The Sun, was particularly condemnatory, describing the portrait as "a travesty".
The Hockney–Falco thesis is a controversial theory of art history, advanced by artist David Hockney and physicist Charles M. Falco, suggesting that advances in realism and accuracy in the history of Western art since the Renaissance were primarily the result of optical aids such as the camera obscura, camera lucida, and curved mirrors, rather than solely due to the development of artistic technique and skill. In a 2001 book, Secret Knowledge: Rediscovering the Lost Techniques of the Old Masters, Hockney analyzed the work of the Old Masters and argued that the level of accuracy represented in their work is impossible to create by "eyeballing it". Since then, Hockney and Falco have produced a number of publications on positive evidence of the use of optical aids, and the historical plausibility of such methods.
Rolf Harris is an Australian entertainer. He is a musician, a singer-songwriter, a composer, a painter, and a television personality.
In 2005 he painted an official portrait of Queen Elizabeth II, which was the subject of a special episode of Rolf on Art.
Harris's portrait of The Queen was voted the third favourite portrait of her by readers of the Radio Times. The royal portrait was exhibited at Buckingham Palace and the Palace of Holyroodhouse in Edinburgh, and went on a tour of public galleries in the UK.
In April–June 2003, the English visual artists known as the Chapman Brothers held a solo show at Modern Art Oxford entitled The Rape of Creativity, in which "the enfants terribles of Britart bought a mint collection of Goya's most celebrated prints – and set about systematically defacing them". The Francisco Goya prints referred to were his Disasters of War set of 80 etchings. The duo named their newly defaced works Insult to Injury. The BBC described more of the exhibition's art: "Drawings of mutant Ronald McDonalds, a bronze sculpture of a painting showing a sad-faced Hitler in clown make-up and a major installation featuring a knackered old caravan and fake dog turds." The Daily Telegraph commented that the Chapman brothers had "managed to raise the hackles of art historians by violating something much more sacred to the art world than the human body – another work of art".
As a protest against this piece, Aaron Barschak (who later gate-crashed Prince William's 21st birthday party dressed as Osama bin Laden in a frock) threw a pot of red paint over Jake Chapman during a talk he was giving in May 2003.
On May 5, 2004, a 1905 painting titled Garçon à la Pipe (English: Boy with a Pipe) by Pablo Picasso was sold for US$104,168,000 at a Sotheby's auction in New York City. At the time, it broke the record for the amount paid for an auctioned painting (when inflation is ignored). The amount, US$104 million, includes the auction price of US$93 million plus the auction house's commission of about US$11 million. Many art critics have stated that the painting's high sale price had much more to do with the artist's name than with the merit or historical importance of the painting, a reaction also characterised in The Washington Post's article on the sale ("Boy with Pipe or Garcon a la Pipe, 1905", archived).
On May 24, 2004, more than 100 artworks from the famous collection of Charles Saatchi, art collector and sponsor of the Young British Artists (YBAs), were destroyed in a warehouse fire on an industrial estate in Leyton, east London. Modern art classics such as Tracey Emin's tent and works by Damien Hirst, Sarah Lucas and Gary Hume were lost.
Works by Patrick Caulfield, Craigie Horsfield and 20 pieces by Martin Maloney were also destroyed. They represented some of the cream of the so-called "Britart" movement of celebrated modern artists.
In 2004, during Channel 5 (UK)'s 'Big Art Challenge' television programme, the English art critic Brian Sewell, noted for his artistic conservatism and described as "Britain's most famous and controversial art critic", declared "I hold video and photography in profound contempt", yet went on at least three times to hail video artist (and ultimately the competition's winner) Chris Boyd, aged 21, as a "genius".
In June 2007, the English artist, entrepreneur and art collector Damien Hirst set the European record for the most expensive work of art by a living artist when his Lullaby Spring (a 3-metre-wide steel cabinet with 6,136 pills) sold for 19.2 million dollars.
In September 2008, Damien Hirst took an unprecedented move for a living artist by selling a complete show, Beautiful Inside My Head Forever, at Sotheby's by auction and by-passing his long-standing galleries. The auction exceeded all predictions, raising £111 million ($198 million), breaking the record for a one-artist auction.
December 9, 2009 – The most expensive Old Master drawing ever sold at auction, Raphael's 'Head of a Muse', fetched £29,200,000 ($47,788,400) at Christie's, London, UK.
Literature
Carol Ann Duffy, CBE, FRSL (born December 23, 1955) is a British poet and playwright. She is Professor of Contemporary Poetry at Manchester Metropolitan University, and was appointed Britain's poet laureate in May 2009. She is the first woman, the first Scot, and the first openly LGBT person to hold the position.
The phenomenally successful Harry Potter series by J. K. Rowling, first published in 1997, was concluded in July 2007, although the film franchise continued until 2011 and several spin-off productions were announced in the early 2010s. The Harry Potter series is to date the best-selling book series in history, with seven main volumes (and three supplemental works) published and four hundred and fifty million copies sold. The film franchise is also currently the third highest-grossing film franchise in history, with eight films (all but the final two of which were released in the 2000s) and $8,539,253,704 in sales.
Popular culture
Film
The use of computer-generated imagery became more widespread in films during the 2000s. Documentary and mockumentary films, such as March of the Penguins, Borat, and Super Size Me, were popular in the 2000s. 2004's Fahrenheit 9/11 by Michael Moore is the highest-grossing documentary of all time. Online films became popular, and the conversion to digital cinema started. Critically acclaimed films released in the decade included Eternal Sunshine of the Spotless Mind (2004) and Lost in Translation (2003).
December 2009's Avatar, an American science fiction film written and directed by James Cameron, made extensive use of cutting-edge motion capture filming techniques and was released for traditional viewing and for 3D viewing (using the RealD 3D, Dolby 3D, XpanD 3D, and IMAX 3D formats). It was also released in "4D" in select South Korean theaters.
3D films became more and more successful throughout the 2000s, culminating in the unprecedented success of 3D presentations of Avatar.
Roger Ebert, described by Forbes as "the most powerful pundit in America", was skeptical of the resurgence of 3D effects in film, which he found unrealistic and distracting.
In August 2004, American horror author Stephen King, in a column, criticized what he saw as a growing trend of leniency towards films from critics. His main criticism was that films, citing Spider-Man 2 as an example, were constantly given four star ratings that they did not deserve: "Formerly reliable critics who seem to have gone remarkably soft – not to say softhearted and sometimes softheaded – in their old age."
In July 2005, it was reported that the Scottish actor and producer Sir Sean Connery had decided to retire due to disillusionment with the "idiots now in Hollywood", telling The New Zealand Herald: "I'm fed up with the idiots... the ever-widening gap between people who know how to make movies and the people who greenlight the movies."
The Lord of the Rings: The Return of the King, a 2003 epic fantasy-drama film directed by Peter Jackson based on the second and third volumes of J. R. R. Tolkien's The Lord of the Rings, was nominated for eleven Academy Awards and won all the categories for which it was nominated. The film is tied for largest number of awards won with Ben-Hur (1959) and Titanic (1997).
The Passion of the Christ, a 2004 American film directed by Mel Gibson and starring Jim Caviezel as Jesus Christ, was highly controversial and received mixed reviews; however, it was a major commercial hit, grossing in excess of $600 million worldwide during its theatrical release.
The superhero film genre experienced renewed and intense interest throughout the 2000s. With high ticket and DVD sales, several new superhero films were released every year. The X-Men, Batman and Spider-Man series were particularly prominent, and other films in the genre included Daredevil (2003), The League of Extraordinary Gentlemen (2003), Hulk (2003), Hellboy (2004), The Incredibles (2004), Fantastic Four (2005), Iron Man (2008), The Incredible Hulk (2008), and Watchmen (2009). Some media commentators attributed the increased popularity of such franchises to the social and political climate in Western society since the September 11 terrorist attacks, although others argued advances in special effects technology played a more significant role.
The animated feature film market changed radically. Computer-animated films became hugely popular following the release of Shrek, while traditional animation rapidly faded. Following the failures of The Road to El Dorado, Rugrats Go Wild, Aloha, Scooby-Doo!, Eight Crazy Nights, The Wild Thornberrys Movie, Scooby-Doo! in Where's My Mummy? and Looney Tunes: Back in Action, studios stopped producing traditionally animated films and shifted their focus to CGI animation. The only three traditionally animated films that did well in the first half of the decade were Rugrats in Paris: The Movie, Spirit: Stallion of the Cimarron and The SpongeBob SquarePants Movie.
20th Century Fox Animation's works in that decade include the Ice Age series, Robots and Horton Hears a Who!, which were all made by its Blue Sky Studios subsidiary, as well as Titan A.E., Waking Life, The Simpsons Movie and Fantastic Mr. Fox.
Stop motion animated works in that decade, some of which also used live-action or computer animation methods, included Chicken Run, Team America: World Police, Wallace & Gromit: The Curse of the Were-Rabbit, Corpse Bride, Flushed Away, Coraline and Mary and Max. Independent animated works in that decade included The Triplets of Belleville, Terkel in Trouble, Laura's Star, A Scanner Darkly, Renaissance, Persepolis, Sita Sings the Blues, The Secret of Kells and A Town Called Panic.
The 20 highest-grossing films of the decade were (in order from highest to lowest grossing): Avatar, The Lord of the Rings: The Return of the King, Pirates of the Caribbean: Dead Man's Chest, The Dark Knight, Harry Potter and the Sorcerer's Stone, Pirates of the Caribbean: At World's End, Harry Potter and the Order of the Phoenix, Harry Potter and the Half-Blood Prince, The Lord of the Rings: The Two Towers, Shrek 2, Harry Potter and the Goblet of Fire, Spider-Man 3, Ice Age: Dawn of the Dinosaurs, Harry Potter and the Chamber of Secrets, The Lord of the Rings: The Fellowship of the Ring, Finding Nemo, Star Wars: Episode III – Revenge of the Sith, Transformers: Revenge of the Fallen, Harry Potter and the Prisoner of Azkaban and Shrek the Third.
The top 15 highest-grossing film series of the decade were (in order from highest to lowest grossing): the Harry Potter film series, The Lord of the Rings film trilogy, the Pirates of the Caribbean film series, the Spider-Man film series, the Shrek film series, the Ice Age film series, the Transformers film series, the X-Men film series, the Batman film series (Batman Begins and The Dark Knight), Star Wars (Star Wars: Episode II – Attack of the Clones and Star Wars: Episode III – Revenge of the Sith), The Da Vinci Code and Angels & Demons, The Matrix film series (The Matrix Reloaded and The Matrix Revolutions), The Chronicles of Narnia film series, the Mission: Impossible film series and The Mummy film series.
Music
In the 2000s, the Internet allowed consumers unprecedented access to music. The Internet also allowed more artists to distribute music relatively inexpensively and independently without the previously necessary financial support of a record label. Music sales began to decline following the year 2000, a state of affairs generally attributed to unlicensed uploading and downloading of sound files to the Internet, a practice which became more widely prevalent during this time. Business relationships called 360 deals—an arrangement in which a company provides support for an artist, and, in exchange, the artist pays the company a percentage of revenue earned not only from sales of recorded music, but also live performances and publishing—became a popular response by record labels to the loss of music sales attributed to online copyright infringement.
Eminem was named the artist of the decade by Billboard.
In the 2000s, hip hop reached a commercial peak and heavily influenced various aspects of popular culture, dominating the musical landscape of the decade. The best-selling musical artist of the decade was American rapper Eminem, who sold 32 million albums. Other popular hip hop artists included Jay-Z, Nas, Busta Rhymes, Kanye West, Ludacris, Common, Ja Rule, Mos Def, DMX, Missy Elliott, OutKast, Lil Jon, Fat Joe, Cam'ron, Pharrell, Gorillaz, Snoop Dogg, Twista, 50 Cent, Nelly, Lil Wayne, T.I. and The Game. The genre was extremely diverse stylistically, including subgenres such as gangsta rap and crunk. Many hip hop albums were released to widespread critical acclaim.
R&B also gained prominence throughout the decade, and included artists such as D'Angelo, Aaliyah, Usher, Akon, Black Eyed Peas, R. Kelly, Amy Winehouse, Mary J. Blige, Jamie Foxx, John Legend and Alicia Keys.
In the early and mid-2000s, disco-inspired dance genres became popular, and many French house and funky house songs broke into the charts. Popular tracks included Daft Punk's "One More Time", Fonzerelli's "Moonlight Party", Kylie Minogue's "Spinning Around", Jamiroquai's "Little L", Michael Gray's "The Weekend" and Freemasons' "Love on My Mind".
In Latin music, Shakira dominated the charts with Fijación Oral, Vol. 1, the second best-selling Spanish-language album of all time and the best-selling Spanish-language album of the 2000s, certified 11× platinum to date.
Billboard magazine named Eminem as the artist with the best performance on the Billboard charts of the decade, Beyoncé as the "female artist of the decade", and Nickelback as the "band of the decade". In the UK, the biggest-selling artist of the decade was Robbie Williams and the biggest-selling band was Westlife.
The decade also saw the deaths of several major musicians. American recording artist Michael Jackson died on June 25, 2009, prompting the largest global public mourning since the death of Diana, Princess of Wales in 1997.Scott, Jeffry. "Jackson memorial second most-watched in TV history". The Atlanta Journal-Constitution, July 8, 2009. On August 25, 2001, Aaliyah Haughton – an American recording artist, dancer, actress and model – and eight others were killed in an airplane crash in the Bahamas after filming the music video for the single "Rock the Boat". On November 29, 2001, George Harrison – an English musician best known as the guitarist of the Beatles – died of lung cancer at the age of 58. On April 25, 2002, Lisa "Left Eye" Lopes – an American rapper, dancer and singer-songwriter, best known as a member of the R&B/hip hop girl group TLC – was killed in a car crash in La Ceiba, Honduras. On October 30, 2002, Jason William Mizell (Jam Master Jay) of the hip hop group Run-D.M.C. was shot and killed in a Merrick Boulevard recording studio in Jamaica, Queens. On September 12, 2003, Johnny Cash – an American musician known as the "Man in Black" – died of complications from diabetes at the age of 71. On June 10, 2004, Ray Charles – an American musician and one of the pioneers of soul music – died of liver failure at the age of 73. On December 25, 2006, James Brown – an American recording artist known as the "Godfather of Soul" – died of pneumonia at the age of 73. Innovator, inventor, performer and guitar virtuoso Les Paul died on August 12, 2009, at the age of 94.
In 2002, Robbie Williams signed a record-breaking £80 million contract with EMI. So far it is the biggest music deal in British history.
In alternative rock, the garage rock revival and post-punk revival entered the mainstream, with bands such as The Strokes, Interpol, The Killers, Arctic Monkeys, Bloc Party, Yeah Yeah Yeahs and The White Stripes seeing commercial success. Indie rock also saw a proliferation in the 2000s with numerous bands experiencing commercial success, including Modest Mouse, TV on the Radio, Franz Ferdinand, Death Cab for Cutie, Arcade Fire, Vampire Weekend, LCD Soundsystem, The Shins, Wilco, Bright Eyes, Spoon, The Decemberists, Broken Social Scene and many more. Other genres such as post-grunge, post-Britpop, nu metal and metalcore also achieved notability during the decade.
Popular metal and hard rock bands included Avenged Sevenfold, Bullet for My Valentine, Disturbed, Breaking Benjamin, Linkin Park, Slipknot, Mudvayne, Tenacious D, System of a Down, Mastodon, The Mars Volta, Foo Fighters, Queens of the Stone Age, Three Days Grace, Godsmack, Shinedown, 36 Crazyfists, Killswitch Engage, Evanescence, Tool, Deftones, Opeth, and Seether.
Pop-punk and emo-pop became popular in the decade, with bands like The Offspring, Green Day, Good Charlotte, Fall Out Boy and Panic! at the Disco.
The 2000s gave rise to a new trend in music production with the growing use of Auto-Tune. The effect was popularized by Eiffel 65's 1998 hit song "Blue (Da Ba Dee)", which came to global prominence in 2000, and it was also used on certain tracks from critically acclaimed 2001 albums by Daft Punk (Discovery) and Radiohead (Amnesiac). By 2008, Auto-Tune was part of the musical mainstream, with artists such as Lil Wayne, T-Pain and Kanye West utilizing it on their hit albums Tha Carter III, Three Ringz and 808s & Heartbreak respectively. Towards the end of the decade, electronic dance music began to dominate Western charts (as it would continue to do in the following decade), which in turn contributed to a diminishing presence of rock music in the mainstream. Hip hop also declined in mainstream prominence in the late 2000s because of electronic music's rising popularity.
According to The Guardian, music styles during the 2000s changed very little from those of the latter half of the 1990s. The decade nonetheless had a profound impact on how music was distributed: advances in digital technology rapidly and fundamentally altered industry and marketing practices as well as the industry's major players. According to Nielsen SoundScan, by 2009 CDs accounted for 79 percent of album sales and digital downloads for 20 percent, representing roughly a 10 percent drop for CDs and a 10 percent gain for digital over two years.
Grime is a style of music that emerged from Bow, East London, England in the early 2000s, primarily as a development of UK garage, drum & bass, hip hop and dancehall. Pioneers of the style include English acts Dizzee Rascal, Wiley, Roll Deep and Skepta.
Michael Jackson's final album, Invincible, released on October 30, 2001, and costing $30m to record, was the most expensive record ever made.
The general socio-political fallout of the Iraq War also extended to popular music. In July 2002, the release of English musician George Michael's song "Shoot the Dog" proved controversial. It was critical of George W. Bush and Tony Blair in the lead-up to the 2003 invasion of Iraq, and its video showed a cartoon version of Michael astride a nuclear missile in the Middle East, and Tony and Cherie Blair in bed with President Bush. During a London concert ten days before the 2003 invasion of Iraq, Natalie Maines, lead vocalist of the American country band the Dixie Chicks, said, "we don't want this war, this violence, and we're ashamed that the President of the United States [George W. Bush] is from Texas". The positive reaction to this statement from the British audience contrasted with the boycotts that ensued in the U.S., where "the band was assaulted by talk-show conservatives" and their albums were discarded in public protest. The original music video for the title song of American pop singer Madonna's American Life album was banned, as music television stations thought the video, featuring violence and war imagery, would be deemed unpatriotic while America was at war with Iraq. Madonna later made what was widely considered a comeback with her tenth studio album, Confessions on a Dance Floor, which topped the charts in a record 40 countries; as of 2016 the album had sold more than 11 million copies worldwide. Madonna also made history with her Sticky & Sweet Tour, which became the highest-grossing tour by a female artist and the tenth highest-grossing tour by any artist during 2008–2009.
Live 8 was a string of benefit concerts that took place on July 2, 2005, in the G8 states and in South Africa. They were timed to precede the G8 conference and summit held at the Gleneagles Hotel in Auchterarder, Scotland from July 6 to 8, 2005; they also coincided with the 20th anniversary of Live Aid. Run in support of the aims of the UK's Make Poverty History campaign and the Global Call for Action Against Poverty, ten simultaneous concerts were held on July 2 and one on July 6. On July 7, the G8 leaders pledged to double 2004 levels of aid to poor nations from US$25 billion to US$50 billion by the year 2010. Half of the money was to go to Africa. More than 1,000 musicians performed at the concerts, which were broadcast on 182 television networks and 2,000 radio networks.
In November 2006, the Rolling Stones' 'A Bigger Bang' tour was declared the highest-grossing tour of all time, earning $437 million.
In December 2009, a campaign was launched on Facebook by Jon and Tracy Morter, from South Woodham Ferrers, which generated publicity in the UK and took the 1992 Rage Against the Machine track "Killing in the Name" to the Christmas Number One slot in the UK Singles Chart, which had been occupied the four consecutive years from 2005 by winners from the TV show The X Factor. Rage's Zack de la Rocha spoke to BBC1 upon hearing the news, stating that:
"...We want to thank everyone that participated in this incredible, organic, grass-roots campaign. It says more about the spontaneous action taken by young people throughout the UK to topple this very sterile pop monopoly."
During the late 2000s, a new wave of chiptune culture took place. This new culture placed much more emphasis on live performances and record releases than the demoscene and tracker culture, of which the new artists were often only distantly aware. Country pop saw continued success from its 1990s revival, with new artists like Carrie Underwood and Taylor Swift bringing global appeal to the genre in the second half of the decade. Much of the 2000s in hip hop was characterized as the "bling era", referring to the material commodities that were popular from the early-to-mid part of the decade. By the end of the decade, however, a more emotional, introspective strain of hip hop gained prominence, with projects such as Kanye West's fourth studio album 808s & Heartbreak (2008), Kid Cudi's debut album Man on the Moon: The End of Day (2009), and Drake's career-catalyzing mixtape So Far Gone (2009) garnering significant popularity and ushering in a new era of hip hop.
Reunions
The original five members of the English new wave band Duran Duran reunited in the early 2000s.
On February 23, 2003, Simon and Garfunkel reunited to perform in public for the first time in a decade, singing "The Sound of Silence" as the opening act of the Grammy Awards.
On May 9, 2006, British vocal pop group Take That returned to the recorded music scene after more than ten years of absence, signing with Polydor Records. The band's comeback album, Beautiful World, entered the UK album chart at no. 1.
On December 10, 2007, English rock band Led Zeppelin reunited for the one-off Ahmet Ertegun Tribute Concert at The O2 Arena in London. According to Guinness World Records 2009, Led Zeppelin set the world record for the "Highest Demand for Tickets for One Music Concert" as 20 million requests for the reunion show were rendered online.
Internet
Prominent websites and services launched during the decade included Wikipedia (2001), iTunes (2001), MySpace (2003), 4chan (2003), Facebook (2004), Flickr (2004), Mozilla Firefox (2004), YouTube (2005), Google Maps (2005), Google Earth (2005), Reddit (2005), Twitter (2006), Google Chrome (2008), Spotify (2008) and Waze (2009), as well as the Internet Archive's Wayback Machine, which opened to the public in 2001.
Wisdom of the crowd – during the decade, the benefits of the "wisdom of the crowd" were pushed into the spotlight by social information sites such as Wikipedia, Yahoo! Answers, Reddit and other web resources that rely on human opinion.
Fashion
Fashion trends of the decade drew much inspiration from 1960s, 1970s and 1980s styles. Many hairstyles carried over from the mid-to-late 1990s: bleached and spiked hair remained common for boys and men, and long, straight hair for girls and women. Kelly Clarkson made chunky highlights fashionable in 2002 on American Idol, a trend that lasted until about 2007; both women and men highlighted their hair until the late 2000s.
The decade started with the futuristic Y2K fashion, which was built on hype surrounding the new millennium. This dark, slinky style remained popular until the September 11 attacks, after which casual fashions made a comeback. Baggy cargo pants were extremely popular among both sexes throughout the early and mid-2000s until about late 2007. Bell-bottoms were the dominant pant style for women until about 2006, when fitted pants began rising in popularity. Late 1990s-style baggy pants remained popular throughout the early 2000s, but by 2003 boot-cut pants and jeans had become the standard among men, remaining so until about 2008.
The 2000s saw a revival of 1980s fashion trends, such as velour tracksuits in the early 2000s (an early 1980s fashion) and tapered pants in the later years (a late 1980s fashion). Skinny jeans became a staple of young women's wardrobes and, by 2009, of young men's as well, with the Jerkin' movement playing a large part in their popularization. Mass-market brands such as Gap and Levi's launched their own lines of skinny jeans.
Throughout the early and mid 2000s, adults and children wore Skechers shoes. The company used many celebrities to their advantage, including Britney Spears, Christina Aguilera, Carrie Underwood, and Ashlee Simpson. By the late 2000s, flatter and more compact shoes came into style as chunky sneakers were no longer the mode.
"Geek chic" refers to a minor fashion trend that arose in the mid-2000s in which young individuals adopted stereotypically "geeky" fashions, such as oversized black Horn-rimmed glasses, suspenders/braces, and highwater trousers. The glasses—worn with non-prescription lenses or without lenses—quickly became the defining aspect of the trend, with the media identifying various celebrities as "trying geek" or "going geek" for their wearing such glasses, such as David Beckham, Justin Timberlake and Myleene Klass. Meanwhile, in the sports world, many NBA players wore "geek glasses" during post-game interviews, drawing comparisons to Steve Urkel.
Emo fashion became popular amongst teenagers for most of the 2000s, associated with the success of bands tied to the subculture (many of whom started at the beginning of the 2000s and rose to fame during the middle part of the decade, such as Brand New, The Used, Hawthorne Heights, My Chemical Romance, Fall Out Boy, Paramore, Panic! at the Disco and more). The style is commonly identified with wearing black/dark coloured skinny jeans, T-shirts bearing the name of emo music groups and long side-swept bangs, often covering one or both eyes. The Scene subculture that emerged in the mid-late 2000s drew much inspiration from Emo style.
Hip hop fashion was popular throughout the 2000s with clothing and shoe brands such as Rocawear, Phat Farm, G-Unit clothing, Billionaire Boys Club, Dipset clothing, Pelle Pelle, BAPE, Nike, Fubu, and Air Jordan. Followers of Hip Hop wore oversized shorts, jewelry, NFL and NBA jerseys, pants, and T-shirts. By the late 2000s this gave way more to fitted and vibrantly colored clothing, with men wearing skinny jeans as influenced by the Hyphy and Jerkin' movements.
In cosmetic applications, a Botox injection, consisting of a small dose of Botulinum toxin, can be used to prevent development of wrinkles by paralyzing facial muscles. As of 2007, it is the most common cosmetic operation, with 4.6 million procedures in the United States, according to the American Society of Plastic Surgeons.
Journalism
"It was, we were soon told, 'the day that changed everything', the 21st century's defining moment, the watershed by which we would forever divide world history: before, and after, 9/11." ~ The Guardian
The BBC's foreign correspondent John Simpson, speaking about Rupert Murdoch on March 15, 2010, said that the "Murdochisation" of national discourse, which was at its height in the UK with The Sun in the 1980s, had since migrated to the US: "Murdoch encouraged an ugly tone, which he has now imported into the US and which we see every day on Fox News, with all its concomitant effects on American public life – that fierce hostility between right and left that never used to be there, not to anything remotely like the same extent."
Writing in October 2001, Canadian author and social activist Naomi Klein, known for her political analyses, commented on the period in material later collected in her book Fences and Windows.
May 15, 2003 – Bill O'Reilly, political commentator for Fox News Channel (which grew during the late 1990s and 2000s to become the dominant cable news network in the United States), delivered a "Talking Points Memo" segment on his television talk show The O'Reilly Factor.
A poll released in 2004 by the Pew Research Center for the People and the Press found that 21 percent of people aged 18 to 29 cited The Daily Show (an American late-night satirical television program airing Monday through Thursday) and Saturday Night Live (an American late-night live sketch comedy and variety show) as places where they regularly learned presidential campaign news. By contrast, 23 percent of the young people mentioned ABC, CBS or NBC's nightly news broadcasts as a source. When the same question was asked in 2000, Pew found only 9 percent of young people pointing to the comedy shows, and 39 percent to the network news shows. One newspaper, Newsday, listed The Daily Show host Jon Stewart atop a list of the 20 media players who would most influence the upcoming presidential campaign. Random conversations with nine people aged 19 to 26 waiting to see a taping of The Daily Show revealed two who admitted they learned much about the news from the program; none said they regularly watched the network evening news shows.
The Guardian is a British national daily newspaper. In August 2004, ahead of the US presidential election, The Guardian's daily "G2" supplement launched an experimental letter-writing campaign in Clark County, Ohio, an average-sized county in a swing state. G2 editor Ian Katz bought a voter list from the county for $25 and asked readers to write to people listed as undecided in the election, giving them an impression of the international view and the importance of voting against US President George W. Bush. The paper scrapped "Operation Clark County" on October 21, 2004, after first publishing a column of complaints from Bush supporters about the campaign under the headline "Dear Limey assholes". The public backlash against the campaign likely contributed to Bush's victory in Clark County.
March 2005 – Twenty MPs signed a British House of Commons motion condemning the BBC Newsnight presenter Jeremy Paxman for saying that "a sort of Scottish Raj" was running the UK. Mr Paxman likened the dominance of Scots at Westminster to past British rule in India.
August 1, 2007 – News Corp. and Dow Jones entered into a definitive merger agreement. The US$5 billion sale added the largest newspaper in the United States by circulation, The Wall Street Journal, to Rupert Murdoch's news empire.
August 30, 2008 – three years before the 2011 England riots, The Socialist Worker wrote: "Those who have responded to the tragedy of knife crime by calling for police crackdowns ought to take note. The criminalisation of a generation of black youth will undoubtedly lead to explosions of anger in the future, just as it did a generation ago with the riots that swept Britain's inner cities."
Ann Coulter is an American conservative social and political commentator, eight-time best-selling author, syndicated columnist, and lawyer who frequently appears on television, radio, and as a speaker at public and private events. As the 2008 US presidential campaign was getting under way, Coulter was criticised for statements she made at the 2007 Conservative Political Action Conference about presidential candidate John Edwards.
In December 2008, Time magazine named Barack Obama as its Person of the Year for his historic candidacy and election, which it described as "the steady march of seemingly impossible accomplishments".
Print media
The decade saw the steady decline of sales of print media such as books, magazines, and newspapers, as the main conveyors of information and advertisements, in favor of the Internet and other digital forms of information.
News blogs grew in readership and popularity; cable news and other online media outlets became competitive in attracting advertising revenues, and capable journalists and writers joined online organizations. Books became available online, and electronic devices such as the Amazon Kindle threatened the popularity of printed books.Times Online, The decline and fall of books. Retrieved December 4, 2009.
According to the National Endowment for the Arts (NEA), the decade showed a continuous increase in reading, although newspaper circulation declined.
Radio
The 2000s saw a decrease in the popularity of radio as more listeners started using MP3 players in their cars to customize driving music. Satellite radio receivers started selling at a much higher rate, allowing listeners to pay a subscription fee for thousands of ad-free stations. Clear Channel Communications was the largest provider of radio entertainment in the United States, with over 900 stations nationwide. Many radio stations began streaming their content over the Internet, allowing a market expansion far beyond the reach of a radio transmitter.
During the 2000s, FM radio faced its toughest competition ever for in-car entertainment. iPod, satellite radio, and HD radio were all new options for commuters. CD players had a steady decline in popularity throughout the 2000s but stayed prevalent in most vehicles, while cassette tapes became virtually obsolete.
August 27, 2001 – Hot 97 shock jock Star (real name Troi Torain) was suspended indefinitely for mocking R&B singer Aaliyah's death on the air by playing a tape of a woman screaming while a crash is heard in the background. Close to 32,000 people signed a "No More Star" online petition.
In a January 2008 edition of his American radio show, John Gibson commented on Australian actor Heath Ledger's death the day before. He opened the segment with funeral music and played a clip of Jake Gyllenhaal's famous line "I wish I knew how to quit you" from Ledger's film Brokeback Mountain; he then said, "Well, I guess he found out how to quit you." Among other remarks, Gibson called Ledger a "weirdo" with "a serious drug problem". The next day, he addressed the outcry over his remarks by saying that they were in the context of jokes he had been making for months about Brokeback Mountain, and that "There's no point in passing up a good joke." Gibson later apologized on his television and radio shows.The John Gibson Show, Fox News Radio, January 24, 2008
Television
American television in the 2000s saw a sharp increase in the popularity of reality television, with numerous competition shows such as American Idol, Dancing with the Stars, Survivor and The Apprentice attracting large audiences, as well as documentary- or narrative-style shows such as Big Brother, The Hills, The Real Housewives and Cheaters, among many others. Australian television in the 2000s also saw a sharp increase in the popularity of reality television, with local versions of shows such as Big Brother and Dancing with the Stars; other Australian shows also grew in popularity, including the comedy Spicks and Specks and the game show Bert's Family Feud. The decade saw a steady decline in the number of sitcoms and an increase in reality shows, crime and medical dramas such as CSI: Crime Scene Investigation, House M.D. and Grey's Anatomy, paranormal/crime shows like Medium (2005–2011) and Ghost Whisperer (2005–2010), and action/drama shows, including 24 and Lost. Comedy-dramas became more serious, dealing with hot-button issues such as drugs, teenage pregnancy and gay rights; popular comedy-drama programs included Desperate Housewives, Ugly Betty and Glee. Adult-oriented animated programming also continued a sharp upturn in popularity with controversial cartoons like South Park (1997–present), Family Guy (1999–2002, 2005–present) and Futurama (1999–2003, 2008–2013, 2023–present), along with the long-running cartoon The Simpsons (1989–present), while new animated adult series were also produced in that decade, such as American Dad!, Aqua Teen Hunger Force, Robot Chicken, Archer, Drawn Together, The Cleveland Show, Sealab 2021 and Total Drama.
The decade also saw the return of prime time soap operas, a genre that had been popular in the 1980s and early 1990s, including Dawson's Creek (1998–2003), The O.C. (2003–2007) and One Tree Hill (2003–2012). Desperate Housewives (2004–2012) was perhaps the most popular television series of this genre since Dallas and Dynasty in the 1980s.
ER started in 1994 and ended its run in 2009, after 15 years.
South Park controversies: Action for Children's Television founder Peggy Charren, despite being an outspoken opponent of censorship, claimed that South Park's use of language and racial slurs represents the depravity of Western civilization and is "dangerous to the democracy".
The British satirical series Brass Eye was repeated in 2001 along with a new special, which tackled paedophilia and the moral panic in parts of the British media following the murder of Sarah Payne, focusing on the name-and-shame campaign conducted by the News of the World in its wake.
In 2002, following its purchase of its two biggest competitors, WCW and ECW, WWE split its roster into the Raw and SmackDown! brands, a move known as the WWE Brand Extension, which lasted until 2011. The decade also saw the rise of popular wrestlers such as John Cena, Randy Orton, Dave Bautista, Jeff Hardy, CM Punk, Chris Jericho, Edge and Brock Lesnar.
The 2001 World Series between the New York Yankees and Arizona Diamondbacks became the first World Series to be played in the wake of the September 11 attacks. Super Bowl XXXVI between the New England Patriots and the St. Louis Rams became the first Super Bowl to be played in the wake of the September 11 attacks.
The X Factor in the UK has been subject to much controversy and criticism since its launch in September 2004.
Super Bowl XXXVIII halftime show controversy:
Super Bowl XXXVIII, which was broadcast live on February 1, 2004, from Houston, Texas, on the CBS television network in the United States, was noted for a controversial halftime show in which singer Janet Jackson's breast, adorned with a nipple shield, was exposed by singer Justin Timberlake for about half a second, in what was later referred to as a "wardrobe malfunction". The incident, sometimes referred to as Nipplegate, was widely discussed. Along with the rest of the halftime show, it led to an immediate crackdown and widespread debate on perceived indecency in broadcasting.
Chappelle's Show was one of the most popular shows of the decade. Upon its release in 2004, the first-season DVD set became the best-selling TV series set of all time.
January 2005 – Jerry Springer: The Opera was the subject of controversy when its UK television broadcast on BBC Two elicited 55,000 complaints, making it the most complained-about television event ever.
In May 2005, UK viewers inundated the Advertising Standards Authority with complaints regarding the continuous airing of the latest Crazy Frog advertisements. The intensity of the advertising was unprecedented in British television history. According to The Guardian, Jamster bought 73,716 spots across all TV channels in May alone – an average of nearly 2,400 slots daily – at a cost of about £8 million, just under half of which was spent on ITV. 87% of the population saw the Crazy Frog adverts an average of 26 times, 15% of the adverts appeared twice during the same advertising break, and 66% were in consecutive ad breaks. An estimated 10% of the population saw the advert more than 60 times. This led many viewers to find the Crazy Frog, as its original name ("The Annoying Thing") suggests, immensely irritating.
Blue Peter (the world's longest-running children's television programme) rigged a phone-in competition supporting the UNICEF "Shoe Biz Appeal" on November 27, 2006. The person who appeared to be the winning caller was actually a visitor to the studio that day, who pretended to be a caller from an outside line who had won the phone-in and the chance to select a prize. The competition was rigged after a technical error prevented incoming calls from being received.
In July 2007, Blue Peter was given a £50,000 fine by the Office of Communications (Ofcom) as a result of rigging the competition.
I'm a Celebrity... Get Me Out of Here! is a reality television game show series, originally created in the United Kingdom, and licensed globally to other countries.
In its 2009 series, celebrity chef Gino D'Acampo killed, cooked and ate a rat. The Australian RSPCA investigated the incident and sought to prosecute D'Acampo and actor Stuart Manning for animal cruelty after this episode of the show was aired. ITV was fined £1,600 and the two celebrities involved were not prosecuted for animal cruelty despite being charged with the offense by the New South Wales Police.
Although there were fewer in this decade than in the 1990s, the 2000s still saw many popular and notable sitcoms, including 3rd Rock from the Sun, Two Guys and a Girl, Just Shoot Me!, The Drew Carey Show, Frasier, Friends, That '70s Show, Becker, Spin City, Dharma & Greg, Will & Grace, Yes, Dear, According to Jim, 8 Simple Rules, Less than Perfect, Still Standing, George Lopez, Grounded for Life, Hope & Faith, My Wife and Kids, Sex and the City, Everybody Loves Raymond, Malcolm in the Middle, Girlfriends, The King of Queens, Arrested Development, How I Met Your Mother, Scrubs, Curb Your Enthusiasm, What I Like About You, Reba, The Office, Entourage, My Name is Earl, Everybody Hates Chris, The New Adventures of Old Christine, Rules of Engagement, Two and a Half Men, 'Til Death, The Big Bang Theory, Samantha Who?, It's Always Sunny in Philadelphia, and 30 Rock, among many others. A trend seen in several sitcoms of the late 2000s was the absence of a laugh track.
The decade also saw the rise of premium cable dramas such as The Sopranos, The Wire, Battlestar Galactica, Deadwood, Mad Men, and Breaking Bad. The critic Daniel Mendelsohn wrote a critique of Mad Men in which he also claimed this last decade was a golden age for episodic television, citing Battlestar Galactica, The Wire, and the network series Friday Night Lights as especially deserving of critical and popular attention.
Ended series
The PBS series Mister Rogers' Neighborhood aired its final episode on August 31, 2001. Two years later, its host and creator, Fred Rogers, died from stomach cancer.
Tomorrow's World was a long-running BBC television series, showcasing new developments in the world of science and technology. First aired on July 7, 1965, on BBC1, it ran for 38 years until it was cancelled in early 2003.
That '70s Show was an American period sitcom set in the 1970s, a decade whose retro style permeated the 2000s. The show ended on May 18, 2006.
Brookside is a British soap opera set in Liverpool, England. The series began on the launch night of Channel 4 on November 2, 1982, and ran for 21 years until November 4, 2003.
In January 2004, the BBC cancelled the Kilroy show (which had run for 18 years), after an article entitled 'We owe Arabs nothing' written by its host Robert Kilroy-Silk was published in the Sunday Express tabloid newspaper.
Friends is an American sitcom which aired on NBC from September 22, 1994, to May 6, 2004. Friends received positive reviews throughout its run, and its series finale ("The Last One") ranked as the fifth most watched overall television series finale as well as the most watched single television episode of the 2000s on U.S. television.
Frasier, a spin-off of Cheers (which ended in 1993), is an American sitcom that was broadcast on NBC for eleven seasons from September 16, 1993, to May 13, 2004 (only a week after the broadcast of the final episode of Friends). It was one of the most successful and popular spin-off series in television history, as well as one of the most critically acclaimed comedy series.
On June 20, 2006, after 42 years, British music chart show Top of the Pops was formally cancelled and it was announced that the last edition would be broadcast on July 30, 2006.
Grandstand is a British television sport program. Broadcast between 1958 and 2007, it was one of the BBC's longest running sports shows.
After 30 years, British television drama series Grange Hill (originally made by the BBC) was cancelled and the last episode was shown on September 15, 2008.
Series returns
The Flower Pot Men is a British children's programme produced by BBC television, first transmitted in 1952 and repeated regularly for more than twenty years; a new version was produced in 2000.
Absolutely Fabulous, also known as Ab Fab, is a British sitcom.
The show has had an extended and sporadic run. The first three series were broadcast on the BBC from 1992 to 1995, followed by a series finale in the form of a two-part television film entitled The Last Shout in 1996. Its creator Jennifer Saunders revived the show for a fourth series in 2001.
Gadget and the Gadgetinis is a spinoff of the classic series Inspector Gadget (1983–1986), developed by DiC in cooperation with Haim Saban's SIP Animation and produced from 2001 to 2003. There are 52 episodes.
Basil Brush is a fictional anthropomorphic red fox, best known for his appearances on daytime British children's television and primarily portrayed by a glove puppet. He appeared on television from 1962 to 1984, and in The Basil Brush Show from 2002 to 2007.
Shooting Stars is a British television comedy panel game broadcast on BBC Two as a pilot in 1993, then as 3 full series from 1995 to 1997, then on BBC Choice from January to December 2002 with 2 series before returning to BBC Two for another 3 series from 2008 until its cancellation in 2011.
Doctor Who is a British science fiction television programme produced by the BBC. The show is a significant part of British popular culture.
The programme originally ran from 1963 to 1989. After an unsuccessful attempt to revive regular production in 1996 with a backdoor pilot in the form of a television film, the programme was relaunched in 2005.
Family Fortunes is a British game show, based on the American game show Family Feud. The programme ran on ITV from January 6, 1980, to December 6, 2002, before being revived by the same channel in 2006 under the title of All Star Family Fortunes. Revived episodes are currently being shown on ITV on Sunday evenings and have been presented by Vernon Kay since 2006.
Gladiators is a British television entertainment series, produced by LWT for ITV, and broadcast between October 10, 1992, and January 1, 2000. It is an adaptation of the American format American Gladiators. The success of the British series spawned further adaptations in Australia and Sweden. The series was revived in 2008, before again being cancelled in 2009.
Rab C. Nesbitt is a British sitcom which began in 1988.
The first series began on September 27, 1990, and continued for seven more, ending on June 18, 1999, and returning with a one-off special on December 23, 2008.
Red Dwarf is a British comedy franchise which primarily comprises ten series (including a ninth mini-series named Back to Earth) of a television science fiction sitcom that aired on BBC Two between 1988 and 1993 and from 1997 to 1999, and on Dave in 2009.
Video games
The world of video games reached the sixth generation of video game consoles, including the PlayStation 2, the Xbox and the GameCube. The generation technically began in 1998 with the release of Sega's Dreamcast, though many consider its true start to be the 2000 release of Sony's PlayStation 2. The sixth generation remained popular throughout the decade, declining somewhat after its seventh-generation successors arrived, beginning with Microsoft's Xbox 360 in November 2005 and followed by Sony's PlayStation 3 and Nintendo's Wii in late 2006. The number-one-selling game console of the decade, the PlayStation 2, was released in 2000 and remained popular up to the end of the decade, even after the PlayStation 3 was released; it was not discontinued until January 2013. MMORPGs, originating in the mid-to-late 1990s, became a popular PC trend, and virtual online worlds became a reality as games such as RuneScape (2001), Final Fantasy XI (2002), Eve Online (2003), Star Wars Galaxies: An Empire Divided (2003), World of Warcraft (2004), EverQuest II (2004), The Lord of the Rings Online: Shadows of Angmar (2007) and Warhammer Online: Age of Reckoning (2008) were released. These worlds come complete with their own economies and social organization as directed by the players as a whole, and their persistence allows the games to remain popular for many years. World of Warcraft, which premiered in 2004, remains one of the most popular PC games and was still being developed into the 2010s.
The Grand Theft Auto series sparked a fad of Mature-rated video games built around gang warfare, drug use and perceived "senseless violence". Though violent video games date back to the early 1990s, they became much more common after 2000. Despite the controversy, the 2004 game Grand Theft Auto: San Andreas became the best-selling PlayStation 2 game of all time, with 17.33 million copies sold for that console alone, from a total of 21.5 million in all formats by 2009; as of 2011, 27.5 million copies of San Andreas had been sold worldwide.
The Nintendo DS launched in Japan in 2004 and by 2005 was available globally. All Nintendo DS models combined have sold over 154.02 million units, thus making it the best selling handheld of all time and the second best selling video game console of all time behind the PlayStation 2.
The Call of Duty series was extremely popular during the 2000s; the shooter franchise released multiple games throughout the decade that were critically well received and commercially successful.
Gears of War was a critically acclaimed and commercially successful third-person shooter franchise that released two games during the mid-to-late 2000s. The first installment, Gears of War (2006), was widely acclaimed and went on to sell over 5 million copies; its 2008 sequel, Gears of War 2, received similarly widespread critical acclaim and also sold over 5 million copies.
Manhunt 2, a controversial stealth-based psychological horror video game published by Rockstar Games, was suspended by Take-Two Interactive (Rockstar's parent company) when it was refused classification in the United Kingdom, Italy and Ireland, and given an Adults Only (AO) rating in the United States. As neither Sony, Microsoft nor Nintendo allows AO titles on their systems, Rockstar edited the game down to a Mature (M) rating and released it in October 2007.
The sixth generation sparked a rise in first-person shooter games led by Halo: Combat Evolved, which changed the formula of the first-person shooter. Halo 2 kick-started online console gaming and stayed on top of the Xbox Live charts until its successor, Halo 3 (for Xbox 360), took over. Other popular first-person shooters during the 2000s included the Medal of Honor series, with the 2002 release of Medal of Honor: Frontline bringing the series to sixth-generation consoles for the first time.
In the late 2000s, motion-controlled video games grew in popularity, from the PlayStation 2's EyeToy to Nintendo's successful Wii console. During the decade, 3D video games became the staple of the video game industry, with 2D games nearly fading from the market. Partially 3D and fully 2D games were still common early in the decade, but they became increasingly rare as developers looked almost exclusively to fully 3D games to satisfy growing market demand. An exception to this trend is the indie gaming community, which often produces games featuring 'old-school' or retro gaming elements, such as Minecraft and Shadow Complex. These games, which are not developed by the industry giants, are often available as downloadable content from services such as Microsoft's Xbox Live or Apple's App Store and usually cost much less than major releases.
Dance Dance Revolution was released in Japan and later the United States, where it became immensely popular among teenagers. Another music game, Guitar Hero, was released in North America in 2005 and had a huge cultural impact on both the music and video games industries. It became a worldwide billion-dollar franchise within three years, spawning several sequels and leading to the creation of a competing franchise, Rock Band.
Japanese media giant Nintendo released 9 out of the 10 top selling games of the 2000s, further establishing the company's dominance over the market.
Arcade video games had declined in popularity so much by the late 1990s, that revenues in the United States dropped to $1.33 billion in 1999, and reached a low of $866 million in 2004. Furthermore, by the early 2000s, networked gaming via computers and then consoles across the Internet had also appeared, replacing the venue of head-to-head competition and social atmosphere once provided solely by arcades.
Cross-platform game engines, originating in the very late 1990s, became extremely popular in the 2000s, as they enabled the development of indie games for digital distribution. Noteworthy software includes GameMaker and Unity. Well-known indie games made in that decade include I Wanna Be the Guy, Spelunky, Braid, Clean Asia!, Castle Crashers, World of Goo, Dino Run, The Impossible Game and Alien Hominid.
Worldwide, arcade game revenues gradually increased from $1.8 billion in 1998 to $3.2 billion in 2002, rivalling PC game sales of $3.2 billion that same year. In particular, arcade video games are a thriving industry in China, where arcades are widespread across the country. The US market has also experienced a slight resurgence, with the number of video game arcades across the nation increasing from 2,500 in 2003 to 3,500 in 2008, though this is significantly less than the 10,000 arcades in the early 1980s. As of 2009, a successful arcade game usually sells around 4000 to 6000 units worldwide.
Sega Corporation, usually styled as SEGA, is a Japanese multinational video game software developer and arcade software and hardware development company headquartered in Japan, with various offices around the world. Sega previously developed and manufactured its own brand of home video game consoles from 1983 to 2001, but a restructure announced on January 31, 2001, ceased production of its existing home console (the Dreamcast), effectively exiting the company from the home console business. In spite of that, Sega went on to produce several video games, such as the Super Monkey Ball franchise, the Sega Ages 2500 PlayStation 2 games, Hatsune Miku: Project DIVA, Sonic Adventure 2, Sonic Heroes, Rez, Shadow the Hedgehog, Virtua Fighter 4, After Burner Climax, Valkyria Chronicles, Sonic Pinball Party, Bayonetta, Jet Set Radio, Puyo Pop Fever, Thunder Force VI, Shenmue II, Phantasy Star Online, Yakuza 2, Gunstar Super Heroes, Astro Boy: Omega Factor, OutRun 2006: Coast 2 Coast and Mario & Sonic at the Olympic Games.
Neo Geo is a family of video game hardware developed by SNK. The brand originated in 1990 with the release of an arcade system, the Neo Geo MVS, and its home console counterpart, the Neo Geo AES. The Neo Geo brand was officially discontinued in 2004.
The Game Developers Choice Awards have presented a Game of the Year award since 2001, given to games of the previous calendar year.
Best-selling games of every year
In some years, sources disagree on the best-selling game.
2000: Pokémon Stadium or Pokémon Crystal
2001: Madden NFL 2002 or Grand Theft Auto III
2002: Grand Theft Auto: Vice City
2003: Madden NFL 2004 or Call of Duty
2004: Grand Theft Auto: San Andreas
2005: Madden NFL 06 or Nintendogs
2006: Madden NFL 07
2007: Guitar Hero III: Legends of Rock or Wii Sports
2008: Rock Band or Wii Play
2009: Call of Duty: Modern Warfare 2 or Wii Sports
Writing
The decade saw the rise of digital media as opposed to the use of print, and the steady decline of printed books in countries where e-readers had become available.
The deaths of John Updike, Hunter S. Thompson, and other authors marked the end of various major writing careers influential during the late 20th century.
Popular book series such as Harry Potter, Twilight and Dan Brown's "Robert Langdon" (consisting of Angels & Demons, The Da Vinci Code, and The Lost Symbol) saw increased interest in various genres such as fantasy, romance, vampire fiction, and detective fiction, as well as young adult fiction in general.
Manga (also known as Japanese comics) became popular among the international audience, mostly in English-speaking countries. Such popular manga works include Lucky Star, Fullmetal Alchemist and Naruto.
On July 19, 2001, English author and former politician Jeffrey Archer was found guilty of perjury and perverting the course of justice at a 1987 libel trial. He was sentenced to four years' imprisonment.
Peter Pan in Scarlet is a novel by Geraldine McCaughrean. It is an official sequel to Scottish author and dramatist J. M. Barrie's Peter and Wendy, authorised by Great Ormond Street Hospital, to whom Barrie granted all rights to the character and original writings in 1929. McCaughrean was selected following a competition launched in 2004, in which novelists were invited to submit a sample chapter and plot outline.
J. K. Rowling was the best-selling author of the decade overall thanks to the Harry Potter book series, although she did not pen the single best-selling book (at least in the UK), coming second to The Da Vinci Code, which had sold 5.2 million copies in the UK by 2009 and 80 million worldwide by 2012.
Sports
The Sydney 2000 Summer Olympics followed the centennial-anniversary Games of the modern era, held in Atlanta in 1996. The Athens 2004 Summer Olympics were a strong symbol, as the modern Olympic Games were inspired by the competitions organized in Ancient Greece. The Beijing 2008 Games saw the emergence of China as a major sports power, topping the gold medal count for the first time. The 2002 Salt Lake City and 2006 Turin Winter Olympic Games were also major events, though slightly less popular.
A number of concerns and controversies surfaced before, during, and after the 2008 Summer Olympics, receiving major media coverage. Leading up to the Olympics, there were concerns about human rights in China, and many high-profile individuals, such as politicians and celebrities, announced intentions to boycott the games to protest China's role in the Darfur conflict and in Myanmar, its stance towards Tibet, or other aspects of its human rights record.
In a 2008 Time article entitled "Why Nobody's Boycotting Beijing", Vivienne Walt wrote:
'Leaders in power are more mindful of China's colossal clout in an increasingly shaky world economy, and therefore of the importance of keeping good relations with its government.'
One of the most prominent events of the 2008 Summer Olympics in Beijing was the achievement of American swimmer Michael Phelps, frequently cited as the greatest swimmer and one of the greatest Olympians of all time. He won 14 career Olympic gold medals during the decade, the most by any Olympian, and as of August 2, 2009, had broken thirty-seven world records in swimming. Phelps holds the record for the most gold medals won at a single Olympics; his eight at the 2008 Beijing Games surpassed American swimmer Mark Spitz's seven-gold performance at Munich in 1972.
Usain Bolt of Jamaica dominated the men's sprinting events at the Beijing Olympics, where he broke three world records, making him the first man ever to accomplish this at a single Olympic Games. He set world records in the 100 metres (despite slowing down before the finish line to celebrate), the 200 metres and, along with his teammates, the 4 × 100 metres relay.
The Los Angeles Lakers won three NBA championships in a row from 2000 to 2002, a "three-peat" led by Kobe Bryant and Shaquille O'Neal.
The Ultimate Fighting Championship rose in popularity after the airing of The Ultimate Fighter in 2005.
In 2001, after the September 11 attacks, both the National Football League and Major League Baseball postponed their upcoming games for a week. As a result, the World Series was played in November for the first time and the Super Bowl was played in February for the first time.
The sport of fox hunting is controversial, particularly in the UK, where it was banned in Scotland in 2002, and in England and Wales in November 2004 (law enforced from February 2005), though shooting foxes as vermin remained legal.
Ron Atkinson is an English former football player and manager who became one of Britain's best-known football pundits.
Ron Atkinson's media work came to an abrupt halt on April 21, 2004, when he was urged to resign from ITV by Brian Barwick after he broadcast a racial remark live on air about the black Chelsea player Marcel Desailly; believing the microphone to be switched off, he said, "...he [Desailly] is what is known in some schools as a lazy nigger".
Association football's important events included two World Cups: one organized in South Korea and Japan, which saw Brazil win a record fifth title, and the other in Germany, which saw Italy win its fourth title. In the regional competitions, five nations lifted a trophy: Colombia (2001) and Brazil (2004, 2007) won the Copa América, while France (2000), Greece (2004) and Spain (2008) won the UEFA European Championship.
Rugby grew in size and audience, with the Rugby World Cup becoming the third most-watched sporting event in the world by the time of the 2007 tournament held in France.
Bloodgate is the nickname for a rugby union scandal involving the English team Harlequins in their Heineken Cup match against the Irish side Leinster on April 12, 2009. It was so called because of the use of fake blood capsules, and has been seen by some as one of the biggest scandals in rugby since professionalisation in the mid-1990s, indeed even as an argument against the professional ethos. The name is a pun on Watergate.
The New York Yankees won the first Major League Baseball World Series of the decade in 2000, as well as the last World Series of the decade in 2009. The Boston Red Sox won their first World Series since 1918 in 2004 and then again in 2007.
The Pittsburgh Steelers won a record sixth Super Bowl on February 1, 2009, against the Arizona Cardinals. Pittsburgh's total would remain the championship record for an NFL franchise until a decade later, when the New England Patriots defeated the Los Angeles Rams to tie it.
In September 2004, Chelsea footballer Adrian Mutu failed a drugs test for cocaine and was released on October 29, 2004. He also received a seven-month ban and a £20,000 fine from The Football Association.
Michael Schumacher, the most titled F1 driver, won five F1 World Championships during the decade and retired in 2006, though he eventually confirmed a comeback to F1 for 2010. Lance Armstrong won every Tour de France between 1999 and 2005, also an all-time record, but was later stripped of all his titles when evidence emerged of his use of performance-enhancing drugs. Swiss tennis player Roger Federer won 16 Grand Slam titles to become the most titled player.
The 2006 Italian football scandal, also known as "Calciopoli", involved Italy's top professional football leagues, Serie A and Serie B. The scandal was uncovered in May 2006 by Italian police, implicating league champions Juventus and other major teams including A.C. Milan, Fiorentina, Lazio and Reggina, after a number of telephone interceptions showed a thick network of relations between team managers and referee organisations. Juventus were the champions of Serie A at the time. The teams were accused of rigging games by selecting favourable referees.
In the 2006 FIFA World Cup Final in Berlin, Zinedine Zidane, widely considered by experts and fans to be one of the greatest football players of all time, was sent off in the 110th minute of what was to be the last match of his career after headbutting Marco Materazzi in the chest. Zidane did not participate in the penalty shootout, which Italy won 5–3. It later emerged through interviews that Materazzi had insulted Zidane's mother and sister moments earlier, which provoked Zidane's reaction.
January 11, 2007 – When English footballer David Beckham joined the Major League Soccer's Los Angeles Galaxy, he was given the highest player salary in the league's history; with his playing contract with the Galaxy over the next three years being worth US$6.5 million per year.
October 2007 – US world champion track and field athlete Marion Jones admitted that she had taken performance-enhancing drugs as far back as the 2000 Summer Olympics, and that she had lied about it to a grand jury investigating the distribution of performance-enhancing drugs.
November 29, 2007 – Portsmouth football manager Harry Redknapp angrily denied any wrongdoing after being arrested by police investigating alleged corruption in football: "If you are telling me this is how you treat anyone, it is not the society I grew up in."
The 2008 Wimbledon final between Roger Federer of Switzerland and Rafael Nadal of Spain, has been lauded as the greatest match ever by many long-time tennis analysts.
British Formula One racing driver Lewis Hamilton, was disqualified from the 2009 Australian Grand Prix for providing "misleading evidence" during the stewards' hearing. He later privately apologised to FIA race director Charlie Whiting for having lied to the stewards.
In 2009, the world football transfer record was set by Spanish club Real Madrid when it purchased Manchester United's Cristiano Ronaldo for £80 million (€93 million). Manchester United veteran Sir Bobby Charlton said the world-record offer shocked him.
Steroid use also spread through the sports world throughout the decade, most prominently in Major League Baseball. Players implicated included Barry Bonds, Mark McGwire, Sammy Sosa and Alex Rodriguez.
See also
List of decades
Timeline
Footnotes
References
Further reading
London, Herbert I. The Transformational Decade: Snapshots of a Decade from 9/11 to the Obama Presidency (Lanham: University Press of America, 2012), 177 pp.
External links
The fashions, trends and people that defined the decade, VOGUE.COM UK
100 Top Pictures of the Decade – slideshow by Reuters
"A portrait of the decade", BBC, December 14, 2009
2000–2009 Video Timeline
20th century
21st century
Contemporary history |
36651 | https://en.wikipedia.org/wiki/ESP | ESP | ESP most commonly refers to:
Extrasensory perception, a paranormal ability
Spain (España) - ISO three-letter country code for Spain
ESP may also refer to:
Arts, entertainment
Music
ESP Guitars, a manufacturer of electric guitars
E.S. Posthumus, an independent music group formed in 2000, that produces cinematic style music
ESP-Disk, a 1960s free-jazz record label based in New York
The Electric Soft Parade, a British band formed in 2001
Eric Singer Project, side project founded in the 1990s by musician Eric Singer
ESP, a collaboration between Space Tribe and other artists
Songs, albums
E.S.P. (Bee Gees album), 1987 album by the Bee Gees
"E.S.P." (song), title track of the album
E.S.P. (Extra Sexual Persuasion), 1983 album by soul singer Millie Jackson
E.S.P. (Miles Davis album), 1965 album by Miles Davis
"E.S.P.", 1978 song by Buzzcocks from the album Love Bites
"E.S.P.", 1988 song by Cacophony from the album Go Off!
"E.S.P.", 1990 Song by Deee-Lite from the album "World Clique"
ESP, 2000 album by The System
"ESP", 2017 song by N.E.R.D. from the album No One Ever Really Dies
Television
"E.S.P." (UFO), a 1970 episode of UFO
E.S.P. (TV series), a horror Philippine drama by GMA Network
Technology
Electric submersible pump, an artificial lift system that electrically drives multiple centrifugal stage pumps to lift oil and water from wells
Electro-selective pattern, Olympus-proprietary metering technology for digital cameras
Electron spin trapping, a scientific technique
Electronic skip protection in portable CD players
Electronic stability control, also known as Electronic Stability Program, a computer device used in cars to improve traction
Electrostatic precipitator, a device that removes particles from a flowing gas
Enterprise Simulation Platform, the Lockheed-Martin commercial version of the Microsoft Flight Simulator X franchise, now called Prepar3D.
Equalizing snoop probe, a high-speed digital signal probe by Agilent
External static pressure, the air pressure faced by a fan blowing into an air duct in a heating, ventilation, and air conditioning system
External stowage platform, a type of cargo platform used on the International Space Station
Computing
EFI system partition, a partition used by machines that adhere to the Extensible Firmware Interface
'Elder Scrolls plugin', the .ESP file-format used in computer games such as The Elder Scrolls III: Morrowind
Email service provider (marketing), an organization offering e-mail services
Empirical study of programming (ESP), a term sometimes used for the psychology of programming
Encapsulating Security Payload, an encryption protocol within the IPsec suite
ESP game, an online human computation game
Event stream processing, technology that acts on data streams
ESP cheat (video games), a cheat that displays contextual information such as the health, names and positions of other participants, which are normally hidden from the player
Extended Stack Pointer register in Intel IA-32 (x86/32-bit) assembler
Various processors by Espressif Systems, like the ESP32 or the ESP8266
Business
Easy Software Products, a software-development company, originator of the Common UNIX Printing System (CUPS)
Email service provider, a specialist organisation that offers bulk email marketing services
Entertainment Software Publishing, a Japanese video-game publisher
Spain and Spanish
3-letter code for Spain in ISO 3166-1 alpha-3
Spanish language (Español), non-ISO language code
Spanish peseta, the ISO 4217 code for the former currency of Spain
Other uses
Eastern State Penitentiary, a museum in Philadelphia, former prison
Effective Sensory Projection, a term used in the Silva self-help method
Empire State Plaza in Albany, New York, U.S.A
Empire State Pullers, a New York tractor pulling circuit
English for specific purposes, a subset of English language learning and teaching
Equally spaced polynomial in mathematics
European Skeptics Podcast (TheESP), a weekly podcast representing several European skeptic organisations in Europe
Extra-solar planet, a planet located outside the Solar System
Ezilenlerin Sosyalist Partisi, a Turkish political party |
36704 | https://en.wikipedia.org/wiki/HP-UX | HP-UX | HP-UX (from "Hewlett Packard Unix") is Hewlett Packard Enterprise's proprietary implementation of the Unix operating system, based on Unix System V (initially System III) and first released in 1984. Recent versions support the HP 9000 series of computer systems, based on the PA-RISC instruction set architecture, and HPE Integrity Servers, based on Intel's Itanium architecture.
Earlier versions of HP-UX supported the HP Integral PC and HP 9000 Series 200, 300, and 400 computer systems based on the Motorola 68000 series of processors, as well as the HP 9000 Series 500 computers based on HP's proprietary FOCUS architecture.
HP-UX was the first Unix to offer access control lists for file access permissions as an alternative to the standard Unix permissions system. HP-UX was also among the first Unix systems to include a built-in logical volume manager. HP has had a long partnership with Veritas Software, and uses VxFS as the primary file system.
It is one of four commercial operating systems that have versions certified to The Open Group's UNIX 03 standard. (The others are macOS, AIX and Huawei's EulerOS.)
Characteristics
HP-UX 11i offers common shared disks for its clustered file system. HP Serviceguard is the cluster solution for HP-UX. HP Global Workload Management adjusts workloads to optimize performance, and integrates with Instant Capacity on Demand so installed resources can be paid for in 30-minute increments as needed for peak workload demands.
HP-UX offers operating system-level virtualization features such as hardware partitions, isolated OS virtual partitions on cell-based servers, and HP Integrity Virtual Machines (HPVM) on all Integrity servers. HPVM supports guests running on HP-UX 11i v3 hosts – guests can run Linux, Windows Server, OpenVMS or HP-UX. HP supports online VM guest migration, where encryption can secure the guest contents during migration.
HP-UX 11i v3 scales as follows (on a SuperDome 2 with 32 Intel Itanium 9560 processors):
256 processor cores
8 TB main memory
32 TB maximum file system
16 TB maximum file size
128 million ZB maximum storage (16 million logical units, each up to 8 ZB)
Security
The 11i v2 release introduced kernel-based intrusion detection, strong random number generation, stack buffer overflow protection, security partitioning, role-based access management, and various open-source security tools.
HP classifies the operating system's security features into three categories: data, system and identity.
Context dependent files
Release 6.x (together with 3.x) introduced the context dependent files (CDF) feature, a method of allowing a fileserver to serve different configurations and binaries (and even architectures) to different client machines in a heterogeneous environment. A directory containing such files had its suid bit set and was made hidden from both ordinary and root processes under normal use. Such a scheme was sometimes exploited by intruders to hide malicious programs or data. CDFs and the CDF filesystem were dropped with release 10.0.
Supported hardware platforms
The HP-UX operating system supports a variety of PA-RISC systems. Release 11.0 added support for Integrity-based servers for the transition from PA-RISC to Itanium. HP-UX 11i v1.5 was the first version to support Itanium. With the introduction of HP-UX 11i v2, the operating system supported both architectures.
BL series
HP-UX 11i supports HPE Integrity Servers of HP BL server blade family. These servers use the Intel Itanium architecture.
CX series
HP-UX 11i v2 and 11i v3 support HP's CX series servers. CX stands for carrier grade; these servers are used mainly in the telecommunications industry, offer -48 V DC support, and are NEBS certified. Both systems contain Itanium Mad6M processors and have been discontinued.
RX series
HP-UX supports HP's RX series of servers.
Release history
Prior to the release of HP-UX version 11.11, HP used a decimal version numbering scheme with the first number giving the major release and the number following the decimal showing the minor release. With 11.11, HP made a marketing decision to name their releases 11i followed by a v(decimal-number) for the version. The i was intended to indicate the OS is Internet-enabled, but the effective result was a dual version-numbering scheme.
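Scripts that must translate between the two schemes often keep a small lookup table, as in the illustrative Python sketch below; the release strings follow the B.11.xx form quoted elsewhere in this article, and treating them as the exact uname -r output is an assumption.

# Hypothetical helper mapping HP-UX release strings to their 11i marketing names.
MARKETING_NAME = {
    "B.11.11": "11i v1",
    "B.11.20": "11i v1.5",
    "B.11.22": "11i v1.6",
    "B.11.23": "11i v2",
    "B.11.31": "11i v3",
}

def marketing_name(release):
    # Fall back to the raw release string for versions not in the table.
    return MARKETING_NAME.get(release, release)

print(marketing_name("B.11.31"))   # prints: 11i v3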
Version history
Versions
1.0 (1982) First release for HP 9000 Series 500. HP-UX for Series 500 was substantially different from HP-UX for any other HP machines, as it was layered atop a Series 500 specific operating system called SUNOS (unrelated to Sun Microsystems' SunOS).
1.0 (1984) AT&T System III based. Support for the HP Integral PC (HP 9807A). The kernel runs from ROM; other commands are disk based.
2.0 (1984) First release for HP's early Motorola 68000-based workstations (HP 9816U, HP 9826U, HP 9836U)
5.0 (1985) AT&T System V based. Distinct versions were available for the Integral PC, the Series 200/300 and the Series 500. Introduced the proprietary Starbase graphics API for the Series 200, 300 and 500. The Series 300 5.x releases included a proprietary windowing system built on top of Starbase named HP Windows/9000, which was also available as an optional extra for Series 500 hardware.
3.x (1988) HP 9000 Series 600/800 only. Note: 2.x/3.x (for Series 600/800) were developed in parallel with 5.x/6.x (for Series 200/300/400), so, for example, 3.x was really contemporary with 6.x. The two lines were united at HP-UX 7.x.
6.x (1988) Support for HP 9000 Series 300 only. Introduced sockets from 4.3BSD. This version (together with 3.x) also introduced the above-discussed context dependent files (CDF), which were removed in release 10 because of their security risks. The 6.2 release added X11, superseding HP Windows/9000 and X10. 6.5 allowed Starbase programs to run alongside X11 programs.
7.x (1990) Support for HP 9000 Series 300/400, 600/700 (in 7.03) /800 HP systems. Provided OSF/Motif. Final version to include the HP Windows/9000 windowing system.
8.x (January 1991) Support for HP 9000 Series 300/400 600/700/800 systems. Shared libraries introduced.
9.x (July 1992) 9.00, 9.02, 9.04 (Series 600/800), 9.01, 9.03, 9.05, 9.07 (Series 300/400/700), 9.08, 9.09, 9.09+ (Series 700 only), 9.10 (Series 300/400 only). These provided support for the HP 9000 Series 300, 700 and 800 systems. Introduced System Administration Manager (SAM). The Logical Volume Manager (LVM) was presented in 9.00 for the Series 800. Adopted the Visual User Environment desktop.
10.0 (1995) This major release saw a convergence of the operating system between the HP 9000 Series 700 (workstation) and Series 800 (server) systems, dropping support for previous lines. There was also a significant change in the layout in the system files and directories, based on the AT&T UNIX System V Release 4 standard. Applications were removed from /usr and moved under /opt; startup configuration files were placed under /etc/rc.config.d; users were moved to /home from /users. Software for HP-UX was now packaged, shipped, installed, and removed via the Software Distributor (SD) tools. LVM was also made available for Series 700.
10.10 (1996) Introduced the Common Desktop Environment. UNIX95 compliance.
10.20 (1996) This release included support for 64-bit PA-RISC 2.0 processors. Pluggable Authentication Modules (PAM) were introduced for use within CDE. The root file system could be configured to use the Veritas File System (VxFS). For legacy as well as technical reasons, the file system used for the boot kernel remained Hi Performance FileSystem (HFS, a variant of UFS) until version 11.23. 10.20 also supported 32-bit user and group identifiers. The prior limit was 60,000, or 16-bit. This and earlier releases of HP-UX are now effectively obsolete, and support by HP ended on June 30, 2003.
10.24 This is a Virtual Vault release of HP-UX, providing enhanced security features. Virtual Vault is a compartmentalised operating system in which each file is assigned a compartment and processes only have access to files in the appropriate compartment; unlike most other UNIX systems, the superuser (root) does not have complete access to the system without following correct procedures.
10.30 (1997) This was primarily a developer release with various incremental enhancements. It provided the first support for kernel threads, with a 1:1 thread model (each user thread is bound to one kernel thread).
11.00 (1997) The first HP-UX release to also support 64-bit addressing. It could still run 32-bit applications on a 64-bit system. It supported symmetric multiprocessing, Fibre Channel, and NFS PV3. It also included tools and documentation to convert 32-bit code to 64-bit.
11.04 Virtual Vault release.
11.10 This was a limited release to support the HP 9000 V2500 SCA (Scalable Computing Architecture) and V2600 SCA servers. It also added JFS 3.3, AutoFS, a new ftpd, and support for up to 128 CPUs. It was not available separately.
11.11 (2000) 11i v1 This release of HP-UX introduced the concept of Operating Environments. It was released in December 2000. These are bundled groups of layered applications intended for use with a general category of usage. The available types were the Mission Critical, Enterprise, Internet, Technical Computing, and Minimal Technical OEs. (The last two were intended for HP 9000 workstations.) The main enhancements with this release were support for hard partitions, Gigabit Ethernet, NFS over TCP/IP, loadable kernel modules, dynamic kernel tunable parameters, kernel event Notifications, and protected stacks.
11.20 (2001) 11i v1.5 This release of HP-UX was the first to support the new line of Itanium-based (IA-64) systems. It was not intended for mission critical computing environments and did not support HP's ServiceGuard cluster software. It provided support for running PA-RISC compiled applications on Itanium systems, and for Veritas Volume Manager 3.1.
11.22 (2002) 11i v1.6 An incremental release of the Itanium version of HP-UX. This version achieved 64-way scalability, m:n threads, added more dynamic kernel tunable parameters, and supported HP's Logical Volume Manager on Itanium. It was built from the 11i v1 source code stream.
11.23 (2003) 11i v2 The original release of this version was in September 2003 to support the Itanium-based systems. In September 2004 the OS was updated to provide support for both Itanium and PA-RISC systems. Besides running on Itanium systems, this release includes support for ccNUMA, web-based kernel and device configuration, IPv6, and stronger random number generation.
11.31 (2007) 11i v3 This release supports both PA-RISC and Itanium. It was released on February 15, 2007. Major new features include native multipathing support, a unified file cache, NFSv4, Veritas ClusterFS, multi-volume VxFS, and integrated virtualization. Hyperthreading is supported on Itanium systems with Montecito and Tukwila processors. HP-UX 11i v3 conforms to The Open Group's UNIX 03 standard. Updates for 11i v3 have been released every 6 months, with the latest revision being B.11.31.1805, released in May 2018. HP has moved to a cadence of one major HP-UX operating system update per year.
HP-UX 11i operating environments
HP bundles HP-UX 11i with programs in packages they call Operating Environments (OEs).
The following lists the currently available HP-UX 11i v3 OEs:
HP-UX 11i v3 Base OE (BOE) Includes the full HP-UX 11i operating system plus file system and partitioning software and applications for Web serving, system management and security. BOE includes all the software formerly in FOE & TCOE (see below), plus software formerly sold stand-alone (e.g. Auto Port Aggregator).
HP-UX 11i v3 Virtualization Server OE (VSE-OE) Includes everything in BOE plus GlancePlus performance analysis and software mirroring, and all Virtual Server Environment software which includes virtual partitions, virtual machines, workload management, capacity advisor and applications. VSE-OE includes all the software formerly in EOE (see below), plus additional virtualization software.
HP-UX 11i v3 High Availability OE (HA-OE) Includes everything in BOE plus HP Serviceguard clustering software for system failover and tools to manage clusters, as well as GlancePlus performance analysis and software mirroring applications.
HP-UX 11i v3 Data Center OE (DC-OE) Includes everything in one package, combining the HP-UX 11i operating system with virtualization. Everything in the HA-OE and VSE-OE is in the DC-OE. Solutions for wide-area disaster recovery and the compiler bundle are sold separately.
HP-UX 11i v2 (11.23) HP dropped support for v2 in December 2010. The HP-UX 11i v2 OEs were:
HP-UX 11i v2 Foundation OE (FOE) Designed for Web servers, content servers and front-end servers, this OE includes applications such as HP-UX Web Server Suite, Java, and Mozilla Application Suite. This OE is bundled as HP-UX 11i FOE.
HP-UX 11i v2 Enterprise OE (EOE) Designed for database application servers and logic servers, this OE contains the HP-UX 11i v2 Foundation OE bundles and additional applications such as GlancePlus Pak to enable an enterprise-level server. This OE is bundled as HP-UX 11i EOE.
HP-UX 11i v2 Mission Critical OE (MCOE) Designed for the large, powerful back-end application servers and database servers that access customer files and handle transaction processing, this OE contains the Enterprise OE bundles, plus applications such as MC/ServiceGuard and Workload Manager to enable a mission-critical server. This OE is bundled as HP-UX 11i MCOE.
HP-UX 11i v2 Minimal Technical OE (MTOE) Designed for workstations running HP-UX 11i v2, this OE includes the Mozilla Application Suite, Perl, VxVM, and Judy applications, plus the OpenGL Graphics Developer's Kit. This OE is bundled as HP-UX 11i MTOE.
HP-UX 11i v2 Technical Computing OE (TCOE) Designed for both compute-intensive workstation and server applications, this OE contains the MTOE bundles plus extensive graphics applications, MPI and Math Libraries. This OE is bundled as HP-UX 11i-TCOE.
HP-UX 11i v1 (11.11) According to HP's roadmap, 11i v1 was sold through December 2009, with continued support at least until December 2015.
See also
HP Roman-8 (character set)
References
Scott W. Y. Wang and Jeff B. Lindberg "HP-UX: Implementation of UNIX on the HP 9000 Series 500 Computer Systems", Hewlett-Packard Journal (volume 35 number 3, March 1984)
Frank McConnell, More about the HP 9000'', gaby.de
Hewlett-Packard Company, "HP-UX Reference, Vol. 1, HP-UX Release 6.5, December 1988", HP Part number 09000-90009
External links
HP-UX Home
HP-UX Software & Update Information
HP software
UNIX System V |
36954 | https://en.wikipedia.org/wiki/SPARC | SPARC | SPARC (Scalable Processor Architecture) is a reduced instruction set computing (RISC) instruction set architecture originally developed by Sun Microsystems. Its design was strongly influenced by the experimental Berkeley RISC system developed in the early 1980s. First developed in 1986 and released in 1987, SPARC was one of the most successful early commercial RISC systems, and its success led to the introduction of similar RISC designs from a number of vendors through the 1980s and 90s.
The first implementation of the original 32-bit architecture (SPARC V7) was used in Sun's Sun-4 workstation and server systems, replacing their earlier Sun-3 systems based on the Motorola 68000 series of processors. SPARC V8 added a number of improvements that were part of the SuperSPARC series of processors released in 1992. SPARC V9, released in 1993, introduced a 64-bit architecture and was first released in Sun's UltraSPARC processors in 1995. Later, SPARC processors were used in symmetric multiprocessing (SMP) and non-uniform memory access (CC-NUMA) servers produced by Sun, Solbourne and Fujitsu, among others.
The design was turned over to the SPARC International trade group in 1989, and since then its architecture has been developed by its members. SPARC International is also responsible for licensing and promoting the SPARC architecture, managing SPARC trademarks (including SPARC, which it owns), and providing conformance testing. SPARC International was intended to grow the SPARC architecture to create a larger ecosystem; SPARC has been licensed to several manufacturers, including Atmel, Bipolar Integrated Technology, Cypress Semiconductor, Fujitsu, Matsushita and Texas Instruments. Due to SPARC International, SPARC is fully open, non-proprietary and royalty-free.
As of September 2017, the latest commercial high-end SPARC processors are Fujitsu's SPARC64 XII (introduced in 2017 for its SPARC M12 server) and Oracle's SPARC M8 introduced in September 2017 for its high-end servers.
On Friday, September 1, 2017, after a round of layoffs that started in Oracle Labs in November 2016, Oracle terminated SPARC design after the completion of the M8. Much of the processor core development group in Austin, Texas, was dismissed, as were the teams in Santa Clara, California, and Burlington, Massachusetts.
Fujitsu will also discontinue its SPARC production (it has already shifted to producing its own ARM-based CPUs) after two "enhanced" versions of Fujitsu's older SPARC M12 server in 2020–22 (previously planned for 2021) and again in 2026–27. End-of-sale of its UNIX servers is planned for 2029 (a year later for its mainframes), with end-of-support in 2034, "to promote customer modernization".
Features
The SPARC architecture was heavily influenced by the earlier RISC designs, including the RISC I and II from the University of California, Berkeley and the IBM 801. These original RISC designs were minimalist, including as few features or op-codes as possible and aiming to execute instructions at a rate of almost one instruction per clock cycle. This made them similar to the MIPS architecture in many ways, including the lack of instructions such as multiply or divide. Another feature of SPARC influenced by this early RISC movement is the branch delay slot.
The SPARC processor usually contains as many as 160 general-purpose registers. According to the "Oracle SPARC Architecture 2015" specification an "implementation may contain from 72 to 640 general-purpose 64-bit" registers. At any point, only 32 of them are immediately visible to software — 8 are a set of global registers (one of which, g0, is hard-wired to zero, so only seven of them are usable as registers) and the other 24 are from the stack of registers. These 24 registers form what is called a register window, and at function call/return, this window is moved up and down the register stack. Each window has 8 local registers and shares 8 registers with each of the adjacent windows. The shared registers are used for passing function parameters and returning values, and the local registers are used for retaining local values across function calls.
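The overlap between adjacent windows can be made concrete with a short model. The Python sketch below is a simplified illustration rather than a description of any real implementation: the window count of eight and the flat circular backing array are assumptions chosen for clarity, and traps, the WIM register and window spill/fill are ignored.

NWINDOWS = 8        # implementation-defined; the spec allows 3 to 32 (8 is an assumption here)

class RegisterFile:
    """Toy model of SPARC-style overlapping register windows (SPARC V8 direction:
    SAVE decrements CWP).  Traps, WIM and window spill/fill are ignored."""

    def __init__(self):
        self.globals = [0] * 8                      # %g0..%g7; %g0 reads as zero
        self.windowed = [0] * (16 * NWINDOWS)       # 8 in/out slots + 8 locals per window
        self.cwp = 0                                # current window pointer

    def _phys(self, r):
        # r is the architectural register number 8..31 seen by software
        if 8 <= r < 16:                             # outs %o0..%o7
            return (self.cwp * 16 + (r - 8)) % len(self.windowed)
        if 16 <= r < 24:                            # locals %l0..%l7, private to the window
            return (self.cwp * 16 + 8 + (r - 16)) % len(self.windowed)
        # ins %i0..%i7: the same storage as the caller's outs
        return ((self.cwp + 1) * 16 + (r - 24)) % len(self.windowed)

    def read(self, r):
        if r == 0:
            return 0                                # %g0 is hard-wired to zero
        return self.globals[r] if r < 8 else self.windowed[self._phys(r)]

    def write(self, r, value):
        if r == 0:
            return                                  # writes to %g0 are discarded
        if r < 8:
            self.globals[r] = value
        else:
            self.windowed[self._phys(r)] = value

    def save(self):                                 # V8: SAVE decrements CWP
        self.cwp = (self.cwp - 1) % NWINDOWS

    def restore(self):                              # V8: RESTORE increments CWP
        self.cwp = (self.cwp + 1) % NWINDOWS

rf = RegisterFile()
rf.write(8, 42)                # the caller places an argument in %o0
rf.save()                      # a procedure call opens a new window
assert rf.read(24) == 42       # the callee sees the same value in %i0

The final assertion captures the key property: the caller's output registers and the callee's input registers name the same physical storage, so procedure arguments are passed without copying.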
The "Scalable" in SPARC comes from the fact that the SPARC specification allows implementations to scale from embedded processors up through large server processors, all sharing the same core (non-privileged) instruction set. One of the architectural parameters that can scale is the number of implemented register windows; the specification allows from three to 32 windows to be implemented, so the implementation can choose to implement all 32 to provide maximum call stack efficiency, or to implement only three to reduce cost and complexity of the design, or to implement some number between them. Other architectures that include similar register file features include Intel i960, IA-64, and AMD 29000.
The architecture has gone through several revisions. It gained hardware multiply and divide functionality in Version 8. 64-bit addressing and data were added in the SPARC Version 9 specification, published in 1994.
In SPARC Version 8, the floating-point register file has 16 double-precision registers. Each of them can be used as two single-precision registers, providing a total of 32 single-precision registers. An odd-even number pair of double-precision registers can be used as a quad-precision register, thus allowing 8 quad-precision registers. SPARC Version 9 added 16 more double-precision registers (which can also be accessed as 8 quad-precision registers), but these additional registers can not be accessed as single-precision registers. No SPARC CPU implements quad-precision operations in hardware as of 2004.
Tagged add and subtract instructions perform adds and subtracts on values, checking that the bottom two bits of both operands are 0 and reporting overflow if they are not. This can be useful in the implementation of the run time for ML, Lisp, and similar languages that might use a tagged integer format.
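As an illustration of the tag check, the Python function below mimics a tagged add. The real TADDcc instruction sets the integer condition codes; collapsing that into a single overflow flag and masking to 32 bits are simplifications made here.

MASK32 = 0xFFFFFFFF

def tagged_add(a, b):
    """Toy model of a SPARC tagged add: the low two bits of each operand are a
    type tag that must be 00 for a plain integer.  Returns (result, overflow)."""
    tag_overflow = (a & 0b11) != 0 or (b & 0b11) != 0
    result = (a + b) & MASK32
    # A real tagged add also reports ordinary 32-bit signed overflow.
    signed_overflow = ((a ^ result) & (b ^ result) & 0x80000000) != 0
    return result, tag_overflow or signed_overflow

# A Lisp-style runtime might tag small integers with 00 in the low bits:
x = 6 << 2                     # the integer 6, tagged as a fixnum
y = 7 << 2                     # the integer 7, tagged as a fixnum
total, overflow = tagged_add(x, y)
assert not overflow and total >> 2 == 13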
The endianness of the 32-bit SPARC V8 architecture is purely big-endian. The 64-bit SPARC V9 architecture uses big-endian instructions, but can access data in either big-endian or little-endian byte order, chosen either at the application instruction (load–store) level or at the memory page level (via an MMU setting). The latter is often used for accessing data from inherently little-endian devices, such as those on PCI buses.
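The practical effect of the byte-order choice can be seen with Python's struct module; the example below only shows how the same four bytes of memory decode differently under the two conventions and does not model the per-page MMU setting.

import struct

raw = bytes([0x12, 0x34, 0x56, 0x78])        # four bytes as they sit in memory

big    = struct.unpack(">I", raw)[0]         # big-endian view, SPARC's native order
little = struct.unpack("<I", raw)[0]         # little-endian view, e.g. data from a PCI device

print(hex(big))      # 0x12345678
print(hex(little))   # 0x78563412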
History
There have been three major revisions of the architecture. The first published version was the 32-bit SPARC Version 7 (V7) in 1986. SPARC Version 8 (V8), an enhanced SPARC architecture definition, was released in 1990. The main differences between V7 and V8 were the addition of integer multiply and divide instructions, and an upgrade from 80-bit "extended-precision" floating-point arithmetic to 128-bit "quad-precision" arithmetic. SPARC V8 served as the basis for IEEE Standard 1754-1994, an IEEE standard for a 32-bit microprocessor architecture.
SPARC Version 9, the 64-bit SPARC architecture, was released by SPARC International in 1993. It was developed by the SPARC Architecture Committee consisting of Amdahl Corporation, Fujitsu, ICL, LSI Logic, Matsushita, Philips, Ross Technology, Sun Microsystems, and Texas Instruments.
Newer specifications always remain compliant with the full SPARC V9 Level 1 specification.
In 2002, the SPARC Joint Programming Specification 1 (JPS1) was released by Fujitsu and Sun, describing processor functions which were identically implemented in the CPUs of both companies ("Commonality"). The first CPUs conforming to JPS1 were the UltraSPARC III by Sun and the SPARC64 V by Fujitsu. Functionalities which are not covered by JPS1 are documented for each processor in "Implementation Supplements".
At the end of 2003, JPS2 was released to support multicore CPUs. The first CPUs conforming to JPS2 were the UltraSPARC IV by Sun and the SPARC64 VI by Fujitsu.
In early 2006, Sun released an extended architecture specification, UltraSPARC Architecture 2005. This includes not only the non-privileged and most of the privileged portions of SPARC V9, but also all the architectural extensions developed through the processor generations of UltraSPARC III, IV IV+ as well as CMT extensions starting with the UltraSPARC T1 implementation:
the VIS 1 and VIS 2 instruction set extensions and the associated GSR register
multiple levels of global registers, controlled by the GL register
Sun's 64-bit MMU architecture
privileged instructions ALLCLEAN, OTHERW, NORMALW, and INVALW
access to the VER register is now hyperprivileged
the SIR instruction is now hyperprivileged
In 2007, Sun released an updated specification, UltraSPARC Architecture 2007, to which the UltraSPARC T2 implementation complied.
In August 2012, Oracle Corporation made available a new specification, Oracle SPARC Architecture 2011, which besides the overall update of the reference, adds the VIS 3 instruction set extensions and hyperprivileged mode to the 2007 specification.
In October 2015, Oracle released SPARC M7, the first processor based on the new Oracle SPARC Architecture 2015 specification. This revision includes VIS 4 instruction set extensions and hardware-assisted encryption and silicon secured memory (SSM).
SPARC architecture has provided continuous application binary compatibility from the first SPARC V7 implementation in 1987 through the Sun UltraSPARC Architecture implementations.
Among various implementations of SPARC, Sun's SuperSPARC and UltraSPARC-I were very popular, and were used as reference systems for SPEC CPU95 and CPU2000 benchmarks. The 296 MHz UltraSPARC-II is the reference system for the SPEC CPU2006 benchmark.
Architecture
SPARC is a load/store architecture (also known as a register-register architecture); except for the load/store instructions used to access memory, all instructions operate on the registers.
Registers
The SPARC architecture has an overlapping register window scheme. At any instant, 32 general purpose registers are visible. A Current Window Pointer (CWP) variable in the hardware points to the current set. The total size of the register file is not part of the architecture, allowing more registers to be added as the technology improves, up to a maximum of 32 windows in SPARC v7 and v8 as CWP is 5 bits and is part of the PSR register.
In SPARC v7 and v8, CWP is usually decremented by the SAVE instruction (used during a procedure call to open a new stack frame and switch the register window), and incremented by the RESTORE instruction (switching back to the caller's window before returning from the procedure). Trap events (interrupts, exceptions or TRAP instructions) and RETT instructions (returning from traps) also change the CWP.
For SPARC-V9, the CWP register is decremented during a RESTORE instruction and incremented during a SAVE instruction. This is the opposite of PSR.CWP's behavior in SPARC-V8. The change has no effect on nonprivileged instructions.
SPARC registers are shown in the figure above.
Instruction formats
All SPARC instructions occupy a full 32 bit word and start on a word boundary. Four formats are used, distinguished by the first two bits. All arithmetic and logical instructions have 2 source operands and 1 destination operand.
The SETHI instruction copies its 22-bit immediate operand into the high-order 22 bits of the specified register and sets each of the low-order 10 bits to 0.
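Because SETHI supplies only the upper 22 bits, a 32-bit constant is conventionally built with a SETHI/OR pair using the assembler's %hi and %lo operators. The Python lines below merely reproduce that arithmetic to show how the two halves combine; the register %g1 in the comments is an arbitrary choice for illustration.

def hi(value):                 # %hi(value): the upper 22 bits, as placed by SETHI
    return (value >> 10) & 0x3FFFFF

def lo(value):                 # %lo(value): the lower 10 bits, supplied by a following OR or ADD
    return value & 0x3FF

const = 0xDEADBEEF
after_sethi = hi(const) << 10           # sethi %hi(0xDEADBEEF), %g1
after_or    = after_sethi | lo(const)   # or    %g1, %lo(0xDEADBEEF), %g1
assert after_or == const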
In the ALU register format, both sources are registers; in the ALU immediate format, one source is a register and the other is a constant in the range -4096 to +4095. Bit 13 selects between them. In both cases, the destination is always a register.
Branch format instructions perform control transfers or conditional branches. The icc or fcc field specifies the kind of branch. The 22-bit displacement field gives the relative address of the target in words, so conditional branches can go forward or backward up to 8 megabytes. The ANNUL (A) bit is used to cancel some delay slots: if it is 0 in a conditional branch, the delay slot is executed as usual; if it is 1, the delay slot is executed only if the branch is taken, and is skipped (annulled) if the branch is not taken.
The CALL instruction uses a 30-bit program counter-relative word offset. This is enough to reach any instruction within 4 gigabytes of the caller, which on a 32-bit implementation is the entire address space. The CALL instruction deposits the return address in register R15, also known as output register O7.
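A minimal sketch of the CALL encoding, assuming the 32-bit case in which the shifted word displacement wraps modulo 2^32 and both addresses are word-aligned:

MASK32 = 0xFFFFFFFF

def encode_call(pc, target):
    """Build the 32-bit CALL instruction word: op field 01, then a 30-bit
    word displacement relative to the address of the CALL itself."""
    disp30 = ((target - pc) >> 2) & 0x3FFFFFFF
    return (0b01 << 30) | disp30

def call_target(pc, instruction):
    disp30 = instruction & 0x3FFFFFFF
    return (pc + (disp30 << 2)) & MASK32     # wraps around the 4 GB address space

insn = encode_call(pc=0x1000, target=0x2000)
assert call_target(0x1000, insn) == 0x2000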
Just like the arithmetic instructions, the SPARC architecture uses two different formats for load and store instructions. The first format is used for instructions that use one or two registers as the effective address. The second format is used for instructions that use an integer constant as the effective address.
Most arithmetic instructions come in pairs, with one version setting the NZVC condition code bits and the other leaving them unchanged. This gives the compiler a way to move instructions around when trying to fill delay slots.
SPARC v7 does not have multiplication or division instructions, but it does have MULSCC, which performs one step of a multiplication, testing one bit and conditionally adding the multiplicand to the product. This was because MULSCC can complete in one clock cycle, in keeping with the RISC philosophy.
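MULSCC is one step of the classic shift-and-add algorithm: each step examines one bit of the multiplier and conditionally adds the shifted multiplicand into a running product. The loop below shows that algorithm in plain Python to illustrate why a 32×32-bit multiply takes a sequence of 32 such single-cycle steps; it does not reproduce MULSCC's exact use of the Y register and the condition codes.

def shift_and_add_multiply(multiplicand, multiplier, bits=32):
    """Unsigned shift-and-add multiplication, one bit of the multiplier per step."""
    product = 0
    for step in range(bits):
        if (multiplier >> step) & 1:          # test one bit of the multiplier
            product += multiplicand << step   # conditionally add the shifted multiplicand
    return product

assert shift_and_add_multiply(1234, 5678) == 1234 * 5678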
SPARC architecture licensees
The following organizations have licensed the SPARC architecture:
Afara Websystems
Bipolar Integrated Technology (BIT)
Cypress Semiconductor
European Space Research and Technology Center (ESTEC)
Fujitsu (and its Fujitsu Microelectronics subsidiary)
Gaisler Research
HAL Computer Systems
Hyundai
LSI Logic
Matra Harris Semiconductors (MHS)
Matsushita Electrical Industrial Co.
Meiko Scientific
Metaflow Technologies
Philips Electronics
Prisma
Ross Technology
Solbourne Computer
Systems & Processes Engineering Corporation (SPEC)
TEMIC
Weitek
Implementations
Operating system support
SPARC machines have generally used Sun's SunOS, Solaris, or OpenSolaris including derivatives illumos and OpenIndiana, but other operating systems have also been used, such as NeXTSTEP, RTEMS, FreeBSD, OpenBSD, NetBSD, and Linux.
In 1993, Intergraph announced a port of Windows NT to the SPARC architecture, but it was later cancelled.
In October 2015, Oracle announced a "Linux for SPARC reference platform".
Open source implementations
Several fully open source implementations of the SPARC architecture exist:
LEON, a 32-bit radiation-tolerant, SPARC V8 implementation, designed especially for space use. Source code is written in VHDL, and licensed under the GPL.
OpenSPARC T1, released in 2006, a 64-bit, 32-thread implementation conforming to the UltraSPARC Architecture 2005 and to SPARC Version 9 (Level 1). Source code is written in Verilog, and licensed under many licenses. Most OpenSPARC T1 source code is licensed under the GPL. Source based on existent open source projects will continue to be licensed under their current licenses. Binary programs are licensed under a binary software license agreement.
S1, a 64-bit Wishbone compliant CPU core based on the OpenSPARC T1 design. It is a single UltraSPARC v9 core capable of 4-way SMT. Like the T1, the source code is licensed under the GPL.
OpenSPARC T2, released in 2008, a 64-bit, 64-thread implementation conforming to the UltraSPARC Architecture 2007 and to SPARC Version 9 (Level 1). Source code is written in Verilog, and licensed under many licenses. Most OpenSPARC T2 source code is licensed under the GPL. Source based on existing open source projects will continue to be licensed under their current licenses. Binary programs are licensed under a binary Software License Agreement.
A fully open source simulator for the SPARC architecture also exists:
RAMP Gold, a 32-bit, 64-thread SPARC Version 8 implementation, designed for FPGA-based architecture simulation. RAMP Gold is written in ~36,000 lines of SystemVerilog, and licensed under the BSD licenses.
Supercomputers
For HPC loads Fujitsu builds specialized SPARC64 fx processors with a new instruction extensions set, called HPC-ACE (High Performance Computing – Arithmetic Computational Extensions).
Fujitsu's K computer ranked first in the TOP500 June 2011 and November 2011 lists. It combines 88,128 SPARC64 VIIIfx CPUs, each with eight cores, for a total of 705,024 cores, almost twice as many as any other system in the TOP500 at that time. The K Computer was more powerful than the next five systems on the list combined, and had the highest performance-to-power ratio of any supercomputer system. It also placed in the Green500 June 2011 list, with a score of 824.56 MFLOPS/W. In the November 2012 release of TOP500, the K computer ranked third, using by far the most power of the top three, and it also appeared in the corresponding Green500 release. Newer HPC processors, IXfx and XIfx, were included in recent PRIMEHPC FX10 and FX100 supercomputers.
Tianhe-2 (ranked first in the TOP500 as of November 2014) has a number of nodes with Galaxy FT-1500 OpenSPARC-based processors developed in China. However, those processors did not contribute to the LINPACK score.
See also
ERC32 — based on SPARC V7 specification
Ross Technology, Inc. — a SPARC microprocessor developer during the 1980s and 1990s
Sparcle — a modified SPARC with multiprocessing support used by the MIT Alewife project
LEON — a space rated SPARC V8 processor.
R1000 — a Russian quad-core microprocessor based on SPARC V9 specification
Galaxy FT-1500 — a Chinese 16-core OpenSPARC based processor
References
External links
SPARC International, Inc.
SPARC Technical Documents
OpenSPARC Architecture specification
Hypervisor/Sun4v Reference Materials
Fujitsu SPARC64 V, VI, VII, VIIIfx, IXfx Extensions and X / X+ Specification
Fujitsu SPARC Roadmap
SPARC processor images and descriptions
The Rough Guide to MBus Modules (SuperSPARC, hyperSPARC)
Computer-related introductions in 1985
Instruction set architectures
Sparc
Sun microprocessors
32-bit computers
64-bit computers |
37166 | https://en.wikipedia.org/wiki/Openlaw | Openlaw | Openlaw is a project at the Berkman Klein Center for Internet & Society at Harvard Law School aimed at releasing case arguments under a copyleft license, in order to encourage public suggestions for improvement.
Berkman lawyers specialise in cyberlaw—hacking, copyright, encryption and so on—and the centre has strong ties with the EFF and the open source software community.
In 1998 faculty member Lawrence Lessig, now at Stanford Law School, was asked by online publisher Eldritch Press to mount a legal challenge to US copyright law. Eldritch takes books whose copyright has expired and publishes them on the Web,
but legislation called the Sonny Bono Copyright Term Extension Act extended copyright from 50 to 70 years after the author's death, cutting off its supply of new material.
Lessig invited law students at Harvard and elsewhere to help craft legal arguments challenging the new law on an online forum, which evolved into Open Law.
Normal law firms write arguments the way commercial software companies write code. Lawyers discuss a case behind closed doors, and although their final product is released in court, the discussions or "source code" that produced it remain secret. In contrast, Open Law crafts its arguments in public and releases them under a copyleft. "We deliberately used free software as a model," said Wendy Seltzer, who took over Open Law when Lessig moved to Stanford. Around 50 legal scholars worked on Eldritch's case, and Open Law has taken other cases, too.
"The gains are much the same as for software," Seltzer says. "Hundreds of people scrutinise the 'code' for bugs, and make suggestions how to fix it. And people will take underdeveloped parts of the argument, work on them, then patch them in." Armed with arguments crafted in this way, OpenLaw took Eldritch's case—deemed unwinnable at the outset—right through the system to the Supreme Court. The case, Eldred v. Ashcroft, lost in 2003.
Among the drawbacks to this approach: the arguments are made in public from the start, so OpenLaw can't spring a surprise in court. Nor can it take on cases where confidentiality is important. But where there's a strong public interest element, open sourcing has big advantages. Citizens' rights groups, for example, have taken parts of Open Law's legal arguments and used them elsewhere. "People use them on letters to Congress, or put them on flyers," Seltzer says.
Read further
Open Law project
Legal Research 2.0: the Power of a Million Attorneys. This article was inspired in part by OpenLaw and has spawned The Wiki Legal Journal, a site set up by members of the Wake Forest Law Review where authors can submit papers for critique in a wiki environment.
This modified article was originally written by New Scientist magazine (see https://www.newscientist.com/hottopics/copyleft/) and released under the copyleft license.
Copyright law
Harvard University
Copyleft |
37310 | https://en.wikipedia.org/wiki/Federal%20Standard%201037C | Federal Standard 1037C | Federal Standard 1037C, titled Telecommunications: Glossary of Telecommunication Terms, is a United States Federal Standard issued by the General Services Administration pursuant to the Federal Property and Administrative Services Act of 1949, as amended.
This document provides federal departments and agencies a comprehensive source of definitions of terms used in telecommunications and directly related fields by international and U.S. government telecommunications specialists.
As a publication of the U.S. government, prepared by an agency of the U.S. government, it appears to be mostly available as a public domain resource, but a few items are derived from copyrighted sources: where this is the case, there is an attribution to the source.
This standard was superseded in 2001 by American National Standard T1.523-2001, Telecom Glossary 2000, which is published by ATIS. The old standard is still frequently used, because the new standard is protected by copyright, as usual for ANSI standards.
A newer proposed standard is the "ATIS Telecom Glossary 2011", ATIS-0100523.2011.
See also
Automatic message exchange
Bilateral synchronization
Decrypt
List of telecommunications encryption terms
List of telecommunications terminology
Net operation
Online and offline
References
External links
ATIS Telecom Glossary 2000 T1.523-2001 (successor)
Development Site for proposed Revisions to American National Standard T1.523-2001
Glossaries
Publications of the United States government
Reference works in the public domain
Telecommunications standards |
37314 | https://en.wikipedia.org/wiki/Cypherpunk | Cypherpunk | A cypherpunk is any individual advocating widespread use of strong cryptography and privacy-enhancing technologies as a route to social and political change. Originally communicating through the Cypherpunks electronic mailing list, informal groups aimed to achieve privacy and security through proactive use of cryptography. Cypherpunks have been engaged in an active movement since at least the late 1980s.
History
Before the mailing list
Until about the 1970s, cryptography was mainly practiced in secret by military or spy agencies. However, that changed when two publications brought it into public awareness: the US government publication of the Data Encryption Standard (DES), a block cipher which became very widely used; and the first publicly available work on public-key cryptography, by Whitfield Diffie and Martin Hellman.
The technical roots of Cypherpunk ideas have been traced back to work by cryptographer David Chaum on topics such as anonymous digital cash and pseudonymous reputation systems, described in his paper "Security without Identification: Transaction Systems to Make Big Brother Obsolete" (1985).
In the late 1980s, these ideas coalesced into something like a movement.
Etymology and the Cypherpunks mailing list
In late 1992, Eric Hughes, Timothy C. May and John Gilmore founded a small group that met monthly at Gilmore's company Cygnus Solutions in the San Francisco Bay Area, and was humorously termed cypherpunks by Jude Milhon at one of the first meetings - derived from cipher and cyberpunk. In November 2006, the word was added to the Oxford English Dictionary.
The Cypherpunks mailing list was started in 1992, and by 1994 had 700 subscribers. At its peak, it was a very active forum with technical discussion ranging over mathematics, cryptography, computer science, political and philosophical discussion, personal arguments and attacks, etc., with some spam thrown in. An email from John Gilmore reports an average of 30 messages a day from December 1, 1996 to March 1, 1999, and suggests that the number was probably higher earlier. The number of subscribers is estimated to have reached 2000 in the year 1997.
In early 1997, Jim Choate and Igor Chudov set up the Cypherpunks Distributed Remailer, a network of independent mailing list nodes intended to eliminate the single point of failure inherent in a centralized list architecture. At its peak, the Cypherpunks Distributed Remailer included at least seven nodes. By mid-2005, al-qaeda.net ran the only remaining node. In mid 2013, following a brief outage, the al-qaeda.net node's list software was changed from Majordomo to GNU Mailman and subsequently the node was renamed to cpunks.org. The CDR architecture is now defunct, though the list administrator stated in 2013 that he was exploring a way to integrate this functionality with the new mailing list software.
For a time, the cypherpunks mailing list was a popular tool with mailbombers, who would subscribe a victim to the mailing list in order to cause a deluge of messages to be sent to him or her. (This was usually done as a prank, in contrast to the style of terrorist referred to as a mailbomber.) This prompted the mailing list sysop(s) to institute a reply-to-subscribe system. Approximately two hundred messages a day was typical for the mailing list, divided between personal arguments and attacks, political discussion, technical discussion, and early spam.
The cypherpunks mailing list had extensive discussions of the public policy issues related to cryptography and on the politics and philosophy of concepts such as anonymity, pseudonyms, reputation, and privacy. These discussions continue both on the remaining node and elsewhere as the list has become increasingly moribund.
Events such as the GURPS Cyberpunk raid lent weight to the idea that private individuals needed to take steps to protect their privacy. In its heyday, the list discussed public policy issues related to cryptography, as well as more practical nuts-and-bolts mathematical, computational, technological, and cryptographic matters. The list had a range of viewpoints and there was probably no completely unanimous agreement on anything. The general attitude, though, definitely put personal privacy and personal liberty above all other considerations.
Early discussion of online privacy
The list was discussing questions about privacy, government monitoring, corporate control of information, and related issues in the early 1990s that did not become major topics for broader discussion until at least ten years later. Some list participants were highly radical on these issues.
Those wishing to understand the context of the list might refer to the history of cryptography; in the early 1990s, the US government considered cryptography software a munition for export purposes. (PGP source code was published as a paper book to bypass these regulations and demonstrate their futility.) In 1992, a deal between NSA and SPA allowed export of cryptography based on 40-bit RC2 and RC4, which was considered relatively weak (and, especially after SSL was created, the subject of many contests to break it). The US government had also tried to subvert cryptography through schemes such as Skipjack and key escrow. It was also not widely known that all communications were logged by government agencies (as would later be revealed during the NSA and AT&T scandals), though this was taken as an obvious axiom by list members.
The original cypherpunk mailing list, and the first list spin-off, coderpunks, were originally hosted on John Gilmore's toad.com, but after a falling out with the sysop over moderation, the list was migrated to several cross-linked mail-servers in what was called the "distributed mailing list." The coderpunks list, open by invitation only, existed for a time. Coderpunks took up more technical matters and had less discussion of public policy implications. There are several lists today that can trace their lineage directly to the original Cypherpunks list: the cryptography list (cryptography@metzdowd.com), the financial cryptography list (fc-announce@ifca.ai), and a small group of closed (invitation-only) lists as well.
Toad.com continued to run with the existing subscriber list, those that didn't unsubscribe, and was mirrored on the new distributed mailing list, but messages from the distributed list didn't appear on toad.com. As the list faded in popularity, so too did it fade in the number of cross-linked subscription nodes.
To some extent, the cryptography list acts as a successor to cypherpunks; it has many of the people and continues some of the same discussions. However, it is a moderated list, considerably less zany and somewhat more technical. A number of current systems in use trace to the mailing list, including Pretty Good Privacy, /dev/random in the Linux kernel (the actual code has been completely reimplemented several times since then) and today's anonymous remailers.
Main principles
The basic ideas can be found in A Cypherpunk's Manifesto (Eric Hughes, 1993): "Privacy is necessary for an open society in the electronic age. ... We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy ... We must defend our own privacy if we expect to have any. ... Cypherpunks write code. We know that someone has to write software to defend privacy, and ... we're going to write it."
Some are or were quite senior people at major hi-tech companies and others are well-known researchers (see list with affiliations below).
The first mass media discussion of cypherpunks was in a 1993 Wired article by Steven Levy titled Crypto Rebels.
The three masked men on the cover of that edition of Wired were prominent cypherpunks Tim May, Eric Hughes and John Gilmore.
Later, Levy wrote a book, Crypto: How the Code Rebels Beat the Government – Saving Privacy in the Digital Age,
covering the crypto wars of the 1990s in detail. "Code Rebels" in the title is almost synonymous with cypherpunks.
The term cypherpunk is mildly ambiguous. In most contexts it means anyone advocating cryptography as a tool for social change, social impact and expression. However, it can also be used to mean a participant in the Cypherpunks electronic mailing list described below. The two meanings obviously overlap, but they are by no means synonymous.
Documents exemplifying cypherpunk ideas include Timothy C. May's The Crypto Anarchist Manifesto (1992) and The Cyphernomicon (1994), A Cypherpunk's Manifesto.
Privacy of communications
A very basic cypherpunk issue is privacy in communications and data retention. John Gilmore said he wanted "a guarantee -- with physics and mathematics, not with laws -- that we can give ourselves real privacy of personal communications."
Such guarantees require strong cryptography, so cypherpunks are fundamentally opposed to government policies attempting to control the usage or export of cryptography, which remained an issue throughout the late 1990s. The Cypherpunk Manifesto stated "Cypherpunks deplore regulations on cryptography, for encryption is fundamentally a private act."
This was a central issue for many cypherpunks. Most were passionately opposed to various government attempts to limit cryptography — export laws, promotion of limited key length ciphers, and especially escrowed encryption.
Anonymity and pseudonyms
The questions of anonymity, pseudonymity and reputation were also extensively discussed.
Arguably, the possibility of anonymous speech and publication is vital for an open society and genuine freedom of speech — this is the position of most cypherpunks. That the Federalist Papers were originally published under a pseudonym is a commonly-cited example.
Privacy and self-revelation
A whole set of issues around privacy and the scope of self-revelation were perennial topics on the list.
Consider a young person who gets "carded" when he or she enters a bar and produces a driver's license as proof of age. The license includes things like full name and home address; these are completely irrelevant to the question of legal drinking. However, they could be useful to a lecherous member of bar staff who wants to stalk a hot young customer, or to a thief who cleans out the apartment when an accomplice in the bar tells him you look well off and are not at home. Is a government that passes a drinking age law morally obligated to create a privacy-protecting form of ID to go with it, one that only shows you can legally drink without revealing anything else about you? In the absence of that, is it ethical to acquire a bogus driver's license to protect your privacy? For most cypherpunks, the answer to both those questions is "Yes, obviously!"
What about a traffic cop who asks for your driver's license and vehicle registration? Should there be some restrictions on what he or she learns about you? Or a company that issues a frequent flier or other reward card, or requires registration to use its web site? Or cards for toll roads that potentially allow police or others to track your movements? Or cameras that record license plates or faces on a street? Or phone company and Internet records? In general, how do we manage privacy in an electronic age?
Cypherpunks naturally consider suggestions of various forms of national uniform identification card too dangerous; the risks of abuse far outweigh any benefits.
Censorship and monitoring
In general, cypherpunks opposed the censorship and monitoring from government and police.
In particular, the US government's Clipper chip scheme for escrowed encryption of telephone conversations (encryption supposedly secure against most attackers, but breakable by government) was seen as anathema by many on the list. This was an issue that provoked strong opposition and brought many new recruits to the cypherpunk ranks. List participant Matt Blaze found a serious flaw in the scheme, helping to hasten its demise.
Steven Schear first suggested the warrant canary in 2002 to thwart the secrecy provisions of court orders and national security letters. Warrant canaries have since gained commercial acceptance.
Hiding the act of hiding
An important set of discussions concerns the use of cryptography in the presence of oppressive authorities. As a result, Cypherpunks have discussed and improved steganographic methods that hide the use of crypto itself, or that lead interrogators to believe that they have forcibly extracted hidden information from a subject. For instance, Rubberhose was a tool that partitioned and intermixed secret data on a drive with fake secret data, each of which is accessed via a different password. Interrogators, having extracted a password, are led to believe that they have indeed unlocked the desired secrets, whereas in reality the actual data is still hidden. In other words, even its presence is hidden. Likewise, cypherpunks have also discussed under what conditions encryption may be used without being noticed by network monitoring systems installed by oppressive regimes.
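A toy construction shows how a single stored ciphertext can plausibly decrypt to different contents under different keys, which is the flavour of deniability discussed above. The one-time-pad trick in the Python sketch below is only an illustration; it is not how Rubberhose or any real deniable storage system is built.

import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

real_secret  = b"meet at the dock at dawn"
decoy_secret = b"shopping: tea and bread!"
assert len(real_secret) == len(decoy_secret)         # one-time pads require equal lengths

ciphertext = secrets.token_bytes(len(real_secret))   # random-looking bytes stored on disk
real_key   = xor(ciphertext, real_secret)            # revealed only voluntarily
decoy_key  = xor(ciphertext, decoy_secret)           # surrendered under duress

assert xor(ciphertext, real_key)  == real_secret
assert xor(ciphertext, decoy_key) == decoy_secret    # same ciphertext, plausible decoy

Real systems face the much harder problem of making the decoy indistinguishable in size, layout and access patterns, which is where most of the engineering effort goes.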
Activities
As the Manifesto says, "Cypherpunks write code"; the notion that good ideas need to be implemented, not just discussed, is very much part of the culture of the mailing list. John Gilmore, whose site hosted the original cypherpunks mailing list, wrote: "We are literally in a race between our ability to build and deploy technology, and their ability to build and deploy laws and treaties. Neither side is likely to back down or wise up until it has definitively lost the race."
Software projects
Anonymous remailers such as the Mixmaster Remailer were almost entirely a cypherpunk development. Among the other projects they have been involved in were PGP for email privacy, FreeS/WAN for opportunistic encryption of the whole net, Off-the-record messaging for privacy in Internet chat, and the Tor project for anonymous web surfing.
Hardware
In 1998, the Electronic Frontier Foundation, with assistance from the mailing list, built a $200,000 machine that could brute-force a Data Encryption Standard key in a few days. The project demonstrated that DES was, without question, insecure and obsolete, in sharp contrast to the US government's recommendation of the algorithm.
Expert panels
Cypherpunks also participated, along with other experts, in several reports on cryptographic matters.
One such paper was "Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security". It suggested 75 bits was the minimum key size to allow an existing cipher to be considered secure and kept in service. At the time, the Data Encryption Standard with 56-bit keys was still a US government standard, mandatory for some applications.
Other papers provided critical analysis of government schemes. "The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption" evaluated escrowed encryption proposals, and "Comments on the Carnivore System Technical Review" looked at an FBI scheme for monitoring email.
Cypherpunks provided significant input to the 1996 National Research Council report on encryption policy,
Cryptography's Role In Securing the Information Society (CRISIS).
This report, commissioned by the U.S. Congress in 1993, was developed by an expert committee through extensive nationwide hearings with interested stakeholders. It recommended a gradual relaxation of the existing U.S. government restrictions on encryption. Like many such study reports, its conclusions were largely ignored by policy-makers. Later events, such as the final rulings in the cypherpunks lawsuits, forced a more complete relaxation of the unconstitutional controls on encryption software.
Lawsuits
Cypherpunks have filed a number of lawsuits, mostly suits against the US government alleging that some government action is unconstitutional.
Phil Karn sued the State Department in 1994 over cryptography export controls after they ruled that, while the book Applied Cryptography could legally be exported, a floppy disk containing a verbatim copy of code printed in the book was legally a munition and required an export permit, which they refused to grant. Karn also appeared before both House and Senate committees looking at cryptography issues.
Daniel J. Bernstein, supported by the EFF, also sued over the export restrictions, arguing that preventing publication of cryptographic source code is an unconstitutional restriction on freedom of speech. He won, effectively overturning the export law. See Bernstein v. United States for details.
Peter Junger also sued on similar grounds, and won.
Civil disobedience
Cypherpunks encouraged civil disobedience, in particular against US law on the export of cryptography. Until 1997, cryptographic code was legally a munition and fell under ITAR, and the key length restrictions in the EAR were not removed until 2000.
In 1995 Adam Back wrote a version of the RSA algorithm for public-key cryptography in three lines of Perl and suggested people use it as an email signature file:
#!/bin/perl -sp0777i<X+d*lMLa^*lN%0]dsXx++lMlN/dsM0<j]dsj
$/=unpack('H*',$_);$_=`echo 16dio\U$k"SK$/SM$n\EsN0p[lN*1
lK[d2%Sa2/d0$^Ixp"|dc`;s/\W//g;$_=pack('H*',/((..)*)$/)
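For context, the obfuscated signature above pipes its arithmetic to the Unix dc calculator, which carries out the modular exponentiation at the heart of RSA on arbitrarily large numbers. The following is a hedged, purely illustrative Perl sketch of textbook RSA with tiny example parameters; it is not Back's code, and keys this small offer no security:
#!/usr/bin/perl
use strict;
use warnings;
# Toy RSA parameters from the classic small-prime example (p=61, q=53).
my ($n, $e, $d) = (3233, 17, 2753);   # modulus, public exponent, private exponent
# Modular exponentiation by repeated squaring: returns base**exp mod m.
sub modpow {
    my ($base, $exp, $m) = @_;
    my $result = 1;
    $base %= $m;
    while ($exp > 0) {
        $result = ($result * $base) % $m if $exp & 1;
        $base   = ($base * $base) % $m;
        $exp >>= 1;
    }
    return $result;
}
my $plain     = 65;                        # message, must be smaller than n
my $cipher    = modpow($plain, $e, $n);    # encrypt: c = m**e mod n
my $recovered = modpow($cipher, $d, $n);   # decrypt: m = c**d mod n
print "cipher=$cipher recovered=$recovered\n";   # recovered equals 65 again
Back's signature achieves the same effect in three lines only because dc supplies the arbitrary-precision arithmetic.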
Vince Cate put up a web page that invited anyone to become an international arms trafficker; every time someone clicked on the form, an export-restricted item — originally PGP, later a copy of Back's program — would be mailed from a US server to one in Anguilla.
Cypherpunk fiction
In Neal Stephenson's novel Cryptonomicon many characters are on the "Secret Admirers" mailing list. This is fairly obviously based on the cypherpunks list, and several well-known cypherpunks are mentioned in the acknowledgements. Much of the plot revolves around cypherpunk ideas; the leading characters are building a data haven which will allow anonymous financial transactions, and the book is full of cryptography.
According to the author, however, the book's title is not based on the Cyphernomicon, an online cypherpunk FAQ document, despite the similarity.
Legacy
Cypherpunk achievements would later also be used in the Canadian e-wallet MintChip and in the creation of bitcoin. The movement was an inspiration for CryptoParty decades later, to such an extent that the Cypherpunk Manifesto is quoted at the header of its wiki, and Eric Hughes delivered the keynote address at the Amsterdam CryptoParty on 27 August 2012.
Notable cypherpunks
Cypherpunks list participants included many notable computer industry figures. Most were list regulars, although not all would call themselves "cypherpunks". The following is a list of noteworthy cypherpunks and their achievements:
Marc Andreessen: co-founder of Netscape, which invented SSL
Jacob Appelbaum: Former Tor Project employee, political advocate
Julian Assange: WikiLeaks founder, deniable cryptography inventor, journalist; co-author of Underground; author of Cypherpunks: Freedom and the Future of the Internet; member of the International Subversives. Assange has stated that he joined the list in late 1993 or early 1994. An archive of his cypherpunks mailing list posts is at the Mailing List Archives.
Derek Atkins: computer scientist, computer security expert, and one of the people who factored RSA-129
Adam Back: inventor of Hashcash and of NNTP-based Eternity networks; co-founder of Blockstream
Jim Bell: author of Assassination Politics
Steven Bellovin: Bell Labs researcher; later Columbia professor; Chief Technologist for the US Federal Trade Commission in 2012
Matt Blaze: Bell Labs researcher; later professor at University of Pennsylvania; found flaws in the Clipper Chip
Eric Blossom: designer of the Starium cryptographically secured mobile phone; founder of the GNU Radio project
Jon Callas: technical lead on OpenPGP specification; co-founder and Chief Technical Officer of PGP Corporation; co-founder with Philip Zimmermann of Silent Circle
Bram Cohen: creator of BitTorrent
Matt Curtin: founder of Interhack Corporation; first faculty advisor of the Ohio State University Open Source Club; lecturer at Ohio State University
Hugh Daniel (deceased): former Sun Microsystems employee; manager of the FreeS/WAN project (an early and important freeware IPsec implementation)
Suelette Dreyfus: deniable cryptography co-inventor, journalist, co-author of Underground
Hal Finney (deceased): cryptographer; main author of PGP 2.0 and the core crypto libraries of later versions of PGP; designer of RPOW
Eva Galperin: malware researcher and security advocate; Electronic Frontier Foundation activist
John Gilmore*: Sun Microsystems' fifth employee; co-founder of the Cypherpunks and the Electronic Frontier Foundation; project leader for FreeS/WAN
Mike Godwin: Electronic Frontier Foundation lawyer; electronic rights advocate
Ian Goldberg*: professor at University of Waterloo; co-designer of the off-the-record messaging protocol
Rop Gonggrijp: founder of XS4ALL; co-creator of the Cryptophone
Matthew D. Green, influential in the development of the Zcash system
Sean Hastings: founding CEO of Havenco; co-author of the book God Wants You Dead
Johan Helsingius: creator and operator of Penet remailer
Nadia Heninger: assistant professor at University of Pennsylvania; security researcher
Robert Hettinga: founder of the International Conference on Financial Cryptography; originator of the idea of Financial cryptography as an applied subset of cryptography
Mark Horowitz: author of the first PGP key server
Tim Hudson: co-author of SSLeay, the precursor to OpenSSL
Eric Hughes: founding member of Cypherpunks; author of A Cypherpunk's Manifesto
Peter Junger (deceased): law professor at Case Western Reserve University
Paul Kocher: president of Cryptography Research, Inc.; co-author of the SSL 3.0 protocol
Ryan Lackey: co-founder of HavenCo, the world's first data haven
Brian LaMacchia: designer of XKMS; research head at Microsoft Research
Ben Laurie: founder of The Bunker, core OpenSSL team member, Google engineer.
Jameson Lopp: software engineer, CTO of Casa
Morgan Marquis-Boire: researcher, security engineer, and privacy activist
Matt Thomlinson (phantom): security engineer, leader of Microsoft's security efforts on Windows, Azure and Trustworthy Computing, CISO at Electronic Arts
Timothy C. May (deceased): former Assistant Chief Scientist at Intel; author of A Crypto Anarchist Manifesto and the Cyphernomicon; a founding member of the Cypherpunks mailing list
Jude Milhon (deceased; aka "St. Jude"): a founding member of the Cypherpunks mailing list, credited with naming the group; co-creator of Mondo 2000 magazine
Vincent Moscaritolo: founder of Mac Crypto Workshop; Principal Cryptographic Engineer for PGP Corporation; co-founder of Silent Circle and 4th-A Technologies, LLC
Sameer Parekh: former CEO of C2Net and co-founder of the CryptoRights Foundation human rights non-profit
Vipul Ved Prakash: co-founder of Sense/Net; author of Vipul's Razor; founder of Cloudmark
Runa Sandvik: Tor developer, political advocate
Len Sassaman (deceased): maintainer of the Mixmaster Remailer software; researcher at Katholieke Universiteit Leuven; biopunk
Steven Schear: creator of the warrant canary; street performer protocol; founding member of the International Financial Cryptographer's Association and GNURadio; team member at Counterpane; former Director at data security company Cylink and MojoNation
Bruce Schneier*: well-known security author; founder of Counterpane
Richard Stallman: founder of Free Software Foundation, privacy advocate
Nick Szabo: inventor of smart contracts; designer of bit gold, a precursor to Bitcoin
Wei Dai: creator of b-money, an early cryptocurrency proposal, and co-proposer of the VMAC message authentication algorithm. The smallest subunit of Ether, the wei, is named after him.
Zooko Wilcox-O'Hearn: DigiCash and MojoNation developer; founder of Zcash; co-designer of Tahoe-LAFS
Jillian C. York: Director of International Freedom of Expression at the Electronic Frontier Foundation (EFF)
John Young: anti-secrecy activist and co-founder of Cryptome
Philip Zimmermann: original creator of PGP v1.0 (1991); co-founder of PGP Inc. (1996); co-founder with Jon Callas of Silent Circle
* indicates someone mentioned in the acknowledgements of Stephenson's Cryptonomicon.
References
Further reading
Andy Greenberg: This Machine Kills Secrets: How WikiLeakers, Cypherpunks, and Hacktivists Aim to Free the World's Information. Dutton Adult, 2012.
Punk
Internet privacy |
37368 | https://en.wikipedia.org/wiki/General%20Atomics%20MQ-1%20Predator | General Atomics MQ-1 Predator | The General Atomics MQ-1 Predator is an American remotely piloted aircraft (RPA) built by General Atomics that was used primarily by the United States Air Force (USAF) and Central Intelligence Agency (CIA). Conceived in the early 1990s for aerial reconnaissance and forward observation roles, the Predator carries cameras and other sensors. It was modified and upgraded to carry and fire two AGM-114 Hellfire missiles or other munitions. The aircraft entered service in 1995, and saw combat in the war in Afghanistan, Pakistan, the NATO intervention in Bosnia, 1999 NATO bombing of Yugoslavia, the Iraq War, Yemen, the 2011 Libyan civil war, the 2014 intervention in Syria, and Somalia.
The USAF describes the Predator as a "Tier II" MALE UAS (medium-altitude, long-endurance unmanned aircraft system). The UAS consists of four aircraft or "air vehicles" with sensors, a ground control station (GCS), and a primary satellite link communication suite. Powered by a Rotax engine and driven by a propeller, the air vehicle can fly to a distant target, loiter overhead for 14 hours, and then return to its base.
The RQ-1 Predator was the primary remotely piloted aircraft used for offensive operations by the USAF and the CIA in Afghanistan and the Pakistani tribal areas from 2001 until the introduction of the MQ-9 Reaper; it has also been deployed elsewhere. Because offensive uses of the Predator are classified by the U.S., U.S. military officials have reported an appreciation for the intelligence and reconnaissance-gathering abilities of RPAs but declined to publicly discuss their offensive use. The United States Air Force retired the Predator in 2018, replacing it with the Reaper.
Civilian applications for drones have included border enforcement, scientific studies, and monitoring wind direction and other characteristics of large forest fires (such as the drone used by the California Air National Guard in the August 2013 Rim Fire).
Development
The Central Intelligence Agency (CIA) and the Pentagon began experimenting with unmanned reconnaissance aircraft (drones) in the early 1980s. The CIA preferred small, lightweight, unobtrusive drones, in contrast to the United States Air Force (USAF). In the early 1990s, the CIA became interested in the "Amber", a drone developed by Leading Systems, Inc. The company's owner, Abraham Karem, was the former chief designer for the Israeli Air Force, and had immigrated to the U.S. in the late 1970s. Karem's company went bankrupt and was bought by a U.S. defense contractor, from whom the CIA secretly bought five drones (now called the "Gnat"). Karem agreed to produce a quiet engine for the vehicle, which had until then sounded like "a lawnmower in the sky". The new development became known as the "Predator".
General Atomics Aeronautical Systems (GA) was awarded a contract to develop the Predator in January 1994, and the initial Advanced Concept Technology Demonstration (ACTD) phase lasted from January 1994 to June 1996. First flight took place on 3 July 1994 at the El Mirage airfield in the Mojave Desert. The aircraft itself was a derivative of the GA Gnat 750. During the ACTD phase, three systems were purchased from GA, comprising twelve aircraft and three ground control stations.
From April through May 1995, the Predator ACTD aircraft were flown as a part of the Roving Sands 1995 exercises in the U.S. The exercise operations were successful which led to the decision to deploy the system to the Balkans later in the summer of 1995.
During the ACTD, Predators were operated by a combined Army/Navy/Air Force/Marine team managed by the Navy's Joint Program Office for Unmanned Aerial Vehicles (JPO-UAV) and first deployed to Gjader, Albania, for operations in the former Yugoslavia in spring 1995.
By the start of the United States Afghan campaign in 2001, the USAF had acquired 60 Predators, but lost 20 of them in action. Few if any of the losses were from enemy action, the worst problem apparently being foul weather, particularly icy conditions. Some critics within the Pentagon saw the high loss rate as a sign of poor operational procedures. In response to the losses caused by cold weather conditions, a few of the later USAF Predators were fitted with de-icing systems, along with an uprated turbocharged engine and improved avionics. This improved "Block 1" version was referred to as the "RQ-1B", or the "MQ-1B" if it carried munitions; the corresponding air vehicle designation was "RQ-1L" or "MQ-1L".
The Predator system was initially designated the RQ-1 Predator. The "R" is the United States Department of Defense designation for reconnaissance and the "Q" refers to an unmanned aircraft system. The "1" describes it as being the first of a series of aircraft systems built for unmanned reconnaissance. Pre-production systems were designated as RQ-1A, while the RQ-1B (not to be confused with the Predator B, which became the MQ-9 Reaper) denotes the baseline production configuration. These are designations of the system as a unit. The actual aircraft themselves were designated RQ-1K for pre-production models, and RQ-1L for production models. In 2002, the USAF officially changed the designation to MQ-1 ("M" for multi-role) to reflect its growing use as an armed aircraft.
Command and sensor systems
During the campaign in the former Yugoslavia, a Predator's pilot would sit with several payload specialists in a van near the runway of the drone's operating base. Direct radio signals controlled the drone's takeoff and initial ascent. Then communications shifted to military satellite networks linked to the pilot's van. Pilots experienced a delay of several seconds between moving their sticks and the drone's response. But by 2000, improvements in communications systems made it possible, at least in theory, to fly the drone remotely from great distances. It was no longer necessary to use close-up radio signals during the Predator's takeoff and ascent. The entire flight could be controlled by satellite from any command and control center with the right equipment. The CIA proposed to attempt over Afghanistan the first fully remote Predator flight operations, piloted from the agency's headquarters at Langley.
The Predator air vehicle and sensors are controlled from the ground control station (GCS) via a C-band line-of-sight data link or a Ku-band satellite data link for beyond-line-of-sight operations. During flight operations the crew in the GCS is a pilot and two sensor operators. The aircraft is equipped with the AN/AAS-52 Multi-spectral Targeting System, a color nose camera (generally used by the pilot for flight control), a variable aperture day-TV camera, and a variable aperture thermographic camera (for low light/night). Previously, Predators were equipped with a synthetic aperture radar for looking through smoke, clouds or haze, but lack of use validated its removal to reduce weight and conserve fuel. The cameras produce full motion video and the synthetic aperture radar produced still frame radar images. There is sufficient bandwidth on the datalink for two video sources to be used together, but only one video source from the sensor ball can be used due to design limitations. Either the daylight variable aperture or the infrared electro-optical sensor may be operated simultaneously with the synthetic aperture radar, if equipped.
All later Predators are equipped with a laser designator that allows the pilot to identify targets for other aircraft and even provide laser guidance for manned aircraft. This laser also designates targets for the AGM-114 Hellfire missiles carried on the MQ-1.
Deployment methodology
Each Predator air vehicle can be disassembled into six modules and loaded into a container nicknamed "the coffin." This enables all system components and support equipment to be rapidly deployed worldwide. The largest component is the ground control station (GCS) which is designed to roll into a C-130 Hercules. The Predator primary satellite link consists of a 6.1-meter (20-ft) satellite dish with associated support equipment. The satellite link provides communications between the GCS and the aircraft when it is beyond line-of-sight and links to networks that disseminate secondary intelligence. The RQ-1A system needs 1,500 by 40 meters (5,000 by 125 ft) of hard surface runway with clear line-of-sight to each end from the GCS to the air vehicles. Initially, all components needed to be located on the same airfield.
The U.S. Air Force has used a concept called "Remote-Split Operations", in which the satellite datalink is placed in a different location and is connected to the GCS through fiber optic cabling. This allows Predators to be launched and recovered by a small "Launch and Recovery Element" and then handed off to a "Mission Control Element" for the rest of the flight. This allows a smaller number of troops to be deployed to a forward location, and consolidates control of the different flights in one location.
The improvements in the MQ-1B production version include an ARC-210 radio, an APX-100 IFF/SIF with mode 4, a glycol-weeping "wet wings" de-icing system, upgraded turbo-charged engine, fuel injection, longer wings, dual alternators as well as other improvements.
On 18 May 2006, the Federal Aviation Administration (FAA) issued a certificate of authorization which will allow the M/RQ-1 and M/RQ-9 aircraft to be used within U.S. civilian airspace to search for survivors of disasters. Requests had been made in 2005 for the aircraft to be used in search and rescue operations following Hurricane Katrina, but because there was no FAA authorization in place at the time, the assets were not used. The Predator's infrared camera with digitally enhanced zoom has the capability of identifying the infrared signature of a human body from an altitude of 3 km (10,000 ft), making the aircraft an ideal search and rescue tool.
The longest declassified Predator flight lasted for 40 hours, 5 minutes. The total flight time reached 1 million hours in April 2010, according to General Atomics Aeronautical Systems Inc.
Armed versions
The USAF BIG SAFARI program office managed the Predator program and was directed on 21 June 2000 to explore armament options. This led to reinforced wings with munitions storage pylons, as well as a laser designator. The RQ-1 conducted its first firing of a Hellfire anti-tank missile on 16 February 2001 over a bombing range near Indian Springs Air Force Station north of Las Vegas, Nevada, with an inert AGM-114C successfully striking a tank target. Then on 21 February 2001 the Predator fired three Hellfire missiles, scoring hits on a stationary tank with all three missiles. Following the February tests, phase two involved more complex tests to hunt for simulated moving targets from greater altitudes with the more advanced AGM-114K version. The armed Predators were put into service with the designation MQ-1A. The Predator gives little warning of attack because it is relatively quiet and the Hellfire is supersonic, so it strikes before it is heard by the target.
In the winter of 2000–2001, after seeing the results of Predator reconnaissance in Afghanistan, Cofer Black, head of the CIA's Counterterrorist Center (CTC), became a vocal advocate of arming the Predator with missiles to target Osama bin Laden in country. He believed that CIA pressure and practical interest were causing the USAF's armed Predator program to be significantly accelerated. Black, and "Richard", who was in charge of the CTC's Bin Laden Issue Station, continued to press during 2001 for a Predator armed with Hellfire missiles.
Further weapons tests occurred between 22 May and 7 June 2001, with mixed results. While missile accuracy was excellent, there were some problems with missile fuzing. In the first week of June, in the Nevada desert, a Hellfire missile was successfully launched on a replica of bin Laden's Afghanistan Tarnak residence. A missile launched from a Predator exploded inside one of the replica's rooms; it was concluded that any people in the room would have been killed. However, the armed Predator was not deployed before the September 11 attacks.
The USAF also investigated using the Predator to drop battlefield ground sensors and to carry and deploy the "Finder" mini-UAV.
Other versions and fate
Two unarmed versions, known as the General Atomics ALTUS were built, ALTUS I for the Naval Postgraduate School and ALTUS II for the NASA ERAST Project in 1997 and 1996, respectively.
Based on the MQ-1 Predator, the General Atomics MQ-1C Gray Eagle was developed for the U.S. Army.
The USAF ordered a total of 259 Predators, and due to retirements and crashes the number in Air Force operation was reduced to 154 as of May 2014. Budget proposals planned to retire the Predator fleet between FY 2015 and 2017 in favor of the larger MQ-9 Reaper, which has greater payload and range. The Predators were to be stored at Davis-Monthan Air Force Base or given to other agencies willing to take them. The U.S. Customs and Border Protection showed interest, but already had higher-performance Reapers and were burdened with operating costs. The U.S. Coast Guard also showed interest in land-based UAV surveillance. Foreign sales were also an option, but the MQ-1 is subject to limitations of the Missile Technology Control Regime because it can be armed; export markets are also limited by the existence of the Reaper. Given the Predator's pending phase-out and its size, weight, and power limitations, the Air Force decided not to pursue upgrades to make it more effective in contested environments, and determined its only use in defended airspace would be as a decoy to draw fire away from other aircraft. Due to airborne surveillance needs after the Islamic State of Iraq and the Levant (ISIL) invaded Iraq, the Predator's retirement was delayed to 2018. MQ-1s will probably be placed in non-recoverable storage at the Boneyard and not sold to allies, although antenna, ground control stations, and other components may be salvaged for continued use on other airframes.
General Atomics completed the final RQ-1 ordered by Italy by October 2015, marking the end of Predator A production after two decades. The last Predator for the USAF was completed in 2011; later Predator aircraft were built on the Predator XP assembly line.
The United States Air Force announced plans to retire the MQ-1 on 9 March 2018. The Predator was officially retired from USAF service in March 2018.
Operational history
As of March 2009, the U.S. Air Force had 195 MQ-1 Predators and 28 MQ-9 Reapers in operation. Predators and Reapers fired missiles 244 times in Iraq and Afghanistan in 2007 and 2008. A report in March 2009 indicated that U.S. Air Force had lost 70 Predators in air crashes during its operational history. Fifty-five were lost to equipment failure, operator error, or weather. Five were shot down in Bosnia, Kosovo, Syria and Iraq. Eleven more were lost to operational accidents on combat missions. In 2012, the Predator, Reaper and Global Hawk were described as "... the most accident-prone aircraft in the Air Force fleet."
On 3 March 2011, the U.S. Air Force took delivery of its last MQ-1 Predator in a ceremony at General Atomics' flight operations facility. Since its first flight in July 1994, the MQ-1 series accumulated over 1,000,000 flight hours and maintained a fleet fully mission capable rate over 90 percent.
On 22 October 2013, the U.S. Air Force's fleets of MQ-1 Predators and MQ-9 Reaper remotely piloted aircraft reached 2,000,000 flight hours. The RPA program began in the mid-1990s, taking 16 years for them to reach 1 million flight hours. The 2 million hour mark was reached just two and a half years after that.
On 9 March 2018, the U.S. Air Force officially retired the MQ-1 Predator from operational service. The aircraft was first operationally deployed in 1995 and in 2011 the last of 268 Predators were delivered to the service, of which just over 100 were still in service by the start of 2018. While the Predator was phased out by the Air Force in favor of the heavier and more capable MQ-9 Reaper, the Predator continues to serve in the MQ-1C Gray Eagle derivative for the U.S. Army as well as with several foreign nations.
Squadrons and operational units
During the initial ACTD phase, the United States Army led the evaluation program, but in April 1996, the Secretary of Defense selected the U.S. Air Force as the operating service for the RQ-1A Predator system. The 3d Special Operations Squadron at Cannon Air Force Base, 11th, 15th, 17th, and 18th Reconnaissance Squadrons, Creech Air Force Base, Nevada, and the Air National Guard's 163d Reconnaissance Wing at March Air Reserve Base, California, currently operate the MQ-1.
In 2005, the U.S. Department of Defense recommended retiring Ellington Field's 147th Fighter Wing's F-16 Fighting Falcon fighter jets (a total of 15 aircraft), which was approved by the Base Realignment and Closure committee. They will be replaced with 12 MQ-1 Predator UAVs, and the new unit should be fully equipped and outfitted by 2009. The wing's combat support arm will remain intact. The 272d Engineering Installation Squadron, an Air National Guard unit currently located off-base, will move into Ellington Field in its place.
The 3d Special Operations Squadron is currently the largest Predator squadron in the United States Air Force.
U.S. Customs and Border Protection was reported in 2013 to be operating 10 Predators and to have requested 14 more.
On 21 June 2009, the United States Air Force announced that it was creating a new MQ-1 squadron at Whiteman Air Force Base that would become operational by February 2011. In September 2011, the U.S. Air National Guard announced that despite current plans for budget cuts, they will continue to operate the Air Force's combat UAVs, including MQ-1B.
On 28 August 2013, a Predator belonging to the 163d Reconnaissance Wing was flying at 18,000 to 20,000 feet over the Rim Fire in California, providing infrared video of lurking fires after receiving emergency approvals. Rules limit the Predator's behavior; it must be accompanied by a manned aircraft, and its camera may only be active above the fire.
In September 2013, the Air Force Special Operations Command tested the ability to rapidly deploy Predator aircraft. Two MQ-1s were loaded into a Boeing C-17 Globemaster III in a cradle system that also carried a control terminal, maintenance tent, and the crew. The test was to prove the UAVs could be deployed and set up at an expeditionary base within four hours of landing. In a recent undisclosed deployment, airmen set up a portable hangar in a tent and a wooden taxiway to operate MQ-1s for a six-week period.
The Balkans
The first overseas deployment took place in the Balkans, from July to November 1995, under the name Nomad Vigil. Operations were based in Gjader, Albania. Four disassembled Predators were flown into Gjadër airbase in a C-130 Hercules. The UAVs were assembled and flown first by civilian contract personnel. The U.S. deployed more than 70 military intelligence personnel. Intelligence collection missions began in July 1995. One of the Predators was lost over Bosnia on 11 August 1995; a second was deliberately destroyed on 14 August after suffering an engine failure over Bosnia, which may have been caused by hostile ground fire. The wreckage of the first Predator was handed over to Russia, according to Serb sources. Its original 60-day stay was extended to 120 days. The following spring, in March 1996, the system was redeployed to the Balkans area and operated out of Taszar, Hungary.
Several others were destroyed in the course of Operation Noble Anvil, the 1999 NATO bombing of Yugoslavia:
One aircraft (serial 95-3017) was lost on 18 April 1999, following fuel system problems and icing.
A second aircraft (serial 95-3019) was lost on 13 May, when it was shot down by a Serbian Strela-1M surface-to-air missile over the village of Biba. A Serbian TV crew videotaped this incident.
A third aircraft (serial number 95-3021) crashed on 20 May near the town of Talinovci, and Serbian news reported that this, too, was the result of anti-aircraft fire.
Afghanistan
In 2000, a joint CIA-DoD effort was agreed to locate Osama bin Laden in Afghanistan. Dubbed "Afghan Eyes", it involved a projected 60-day trial run of Predators over the country. The first experimental flight was held on 7 September 2000. White House security chief Richard A. Clarke was impressed by the resulting video footage; he hoped that the drones might eventually be used to target Bin Laden with cruise missiles or armed aircraft. Clarke's enthusiasm was matched by that of Cofer Black, head of the CIA's Counterterrorist Center (CTC), and Charles Allen, in charge of the CIA's intelligence-collection operations. The three men backed an immediate trial run of reconnaissance flights. Ten out of the ensuing 15 Predator missions over Afghanistan were rated successful. On at least two flights, a Predator spotted a tall man in white robes at bin Laden's Tarnak Farm compound outside Kandahar; the figure was subsequently deemed to be "probably bin Laden". By October 2000, deteriorating weather conditions made it difficult for the Predator to fly from its base in Uzbekistan, and the flights were suspended.
On 16 February 2001 at Nellis Air Force Base, a Predator successfully fired three Hellfire AGM-114C missiles into a target. The newly armed Predators were given the designation of MQ-1A. In the first week of June 2001, a Hellfire missile was successfully launched on a replica of bin Laden's Afghanistan Tarnak residence built at a Nevada testing site. A missile launched from a Predator exploded inside one of the replica's rooms; it was concluded that any people in the room would have been killed. On 4 September 2001 (after the Bush cabinet approved a Qaeda/Taliban plan), CIA chief Tenet ordered the agency to resume reconnaissance flights. The Predators were now weapons-capable, but didn't carry missiles because the host country (presumably Uzbekistan) hadn't granted permission.
Subsequent to 9/11, approval was quickly granted to ship the missiles, and the Predator aircraft and missiles reached their overseas location on 16 September 2001. The first mission was flown over Kabul and Kandahar on 18 September without carrying weapons. Subsequent host nation approval was granted on 7 October and the first armed mission was flown on the same day.
In February 2002, armed Predators are thought to have been used to destroy a sport utility vehicle belonging to suspected Taliban leader Mullah Mohammed Omar and to have mistakenly killed Afghan scrap metal collectors near Zhawar Kili because one of them resembled Osama bin Laden.
On 4 March 2002, a CIA-operated Predator fired a Hellfire missile into a reinforced Taliban machine gun bunker that had pinned down an Army Ranger team whose CH-47 Chinook had crashed on the top of Takur Ghar Mountain in Afghanistan. Previous attempts by flights of F-15 and F-16 Fighting Falcon aircraft were unable to destroy the bunker. This action took place during what has become known as the "Battle of Roberts Ridge", a part of Operation Anaconda. This appears to be the first use of such a weapon in a close air support role.
On 6 April 2011, two US soldiers were killed in Afghanistan in the Predator's first friendly fire incident. This occurred when observers in Indiana did not relay their doubts about the target to the operators at Creech Air Force Base in Nevada.
On 5 May 2013, a single MQ-1 Predator surpassed 20,000 flight hours over Afghanistan. Predator P107 achieved the milestone while flying a 21-hour combat mission; P107 was first delivered in October 2004.
Pakistan
From at least 2003 until 2011, the U.S. Central Intelligence Agency allegedly operated the drones out of Shamsi airfield in Pakistan to attack militants in Pakistan's Federally Administered Tribal Areas. During this period, MQ-1 Predators fitted with Hellfire missiles were used to kill a number of prominent al-Qaeda operatives.
On 13 January 2006, 18 civilians were unintentionally killed by the Predator. According to Pakistani authorities, the U.S. strike was based on faulty intelligence.
Iraq
An Iraqi MiG-25 shot down a Predator performing reconnaissance over the no-fly zone in Iraq on 23 December 2002. This was the first time in history a conventional aircraft and a drone had engaged each other in combat. Predators had been armed with AIM-92 Stinger air-to-air missiles, and were purportedly being used to "bait" Iraqi fighters, then run. However, Predators are slower than MiG-25s and their service ceiling is considerably lower, making the "run" segment of any "bait and run" mission a difficult task. In this incident, the Predator did not run (or could not run fast enough), but instead fired one of its Stingers. The Stinger's heat-seeker became "distracted" by the MiG's missile and missed the MiG. The Predator was hit by the MiG's missile and destroyed. Another two Predators had been shot down earlier by Iraqi SAMs.
During the initial phases of the 2003 U.S. invasion of Iraq, a number of older Predators were stripped down and used as decoys to entice Iraqi air defenses to expose themselves by firing. From July 2005 to June 2006, the 15th Reconnaissance Squadron participated in more than 242 separate raids, engaged 132 troops in contact-force protection actions, fired 59 Hellfire missiles; surveyed 18,490 targets, escorted four convoys, and flew 2,073 sorties for more than 33,833 flying hours.
Iraqi insurgents intercepted video feeds, which were not encrypted, using a $26 piece of Russian software named SkyGrabber. The encryption for the ROVER feeds was removed for performance reasons. Work to secure the data feeds was to be completed by 2014.
On 27 June 2014, the Pentagon confirmed that a number of armed Predators had been sent to Iraq along with U.S. Special Forces following advances by the Islamic State of Iraq and the Levant. The Predators were flying 30 to 40 missions a day in and around Baghdad with government permission, and intelligence was shared with Iraqi forces. On 8 August 2014, an MQ-1 Predator fired a missile at a militant mortar position. From the beginning of Operation Inherent Resolve to January 2016, five USAF Predators were lost; four crashed from technical failures in Iraq, one in June 2015, two in October 2015, and one in January 2016.
Yemen
On 3 November 2002, a Hellfire missile was fired at a car in Yemen, killing Qaed Salim Sinan al-Harethi, an al-Qaeda leader thought to be responsible for the USS Cole bombing. It was the first direct U.S. strike in the War on Terrorism outside Afghanistan.
In 2004, the Australian Broadcasting Corporation's (ABC-TV) international affairs program Foreign Correspondent investigated this targeted killing and the involvement of the then U.S. Ambassador as part of a special report titled "The Yemen Option". The report also examined the evolving tactics and countermeasures in dealing with Al Qaeda inspired attacks.
On 30 September 2011, a Hellfire fired from an American UAV killed Anwar al-Awlaki, an American-citizen cleric and Al Qaeda leader, in Yemen. Also killed was Samir Khan, an American born in Saudi Arabia, who was editor of al-Qaeda's English-language webzine, Inspire.
On 14 February 2017, a United Arab Emirates MQ-1B UAV was shot down by a Houthi anti-aircraft missile over Marib province.
On 23 March 2019, the Houthis announced that they had shot down a US MQ-1 drone over Sanaa, Yemen, later displaying images of the wreckage.
On 14 May 2019, a United Arab Emirates MQ-1 Predator was shot down by Houthi fire during a night flight over Sanaa; Houthi fighters reportedly used an air-to-air missile (R-27T or R-73) adapted for ground launch.
On 25 February 2022, Houthi forces shot down a UAEAF MQ-1 drone of the Saudi-led coalition in Al-Jawf province, publishing footage and photos of the wreckage.
Libya
U.S. Air Force MQ-1B Predators have been involved in reconnaissance and strike sorties in Operation Unified Protector. An MQ-1B fired its first Hellfire missile in the conflict on 23 April 2011, striking a BM-21 Grad. There are also some suggestions that a Predator was involved in the final attack against Gaddafi.
Predators returned to Libya in 2012, after the attack that killed the US Ambassador in Benghazi. MQ-9 Reapers were also deployed.
Somalia
On 25 June 2011, US Predator drones attacked an Al-Shabaab (militant group) training camp south of Kismayo. Ibrahim al-Afghani, a senior al-Shabaab leader, was rumored to have been killed in the strike.
Four Al-Shabaab fighters, including a Kenyan, were killed in a drone strike late February 2012.
Iran
On 1 November 2012, two Iranian Sukhoi Su-25 attack aircraft engaged an unarmed Predator conducting routine surveillance over the Persian Gulf just before 05:00 EST. The Su-25s made two passes at the drone firing their 30 mm cannon; the Predator was not hit and returned to base. The incident was not revealed publicly until 8 November. The U.S. stated that the Predator was over international waters, away from Iran and never entered its airspace. Iran states that the drone entered Iran's airspace and that its aircraft fired warning shots to drive it away.
On 12 March 2013, an Iranian F-4 Phantom pursued an MQ-1 flying over the Persian Gulf. The unarmed reconnoitering Predator was approached by the F-4, coming within 16 miles of the UAV. Two U.S. fighters were escorting the Predator and verbally warned the jet, which made the Iranian F-4 break off. All American aircraft remained over international waters. An earlier statement by the Pentagon that the escorting planes fired a flare to warn the Iranian jet was later amended. The Air Force later revealed that the American jet that forced the Iranian F-4 to break off was an F-22 Raptor.
India
India has inducted two American Predator drones, Sea Guardians (an unarmed version of the Predator series), into the Navy on lease under emergency procurement against the backdrop of tensions with China in Ladakh. The drones have been leased from the US firm General Atomics for a year for surveillance in the Indian Ocean Region. The drones are under the full operational control of the Indian Navy, which has exclusive access to all the information they capture.
The American firm's only role is to ensure the availability of the two drones under the signed contract. The Indian Navy has since expressed interest in acquiring additional Predator drones from the US.
Syria
Armed MQ-1s are used in Operation Inherent Resolve against IS over Syria and Iraq. On 17 March 2015, a US MQ-1 was shot down by a Syrian government S-125 SAM battery when it overflew the Port of Latakia, a region not involved in the international military operation.
Philippines
A 2012 New York Times article claimed that U.S. forces used a Predator drone to try to kill Indonesian terrorist Umar Patek in the Philippines in 2006. The Philippines' military denied this action took place, however. It was reported that a drone was responsible for killing al-Qaeda operative Zulkifli bin Hir on Jolo island on 2 February 2012. The strike reportedly killed 15 Abu Sayyaf operatives. The Philippines stated the strike was carried out by manned OV-10 aircraft with assistance from the U.S.
Other users
The Predator has also been used by the Italian Air Force. A contract for 6 version A Predators (later upgraded to A+) was signed in July 2002 and delivery began in December 2004. It was used in these missions:
Iraq, Tallil: from January 2005 to November 2006 for the "Antica Babilonia" mission (1,600 flight hours)
Afghanistan, Herat: from June 2007 to January 2014 (beginning with the Predator A, then the A+, finally replaced by the MQ-9 Reaper). Flew 6,000 hours in 750 missions from June 2007 to May 2011 alone.
Djibouti: two Predator A+, since 6 August 2014, supporting the EU's counter-piracy Operation Atalanta and the EUTM mission in Somalia (first mission flown 9 August 2014; detachment of about 70 Italian Air Force airmen)
Two civil-registered unarmed MQ-1s have been operated by the Office of the National Security Advisor in the Philippines since 2006.
The Predator has been licensed for sale to Egypt, Morocco, Saudi Arabia, and UAE.
Variants
RQ-1 series
RQ-1A: Pre-production designation for the Predator system – four aircraft, Ground Control Station (GCS), and Predator Primary Satellite Link (PPSL).
RQ-1K: Pre-production designation for individual airframe.
RQ-1B: Production designation for the Predator UAV system.
RQ-1L: Production designation for individual airframe.
MQ-1 series
The M designation differentiates Predator airframes capable of carrying and deploying ordnance.
MQ-1A Predator: Early airframes capable of carrying ordnance (AGM-114 Hellfire ATGM or AIM-92 Stinger). Nose-mounted AN/ZPQ-1 Synthetic Aperture Radar removed.
MQ-1B Predator: Later airframes capable of carrying ordnance. Modified antenna fit, including introduction of spine-mounted VHF fin. Enlarged dorsal and ventral air intakes for Rotax engine.
MQ-1B Block 10 / 15: Current production aircraft include updated avionics, datalinks, and countermeasures, modified v-tail planes to avoid damage from ordnance deployment, upgraded AN/AAS-52 Multi-Spectral Targeting System, wing deicing equipment, secondary daylight and infrared cameras in the nose for pilot visual in case of main sensor malfunction, and a 3 ft (0.91 m) wing extension from each wingtip. Some older MQ-1A aircraft have been partially retrofitted with some Block 10 / 15 features, primarily avionics and the modified tail planes.
Predator XP: Export variant of the Predator designed specifically to be unable to carry weapons, allowing for wider export opportunities. Markets for it are expected in the Middle East and Latin America. First flight on 27 June 2014. Features winglets and an endurance of 35 hours. It is equipped with the Lynx synthetic aperture radar and may carry a laser rangefinder and laser designator for target illumination for other aircraft.
MQ-1C
The U.S. Army selected the MQ-1C Warrior as the winner of the Extended-Range Multi-Purpose UAV competition in August 2005. The aircraft became operational in 2009 as the MQ-1C Gray Eagle.
Aircraft on display
MQ-1B 03-33120 is preserved at the American Air Museum in Britain at IWM Duxford, and is the first UAV to be displayed at Duxford. The MQ-1B in question was formerly operated by the 432nd Wing of Creech Air Force Base.
Operators
Italian Air Force
32° Stormo (32nd Wing) Armando Boetto—Foggia, Amendola Air Force Base
28° Gruppo (28th Unmanned Aerial Vehicle Squadron)
61° Gruppo (61st Unmanned Aerial Vehicle Squadron)
Turkish Air Force The Turkish Air Force has 6 MQ-1 Predators on order via the USA's Foreign Military Sales mechanism. The Turkish Air Force also operates 3 MQ-1 Predator systems on lease from the US as a stop gap measure as of 2011. The leased MQ-1s are under Turkish command (UAV Base Group Command) but operated by a joint Turkish-US unit.
United Arab Emirates Air Force signed a US$197 million deal in February 2013 for an unspecified number of Predators, XP version, marking its first sale. One system of four aircraft is planned to begin delivery in mid-2016. General Atomics stated on 16 February 2017 that it finished deliveries, declining comment on the number delivered.
Royal Moroccan Air Force received four Predator A aircraft.
Former operators
U.S. Customs and Border Protection
United States Air Force
Air Combat Command
432d Wing—Creech Air Force Base, Nevada
11th Attack Squadron
15th Attack Squadron
17th Attack Squadron
18th Attack Squadron
20th Attack Squadron, Whiteman AFB, Missouri
49th Wing—Holloman Air Force Base, New Mexico
6th Attack Squadron
53d Wing—Eglin Air Force Base, Florida
556th Test and Evaluation Squadron Creech Air Force Base, Nevada
Air Force Special Operations Command
27th Special Operations Wing Cannon Air Force Base, New Mexico
3d Special Operations Squadron
58th Special Operations Wing
551st Special Operations Squadron
Air National Guard
Texas Air National Guard
147th Reconnaissance Wing—Ellington Field
111th Reconnaissance Squadron
California Air National Guard
163d Reconnaissance Wing—March Joint Air Reserve Base
196th Reconnaissance Squadron
North Dakota Air National Guard
119th Fighter Wing—Hector International Airport
178th Reconnaissance Squadron
Arizona Air National Guard
214th Reconnaissance Group—Davis-Monthan Air Force Base
214th Reconnaissance Squadron
Air Force Reserve Command
919th Special Operations Wing—Duke Field
2d Special Operations Squadron-GSU at Hurlburt Field
Central Intelligence Agency
Special Operations Group in Langley, VA
Specifications
See also
Notes
References
Parts of this article are taken from the MQ-1 PREDATOR fact sheet.
This article contains material that originally came from the web article Unmanned Aerial Vehicles by Greg Goebel, which exists in the public domain.
Further reading
External links
General Atomics Predator page
MQ-1B Predator US Air Force Fact Sheet
MQ-1 Predator page on armyrecognition.com
Predator page and UAV Sensor page on defense-update.com
How the Predator Works – Howstuffworks.com
British Daily Telegraph article – 'In Las Vegas a pilot pulls the trigger. In Iraq a Predator fires its missile'
Accident report from 20 March 2006 MQ-1L crash
Missile strike emphasizes Al-Qaida
Q-01
Unmanned aerial vehicles of the United States
General Atomics MQ-1 Predator
Medium-altitude long-endurance unmanned aerial vehicles
Signals intelligence
War on terror
V-tail aircraft
Single-engined pusher aircraft
General Atomics MQ-1
Synthetic aperture radar
Aircraft first flown in 1994 |
37439 | https://en.wikipedia.org/wiki/AAI%20RQ-7%20Shadow | AAI RQ-7 Shadow | The AAI RQ-7 Shadow is an American unmanned aerial vehicle (UAV) used by the United States Army, Australian Army, Swedish Army, Turkish Air Force and Italian Army for reconnaissance, surveillance, target acquisition and battle damage assessment. Launched from a trailer-mounted pneumatic catapult, it is recovered with the aid of arresting gear similar to jets on an aircraft carrier. Its gimbal-mounted, digitally stabilized, liquid nitrogen-cooled electro-optical/infrared (EO/IR) camera relays video in real time via a C-band line-of-sight data link to the ground control station (GCS).
The US Army's 2nd Battalion, 13th Aviation Regiment at Fort Huachuca, Arizona, trains soldiers, Marines, and civilians in the operation and maintenance of the Shadow UAS. The Shadow is operated in the U.S. Army at brigade-level.
Development
The RQ-7 Shadow is the result of a continued US Army search for an effective battlefield UAS after the cancellation of the Alliant RQ-6 Outrider aircraft. AAI Corporation followed up their RQ-2 Pioneer with the Shadow 200, a similar, more refined UAS. In late 1999, the army selected the Shadow 200 to fill the tactical UAS requirement, redesignating it the RQ-7. Army requirements specified a UAS that used an aviation gasoline engine, could carry an electro-optic/infrared imaging sensor turret, and had a minimum range of 31 miles (50 kilometers) with four-hour, on-station endurance. The Shadow 200 offered at least twice that range. The specifications also dictated that UAS would be able to land in an athletic field.
Design
The RQ-7 Shadow 200 unmanned aircraft system is of a high-wing, constant chord pusher configuration with a twin-tailboom empennage and an inverted v-tail. The aircraft is powered by an AR741-1101 Wankel engine designed and manufactured by UAV Engines Ltd in the United Kingdom. Onboard electrical systems are powered by a GEC/Plessey 28 volt, direct current, 2 kW generator.
Currently, the primary load of the aircraft is the Israeli Aircraft Industries POP300 Plug-in Optical Payload which consists of a forward-looking infrared camera, a daytime TV camera with a selectable near-infrared filter and a laser pointer. The aircraft has fixed tricycle landing gear. Takeoffs are assisted by a trailer-mounted pneumatic launcher which can accelerate the 170 kg (375 pound) aircraft to launch speed within a short distance.
Landings are guided by a Tactical Automatic Landing System, developed by the Sierra Nevada Corporation, which consists of a ground-based micro-millimeter wavelength radar and a transponder carried on the aircraft. Once on the ground, a tailhook mounted on the aircraft catches an arresting wire connected to two disk brake drums which can stop the aircraft within a short distance.
The aircraft is part of a larger system which currently uses the M1152-series of Humvees for ground transport of all ground and air equipment. A Shadow 200 system consists of four aircraft, three of which are transported in the Air Vehicle Transporter (AVT). The fourth is transported in a specially designed storage container to be used as a spare. The AVT also tows the launcher.
The AVT Support Vehicle and trailer contain extra equipment to launch and recover the aircraft, such as the Tactical Automatic Landing System. Maintenance equipment for the aircraft is stored in the Maintenance Section Multifunctional (MSM) vehicle and trailer as well as the M1165 MSM Support Vehicle and its associated trailer.
Two Humvee-mounted Ground Control Stations (GCS), also part of the Shadow 200 system, control the aircraft in flight. Each station has an associated Ground Data Terminal (GDT), which takes commands generated by the GCS and modulates them into radio waves received by the aircraft in flight. The GDT receives video imagery from the payload, as well as telemetry from the aircraft, and sends this information to the GCS.
A trailer, towed by the M1165 GCS support vehicle, carries the GDT and houses a 10 kW Tactical Quiet Generator to provide power for its associated GCS. The Shadow 200 system also includes a Portable Ground Control Station (PGCS) and Portable Ground Data Terminal (PGDT), which are stripped-down versions of the GCS and GDT designed as a backup to the two GCSs.
A fielded Shadow 200 system requires 22 soldiers to operate it. Army modelling indicates that crew workload is highest at takeoff, and second-highest at landing.
The Shadow is restricted from operating in bad weather; it is not designed to fly through rain, and its sensors cannot see through clouds.
Operational history
By July 2007, the Shadow platform accumulated 200,000 flight hours, doubling its previous record of 100,000 hours in 13 months. The system then surpassed 300,000 flight hours in April 2008, and by May 2010, the Shadow system had accumulated over 500,000 flight hours. As of 2011, the Shadow had logged over 709,000 hours. The Shadow platform has flown over 37,000 sorties in support of operations in Iraq and Afghanistan by US Army and Army National Guard units.
On 6 August 2012, AAI announced that the Shadow had achieved 750,000 flight hours during more than 173,000 missions. More than 900,000 flight hours had been logged by Shadow UAVs by the end of June 2014.
The Shadow did not see service in the Afghanistan campaign of 2001–2002, but it did fly operational missions in support of Operation Iraqi Freedom. The operating conditions in Iraq proved hard on the UAVs, with heat and sand leading to engine failures, resulting in a high-priority effort to find fixes with changes in system technology and operating procedures. Shadow UAS have since flown more than 600,000 combat hours in support of the Wars in Iraq and Afghanistan.
In 2007, the United States Marine Corps began to transition from the RQ-2 Pioneer to the RQ-7 Shadow. VMU-1, VMU-2 completed their transition from the RQ-2 to the RQ-7 and ScanEagle while VMU-3 and VMU-4 were activated as Shadow and ScanEagle elements. VMU-3 was activated on 12 September 2008 and VMU-4 conducted its inaugural flight on 28 September 2010 in Yuma, Arizona. In October 2007, VMU-1 became the first Marine Corps squadron to see combat in Iraq. VMU-2 deployed a Shadow detachment to Afghanistan in 2009, with VMU-3 following in January 2010.
The Navy provided personnel for four Shadow platoons in support of army brigades deployed in Iraq. The first two platoons returned from 6-month tours in Iraq in January and February 2008. The Navy personnel went through the Army's training program at Fort Huachuca, Arizona.
The U.S. Army is implementing a plan to reform its aerial scout capabilities by scrapping its fleet of OH-58 Kiowa helicopters from 2015 to 2019 and replacing them with AH-64 Apache attack helicopters teamed with Shadow and MQ-1C Gray Eagle UAVs. Using unmanned assets to scout ahead would put the pilots of manned aircraft out of reach of potential harm. Reformed combat aviation brigades (CAB) would consist of a battalion of 24 Apaches for attack missions and an armed reconnaissance squadron of another 24 Apaches teamed with three Shadow platoons totaling 12 RQ-7s overall; it would also include a Gray Eagle company. The manned-unmanned teaming of Apaches and Unmanned Aircraft (UA) can meet 80 percent of aerial scout requirements.
On 16 March 2015, the 1st Battalion, 501st Aviation Regiment was reflagged the 3rd Squadron, 6th Cavalry Regiment, making it the first of 10 Apache battalions to be converted to a heavy attack reconnaissance squadron by eliminating the Kiowa scout helicopter and having three RQ-7 Shadow platoons organically assigned; the attack battalions will also be aligned with an MQ-1C Gray Eagle company assigned to each division. Moving Shadows from brigade combat team level to the battalions themselves reduces lines of communication, distance issues, and allows operators and pilots to better train and work together.
In early July 2014, the U.S. Army sent RQ-7 Shadows to Baghdad as part of efforts to protect embassy personnel against Islamic State militant attacks, along with Apache attack helicopters which could use them through manned and unmanned teaming to share information and designate targets.
On 29 July 2018, the U.S. Marine Corps conducted its final launch of the RQ-7B during RIMPAC exercises before retiring it. Since first deploying with Marines to Iraq in October 2007, the aircraft eventually equipped four tactical UAS squadrons, flying some 39,000 hours during 11 operational deployments. The Shadow was replaced by the RQ-21 Blackjack, which was first deployed in 2014.
In March 2019, the U.S. Army selected Martin UAV and AAI Corporation to "provide unmanned aircraft systems for platoons to try out as candidates to replace the Shadow tactical UAS." The Army seeks better acoustics and runway independence as compared to the old Shadow, as well as lower equipment requirements. Shortly after the selection of the first teams, L3Harris Technologies and Arcturus-UAV (later under AeroVironment) were also picked to submit candidates. The four aircraft were used to evaluate requirements and assess new capabilities, and in August 2021 the Army decided to proceed with a competition for the Future Tactical Unmanned Aircraft System (FTUAS); a fielding decision is planned for 2025.
Variants
RQ-7A Shadow
The RQ-7A was the initial version of the Shadow 200 UAS developed by AAI. The first low-rate initial-production systems were delivered to the US Army in 2002, with the first full-scale production systems being delivered in September 2003. The RQ-7A was smaller and lighter than the later RQ-7B, and its endurance ranged between 4 and 5.5 hours depending on the mission. The "A" model aircraft had the AR741-1100 engine, which could use either 87 octane automotive gasoline or 100LL aviation fuel, and featured IAI's POP200 payload.
RQ-7B Shadow
Production of Shadow aircraft shifted to a generally improved RQ-7B variant in the summer of 2004. The RQ-7B features new wings increased in length to . The new wings are not only more aerodynamically efficient, they are "wet" to increase fuel storage up to 44 liters for an endurance of up to 6 hours. The payload capability has been increased to .
After reports from Iraq that engines were failing, in 2005, the Army's UAV project manager called for the use of 100LL, an aviation fuel, rather than the conventional 87 octane mogas. Avionics systems have been generally improved, and the new wing is designed to accommodate a communications relay package, which allows the aircraft to act as a relay station. This allows commanders or even the aircraft operators themselves to communicate via radio to the troops on ground in locations that would otherwise be "dead" to radio traffic.
The Shadow can operate up to from its brigade tactical operations center, and recognize tactical vehicles up to above the ground at more than slant range.
Other incremental improvements to the system include replacing the AR741-1100 engine with the AR741-1101 which increases reliability through the use of dual spark plugs as well as limiting the fuel to 100LL. Also, the older POP200 payload was replaced with the newer POP300 system. In February 2010, AAI began a fleet update program to improve the Shadow system. The improvements include installing the wiring harnesses and software updates for IAI's POP300D payload which includes a designator for guiding laser-guided bombs.
Other improvements in the program will include an electronic fuel injection engine and fuel system to replace the AR741-1101's carburetted engine. The most visible improvement to the system will be a wider wing of in span which is designed to increase fuel capacity and allow for mission endurance of almost 9 hours. The new wings will also include hardpoints for external munitions.
A joint Army-Marine program is testing IED jamming on a Shadow at MCAS Yuma. Another joint effort is to view a ground area from 3,650 m (12,000 feet).
The Army is now proposing the upgraded Shadow 152A, which includes Soldier Radio Waveform software that allows both the command post and troops in the field to see the imagery the UAV is transmitting, as long as they are on the same frequency. It also increases the distance and area of communication.
Preliminary TCDL testing conducted at Dugway Proving Ground was a success. This led to an estimated fielding date of May 2010 for TCDL. In March 2015, the first Shadow unit was equipped with the upgraded RQ-7BV2 Shadow version. New capabilities for the BV2 include the TCDL, encryption of video and control data-links, software that allows interoperability between other UAS platforms, integration of a common control station and control terminal for all Army UAS platforms, an electronic fuel-injection engine, and increased endurance to nine hours through a lengthened wingspan of , with weight increased to . Shadow systems are being upgraded at a rate of 2-3 per month, with all Army Shadows planned to become BV2s by 2019.
In 2020, the Army introduced the Shadow Block III. The configuration allows the Shadow to fly in rainy conditions of up to two inches per hour, a four-fold increase over previous versions, carries the L3 Wescam MX-10 EO/IR camera with enhanced image collection, has a Joint Tactical Radio System to enable communications relay, and uses a more reliable and powerful engine configuration with reduced noise.
Armed Shadow
On 19 April 2010 the Army issued a "solicitation for sources sought" from defense contractors for a munition for the Shadow system, with proposals due no later than 10 May 2010. Although no specific munition has been chosen, possible candidates include the Raytheon Pyros bomb, the General Dynamics 81 mm 4.5 kg (10-pound) air-dropped guided mortar, and the QuickMEDS system for delivering medical supplies to remote and stranded troops.
The Army subsequently slowed work, and the Marine Corps then took the lead on arming the RQ-7 Shadow. Raytheon has conducted successful flight tests with the Small Tactical Munition, and Lockheed Martin has tested the Shadow Hawk glide weapon from an RQ-7. On 1 November 2012, General Dynamics successfully demonstrated their guided 81 mm Air Dropped Mortar, with three launches at hitting within seven meters of the target grid.
As of August 2011, the Marine Corps has received official clearance to experiment with armed RQ-7s, and requires AAI to select a precision munition ready for deployment. AAI was awarded $10 million for this in December 2011, and claims a weapon has already been fielded by the Shadow. In 2014, Textron launched the Fury precision weapon from a Shadow 200.
By May 2015, the Marine Corps had run out of funding for weaponizing the RQ-7, and the Army had shown little interest in continuing the effort. The Army's stance is that the Shadow's primary capability is persistent surveillance, while there are many other ways to drop bombs on targets and adding that to the Shadow would add weight and decrease endurance.
Nightwarden
A test version called STTB flew in summer 2011. AAI is developing a bigger version, called the M2, with a blended wing and a 3-cylinder Lycoming heavy fuel engine; flight testing began in August 2012. The Shadow M2 has a conformal blended body that reduces drag, wingspan increased to , and is heavier. It can fly for 16 hours at altitudes up to . Its endurance and service ceiling are comparable to Group 4 UASs like the MQ-1 Predator, so the company is pitching the M2 as a budget-conscious alternative to larger unmanned aircraft.
It has a greater payload to carry synthetic aperture radar (SAR), wide-area surveillance, navigation, signals intelligence, and electronic warfare packages. It also has the ability to be controlled beyond line-of-sight through a SATCOM link. Although the M2 uses the same internal components as the RQ-7B Shadow 200 and is compatible with existing support equipment and ground infrastructure, its greater weight necessitates changes to the existing launcher. The Shadow M2 uses 80-85 percent of the components of the Shadow V2, while allowing for an additional of capability with total airframe weight increased to .
In June 2017, Textron introduced the Nightwarden TUAS as a production-ready model of the developmental Shadow M2, the change in name due to significant improvements and enhancements to the system such as greater flexibility and combat capability, SATCOM features, and enhanced command-and-control. The aircraft has a range of , maximum speed of , endurance of 15 hours, can fly at an altitude of , and has a maximum takeoff weight of with a dual-payload bay with a capacity of .
Shadow 600
AAI has also built a scaled-up Pioneer derivative known as the "Shadow 600". It also resembles a Pioneer, except that the outer panels of the wings are distinctively swept back, and it has a stronger Wankel engine, the UAV EL 801, with . A number of Shadow 600s are in service in several nations, including Romania.
SR/C Shadow
AAI, in conjunction with Textron sister company Bell Helicopter, intends to modify two Shadows with a Carter rotor on top for vertical take-off and landing, eliminating the need for the recovery and pneumatic launcher systems, while increasing payload and endurance. It was expected to fly in 2012. AAI also expected to use the SR/C technology for the Shadow Knight, a powered-rotor two-propeller surveillance aircraft for the US Navy MRMUAS program; however, the MRMUAS program was cancelled in 2012.
Operators
Current operators
Australian Army: The Australian Government bought 18 aircraft to replace the ScanEagle and began using them in Afghanistan in May 2012.
Italian Army: In July 2010, the Italian army ordered four Shadow 200 systems.
Romanian Air Force: The Romanian Air Force has purchased 11 Shadow 600s, a larger, fuel injected Shadow variant.
Swedish Army: 8 aircraft (2 systems) were delivered early in 2011. These systems were then modified by SAAB to be more suited for Swedish use, named UAV03 Örnen. Set to be replaced.
Turkish Air Force: The Turkish Air Force operates 9 RQ-7 Shadow 600s.
United States Army: 450 RQ-7Bs, with 20 more on order and an additional 68 ordered
Former operators
United States Marine Corps
Incidents and accidents
On 15 August 2011, a US Air Force C-130 cargo plane collided with an RQ-7 while on approach to FOB Sharana in Paktika Province, Afghanistan. The C-130 made an emergency landing with damage to two engines and one wing, while the RQ-7 was destroyed completely. The collision caused the cargo aircraft to be grounded for several months while it was repaired; the RQ-7 wreckage was never recovered. Early reports indicating that the mishap occurred when the C-130 took off without clearance were incorrect. The investigating board determined that the mishap was largely due to poor local air traffic control training and supervision.
On 3 April 2014, a Pennsylvania Army National Guard RQ-7 participating in training exercises at Fort Indiantown Gap crashed near an elementary school in Pennsylvania and was then hit by a civilian vehicle, which destroyed the drone. No injuries were reported.
On 10 July 2019, an Army RQ-7 operated by the 25th Infantry Division crashed in the Waianae mountains near Schofield Barracks.
On 17 July 2019, a Wisconsin National Guard RQ-7 lost its link to its operator at Volk Field during a training exercise. The drone went down into trees south of Interstate 90/94 between Oakdale and Camp Douglas, suffering "significant" damage. No injuries or damage on the ground were reported.
Specifications (200 Family)
Note: When outfitted with IE (Increased Endurance) Wings, the CRP (Communications Relay Package) and the 1102 engine, endurance time is increased to 9 hours, wing span is increased to approx. , and the service ceiling is 5,500 m (18,000 ft) (only with authorization).
See also
References
This article contains material that originally came from the web article Unmanned Aerial Vehicles by Greg Goebel, which exists in the Public Domain.
External links
RQ-7 Shadow 200 Tactical UAV
Shadow TUAV update
UAV payloads
Iran Protests U.S. Aerial Drones (RQ-7 crashes in Iran), The Washington Post, 8 November 2005
2000s United States military reconnaissance aircraft
Unmanned military aircraft of the United States
Twin-boom aircraft
Single-engined pusher aircraft
High-wing aircraft
Airborne military robots
Aircraft first flown in 1991 |
37481 | https://en.wikipedia.org/wiki/Intranet | Intranet | An intranet is a computer network for sharing information, easier communication, collaboration tools, operational systems, and other computing services within an organization, usually to the exclusion of access by outsiders. The term is used in contrast to public networks, such as the Internet, but uses most of the same technology based on the Internet Protocol Suite.
A company-wide intranet can constitute an important focal point of internal communication and collaboration, and provide a single starting point to access internal and external resources. In its simplest form, an intranet is established with the technologies for local area networks (LANs) and wide area networks (WANs). Many modern intranets have search engines, user profiles, blogs, mobile apps with notifications, and events planning within their infrastructure.
An intranet is sometimes contrasted to an extranet. While an intranet is generally restricted to employees of the organization, extranets may also be accessed by customers, suppliers, or other approved parties. Extranets extend a private network onto the Internet with special provisions for authentication, authorization and accounting (AAA protocol).
Uses
Increasingly, intranets are being used to deliver tools such as collaboration software (to facilitate working in groups and teleconferencing), sophisticated corporate directories, sales and customer relationship management tools, and project management tools.
Intranets are also being used as corporate culture-change platforms. For example, large numbers of employees discussing key issues in an intranet forum application could lead to new ideas in management, productivity, quality, and other corporate issues.
In large intranets, website traffic is often similar to public website traffic and can be better understood by using web metrics software to track overall activity. User surveys also improve intranet website effectiveness.
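As a minimal illustration of the kind of measurement such web metrics tools perform (a sketch only; the log path, log format and page names are assumed for the example), a page-view count can be derived from an ordinary web-server access log:

```python
from collections import Counter

# Assumed location of a Common Log Format access log written by the intranet web server.
LOG_PATH = "/var/log/httpd/intranet_access.log"

hits = Counter()
with open(LOG_PATH) as log:
    for line in log:
        parts = line.split('"')
        if len(parts) < 2:
            continue                      # skip malformed lines
        request = parts[1].split()        # e.g. 'GET /hr/holiday-policy.html HTTP/1.1'
        if len(request) >= 2:
            hits[request[1]] += 1

# The ten most-viewed intranet pages
for url, count in hits.most_common(10):
    print(f"{count:6d}  {url}")
```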
Larger businesses allow users within their intranet to access public internet through firewall servers. They have the ability to screen messages coming and going, keeping security intact. When part of an intranet is made accessible to customers and others outside the business, it becomes part of an extranet. Businesses can send private messages through the public network, using special encryption/decryption and other security safeguards to connect one part of their intranet to another.
Intranet user-experience, editorial, and technology teams work together to produce in-house sites. Most commonly, intranets are managed by the communications, HR or CIO departments of large organizations, or some combination of these.
Because of the scope and variety of content and the number of system interfaces, intranets of many organizations are much more complex than their respective public websites. Intranets and their use are growing rapidly. According to the Intranet design annual 2007 from Nielsen Norman Group, the number of pages on participants' intranets averaged 200,000 over the years 2001 to 2003 and has grown to an average of 6 million pages over 2005–2007.
Benefits
Workforce productivity: Intranets can help users to locate and view information faster and use applications relevant to their roles and responsibilities. With the help of a web browser interface, users can access data held in any database the organization wants to make available, anytime and, subject to security provisions, from any company workstation, increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps to improve the services provided to users.
Time: Intranets allow organizations to distribute information to employees on an as-needed basis; employees may link to relevant information at their convenience, rather than being distracted indiscriminately by email.
Communication: Intranets can serve as powerful tools for communication within an organization, both vertically and horizontally, and are particularly useful for communicating strategic initiatives that have a global reach throughout the organization. The type of information that can easily be conveyed includes the purpose of the initiative, what it is aiming to achieve, who is driving it, the results achieved to date, and whom to speak to for more information. By providing this information on the intranet, staff have the opportunity to keep up to date with the strategic focus of the organization. Some examples of communication tools would be chat, email, and blogs. A real-world example of an intranet helping a company communicate is Nestlé, which had a number of food processing plants in Scandinavia whose central support system had to deal with a number of queries every day. When Nestlé decided to invest in an intranet, it quickly realized the savings; McGovern says the savings from the reduction in query calls was substantially greater than the investment in the intranet.
Web publishing allows cumbersome corporate knowledge to be maintained and easily accessed throughout the company using hypermedia and Web technologies. Examples include employee manuals, benefits documents, company policies, business standards, news feeds, and even training, all of which can be accessed using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is usually available to employees using the intranet.
Business operations and management: Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.
Workflow: a collective term for automated processes that reduce delay, such as meeting scheduling and vacation planning
Cost-effectiveness: Users can view information and data via web browser rather than maintaining physical documents such as procedure manuals, internal phone lists and requisition forms. This can potentially save the business money on printing, document duplication and document maintenance overhead, as well as reducing its environmental impact. For example, the HRM company PeopleSoft "derived significant cost savings by shifting HR processes to the intranet". McGovern goes on to say the manual cost of enrolling in benefits was found to be US$109.48 per enrollment. "Shifting this process to the intranet reduced the cost per enrollment to $21.79; a saving of 80 percent". Another company that saved money on expense reports was Cisco. "In 1996, Cisco processed 54,000 reports and the amount of dollars processed was US$19 million".
Enhance collaboration: Information is easily accessible by all authorised users, which enables teamwork. Being able to communicate in real-time through integrated third party tools, such as an instant messenger, promotes the sharing of ideas and removes blockages to communication to help boost a business' productivity.
Cross-platform capability: Standards-compliant web browsers are available for Windows, Mac, and UNIX.
Built for one audience: Many companies dictate computer specifications which, in turn, may allow Intranet developers to write applications that only have to work on one browser (no cross-browser compatibility issues). Being able to specifically address one's "viewer" is a great advantage. Since intranets are user-specific (requiring database/network authentication prior to access), users know exactly who they are interfacing with and can personalize their intranet based on role (job title, department) or individual ("Congratulations Jane, on your 3rd year with our company!").
Promote common corporate culture: Every user has the ability to view the same information within the intranet.
Supports a distributed computing architecture: The intranet can also be linked to a company's management information system, for example a time keeping system.
Employee Engagement: Since "involvement in decision making" is one of the main drivers of employee engagement, offering tools (like forums or surveys) that foster peer-to-peer collaboration and employee participation can make employees feel more valued and involved.
Planning and creation
Most organizations devote considerable resources to the planning and implementation of their intranet, as it is of strategic importance to the organization's success. Some of the planning would include topics such as determining the purpose and goals of the intranet, identifying persons or departments responsible for implementation and management, and devising functional plans, page layouts and designs.
The appropriate staff would also ensure that implementation schedules and phase-out of existing systems were organized, while defining and implementing security of the intranet and ensuring it lies within legal boundaries and other constraints. In order to produce a high-value end product, systems planners should determine the level of interactivity (e.g. wikis, on-line forms) desired.
Planners may also consider whether the input of new data and updating of existing data is to be centrally controlled or devolved. These decisions sit alongside the hardware and software considerations (like content management systems), participation issues (like good taste, harassment, confidentiality), and features to be supported.
Intranets are often static sites; they are a shared drive, serving up centrally stored documents alongside internal articles or communications (often one-way communication). By leveraging firms which specialise in 'social' intranets, organisations are beginning to think of how their intranets can become a 'communication hub' for their entire team. The actual implementation would include steps such as securing senior management support and funding, conducting a business requirement analysis and identifying users' information needs.
From the technical perspective, there would need to be a co-ordinated installation of the web server and user access network, the required user/client applications and the creation of document framework (or template) for the content to be hosted.
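As a toy sketch of the "web server plus document framework" step described above (not a production intranet setup; the address, port and directory are placeholders), a directory of internal documents can be served on an internal-only address with Python's standard library:

```python
import os
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Placeholder LAN-facing address and directory; binding to an internal interface
# keeps the documents off any public-facing network interface.
BIND_ADDRESS = ("192.168.0.10", 8080)
DOCUMENT_ROOT = "/srv/intranet-docs"

os.chdir(DOCUMENT_ROOT)                   # serve the document templates/content from here
HTTPServer(BIND_ADDRESS, SimpleHTTPRequestHandler).serve_forever()
```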
The end-user should be involved in testing and promoting use of the company intranet, possibly through a parallel adoption methodology or pilot programme. In the long term, the company should carry out ongoing measurement and evaluation, including through benchmarking against other company services.
Maintenance
Not all aspects of an intranet are static; several require ongoing attention.
Staying current
An intranet structure needs key personnel committed to maintaining the intranet and keeping content current. For feedback on the intranet, social networking can be done through a forum for users to indicate what they want and what they do not like.
Privacy protection
The European Union's General Data Protection Regulation went into effect May 2018.
Enterprise private network
An enterprise private network is a computer network built by a business to interconnect its various company sites (such as production sites, offices and shops) in order to share computer resources.
Beginning with the digitalisation of telecommunication networks, which AT&T began in the US in the 1970s, and propelled by the growth in the availability of and demand for computer systems, enterprise networks were built for decades without the need to append the term "private" to them. The networks were operated over telecommunication networks and, as for voice communications, a certain amount of security and secrecy was expected and delivered.
With the Internet in the 1990s came a new type of network, the virtual private network, built over this public infrastructure and using encryption to protect the data traffic from eavesdropping. Enterprise networks are therefore now commonly referred to as enterprise private networks in order to clarify that these are private networks, in contrast to public networks.
See also
eGranary Digital Library
Enterprise portal
Intranet portal
Intranet strategies
Intranet Wiki
Intraweb
Kwangmyong (intranet)
Virtual workplace
Web portal
References
Computer networks
Internet privacy |
37983 | https://en.wikipedia.org/wiki/Typex | Typex | In the history of cryptography, Typex (alternatively, Type X or TypeX) machines were British cipher machines used from 1937. It was an adaptation of the commercial German Enigma with a number of enhancements that greatly increased its security. The cipher machine (and its many revisions) was used until the mid-1950s when other more modern military encryption systems came into use.
Description
Like Enigma, Typex was a rotor machine. Typex came in a number of variations, but all contained five rotors, as opposed to three or four in the Enigma. Like the Enigma, the signal was sent through the rotors twice, using a "reflector" at the end of the rotor stack. On a Typex rotor, each electrical contact was doubled to improve reliability.
Of the five rotors, typically the first two were stationary. These provided additional enciphering without adding complexity to the rotor turning mechanisms. Their purpose was similar to the plugboard in the Enigmas, offering additional randomization that could be easily changed. Unlike Enigma's plugboard, however, the wiring of those two rotors could not be easily changed day-to-day. Plugboards were added to later versions of Typex.
The major improvement the Typex had over the standard Enigma was that the rotors in the machine contained multiple notches that would turn the neighbouring rotor. This eliminated an entire class of attacks on the system, whereas Enigma's fixed notches resulted in certain patterns appearing in the cyphertext that could be seen under certain circumstances.
Some Typex rotors came in two parts, where a slug containing the wiring was inserted into a metal casing. Different casings contained different numbers of notches around the rim, such as 5, 7 or 9 notches. Each slug could be inserted into a casing in two different ways by turning it over. In use, all the rotors of the machine would use casings with the same number of notches. Normally five slugs were chosen from a set of ten.
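The effect of multiple notches can be illustrated with a highly simplified rotor-machine sketch. The wirings, notch positions and stepping rules below are placeholders for illustration only; they do not reproduce the real Typex drum wirings (which used five rotors, two of them static) or its exact mechanism.

```python
import string

ALPHABET = string.ascii_uppercase

# Placeholder wirings and reflector (NOT the real Typex drum wirings).
WIRINGS = [
    "EKMFLGDQVZNTOWYHXUSPAIBRCJ",
    "AJDKSIRUXBLHWTMCQGZNPYFVOE",
    "BDFHJLCPRTXVZNYEIWGAKMUSQO",
]
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

class Rotor:
    def __init__(self, wiring, notches):
        self.wiring = wiring
        self.notches = set(notches)   # several notches per rotor, unlike Enigma's single notch
        self.position = 0

    def at_notch(self):
        return self.position in self.notches

    def step(self):
        self.position = (self.position + 1) % 26

    def forward(self, c):
        i = (ALPHABET.index(c) + self.position) % 26
        return ALPHABET[(ALPHABET.index(self.wiring[i]) - self.position) % 26]

    def backward(self, c):
        i = (ALPHABET.index(c) + self.position) % 26
        return ALPHABET[(self.wiring.index(ALPHABET[i]) - self.position) % 26]

def encipher(rotors, text):
    """Encipher TEXT; running again from the same start positions deciphers it."""
    out = []
    for ch in text:
        # Simplified stepping: a rotor sitting at one of its notches carries motion
        # to its neighbour before the fast rotor advances.
        if rotors[0].at_notch():
            if rotors[1].at_notch():
                rotors[2].step()
            rotors[1].step()
        rotors[0].step()
        c = ch
        for r in rotors:                      # through the moving rotors
            c = r.forward(c)
        c = REFLECTOR[ALPHABET.index(c)]      # reflected back
        for r in reversed(rotors):            # and back out again
            c = r.backward(c)
        out.append(c)
    return "".join(out)

# With multiple notches the middle and slow rotors advance far more often,
# breaking the regular odometer-like pattern a single-notch design produces.
rotors = [Rotor(w, notches=(3, 8, 14, 19, 24)) for w in WIRINGS]
print(encipher(rotors, "TYPEX"))
```

Because the reflector makes each per-letter substitution an involution, running the same machine again from the same starting positions deciphers the text, just as with Enigma.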
On some models, operators could achieve a speed of 20 words a minute, and the output ciphertext or plaintext was printed on paper tape. For some portable versions, such as the Mark III, a message was typed with the left hand while the right hand turned a handle.
Several Internet Typex articles say that only Vaseline was used to lubricate Typex machines and that no other lubricant was used. Vaseline was used to lubricate the rotor disc contacts. Without this there was a risk of arcing which would burn the insulation between the contacts. For the rest of the machine two grades of oil (Spindle Oils 1 and 2) were used. Regular cleaning and maintenance was essential.
In particular, the letters/figures cam-cluster balata discs had to be kept lubricated.
History and development
By the 1920s, the British Government was seeking a replacement for its book cipher systems, which had been shown to be insecure and which proved to be slow and awkward to use. In 1926, an inter-departmental committee was formed to consider whether they could be replaced with cipher machines. Over a period of several years and at large expense, the committee investigated a number of options but no proposal was decided upon. One suggestion was put forward by Wing Commander Oswyn G. W. G. Lywood to adapt the commercial Enigma by adding a printing unit but the committee decided against pursuing Lywood's proposal.
In August 1934, Lywood began work on a machine authorised by the RAF. Lywood worked with J. C. Coulson, Albert P. Lemmon, and Ernest W. Smith at Kidbrooke in Greenwich, with the printing unit provided by Creed & Company. The first prototype was delivered to the Air Ministry on 30 April 1935. In early 1937, around 30 Typex Mark I machines were supplied to the RAF. The machine was initially termed the "RAF Enigma with Type X attachments".
The design of its successor had begun by February 1937. In June 1938, Typex Mark II was demonstrated to the cipher-machine committee, who approved an order of 350 machines. The Mark II model was bulky, incorporating two printers: one for plaintext and one for ciphertext. As a result, it was significantly larger than the Enigma, weighing around , and measuring × × . After trials, the machine was adopted by the RAF, Army and other government departments. During World War II, a large number of Typex machines were manufactured by the tabulating machine manufacturer Powers-Samas.
Typex Mark III was a more portable variant, using the same drums as the Mark II machines powered by turning a handle (it was also possible to attach a motor drive). The maximum operating speed is around 60 letters a minute, significantly slower than the 300 achievable with the Mark II.
Typex Mark VI was another handle-operated variant, measuring × ×, weighing and consisting of over 700 components.
Plugboards for the reflector were added to the machine from November 1941.
For inter-Allied communications during World War II, the Combined Cipher Machine (CCM) was developed, used in the Royal Navy from November 1943. The CCM was implemented by making modifications to Typex and the United States ECM Mark II machine so that they would be compatible.
Typex Mark VIII was a Mark II fitted with a morse perforator.
Typex 22 (BID/08/2) and Typex 23 (BID/08/3) were late models that incorporated plugboards for improved security. Mark 23 was a Mark 22 modified for use with the CCM. In New Zealand, Typex Mark II and Mark III were superseded by Mark 22 and Mark 23 on 1 January 1950. The Royal Air Force used a combination of the Creed Teleprinter and Typex until 1960. This amalgamation allowed a single operator to use punch tape and printouts for both sending and receiving encrypted material.
Erskine (2002) estimates that around 12,000 Typex machines were built by the end of World War II.
Security and use
Less than a year into the war, the Germans could read all British military encryption other than Typex, which was used by the British armed forces and by Commonwealth countries including Australia, Canada and New Zealand. The Royal Navy decided to adopt the RAF Type X Mark II in 1940 after trials; eight stations already had Type X machines. Eventually over 600 machines would be required. New Zealand initially received two machines, at a cost of £115 each, for Auckland and Wellington.
From 1943 the Americans and the British agreed upon a Combined Cipher Machine (CCM). The British Typex and American ECM Mark II could be adapted to become interoperable. While the British showed Typex to the Americans, the Americans never permitted the British to see the ECM, which was a more complex design. Instead, attachments were built for both that allowed them to read messages created on the other.
In 1944 the Admiralty decided to supply 2 CCM Mark III machines (the Typex Mark II with adaptors for the American CCM) for each "major" war vessel down to and including corvettes but not submarines; RNZN vessels were the Achilles, Arabis (then out of action), Arbutus, Gambia and Matua.
Although a British test cryptanalytic attack made considerable progress, the results were not as significant as against the Enigma, due to the increased complexity of the system and the low levels of traffic.
A Typex machine without rotors was captured by German forces at Dunkirk during the Battle of France and more than one German cryptanalytic section proposed attempting to crack Typex; however, the B-Dienst codebreaking organisation gave up on it after six weeks, when further time and personnel for such attempts were refused.
One German cryptanalyst stated that the Typex was more secure than the Enigma since it had seven rotors, therefore no major effort was made to crack Typex messages as they believed that even the Enigma's messages were unbreakable.
Although the Typex has been attributed as having good security, the historic record is much less clear. There was an ongoing investigation into Typex security that arose out of German POWs in North Africa claiming that Typex traffic was decipherable.
A brief excerpt from the report:
TOP SECRET U [ZIP/SAC/G.34]
THE POSSIBLE EXPLOITATION OF TYPEX BY THE GERMAN SIGINT SERVICES
The following is a summary of information so far received on German attempts to break into the British Typex machine, based on P/W interrogations carried out during and subsequent to the war. It is divided into (a) the North African interrogations, (b) information gathered after the end of the war, and (c) an attempt to sum up the evidence for and against the possibility of German successes. Apart from an unconfirmed report from an agent in France on 19 July 1942 to the effect that the GAF were using two British machines captured at DUNKIRK for passing their own traffic between BERLIN and GOLDAP, our evidence during the war was based on reports that OKH was exploiting Typex material left behind in TOBRUK in 1942.
Typex machines continued in use long after World War II. The New Zealand military used TypeX machines until the early 1970s, disposing of its last machine in about 1973.
Advantages over Enigma
All the versions of the Typex had advantages over the German military versions of the Enigma machine. The German equivalent teleprinter machines in World War II (used by higher-level but not field units) were the Lorenz SZ 40/42 and Siemens and Halske T52 using Fish cyphers.
Most versions of the Enigma required two operators to operate effectively—one operator to input text into the Enigma and the other to copy down the enciphered or deciphered characters—Typex required just one operator.
Typex avoided operator copying errors, as the enciphered or deciphered text was automatically printed on paper tape.
Unlike Enigma, Typex I machines were linked to teleprinters while Typex II machines could be if required.
Enigma messages had to be written, enciphered, transmitted (by Morse), received, deciphered, and written again, while Typex messages were typed and automatically enciphered and transmitted all in one step, with the reverse also true.
See also
Cryptanalysis of the Enigma
Cryptanalysis of the Lorenz cipher
Mercury (Typex Mark X)—a Typex descendant used for on-line traffic.
Notes
References
Martin Campbell-Kelly, ICL: A Business and Technical History, Oxford University Press, 1990.
Dorothy Clarkson, "Cypher Machines: Maintenance and Restoration Spanning Sixty Years", Cryptologia, 27(3), July 2003, pp. 209–212.
Cipher A. Deavours and Louis Kruh, "Machine Cryptography and Modern Cryptanalysis", Artech House, 1985, pp. 144–145; 148–150.
Ralph Erskine, "The Admiralty and Cipher Machines During the Second World War: Not So Stupid after All". Journal of Intelligence History 2(2) (Winter 2002).
Ralph Erskine, "The Development of Typex", The Enigma Bulletin'' 2 (1997): pp. 69–86
Kruh and Deavours, "The Typex Cryptograph" Cryptologia 7(2), pp. 145–167, 1983
Eric Morgon, "The History of Communications Security in New Zealand", Part 1 (PDF).Possibly related page as html
External links
A series of photographs of a Typex Mk III
Jerry Proc's page on Typex
Typex graphical simulator for Microsoft Windows
Virtual Typex online simulator
Crypto Museum page on Typex
Cryptographic hardware
Rotor machines
World War II military equipment of the United Kingdom |
38309 | https://en.wikipedia.org/wiki/Regulation%20of%20Investigatory%20Powers%20Act%202000 | Regulation of Investigatory Powers Act 2000 | The Regulation of Investigatory Powers Act 2000 (c.23) (RIP or RIPA) is an Act of the Parliament of the United Kingdom, regulating the powers of public bodies to carry out surveillance and investigation, and covering the interception of communications. It was introduced by the Tony Blair Labour government ostensibly to take account of technological change such as the growth of the Internet and strong encryption.
The Regulation of Investigatory Powers (RIP) Bill was introduced in the House of Commons on 9 February 2000 and completed its Parliamentary passage on 26 July.
Following a public consultation and Parliamentary debate, Parliament approved new additions in December 2003, April 2005, July 2006 and February 2010. A draft bill was put before Parliament during 4 November 2015.
Summary
RIPA regulates the manner in which certain public bodies may conduct surveillance and access a person's electronic communications. The Act:
enables certain public bodies to demand that an ISP provide access to a customer's communications in secret;
enables mass surveillance of communications in transit;
enables certain public bodies to demand ISPs fit equipment to facilitate surveillance;
enables certain public bodies to demand that someone hand over keys to protected information;
allows certain public bodies to monitor people's Internet activities;
prevents the existence of interception warrants and any data collected with them from being revealed in court.
Powers
Agencies with investigative powers
Communications data
The type of communications data that can be accessed varies with the reason for its use, and cannot be adequately explained here. Refer to the legislation for more specific information.
Charity Commission
Criminal Cases Review Commission
Common Services Agency for the Scottish Health Service
a county council or district council in England, a London borough council, the Common Council of the City of London in its capacity as a local authority, the Council of the Isles of Scilly, and any county council or county borough council in Wales
Department for Transport, for the purposes of:
Marine Accident Investigation Branch
Rail Accident Investigation Branch
Air Accidents Investigation Branch
Maritime and Coastguard Agency
a district council within the meaning of the Local Government Act (Northern Ireland) 1972
Department of Agriculture and Rural Development for Northern Ireland
Department of Enterprise, Trade and Investment for Northern Ireland (for the purposes of Trading Standards)
Department of Health (for the purposes of the Medicines and Healthcare products Regulatory Agency)
Department of Trade and Industry
Environment Agency
Financial Conduct Authority
a fire and rescue authority
Fire Authority for Northern Ireland
Food Standards Agency
Gambling Commission
Gangmasters Licensing Authority
Government Communications Headquarters
Health and Safety Executive
HM Revenue and Customs
Home Office (for the purposes of the UK Border Agency)
Independent Police Complaints Commission
Information Commissioner
a Joint Board where it is a fire authority
Office of Communications
Office of Fair Trading
The Pensions Regulator
Office of the Police Ombudsman for Northern Ireland
Port of Dover Police
Port of Liverpool Police
Post Office Investigation Branch
Postal Services Commission
NHS ambulance service Trust
NHS Counter Fraud and Security Management Service
Northern Ireland Ambulance Service Health and Social Services Trust
Northern Ireland Health and Social Services Central Services Agency
Royal Navy Regulating Branch
Royal Military Police
Royal Air Force Police
Scottish Ambulance Service Board
a Scottish council where it is a fire authority
Scottish Environment Protection Agency
Secret Intelligence Service
Security Service
Serious Fraud Office
the special police forces (including the Scottish Drug Enforcement Agency)
the territorial police forces
Welsh Ambulance Services NHS Trust
Directed surveillance and covert human intelligence sources
The reasons for which the use of directed surveillance & covert human intelligence sources is permitted vary with each authority. Refer to the legislation for more specific information.
the armed forces
Charity Commission
Commission for Healthcare Audit and Inspection
a county council or district council in England, a London borough council, the Common Council of the City of London in its capacity as a local authority, the Council of the Isles of Scilly, and any county council or county borough council in Wales
Department for Environment, Food and Rural Affairs (for the purposes of the Marine Fisheries Agency)
Department of Health (for the purposes of the Medicines and Healthcare products Regulatory Agency)
Department of Trade and Industry
Department for Transport (for the purposes of transport security, Vehicle and Operator Services Agency, Driving Standards Agency and Maritime and Coastguard Agency)
Department for Work and Pensions
Environment Agency
Financial Conduct Authority
a fire authority
Food Standards Agency
Gambling Commission
Gangmasters Licensing Authority
Government Communications Headquarters
Commissioners of Revenue and Customs
Home Office (for the purposes of HM Prison Service and the UK Border Agency)
Ministry of Defence
Northern Ireland Office (for the purposes of the Northern Ireland Prison Service)
Ofcom
Office of Fair Trading
Office of the Deputy Prime Minister
Office of the Police Ombudsman for Northern Ireland
Postal Services Commission
Port of Dover Police
Port of Liverpool Police
Royal Mail
Secret Intelligence Service
Security Service
Serious Fraud Office
Welsh Government (for the purposes of the NHS Directorate, NHS Finance Division, Common Agricultural Policy Management Division and Care Standards Inspectorate for Wales)
a territorial police force or special police force
Directed surveillance
The reasons for which the use of directed surveillance is permitted vary with each authority. Refer to the legislation for more specific information.
Health & Safety Executive
Information Commissioner
Her Majesty's Chief Inspector of Schools in England (for the purposes of the Complaints, Investigation and Enforcement Team)
General Pharmaceutical Council
Controversy
Critics claim that the spectres of terrorism, internet crime and paedophilia were used to push the act through and that there was little substantive debate in the House of Commons. The act has numerous critics, many of whom regard the RIPA regulations as excessive and a threat to civil liberties in the UK. Campaign group Big Brother Watch published a report in 2010 investigating the improper use of RIPA by local councils. Critics such as Keith Vaz, the chairman of the House of Commons home affairs committee, have expressed concern that the act is being abused for "petty and vindictive" cases. Similarly, Brian Binley, Member of Parliament (MP) for Northampton South has urged councils to stop using the law, accusing them of acting like comic strip detective Dick Tracy.
The Trading Standards Institute has been very critical of these views, stating that the use of surveillance is critical to their success (see TSI press release).
The "deniable encryption" features in free software such as FreeOTFE, TrueCrypt and BestCrypt could be said to make the task of investigations featuring RIPA much more difficult.
Another concern is that the Act requires sufficiently large UK Internet Service Providers to install technical systems to assist law enforcement agencies with interception activity. Although this equipment must be installed at the ISPs' expense, RIPA does provide that Parliament will examine appropriate funding for ISPs if the cost burden became unfairly high.
Accusations of oppressive use
In April 2008, it became known that council officials in Poole put three children and their parents under surveillance, at home and in their daily movements, to check whether they lived in a particular school catchment area. Council officials carried out directed surveillance on the family a total of 21 times. Tim Martin, the council's head of legal services, had authorised the surveillance and tried to argue that it was justified under RIPA, but in a subsequent ruling by the Investigatory Powers Tribunal – its first ever ruling – the surveillance was deemed to be unlawful. The same council put fishermen under covert surveillance to check for the illegal harvesting of cockles and clams in ways that are regulated by RIPA. David Smith, deputy commissioner at the ICO (Information Commissioner's Office) stated that he was concerned about the surveillance which took place in Poole. Other councils in the UK have conducted undercover operations regulated by RIPA against dog fouling and fly-tipping. In April 2016, 12 councils said that they use unmanned aerial vehicles for "covert operations", and that such flights are covered by the Regulation of Investigatory Powers Act 2000.
Despite claims in the press that local councils are conducting over a thousand RIPA-based covert surveillance operations every month for petty offences such as under-age smoking and breaches of planning regulations, the Office of Surveillance Commissioners' last report shows that public bodies granted 8,477 requests for Directed Surveillance, down over 1,400 on the previous year. Less than half of those were granted by Local Authorities, and the commissioner reported that, "Generally speaking, local authorities use their powers sparingly with over half of them granting five or fewer authorisations for directed surveillance. Some sixteen per cent granted none at all."
In June 2008, the chairman of the Local Government Association, Sir Simon Milton, sent out a letter to the leaders of every council in England, urging local governments not to use the new powers granted by RIPA "for trivial matters", and suggested "reviewing these powers annually by an appropriate scrutiny committee".
Especially contentious was Part III of the Act, which requires persons to disclose a password or decryption key to government representatives on demand, a requirement critics characterise as compelled self-incrimination. Failure to do so is a criminal offence, with a penalty of two years in jail or five years in cases involving national security or child indecency. Using the mechanism of secondary legislation, some parts of the Act required activation by a ministerial order before attaining legal force. Such orders have been made in respect of the relevant sections of Part I and Part II of the RIP Act and Part III. The latter became active in October 2007. The first case where the powers were used was against animal rights activists in November 2007.
Identification of journalists' sources
In October 2014, it was revealed that RIPA had been used by UK police forces to obtain information about journalists' sources in at least two cases. These related to the so-called Plebgate inquiry and the prosecution of Chris Huhne for perversion of the course of justice. In both cases, journalists' telephone records were obtained using the powers of the act in order to identify their sources, bypassing the usual court proceedings needed to obtain such information.
The UK newspaper The Sun made an official written complaint to the Investigatory Powers Tribunal to seek a public review of the London Metropolitan Police's use of anti-terror laws to obtain the phone records of Tom Newton Dunn, its political editor, in relation to its inquiry into the "Plebgate" affair. The Sun’s complaint coincided with confirmation that the phone records of the news editor of the Mail on Sunday and one of its freelance journalists had also been obtained by Kent police force when they investigated Chris Huhne's speeding fraud. Journalists' sources are usually agreed to be privileged and protected from disclosure under European laws with which the UK complies. However, by using RIPA an investigating office just needs approval from a senior officer rather than the formal approval of a court hearing. Media lawyers and press freedom groups are concerned by the use of RIPA because it happens in secret and the press have no way of knowing whether their sources have been compromised. Responding to The Sun's complaint Sir Paul Kennedy, the interception of communications commissioner, launched a full inquiry and urged Home Office ministers to accelerate the introduction of promised protections for journalists, lawyers and others who handle privileged information, including confidential helplines, from such police surveillance operations. He said: "I fully understand and share the concerns raised by the protection of journalistic sources so as to enable a free press. Today I have written to all chief constables and directed them under section 58 (1) of the Regulation of Investigatory Powers Act (Ripa) to provide me with full details of all investigations that have used Ripa powers to acquire communications data to identify journalistic sources. My office will undertake a full inquiry into these matters and report our findings to the prime minister".
On 12 October 2014, the justice minister, Simon Hughes, confirmed on Sky News's Murnaghan programme that the UK government will reform RIPA to prevent the police using surveillance powers to discover journalists' sources. He said that the police's use of RIPA's powers had been "entirely inappropriate" and in future the authorisation of a judge would be needed for police forces to be given approval to access journalists' phone records in pursuit of a criminal investigation. The presumption would be that if a journalist was acting in the public interest, they would be protected, he added. Hughes further said that if the police made an application to a court he would assume a journalist would be informed that the authorities were seeking to access his phone records. More than 100,000 RIPA requests are made every year for access to communications data against targets including private citizens. It is not known how many have involved journalists' phones.
Prosecutions under RIPA
A number of offences have been prosecuted involving the abuse of investigatory powers. Widely reported cases include the Stanford/Liddell case, the Goodman/Mulcaire Royal voicemail interception, and Operation Barbatus.
Cliff Stanford, and George Nelson Liddell pleaded guilty to offences under the Regulation of Investigatory Powers Act in 2005. They were found to have intercepted emails at the company Redbus Interhouse. Stanford was sentenced to six months' imprisonment suspended for two years, and fined £20,000. It was alleged Stanford had intercepted emails between Dame Shirley Porter and John Porter (Chairman of Redbus Interhouse).
In 2007, News of the World royal editor Clive Goodman was sentenced to four months in jail for intercepting the voicemail of members of the Royal Family. His associate Glenn Mulcaire received a six-month sentence.
In 2007, Operation Barbatus exposed a sophisticated criminal surveillance business organised by corrupt police officers. A former Metropolitan Police officer, Jeremy Young, was jailed for 27 months for various offences including six counts of conspiracy to intercept communications unlawfully. A second former policeman, Scott Gelsthorpe, was sentenced to 24 months for offences including conspiracy to intercept communications unlawfully. 3 other former police officers and a private detective were also jailed for their part in running a private detective agency called Active Investigation Services.
In 2008, four people were cautioned for 'Unlawful intercepting of a postal, public or private telecommunications scheme' under S.1(1), (2) & (7); the circumstances of the offences are not known at the time of writing. Three people were proceeded against for 'Failure to disclose key to protected information' under S.53, of whom two were tried. One person was tried for 'Disclosing details of Section 49 Notice' under S.54.
In August 2009 it was announced that two people had been prosecuted and convicted for refusing to provide British authorities with their encryption keys, under Part III of the Act. The first of these was sentenced to a term of 9 months' imprisonment. In a 2010 case, Oliver Drage, a 19-year-old takeaway worker being investigated as part of a police investigation into a child exploitation network, was sentenced, at Preston Crown Court, to four months imprisonment. Mr Drage was arrested in May 2009, after investigating officers searched his home near Blackpool. He had been required, under this act, to provide his 50-character encryption key but had not complied.
In a further case in 2010 Poole Borough Council was accused of spying unfairly on a family. Although the Council invoked powers under RIPA to establish whether a family fell into a certain school catchment area, when taken before the Investigatory Powers Tribunal it was found guilty of improper use of surveillance powers.
Amendments
In October 2020 the Government introduced the Covert Human Intelligence Sources (Criminal Conduct) Bill which would permit, in certain circumstances, to authorise security, intelligence and police agencies to participate in criminal conduct during their operations. This Bill would amend the RIPA where required.
Investigatory Powers Tribunal
The 2000 Act established the Investigatory Powers Tribunal to hear complaints about surveillance by public bodies. The Tribunal replaced the Interception of Communications Tribunal, the Security Service Tribunal, and the Intelligence Services Tribunal with effect from 2 October 2000.
Between 2000 and 2009 the tribunal upheld only 4 out of 956 complaints.
See also
Human Rights Act 1998
Investigatory Powers Act 2016
Mass surveillance in the United Kingdom
Phone hacking
Rubber-hose cryptanalysis
Plausible deniability
Interception Modernisation Programme
United States v. Boucher, a case in the US courts which determined that a criminal defendant cannot be forced to reveal his encryption passphrase but can be forced to provide a plaintext (decrypted) copy of their encrypted data, if the defendant had previously willingly shown the authorities the drive's contents (i.e., having previously incriminated himself with those contents)
References
External links
Regulation of Investigatory Powers Information Centre (against RIP)
Parliament "didn't understand RIP Act"
Articles on aspects of RIPA and Surveillance Law
BBC News Website (April 2008) – RIPA Spy law 'used in dog fouling war'
Computing legislation
Cryptography law
History of telecommunications in the United Kingdom
Mass surveillance
United Kingdom Acts of Parliament 2000
Law enforcement in the United Kingdom
Home Office (United Kingdom) |
38809 | https://en.wikipedia.org/wiki/GNU%20Privacy%20Guard | GNU Privacy Guard | GNU Privacy Guard (GnuPG or GPG) is a free-software replacement for Symantec's PGP cryptographic software suite. The software is compliant with RFC 4880, the IETF standards-track specification of OpenPGP. Modern versions of PGP are interoperable with GnuPG and other OpenPGP-compliant systems.
GnuPG is part of the GNU Project and received major funding from the German government in 1999.
Overview
GnuPG is a hybrid-encryption software program because it uses a combination of conventional symmetric-key cryptography for speed, and public-key cryptography for ease of secure key exchange, typically by using the recipient's public key to encrypt a session key which is used only once. This mode of operation is part of the OpenPGP standard and has been part of PGP from its first version.
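The division of labour between the two kinds of cryptography can be sketched as follows. This is not GnuPG's own code or the OpenPGP message format; it is a conceptual example of hybrid encryption written with the third-party Python cryptography package, and the algorithm choices (RSA-OAEP, AES-GCM) are illustrative assumptions rather than what OpenPGP specifies.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair (in OpenPGP these would live in the keyring).
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)
recipient_public = recipient_private.public_key()

message = b"Meeting moved to 10:00."
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1. The bulk data is encrypted with a one-time symmetric session key (fast).
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, message, None)

# 2. Only the small session key is encrypted with the recipient's public key (slow).
wrapped_key = recipient_public.encrypt(session_key, oaep)

# The recipient unwraps the session key with the private key, then decrypts the bulk data.
recovered = AESGCM(recipient_private.decrypt(wrapped_key, oaep)).decrypt(nonce, ciphertext, None)
assert recovered == message
```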
The GnuPG 1.x series uses an integrated cryptographic library, while the GnuPG 2.x series replaces this with Libgcrypt.
GnuPG encrypts messages using asymmetric key pairs individually generated by GnuPG users. The resulting public keys may be exchanged with other users in a variety of ways, such as Internet key servers. They must always be exchanged carefully to prevent identity spoofing by corrupting public key ↔ "owner" identity correspondences. It is also possible to add a cryptographic digital signature to a message, so the message integrity and sender can be verified, if a particular correspondence relied upon has not been corrupted.
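In practice these operations are performed with the gpg command-line tool. The sketch below drives gpg from Python to export a public key for sharing and to create and verify a detached signature; it assumes the gpg binary is on the PATH and that "alice@example.org" is a placeholder user ID already present in the local keyring, with report.txt standing in for any file.

```python
import subprocess

KEY_ID = "alice@example.org"   # placeholder user ID of a key assumed to be in the local keyring

# Export the public key in ASCII armour so it can be shared by e-mail, web page or key server.
subprocess.run(["gpg", "--armor", "--output", "alice.pub.asc", "--export", KEY_ID], check=True)

# Create a detached signature over a file, then verify the signature against the file.
subprocess.run(["gpg", "--armor", "--local-user", KEY_ID,
                "--output", "report.txt.asc", "--detach-sign", "report.txt"], check=True)
subprocess.run(["gpg", "--verify", "report.txt.asc", "report.txt"], check=True)
```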
GnuPG also supports symmetric encryption algorithms. By default, GnuPG uses the AES symmetrical algorithm since version 2.1, CAST5 was used in earlier versions. GnuPG does not use patented or otherwise restricted software or algorithms. Instead, GnuPG uses a variety of other, non-patented algorithms.
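Symmetric-only (passphrase) encryption can be sketched the same way; the file names are placeholders, and gpg will prompt interactively for the passphrase unless one is supplied via its batch options.

```python
import subprocess

# Encrypt a file with a passphrase only (no key pair involved), explicitly selecting AES-256.
subprocess.run(["gpg", "--symmetric", "--cipher-algo", "AES256",
                "--output", "notes.txt.gpg", "notes.txt"], check=True)

# Decrypt it again; gpg reads the algorithm from the encrypted file itself.
subprocess.run(["gpg", "--output", "notes.decrypted.txt", "--decrypt", "notes.txt.gpg"], check=True)
```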
For a long time, it did not support the IDEA encryption algorithm used in PGP. It was in fact possible to use IDEA in GnuPG by downloading a plugin for it; however, this might have required a license for some uses in countries in which IDEA was patented. Starting with versions 1.4.13 and 2.0.20, GnuPG supports IDEA because the last patent on IDEA expired in 2012. Support for IDEA is intended "to get rid of all the questions from folks either trying to decrypt old data or migrating keys from PGP to GnuPG", and hence is not recommended for regular use.
As of 2.2 versions, GnuPG supports the following algorithms:
Public key: RSA, ElGamal, DSA, ECDH, ECDSA, EdDSA
Cipher: 3DES, IDEA (since versions 1.4.13 and 2.0.20), CAST5, Blowfish, Twofish, AES-128, AES-192, AES-256, Camellia-128, -192 and -256 (since versions 1.4.10 and 2.0.12)
Hash: MD5, SHA-1, RIPEMD-160, SHA-256, SHA-384, SHA-512, SHA-224
Compression: Uncompressed, ZIP, ZLIB, BZIP2
More recent releases of GnuPG 2.x ("modern" and the now deprecated "stable" series) expose most cryptographic functions and algorithms Libgcrypt (its cryptography library) provides, including support for elliptic curve cryptography (ECDH, ECDSA and EdDSA) in the "modern" series (i.e. since GnuPG 2.1).
History
GnuPG was initially developed by Werner Koch. The first production version, version 1.0.0, was released on September 7, 1999, almost two years after the first GnuPG release (version 0.0.0). The German Federal Ministry of Economics and Technology funded the documentation and the port to Microsoft Windows in 2000.
GnuPG is a system compliant to the OpenPGP standard, thus the history of OpenPGP is of importance; it was designed to interoperate with PGP, an email encryption program initially designed and developed by Phil Zimmermann.
On February 7, 2014, a GnuPG crowdfunding effort closed, raising €36,732 for a new Web site and infrastructure improvements.
Branches
Since the release of a stable GnuPG 2.3, starting with version 2.3.3 in October 2021, three stable branches of GnuPG are actively maintained:
A "stable branch", which currently is (as of 2021) the 2.3 branch.
A "LTS (long-term support) branch", which currently is (as of 2021) the 2.2 branch (which was formerly called "modern branch", in comparison to the 2.0 branch).
The old "legacy branch" (formerly called "classic branch"), which is and will stay the 1.4 branch.
Before GnuPG 2.3, two stable branches of GnuPG were actively maintained:
"Modern" (2.2), with numerous new features, such as elliptic curve cryptography, compared to the former "stable" (2.0) branch, which it replaced with the release of GnuPG 2.2.0 on August 28, 2017. It was initially released on November 6, 2014.
"Classic" (1.4), the very old, but still maintained stand-alone version, most suitable for outdated or embedded platforms. Initially released on December 16, 2004.
Different GnuPG 2.x versions (e.g. from the 2.2 and 2.0 branches) cannot be installed at the same time. However, it is possible to install a "classic" GnuPG version (i.e. from the 1.4 branch) along with any GnuPG 2.x version.
Before the release of GnuPG 2.2 ("modern"), the now deprecated "stable" branch (2.0) was recommended for general use, initially released on November 13, 2006. This branch reached its end-of-life on December 31, 2017; its last version is 2.0.31, released on December 29, 2017.
Before the release of GnuPG 2.0, all stable releases originated from a single branch; i.e., before November 13, 2006, no multiple release branches were maintained in parallel. These former, sequentially succeeding (up to 1.4) release branches were:
1.2 branch, initially released on September 22, 2002, with 1.2.6 as the last version, released on October 26, 2004.
1.0 branch, initially released on September 7, 1999, with 1.0.7 as the last version, released on April 30, 2002.
(Note that before the release of GnuPG 2.3.0, branches with an odd minor release number (e.g. 2.1, 1.9, 1.3) were development branches leading to a stable release branch with a "+ 0.1" higher version number (e.g. 2.2, 2.0, 1.4); hence branches 2.2 and 2.1 both belong to the "modern" series, 2.0 and 1.9 both to the "stable" series, while the branches 1.4 and 1.3 both belong to the "classic" series.
With the release of GnuPG 2.3.0, this nomenclature was altered to be composed of a "stable" and "LTS" branch from the "modern" series, plus 1.4 as the last maintained "classic" branch. Also note that even or odd minor release numbers no longer indicate a stable or development release branch.)
Platforms
Although the basic GnuPG program has a command-line interface, there exist various front-ends that provide it with a graphical user interface. For example, GnuPG encryption support has been integrated into KMail and Evolution, the graphical email clients found in KDE and GNOME, the most popular Linux desktops. There are also graphical GnuPG front-ends, for example Seahorse for GNOME and KGPG and Kleopatra for KDE.
GPGTools provides a number of front-ends for OS integration of encryption and key management, as well as GnuPG installations via installer packages; the suite installs all related OpenPGP applications (GPG Keychain), plugins (GPG Mail) and dependencies (MacGPG), along with GPG Services (integration into the macOS Services menu) to use GnuPG-based encryption.
Instant messaging applications such as Psi and Fire can automatically secure messages when GnuPG is installed and configured. Web-based software such as Horde also makes use of it. The cross-platform extension Enigmail provides GnuPG support for Mozilla Thunderbird and SeaMonkey. Similarly, Enigform provides GnuPG support for Mozilla Firefox. FireGPG was discontinued June 7, 2010.
In 2005, g10 Code GmbH and Intevation GmbH released Gpg4win, a software suite that includes GnuPG for Windows, GNU Privacy Assistant, and GnuPG plug-ins for Windows Explorer and Outlook. These tools are wrapped in a standard Windows installer, making it easier for GnuPG to be installed and used on Windows systems.
Limitations
As a command-line-based system, GnuPG 1.x is not written as an API that may be incorporated into other software. To overcome this, GPGME (abbreviated from GnuPG Made Easy) was created as an API wrapper around GnuPG that parses the output of GnuPG and provides a stable and maintainable API between the components. This currently requires an out-of-process call to the GnuPG executable for many GPGME API calls; as a result, possible security problems in an application do not propagate to the actual cryptography code due to the process barrier. Various graphical front-ends based on GPGME have been created.
Since GnuPG 2.0, many of GnuPG's functions are available directly as C APIs in Libgcrypt.
Vulnerabilities
The OpenPGP standard specifies several methods of digitally signing messages. In 2003, due to an error in a change to GnuPG intended to make one of those methods more efficient, a security vulnerability was introduced. It affected only one method of digitally signing messages, only for some releases of GnuPG (1.0.2 through 1.2.3), and there were fewer than 1000 such keys listed on the key servers. Most people did not use this method, and were in any case discouraged from doing so, so the damage caused (if any, since none has been publicly reported) would appear to have been minimal. Support for this method has been removed from GnuPG versions released after this discovery (1.2.4 and later).
Two further vulnerabilities were discovered in early 2006; the first being that scripted uses of GnuPG for signature verification could result in false positives, the second that non-MIME messages were vulnerable to the injection of data which, while not covered by the digital signature, would be reported as being part of the signed message. In both cases updated versions of GnuPG were made available at the time of the announcement.
In June 2017, a vulnerability (CVE-2017-7526) was discovered by Bernstein, Breitner and others within Libgcrypt, a library used by GnuPG; it enabled full key recovery for RSA-1024 and for more than 1/8th of RSA-2048 keys. This side-channel attack exploits the fact that Libgcrypt used a sliding-window method for exponentiation, which leads to the leakage of exponent bits and to full key recovery. Again, an updated version of GnuPG was made available at the time of the announcement.
In October 2017, the ROCA vulnerability was announced that affects RSA keys generated by YubiKey 4 tokens, which often are used with PGP/GPG. Many published PGP keys were found to be susceptible.
Around June 2018, the SigSpoof attacks were announced. These allowed an attacker to convincingly spoof digital signatures.
In January 2021, Libgcrypt 1.9.0 was released, which was found to contain a severe bug that was simple to exploit. A fix was released 10 days later in Libgcrypt 1.9.1.
Application support
Notable applications, front ends, and browser extensions that support GPG include the following:
Claws Mail – an email client with GPG plugin
Enigmail – a Mozilla Thunderbird and SeaMonkey extension
Evolution – a GNOME Mail application with native GnuPG support
FireGPG – a Firefox extension (discontinued)
Gnus – a message and news reader in GNU Emacs
Gpg4win – a Windows package with tools and manuals for email and file encryption
GpgFrontend – a cross-platform graphical front end for GnuPG
GPG Mail – a macOS Mail.app plug-in
KGPG – a KDE graphical front end for GnuPG
Kleopatra – KDE's graphical front end for GnuPG, also used in Gpg4win
KMail – email client / email component of Kontact (PIM software) that uses GPG for cryptography
MCabber – a Jabber client
Mailvelope – a Google Chrome and Firefox extension for end-to-end encryption of email traffic
Mutt – an email client with PGP/GPG support built-in
Psi (instant messaging client)
The Bat! – an email client that can use GnuPG as an OpenPGP provider
WinPT – a graphical front end to GPG for Windows (discontinued)
In popular culture
In May 2014, The Washington Post reported on a 12-minute video guide "GPG for Journalists" posted to Vimeo in January 2013 by a user named anon108. The Post identified anon108 as fugitive NSA whistleblower Edward Snowden, who it said made the tutorial—"narrated by a digitally disguised voice whose speech patterns sound similar to those of Snowden"—to teach journalist Glenn Greenwald email encryption. Greenwald said that he could not confirm the authorship of the video. There is a similarity between the tutorial and interviews Snowden has participated in, such as mentioning a password of "margaretthatcheris110%sexy" in both this video and an interview conducted with John Oliver in 2015.
See also
Acoustic cryptanalysis
Key signing party
Off-the-Record Messaging – also known as OTR
OpenPGP card – a smartcard with many GnuPG functions
Package manager
RetroShare – a friend-to-friend network based on PGP authentication
Web of trust
Notes
References
External links
A Short History of the GNU Privacy Guard, written by Werner Koch, published on GnuPG's 10th birthday
1999 software
Cross-platform software
Cryptographic software
Free security software
Privacy Guard
Linux security software
OpenPGP
Privacy software |
38838 | https://en.wikipedia.org/wiki/Cyclic%20redundancy%20check | Cyclic redundancy check | A cyclic redundancy check (CRC) is an error-detecting code commonly used in digital networks and storage devices to detect accidental changes to raw data. Blocks of data entering these systems get a short check value attached, based on the remainder of a polynomial division of their contents. On retrieval, the calculation is repeated and, in the event the check values do not match, corrective action can be taken against data corruption. CRCs can be used for error correction (see bitfilters).
CRCs are so called because the check (data verification) value is a redundancy (it expands the message without adding information) and the algorithm is based on cyclic codes. CRCs are popular because they are simple to implement in binary hardware, easy to analyze mathematically, and particularly good at detecting common errors caused by noise in transmission channels. Because the check value has a fixed length, the function that generates it is occasionally used as a hash function.
Introduction
CRCs are based on the theory of cyclic error-correcting codes. The use of systematic cyclic codes, which encode messages by adding a fixed-length check value, for the purpose of error detection in communication networks, was first proposed by W. Wesley Peterson in 1961.
Cyclic codes are not only simple to implement but have the benefit of being particularly well suited for the detection of burst errors: contiguous sequences of erroneous data symbols in messages. This is important because burst errors are common transmission errors in many communication channels, including magnetic and optical storage devices. Typically an n-bit CRC applied to a data block of arbitrary length will detect any single error burst not longer than n bits, and the fraction of all longer error bursts that it will detect is approximately 1 − 2⁻ⁿ.
Specification of a CRC code requires definition of a so-called generator polynomial. This polynomial becomes the divisor in a polynomial long division, which takes the message as the dividend and in which the quotient is discarded and the remainder becomes the result. The important caveat is that the polynomial coefficients are calculated according to the arithmetic of a finite field, so the addition operation can always be performed bitwise-parallel (there is no carry between digits).
In practice, all commonly used CRCs employ the Galois field, or more simply a finite field, of two elements, GF(2). The two elements are usually called 0 and 1, comfortably matching computer architecture.
A CRC is called an n-bit CRC when its check value is n bits long. For a given n, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degree n, which means it has n + 1 terms. In other words, the polynomial has a length of n + 1; its encoding requires n + 1 bits. Note that most polynomial specifications either drop the MSB or LSB, since they are always 1. The CRC and associated polynomial typically have a name of the form CRC-n-XXX as in the table below.
The simplest error-detection system, the parity bit, is in fact a 1-bit CRC: it uses the generator polynomial x + 1 (two terms), and has the name CRC-1.
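To make the connection concrete, the following minimal sketch (an illustration, not part of any standard) shows that long division by the two-term generator x + 1 reduces to XOR-ing all message bits together, i.e. computing the parity bit:

def crc1(bits):
    """Remainder of the message modulo the generator x + 1 (binary 11)."""
    remainder = 0
    for bit in bits:
        remainder ^= bit  # division by x + 1 collapses to an XOR of every bit
    return remainder

assert crc1([1, 1, 0, 1, 0, 0, 1]) == 0  # four 1s: even parity
assert crc1([1, 1, 0, 1]) == 1           # three 1s: parity bit is 1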
Application
A CRC-enabled device calculates a short, fixed-length binary sequence, known as the check value or CRC, for each block of data to be sent or stored and appends it to the data, forming a codeword.
When a codeword is received or read, the device either compares its check value with one freshly calculated from the data block, or equivalently, performs a CRC on the whole codeword and compares the resulting check value with an expected residue constant.
If the CRC values do not match, then the block contains a data error.
The device may take corrective action, such as rereading the block or requesting that it be sent again. Otherwise, the data is assumed to be error-free (though, with some small probability, it may contain undetected errors; this is inherent in the nature of error-checking).
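As an illustration (using CRC-32 only because it ships with Python's standard library), the sketch below appends the check value to a message and then verifies it in both of the ways just described; the residue constant 0x2144DF1C is specific to the usual CRC-32 parametrization with its final XOR applied.

import struct
import zlib

message = b"Hello, CRC!"

# Sender: compute the check value and append it to form the codeword.
check = zlib.crc32(message)
codeword = message + struct.pack("<I", check)  # CRC-32 is appended least significant byte first

# Receiver, method 1: recompute the CRC of the data block and compare.
data, received = codeword[:-4], struct.unpack("<I", codeword[-4:])[0]
assert zlib.crc32(data) == received

# Receiver, method 2: run the CRC over the whole codeword and compare the
# result with the expected residue constant.
assert zlib.crc32(codeword) == 0x2144DF1C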
Data integrity
CRCs are specifically designed to protect against common types of errors on communication channels, where they can provide quick and reasonable assurance of the integrity of messages delivered. However, they are not suitable for protecting against intentional alteration of data.
Firstly, as there is no authentication, an attacker can edit a message and recompute the CRC without the substitution being detected. When stored alongside the data, CRCs and cryptographic hash functions by themselves do not protect against intentional modification of data. Any application that requires protection against such attacks must use cryptographic authentication mechanisms, such as message authentication codes or digital signatures (which are commonly based on cryptographic hash functions).
Secondly, unlike cryptographic hash functions, CRC is an easily reversible function, which makes it unsuitable for use in digital signatures.
Thirdly, CRC satisfies a relation similar to that of a linear function (or more accurately, an affine function):
CRC(x ⊕ y) = CRC(x) ⊕ CRC(y) ⊕ c
where c depends on the length of x and y. This can also be stated as follows, where x, y and z have the same length:
CRC(x ⊕ y ⊕ z) = CRC(x) ⊕ CRC(y) ⊕ CRC(z)
as a result, even if the CRC is encrypted with a stream cipher that uses XOR as its combining operation (or mode of block cipher which effectively turns it into a stream cipher, such as OFB or CFB), both the message and the associated CRC can be manipulated without knowledge of the encryption key; this was one of the well-known design flaws of the Wired Equivalent Privacy (WEP) protocol.
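The affine relation is easy to observe with an off-the-shelf CRC-32 implementation. In the sketch below the constant c turns out to be the CRC of an all-zero message of the same length, a consequence of the non-zero initial register value and the final XOR:

import zlib

x = b"attack at dawn!!"
y = b"attack at dusk!!"
xored = bytes(a ^ b for a, b in zip(x, y))

# c depends only on the message length, not on the message contents.
c = zlib.crc32(bytes(len(x)))  # CRC of an all-zero message of equal length
assert zlib.crc32(xored) == zlib.crc32(x) ^ zlib.crc32(y) ^ c

# With three equal-length messages the length-dependent constants cancel out.
z = b"attack at noon!!"
xyz = bytes(a ^ b ^ d for a, b, d in zip(x, y, z))
assert zlib.crc32(xyz) == zlib.crc32(x) ^ zlib.crc32(y) ^ zlib.crc32(z)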
Computation
To compute an n-bit binary CRC, line the bits representing the input in a row, and position the (n + 1)-bit pattern representing the CRC's divisor (called a "polynomial") underneath the left end of the row.
In this example, we shall encode 14 bits of message with a 3-bit CRC, with a polynomial x³ + x + 1. The polynomial is written in binary as its coefficients; a 3rd-degree polynomial has 4 coefficients (1x³ + 0x² + 1x + 1). In this case, the coefficients are 1, 0, 1 and 1. The result of the calculation is 3 bits long, which is why it is called a 3-bit CRC. However, you need 4 bits to explicitly state the polynomial.
Start with the message to be encoded:
11010011101100
This is first padded with zeros corresponding to the bit length n of the CRC. This is done so that the resulting code word is in systematic form. Here is the first calculation for computing a 3-bit CRC:
11010011101100 000 <--- input right padded by 3 bits
1011 <--- divisor (4 bits) = x³ + x + 1
------------------
01100011101100 000 <--- result
The algorithm acts on the bits directly above the divisor in each step. The result for that iteration is the bitwise XOR of the polynomial divisor with the bits above it. The bits not above the divisor are simply copied directly below for that step. The divisor is then shifted right to align with the highest remaining 1 bit in the input, and the process is repeated until the divisor reaches the right-hand end of the input row. Here is the entire calculation:
11010011101100 000 <--- input right padded by 3 bits
1011 <--- divisor
01100011101100 000 <--- result (note the first four bits are the XOR with the divisor beneath, the rest of the bits are unchanged)
1011 <--- divisor ...
00111011101100 000
1011
00010111101100 000
1011
00000001101100 000 <--- note that the divisor moves over to align with the next 1 in the dividend (since quotient for that step was zero)
1011 (in other words, it doesn't necessarily move one bit per iteration)
00000000110100 000
1011
00000000011000 000
1011
00000000001110 000
1011
00000000000101 000
101 1
-----------------
00000000000000 100 <--- remainder (3 bits). Division algorithm stops here as dividend is equal to zero.
Since the leftmost divisor bit zeroed every input bit it touched, when this process ends the only bits in the input row that can be nonzero are the n bits at the right-hand end of the row. These n bits are the remainder of the division step, and will also be the value of the CRC function (unless the chosen CRC specification calls for some postprocessing).
The validity of a received message can easily be verified by performing the above calculation again, this time with the check value added instead of zeroes. The remainder should equal zero if there are no detectable errors.
11010011101100 100 <--- input with check value
1011 <--- divisor
01100011101100 100 <--- result
1011 <--- divisor ...
00111011101100 100
......
00000000001110 100
1011
00000000000101 100
101 1
------------------
00000000000000 000 <--- remainder
The following Python code outlines a function which will return the initial CRC remainder for a chosen input and polynomial, with either 1 or 0 as the initial padding. Note that this code works with string inputs rather than raw numbers:
def crc_remainder(input_bitstring, polynomial_bitstring, initial_filler):
"""Calculate the CRC remainder of a string of bits using a chosen polynomial.
initial_filler should be '1' or '0'.
"""
polynomial_bitstring = polynomial_bitstring.lstrip('0')
len_input = len(input_bitstring)
initial_padding = (len(polynomial_bitstring) - 1) * initial_filler
input_padded_array = list(input_bitstring + initial_padding)
while '1' in input_padded_array[:len_input]:
cur_shift = input_padded_array.index('1')
for i in range(len(polynomial_bitstring)):
input_padded_array[cur_shift + i] \
= str(int(polynomial_bitstring[i] != input_padded_array[cur_shift + i]))
return ''.join(input_padded_array)[len_input:]
def crc_check(input_bitstring, polynomial_bitstring, check_value):
"""Calculate the CRC check of a string of bits using a chosen polynomial."""
polynomial_bitstring = polynomial_bitstring.lstrip('0')
len_input = len(input_bitstring)
initial_padding = check_value
input_padded_array = list(input_bitstring + initial_padding)
while '1' in input_padded_array[:len_input]:
cur_shift = input_padded_array.index('1')
for i in range(len(polynomial_bitstring)):
input_padded_array[cur_shift + i] \
= str(int(polynomial_bitstring[i] != input_padded_array[cur_shift + i]))
return ('1' not in ''.join(input_padded_array)[len_input:])
>>> crc_remainder('11010011101100', '1011', '0')
'100'
>>> crc_check('11010011101100', '1011', '100')
True
CRC-32 algorithm
This is a practical algorithm for the CRC-32 variant of CRC. The CRCTable is a memoization of a calculation that would have to be repeated for each byte of the message.
Function CRC32
Input:
data: Bytes // Array of bytes
Output:
crc32: UInt32 // 32-bit unsigned CRC-32 value
// Initialize CRC-32 to starting value
crc32 ← 0xFFFFFFFF
for each byte in data do
nLookupIndex ← (crc32 xor byte) and 0xFF;
crc32 ← (crc32 shr 8) xor CRCTable[nLookupIndex] // CRCTable is an array of 256 32-bit constants
// Finalize the CRC-32 value by inverting all the bits
crc32 ← crc32 xor 0xFFFFFFFF
return crc32
In C, the algorithm looks as such:
#include <stddef.h>   // size_t
#include <inttypes.h> // uint32_t, uint8_t
uint32_t CRC32(const uint8_t data[], size_t data_length) {
uint32_t crc32 = 0xFFFFFFFFu;
for (size_t i = 0; i < data_length; i++) {
const uint32_t lookupIndex = (crc32 ^ data[i]) & 0xff;
crc32 = (crc32 >> 8) ^ CRCTable[lookupIndex]; // CRCTable is an array of 256 32-bit constants
}
// Finalize the CRC-32 value by inverting all the bits
crc32 ^= 0xFFFFFFFFu;
return crc32;
}
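Neither the pseudocode nor the C fragment above shows how CRCTable is filled in. The following sketch (in Python, to match the earlier examples) precomputes the 256 entries for the reflected CRC-32 polynomial 0xEDB88320, the bit-reversed form of 0x04C11DB7, which is the parametrization that the shift-right, low-byte indexing above assumes:

def make_crc32_table():
    """Precompute the 256-entry lookup table for the reflected CRC-32 polynomial."""
    polynomial = 0xEDB88320  # bit-reversed form of 0x04C11DB7
    table = []
    for dividend in range(256):
        remainder = dividend
        for _ in range(8):  # process the byte one bit at a time
            if remainder & 1:
                remainder = (remainder >> 1) ^ polynomial
            else:
                remainder >>= 1
        table.append(remainder)
    return table

CRC_TABLE = make_crc32_table()
assert CRC_TABLE[0] == 0x00000000
assert CRC_TABLE[1] == 0x77073096  # first non-trivial entry of the standard table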
Mathematics
Mathematical analysis of this division-like process reveals how to select a divisor that guarantees good error-detection properties. In this analysis, the digits of the bit strings are taken as the coefficients of a polynomial in some variable x—coefficients that are elements of the finite field GF(2) (the integers modulo 2, i.e. either a zero or a one), instead of more familiar numbers. The set of binary polynomials is a mathematical ring.
Designing polynomials
The selection of the generator polynomial is the most important part of implementing the CRC algorithm. The polynomial must be chosen to maximize the error-detecting capabilities while minimizing overall collision probabilities.
The most important attribute of the polynomial is its length (the largest degree (exponent) of any one term in the polynomial, plus one), because of its direct influence on the length of the computed check value.
The most commonly used polynomial lengths are 9 bits (CRC-8), 17 bits (CRC-16), 33 bits (CRC-32), and 65 bits (CRC-64).
A CRC is called an n-bit CRC when its check value is n bits long. For a given n, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degree n, and hence n + 1 terms (the polynomial has a length of n + 1). The remainder has length n. The CRC has a name of the form CRC-n-XXX.
The design of the CRC polynomial depends on the maximum total length of the block to be protected (data + CRC bits), the desired error protection features, and the type of resources for implementing the CRC, as well as the desired performance. A common misconception is that the "best" CRC polynomials are derived from either irreducible polynomials or irreducible polynomials times the factor (x + 1), which adds to the code the ability to detect all errors affecting an odd number of bits. In reality, all the factors described above should enter into the selection of the polynomial and may lead to a reducible polynomial. However, choosing a reducible polynomial will result in a certain proportion of missed errors, due to the quotient ring having zero divisors.
The advantage of choosing a primitive polynomial as the generator for a CRC code is that the resulting code has maximal total block length in the sense that all 1-bit errors within that block length have different remainders (also called syndromes) and therefore, since the remainder is a linear function of the block, the code can detect all 2-bit errors within that block length. If r is the degree of the primitive generator polynomial, then the maximal total block length is 2^r − 1, and the associated code is able to detect any single-bit or double-bit errors. We can improve this situation. If we use the generator polynomial g(x) = p(x)(1 + x), where p(x) is a primitive polynomial of degree r − 1, then the maximal total block length is 2^(r−1) − 1, and the code is able to detect single, double, triple and any odd number of errors.
A polynomial that admits other factorizations may be chosen then so as to balance the maximal total blocklength with a desired error detection power. The BCH codes are a powerful class of such polynomials. They subsume the two examples above. Regardless of the reducibility properties of a generator polynomial of degree r, if it includes the "+1" term, the code will be able to detect error patterns that are confined to a window of r contiguous bits. These patterns are called "error bursts".
Specification
The concept of the CRC as an error-detecting code gets complicated when an implementer or standards committee uses it to design a practical system. Here are some of the complications:
Sometimes an implementation prefixes a fixed bit pattern to the bitstream to be checked. This is useful when clocking errors might insert 0-bits in front of a message, an alteration that would otherwise leave the check value unchanged.
Usually, but not always, an implementation appends n 0-bits (n being the size of the CRC) to the bitstream to be checked before the polynomial division occurs. Such appending is explicitly demonstrated in the Computation of CRC article. This has the convenience that the remainder of the original bitstream with the check value appended is exactly zero, so the CRC can be checked simply by performing the polynomial division on the received bitstream and comparing the remainder with zero. Due to the associative and commutative properties of the exclusive-or operation, practical table driven implementations can obtain a result numerically equivalent to zero-appending without explicitly appending any zeroes, by using an equivalent, faster algorithm that combines the message bitstream with the stream being shifted out of the CRC register.
Sometimes an implementation exclusive-ORs a fixed bit pattern into the remainder of the polynomial division.
Bit order: Some schemes view the low-order bit of each byte as "first", which then during polynomial division means "leftmost", which is contrary to our customary understanding of "low-order". This convention makes sense when serial-port transmissions are CRC-checked in hardware, because some widespread serial-port transmission conventions transmit bytes least-significant bit first.
Byte order: With multi-byte CRCs, there can be confusion over whether the byte transmitted first (or stored in the lowest-addressed byte of memory) is the least-significant byte (LSB) or the most-significant byte (MSB). For example, some 16-bit CRC schemes swap the bytes of the check value.
Omission of the high-order bit of the divisor polynomial: Since the high-order bit is always 1, and since an n-bit CRC must be defined by an ()-bit divisor which overflows an n-bit register, some writers assume that it is unnecessary to mention the divisor's high-order bit.
Omission of the low-order bit of the divisor polynomial: Since the low-order bit is always 1, authors such as Philip Koopman represent polynomials with their high-order bit intact, but without the low-order bit (the x⁰ or 1 term). This convention encodes the polynomial complete with its degree in one integer.
These complications mean that there are three common ways to express a polynomial as an integer: the first two, which are mirror images in binary, are the constants found in code; the third is the number found in Koopman's papers. In each case, one term is omitted. So, for example, the polynomial x⁴ + x + 1 may be transcribed as (see also the sketch after this list):
0x3 = 0b0011, representing x⁴ + (0x³ + 0x² + 1x + 1) (MSB-first code)
0xC = 0b1100, representing (1 + 1x + 0x² + 0x³) + x⁴ (LSB-first code)
0x9 = 0b1001, representing (1x⁴ + 0x³ + 0x² + 1x) + 1 (Koopman notation)
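The following small sketch (an illustration, not part of the original specification) derives the three encodings from the full 5-bit polynomial 0b10011, i.e. x⁴ + x + 1:

full = 0b10011  # x⁴ + x + 1, all five coefficients
n = 4           # degree of the polynomial, i.e. the width of the CRC

msb_first = full & ((1 << n) - 1)               # drop the x⁴ term -> 0x3
lsb_first = int(f"{msb_first:0{n}b}"[::-1], 2)  # bit-reverse within n bits -> 0xC
koopman = full >> 1                             # drop the 1 term, keep x⁴ -> 0x9

assert (msb_first, lsb_first, koopman) == (0x3, 0xC, 0x9)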
In the table below they are shown as:
Obfuscation
CRCs in proprietary protocols might be obfuscated by using a non-trivial initial value and a final XOR, but these techniques do not add cryptographic strength to the algorithm and can be reverse engineered using straightforward methods.
Standards and common use
Numerous varieties of cyclic redundancy checks have been incorporated into technical standards. By no means does one algorithm, or one of each degree, suit every purpose; Koopman and Chakravarty recommend selecting a polynomial according to the application requirements and the expected distribution of message lengths. The number of distinct CRCs in use has confused developers, a situation which authors have sought to address. There are three polynomials reported for CRC-12, twenty-two conflicting definitions of CRC-16, and seven of CRC-32.
The polynomials commonly applied are not the most efficient ones possible. Since 1993, Koopman, Castagnoli and others have surveyed the space of polynomials between 3 and 64 bits in size, finding examples that have much better performance (in terms of Hamming distance for a given message size) than the polynomials of earlier protocols, and publishing the best of these with the aim of improving the error detection capacity of future standards. In particular, iSCSI and SCTP have adopted one of the findings of this research, the CRC-32C (Castagnoli) polynomial.
The design of the 32-bit polynomial most commonly used by standards bodies, CRC-32-IEEE, was the result of a joint effort for the Rome Laboratory and the Air Force Electronic Systems Division by Joseph Hammond, James Brown and Shyan-Shiang Liu of the Georgia Institute of Technology and Kenneth Brayer of the Mitre Corporation. The earliest known appearances of the 32-bit polynomial were in their 1975 publications: Technical Report 2956 by Brayer for Mitre, published in January and released for public dissemination through DTIC in August, and Hammond, Brown and Liu's report for the Rome Laboratory, published in May. Both reports contained contributions from the other team. During December 1975, Brayer and Hammond presented their work in a paper at the IEEE National Telecommunications Conference: the IEEE CRC-32 polynomial is the generating polynomial of a Hamming code and was selected for its error detection performance. Even so, the Castagnoli CRC-32C polynomial used in iSCSI or SCTP matches its performance on messages from 58 bits to 131 kbits, and outperforms it in several size ranges including the two most common sizes of Internet packet. The ITU-T G.hn standard also uses CRC-32C to detect errors in the payload (although it uses CRC-16-CCITT for PHY headers).
CRC-32C computation is implemented in hardware as an operation (CRC32) of the SSE4.2 instruction set, first introduced in Intel processors' Nehalem microarchitecture. The ARM AArch64 architecture also provides hardware acceleration for both CRC-32 and CRC-32C operations.
Polynomial representations of cyclic redundancy checks
The table below lists only the polynomials of the various algorithms in use. Variations of a particular protocol can impose pre-inversion, post-inversion and reversed bit ordering as described above. For example, the CRC32 used in Gzip and Bzip2 uses the same polynomial, but Gzip employs reversed bit ordering, while Bzip2 does not.
Note that even parity polynomials in GF(2) with degree greater than 1 are never primitive. Even parity polynomials marked as primitive in this table represent a primitive polynomial multiplied by (x + 1). The most significant bit of a polynomial is always 1, and is not shown in the hex representations.
Implementations
Implementation of CRC32 in GNU Radio up to 3.6.1 (ca. 2012)
C class code for CRC checksum calculation with many different CRCs to choose from
CRC catalogues
Catalogue of parametrised CRC algorithms
CRC Polynomial Zoo
See also
Mathematics of cyclic redundancy checks
Computation of cyclic redundancy checks
List of hash functions
List of checksum algorithms
Information security
Simple file verification
LRC
References
Further reading
External links
Cyclic Redundancy Checks, MathPages, overview of error-detection of different polynomials
Algorithm 4 was used in Linux and Bzip2.
Slicing-by-4 and slicing-by-8 algorithms
Bitfilters
Theory, practice, hardware, and software with emphasis on CRC-32
Reverse-Engineering a CRC Algorithm
Includes links to PDFs giving 16- and 32-bit CRC Hamming distances
ISO/IEC 13239:2002: Information technology -- Telecommunications and information exchange between systems -- High-level data link control (HDLC) procedures
CRC32-Castagnoli Linux Library
Binary arithmetic
Finite fields
Polynomials
Wikipedia articles with ASCII art
Articles with example Python (programming language) code |
39070 | https://en.wikipedia.org/wiki/Maximilian%20I%2C%20Holy%20Roman%20Emperor | Maximilian I, Holy Roman Emperor | Maximilian I (22 March 1459 – 12 January 1519) was King of the Romans from 1486 and Holy Roman Emperor from 1508 until his death. He was never crowned by the pope, as the journey to Rome was blocked by the Venetians. He was instead proclaimed emperor elect by Pope Julius II at Trent, thus breaking the long tradition of requiring a Papal coronation for the adoption of the Imperial title. Maximilian was the son of Frederick III, Holy Roman Emperor, and Eleanor of Portugal. He ruled jointly with his father for the last ten years of the latter's reign, from around 1483 until his father's death in 1493.
Maximilian expanded the influence of the House of Habsburg through war and his marriage in 1477 to Mary of Burgundy, the ruler of the Burgundian State, heir of Charles the Bold, though he also lost his family's original lands in today's Switzerland to the Swiss Confederacy. Through the marriage of his son Philip the Handsome to the eventual queen Joanna of Castile in 1496, Maximilian helped to establish the Habsburg dynasty in Spain, which allowed his grandson Charles to hold the thrones of both Castile and Aragon. The historian Thomas A. Brady Jr. describes him as "the first Holy Roman Emperor in 250 years who ruled as well as reigned" and also, the "ablest royal warlord of his generation."
Nicknamed "Coeur d’acier" (“Heart of steel”) by Olivier de la Marche and later historians (either as praise for his courage and martial qualities or reproach for his ruthlessness as a warlike ruler), Maximilian has entered the public consciousness as "the last knight" (der letzte Ritter), especially since the eponymous poem by Anastasius Grün was published (although the nickname likely existed even in Maximilian's lifetime). Scholarly debates still discuss whether he was truly the last knight (either as an idealized medieval ruler leading people on horseback, or a Don Quixote-type dreamer and misadventurer), or the first Renaissance prince — an amoral Machiavellian politician who carried his family "to the European pinnacle of dynastic power" largely on the back of loans. Historians of the second half of the nineteenth century like Leopold von Ranke tended to criticize Maximilian for putting the interest of his dynasty above that of Germany, hampering the nation's unification process. Ever since Hermann Wiesflecker's Kaiser Maximilian I. Das Reich, Österreich und Europa an der Wende zur Neuzeit (1971-1986) became the standard work, a much more positive image of the emperor has emerged. He is seen as an essentially modern, innovative ruler who carried out important reforms and promoted significant cultural achievements, even if the financial price weighed hard on the Austrians and his military expansion caused the deaths and sufferings of tens of thousands of people.
Through an "unprecedented" image-building program, with the help of many notable scholars and artists, in his lifetime, the emperor – "the promoter, coordinator, and prime mover, an artistic impresario and entrepreneur with seemingly limitless energy and enthusiasm and an unfailing eye for detail" – had built for himself "a virtual royal self" of a quality that historians call "unmatched" or "hitherto unimagined". To this image, new layers have been added by the works of later artists in the centuries following his death, both as continuation of deliberately crafted images developed by his program as well as development of spontaneous sources and exploration of actual historical events, creating what Elaine Tennant dubs the "Maximilian industry".
Background and childhood
Maximilian was born at Wiener Neustadt on 22 March 1459. His father, Frederick III, Holy Roman Emperor, named him for an obscure saint, Maximilian of Tebessa, who Frederick believed had once warned him of imminent peril in a dream. In his infancy, he and his parents were besieged in Vienna by Albert of Austria. One source relates that, during the siege's bleakest days, the young prince wandered about the castle garrison, begging the servants and men-at-arms for bits of bread. He was the favourite child of his mother, whose personality was a contrast to his father's (although there seemed to be communication problems between mother and son, as she spoke Portuguese). Reportedly she told Maximilian that, "If I had known, my son, that you would become like your father, I would have regretted having borne you for the throne." Her early death pushed him even more towards a man's world, where one grew up first as a warrior rather than a politician.
Despite the efforts of his father Frederick and his tutor Peter Engelbrecht (whom Maximilian held in contempt all his life because of his violent teaching methods which, according to Cuspinianus, only made Maximilian hate science), Maximilian became an indifferent, at times belligerent student, who much preferred physical activities to learning (he would later rediscover the love of science and culture on his own terms though, especially during his time in Burgundy, under the influence of Mary of Burgundy). Although the two remained on good terms overall and the emperor encouraged Maximilian's interest in weapons and the hunt, as well as let him attend important meetings, Frederick was horrified by his only surviving son and heir's overzealousness in chivalric contests, extravagance, and especially a heavy tendency towards wine, feasts and young women, which became evident during their trips in 1473–74. Even though he was still very young, the prince's skills and physical attractiveness made him the center of attention everywhere he went. Although Frederick had forbidden the princes of the Empire from fighting with Maximilian in tournaments, Maximilian gave himself the necessary permission at the first chance he got. Frederick did not allow him to participate in the 1474 war against Burgundy, though, and placed him under the care of the Bishop of Augsburg instead.
Charles the Bold was the chief political opponent of Maximilian's father Frederick III. Frederick was concerned about Burgundy's expansive tendencies on the western border of his Holy Roman Empire, and, to forestall military conflict, he attempted to secure the marriage of Charles' only daughter, Mary of Burgundy, to his son Maximilian. After the Siege of Neuss (1474–75), he was successful.
Perhaps as preparation for his task in the Netherlands, in 1476, at the age of 17, Maximilian apparently commanded a military campaign against Hungary in the name of his father – the first actual battlefield experience of his life (command responsibility was likely shared with more experienced generals though).
The wedding between Maximilian and Mary took place on 19 August 1477.
Reign in Burgundy and the Netherlands
Maximilian's wife had inherited the large Burgundian domains in France and the Low Countries upon her father's death in the Battle of Nancy on 5 January 1477.
The Duchy of Burgundy was also claimed by the French crown under Salic Law, with Louis XI of France vigorously contesting the Habsburg claim to the Burgundian inheritance by means of military force. Maximilian at once undertook the defence of his wife's dominions. Without support from the Empire and with an empty treasury left by Charles the Bold's campaigns (Mary had to pawn her jewels to obtain loans), he carried out a campaign against the French during 1478–1479 and reconquered Le Quesnoy, Conde and Antoing. He defeated the French forces at the Battle of Guinegate (1479), the modern Enguinegatte, on 7 August 1479. Despite winning, Maximilian had to abandon the siege of Thérouanne and disband his army, either because the Netherlanders did not want him to become too strong or because his treasury was empty. The battle was an important milestone in military history, though: the Burgundian pikemen were the precursors of the Landsknechte, while the French side derived the momentum for military reform from their loss.
According to some, Maximilian and Mary's wedding contract stipulated that their children would succeed them but that the couple could not be each other's heirs. Mary tried to bypass this rule with a promise to transfer territories as a gift in case of her death, but her plans were confounded. After Mary's death in a riding accident on 27 March 1482 near the Wijnendale Castle, Maximilian's aim was now to secure the inheritance to his and Mary's son, Philip the Handsome. According to Haemers and Sutch, the original marriage contract stipulated that Maximilian could not inherit her Burgundian lands if they had children.
The Guinegate victory made Maximilian popular, but as an inexperienced ruler, he hurt himself politically by trying to centralize authority without respecting traditional rights and consulting relevant political bodies. The Belgian historian Eugène Duchesne comments that these years were among the saddest and most turbulent in the history of the country, and despite his later great imperial career, Maximilian unfortunately could never compensate for the mistakes he made as regent in this period. Some of the Netherlander provinces were hostile to Maximilian, and, in 1482, they signed a treaty with Louis XI in Arras that forced Maximilian to give up Franche-Comté and Artois to the French crown. They openly rebelled twice in the period 1482–1492, attempting to regain the autonomy they had enjoyed under Mary. Flemish rebels managed to capture Philip and even Maximilian himself, but they released Maximilian when Frederick III intervened. In 1489, as he turned his attention to his hereditary lands, he left the Low Countries in the hands of Albert of Saxony, who proved to be an excellent choice, as he was less emotionally committed to the Low Countries and more flexible as a politician than Maximilian, while also being a capable general. By 1492, the rebellions were completely suppressed. Maximilian revoked the Great Privilege and established a strong ducal monarchy undisturbed by particularism. But he would not reintroduce Charles the Bold's centralizing ordinances. Since 1489 (after his departure), the government under Albert of Saxony had made more of an effort to consult representative institutions and showed more restraint in subjugating recalcitrant territories. Notables who had previously supported rebellions returned to city administrations. The Estates General continued to develop as a regular meeting place of the central government. The harsh suppression of the rebellions did have a unifying effect, in that provinces stopped behaving like separate entities each supporting a different lord. Helmut Koenigsberger opines that it was not the erratic leadership of Maximilian, who was brave but hardly understood the Netherlands, but the Estates' desire for the survival of the country that made the Burgundian monarchy survive. Jean Berenger and C.A. Simpson argue that Maximilian, as a gifted military champion and organizer, did save the Netherlands from France, although the conflict between the Estates and his personal ambitions caused a catastrophic situation in the short term. Peter Spufford opines that the invasion was prevented by a combination of the Estates and Maximilian, although the cost of war, Maximilian's spendthrift liberality and the interests enforced by his German bankers did cause huge expenditure while income was falling. Jelle Haemers comments that the Estates stopped their support for the young and ambitious impresario (director) of war (who took personal control of both the military and financial details during the war) because they knew that after Guinegate, the nature of the war was not defensive anymore. Maximilian and his followers had managed to achieve remarkable success in stabilizing the situation, though, and a stalemate was kept in Ghent as well as in Bruges, before the tragic death of Mary in 1482 completely turned the political landscape in the whole country upside down.
According to Haemers, while Willem Zoete's indictment of Maximilian's government was a one-sided picture that exaggerated the negative points and the Regency Council displayed many of the same problems, Maximilian and his followers could have been more prudent when dealing with the complaints of their opponents before matters became bigger.
While it has been suggested that Maximilian displayed a class-based mentality that favoured the aristocrats (a modern historian who shares this viewpoint is Koenigsberger), recent studies suggest that, as evidenced by the court ordinance of 1482 (at this point, before Mary's death, threats to his rule seemed to have been eliminated) among others, he sought to promote "parvenus" who were beholden to himself (often either functionaries who had risen under Charles the Bold and then proved loyalty to Maximilian, or representatives of the mercantile elites), and at an alarming speed for the traditional elites. After the rebellions, concerning the aristocracy, although Maximilian punished few with death (unlike what he himself later described in Theuerdank), their properties were largely confiscated and they were replaced with a new elite class loyal to the Habsburgs – among whom there were noblemen who had been part of the traditional high nobility but elevated to supranational importance only in this period. The most important of these were John III and Frederik of Egmont, Engelbrecht II of Nassau, Henry of Witthem and the brothers of Glymes–Bergen.
In early 1486, he retook Mortaigne, l'Ecluse, Honnecourt and even Thérouanne, but the same thing as in 1479 happened – he lacked the financial resources to exploit and keep his gains. Only in 1492, with a stable internal situation, was he able to reconquer and keep Franche-Comté and Arras on the pretext that the French had repudiated his daughter. In 1493, Maximilian and Charles VIII of France signed the Treaty of Senlis, with which Artois and Franche-Comté returned to Burgundian rule while Picardy was confirmed as French possession. The French also continued to keep the Duchy of Burgundy. Thus a large part of the Netherlands (known as the Seventeen Provinces) stayed in the Habsburg patrimony.
On 8 January 1488, using a similar 1373 French ordinance as the model, together with Philip, he issued the Ordinance of Admiralty, which organized the Admiralty as a state institution and strove to centralize maritime authority (this was a departure from the policy of Philip the Good, whose 1458 ordinance tried to restore maritime order by decentralizing power). This was the beginning of the Dutch navy, although initially the policy faced opposition and an unfavourable political climate, which only improved with the appointment of Philip of Burgundy-Beveren in 1491. A permanent navy only took shape after 1555 under the governorship of his granddaughter Mary of Hungary.
In 1493, Frederick III died, and Maximilian I thus became de facto leader of the Holy Roman Empire. He decided to transfer power in the Low Countries to the 15-year-old Philip. His time in the Low Countries had left him with such emotional scars that, except for rare, necessary occasions, he would never return to the land again after gaining control. When the Estates sent a delegation to offer him the regency after Philip's death in 1506, he evaded them for months.
As suzerain, Maximilian continued to involve himself with the Low Countries from afar. His son's and daughter's governments tried to maintain a compromise between the states and the Empire. Philip, in particular, sought to maintain an independent Burgundian policy, which sometimes caused disagreements with his father. As Philip preferred to maintain peace and economic development for his land, Maximilian was left fighting Charles of Egmond over Guelders with his own resources. At one point, Philip let French troops supporting Guelders' resistance to his rule pass through his own land. Only at the end of his reign did Philip decide to deal with this threat together with his father. By this time, Guelders had been affected by the continuous state of war and other problems. The duke of Cleves and the bishop of Utrecht, hoping to share spoils, gave Philip aid. Maximilian invested his own son with Guelders and Zutphen. Within months and with his father's skilled use of field artillery, Philip conquered the whole land and Charles of Egmond was forced to prostrate himself in front of Philip. But as Charles later escaped and Philip was in haste to make his fatal 1506 journey to Spain, troubles would soon arise again, leaving Margaret to deal with the problems. By this time, her father was less inclined to help though. He suggested to her that the Estates in the Low Countries should defend themselves, forcing her to sign the 1513 treaty with Charles. Habsburg Netherlands would only be able to incorporate Guelders and Zutphen under Charles V.
Following Margaret's strategy of defending the Low Countries with foreign armies, in 1513, at the head of Henry VIII's army, Maximilian gained a victory against the French at the Battle of the Spurs, at little cost to himself or his daughter (in fact, according to Margaret, the Low Countries got a profit of one million in gold from supplying the English army). For the sake of his grandson Charles's Burgundian lands, he ordered Thérouanne's walls to be demolished (the stronghold had often served as a backdoor for French interference in the Low Countries).
Reign in the Holy Roman Empire
Recapture of Austria and expedition to Hungary
Maximilian was elected King of the Romans on 16 February 1486 in Frankfurt-am-Main at his father's initiative and crowned on 9 April 1486 in Aachen. Much of the Austrian territories and Vienna were under the rule of king Matthias Corvinus of Hungary, as a result of the Austrian–Hungarian War (1477–1488). Maximilian was now a king without lands. After the death of king Matthias, from July 1490, Maximilian began a series of short sieges that reconquered cities and fortresses that his father had lost in Austria. Maximilian entered Vienna, already evacuated by the Hungarians, without a siege in August 1490. He was injured while attacking the citadel, guarded by a garrison of 400 Hungarian troops who twice repelled his forces, but after some days they surrendered. With money from Innsbruck and southern German towns, he raised enough cavalry and Landsknechte to campaign into Hungary itself. Despite the Hungarian gentry's hostility to the Habsburgs, he managed to gain many supporters, including several of Corvinus's former supporters. One of them, Jakob Székely, handed over the Styrian castles to him. He claimed his status as King of Hungary, demanding allegiance through Stephen of Moldavia. In seven weeks, they conquered a quarter of Hungary. His mercenaries committed the atrocity of totally sacking Székesfehérvár, the country's main fortress. When the winter frost set in, though, the troops refused to continue the war and demanded that Maximilian double their pay, which he could not afford. The revolt turned the situation in favour of the Jagiellonian forces. Maximilian was forced to return. He depended on his father and the territorial estates for financial support. Soon he reconquered Lower and Inner Austria for his father, who returned and settled at Linz. Worrying about his son's adventurous tendencies, Frederick decided to starve him financially, though.
Beatrice of Naples (1457–1508), Matthias Corvinus's widow, initially supported Maximilian out of hope that he would marry her, but Maximilian did not want this liaison. The Hungarian magnates found Maximilian impressive, but they wanted a king they could dominate. The crown of Hungary thus fell to King Vladislaus II, who was deemed weaker in personality and also agreed to marry Beatrice. Tamás Bakócz, the Hungarian chancellor, allied himself with Maximilian and helped him to circumvent the 1505 Diet, which declared that no foreigner could be elected as King of Hungary. In 1491, they signed the peace treaty of Pressburg, which provided that Maximilian recognized Vladislaus as King of Hungary, but the Habsburgs would inherit the throne on the extinction of Vladislaus's male line and the Austrian side also received 100,000 golden florins as war reparations. It was with Maximilian that the Croatians began to harbour a connection to the House of Habsburg. Except for the two most powerful noblemen (Duke Ivanis Corvinus and Bernardin Frankopan), the Croatian nobility wanted him as King. Worrying that a protracted, multi-fronted war would leave him overextended, though, Maximilian withdrew from Croatia (he had conquered the whole northern part of the country previously) and accepted the treaty with the Jagiellons.
In addition, the County of Tyrol and the Duchy of Bavaria went to war in the late 15th century. Bavaria demanded money from Tyrol that had been loaned on the collateral of Tyrolean lands. In 1490, the two sides demanded that Maximilian I step in to mediate the dispute. His Habsburg cousin, the childless Archduke Sigismund, was negotiating to sell Tyrol to their Wittelsbach rivals rather than let Emperor Frederick inherit it. Maximilian's charm and tact, though, led to a reconciliation and a reunited dynastic rule in 1490. Because Tyrol had no law code at this time, the nobility freely expropriated money from the populace, which caused the royal palace in Innsbruck to fester with corruption. After taking control, Maximilian instituted immediate financial reform. Gaining control of Tyrol for the Habsburgs was of strategic importance because it linked the Swiss Confederacy to the Habsburg-controlled Austrian lands, which facilitated some imperial geographic continuity.
Maximilian became ruler of the Holy Roman Empire upon the death of his father in 1493.
Italian and Swiss wars
As the Treaty of Senlis had resolved French differences with the Holy Roman Empire, King Louis XII of France had secured borders in the north and turned his attention to Italy, where he made claims for the Duchy of Milan. In 1499–1500 he conquered it and drove the Sforza regent Lodovico il Moro into exile. This brought him into a potential conflict with Maximilian, who on 16 March 1494 had married Bianca Maria Sforza, a daughter of Galeazzo Maria Sforza, duke of Milan. However, Maximilian was unable to hinder the French from taking over Milan. The prolonged Italian Wars resulted in Maximilian joining the Holy League to counter the French. His campaigns in Italy generally were not successful, and his progress there was quickly checked. Maximilian's Italian campaigns tend to be criticized for being wasteful and gaining him little. Despite the emperor's work in enhancing his army technically and organization-wise, due to financial difficulties, the forces he could muster were always too small to make a decisive difference. In Italy, he gained the derisive nickname of "Massimiliano di pochi denari" (Maximilian the Moneyless). One particularly humiliating episode happened in 1508: with a force mustered largely from the hereditary lands and with limited resources, the emperor decided to attack Venice. The diversionary force under Sixt Trautson was routed by Bartolomeo d'Alviano (Sixt Trautson himself was among the fallen), while Maximilian's own advance was blocked by the main Venetian force under Niccolò di Pitigliano and a French army under Alessandro Trivulzio. Bartolomeo d'Alviano then pushed into the Imperial territory, seizing Gorizia and Trieste, forcing Maximilian to sign a very unfavourable truce. Afterwards, he formed the League of Cambrai together with Spain, France and Pope Julius II and won back the territories he had conceded and some Venetian possessions. Most of the Slovene-inhabited areas were transferred to the Habsburgs. But atrocities and expenses for war devastated Austria and Carniola. Lack of financial means meant that he depended on allies' resources, and just as in the Low Countries, he sometimes practically functioned as a condottiero. When Schiner suggested that they should let war feed war, though, he did not agree, or was not ruthless enough to do so. He acknowledged French control of Milan in 1515.
The situation in Italy was not the only problem Maximilian had at the time. The Swiss won a decisive victory against the Empire in the Battle of Dornach on 22 July 1499. Maximilian had no choice but to agree to a peace treaty signed on 22 September 1499 in Basel that granted the Swiss Confederacy independence from the Holy Roman Empire.
Jewish policy
Jewish policy under Maximilian fluctuated greatly, usually influenced by financial considerations and the emperor's vacillating attitude when facing opposing views. In 1496, Maximilian issued a decree which expelled all Jews from Styria and Wiener Neustadt. Between 1494 and 1510, he authorized no fewer than thirteen expulsions of Jews in return for sizeable fiscal compensation from local governments (the expelled Jews were allowed to resettle in Lower Austria; Buttaroni comments that this inconsistency showed that even Maximilian himself did not believe his expulsion decision was just). After 1510, though, this happened only once, and he showed an unusually resolute attitude in resisting a campaign to expel Jews from Regensburg. David Price comments that during the first seventeen years of his reign, he was a great threat to the Jews, but after 1510, even if his attitude was still exploitative, his policy gradually changed. A factor that probably played a role in the change was Maximilian's success in expanding imperial taxation over German Jewry: at this point, he probably considered the possibility of generating tax money from stable Jewish communities, instead of temporary financial compensations from local jurisdictions who sought to expel Jews.
In 1509, relying on the influence of Kunigunde, Maximilian's pious sister, and the Cologne Dominicans, the anti-Jewish agitator Johannes Pfefferkorn was authorized by Maximilian to confiscate all offending Jewish books (including prayer books), except the Bible. The confiscations took place in Frankfurt, Bingen, Mainz and other German cities. Responding to the order, the archbishop of Mainz, the city council of Frankfurt and various German princes tried to intervene in defense of the Jews. Maximilian consequently ordered the confiscated books to be returned. On 23 May 1510, though, influenced by a supposed "host desecration" and blood libel in Brandenburg, as well as pressure from Kunigunde, he ordered the creation of an investigating commission and asked for expert opinions from German universities and scholars. The prominent humanist Johann Reuchlin argued strongly in defense of the Jewish books, especially the Talmud. Reuchlin's arguments seem to have left an impression on the emperor, who followed his advice against the recommendation of his own commission and gradually developed an intellectual interest in the Talmud and other Jewish books. Maximilian later urged the Hebraist Petrus Galatinus to defend Reuchlin's position. Galatinus dedicated his work De Arcanis Catholicae Veritatis, which provided 'a literary "threshold" where Jews and gentiles might meet', to the emperor. In 1514, Maximilian appointed Paulus Ricius, a Jew who had converted to Christianity, as his personal physician, though he was more interested in Ricius's Hebrew skills than in his medical abilities. In 1515, he reminded his treasurer Jakob Villinger that Ricius had been admitted for the purpose of translating the Talmud into Latin, and urged Villinger to keep an eye on him. Perhaps overwhelmed by the emperor's request, Ricius only managed to translate two of the sixty-three Mishnah tractates before the emperor's death. He did, however, publish a translation of Joseph Gikatilla's Kabbalistic work The Gates of Light, which was dedicated to Maximilian.
Reforms
Within the Holy Roman Empire, there was also a consensus that deep reforms were needed to preserve the unity of the Empire. The reforms, which had been delayed for a long time, were launched in the 1495 Reichstag at Worms. A new organ was introduced, the Reichskammergericht, that was to be largely independent from the Emperor. A new tax was launched to finance the Empire's affairs (above all military campaigns), the Gemeine Pfennig, although this would only be collected under Charles V and Ferdinand I, and not in full. To create a rival for the Reichskammergericht, Maximilian established the Reichshofrat, which had its seat in Vienna. Unlike the Reichskammergericht, the Reichshofrat looked into criminal matters and even gave the emperors the means to depose rulers who did not live up to expectations. Pavlac and Lott note that, during Maximilian's reign, this council was not popular, though. According to Barbara Stollberg-Rilinger, however, throughout the early modern period the Aulic Council remained by far the faster and more efficient of the two courts, while the Reichskammergericht was often torn by matters related to confessional alliance. Around 1497–1498, as part of his administrative reforms, Maximilian restructured his Privy Council (Geheimer Rat), a decision which still provokes much scholarly discussion today. Apart from balancing the Reichskammergericht with the Reichshofrat, this act of restructuring seemed to suggest that, as Westphal writes, quoting Ortlieb, the "imperial ruler – independent of the existence of a supreme court – remained the contact person for hard pressed subjects in legal disputes as well, so that a special agency to deal with these matters could appear sensible" (as also shown by the large number of supplications he received).
In 1500, as Maximilian urgently needed assistance for his military plans, he agreed to establish an organ called the Reichsregiment (central imperial government, consisting of twenty members including the Electors, with the Emperor or his representative as its chairman). First organized in 1501 in Nuremberg, it consisted of the deputies of the Emperor, local rulers, commoners, and the prince-electors of the Holy Roman Empire. Maximilian resented the new organization as it weakened his powers, and the Estates failed to support it. The new organ proved politically weak, and its power returned to Maximilian in 1502.
According to Thomas Brady Jr. and Jan-Dirk Müller, the most important governmental changes targeted the heart of the regime: the chancery. Early in Maximilian's reign, the Court Chancery at Innsbruck competed with the Imperial Chancery (which was under the elector-archbishop of Mainz, the senior Imperial chancellor). By referring political matters in Tyrol and Austria, as well as Imperial problems, to the Court Chancery, Maximilian gradually centralized its authority. The two chanceries were combined in 1502, and Jan-Dirk Müller opines that from that year this chancery became the decisive government institution. In 1496, the emperor created a general treasury (Hofkammer) in Innsbruck, which became responsible for all the hereditary lands. The chamber of accounts (Raitkammer) at Vienna was made subordinate to this body. Under Paul von Liechtenstein, the Hofkammer was entrusted not only with the affairs of the hereditary lands, but with Maximilian's affairs as German king too.
Historian Joachim Whaley points out that there are usually two opposing views of Maximilian's rulership: one side is represented by the works of nineteenth-century historians such as Heinrich Ullmann or Leopold von Ranke, who criticize him for selfishly exploiting the German nation and putting the interests of his dynasty above those of the nation, thus impeding the unification process; the more recent side is represented by Hermann Wiesflecker's biography of 1971–86, which praises him as "a talented and successful ruler, notable not only for his Realpolitik but also for his cultural activities generally and for his literary and artistic patronage in particular".
According to Brady Jr., Ranke is right that Berthold von Henneberg and other princes played the leading role in presenting the proposals for creating institutions (which would also place power in the hands of the princes) in 1495. However, what Maximilian opposed was not reform per se. He generally shared their sentiments regarding ending feuds, sounder administrative procedures, better record-keeping, qualifications for offices and so on. Responding to the proposal that an Imperial Council (the later Reichsregiment) should be created, he agreed and welcomed the participation of the Estates, but insisted that he alone should appoint its members and that the council should function only during his campaigns. He supported modernizing reforms (which he himself pioneered in his Austrian lands), but also wanted to tie them to his personal control, above all through permanent taxation, which the Estates consistently opposed. In 1504, when he was strong enough to propose his own ideas for such a Council, the cowed Estates tried to resist. Even at his strongest point, though, he failed to find a solution for the common tax matter, which led to disasters in Italy later. Meanwhile, he explored Austria's potential as a base for Imperial power and built his government largely with officials drawn from the lower aristocracy and burghers of Southern Germany. Whaley notes that the real foundation of his Imperial power lay with his networks of allies and clients, especially the less powerful Estates, who helped him to recover his strength in 1502 – his first reform proposals as King of the Romans in 1486 had been about the creation of a network of regional unions. According to Whaley, "More systematically than any predecessor, Maximilian exploited the potential of regional leagues and unions to extend imperial influence and to create the possibility of imperial government in the Reich." To the Empire, the mechanisms involving such regional institutions bolstered the Perpetual Land Peace (Ewiger Landfriede) declared in 1495, as well as the creation, between 1500 and 1512, of the Reichskreise (Imperial Circles, which would serve to organize imperial armies, collect taxes and enforce the orders of the imperial institutions; there were six at first, and in 1512 the number increased to ten), although these were only fully functional some decades later. While Brady describes Maximilian's thinking as "dynastic and early modern", Heinz Angermeier (also focusing on his intentions at the 1495 Diet) writes that for Maximilian, "the first politician on the German throne", dynastic interests and imperial politics involved no contradiction. Rather, the alliance with Spain, imperial prerogatives, the anti-Ottoman agenda, European leadership and inner politics were all tied together. In Austria, Maximilian defined two administrative units: Lower Austria and Upper Austria (Further Austria was included in Upper Austria).
Another development arising from the reform was that, amidst the prolonged struggles between the monarchical-centralism of the emperor and the estates-based federalism of the princes, the Reichstag (Imperial Diet) became the all-important political forum and the supreme legal and constitutional institution (without any declared legal basis or inaugural act), which would act as a guarantee for the preservation of the Empire in the long run.
Ultimately, the results of the reform movement presided over by Maximilian, as embodied in the newly formed structures as well as the general framework (functioning as a constitutional framework), were a compromise between emperor and estates, who more or less shared a common cause but had separate interests. Although the system of institutions that arose from this was not complete, a flexible, adaptive problem-solving mechanism for the Empire was formed. Stollberg-Rilinger also links the development of the reform to the concentration of supranational power in the Habsburgs' hands, which manifested itself in the successful dynastic marriages of Maximilian and his descendants (and the successful defense of those lands, notably the rich Low Countries) as well as Maximilian's development of a revolutionary post system that helped the Habsburgs to maintain control of their territories.
According to Whaley, if Maximilian ever saw Germany merely as a source of income and soldiers, he failed miserably in extracting both. His hereditary lands and other sources always contributed much more (the Estates gave him the equivalent of 50,000 gulden per year, lower even than the taxes paid by Jews in both the Reich and the hereditary lands, while Austria contributed 500,000 to 1,000,000 gulden per year). On the other hand, the attempts he made at building the imperial system alone show that he did consider the German lands "a real sphere of government in which aspirations to royal rule were actively and purposefully pursued." Whaley notes that, despite struggles, what emerged at the end of Maximilian's rule was a strengthened monarchy and not an oligarchy of princes. If he was usually weak when trying to act as a monarch and using imperial institutions like the Reichstag, Maximilian's position was often strong when acting as a neutral overlord and relying on regional leagues of weaker principalities such as the Swabian League, as shown in his ability to call on money and soldiers to mediate the Bavarian dispute in 1504, after which he gained significant territories in Alsace, Swabia and Tyrol. His fiscal reform in his hereditary lands provided a model for other German princes. Benjamin Curtis opines that while Maximilian was not able to fully create a common government for his lands (although the chancellery and court council were able to coordinate affairs across the realms), he strengthened key administrative functions in Austria and created central offices to deal with financial, political and judicial matters - these offices replaced the feudal system and became representative of a more modern system administered by professionalized officials. After two decades of reforms, the emperor retained his position as first among equals, while the empire gained common institutions through which the emperor shared power with the estates.
In 1508, Maximilian, with the assent of Pope Julius II, took the title Erwählter Römischer Kaiser ("Elected Roman Emperor"), thus ending the centuries-old custom that the Holy Roman Emperor had to be crowned by the Pope.
At the 1495 Diet of Worms, the Reception of Roman Law was accelerated and formalized. Roman Law was made binding in German courts, except where it was contrary to local statutes. In practice, it became the basic law throughout Germany, displacing Germanic local law to a large extent, although Germanic law remained operative in the lower courts. Besides the desire to achieve legal unity and other factors, the adoption also highlighted the continuity between the ancient Roman Empire and the Holy Roman Empire. To realize his resolve to reform and unify the legal system, the emperor frequently intervened personally in local legal matters, overriding local charters and customs. This practice was often met with irony and scorn from local councils, who wanted to protect local codes. Maximilian had a general reputation for justice and clemency, but could occasionally act in a violent and resentful manner if personally affronted.
In 1499, as the ruler of Tyrol, he introduced the Maximilianische Halsgerichtsordnung (the Penal Code of Maximilian). This was the first codified penal law in the German-speaking world. The law attempted to introduce regularity into the disparate contemporary practices of the courts. It would form part of the basis for the Constitutio Criminalis Carolina established under Charles V in 1530. Regarding the use of torture, the court had to decide whether someone should be tortured; if such a decision was made, three council members and a clerk had to be present and observe whether a confession was made only because of the fear or pain of torture, or whether another person would be harmed by it.
During the Austrian–Hungarian war (1477–1488), Maximilian's father Frederick III issued the first modern regulations to strengthen military discipline. In 1508, using this ordinance as the basis, Maximilian devised the first military code ("Articles"). This code included 23 articles. The first five articles prescribed total obedience to imperial authority. Article 7 established the rules of conduct in camps. Article 13 exempted churches from billeting, while Article 14 forbade violence against civilians: "You shall swear that you will not harm any pregnant women, widows and orphans, priests, honest maidens and mothers, under the fear of punishment for perjury and death". These measures, which indicated the early development of a "military revolution" in European laws, had a tradition in the Roman concept of a just war and in the ideas of sixteenth-century scholars, who developed this ancient doctrine around the main thesis that war was a matter between two armies and that civilians (especially women, children and old people) should therefore be given immunity. The code would be the basis for further ordinances by Charles V and the new "Articles" of Maximilian II (1527–1576), which became the universal military code for the whole Holy Roman Empire until 1642.
The legal reform seriously weakened the ancient Vehmic court (Vehmgericht, or Secret Tribunal of Westphalia, traditionally held to be instituted by Charlemagne but this theory is now considered unlikely), although it would not be abolished completely until 1811 (when it was abolished under the order of Jérôme Bonaparte).
In 1518, after a general diet of all Habsburg hereditary lands, the emperor issued the Innsbrucker Libell, which set out the general defence order (Verteidigungsordnung) of the Austrian provinces and "gathered together all the elements that had appeared and developed over the preceding centuries." The provincial army, based on noble cavalry, was for defence only; bonded labourers were conscripted using a proportional conscription system; and the upper and lower Austrian provinces agreed on a mutual defence pact in which they would form a joint command structure if either were attacked. The military system and other reforms were threatened after Maximilian's death but would be restored and reorganized later under Ferdinand I.
According to Brady Jr., Maximilian was no reformer of the church though. Personally pious, he was also a practical caesaropapist who was only interested in the ecclesiastical organization as far as reforms could bring him political and fiscal advantages.
Finance and economy
Maximilian was always troubled by financial shortcomings; his income never seemed to be enough to sustain his large-scale goals and policies. For this reason he was forced to take substantial credits from Upper German banking families, especially the Gossembrot, Baumgarten, Fugger and Welser families. Jörg Baumgarten even served as Maximilian's financial advisor. The connection between the emperor and the banking families in Augsburg was so widely known that Francis I of France derisively nicknamed him "the Mayor of Augsburg" (another story recounts that a French courtier called him the alderman of Augsburg, to which Louis XII replied: "Yes, but every time that this alderman rings the tocsin from his belfry, he makes all France tremble.", referring to Maximilian's military ability). Around 70 percent of his income went to wars (and by the 1510s, he was waging wars on almost all sides of his borders). At the end of Maximilian's rule, the Habsburgs' mountain of debt totalled six to six and a half million gulden, depending on the source. By 1531, the remaining debt was estimated at 400,000 gulden (about 282,669 Spanish ducats). Over his entire reign, he had spent around 25 million gulden, much of it contributed by his most loyal subjects – the Tyrolers. The historian Thomas Brady comments: "The best that can be said of his financial practices is that he borrowed democratically from rich and poor alike and defaulted with the same even-handedness". By comparison, when he abdicated in 1556, Charles V left Philip a total debt of 36 million ducats (equal to the income from Spanish America for his entire reign), while Ferdinand I left a debt of 12.5 million gulden when he died in 1564.
The economy and economic policies of Maximilian's reign are a relatively unexplored topic, according to Benecke.
Overall, according to Whaley, "The reign of Maximilian I saw recovery and growth but also growing tension. This created both winners and losers", although Whaley opines that this is no reason to expect a revolutionary explosion (in connection with Luther and the Reformation). Whaley points out, though, that because Maximilian and Charles V tried to promote the interests of the Netherlands, the Hanseatic League was negatively affected after 1500 and its growth relative to England and the Netherlands declined.
In the Low Countries, during his regency, in order to get more money to pay for his campaigns, he resorted to debasing coins in the Burgundian mints, causing further conflicts with the interests of the Estates and the merchant class.
In Austria, although it was never enough for his needs, his management of mines and salt works proved efficient, with a marked increase in revenue: fine silver production in Schwaz rose from 2,800 kg in 1470 to 14,000 kg in 1516. Benecke regards Maximilian as a ruthless, exploitative businessman, while Holleger sees him as a clear-headed manager capable of sober cost-benefit analysis. Ultimately, he had to mortgage these properties to the Fuggers to obtain quick cash, and the financial price would ultimately fall on the Austrian population. Fichtner states that Maximilian's pan-European vision was very expensive, and his financial practices antagonized his subjects both high and low in Burgundy, Austria and Germany (who tried to temper his ambitions, although they never came to hate the charismatic ruler personally); yet this was still modest in comparison with what was about to come, and the Ottoman threat gave the Austrians a reason to pay.
Leipzig began its rise into one of the largest European trade fair cities after Maximilian granted it wide-ranging privileges in 1497 (and raised its three markets to the status of Imperial Fair in 1507).
Tu felix Austria nube
As part of the Treaty of Arras, Maximilian betrothed his three-year-old daughter Margaret to the Dauphin of France (later Charles VIII), son of his adversary Louis XI. Under the terms of Margaret's betrothal, she was sent to Louis to be brought up under his guardianship. Despite Louis's death in 1483, shortly after Margaret arrived in France, she remained at the French court. The Dauphin, now Charles VIII, was still a minor, and his regent until 1491 was his sister Anne.
Dying shortly after signing the Treaty of Le Verger, Francis II, Duke of Brittany, left his realm to his daughter Anne. In her search for alliances to protect her domain from neighboring interests, she was betrothed to Maximilian in 1490. About a year later, they married by proxy.
However, Charles VIII and his sister wanted her inheritance for France. So, when the former came of age in 1491, and taking advantage of Maximilian and his father's interest in the succession of their adversary Mathias Corvinus, King of Hungary, Charles repudiated his betrothal to Margaret, invaded Brittany, forced Anne of Brittany to repudiate her unconsummated marriage to Maximilian, and married Anne of Brittany himself.
Margaret then remained in France as a hostage of sorts until 1493, when she was finally returned to her father with the signing of the Treaty of Senlis.
In the same year, as the hostilities of the lengthy Italian Wars with France were in preparation, Maximilian contracted another marriage for himself, this time to Bianca Maria Sforza, daughter of Galeazzo Maria Sforza, Duke of Milan, with the intercession of Galeazzo's brother, Ludovico Sforza, then regent of the duchy after the former's death.
Years later, in order to reduce the growing pressures on the Empire brought about by treaties between the rulers of France, Poland, Hungary, Bohemia, and Russia, as well as to secure Bohemia and Hungary for the Habsburgs, Maximilian met with the Jagiellonian kings Ladislaus II of Hungary and Bohemia and Sigismund I of Poland at the First Congress of Vienna in 1515. There they arranged for Maximilian's granddaughter Mary to marry Louis, the son of Ladislaus, and for Anne (the sister of Louis) to marry Maximilian's grandson Ferdinand (both grandchildren being the children of Philip the Handsome, Maximilian's son, and Joanna of Castile). The marriages arranged there brought Habsburg kingship over Hungary and Bohemia in 1526. In 1515, Louis was adopted by Maximilian. Maximilian had to serve as the proxy groom to Anna in the betrothal ceremony, because only in 1516 did Ferdinand agree to enter into the marriage, which would happen in 1521.
Thus Maximilian through his own marriages and those of his descendants (attempted unsuccessfully and successfully alike) sought, as was current practice for dynastic states at the time, to extend his sphere of influence. The marriages he arranged for both of his children more successfully fulfilled the specific goal of thwarting French interests, and after the turn of the sixteenth century, his matchmaking focused on his grandchildren, for whom he looked away from France towards the east.
These political marriages were summed up in the following Latin elegiac couplet, reportedly spoken by Matthias Corvinus: Bella gerant aliī, tū fēlix Austria nūbe/ Nam quae Mars aliīs, dat tibi regna Venus, "Let others wage war, but thou, O happy Austria, marry; for those kingdoms which Mars gives to others, Venus gives to thee."
Contrary to the implication of this motto, though, Maximilian waged war aplenty (in four decades of ruling, he waged 27 wars in total). His general strategy was to combine his intricate systems of alliance, military threats and offers of marriage to realize his expansionist ambitions. Using overtures to Russia, Maximilian succeeded in coercing Bohemia, Hungary and Poland into acquiescing in the Habsburgs' expansionist plans. Combining this tactic with military threats, he was able to gain favourable marriage arrangements in Hungary and Bohemia (which were under the same dynasty).
At the same time, his sprawling panoply of territories as well as potential claims constituted a threat to France, thus forcing Maximilian to continuously launch wars in defense of his possessions in Burgundy, the Low Countries and Italy against four generations of French kings (Louis XI, Charles VIII, Louis XII, Francis I). Coalitions he assembled for this purpose sometimes consisted of non-imperial actors like England. Edward J. Watts comments that the nature of these wars was dynastic, rather than imperial.
Fortune also helped to bring about the results of his marriage plans. The double marriage could have given the Jagiellons a claim in Austria, while a potential male child of Margaret and John, a prince of Spain, would have had a claim to a portion of his maternal grandfather's possessions as well. But as it turned out, Vladislaus's male line became extinct, while the frail John died (possibly of overindulgence in sexual activities with his bride) without offspring, so Maximilian's male line was able to claim the thrones.
Death and succession
During his last years, Maximilian began to focus on the question of his succession. His goal was to secure the throne for a member of his house and prevent Francis I of France from gaining the throne. According to the traditional view, the resulting "election campaign" was unprecedented due to the massive use of bribery. The Fugger family provided Maximilian a credit of one million gulden, which was used to bribe the prince-electors. However, the bribery claims have been challenged. At first, this policy seemed successful, and Maximilian managed to secure the votes from Mainz, Cologne, Brandenburg and Bohemia for his grandson Charles V. The death of Maximilian in 1519 seemed to put the succession at risk, but in a few months the election of Charles V was secured.
In 1501, Maximilian fell from his horse and badly injured his leg, causing him pain for the rest of his life. Some historians have suggested that Maximilian was "morbidly" depressed: from 1514, he travelled everywhere with his coffin. In 1518, feeling his death near after seeing an eclipse, he returned to his beloved Innsbruck, but the city's innkeepers and purveyors refused to grant the emperor's entourage further credit. The resulting fit led to a stroke that left him bedridden on 15 December 1518. He continued to read documents and receive foreign envoys right until the end, though. Maximilian died in Wels, Upper Austria, at three o'clock in the morning on 12 January 1519. He was succeeded as Emperor by his grandson Charles V, his son Philip the Handsome having died in 1506. For penitential reasons, Maximilian gave very specific instructions for the treatment of his body after death. He wanted his hair to be cut off and his teeth knocked out, and the body was to be whipped and covered with lime and ash, wrapped in linen, and "publicly displayed to show the perishableness of all earthly glory". Gregor Reisch, the emperor's friend and confessor who closed his eyes, did not obey these instructions, though; he placed a rosary in Maximilian's hand and other sacred objects near the corpse. Maximilian was buried on borrowed money.
Although he is buried in the Castle Chapel at Wiener Neustadt, an extremely elaborate cenotaph tomb for Maximilian is in the Hofkirche, Innsbruck, where the tomb is surrounded by statues of heroes from the past. Much of the work was done in his lifetime, but it was not completed until decades later.
Legacy
Despite his reputation as "the last knight" (and his penchant for personally commanding battles and leading a peripatetic court), as a politician Maximilian also carried out "herculean tasks of bureaucracy" every day of his adult life (the emperor boasted that he could dictate, simultaneously, to half a dozen secretaries). At the same time, James M. Bradburne remarks that, "Naturally every ruler wanted to be seen as a victor, but Maximilian aspired to the role of Apollo Musagetes." The circle of humanists gathered around him and other contemporary admirers also tended to depict him as such. Maximilian was a universal patron, whose intellect and imagination, according to historian Sydney Anglo, made the courtier of Castiglione look like a scaled-down version. Anglo points out, though, that the emperor treated his artists and scholars as mere tools (whom he also tended to pay inadequately or late) to serve his purposes, and never as autonomous forces. Maximilian did not play the roles of sponsor and commissioner only: as organizer, stimulator and planner, he joined the creative processes, drew up the programmes, suggested improvements, checked and decided on the details, and invented devices, almost regardless of the time and material resources required. His creativity was not limited to the practical issues of politics, economy and war, but extended to the arts, the sciences, hunting, fishing and especially technical innovations, including the creation of all kinds of military equipment, fortifications, precious metal processing and the mining industry. These activities, though, were time-consuming, and the effort the emperor poured into them was sometimes criticized as excessive, or as distracting him from the main tasks of a ruler. In the nineteenth and early twentieth centuries, some even criticized him for possessing qualities that befitted a genius more than a ruler, or held that his far-seeing intellect made him unwisely try to force the march of time.
Military innovation, chivalry and equipment
Maximilian was a capable commander (although he lost many wars, usually due to a lack of financial resources; notable commentators of his time, including Machiavelli, Piero Vettori and Guicciardini, rated him as a great general – in the words of Machiavelli, "second to none" – but pointed out that extravagance, terrible management of financial resources and other character defects tended to lead to the failure of his grand schemes) and a military innovator who contributed to the modernization of warfare. He and his condottiero George von Frundsberg organized the first formations of the Landsknechte on the inspiration of the Swiss pikemen, but increased the ratio of pikemen and favoured handgunners over crossbowmen, with new tactics being developed, leading to improved performance. Discipline, drilling and a staff highly developed by the standards of the era were also instilled. The "war apparatus" he created later played an essential role in Austria's standing as a great power. Maximilian was the founder and organiser of the arms industry of the Habsburgs. He started the standardization of the artillery (according to the weight of the cannonballs) and made it more mobile. He sponsored new types of cannon, initiated many innovations that improved range and damage so that cannons worked better against thick walls, and concerned himself with metallurgy, as cannons often exploded when ignited and caused damage among his own troops. According to contemporary accounts, he could field an artillery of 105 cannons, including both iron and bronze guns of various sizes. The artillery force is considered by some to have been the most developed of its day. The arsenal in Innsbruck, created by Maximilian, was one of the most notable artillery arsenals in Europe. His typical tactic was that the artillery should attack first, the cavalry would act as shock troops and attack the flanks, and the infantry fought in tightly knit formation in the middle.
Maximilian was described by the nineteenth century politician Anton Alexander Graf von Auersperg as 'the last knight' (der letzte Ritter), and this epithet has stuck to him the most. Some historians note that the epithet rings true, yet is ironic: as the father of the Landsknechte (a paternity he shared with George von Frundsberg) and "the first cannoneer of the nation", he ended the combat supremacy of the cavalry, and his death heralded the military revolution of the next two centuries. Moreover, his multifaceted reforms broke the back of the knightly class both militarily and politically. He threw his own weight behind the promotion of the infantry soldier, leading them into battle on foot with a pike on his shoulder and giving their commanders honours and titles. To Maximilian, the rise of the new martial ethic – including even its violent aspect – associated with the rise of the Landsknechte was also an inextricable part of his own masculine identity. He believed that fighting alongside his foot soldiers legitimized his right to rule more than any noble trapping or title. In his time, though, social tensions were brewing, and the nobles resisted this belief. At the Siege of Padua in 1509, commanding a French–German allied army, Maximilian ordered the noble knights to dismount to help the Landsknechte storm a breach, but the Chevalier Bayard criticized him for putting noblemen at risk alongside "cobblers, blacksmiths, bakers, and laborers, and who do not hold their honor in like esteem as gentlemen" – even merely mixing the two on the same battlefield was considered insulting. The French then refused to obey. The siege broke down when the German knights refused to continue their assaults on foot and demanded to fight on horseback, also on the basis of status. A furious Maximilian left the camp and ordered the army to retreat.
With Maximilian's establishment and use of the Landsknechte, the military organisation in Germany was altered in a major way. Here began the rise of military enterprisers, who raised mercenaries with a system of subcontractors to make war on credit, and acted as the commanding generals of their own armies. Maximilian became an expert military enterpriser himself, leading his father to consider him a spendthrift military adventurer who wandered into new wars and debts while still recovering from the previous campaigns.
Regarding the cavalry, in 1500, using the French gendarmes as a model, he organized his heavy cavalry, called the kyrisser. These cavalrymen, still mostly noblemen, were still fully armed, but more lightly – these were the predecessors of cuirassiers. Non-nobles began to be accepted into the cavalry (mostly serving as light cavalry – each lanze, or lance, contained one kyrisser and six to seven light cavalrymen) and occasionally, he knighted them too. For heavy as well as light cavalry, firearms began to replace cold weapons.
[Images from the Book of Armaments (Zeugbuch) of Maximilian: a great arquebus for two shooters (fol. 72r) – in battles, the main force of a Landsknecht regiment formed a Gewalthaufen; after the first encounter, those armed with melee weapons attacked the enemy at close range while arquebusiers moved in front of or between the various formations, and the artillery was covered by the rear guard – and the Hauptstück (main gun) Der Leo, used in the Siege of Kufstein (1504); although one of the heaviest cannons, it failed to breach Kufstein's walls along with the other cannons firing stone balls, and only the Purlepaus and the Weckauf, the two largest cannons of the time, destroyed Kufstein almost entirely on their own with iron balls.]
In military medicine, Maximilian introduced structured triage (triage itself had existed since ancient Egypt). It was in his armies that the wounded were first categorized and treated according to an order of priority – in times of war, higher priority was given to military personnel over civilians and to the higher-ranked over the lower-ranked. The practice spread to other armies in the following centuries and was later given the name "triage" by the French. During the Middle Ages, European armies tended to bring with them workers who served the soldiers both as barbers (this was their chief function, hence the origin of their German name, Feldscherer, or field shearer) and as low-skilled paramedics (as opposed to a trained medicus) who treated their external wounds. Beginning with Maximilian, each captain of a detachment (of 200–500 men) was obliged to bring a capable Feldscherer and provide him with medicine and equipment. These paramedics were subject to a level of control under an Oberfeldarzt (chief field doctor), although their organization was not stabilized until the seventeenth century, and it also took a long time before the average skill of these paramedics was raised substantially.
The emperor would not live to see the fruits of his military reforms, which were also widely adopted by the territories in the Empire and other nations in Europe. Moreover, the landsknechte's mode of fighting boosted the strength of the territorial polities, while more centralized nations were able to utilize them in ways German rulers could not. Kleinschmidt concludes that, in the end, Maximilian did good service to the competitors of his own grandson.
While favouring more modern methods in his actual military undertakings, Maximilian had a genuine interest in promoting chivalric traditions such as the tournament, being an exceptional jouster himself. The tournaments helped to enhance his personal image and solidify a network of princes and nobles over whom he kept a close watch, fostering fidelity and fraternity among the competitors. Taking inspiration from the Burgundian tournament, he developed the German tournament into a distinctive entity. In addition, on at least two occasions during his campaigns, he challenged and killed French knights in duel-like preludes to battles.
Knights reacted to their diminished condition and loss of privileges in different ways. Some asserted their traditional rights in violent ways and became robber knights, like Götz von Berlichingen. The knights as a social group became an obstacle to Maximilian's law and order, and the relationship between them and "the last knight" became antagonistic. Some probably also felt slighted by the way imperial propaganda presented Maximilian as the sole defender of knightly values. At the Diet of Worms in 1495, the emperor, the archbishops, the great princes and the free cities joined forces to initiate the Perpetual Land Peace (Ewiger Landfriede), forbidding all private feuding, in order to protect the rising tide of commerce. The tournament sponsored by the emperor was thus a tool to appease the knights, although it became a recreational, yet still deadly, extreme sport. After spending 20 years creating and supporting policies against the knights, though, Maximilian changed his ways and began trying to engage them and integrate them into his frame of rulership. In 1517, he lifted the ban on Franz von Sickingen, a leading figure among the knights, and took him into his service. In the same year, he summoned the Rhenish knights and introduced his Ritterrecht (Knights' Rights), which would provide the free knights with a special law court in exchange for their oaths to be obedient to the emperor and to abstain from evil deeds. He did not succeed in collecting taxes from them or creating a knights' association, but an ideology or framework emerged that allowed the knights to retain their freedom while fostering the relationship between the crown and the sword.
Maximilian had a great passion for armour, not only as equipment for battle or tournaments, but as an art form. He prided himself on his expertise in armour design and his knowledge of metallurgy. Under his patronage, "the art of the armorer blossomed like never before." Master armorers across Europe such as Lorenz Helmschmid, Konrad Seusenhofer, Franck Scroo and Daniel Hopfer (who was the first to etch on iron as part of an artistic process, using an acid wash) created custom-made armours that often served as extravagant gifts to display Maximilian's generosity, as well as devices that would produce special effects (often devised by the emperor himself) in tournaments. The style of armour that became popular during the second half of his reign featured elaborate fluting and metalworking, and became known as Maximilian armour. It emphasized the details in the shaping of the metal itself, rather than the etched or gilded designs popular in the Milanese style. Maximilian also gave a bizarre jousting helmet as a gift to King Henry VIII – the helmet's visor features a human face, with eyes, nose and a grinning mouth, and was modelled after the appearance of Maximilian himself. It also sports a pair of curled ram's horns, brass spectacles, and even etched beard stubble. Knowing that the extinct Treizsaurbeyn (likely Treitzsauerwein) family had a method of making extra-tough armour that could not be shot through by any crossbow, he sought out their servant Caspar Riederer, who helped Konrad Seusenhofer to recreate the armour type. With the knowledge gained from Riederer, Maximilian invented a method "so that in his workshops 30 front and back plates could be made at once", in order to help his soldiers and especially his Landsknechte. The details of the process are not known today, but it likely made use of matrices from which armour parts could be stamped out of sheet metal.
Maximilian associated the practical art of hunting (as well as fishing and falconry) with his status as prince and knight. He introduced parforce and park hunting to Germany, and also published essays on these topics. In this he followed Frederick II Hohenstaufen and was equally attentive to naturalist details, though less scientific. His Tyrol Fishery Book (Tiroler Fischereibuch) was composed with the help of his fish master Martin Fritz and Wolfgang Hohenleiter. To keep fish fresh, he invented a special kind of fish container.
While he was unconcerned with the disappearance or weakening of the knightly class due to the development of artillery and infantry, Maximilian worried greatly about the vulnerability of ibexes, described by him as "noble creatures", in the face of handguns, and criticized the peasants in particular for having no moderation. In 1517, the emperor banned the manufacture and possession of the wheellock, which was designed for and especially effective in hunting. Another possible reason for this earliest attempt at gun control may have been worries about the spread of crime. He investigated, classified and protected game reserves, whose animals damaged the farmers' crops, as he forbade the farmers to erect fences. The game population quickly increased, though. In one case, he became an unintentional species conservationist: as he had Tyrolean mountain lakes stocked with trout, a variety of the last trout originating from the Danube, the Kaiser Max trout, has survived to this day in Gossenköllesee.
Since his youth, in Germany and especially in the Low Countries, he paid attention to the burghers' art of archery, joined archery competitions and gave patronage to crossbow and archery guilds (in military affairs, though, he officially abolished the crossbow in 1517, despite its continued use in other countries). Although he never gained complete popular support in Flanders, these patronage activities helped him to build up a relationship with guild members who participated in his campaigns, notably at Guinegate (1479), and to rally urban support during his time in the Low Countries. His name heads the list of lords in the huge 1488 Saint George guild-book in Ghent. In the early sixteenth century, he built a guildhouse for the St. Sebastian's Archers at The Hague.
The 1511 Landlibell (a military statute and "a cornerstone of Tyrol's democracy", which established the foundation for Tyrol's separate defence organization by exempting the population from military service outside their borders while requiring them to serve in the defence of their region, and recognized the connection between freedom and the right to bear arms), which remained largely in effect until the fall of the monarchy, led to the establishment of armed militia formations called (Tiroler) Schützen. The term Schützen had been used to refer to men armed with crossbows, but Maximilian enthusiastically encouraged riflemen and firearms. These formations still exist, although they have been non-governmental since 1918. In 2019, they organized a great shooting event in commemoration of the emperor.
Another art associated with chivalry and military activities was dancing. As the Landsknechte's fighting techniques developed, they no longer preferred fighting along a straight line (as practised even by the Swiss until the end of the fifteenth century), but leaned towards a circle-wise movement that enhanced the use of the space around the combatant and allowed them to attack opponents from different angles. The circle-wise formation described by Jean Molinet as the "snail" would become the hallmark of Landsknecht combat. The new types of combat also required the maintenance of a stable bodily equilibrium. Maximilian, an innovator of these types of movement, also saw value in their effect on the maintenance of group discipline (apart from the control of centralized institutions). As Maximilian and his commanders sought to popularize these forms of movement (which only became daily practice at the end of the fifteenth century and gained dominance after Maximilian's death in 1519), he promoted them in tournaments, in fencing and in dancing as well – which started to focus on steps and the movements of the feet over the movements of the head and the arms. The courtly festivals became a playground for innovations, foreshadowing developments in military practices. Regarding dancing, other elements favoured by Maximilian's court were the Moriskentanz ("Moors' dance", "Morris-dance", or Moresca), the masquerades (mummerei) and the use of torchbearers. Torchbearers are a part of almost all of the illustrated costumed circle dances in the Weisskunig and Freydal, with Maximilian himself usually being one of them. Masquerades usually included dancing to the music of fifes and drums, performed by the same musicians who served the new infantry forces. The famous humanist philosopher Julius Caesar Scaliger, who grew up as a page at Maximilian's court, reportedly performed the Pyrrhic war dance, which he reconstructed from ancient sources, in front of the emperor. The annual Tänzelfest, the oldest children's festival in Bavaria, reportedly founded by Maximilian in 1497 (the event only appears in written sources from 1658), includes dancing, processions, and a reenactment of city life under Maximilian.
Cultural patronage, reforms and image building
Maximilian was a keen supporter of the arts and sciences, and he surrounded himself with scholars such as Joachim Vadian and Andreas Stoberl (Stiborius), promoting them to important court posts. Many of them were commissioned to assist him in completing a series of projects, in different art forms, intended to glorify for posterity his life and deeds and those of his Habsburg ancestors. He referred to these projects as Gedechtnus ("memorial"); they included a series of stylised autobiographical works: the epic poems Theuerdank and Freydal, and the chivalric novel Weisskunig, published in editions lavishly illustrated with woodcuts. In this vein, he commissioned a series of three monumental woodblock prints, created by artists including Albrecht Dürer, Albrecht Altdorfer and Hans Burgkmair: The Triumphal Arch (1512–18, 192 woodcut panels, 295 cm wide and 357 cm high – approximately 9'8" by 11'8½"), and a Triumphal Procession (1516–18, 137 woodcut panels, 54 m long), which is led by a Large Triumphal Carriage (1522, 8 woodcut panels, 1½' high and 8' long). According to The Last Knight: The Art, Armor, and Ambition of Maximilian I, Maximilian dictated large parts of the books to his secretary and friend Marx Treitzsaurwein, who did the rewriting. The authors of the book Emperor Maximilian I and the Age of Dürer cast doubt on his role as a true patron of the arts, though, as he tended to favor pragmatic elements over high art. On the other hand, he was a perfectionist who involved himself in every stage of the creative process. His goals also extended far beyond his own glorification: commemoration included detailed documentation of the present as well as the restoration of source materials and precious artifacts.
Though notorious for his micro-managing, in one notable case the emperor allowed and encouraged free-ranging, even wild improvisations: his Book of Prayers. The work shows a lack of constraint and no consistent iconographic program on the part of the artist (Dürer), something that would be recognized and highly praised by Goethe in 1811.
In 1504, Maximilian commissioned the Ambraser Heldenbuch, a compendium of German medieval narratives (the majority of them heroic epics), which was written down by Hans Ried. The work was of great importance to German literature because, of its twenty-five narratives, fifteen are unique. This would be the last time the Nibelungenlied was enshrined in German literature before being rediscovered 250 years later. Maximilian was also a patron of Ulrich von Hutten, whom he crowned Poet Laureate in 1517, and of the humanist Willibald Pirckheimer, who was one of Germany's most important patrons of the arts in his own right.
As Rex litteratus, he supported all the literary genres that had been supported by his predecessors, in addition to drama, a genre that had been gaining in popularity in his era. Joseph Grünpeck attracted his attention with Comoediae duae, presumably the first German Neo-Latin festival plays. He was impressed with Grünpeck's Streit zwischen Virtus und Fallacicaptrix, a morality play in which Maximilian himself was asked to choose between virtue and base pleasure. Celtis wrote for him Ludus Dianae and Rhapsodia de laudibus et victoria Maximiliani de Boemannis. The Ludus Dianae displays the symbiotic relationship between ruler and humanist, who are both portrayed as Apollonian or Phoebeian, while Saturn – as counterpole of Phoebus – is a negative force, and Bacchus as well as Venus display dangerous aspects in tempting humans towards a depraved life. Locher wrote the first German Neo-Latin tragedy, also the first German Humanist tragedy, the Historia de Rege Frantie. Other notable authors included Benedictus Chelidonius and Hieronymus Vehus. These plays often doubled as encomium or dramatized newe zeittung (news reports) in support of imperial or princely politics. Douglas A. Russell remarks that the academic mode of theater associated with the new interest in Humanism and the Classics at that time was mainly the work of Konrad Celtis, Joachim von Watt (a poet laureate crowned by Maximilian, who at age 32 was Rector of the University of Vienna), and Benedictus Chelidonius. William Cecil MacDonald comments that, in the context of German medieval literary patronage, "Maximilian's literary activities not only 'summarize' the literary patronage of the Middle Ages, but also represent a point of departure — a beacon for a new age." Moreover, "Like Charlemagne, Otto the Great, Henry II, and Frederick Barbarossa, Maximilian was a fostering spirit, i.e. he not only commissioned literature, but through his policies and the force of his personality he created a climate conducive to the flowering of the arts."
Under his rule, the University of Vienna reached its apogee as a centre of humanistic thought. He established the College of Poets and Mathematicians, which was incorporated into the university. Maximilian invited Conrad Celtis, the leading German scholar of his day, to the University of Vienna. Celtis founded the Sodalitas litteraria Danubiana (which was also supported by Maximilian), an association of scholars from the Danube area, to support literature and humanist thought. Maximilian also promoted the development of the young Habsburg University of Freiburg and its host city, in consideration of the city's strategic position. He gave the city privileges and helped it to turn the corner financially, while utilizing the university's professors for important diplomatic missions and key positions at court. The Chancellor Konrad Stürtzel, twice the university's rector, acted as the bridge between Maximilian and Freiburg. Maximilian supported and utilized the humanists partly for propaganda effect and partly for his genealogical projects, but he also employed several as secretaries and counsellors – in selecting them he rejected class barriers, believing that intelligent minds derive their nobility from God, even if this caused conflicts (even physical attacks) with the nobles. He relied on his humanists to create a nationalistic imperial myth, in order to unify the Reich against the French in Italy, as a pretext for a later Crusade (the Estates protested against investing their resources in Italy, though). Maximilian told his Electors each to establish a university in their realm. Thus in 1502 and 1506, together with the Elector of Saxony and the Elector of Brandenburg respectively, he co-founded the University of Wittenberg and the University of Frankfurt. The University of Wittenberg was the first German university established without a Papal Bull, signifying secular imperial authority over universities. This first centre in the North where old Latin scholarly traditions were overthrown would become the home of Luther and Melanchthon.
As he was too distant, his patronage of Humanism, and of humanistic books in particular, did not reach the Netherlands (and as Mary of Burgundy died too young, while Philip the Fair and Charles V were educated in the Burgundian tradition, there was no sovereign who fostered humanistic Latin culture in the Netherlands, although they had their own mode of learning). There were, though, exchanges and arguments over political ideas and values between the two sides. Maximilian greatly admired his father-in-law Charles the Bold (he even adopted Charles's motto as one of his own, namely "I dared it!", or "Je l'ay emprint!") and promoted his conception that the sovereign's power and magnificence came directly from God and not through the mediation of the Church. This concept was part of Maximilian's political program (alongside other elements such as the recovery of Italy, the position of the Emperor as dominus mundi, expansionism, the crusade, etc.), supported in the Netherlands by Paul of Middelburg but considered extreme by Erasmus. Concerning the heroic models that rulers should follow (especially regarding the education of Archduke Charles, who would later be much influenced by his grandfather's knightly image), Antoine Haneron proposed ancient heroes, above all Alexander (whom Charles also adopted as a great role model for all his life), while Molinet presented Alexander, Caesar and Semiramis, but Erasmus protested against Alexander, Caesar and Xerxes as models. Maximilian strongly promoted the first two, though, as well as St. George (in both his "Frederican" and "Burgundian" forms). The idea of peace also became more pronounced in Maximilian's court in his last years, likely influenced by Flemish humanism, especially that of Erasmus (Maximilian himself was an unabashedly warlike prince, but late in his life he did recognize that his 27 wars had only served the devil). Responding to the intense Burgundian humanistic discourse on nobility by birth versus nobility by virtue, the emperor pushed his own modification: office versus birth (with a strong emphasis on the primacy of office over birth). Noflatscher opines that the emperor was probably the most important mediator of the Burgundian model himself, with Bianca Maria also having influence (although she could only partially fulfill her role).
In philosophy, besides Humanism, esotericism had a notable influence during Maximilian's time. In 1505, he sent Johannes Trithemius eight questions concerning spiritual and religious matters (questions 3, 5, 6 and 7 concerned witchcraft), which Trithemius answered and later published in the 1515 book Liber octo questionum (Book of Eight Questions). Maximilian displayed a skeptical side, posing questions such as why God permitted witches and their powers to control evil spirits. The authors (now usually identified as Heinrich Kramer alone) of the most notorious work on witchcraft of the time, the Malleus Maleficarum, claimed that they had his letter of approval (supposedly issued in November 1486) to act as inquisitors, but according to Behringer, Durrant and Bailey, he likely never supported them (although Kramer apparently went to Brussels, the Burgundian capital, in 1486, hoping to influence the young King – they did not dare to involve Frederick III, whom Kramer had offended some years earlier). Trithemius dedicated to Maximilian the De septem secundeis ("The Seven Secondary Intelligences"), which argued that the cycle of ages was ruled by seven planetary angels, in addition to God (the First Intelligence). The historian Martin Holleger notes, though, that Maximilian himself did not share the cyclical view of history typical of his contemporaries, nor did he believe that his age would be the last: he had a linear understanding of time, in which progress would make the world better. The kabbalistic elements in the court, as well as Trithemius himself, influenced the thinking of the famous polymath and occultist Heinrich Cornelius Agrippa (who in Maximilian's time served mainly as secretary, soldier and diplomatic spy). The emperor, having an interest in the occult himself, intended to write two books on magic (Zauberpuech) and black magic (Schwartzcunnstpuech) but did not have time for them.
The Italian philosopher Gianfrancesco Pico della Mirandola dedicated his 1500 work De imaginatione, a treatise on the human mind (in which he synthesized Aristotle, Neoplatonism and Girolamo Savonarola), to Maximilian. The Italian philosopher and theologian Tommaso Radini Tedeschi also dedicated his 1511 work La Calipsychia sive de pulchritudine animae to the emperor.
The establishment of the new Courts and the formal Reception of Roman Law in 1495 led to the formation of a professional lawyer class as well as a bureaucratic judiciary. Legal scholars trained in mos italicus (either in Italian universities or in newly established German universities) came into demand. Among the prominent lawyers and legal scholars who served Maximilian in various capacities and provided legal advice to the emperor were Mercurino Gattinara, Sebastian Brandt and Ulrich Zasius. Together with the aristocrats and the literati (who participated in Maximilian's propaganda and intellectual projects), the lawyers and legal scholars became one of the three main groups in Maximilian's court. Konrad Stürtzel, the Chancellor, belonged to this group. In Maximilian's court – more egalitarian than any previous German or Imperial court, with its burghers and peasants – all these groups were treated equally in promotions and rewards. The groups also blended in many respects, usually through marriage alliances.
Maximilian was an energetic patron of the court library. Previous Habsburg rulers such as Albert III and Maximilian's father Frederick III (who collected the 110 books that formed the core inventory of the later library) had also been instrumental in centralizing art treasures and book collections. Maximilian became a bibliophile during his time in the Low Countries. As husband of Mary of Burgundy, he came into possession of the huge Burgundian library, which according to some sources was brought to Austria when he returned to his native land. According to the official website of the Austrian National Library though, the Habsburgs only brought the collection to Vienna in 1581. Maximilian also inherited the Tyrolean library of his uncle Sigismund, himself a great cultural patron, which had received a large contribution from Eleanor of Scotland, Sigismund's wife and also a great lover of books. When he married Bianca Maria, Italian masterpieces were incorporated into the collection. The collection became more organized when Maximilian commissioned Ladislaus Sunthaim, Jakob Mennel and Johannes Cuspinian to acquire and compose books. By the beginning of the sixteenth century, the library had acquired significant Bohemian, French and Italian book art. In 1504, Conrad Celtis spoke for the first time of the Bibliotheca Regia (which would evolve into the Imperial Library and, as it is named today, the Österreichische Nationalbibliothek or Austrian National Library), an organized library that had been expanded through purchases. Maximilian's collection was dispersed between Innsbruck, Vienna and Wiener Neustadt. The Wiener Neustadt part was under Conrad Celtis's management; the more valuable part was in Innsbruck. Already in Maximilian's time, the idea and function of libraries were changing and it became important that scholars gained access to the books. Under Maximilian, who was casual in his attitude to scholars (a trait which amazed the French chronicler Pierre Frossart), it was fairly easy for a scholar to gain access to the emperor, the court and thus the library. But despite the intention of rulers like Maximilian II (and his chief Imperial Librarian Blotius) and Charles VI to make the library open to the general public, the process was only completed in 1860.
During Maximilian's time, there were several projects of an encyclopaedic nature, among them the incomplete projects of Conrad Celtis. However, as the founder of the Collegium poetarum et mathematicorum and a "program thinker" (Programmdenker, a term used by Jan-Dirk Müller and Hans-Joachim Ziegeler), Celtis established an encyclopaedic-scientific model that increasingly integrated and favoured the mechanical arts in the combination of natural sciences and technology, and associated them with the divina fabrica (God's creation in the six days). Consistent with Celtis's design, the university's curriculum and the political and scientific order of Maximilian's time (which was also influenced by developments in previous eras), the humanist Gregor Reisch, who was also Maximilian's confessor, produced the Margarita Philosophica, "the first modern encyclopaedia of any importance", first published in 1503. The work covers rhetoric, grammar, logic, music, mathematical topics, childbirth, astronomy, astrology, chemical topics (including alchemy), and hell.
An area that saw many new developments under Maximilian was cartography, of which the important center in Germany was Nuremberg. In 1515 Dürer and Johannes Stabius created the first world map projected onto a solid geometric sphere. Bert De Munck and Antonella Romano draw a connection between the mapmaking activities of Dürer and Stabius and efforts to grasp, manipulate and represent time and space, which were also associated with Maximilian's "unprecedented dynastic mythmaking" and pioneering printed works like the Triumphal Arch and the Triumphal Procession. Maximilian assigned Johannes Cuspinianus and Stabius to compile a topography of the Austrian lands and a set of regional maps. Stabius and his friend Georg Tannstetter worked together on the maps. The work appeared in 1533, but without maps. The 1528 Lazarus-Tannstetter map Tabulae Hungariae (one of the first regional maps in Europe) though seemed to be related to the project. The cartographers Martin Waldseemüller and Matthias Ringmann dedicated their famous work Universalis Cosmographia to Maximilian, although the direct backer was René II of Lorraine. The 1513 edition of Geography by Jacobus Aeschler and Georgius Ubelin, which contained this map and was also dedicated to Maximilian, is considered by Armando Cortes to be the climax of a cartographic revolution. The emperor himself dabbled in cartography. According to Buisseret, Maximilian could "call upon a variety of cartographic talent unrivalled anywhere else in Europe at that time" (including Celtis, Stabius, Cuspinianus, Jacob Ziegler, Johannes Aventinus and Tannstetter). The development of cartography was tied to the emperor's special interest in sea route exploration, as an activity bearing on his concept of a global monarchy, and to his responsibilities as Duke consort to Mary of Burgundy, grandfather of the future ruler of Spain as well as ally and close relation of the Portuguese kings. He sent men like Martin Behaim and Hieronymus Münzer to the Portuguese court to cooperate in their exploration efforts as well as to act as his own representatives. Another man involved in the network was the Fleming Josse van Huerter, or Joss de Utra, who would become the first settler of the island of Faial in the Portuguese Azores. Maximilian also played an essential role in connecting the financial houses in Augsburg and Nuremberg (including the companies of Höchstetter, Fugger and Welser) to Portuguese expeditions. In exchange for financial backing, King Manuel provided German investors with generous privileges. The humanist Conrad Peutinger was an important agent who acted as advisor to financiers, translator of voyage records and imperial councillor. Harald Kleinschmidt opines that regarding the matter of world exploration, as well as the "transformation of European world picture" in general, Maximilian was "a crucial though much underestimated figure" of his time.
The evolution of cartography was connected to developments in ethnography and the new humanist science of chorography (promoted by Celtis at the University of Vienna). As Maximilian had already promoted the Ur-German after much archaeological and textual excavation, and had embraced the early German wildness, Peutinger correctly deduced that he would support German exploration of another primitive people as well. Using the Welsers' commercial ventures as a pretext, Peutinger goaded Maximilian into backing his ethnographical interests in the Indians and supporting the 1505–1506 voyage of Balthasar Springer around Africa to India. This endeavour also added to the emperor's image as a conqueror and ruler, rivalling the claims of his arch-rival Suleiman the Magnificent regarding a global empire. Based on an instruction dictated by Maximilian in 1512 regarding Indians in the Triumphal Procession, Jörg Kölderer executed a series of (now lost) drawings, which served as the guideline for Altdorfer's miniatures in 1513–1515, which in turn became the model for woodcuts (half of them based on now lost 1516–1518 drawings by Burgkmair) showing "the people of Calicut". In 1508, Burgkmair produced the People of Africa and India series, focusing on depicting the peoples whom Springer encountered along coastal Africa and India. The series brought into being "a basic set of analytic categories that ethnography would take as its methodological foundation". As part of his dealings with Moscow, the Jagiellons and the Slavic East in general, Maximilian surrounded himself with people from the Slovenian territories who were familiar with Slavic languages, such as Sigismund von Herberstein (himself a prominent ethnographer), Petrus Bonomo, George Slatkonia and Paulus Oberstain. Political necessities overcame the prejudice against living languages, which started to find a place alongside Latin throughout central Europe, including in scholarly areas.
The emperor's program of restoring the University of Vienna to its former pre-eminence was also concerned with astrology and astronomy. He realized the potential of the printing press when combined with these branches of learning, and employed Georg Tannstetter (who, in 1509, was appointed by Maximilian as Professor of Astronomy at the University of Vienna and also worked on a joint calendar reform attempt with the Pope) to produce yearly practica and wall calendars. In 1515, Stabius (who also acted as the court astronomer), Dürer and the astronomer Konrad Heinfogel produced the first planispheres of both the southern and northern hemispheres, as well as the first printed celestial maps. These maps prompted a revival of interest in the field of uranometry throughout Europe. The Ensisheim meteorite fell to earth during the reign of Maximilian (7 November 1492), one of the oldest meteorite impacts in recorded history. King Maximilian, who was on his way to a campaign against France, ordered it to be dug up and preserved at a local church. The meteorite, as a good omen, was utilized for propaganda against France through the use of broadsheets with dramatic pictures under the direction of the poet Sebastian Brandt (as Maximilian defeated a French army far larger than his own at Senlis two months later, the news spread even more widely). On the subject of calendars and calendar reform, already in 1484 the famous Flemish scientist Paul of Middelburg had dedicated his Praenostica ad viginti annos duratura to Maximilian. His 1513 magnum opus Paulina de recta Paschae celebratione was also dedicated to Maximilian, together with Leo X.
In addition to maps, other astrological, geometrical and horological instruments were also developed, chiefly by Stiborius and Stabius, who understood the need to cooperate with the emperor to turn these instruments into useful tools for propaganda as well. The extraordinarily luxurious planetarium, which took twelve men to carry and was given as a diplomatic gift by Ferdinand I to Suleiman the Magnificent in 1541, originally belonged to Maximilian. He loved to introduce newly invented musical instruments. In 1506, he had a special regal, likely the apfelregal seen in one of Hans Weiditz's woodcuts, built for Paul Hofhaimer. The emperor's favourite musical instrument maker was Hans Georg Neuschel of Nuremberg, who created an improved trombone (Neuschel was a talented trombonist himself). In 1500, an elaborate lathe (Drehbank) was created for the emperor's personal carpentry hobby. This is the earliest extant lathe, the earliest known surviving lapidary instrument, and one of the earliest examples of scientific and technological furniture. The earliest surviving screwdriver has also been found attached to one of his suits of armour. Regiomontanus reportedly made an eagle automaton that moved and greeted him when he came to Nuremberg. Augsburg also courted "their" emperor by building, in 1514, the legendary Nachttor or Night Gate (famous for its many secret mechanisms), intended to make his entrance safer if he returned to the city at night. The gate was destroyed in 1867, but plans and descriptions remain, so Augsburg has recently created a virtual version. He liked to end his festivals with fireworks. In 1506, on the surface of Lake Constance, on the occasion of the gathering of the Reichstag, he staged a fireworks show (the first recorded German fireworks display, inspired by the example of Italian princes), complete with firework music provided by singers and court trumpeters. Machiavelli judged him extravagant, but these were not fireworks staged for pleasure, peaceful celebration or religious purposes, as was often the case in Italy, but a core ritual of Maximilian's court that demonstrated the link between pyrotechnics and military technology. The show caused a stir (news about the event was distributed through a Briefzeitung, or "letter newspaper"), leading to fireworks becoming fashionable. In the Baroque era, fireworks would become a common form of self-stylization for monarchs.
Many of these scientific and artistic instruments and technical marvels came from Nuremberg, by then the great mechanical, metalworking and precision industry centre of the German Renaissance. From 1510, Stabius also took up permanent residence there after travelling with the emperor for years. The city's precision industry and its secondary manufacturing industries were connected to the mining industry, into which the leading financiers from neighbouring Augsburg (which had a flourishing printing industry and was also politically important for the emperor) invested heavily in partnership with princes like Maximilian.
The developments in astronomy, astrology, cosmography and cartography, as well as a developing economy with demand for training in book-keeping, were tied to a change in the status and professionalization of mathematical studies (which had once stood behind medicine, jurisprudence and theology as the lowest art) in the universities. The leading figure was Georg Tannstetter (also the emperor's astrologer and physician), who provided his students with reasonably priced books through the collection and publication of works by Joannes de Muris, Peuerbach, Regiomontanus and others, and who wrote the Viri Mathematici (Lives of Mathematicians), the first historical study of mathematics in Austria (and also a work that consolidated the position of astronomers and astrologers at Maximilian's court, in imitation of Maximilian's genealogical projects that reinforced his imperial titles). The foremost exponent (and one of the founders) of "descriptive geometry" was Albrecht Dürer himself, whose work Melencolia I was a prime representation of the field and has inspired much discussion, including on its relation (or non-relation) to Maximilian's status as the best-known melancholic of the time, to his and his humanists' fear of the influence of the planet Saturn (some say the engraving was a self-portrait of Dürer, others that it was a talisman for Maximilian to counter Saturn), and to the Triumphal Arch, hieroglyphics and other esoteric developments at his court.
Maximilian continued the strong tradition of supporting physicians at court, begun by his father Frederick III, even though Maximilian himself had little personal use for them (he usually consulted everyone's opinions and then opted for some self-curing folk practices). He kept on his payroll about 23 court physicians, whom he "poached" during his long travels from the courts of his relatives, friends, rivals and urban hosts. An innovative solution was entrusting these physicians with healthcare in the most important cities, for which purpose an allowance and horses were made available to them. Alessandro Benedetti dedicated his Historia Corporis Humani: sive Anatomice (An Account of the Human Body, or Anatomy) to the emperor. As Humanism became established, the Medical Faculty of the University of Vienna increasingly abandoned Scholasticism and focused on studying the laws of disease and anatomy based on actual experience. From the early fifteenth century, the Medical Faculty of the university had tried to gain influence over the apothecaries of the city in order to enhance the quality of the dispensed medicines and to enforce uniform modes of preparation. Finally, in 1517, Maximilian granted it a privilege which allowed the faculty to inspect the Viennese pharmacies and to check the identity, quality and proper storage of the ingredients as well as the formulated preparations. Likely himself a victim of syphilis (dubbed the "French disease" and used by Maximilian and his humanists like Joseph Grünpeck in their propaganda and artistic works against France), Maximilian had an interest in the disease, which led him to establish eight hospitals in various hereditary lands. He also retained an interest in the healing properties of berries and herbs all his life and invented a recipe for an invigorating stone beer.
Maximilian had an interest in archaeology that was "creative and participatory rather than objective and distancing" (and sometimes destructive), according to Christopher S. Wood. His chief advisor on archaeological matters was Konrad Peutinger, who was also the founder of classical Germanic and Roman studies. Peutinger commenced an ambitious project, the Vitae Imperatorum Augustorum, a series of biographies of emperors from Augustus to Maximilian (each biography would also include epigraphic and numismatic evidence), but only the early sections were completed. The search for medals ultimately led to a broad craze in Germany for medals as an alternative to portraiture. At the suggestion of the emperor, the scholar published his collection of Roman inscriptions. Maximilian did not distinguish between the secular and the sacred, or between the Middle Ages and antiquity, and considered equal in archaeological value the various searches and excavations of the Holy Tunic (rediscovered in Trier in 1513 after Maximilian demanded to see it, and then exhibited, reportedly attracting 100,000 pilgrims), of Roman and German reliefs and inscriptions, and the most famous quest of all, the search for the remains of the hero Siegfried. Maximilian's private collection activities were carried out on his behalf by his secretary, the humanist Johann Fuchsmagen. Sometimes the emperor came into contact with antiquities during his campaigns – for example, an old German inscription found in Kufstein in 1504, which he immediately sent to Peutinger. Around 1512–1514, Pirckheimer translated and presented to Maximilian Horapollo's Hieroglyphica. The hieroglyphics would be incorporated by Dürer into the Triumphal Arch, which Rudolf Wittkower considers "the greatest hieroglyphic monument".
Maximilian's time was an era of international development for cryptography. His premier expert on cryptography was the Abbot Trithemius, who dedicated the Polygraphiae libri sex to the emperor (controversially disguised as a treatise on the occult, either because its real target audience was a select few such as Maximilian or to attract public attention to a tedious field) and wrote another work on steganography (Steganographia, published posthumously). As a practitioner, Maximilian himself functioned as the Empire's first cipher expert. It was under his reign that the proven use of encrypted messages in the German chancellery was first recorded, although the practice was not as elaborate as the mature Italian and Spanish systems. Maximilian experimented with different encryption methods, even in his private correspondence, often based on Upper Italian models.
In the fields of history and historiography, Trithemius was also a notable forger and inventive historian who helped to connect Maximilian to Trojan heroes, the Merovingians and the Carolingians. The project had contributions from Maximilian's other court historiographers and genealogists such as Ladislaus Suntheim, Johann Stabius, Johannes Cuspinian and Jakob Mennel. While his colleagues like Jakob Mennel and Ladislaus Suntheim often inserted invented ancient ancestors for the missing links, Trithemius invented entire sources, such as Hunibald (supposedly a Scythian historian), Meginfrid and Wastald. The historiographer Josef Grünpeck wrote the work Historia Friderici III et Maximiliani I (which would be dedicated to Charles V). The first history of Germany based on original sources (patronized by Maximilian and cultivated by Peutinger, Aventin, Pirckheimer, Stabius, Cuspinian and Celtis) was the Epitome Rerum Germanicarum written by Jakob Wimpheling, which argued that the Germans possessed a flourishing culture of their own.
Maximilian's time was the age of great world chronicles. The most famous and influential is the Nuremberg Chronicle, of which the author, Hartmann Schedel, is usually considered one of the important panegyrists and propagandists, hired and independent, of the emperor and his anti-Ottoman propaganda agenda.
According to Maria Golubeva, Maximilian and his court preferred fictional settings and the reimagination of history (such as the Weisskunig, a "unique mixture of history and heroic romance"), so no outstanding works of historiography (such as those of Molinet and Chastelain at the Burgundian court) were produced. The authors of The Oxford History of Historical Writing: Volume 3: 1400-1800 point out three major strands in the historical literature within the imperial circle. The first was genealogical research, which Maximilian elevated to new heights and which was represented most prominently by the Fürstliche Chronik written by Jakob Mennel. The second encompassed projects associated with the printing revolution, such as Maximilian's autobiographical projects and Dürer's Triumphal Arch. The third, and also the most sober strain of historical scholarship, constituted "a serious engagement with imperial legacy", with the scholar Johannes Cuspinianus being its most notable representative. Seton-Watson remarks that all of Cuspinianus's important works show a connection to Maximilian, with the Commentarii de Romanorum Consulibus being "the most profound and critical", the De Caesaribus et Imperatoribus Romanorum (also considered by Cesc Esteve his greatest work) possessing the most practical interest, especially regarding Maximilian's life, and the Austria giving a complete history of the nation up to 1519.
He also had a notable influence on the development of the musical tradition in Austria and Germany. Several historians credit Maximilian with playing the decisive role in making Vienna the music capital of Europe. Under his reign, Habsburg musical culture reached its first high point and he had at his service the best musicians in Europe. He began the Habsburg tradition of supporting large-scale choirs, which he staffed with the brilliant musicians of his day such as Paul Hofhaimer, Heinrich Isaac and Ludwig Senfl. His children inherited their parents' passion for music and, even in their father's lifetime, supported excellent chapels in Brussels and Malines, with masters such as Alexander Agricola and Marbriano de Orto (who worked for Philip), and Pierre de La Rue and Josquin Desprez (who worked for Margaret). After witnessing the brilliant Burgundian court culture, Maximilian looked to the Burgundian court chapel in creating his own imperial chapel. As he was always on the move, he brought the chapel, as well as his whole peripatetic court, with him. In 1498 though, he established the imperial chapel in Vienna under the direction of George Slatkonia, who would later become Bishop of Vienna. Music benefitted greatly from the cross-fertilization between several centres in Burgundy, Italy, Austria and Tyrol (where Maximilian inherited the chapel of his uncle Sigismund).
In the service of Maximilian, Isaac (the first Continental composer to provide music on demand for a monarch-employer) cultivated "the mass-proper genre with an intensity unrivalled anywhere else in Europe". He created a huge cycle of polyphonic Mass Propers, most of which was published posthumously in the collection Choralis Constantinus, printed between 1550 and 1555 – David J. Rothenberg comments that, like many of the other artistic projects commissioned by Maximilian (and instilled with his bold artistic vision and imperial ideology), it was never completed. A notable artistic monument, seemingly of great symbolic value to the emperor, was Isaac's motet Virgo prudentissima, which affiliated the reigns of two sovereign monarchs – the Virgin Mary of Heaven and Maximilian of the Holy Roman Empire. The motet describes the Assumption of the Virgin, in which Mary, described as the most prudent Virgin (an allusion to the Parable of the Ten Virgins), "beautiful as the moon", "excellent as the sun" and "glowing brightly as the dawn", is crowned Queen of Heaven and united with Christ, her bridegroom and son, at the highest place in Heaven. Rothenberg opines that Dürer's Festival of the Rose Garlands (see below) was its "direct visual counterpart". The idea was also reflected in the scene of the Assumption in the Berlin Book of Hours of Mary of Burgundy and Maximilian (commissioned when Mary of Burgundy was still alive, with some images added posthumously).
Among some authors, Maximilian has a reputation as the "media emperor". The historian Larry Silver describes him as the first ruler who realized and exploited the propaganda potential of the printing press, both for images and for texts. The reproduction of the Triumphal Arch (mentioned above) in printed form is an example of art in the service of propaganda, made available to the public by the economical method of printing (Maximilian did not have the money to actually construct it). At least 700 copies were created in the first edition and hung in ducal palaces and town halls throughout the Reich.
Historian Joachim Whaley comments that: "By comparison with the extraordinary range of activities documented by Silver, and the persistence and intensity with which they were pursued, even Louis XIV appears a rather relaxed amateur." Whaley notes, though, that Maximilian had an immediate stimulus for his "campaign of self-aggrandizement through public relation": the series of conflicts that involved Maximilian forced him to seek means to secure his position. Whaley further suggests that, despite the later religious divide, "patriotic motifs developed during Maximilian's reign, both by Maximilian himself and by the humanist writers who responded to him, formed the core of a national political culture."
Historian Manfred Hollegger notes though that the emperor's contemporaries certainly did not see Maximilian as a "media emperor": "He achieved little political impact with pamphlets, leaflets and printed speeches. Nevertheless, it is certainly true that he combined brilliantly all the media available at that time for his major literary and artistic projects". Tupu Ylä-Anttila notes that while his daughter (to whom Maximilian entrusted a lot of his diplomacy) often maintained a sober tone and kept a competent staff of advisors who helped her with her letters, her father did not demonstrate such an effort and occasionally sent emotional and erratic letters (the letters of Maximilian and Margaret were often presented to foreign diplomats to prove their trust in each other). Maria Golubeva opines that with Maximilian, one should use the term "propaganda" in the sense suggested by Karl Vocelka: "opinion-making". Also, according to Golubeva, unlike the narrative usually presented by Austrian historians including Wiesflecker, Maximilian's "propaganda", that was associated with 'militarism', universal imperial claims and court historiography, with a tendency towards world domination, was not the simple result of his Burgundian experience – his 'model of political competition' (as shown in his semi-autobiographical works), while equally secular, ignored the negotiable and institutional aspects inherent in the Burgundian model and, at the same time, emphasized top-down decision making and military force.
During Maximilian's reign, with encouragement from the emperor and his humanists, iconic spiritual figures were reintroduced or became notable. The humanists rediscovered the work Germania, written by Tacitus. According to Peter H. Wilson, the female figure of Germania was reinvented by the emperor as the virtuous pacific Mother of the Holy Roman Empire of the German Nation. Inheriting the work of the Klosterneuburg canons and of his father Frederick III, he promoted Leopold III, Margrave of Austria (who had family ties to the emperor), who was canonized in 1485 and became the Patron of Austria in 1506. To maximize the effect in consolidating his rule, the emperor delayed the translation of Leopold's bones for years until he could be present in person.
He promoted the association between his own wife Mary of Burgundy and the Virgin Mary, which had already been started in her lifetime by members of the Burgundian court before his arrival. These activities included the patronage (by Maximilian, Philip the Fair and Charles V) of the devotion of the Seven Sorrows, as well as the commissioning (by Maximilian and his close associates) of various artworks dedicated to the topic, such as the famous paintings Feast of the Rosary (1506) and Death of the Virgin (1518, one year before the emperor's death) by Albrecht Dürer, the famous diptych of Maximilian's extended family (after 1515) by Strigel, and the manuscript VatS 160 by the composer Pierre Alamire.
Maximilian's reign witnessed the gradual emergence of the German common language. His chancery played a notable role in developing new linguistic standards. Martin Luther credited Maximilian and the Wettin Elector Frederick the Wise with the unification of the German language. Tennant and Johnson opine that while other chanceries have been considered significant and then receded in importance as research directions changed, the chanceries of these two rulers have always been considered important from the beginning. As part of his influential literary and propaganda projects, Maximilian had his autobiographical works embellished, reworked and sometimes ghostwritten in the chancery itself. He is also credited with a major reform of the imperial chancery office: "Maximilian is said to have caused a standardization and streamlining in the language of his Chancery, which set the pace for chanceries and printers throughout the Empire." The form of written German he introduced into his chancery was called Maximilian's Chancery Language (Maximilianische Kanzleisprache) and is considered a form of Early New High German. It replaced older forms of written language that were close to Middle High German. This new form was used by the imperial chanceries until the end of the 17th century and is therefore also referred to as the Imperial speech.
Architecture
Always short of money, Maximilian could not afford large-scale building projects. However, he left a few notable constructions, the most remarkable of which is the cenotaph (designed by Maximilian himself) that he began in the Hofkirche, Innsbruck. It was completed long after his death, has been praised as the most important monument of Renaissance Austria, and is considered the "culmination of the Burgundian tomb tradition" (especially for its groups of statues of family members), displaying Late Gothic features combined with Renaissance elements such as reliefs and busts of Roman emperors. The monument was vastly expanded under his grandson Ferdinand I, who added the tumba and the portal and, on the advice of his Vice Chancellor Georg Sigmund Seld, commissioned the 24 marble reliefs based on the images on the Triumphal Arch. The work was only finished under Archduke Ferdinand II (1529-1595). The reliefs were carved by the Flemish sculptor Alexander Colyn, while the statues were cast by the bronze-founder Stefan Godl following the designs of Gilg Sesshelschreiber and Jörg Kölderer. The bronze busts of Roman emperors were created by Jörg Muskat.
After taking Tyrol, in order to symbolize his new wealth and power, he built the Golden Roof, the roof of a balcony overlooking the town center of Innsbruck, from which to watch the festivities celebrating his assumption of rule over Tyrol. The roof is made of gold-plated copper tiles. The structure was a symbol of the presence of the ruler, even when he was physically absent, and began the vogue of using reliefs to decorate oriel windows. The Golden Roof is also considered one of the most notable Habsburg monuments. Like Maximilian's cenotaph, it is in an essentially Gothic idiom. The structure was built by Niclas Türing (Nikolaus Turing), while the paintings were done by Jörg Kölderer.
The Innsbruck Hofburg was redesigned and expanded, chiefly under Niclas Türing. By the time Maximilian died in 1519, the palace was one of the most beautiful and renowned secular structures of the era (but would be rebuilt later in the Baroque style by Maria Theresa).
The famous sculpture Schutzmantelmadonna (Virgin of Mercy), donated in 1510 by Maximilian to the Frauenstein Pilgrimage Church in Molln, was a work by Gregor Erhart.
From 1498 onwards, Maximilian had many castles and palaces in Vienna, Graz, Wiener Neustadt, Innsbruck and Linz renovated and modernized. Not only were the facades redesigned and glazed bricks integrated; Maximilian also paid special attention to sanitation, issuing precise instructions concerning the "secret chamber", the deflection of waste into a cesspit through pipes and the purification of smells through the use of "herbal essences". In many towns, he had streets and alleys cobbled and gutters added for rain water. He issued regulations that ordered open drains for waste water to be bricked up and forbade the keeping of animals in the towns. It was also ordained that no rubbish was allowed in the streets overnight. Directions related to fire prevention were also issued, leading to fire walls being constructed between houses and tiled roofs in many towns. In the hereditary lands and Southern Germany, wooden towns were transformed into stone ones with his financial support.
Modern postal system and printing
Together with Franz von Taxis, Maximilian developed the first modern postal service in the world in 1490. The system was originally built to improve communication between his scattered territories, connecting Burgundy, Austria, Spain and France, and later developed into a Europe-wide, fee-based system. Fixed postal routes (the first in Europe) were developed, together with regular and reliable service. From the beginning of the sixteenth century, the system became open to private mail. The initiative was immediately emulated by France and England, although rulers there restricted the spread of private mail and private postal networks. Systematic improvement allowed communication to reach Maximilian, wherever he was, twice as fast as normal, to the point that Wolfgang Behringer remarks that the "perception of temporal and spatial dimensions was changed". This new development, usually described as the communication revolution, could largely be traced back to Maximilian's initiative, with contributions from Frederick III and Charles the Bold in developing the messenger networks, the Italian courier model, and possibly influence from the French model.
The establishment of the postal network also signaled the beginning of a commercial market for news, together with the emergence of commercial newsagents and news agencies, which the emperor actively encouraged. According to Michael Kunczik, he was the first to utilize one-sided battle reports targeting the masses, including the use of early predecessors of modern newspapers (neue zeitungen).
The capital resources he poured into the postal system, as well as his support for the related printing press (as Archduke, he opened a school for sophisticated engraving techniques), were on a level unprecedented among European monarchs, and earned him stern rebukes from his father.
His patronage drew printmakers from the Netherlands (especially from Antwerp) to Augsburg, such as Jost de Negker and the brothers Cornelis I (died 1528) and Willem Liefrinck, who came there as teenagers. After his death, as the teams dispersed, Negker remained in Augsburg while the Liefrincks returned to their homeland, where they established a printmaking dynasty and introduced German-style workshops. According to Printing Colour 1400-1700: History, Techniques, Functions and Receptions, there was a dramatic drop in both the quantity and quality of print projects in Augsburg as a whole once Charles V took over.
The development of the printing press led to a search for a national font. In 1508 or 1510, Maximilian (possibly with Dürer's advice) commissioned the calligrapher Leonhard Wagner to create a new font. Wagner dedicated his calligraphic work Proba centum scripturatum (containing one hundred fonts) to Maximilian, who chose the Schwabacher-based font Fraktur, deemed the most beautiful. While Maximilian originally envisioned this font for Latin works, it became the predominant font for German writings, whereas German printers would use Antiqua for works written in foreign languages. The font spread to German-influenced countries and remained popular in Germany until it was banned in 1941 by the Nazi government as a "Jewish" font. Burgkmair was the chief designer for most of Maximilian's printing projects. Augsburg was the great center of the printing industry, where the emperor patronized printing and other types of craft through the agency of Conrad Peutinger, giving impetus to the formation of an "imperial" style. Burgkmair and Erhard Ratdolt created new printing techniques. As for his own works, because he wanted to produce the appearance of luxury manuscripts, Maximilian mixed handcrafted elements with printing: his Book of Prayers and Theuerdank (Weisskunig and Freydal were unfinished at the emperor's death) were printed with a type that resembled calligraphy (the Imperial Fraktur created by Johannes Schönperger). For prestigious recipients, he used parchment rather than paper. At least one copy of the Book of Hours was decorated by hand by Burgkmair, Dürer, Hans Baldung, Jörg Breu and Cranach.
Political legacy
Maximilian had appointed his daughter Margaret as the Regent of the Netherlands, and she fulfilled this task well. Tupu Ylä-Anttila opines that Margaret acted as de facto queen consort in a political sense, first to her father and then Charles V, "absent rulers" who needed a representative dynastic presence that also complemented their characteristics. Her queenly virtues helped her to play the role of diplomat and peace-maker, as well as guardian and educator of future rulers, whom Maximilian called "our children" or "our common children" in letters to Margaret. This was a model that developed as part of the solution for the emerging Habsburg composite monarchy and would continue to serve later generations.
Through wars and marriages he extended the Habsburg influence in every direction: to the Netherlands, Spain, Bohemia, Hungary, Poland, and Italy. This influence lasted for centuries and shaped much of European history. The Habsburg Empire survived as the Austro-Hungarian Empire until it was dissolved on 3 November 1918, almost 400 years after Maximilian's death.
Geoffrey Parker summarizes Maximilian's political legacy as follows:
By the time Charles received his presentation copy of Der Weisskunig in 1517, Maximilian could point to four major successes. He had protected and reorganized the Burgundian Netherlands, whose political future had seemed bleak when he became their ruler forty years earlier. Likewise, he had overcome the obstacles posed by individual institutions, traditions and languages to forge the sub-Alpine lands he inherited from his father into a single state: ‘Austria’, ruled and taxed by a single administration that he created at Innsbruck. He had also reformed the chaotic central government of the Holy Roman Empire in ways that, though imperfect, would last almost until its demise three centuries later. Finally, by arranging strategic marriages for his grandchildren, he had established the House of Habsburg as the premier dynasty in central and eastern Europe, creating a polity that his successors would expand over the next four centuries.
The Encyclopædia Britannica comments on Maximilian's achievements:
Maximilian I [...] made his family, the Habsburgs, dominant in 16th-century Europe. He added vast lands to the traditional Austrian holdings, securing the Netherlands by his own marriage, Hungary and Bohemia by treaty and military pressure, and Spain and the Spanish empire by the marriage of his son Philip [...] Great as Maximilian's achievements were, they did not match his ambitions; he had hoped to unite all of western Europe by reviving the empire of Charlemagne [...] His military talents were considerable and led him to use war to attain his ends. He carried out meaningful administrative reforms, and his military innovations would transform Europe's battlefields for more than a century, but he was ignorant of economics and was financially unreliable.
Holleger notes that, as Maximilian could not persuade his imperial estates to support his plans, he cultivated a system of alliances, in which the germ of modern European powers could be seen – like in the game of chess, no piece could be moved without thinking ahead about the others.
Hugh Trevor-Roper opines that, although Maximilian's politics and wars accomplished little, "By harnessing the arts, he surrounded his dynasty with a lustrous aura it had previously lacked. It was to this illusion that his successors looked for their inspiration. To them, he was not simply the second founder of the dynasty; he was the creator of its legend - one that transcended politics, nationality, even religion." Paula Sutter Fichtner opines that Maximilian was the author of a "basic but imperfect script for the organization of a Habsburg government now charged with administering a territorial complex that extended far beyond the dynasty's medieval patrimony in central Europe." He used his revenues profligately for wars. Although aware of the dangers of over-extended credit, in order to protect his borders and imperial prerogatives and to advance Habsburg interests, all of which he took seriously, he could not internalize fiscal discipline. The role of the emperor in the government was highly personalized – only when Maximilian's health failed him badly in 1518 did he set up a Hofrat, including 18 jurists and nobles from the Empire and the Austrian lands, to assist him with the responsibilities he could no longer handle.
Maximilian's life is still commemorated in Central Europe centuries later. The Order of St. George, which he sponsored, still exists. In 2011, for example, a monument was erected for him in Cortina d’Ampezzo. In 1981, a statue of Maximilian, which had stood on the Piazza Libertà in Cormons until the First World War, was put up there again. On the occasion of the 500th anniversary of his death, there were numerous commemorative events in 2019, at which Karl von Habsburg, the current head of the House of Habsburg, represented the imperial dynasty (Josef Achleitner, "Kaiser Maximilian I. – Herrscher über den eigenen Ruhm", OÖN, 12 January 2019). A barracks in Wiener Neustadt, the Maximilian-Kaserne (formerly Artilleriekaserne), a military base for the Jagdkommando of the Austrian Armed Forces, was named after Maximilian.
Amsterdam still retains close ties with the emperor. His 1484 pilgrimage to Amsterdam boosted the popularity of the Heilige Stede and the city's "miracle industry" to new heights. The city supported him financially in his military expeditions; in return, he granted its citizens the right to use the image of his crown, which remains a symbol of the city as part of its coat-of-arms. The practice survived the later revolt against Habsburg Spain. The central canal in Amsterdam was named the Keizersgracht (Emperor's Canal) after Maximilian in 1615. The city beer of Bruges (Brugse Zot, or The Fools of Bruges) is also associated with the emperor: Bruges suffered a four-century-long decline partly inflicted by Maximilian's orders requiring foreign merchants to transfer their operations to Antwerp (he later withdrew the orders, but it proved too late), and according to legend, at a conciliatory celebration Maximilian told the city that it did not need to build an asylum, as the city was full of fools. The swans of the city are considered a perpetual remembrance (allegedly ordered by Maximilian) of Lanchals (whose name meant "long neck" and whose emblem was a swan), the loyalist minister who was beheaded while Maximilian was forced to watch. In Mechelen, the Burgundian capital under Margaret of Austria, an ommegang commemorating Maximilian's arrival as well as other major events is organized every 25 years.
Ancestry
Official style
We, Maximilian, by the Grace of God, elected Roman Emperor, always Augmenter of the Empire, King of Hungary, Dalmatia, Croatia, etc.; Archduke of Austria; Duke of Burgundy, Britany, Lorrain, Brabant, Stiria, Carinthia, Carniola, Limbourg, Luxembourg, and Guldres; Count of Flanders, Habsburg, Tyrol, Pfiert, Kybourg, Artois, and Burgundy; Count Palatine of Haynault, Holland, Zeland, Namur, and Zutphen; Marquess of the Roman Empire and of Burgau; Landgrave of Alsatia; Lord of Friesland, the Wendish Mark, Portenau, Salins, and Malines, etc. etc.
Chivalric orders
On 30 April 1478, Maximilian was knighted by Adolf of Cleves (1425-1492), a senior member of the Order of the Golden Fleece, and on the same day he became sovereign of this exalted order. As its head, he did everything in his power to restore its glory and to associate the order with the Habsburg lineage. He expelled the members who had defected to France, rewarded those loyal to him, and also invited foreign rulers to join its ranks.
Maximilian I was a member of the Order of the Garter, nominated by King Henry VII of England in 1489. His Garter stall plate survives in St George's Chapel, Windsor Castle.
Maximilian was patron of the Order of Saint George founded by his father, and also the founder of its secular confraternity.
Appearance and personality
Maximilian was strongly built with an upright posture (he was over six feet tall), had blue eyes, neck-length blond or reddish hair, a large hooked nose and a jutting jaw (like his father, he always shaved his beard, as the jutting jaw was considered a noble feature). Although not conventionally handsome, he was well-proportioned and considered physically attractive, with an abiding youthfulness and an affable, pleasing manner. A womanizer since his teenage years, he increasingly sought distraction from the tragedy of his first marriage and the frustration of his second in the company of "sleeping women" in all corners of his empire. Sigrid-Maria Grössing describes him as a charming heartbreaker all his life.
Maximilian was a late developer. According to his teacher Johannes Cuspinian, he did not speak until he was nine years old, and even after that developed only slowly. Frederick III recalled that when his son was twelve, he still thought the boy was either mute or stupid. In adulthood, Maximilian spoke six languages (he learned French from his wife Mary) and was a genuinely talented author. Other than languages, mathematics and religion, he painted and played various instruments and was also trained in farming, carpentry and blacksmithing, although the focus of his education was naturally kingship. According to Fichtner, he did not learn much from formal training though, because even as a boy he never sat still and tutors could not do much about that. Gerhard Benecke opines that by nature he was a man of action, a "vigorously charming extrovert" who had a "conventionally superficial interest in knowledge, science and art combined with excellent health in his youth" (he remained virile into his late thirties and only stopped jousting after an accident damaged a leg). He was brave to the point of recklessness, and this did not only show in battles. He once entered a lion's enclosure in Munich alone to tease the lion, and at another point climbed to the top of the Cathedral of Ulm, stood on one foot and turned himself around to gain a full view, to the trepidation of his attendants. In the nineteenth century, an Austrian officer lost his life trying to repeat the emperor's "feat", while another succeeded.
Historian Ernst Bock, with whom Benecke shares the same sentiment, writes the following about him:
His rosy optimism and utilitarianism, his totally naive amorality in matters political, both unscrupulous and machiavellian; his sensuous and earthy naturalness, his exceptional receptiveness towards anything beautiful especially in the visual arts, but also towards the various fashions of his time whether the nationalism in politics, the humanism in literature and philosophy or in matters of economics and capitalism; further his surprising yearning for personal fame combined with a striving for popularity, above all the clear consciousness of a developed individuality: these properties Maximilian displayed again and again.
Historian Paula Fichtner describes Maximilian as a leader who was ambitious and imaginative to a fault, with self-publicizing tendencies as well as territorial and administrative ambitions that betrayed a nature both "soaring and recognizably modern", while dismissing Benecke's presentation of Maximilian as "an insensitive agent of exploitation" as influenced by the author's personal political leaning.
Berenger and Simpson consider Maximilian a greedy Renaissance prince, and also "a prodigious man of action whose chief fault was to have 'too many irons in the fire'". On the other hand, Steven Beller criticizes him for being too much of a medieval knight, with a hectic schedule of warring, always crisscrossing the whole continent to do battle (for example, in August 1513, he commanded Henry VIII's English army at the second Battle of Guinegate, and a few weeks later joined the Spanish forces in defeating the Venetians) with few resources to support his ambitions. According to Beller, Maximilian should have spent more time at home persuading the estates to adopt a more efficient governmental and fiscal system.
Thomas A. Brady Jr. praises the emperor's sense of honour but criticizes his financial immorality – according to Geoffrey Parker, both points, together with Maximilian's martial qualities and hard-working nature, would be inherited by Charles V from his grandfather:
[...]though punctilious to a fault about his honor, he lacked all morals about money. Every florin was spent, mortgaged, and promised ten times over before it ever came in; he set his courtiers a model for their infamous venality; he sometimes had to leave his queen behind as pledge for his debts; and he borrowed continuously from his servitors—large sums from top officials, tiny ones from servants — and never repaid them. Those who liked him tried to make excuses.
Certain English sources describe him as a ruler who habitually failed to keep his word. According to Wiesflecker, however, people could often depend on his promises more than on those of most princes of his day, although he was no stranger to the "clausola francese" and also tended to use a wide variety of statements to cover his true intentions, which unjustly earned him a reputation for fickleness. Holleger concurs that Maximilian's court officials, except Eitelfriedrich von Zollern and Wolfgang von Fürstenberg, did expect gifts and money for tips and help, and that the emperor usually defended his counselors and servants, even if he acted against the more blatant displays of material greed. Maximilian, though, was not a man who could be controlled or influenced easily by his officials. Holleger also opines that while many of his political and artistic schemes leaned towards megalomania, underneath there was a sober realist who believed in progress and relied on modern modes of management. Personally, "frequently described as humane, gentle, and friendly, he reacted with anger, violence, and vengefulness when he felt his rights had been injured or his honor threatened, both of which he valued greatly." The price for his warlike ruling style and his ambition for a globalized monarchy (which ultimately achieved considerable successes) was a continuous succession of wars, which earned him the sobriquet "Heart of steel" (Coeur d'acier).
Marriages and offspring
Maximilian was married three times, but only the first marriage produced offspring:
Maximilian's first wife was Mary of Burgundy (1457–1482). They were married in Ghent on 19 August 1477, and the marriage ended with Mary's death in a riding accident in 1482. Mary was the love of his life: even in old age, the mere mention of her name moved him to tears (although his sexual life, contrary to his chivalric ideals, was unchaste). The grand literary projects commissioned and composed in large part by Maximilian many years after her death were in part tributes to their love, especially Theuerdank, in which the hero saves the damsel in distress just as Maximilian had saved her inheritance in real life. His heart is buried inside her sarcophagus in Bruges, in accordance with his wish.
Beyond her beauty, the inheritance and the glory she brought, Mary corresponded to Maximilian's ideal of a woman: the spirited grand "Dame" who could stand next to him as a fellow sovereign. To their daughter Margaret, he described Mary thus: from her eyes shone a power (Kraft) that surpassed that of any other woman.
The marriage produced three children:
Philip I of Castile (1478–1506) who inherited his mother's domains following her death, but predeceased his father. He married Joanna of Castile, becoming king-consort of Castile upon her accession in 1504, and was the father of the Holy Roman Emperors Charles V and Ferdinand I.
Margaret of Austria (1480–1530), who was first engaged at the age of 2 to the French dauphin (who became Charles VIII of France a year later) to confirm peace between France and Burgundy. She was sent back to her father in 1492 after Charles repudiated their betrothal to marry Anne of Brittany. She was then married to the crown prince of Castile and Aragon John, Prince of Asturias, and after his death to Philibert II of Savoy, after which she undertook the guardianship of her deceased brother Philip's children, and governed Burgundy for the heir, Charles.
Francis of Austria, who died shortly after his birth in 1481.
Maximilian's second wife was Anne of Brittany (1477–1514) – they were married by proxy in Rennes on 18 December 1490, but the contract was dissolved by the pope in early 1492, by which time Anne had already been forced by the French king, Charles VIII (the fiancé of Maximilian's daughter Margaret of Austria) to repudiate the contract and marry him instead.
Maximilian's third wife was Bianca Maria Sforza (1472–1510) – they were married in 1493, the marriage bringing Maximilian a rich dowry and allowing him to assert his rights as imperial overlord of Milan. The marriage was unhappy, and they had no children.
In Maximilian's view, while Bianca might surpass his first wife Mary in physical beauty, she was just a "child" with "a mediocre mind", who could neither make decisions nor be presented as a respectable lady to society. Benecke opines that this seems unfair: while Bianca was always concerned with trivial, private matters (recent research, though, indicates that she was an educated woman who was politically active), she was never given the chance to develop politically, unlike the other women in Maximilian's family such as Margaret of Austria or Catherine of Saxony. Despite her unsuitability as an empress, Maximilian tends to be criticized for treating her with coldness and neglect, which only worsened after 1500. Bianca, on the other hand, loved the emperor deeply and always tried to win his heart with heartfelt letters, expensive jewels and allusions to sickness, but never received even a letter in reply; she developed eating disorders and mental illness, and died childless.
In addition, he had several illegitimate children, but their number and identities are a matter of great debate. Johann Jakob Fugger writes in the Ehrenspiegel (Mirror of Honour) that the emperor began fathering illegitimate children after becoming a widower, and that there were eight children in total, four boys and four girls.
By a widow in Den Bosch, whom Maximilian met in a campaign:
Barbara Disquis (1482 – 1568): Her birth and Mary of Burgundy's death seemed connected to Maximilian's near-fatal illness in 1482 and his later pilgrimage to Amsterdam in 1484. In her teenage years, she rebelled against her father and entered St Gertrude's convent. Philip the Fair was introduced to her in 1504, when he and Maximilian went to Den Bosch to prepare for the war against Guelders. On their father's behalf, Philip tried to persuade her to leave the convent and marry, but she refused. The place where she was buried is now a square named after her. In 2021, a newly built city walk close to Maximilian's statue was named "Mijn lieven dochter" ("My dear daughter").
By unknown mistress:
Martha von Helfenstein, also Margaretha, Mathilde or Margareta, née von Edelsheim (?–1537), wife of Johann von Hille (died 1515), remarried Ludwig Helferich von Helfenstein (1493–1525, married in 1517 or 1520); Ludwig was killed by peasants on 16 April 1525 in the Massacre of Weinsberg during the German Peasants' War. They had a surviving son named Maximilian (1523–1555). Some sources report that she was born in 1480 or that her mother was Margareta von Edelsheim, née Rappach.
By Margareta von Edelsheim, née Rappach (?–1522). Dingel reports that she was born around 1470, while others report that in 1494 she was still a minor when she married von Rottal:
Barbara von Rottal (1500–1550), wife of Siegmund von Dietrichstein. Some report that she was the daughter of Margareta von Edelsheim, née Rappach, while Benecke lists the mother as unidentified.
George of Austria (1505–1557), Prince-Bishop of Liège.
By Anna von Helfenstein:
Cornelius (1507–).
Maximilian Friedrich von Amberg (1511–1553), Lord of Feldkirch.
Leopold, bishop of Córdoba, Spain (1541–1557), with illegitimate succession.
Dorothea (1516–1572), heiress of Falkenburg, Durbuy and Halem, lady in waiting to Queen Maria of Hungary; wife of Johan I of East Frisia.
Anna Margareta (1517–1545), lady in waiting to Queen Maria of Hungary; wife of François de Melun ( –1547), 2nd count of Epinoy.
Anne (1519–?). She married Louis d'Hirlemont.
Elisabeth (d. 1581–1584), wife of Ludwig III von der Marck, Count of Rochefort.
Barbara, wife of Wolfgang Plaiss.
Christoph Ferdinand.
By unknown mistress (parentage uncertain):
Guielma, wife of Rudiger (Rieger) von Westernach.
Triumphal woodcuts
A set of woodcuts called the Triumph of Emperor Maximilian I.
Notes
See also
Family tree of the German monarchs. He was related to every other king of Germany.
First Congress of Vienna - The First Congress of Vienna was held in 1515, attended by the Holy Roman Emperor, Maximilian I, and the Jagiellonian brothers, Vladislaus II, King of Hungary and King of Bohemia, and Sigismund I, King of Poland and Grand Duke of Lithuania
Landsknecht - The German Landsknechts, sometimes also rendered as Landsknechte, were colorful mercenary soldiers with a formidable reputation, who became an important military force throughout late 15th- and 16th-century Europe
References
Works
Bibliography
Further reading
External links
Austria Family Tree
AEIOU Encyclopedia | Maximilian I
1459 births
1519 deaths
15th-century Holy Roman Emperors
16th-century Holy Roman Emperors
15th-century archdukes of Austria
16th-century archdukes of Austria
House of Valois
Rulers of the Habsburg Netherlands
Dukes of Styria
Dukes of Carinthia
Counts of Tyrol
Knights of the Garter
Knights of the Golden Fleece
Maximilian
Pretenders to the Hungarian throne
People from Wiener Neustadt
Grand Masters of the Order of the Golden Fleece
Jure uxoris officeholders
Burials in Wiener Neustadt, Austria
15th-century peers of France
Sons of kings
Dukes of Carniola |
39184 | https://en.wikipedia.org/wiki/NTFS | NTFS | New Technology File System (NTFS) is a proprietary journaling file system developed by Microsoft. Starting with Windows NT 3.1, it is the default file system of the Windows NT family. It superseded File Allocation Table (FAT) as the preferred filesystem on Windows and is supported in Linux and BSD as well. NTFS reading and writing support is provided using a free and open-source kernel implementation known as NTFS3 in Linux and the NTFS-3G driver in BSD. Windows can convert FAT32/16/12 into NTFS without the need to rewrite all files. NTFS uses several files typically hidden from the user to store metadata about other files stored on the drive which can help improve speed and performance when reading data. Unlike FAT and High Performance File System (HPFS), NTFS supports access control lists (ACLs), filesystem encryption, transparent compression, sparse files and file system journaling. NTFS also supports shadow copy to allow backups of a system while it is running, but the functionality of the shadow copies varies between different versions of Windows.
History
In the mid-1980s, Microsoft and IBM formed a joint project to create the next generation of graphical operating system; the result was OS/2 and HPFS. Because Microsoft disagreed with IBM on many important issues, they eventually separated; OS/2 remained an IBM project and Microsoft worked to develop Windows NT and NTFS.
The HPFS file system for OS/2 contained several important new features. When Microsoft created their new operating system, they "borrowed" many of these concepts for NTFS. The original NTFS developers were Tom Miller, Gary Kimura, Brian Andrew, and David Goebel.
Probably as a result of this common ancestry, HPFS and NTFS use the same disk partition identification type code (07). Using the same Partition ID Record Number is highly unusual, since there were dozens of unused code numbers available, and other major file systems have their own codes. For example, FAT has more than nine (one each for FAT12, FAT16, FAT32, etc.). Algorithms identifying the file system in a partition type 07 must perform additional checks to distinguish between HPFS and NTFS.
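One such check, as a minimal illustration: the partition boot sector of an NTFS volume carries the OEM identifier "NTFS    " at byte offset 3, so reading a few bytes is enough to tell NTFS apart from other contents of a type-07 partition. A Python sketch, assuming a raw image of the partition is available as a file (the file name is hypothetical):

def identify_type_07_partition(image_path, partition_offset=0):
    """Guess whether a type-07 partition holds NTFS by inspecting its boot sector."""
    with open(image_path, "rb") as f:
        f.seek(partition_offset)
        boot = f.read(512)
    # Bytes 3..10 of an NTFS partition boot sector contain the OEM ID "NTFS    ".
    if boot[3:11] == b"NTFS    ":
        return "NTFS"
    # Anything else under type 07 needs further checks (for example the HPFS
    # superblock signature) before it can be confirmed as HPFS.
    return "not NTFS (possibly HPFS)"

# Example: identify_type_07_partition("disk.img", partition_offset=1048576)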
Versions
Microsoft has released five versions of NTFS.
The version number (e.g. v5.0 in Windows 2000) is based on the operating system version; it should not be confused with the NTFS version number (v3.1 since Windows XP).
Although subsequent versions of Windows added new file system-related features, they did not change NTFS itself. For example, Windows Vista implemented NTFS symbolic links, Transactional NTFS, partition shrinking, and self-healing. NTFS symbolic links are a new feature in the file system; all the others are new operating system features that make use of NTFS features already in place.
Scalability
NTFS is optimized for 4 KB clusters, but supports a maximum cluster size of 2 MB (earlier implementations support up to 64 KB). The maximum NTFS volume size that the specification can support is 2^64 − 1 clusters, but not all implementations achieve this theoretical maximum, as discussed below.
The maximum NTFS volume size implemented in Windows XP Professional is 2^32 − 1 clusters, partly due to partition table limitations. For example, using 64KB clusters, the maximum size Windows XP NTFS volume is 256TB minus 64KB. Using the default cluster size of 4KB, the maximum NTFS volume size is 16TB minus 4KB. Both of these are vastly higher than the 128GB limit in Windows XP SP1. Because partition tables on master boot record (MBR) disks support only partition sizes up to 2TB, multiple GUID Partition Table (GPT or "dynamic") volumes must be combined to create a single NTFS volume larger than 2TB. Booting from a GPT volume to a Windows environment in a Microsoft supported way requires a system with Unified Extensible Firmware Interface (UEFI) and 64-bit support.
The NTFS maximum theoretical limit on the size of individual files is 16EB (2^64 bytes) minus 1KB, which totals 18,446,744,073,709,550,592 bytes. With Windows 10 version 1709 and Windows Server 2019, the maximum implemented file size is 8PB minus 2MB, or 9,007,199,252,643,840 bytes.
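These limits follow directly from the cluster arithmetic; a short Python sketch that reproduces the figures quoted above:

KB, TB = 2**10, 2**40
max_clusters = 2**32 - 1                      # Windows XP implementation limit

for cluster_size in (4 * KB, 64 * KB):        # default and largest classic cluster sizes
    max_volume = max_clusters * cluster_size
    print(cluster_size // KB, "KB clusters ->",
          (max_clusters + 1) * cluster_size // TB, "TB minus",
          cluster_size // KB, "KB =", max_volume, "bytes")

# Theoretical maximum file size: 16 EB (2**64 bytes) minus 1 KB.
print(2**64 - KB)                             # 18446744073709550592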
Interoperability
Windows
While the different NTFS versions are for the most part fully forward- and backward-compatible, there are technical considerations for mounting newer NTFS volumes in older versions of Microsoft Windows. This affects dual-booting, and external portable hard drives. For example, attempting to use an NTFS partition with "Previous Versions" (Volume Shadow Copy) on an operating system that does not support it will result in the contents of those previous versions being lost. A Windows command-line utility called convert.exe can convert supporting file systems to NTFS, including HPFS (only on Windows NT 3.1, 3.5, and 3.51), FAT16 and FAT32 (on Windows 2000 and later).
FreeBSD
FreeBSD 3.2 released in May 1999 included read-only NTFS support written by Semen Ustimenko. This implementation was ported to NetBSD by Christos Zoulas and Jaromir Dolecek and released with NetBSD 1.5 in December 2000. The FreeBSD implementation of NTFS was also ported to OpenBSD by Julien Bordet and offers native read-only NTFS support by default on i386 and amd64 platforms as of version 4.9 released 1 May 2011.
Linux
Linux kernel versions 2.1.74 and later include a driver written by Martin von Löwis which has the ability to read NTFS partitions; kernel versions 2.5.11 and later contain a new driver written by Anton Altaparmakov (University of Cambridge) and Richard Russon which supports file read. The ability to write to files was introduced with kernel version 2.6.15 in 2006 which allows users to write to existing files but does not allow the creation of new ones. Paragon's NTFS driver (see below) has been merged into kernel version 5.15, and it supports read/write on normal, compressed and sparse files, as well as journal replaying.
NTFS-3G is a free GPL-licensed FUSE implementation of NTFS that was initially developed as a Linux kernel driver by Szabolcs Szakacsits. It was re-written as a FUSE program to work on other systems that FUSE supports like macOS, FreeBSD, NetBSD, OpenBSD, Solaris, QNX, and Haiku and allows reading and writing to NTFS partitions. A performance enhanced commercial version of NTFS-3G, called "Tuxera NTFS for Mac", is also available from the NTFS-3G developers.
Captive NTFS, a 'wrapping' driver that uses Windows' own NTFS.SYS driver, exists for Linux. It was built as a Filesystem in Userspace (FUSE) program and released under the GPL, but work on Captive NTFS ceased in 2006.
Linux kernel versions 5.15 onwards carry NTFS3, a fully functional NTFS read-write driver which works on NTFS versions up to 3.1 and is maintained primarily by the Paragon Software Group.
Mac OS
Mac OS X 10.3 included Ustimenko's read-only implementation of NTFS from FreeBSD. Then in 2006 Apple hired Anton Altaparmakov to write a new NTFS implementation for Mac OS X 10.6. Native NTFS write support is included in 10.6 and later, but is not activated by default, although workarounds do exist to enable the functionality. However, user reports indicate the functionality is unstable and tends to cause kernel panics.
Paragon Software Group sells a read-write driver named NTFS for Mac OS X, which is also included on some models of Seagate hard drives.
OS/2
The NetDrive package for OS/2 (and derivatives such as eComStation and ArcaOS) supports a plugin which allows read and write access to NTFS volumes.
DOS
There is a free-for-personal-use read/write driver for MS-DOS by Avira called "NTFS4DOS".
Ahead Software developed a "NTFSREAD" driver (version 1.200) for DR-DOS 7.0x between 2002 and 2004. It was part of their Nero Burning ROM software.
Security
NTFS uses access control lists and user-level encryption to help secure user data.
Access control lists (ACLs)
In NTFS, each file or folder is assigned a security descriptor that defines its owner and contains two access control lists (ACLs). The first ACL, called discretionary access control list (DACL), defines exactly what type of interactions (e.g. reading, writing, executing or deleting) are allowed or forbidden by which user or groups of users. For example, files in the folder may be read and executed by all users but modified only by a user holding administrative privileges. Windows Vista adds mandatory access control info to DACLs. DACLs are the primary focus of User Account Control in Windows Vista and later.
The second ACL, called system access control list (SACL), defines which interactions with the file or folder are to be audited and whether they should be logged when the activity is successful, failed or both. For example, auditing can be enabled on sensitive files of a company, so that its managers get to know when someone tries to delete them or make a copy of them, and whether he or she succeeds.
Encryption
Encrypting File System (EFS) provides user-transparent encryption of any file or folder on an NTFS volume. EFS works in conjunction with the EFS service, Microsoft's CryptoAPI and the EFS File System Run-Time Library (FSRTL). EFS works by encrypting a file with a bulk symmetric key (also known as the File Encryption Key, or FEK), which is used because encrypting and decrypting large amounts of data with a symmetric cipher takes considerably less time than with an asymmetric cipher. The symmetric key that is used to encrypt the file is then encrypted with a public key that is associated with the user who encrypted the file, and this encrypted data is stored in an alternate data stream of the encrypted file. To decrypt the file, the file system uses the private key of the user to decrypt the symmetric key that is stored in the data stream. It then uses the symmetric key to decrypt the file. Because this is done at the file system level, it is transparent to the user. Also, in case of a user losing access to their key, support for additional decryption keys has been built into the EFS system, so that a recovery agent can still access the files if needed. NTFS-provided encryption and NTFS-provided compression are mutually exclusive; however, NTFS can be used for one and a third-party tool for the other.
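The general pattern (a per-file bulk key wrapped by the user's public key) can be illustrated with a short Python sketch using the third-party cryptography package; this is only an illustration of the scheme described above, not Microsoft's actual EFS implementation, and the key sizes and algorithms chosen here are assumptions:

# Illustration of the bulk-key-wrapped-by-public-key pattern -- NOT Microsoft's EFS.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# User key pair (in EFS this role is played by the user's certificate key pair).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

plaintext = b"file contents to protect"

# 1. Encrypt the data with a freshly generated bulk symmetric key (the "FEK").
fek = Fernet.generate_key()
ciphertext = Fernet(fek).encrypt(plaintext)

# 2. Wrap the FEK with the user's public key; EFS stores the wrapped key
#    alongside the file, in an alternate data stream.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_fek = public_key.encrypt(fek, oaep)

# 3. To read the file, unwrap the FEK with the private key, then decrypt.
recovered_fek = private_key.decrypt(wrapped_fek, oaep)
assert Fernet(recovered_fek).decrypt(ciphertext) == plaintext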
The support of EFS is not available in Basic, Home, and MediaCenter versions of Windows, and must be activated after installation of Professional, Ultimate, and Server versions of Windows or by using enterprise deployment tools within Windows domains.
Features
Journaling
NTFS is a journaling file system and uses the NTFS Log ($LogFile) to record metadata changes to the volume. It is a feature that FAT does not provide and critical for NTFS to ensure that its complex internal data structures will remain consistent in case of system crashes or data moves performed by the defragmentation API, and allow easy rollback of uncommitted changes to these critical data structures when the volume is remounted. Notably affected structures are the volume allocation bitmap, modifications to MFT records such as moves of some variable-length attributes stored in MFT records and attribute lists, and indices for directories and security descriptors.
The $LogFile format has evolved through several versions.
The incompatibility of the $LogFile versions implemented by Windows 8.1 and Windows 10 prevents Windows 8 (and earlier versions of Windows) from recognizing version 2.0 of the $LogFile. Backward compatibility is provided by downgrading the $LogFile to version 1.1 when an NTFS volume is cleanly dismounted. It is again upgraded to version 2.0 when mounting on a compatible version of Windows. However, when hibernating to disk in the logoff state (a.k.a. Hybrid Boot or Fast Boot, which is enabled by default), mounted file systems are not dismounted, and thus the $LogFiles of any active file systems are not downgraded to version 1.1. The inability to process version 2.0 of the $LogFile by versions of Windows older than 8.1 results in an unnecessary invocation of the CHKDSK disk repair utility. This is particularly a concern in a multi-boot scenario involving pre- and post-8.1 versions of Windows, or when frequently moving a storage device between older and newer versions. A Windows Registry setting exists to prevent the automatic upgrade of the $LogFile to the newer version. The problem can also be dealt with by disabling Hybrid Boot.
The USN Journal (Update Sequence Number Journal) is a system management feature that records (in $Extend\$UsnJrnl) changes to files, streams and directories on the volume, as well as their various attributes and security settings. The journal is made available for applications to track changes to the volume. This journal can be enabled or disabled on non-system volumes.
Hard links
The hard link feature allows different file names to directly refer to the same file contents. Hard links may link only to files in the same volume, because each volume has its own MFT.
Hard links were originally included to support the POSIX subsystem in Windows NT.
Although hard links use the same MFT record (inode), which records file metadata such as file size, modification date, and attributes, NTFS also caches this data in the directory entry as a performance enhancement. This means that when listing the contents of a directory using the FindFirstFile/FindNextFile family of APIs (equivalent to the POSIX opendir/readdir APIs), the caller also receives this cached information, in addition to the name and inode. However, this information may not be up to date, as it is only guaranteed to be updated when a file is closed, and then only for the directory from which the file was opened. This means that where a file has multiple names via hard links, updating the file via one name does not update the cached data associated with the other name. Up-to-date data can always be obtained with GetFileInformationByHandle (which is the true equivalent of the POSIX stat function). This can be done using a handle which has no access to the file itself (passing zero to CreateFile for dwDesiredAccess), and closing this handle has the incidental effect of updating the cached information.
Windows uses hard links to support short (8.3) filenames in NTFS. Operating system support is needed because there are legacy applications that can work only with 8.3 filenames, but support can be disabled. In this case, an additional filename record and directory entry is added, but both 8.3 and long file name are linked and updated together, unlike a regular hard link.
The NTFS file system has a limit of 1024 hard links on a file.
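The behaviour of hard links can be observed from Python's standard library, which on Windows maps to the Win32 hard-link APIs; a small sketch, assuming it runs in a writable directory on an NTFS volume:

import os

# Create a file and a second directory entry (hard link) pointing to the same MFT record.
with open("original.txt", "w") as f:
    f.write("same contents, two names\n")
os.link("original.txt", "second_name.txt")

# Both names resolve to the same file: identical file ID and a link count of 2.
a, b = os.stat("original.txt"), os.stat("second_name.txt")
print(a.st_ino == b.st_ino)    # True
print(a.st_nlink)              # 2

# Deleting one name leaves the contents reachable through the other.
os.remove("original.txt")
print(open("second_name.txt").read())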
Alternate data stream (ADS)
Alternate data streams allow more than one data stream to be associated with a filename (a fork), using the format "filename:streamname" (e.g., "text.txt:extrastream").
NTFS Streams were introduced in Windows NT 3.1, to enable Services for Macintosh (SFM) to store resource forks. Although current versions of Windows Server no longer include SFM, third-party Apple Filing Protocol (AFP) products (such as GroupLogic's ExtremeZ-IP) still use this feature of the file system. Very small ADSs (named "Zone.Identifier") are added by Internet Explorer and recently by other browsers to mark files downloaded from external sites as possibly unsafe to run; the local shell would then require user confirmation before opening them. When the user indicates that he no longer wants this confirmation dialog, this ADS is deleted.
Alternate streams are not listed in Windows Explorer, and their size is not included in the file's size. When the file is copied or moved to another file system without ADS support the user is warned that alternate data streams cannot be preserved. No such warning is typically provided if the file is attached to an e-mail, or uploaded to a website. Thus, using alternate streams for critical data may cause problems. Microsoft provides a tool called Streams to view streams on a selected volume. Starting with Windows PowerShell 3.0, it is possible to manage ADS natively with six cmdlets: Add-Content, Clear-Content, Get-Content, Get-Item, Remove-Item, Set-Content.
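Because the "filename:streamname" syntax is passed straight through to the file system by the Win32 API, alternate data streams can also be created and read with ordinary file I/O, for example in Python (Windows/NTFS only; the file and stream names are arbitrary):

import os

with open("text.txt", "w") as f:
    f.write("visible main stream")

with open("text.txt:extrastream", "w") as f:      # writes an alternate data stream
    f.write("hidden in an alternate data stream")

print(open("text.txt").read())                    # only the main stream is returned
print(open("text.txt:extrastream").read())        # the ADS must be named explicitly
print(os.path.getsize("text.txt"))                # the ADS size is not included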
Malware has used alternate data streams to hide code. As a result, malware scanners and other special tools now check for alternate data streams.
File compression
Compression is enabled on a per-folder or per-file basis, and files or folders may be compressed or decompressed individually (by changing the advanced attributes). When compression is set on a folder, any files moved or saved to that folder will be automatically compressed. Files are compressed using the LZNT1 algorithm (a variant of LZ77). Since Windows 10, Microsoft has introduced additional algorithms, namely XPRESS4K/8K/16K and LZX. Both algorithms are based on LZ77 with Huffman entropy coding, which LZNT1 lacked. These algorithms were taken from the Windows Imaging Format. They are mainly used for the CompactOS feature, which compresses the entire system partition using one of these algorithms. They can also be manually turned on per file with the /EXE flag of the compact command. When used on files, the CompactOS algorithm avoids fragmentation by writing compressed data in contiguously allocated chunks.
Files are compressed in 16 cluster chunks. With 4 KB clusters, files are compressed in 64 KB chunks. The compression algorithms in NTFS are designed to support cluster sizes of up to 4 KB. When the cluster size is greater than 4 KB on an NTFS volume, NTFS compression is not available.
Advantages
Users of fast multi-core processors will find improvements in application speed by compressing their applications and data, as well as a reduction in space used. Note that SSDs with SandForce controllers already compress data; however, since less data is transferred, there is still a reduction in I/O. Compression works best with files that have repetitive content, are seldom written, are usually accessed sequentially, and are not themselves compressed. Single-user systems with limited hard disk space can benefit from NTFS compression for small files, from 4KB to 64KB or more, depending on compressibility. Files smaller than approximately 900 bytes are stored within the directory entry of the MFT.
Disadvantages
Large compressible files become highly fragmented since every chunk smaller than 64KB becomes a fragment. Flash memory, such as SSD drives, does not have the head movement delays and high access times of mechanical hard disk drives, so fragmentation carries only a smaller penalty.
Maximum Compression Size
According to research by Microsoft's NTFS Development team, 50–60GB is a reasonable maximum size for a compressed file on an NTFS volume with a 4KB (default) cluster (block) size. This reasonable maximum size decreases sharply for volumes with smaller cluster sizes. If the compression reduces 64KB of data to 60KB or less, NTFS treats the unneeded 4KB pages like empty sparse file clusters—they are not written. This allows for reasonable random-access times as the OS merely has to follow the chain of fragments.
Boot failures
If system files that are needed at boot time (such as drivers, NTLDR, winload.exe, or BOOTMGR) are compressed, the system may fail to boot correctly, because decompression filters are not yet loaded. Later editions of Windows do not allow important system files to be compressed.
Sparse files
Sparse files are files interspersed with empty segments for which no actual storage space is used. To applications, the file looks like an ordinary file with empty regions seen as regions filled with zeros. A sparse file does not necessarily include sparse zero areas; the "sparse file" attribute just means that the file is allowed to have them.
Database applications, for instance, may use sparse files. As with compressed files, the actual sizes of sparse files are not taken into account when determining quota limits.
Volume Shadow Copy
The Volume Shadow Copy Service (VSS) keeps historical versions of files and folders on NTFS volumes by copying old, newly overwritten data to shadow copy via copy-on-write technique. The user may later request an earlier version to be recovered. This also allows data backup programs to archive files currently in use by the file system.
Windows Vista also introduced persistent shadow copies for use with System Restore and Previous Versions features. Persistent shadow copies, however, are deleted when an older operating system mounts that NTFS volume. This happens because the older operating system does not understand the newer format of persistent shadow copies.
Transactions
As of Windows Vista, applications can use Transactional NTFS (TxF) to group multiple changes to files together into a single transaction. The transaction will guarantee that either all of the changes happen, or none of them do, and that no application outside the transaction will see the changes until they are committed.
It uses similar techniques as those used for Volume Shadow Copies (i.e. copy-on-write) to ensure that overwritten data can be safely rolled back, and a CLFS log to mark the transactions that have still not been committed, or those that have been committed but still not fully applied (in case of system crash during a commit by one of the participants).
Transactional NTFS does not restrict transactions to just the local NTFS volume, but also includes other transactional data or operations in other locations such as data stored in separate volumes, the local registry, or SQL databases, or the current states of system services or remote services. These transactions are coordinated network-wide with all participants using a specific service, the DTC, to ensure that all participants will receive same commit state, and to transport the changes that have been validated by any participant (so that the others can invalidate their local caches for old data or rollback their ongoing uncommitted changes). Transactional NTFS allows, for example, the creation of network-wide consistent distributed file systems, including with their local live or offline caches.
Microsoft now advises against using TxF: "Microsoft strongly recommends developers utilize alternative means" since "TxF may not be available in future versions of Microsoft Windows".
Quotas
Disk quotas were introduced in NTFS v3. They allow the administrator of a computer that runs a version of Windows that supports NTFS to set a threshold of disk space that users may use. It also allows administrators to keep track of how much disk space each user is using. An administrator may specify a certain level of disk space that a user may use before they receive a warning, and then deny access to the user once they hit their upper limit of space. Disk quotas do not take into account NTFS's transparent file-compression, should this be enabled. Applications that query the amount of free space will also see the amount of free space left to the user who has a quota applied to them.
Reparse points
Introduced in NTFS v3, NTFS reparse points are used by associating a reparse tag in the user space attribute of a file or directory. Microsoft includes several default tags including symbolic links, directory junction points and volume mount points. When the Object Manager parses a file system name lookup and encounters a reparse attribute, it will reparse the name lookup, passing the user controlled reparse data to every file system filter driver that is loaded into Windows. Each filter driver examines the reparse data to see whether it is associated with that reparse point, and if that filter driver determines a match, then it intercepts the file system request and performs its special functionality.
Limitations
Resizing
Starting with Windows Vista Microsoft added the built-in ability to shrink or expand a partition. However, this ability does not relocate page file fragments or files that have been marked as unmovable, so shrinking a volume will often require relocating or disabling any page file, the index of Windows Search, and any Shadow Copy used by System Restore. Various third-party tools are capable of resizing NTFS partitions.
OneDrive
Since 2017, Microsoft requires the OneDrive file structure to reside on an NTFS disk. This is because OneDrive Files On-Demand feature uses NTFS reparse points to link files and folders that are stored in OneDrive to the local filesystem, making the file or folder unusable with any previous version of Windows, with any other NTFS file system driver, or any file system and backup utilities not updated to support it.
Structure
NTFS is made up of several components including: a partition boot sector (PBS) that holds boot information; the master file table that stores a record of all files and folders in the filesystem; a series of meta files that help structure meta data more efficiently; data streams and locking mechanisms.
Internally, NTFS uses B-trees to index file system data. A file system journal is used to guarantee the integrity of the file system metadata but not individual files' content. Systems using NTFS are known to have improved reliability compared to FAT file systems.
NTFS allows any sequence of 16-bit values for name encoding (e.g. file names, stream names or index names) except 0x0000. This means UTF-16 code units are supported, but the file system does not check whether a sequence is valid UTF-16 (it allows any sequence of short values, not restricted to those in the Unicode standard). In the Win32 namespace, UTF-16 code units are case insensitive, whereas in the POSIX namespace they are case sensitive. File names are limited to 255 UTF-16 code units. Certain names are reserved in the volume root directory and cannot be used for files. These are $MFT, $MFTMirr, $LogFile, $Volume, $AttrDef, . (dot), $Bitmap, $Boot, $BadClus, $Secure, $UpCase, and $Extend. . (dot) and $Extend are both directories; the others are files. The NT kernel limits full paths to 32,767 UTF-16 code units. There are some additional restrictions on code points and file names.
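A sketch of these naming rules as a simple validity check in Python; this only illustrates the constraints listed above and is not an exhaustive validator (for example, it ignores the extra Win32-namespace restrictions):

RESERVED_IN_ROOT = {
    "$MFT", "$MFTMirr", "$LogFile", "$Volume", "$AttrDef", ".",
    "$Bitmap", "$Boot", "$BadClus", "$Secure", "$UpCase", "$Extend",
}

def ntfs_name_ok(name, in_root=False):
    units = name.encode("utf-16-le")           # NTFS stores names as 16-bit code units
    if len(units) // 2 > 255:                  # at most 255 UTF-16 code units
        return False
    if "\x00" in name:                         # 0x0000 is the one forbidden value
        return False
    if in_root and name in RESERVED_IN_ROOT:   # reserved metafile names in the root
        return False
    return True

print(ntfs_name_ok("report.txt"))              # True
print(ntfs_name_ok("$MFT", in_root=True))      # False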
Partition Boot Sector (PBS)
This boot partition format is roughly based upon the earlier FAT filesystem, but the fields are in different locations. Some of these fields, especially the "sectors per track", "number of heads" and "hidden sectors" fields may contain dummy values on drives where they either do not make sense or are not determinable.
The OS first looks at the 8 bytes at 0x30 to find the cluster number of the $MFT, then multiplies that number by the number of sectors per cluster (1 byte found at 0x0D). This value is the sector offset (LBA) to the $MFT, which is described below.
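A minimal Python sketch of that calculation, assuming the first 512 bytes of an NTFS partition have been read into a buffer:

import struct

def mft_location(boot_sector):
    """Locate the $MFT from an NTFS partition boot sector (first 512 bytes)."""
    bytes_per_sector = struct.unpack_from("<H", boot_sector, 0x0B)[0]
    sectors_per_cluster = boot_sector[0x0D]                       # 1 byte at 0x0D
    mft_cluster = struct.unpack_from("<Q", boot_sector, 0x30)[0]  # 8 bytes at 0x30
    mft_sector = mft_cluster * sectors_per_cluster                # sector offset (LBA) of $MFT
    return mft_sector, mft_sector * bytes_per_sector              # also as a byte offset

# Example with a raw partition image (hypothetical file name):
# with open("ntfs_partition.img", "rb") as f:
#     print(mft_location(f.read(512)))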
Master File Table
In NTFS, all file, directory and metafile data—file name, creation date, access permissions (by the use of access control lists), and size—are stored as metadata in the Master File Table (MFT). This abstract approach allowed easy addition of file system features during Windows NT's development—an example is the addition of fields for indexing used by the Active Directory software. This also enables fast file search software to locate named local files and folders included in the MFT very quickly, without requiring any other index.
The MFT structure supports algorithms which minimize disk fragmentation. A directory entry consists of a filename and a "file ID" (analogous to the inode number), which is the record number representing the file in the Master File Table. The file ID also contains a reuse count to detect stale references. While this strongly resembles the W_FID of Files-11, other NTFS structures radically differ.
A partial copy of the MFT, called the MFT mirror, is stored to be used in case of corruption. If the first record of the MFT is corrupted, NTFS reads the second record to find the MFT mirror file. Locations for both files are stored in the boot sector.
Metafiles
NTFS contains several files that define and organize the file system. In all respects, most of these files are structured like any other user file ($Volume being the most peculiar), but are not of direct interest to file system clients. These metafiles define files, back up critical file system data, buffer file system changes, manage free space allocation, satisfy BIOS expectations, track bad allocation units, and store security and disk space usage information. All content is in an unnamed data stream, unless otherwise indicated.
These metafiles are treated specially by Windows, handled directly by the NTFS.SYS driver and are difficult to directly view: special purpose-built tools are needed. As of Windows 7, the NTFS driver completely prohibits user access, resulting in a BSoD whenever an attempt to execute a metadata file is made. One such tool is the nfi.exe ("NTFS File Sector Information Utility") that is freely distributed as part of the Microsoft "OEM Support Tools". For example, to obtain information on the "$MFT"-Master File Table Segment the following command is used: nfi.exe c:\$MFT Another way to bypass the restriction is to use 7-Zip's file manager and go to the low-level NTFS path \\.\X:\ (where X:\ resembles any drive/partition). Here, 3 new folders will appear: $EXTEND, [DELETED] (a pseudo-folder that 7-Zip uses to attach files deleted from the file system to view), and [SYSTEM] (another pseudo-folder that contains all the NTFS metadata files). This trick can be used from removable devices (USB flash drives, external hard drives, SD Cards, etc.) inside Windows, but doing this on the active partition requires offline access (namely WinRE).
Attribute lists, attributes, and streams
For each file (or directory) described in the MFT record, there is a linear repository of stream descriptors (also named attributes), packed together in one or more MFT records (containing the so-called attributes list), with extra padding to fill the fixed 1 KB size of every MFT record, and that fully describes the effective streams associated with that file.
Each attribute has an attribute type (a fixed-size integer mapping to an attribute definition in file $AttrDef), an optional attribute name (for example, used as the name for an alternate data stream), and a value, represented in a sequence of bytes. For NTFS, the standard data of files, the alternate data streams, or the index data for directories are stored as attributes.
According to $AttrDef, some attributes can be either resident or non-resident. The $DATA attribute, which contains file data, is such an example. When the attribute is resident (which is represented by a flag), its value is stored directly in the MFT record. Otherwise, clusters are allocated for the data, and the cluster location information is stored as data runs in the attribute.
For each file in the MFT, the attributes identified by their attribute type and attribute name must be unique. Additionally, NTFS has some ordering constraints for these attributes.
There is a predefined null attribute type, used to indicate the end of the list of attributes in one MFT record. It must be present as the last attribute in the record (all other storage space available after it will be ignored and just consists of padding bytes to match the record size in the MFT).
Some attribute types are required and must be present in each MFT record, except unused records that are just indicated by null attribute types.
This is the case for the $STANDARD_INFORMATION attribute that is stored as a fixed-size record and contains the timestamps and other basic single-bit attributes (compatible with those managed by FAT in DOS or Windows 9x).
Some attribute types cannot have a name and must remain anonymous.
This is the case for the standard attributes, or for the preferred NTFS "filename" attribute type, or the "short filename" attribute type, when it is also present (for compatibility with DOS-like applications, see below). It is also possible for a file to contain only a short filename, in which case it will be the preferred one, as listed in the Windows Explorer.
The filename attributes stored in the attribute list do not make the file immediately accessible through the hierarchical file system. In fact, all the filenames must be indexed separately in at least one other directory on the same volume. There it must have its own MFT record and its own security descriptors and attributes that reference the MFT record number for this file. This allows the same file or directory to be "hardlinked" several times from several containers on the same volume, possibly with distinct filenames.
The default data stream of a regular file is a stream of type $DATA but with an anonymous name, and the ADSs are similar but must be named.
On the other hand, the default data stream of directories has a distinct type, but is not anonymous: it has an attribute name ("$I30" in NTFS 3+) that reflects its indexing format.
All attributes of a given file may be displayed by using the nfi.exe ("NTFS File Sector Information Utility") that is freely distributed as part of the Microsoft "OEM Support Tools".
Windows system calls may handle alternate data streams. Depending on the operating system, utility and remote file system, a file transfer might silently strip data streams. A safe way of copying or moving files is to use the BackupRead and BackupWrite system calls, which allow programs to enumerate streams, to verify whether each stream should be written to the destination volume and to knowingly skip unwanted streams.
Resident vs. non-resident attributes
To optimize the storage and reduce the I/O overhead for the very common case of attributes with very small associated values, NTFS prefers to place the value within the attribute itself (if the size of the attribute does not then exceed the maximum size of an MFT record) instead of using the MFT record space to list clusters containing the data; in the latter case, the attribute does not store the data directly but just stores an allocation map (in the form of data runs) pointing to the actual data stored elsewhere on the volume. When the value can be accessed directly from within the attribute, it is called "resident data" (by computer forensics workers). The amount of data that fits is highly dependent on the file's characteristics, but 700 to 800 bytes is common in single-stream files with non-lengthy filenames and no ACLs.
Some attributes (such as the preferred filename, the basic file attributes) cannot be made non-resident. For non-resident attributes, their allocation map must fit within MFT records.
Encrypted-by-NTFS, sparse data streams, or compressed data streams cannot be made resident.
The format of the allocation map for non-resident attributes depends on its capability of supporting sparse data storage. In the current implementation of NTFS, once a non-resident data stream has been marked and converted as sparse, it cannot be changed back to non-sparse data, so it cannot become resident again, unless this data is fully truncated, discarding the sparse allocation map completely.
When a non-resident attribute is so fragmented that its effective allocation map cannot fit entirely within one MFT record, NTFS stores the attribute in multiple records. The first one among them is called the base record, while the others are called extension records. NTFS creates a special attribute $ATTRIBUTE_LIST to store information mapping different parts of the long attribute to the MFT records, which means the allocation map may be split into multiple records. The $ATTRIBUTE_LIST itself can also be non-resident, but its own allocation map must fit within one MFT record.
When there are too many attributes for a file (including ADS's, extended attributes, or security descriptors), so that they cannot fit all within the MFT record, extension records may also be used to store the other attributes, using the same format as the one used in the base MFT record, but without the space constraints of one MFT record.
The allocation map is stored in a form of data runs with compressed encoding. Each data run represents a contiguous group of clusters that store the attribute value. For files on a multi-GB volume, each entry can be encoded as 5 to 7 bytes, which means a 1 KB MFT record can store about 100 such data runs. However, as the $ATTRIBUTE_LIST also has a size limit, it is dangerous to have more than 1 million fragments of a single file on an NTFS volume, which also implies that it is in general not a good idea to use NTFS compression on a file larger than 10GB.
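A simplified sketch of how such a run list can be decoded: each run starts with a header byte whose low nibble gives the size of the run-length field and whose high nibble the size of the signed cluster-offset field, with offsets relative to the previous run. This is an illustrative decoder only and ignores details such as sparse runs:

def decode_data_runs(runlist):
    """Decode an NTFS data-run list into (starting cluster, cluster count) pairs."""
    runs, pos, current_lcn = [], 0, 0
    while pos < len(runlist) and runlist[pos] != 0x00:          # 0x00 ends the list
        header = runlist[pos]
        len_size, off_size = header & 0x0F, header >> 4
        pos += 1
        length = int.from_bytes(runlist[pos:pos + len_size], "little")
        pos += len_size
        offset = int.from_bytes(runlist[pos:pos + off_size], "little", signed=True)
        pos += off_size
        current_lcn += offset                                   # relative to the previous run
        runs.append((current_lcn, length))
    return runs

# Two runs: 0x20 clusters starting at cluster 0x1000, then 0x10 clusters
# beginning 0x100 clusters further on.
print(decode_data_runs(bytes.fromhex("212000102110000100")))    # [(4096, 32), (4352, 16)]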
The NTFS file system driver will sometimes attempt to relocate the data of some of the attributes that can be made non-resident into the clusters, and will also attempt to relocate the data stored in clusters back to the attribute inside the MFT record, based on priority and preferred ordering rules, and size constraints.
Since resident files do not directly occupy clusters ("allocation units"), it is possible for an NTFS volume to contain more files than there are clusters. For example, NTFS formats a 74.5GB partition with 19,543,064 clusters of 4KB. Subtracting system files (a 64MB log file, a 2,442,888-byte Bitmap file, and about 25 clusters of fixed overhead) leaves 19,526,158 clusters free for files and indices. Since there are four MFT records per cluster, this volume theoretically could hold almost 4 × 19,526,158 = 78,104,632 resident files.
Opportunistic locks
Opportunistic file locks (oplocks) allow clients to alter their buffering strategy for a given file or stream in order to increase performance and reduce network use. Oplocks apply to the given open stream of a file and do not affect oplocks on a different stream.
Oplocks can be used to transparently access files in the background. A network client may avoid writing information into a file on a remote server if no other process is accessing the data, or it may buffer read-ahead data if no other process is writing data.
Windows supports four different types of oplocks:
Level 2 (or shared) oplock: multiple readers, no writers (i.e. read caching).
Level 1 (or exclusive) oplock: exclusive access with arbitrary buffering (i.e. read and write caching).
Batch oplock (also exclusive): a stream is opened on the server, but closed on the client machine (i.e. read, write and handle caching).
Filter oplock (also exclusive): applications and file system filters can "back out" when others try to access the same stream (i.e. read and write caching) (since Windows 2000)
Opportunistic locks have been enhanced in Windows 7 and Windows Server 2008 R2 with per-client oplock keys.
Time
Windows NT and its descendants keep internal timestamps as UTC and make the appropriate conversions for display purposes; all NTFS timestamps are in UTC.
For historical reasons, the versions of Windows that do not support NTFS all keep time internally as local zone time, and therefore so do all file systems – other than NTFS – that are supported by current versions of Windows. This means that when files are copied or moved between NTFS and non-NTFS partitions, the OS needs to convert timestamps on the fly. But if some files are moved when daylight saving time (DST) is in effect, and other files are moved when standard time is in effect, there can be some ambiguities in the conversions. As a result, especially shortly after one of the days on which local zone time changes, users may observe that some files have timestamps that are incorrect by one hour. Due to the differences in implementation of DST in different jurisdictions, this can result in a potential timestamp error of up to 4 hours in any given 12 months.
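The one-hour discrepancies stem from the difference in UTC offset on either side of a DST transition, which is easy to see with Python's zoneinfo module (the time zone chosen here is an arbitrary example):

from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/Berlin")
winter = datetime(2023, 1, 15, 12, 0, tzinfo=tz)   # standard time
summer = datetime(2023, 7, 15, 12, 0, tzinfo=tz)   # daylight saving time

# The same local wall-clock time maps to UTC differently depending on DST,
# so timestamps converted at different times of year can end up an hour apart.
print(winter.utcoffset(), summer.utcoffset())      # 1:00:00  2:00:00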
See also
Comparison of file systems
NTFSDOS
ntfsresize
WinFS (a canceled Microsoft filesystem)
ReFS, a newer Microsoft filesystem
References
Further reading
Compression file systems
Windows disk file systems
1993 software |
39352 | https://en.wikipedia.org/wiki/List%20of%20telecommunications%20encryption%20terms | List of telecommunications encryption terms |
This is a list of telecommunications encryption terms. This list is derived in part from the Glossary of Telecommunication Terms published as Federal Standard 1037C.
A5/1 – a stream cipher used to provide over-the-air communication privacy in the GSM cellular telephone standard.
Bulk encryption
Cellular Message Encryption Algorithm – a block cipher which was used for securing mobile phones in the United States.
Cipher
Cipher system
Cipher text
Ciphony
Civision
Codress message
COMSEC equipment
Cryptanalysis
Cryptographic key
CRYPTO (International Cryptology Conference)
Crypto phone
Crypto-shredding
Data Encryption Standard (DES)
Decipher
Decode
Decrypt
DECT Standard Cipher
Descrambler
Encipher
Encode
Encoding law
Encrypt
End-to-end encryption
group
IMSI-catcher – an eavesdropping device used for interception of cellular phones, usually undetectable by users of mobile phones.
Key distribution center (KDC)
Key management
Key stream
KSD-64
Link encryption
MISTY1
Multiplex link encryption
Net control station (NCS)
Null cipher
One-time pad
Over the Air Rekeying (OTAR)
Plaintext
PPPoX
Protected distribution system (PDS)
Protection interval (PI)
Pseudorandom number generator
Public-key cryptography
RED/BLACK concept
RED signal
Remote rekeying
Security management
Spoofing
Squirt – to load or transfer a code key from an electronic key storage device. See Over the Air Rekeying.
STU-III – a family of secure telephones introduced in 1987 by the National Security Agency for use by the United States government, its contractors, and its allies.
Superencryption
Synchronous crypto-operation
Transmission security key (TSK)
Trunk encryption device (TED)
Type 1 encryption
Type 2 encryption
Type 3 encryption
Type 4 encryption
Unique key
VoIP VPN – combines voice over IP and virtual private network technologies to offer a method for delivering secure voice.
ZRTP – a cryptographic key-agreement protocol used in Voice over Internet Protocol (VoIP) telephony.
See also
Communications security
CONDOR secure cell phone
Cryptography standards
Secure communication
Secure telephone
Telecommunication
References
Further reading
Rutenbeck, Jeff (2006). Tech terms: what every telecommunications and digital media person should know. Elsevier, Inc.
Kissel, Richard (editor). (February, 2011). Glossary of Key Information Security Terms (NIST IR 7298 Revision 1). National Institute of Standards and Technology.
External links
"Federal Standard 1037C."Telecommunications: Glossary of Telecommunication Terms
Embedding Security into Handsets Would Boost Data Usage - Report (2005) from Cellular-news.com
Wireless, Telecom and Computer Glossary from Cellular Network Perspectives
Telecommunications encryption terms |
39675 | https://en.wikipedia.org/wiki/DeCSS | DeCSS | DeCSS is one of the first free computer programs capable of decrypting content on a commercially produced DVD video disc. Before the release of DeCSS, open source operating systems (such as BSD and Linux) could not play encrypted video DVDs.
DeCSS's development was done without a license from the DVD Copy Control Association (CCA), the organization responsible for DVD copy protection—namely, the Content Scramble System (CSS) used by commercial DVD publishers. The release of DeCSS resulted in a Norwegian criminal trial and subsequent acquittal of one of the authors of DeCSS. The DVD CCA launched numerous lawsuits in the United States in an effort to stop the distribution of the software.
Origins and history
DeCSS was devised by three people, two of whom remain anonymous. It was released on the Internet mailing list LiViD in October 1999. The one known author of the trio is Norwegian programmer Jon Lech Johansen, whose home was raided in 2000 by Norwegian police. Still a teenager at the time, he was put on trial in a Norwegian court for violating Norwegian Criminal Code section 145, and faced a possible jail sentence of two years and large fines, but was acquitted of all charges in early 2003. On 5 March 2003, a Norwegian appeals court ruled that Johansen would have to be retried. The court said that arguments filed by the prosecutor and additional evidence merited another trial. On 22 December 2003, the appeals court agreed with the acquittal, and on 5 January 2004, Norway's Økokrim (Economic Crime Unit) decided not to pursue the case further.
The program was first released on 6 October 1999 when Johansen posted an announcement of DeCSS 1.1b, a closed source Windows-only application for DVD ripping, on the livid-dev mailing list. The source code was leaked before the end of the month. The first release of DeCSS was preceded by a few weeks by a program called DoD DVD Speed Ripper from a group called DrinkOrDie, which didn't include source code and which apparently did not work with all DVDs. Drink or Die reportedly disassembled the object code of the Xing DVD player to obtain a player key. The group that wrote DeCSS, including Johansen, came to call themselves Masters of Reverse Engineering and may have obtained information from Drink or Die.
The CSS decryption source code used in DeCSS was mailed to Derek Fawcus before DeCSS was released. When the DeCSS source code was leaked, Fawcus noticed that DeCSS included his css-auth code in violation of the GNU GPL. When Johansen was made aware of this, he contacted Fawcus to solve the issue and was granted a license to use the code in DeCSS under non-GPL terms.
On 22 January 2004, the DVD CCA dropped the case against Jon Johansen.
Jon Lech Johansen's involvement
The DeCSS program was a collaborative project, in which Johansen wrote the graphical user interface. The transcripts from the Borgarting Court of Appeal, published in the Norwegian newspaper Verdens Gang, contain the following description of the process which led to the release of DeCSS:
Through Internet Relay Chat (henceforth IRC), [Jon Lech Johansen] made contact with like-minded [people seeking to develop a DVD-player under the Linux operating system]. 11 September 1999, he had a conversation with "mdx" about how the encryption algorithm in CSS could be found, by using a poorly secured software-based DVD-player. In a conversation [between Jon Lech Johansen and "mdx"] 22 September, "mdx" informs that "the nomad" had found the code for CSS decryption, and that "mdx" now would send this [code] to Jon Lech Johansen. "The nomad" allegedly found this decryption algorithm through so-called reverse engineering of a Xing DVD-player, where the [decryption] keys were more or less openly accessible. Through this, information that made it possible [for "mdx"] to create the code CSS_scramble.cpp was retrieved. From chat logs dated 4 November 1999 and 25 November 1999, it appears that "the nomad" carried through the reverse engineering process on a Xing player, which he characterized as illegal. As the case is presented for the High Court, this was not known by Jon Lech Johansen before 4 November [1999].
Regarding the authentication code, the High Court takes for its basis that "the nomad" obtained this code through the electronic mailing list LiVid (Linux Video) on the Internet, and that it was created by Derek Fawcus. It appears through a LiVid posting dated 6 October 1999 that Derek Fawcus on this date read through the DeCSS source code and compared it with his own. Further, it appears that "the creators [of DeCSS] have taken [Derek Fawcus' code] almost verbatim - the only alteration was the removal of [Derek Fawcus'] copyright header and a paragraph containing commentaries, and a change of the function names." The name [of the code] was CSS_auth.cpp.
The High Court takes for its basis that the program Jon Lech Johansen later programmed, the graphical user interface, consisted of "the nomad's" decryption algorithm and Derek Fawcus' authentication package. The creation of a graphical user interface made the program accessible, also for users without special knowledge in programming. The program was published on the Internet for the first time 6 October 1999, after Jon Lech Johansen had tested it on the movie "The Matrix." In doing so, he downloaded approximately 2.5%, or 200 megabytes, of the movie to the hard drive on his computer. This file is the only film fragment Jon Lech Johansen has saved on his computer.
Technology and derived works
When the release of the DeCSS source code made the CSS algorithm available for public scrutiny, it was soon found to be susceptible to a brute force attack quite different from DeCSS. The encryption is only 40-bit, and does not use all keys; a high-end home computer in 1999 running optimized code could brute-force it within 24 hours, and modern computers can brute-force it in a few seconds or less.
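The arithmetic behind those figures is straightforward, as the short Python sketch below shows; real attacks on CSS are faster still, because weaknesses in the cipher reduce the effective search space well below a full 40-bit sweep:

keyspace = 2**40                                  # CSS uses 40-bit keys

# Sustained rate needed to try every key within 24 hours (the 1999 scenario):
print(round(keyspace / (24 * 3600)))              # ~12.7 million keys per second

# Conversely, the time to exhaust the space at a given key-testing rate:
for rate in (12.7e6, 1e9, 100e9):                 # keys per second (illustrative rates)
    print(rate, "keys/s ->", round(keyspace / rate), "seconds")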
Programmers around the world created hundreds of programs equivalent to DeCSS, some merely to demonstrate the trivial ease with which the system could be bypassed, and others to add DVD support to open source movie players. The licensing restrictions on CSS make it impossible to create an open source implementation through official channels, and closed source drivers are unavailable for some operating systems, so some users need DeCSS to watch even legally obtained movies.
Legal response
The first legal threats against sites hosting DeCSS, and the beginning of the DeCSS mirroring campaign, began in early November 1999 (Universal v. Reimerdes). The preliminary injunction in DVD Copy Control Association, Inc. v. Bunner followed soon after, in January 2000. As a response to these threats, a program also called DeCSS but with an unrelated function was developed. This program can be used to strip Cascading Style Sheets tags from HTML pages. In one case, a school removed a student's webpage that included a copy of this program, mistaking it for the original DeCSS program, and received a great deal of negative media attention. The CSS stripping program had been specifically created to bait the MPAA in this manner.
In protest against legislation that prohibits publication of copy protection circumvention code in countries that implement the WIPO Copyright Treaty (such as the United States' Digital Millennium Copyright Act), some have devised clever ways of distributing descriptions of the DeCSS algorithm, such as through steganography, through various Internet protocols, on T-shirts and in dramatic readings, as MIDI files, as a haiku poem (DeCSS haiku), and even as a so-called illegal prime number.
See also
DVD Copy Control Association
AACS encryption key controversy
Illegal prime
youtube-dl
References
Further reading
Lawrence Lessig, The Future of Ideas, 2001, pp. 187–190.
External links
DeCSS Central - Information about DVD, CSS, DeCSS, LiVid, the DVD CCA and MPAA and the various lawsuits surrounding DeCSS.
EFF archive of information on the Bunner and Pavlovich DVD-CAA lawsuits
2600 News: DVD Industry Takes 2600 to Court
Aftenposten: Prosecutors let DVD-Jon's victory stand
The Openlaw DVD/DeCSS Forum Frequently Asked Questions (FAQ) List
42 ways to distribute DeCSS
DeCSS Explained - A technical overview of the CSS decryption algorithm.
DeCSS.c, The DeCSS source code
1999 software
Cryptanalytic software
Cryptography law
Digital rights management circumvention software
Compact Disc and DVD copy protection |
40594 | https://en.wikipedia.org/wiki/Pseudonym | Pseudonym | A pseudonym () (originally: ψευδώνυμος in Greek) or alias () is a fictitious name that a person or group assumes for a particular purpose, which differs from their original or true name (orthonym). This also differs from a new name that entirely or legally replaces an individual's own. Most pseudonym holders use pseudonyms because they wish to remain anonymous, but anonymity is difficult to achieve and often fraught with legal issues.
Scope
Pseudonyms include stage names, user names, ring names, pen names, nicknames, aliases, superhero or villain identities and code names, gamer identifications, and regnal names of emperors, popes, and other monarchs. Historically, they have sometimes taken the form of anagrams, Graecisms, and Latinisations, although there may be many other methods of choosing a pseudonym.
Pseudonyms should not be confused with new names that replace old ones and become the individual's full-time name. Pseudonyms are "part-time" names, used only in certain contexts – to provide a more clear-cut separation between one's private and professional lives, to showcase or enhance a particular persona, or to hide an individual's real identity, as with writers' pen names, graffiti artists' tags, resistance fighters' or terrorists' noms de guerre, and computer hackers' handles. Actors, voice-over artists, musicians, and other performers sometimes use stage names, for example, to better channel a relevant energy, gain a greater sense of security and comfort via privacy, more easily avoid troublesome fans/"stalkers", or to mask their ethnic backgrounds.
In some cases, pseudonyms are adopted because they are part of a cultural or organisational tradition: for example devotional names used by members of some religious institutes, and "cadre names" used by Communist party leaders such as Trotsky and Lenin.
A pseudonym may also be used for personal reasons: for example, an individual may prefer to be called or known by a name that differs from their given or legal name, but is not ready to take the numerous steps to get their name legally changed; or an individual may simply feel that the context and content of an exchange offer no reason, legal or otherwise, to provide their given or legal name.
A collective name or collective pseudonym is one shared by two or more persons, for example the co-authors of a work, such as Carolyn Keene, Erin Hunter, Ellery Queen, Nicolas Bourbaki, or James S. A. Corey.
Etymology
The term pseudonym is derived from the Greek word ψευδώνυμον (pseudṓnymon), literally 'false name', from ψεῦδος (pseûdos) 'lie, falsehood' and ὄνομα (ónoma) 'name'. The term alias is a Latin adverb meaning 'at another time, elsewhere'.
Usage
Name change
Sometimes people change their names in such a manner that the new name becomes permanent and is used by all who know the person. This is not an alias or pseudonym, but in fact a new name. In many countries, including common law countries, a name change can be ratified by a court and become a person's new legal name.
For example, in the 1960s, civil rights campaigner Malcolm X, originally known as Malcolm Little, changed his surname to "X" to represent his unknown African ancestral name that had been lost when his ancestors were brought to North America as slaves. He then changed his name again to Malik El-Shabazz when he converted to Islam.
Likewise some Jews adopted Hebrew family names upon immigrating to Israel, dropping surnames that had been in their families for generations. The politician David Ben-Gurion, for example, was born David Grün in Poland. He adopted his Hebrew name in 1910 when he published his first article in a Zionist journal in Jerusalem.
Concealing identity
Business
Businesspersons of ethnic minorities in some parts of the world are sometimes advised by an employer to use a pseudonym that is common or acceptable in that area when conducting business, to overcome racial or religious bias.
Criminal activity
Criminals may use aliases, fictitious business names, and dummy corporations (corporate shells) to hide their identity, or to impersonate other persons or entities in order to commit fraud. Aliases and fictitious business names used for dummy corporations may become so complex that, in the words of The Washington Post, "getting to the truth requires a walk down a bizarre labyrinth" and multiple government agencies may become involved to uncover the truth. Giving a false name to a law enforcement officer is a crime in many jurisdictions; see identity fraud.
Literature
A pen name, or "nom de plume" (French for "pen name"), is a pseudonym (sometimes a particular form of the real name) adopted by an author (or on the author's behalf by their publishers).
Although the term is most frequently used today with regard to identity and the Internet, the concept of pseudonymity has a long history. In ancient literature it was common to write in the name of a famous person, not for concealment or with any intention of deceit; in the New Testament, the second letter of Peter is probably such a case. A more modern example is The Federalist Papers, which were signed by Publius, a pseudonym representing the trio of James Madison, Alexander Hamilton, and John Jay. The papers were written partially in response to several Anti-Federalist Papers, also written under pseudonyms. As a result of this pseudonymity, historians know that the papers were written by Madison, Hamilton, and Jay, but have not been able to discern with complete accuracy which of the three authored a few of them. There are also examples of modern politicians and high-ranking bureaucrats writing under pseudonyms.
Some female authors used male pen names, in particular in the 19th century, when writing was a male-dominated profession. The Brontë sisters used pen names for their early work, so as not to reveal their gender (see below) and so that local residents would not know that the books related to people of the neighbourhood. The Brontës used their neighbours as inspiration for characters in many of their books. Anne Brontë's The Tenant of Wildfell Hall (1848) was published under the name Acton Bell, while Charlotte Brontë used the name Currer Bell for Jane Eyre (1847) and Shirley (1849), and Emily Brontë adopted Ellis Bell as cover for Wuthering Heights (1847). Other examples from the nineteenth century are the novelist Mary Ann Evans (George Eliot) and the French writer Amandine Aurore Lucile Dupin (George Sand). Pseudonyms may also be used because of cultural, organizational, or political prejudices.
On the other hand, some 20th- and 21st-century male romance novelists have used female pen names. A few examples are Brindle Chase, Peter O'Donnell (as Madeline Brent), Christopher Wood (as Penny Sutton and Rosie Dixon), and Hugh C. Rae (as Jessica Sterling).
A pen name may be used if a writer's real name is likely to be confused with the name of another writer or notable individual, or if the real name is deemed unsuitable.
Authors who write both fiction and non-fiction, or in different genres, may use different pen names to avoid confusing their readers. For example, the romance writer Nora Roberts writes mystery novels under the name J. D. Robb.
In some cases, an author may become better known by his pen name than his real name. Some famous examples of that include Samuel Clemens, writing as Mark Twain, and Theodor Geisel, better known as Dr. Seuss. The British mathematician Charles Dodgson wrote fantasy novels as Lewis Carroll and mathematical treatises under his own name.
Some authors, such as Harold Robbins, use several literary pseudonyms.
Some pen names have been used for long periods, even decades, without the author's true identity being discovered, as with Elena Ferrante and Torsten Krol.
Joanne Rowling published the Harry Potter series as J. K. Rowling. Rowling also published the Cormoran Strike series of detective novels, including The Cuckoo's Calling, under the pseudonym "Robert Galbraith".
Winston Churchill wrote as Winston S. Churchill (from his full surname "Spencer-Churchill" which he did not otherwise use) in an attempt to avoid confusion with an American novelist of the same name. The attempt was not wholly successful – the two are still sometimes confused by booksellers.
A pen name may be used specifically to hide the identity of the author, as with exposé books about espionage or crime, or explicit erotic fiction. Some prolific authors adopt a pseudonym to disguise the extent of their published output, e.g. Stephen King writing as Richard Bachman. Co-authors may choose to publish under a collective pseudonym, e.g. P. J. Tracy and Perri O'Shaughnessy. Frederic Dannay and Manfred Lee used the name Ellery Queen as a pen name for their collaborative works and as the name of their main character. Asa Earl Carter, a Southern white segregationist affiliated with the KKK, wrote Western books under a fictional Cherokee persona to imply legitimacy and conceal his history.
"Why do authors choose pseudonyms? It is rarely because they actually hope to stay anonymous forever," mused writer and columnist Russell Smith in his review of the Canadian novel Into That Fire by the pseudonymous M. J. Cates.
A famous case in French literature was Romain Gary. Already a well-known writer, he started publishing books as Émile Ajar to test whether his new books would be well received on their own merits, without the aid of his established reputation. They were: Émile Ajar, like Romain Gary before him, was awarded the prestigious Prix Goncourt by a jury unaware that they were the same person. Similarly, TV actor Ronnie Barker submitted comedy material under the name Gerald Wiley.
A collective pseudonym may represent an entire publishing house, or any contributor to a long-running series, especially with juvenile literature. Examples include Watty Piper, Victor Appleton, Erin Hunter, and Kamiru M. Xhan.
Another use of a pseudonym in literature is to present a story as being written by the fictional characters in the story. The series of novels known as A Series of Unfortunate Events are written by Daniel Handler under the pen name of Lemony Snicket, a character in the series. This applies also to some of the several 18th-century English and American writers who used the name Fidelia.
An anonymity pseudonym or multiple-use name is a name used by many different people to protect anonymity. It is a strategy that has been adopted by many unconnected radical groups and by cultural groups, where the construct of personal identity has been criticised. This has led to the idea of the "open pop star".
Medicine
Pseudonyms and acronyms are often employed in medical research to protect subjects' identities through a process known as de-identification.
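As a sketch of what such de-identification can look like in code, the example below replaces a direct identifier with a keyed-hash pseudonym before a record is shared; the key, field names, and data are illustrative assumptions rather than any particular study's protocol.

```python
# A minimal pseudonymization sketch, assuming a per-study secret key.
import hmac
import hashlib

STUDY_KEY = b"replace-with-a-randomly-generated-secret"  # hypothetical key

def pseudonymize(patient_id: str) -> str:
    """Map a direct identifier to a stable pseudonym via a keyed hash."""
    digest = hmac.new(STUDY_KEY, patient_id.encode("utf-8"), hashlib.sha256)
    return "SUBJ-" + digest.hexdigest()[:12].upper()

records = [{"patient_id": "123-45-6789", "age": 54, "outcome": "remission"}]
deidentified = [{**r, "patient_id": pseudonymize(r["patient_id"])} for r in records]
print(deidentified)  # the direct identifier never leaves the study site
```

Because the same input always yields the same pseudonym, researchers can still link a subject's records across datasets, while anyone without the key cannot reverse the mapping.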
Science
Nicolaus Copernicus put forward his theory of heliocentrism in the manuscript Commentariolus anonymously, in part because of his employment as a law clerk for a church-government organization.
Sophie Germain and William Sealy Gosset used pseudonyms to publish their work in the field of mathematics – Germain, to avoid rampant 19th century academic misogyny, and Gosset, to avoid revealing brewing practices of his employer, the Guinness Brewery.
Satoshi Nakamoto is the pseudonym of the still-unknown author, or group of authors, behind the white paper that introduced bitcoin.
Military and paramilitary organizations
In Ancien Régime France, a nom de guerre ("war name") would be adopted by each new recruit (or assigned to them by the captain of their company) as they enlisted in the French army. These pseudonyms had an official character and were the predecessor of identification numbers: soldiers were identified by their first names, their family names, and their noms de guerre (e.g. Jean Amarault dit Lafidélité). These pseudonyms were usually related to the soldier's place of origin (e.g. Jean Deslandes dit Champigny, for a soldier coming from a town named Champigny), or to a particular physical or personal trait (e.g. Antoine Bonnet dit Prettaboire, for a soldier prêt à boire, ready to drink). In 1716, a nom de guerre became mandatory for every soldier; officers did not adopt noms de guerre, as they considered them derogatory. In daily life, these aliases could replace the real family name.
Noms de guerre were adopted for security reasons by members of World War II French resistance and Polish resistance. Such pseudonyms are often adopted by military special-forces soldiers, such as members of the SAS and similar units of resistance fighters, terrorists, and guerrillas. This practice hides their identities and may protect their families from reprisals; it may also be a form of dissociation from domestic life. Some well-known men who adopted noms de guerre include Carlos, for Ilich Ramírez Sánchez; Willy Brandt, Chancellor of West Germany; and Subcomandante Marcos, spokesman of the Zapatista Army of National Liberation (EZLN). During Lehi's underground fight against the British in Mandatory Palestine, the organization's commander Yitzchak Shamir (later Prime Minister of Israel) adopted the nom de guerre "Michael", in honour of Ireland's Michael Collins.
Revolutionaries and resistance leaders, such as Lenin, Trotsky, Golda Meir, Philippe Leclerc de Hauteclocque, and Josip Broz Tito, often adopted their noms de guerre as their proper names after the struggle. George Grivas, the Greek-Cypriot EOKA militant, adopted the nom de guerre Digenis (Διγενής). In the French Foreign Legion, recruits can adopt a pseudonym to break with their past lives. Mercenaries have long used "noms de guerre", sometimes even multiple identities, depending on the country, conflict, and circumstance. Some of the most familiar noms de guerre today are the kunya used by Islamic mujahideen. These take the form of a teknonym, either literal or figurative.
Online activity
Individuals using a computer online may adopt or be required to use a form of pseudonym known as a "handle" (a term deriving from CB slang), "user name", "login name", "avatar", or, sometimes, "screen name", "gamertag", "IGN (in-game name)", or "nickname". On the Internet, pseudonymous remailers use cryptography to achieve persistent pseudonymity, so that two-way communication can be achieved, and reputations can be established, without linking physical identities to their respective pseudonyms. (In computing, aliasing also refers to the use of multiple names for the same data location.)
More sophisticated cryptographic systems, such as anonymous digital credentials, enable users to communicate pseudonymously (i.e., by identifying themselves by means of pseudonyms). In well-defined abuse cases, a designated authority may be able to revoke the pseudonyms and reveal the individuals' real identity.
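A minimal sketch of persistent pseudonymity is shown below: the pseudonym is derived from a public key, and anything signed with the matching private key is linkable to that pseudonym but not to a physical identity. It assumes the third-party Python "cryptography" package; real remailer and credential systems add mixing, revocation, and many other protections.

```python
# Sketch: a pseudonym derived from a signing key pair (illustrative only).
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives import serialization

# The key pair is the user's long-term secret; the pseudonym is derived
# from the public half only.
signing_key = ed25519.Ed25519PrivateKey.generate()
public_bytes = signing_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
pseudonym = "nym:" + hashlib.sha256(public_bytes).hexdigest()[:16]

# Messages signed under the key accrue to the pseudonym's reputation
# without revealing who holds it.
message = b"posted under a pseudonym"
signature = signing_key.sign(message)
signing_key.public_key().verify(signature, message)  # raises if forged
print(pseudonym)
```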
Use of pseudonyms is common among professional eSports players, despite the fact that many professional games are played on LAN.
Pseudonymity has become an important phenomenon on the Internet and other computer networks. In computer networks, pseudonyms possess varying degrees of anonymity, ranging from highly linkable public pseudonyms (the link between the pseudonym and a human being is publicly known or easy to discover), through potentially linkable non-public pseudonyms (the link is known to system operators but is not publicly disclosed), to unlinkable pseudonyms (the link is not known to system operators and cannot be determined). For example, a true anonymous remailer enables Internet users to establish unlinkable pseudonyms; those that employ non-public pseudonyms (such as the now-defunct Penet remailer) are called pseudonymous remailers.
The continuum of unlinkability can also be seen, in part, on Wikipedia. Some registered users make no attempt to disguise their real identities (for example, by placing their real name on their user page). The pseudonym of unregistered users is their IP address, which can, in many cases, easily be linked to them. Other registered users prefer to remain anonymous and do not disclose identifying information. However, in certain cases, Wikipedia's privacy policy permits system administrators to consult the server logs to determine the IP address, and perhaps the true name, of a registered user. It is possible, in theory, to create an unlinkable Wikipedia pseudonym by using an open proxy, a Web server that disguises the user's IP address. But most open proxy addresses are blocked indefinitely due to their frequent use by vandals. Additionally, Wikipedia's public record of a user's interest areas, writing style, and argumentative positions may still establish an identifiable pattern.
System operators (sysops) at sites offering pseudonymity, such as Wikipedia, are not likely to build unlinkability into their systems, as this would render them unable to obtain information about abusive users quickly enough to stop vandalism and other undesirable behaviors. Law enforcement personnel, fearing an avalanche of illegal behavior, are equally unenthusiastic. Still, some users and privacy activists like the American Civil Liberties Union believe that Internet users deserve stronger pseudonymity so that they can protect themselves against identity theft, illegal government surveillance, stalking, and other unwelcome consequences of Internet use (including unintentional disclosures of their personal information and doxing, as discussed in the next section). Their views are supported by laws in some nations (such as Canada) that guarantee citizens a right to speak using a pseudonym. This right does not, however, give citizens the right to demand publication of pseudonymous speech on equipment they do not own.
Confidentiality
Most Web sites that offer pseudonymity retain information about users. These sites are often susceptible to unauthorized intrusions into their non-public database systems. For example, in 2000, a Welsh teenager obtained information about more than 26,000 credit card accounts, including that of Bill Gates. In 2003, VISA and MasterCard announced that intruders obtained information about 5.6 million credit cards. Sites that offer pseudonymity are also vulnerable to confidentiality breaches. In a study of a Web dating service and a pseudonymous remailer, University of Cambridge researchers discovered that the systems used by these Web sites to protect user data could be easily compromised, even if the pseudonymous channel is protected by strong encryption. Typically, the protected pseudonymous channel exists within a broader framework in which multiple vulnerabilities exist. Pseudonym users should bear in mind that, given the current state of Web security engineering, their true names may be revealed at any time.
Online reputations
Pseudonymity is an important component of the reputation systems found in online auction services (such as eBay), discussion sites (such as Slashdot), and collaborative knowledge development sites (such as Wikipedia). A pseudonymous user who has acquired a favorable reputation gains the trust of other users. When users believe that they will be rewarded by acquiring a favorable reputation, they are more likely to behave in accordance with the site's policies.
If users can obtain new pseudonymous identities freely or at a very low cost, reputation-based systems are vulnerable to whitewashing attacks, also called serial pseudonymity, in which abusive users continuously discard their old identities and acquire new ones in order to escape the consequences of their behavior: "On the Internet, nobody knows that yesterday you were a dog, and therefore should be in the doghouse today." Users of Internet communities who have been banned only to return with new identities are called sock puppets. Whitewashing is one specific form of Sybil attack on distributed systems.
The social cost of cheaply discarded pseudonyms is that experienced users lose confidence in new users, and may subject new users to abuse until they establish a good reputation. System operators may need to remind experienced users that most newcomers are well-intentioned. Concerns have also been expressed about sock puppets exhausting the supply of easily remembered usernames. In addition, a recent research paper demonstrated that people behave in a potentially more aggressive manner when using pseudonyms/nicknames (due to the online disinhibition effect) as opposed to being completely anonymous. In contrast, research by the blog comment hosting service Disqus found pseudonymous users contributed the "highest quantity and quality of comments", where "quality" is based on an aggregate of likes, replies, flags, spam reports, and comment deletions, and found that users trusted pseudonyms and real names equally.
Researchers at the University of Cambridge showed that pseudonymous comments tended to be more substantive and engaged with other users in explanations, justifications, and chains of argument, and less likely to use insults, than either fully anonymous or real name comments. Proposals have been made to raise the costs of obtaining new identities, such as by charging a small fee or requiring e-mail confirmation. Academic research has proposed cryptographic methods to pseudonymize social media identities or government-issued identities, to accrue and use anonymous reputation in online forums, or to obtain one-per-person and hence less readily-discardable pseudonyms periodically at physical-world pseudonym parties. Others point out that Wikipedia's success is attributable in large measure to its nearly non-existent initial participation costs.
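The sketch below illustrates, with purely invented thresholds, one of the defenses discussed above: a reputation store in which brand-new pseudonyms start below the trust level needed for unmoderated posting, so that discarding an identity carries a real cost.

```python
# Sketch of a reputation store that makes "whitewashing" expensive.
NEW_ACCOUNT_SCORE = -1   # newcomers are treated with mild suspicion
POST_THRESHOLD = 0       # score needed before posts skip moderation

reputation: dict[str, int] = {}

def register(pseudonym: str) -> None:
    # Discarding an old identity forfeits its accumulated score.
    reputation.setdefault(pseudonym, NEW_ACCOUNT_SCORE)

def record_feedback(pseudonym: str, delta: int) -> None:
    reputation[pseudonym] = reputation.get(pseudonym, NEW_ACCOUNT_SCORE) + delta

def can_post_unmoderated(pseudonym: str) -> bool:
    return reputation.get(pseudonym, NEW_ACCOUNT_SCORE) >= POST_THRESHOLD

register("helpful_badger")
record_feedback("helpful_badger", 3)           # upvotes earned over time
print(can_post_unmoderated("helpful_badger"))  # True
register("fresh_sock")
print(can_post_unmoderated("fresh_sock"))      # False: no history yet
```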
Privacy
People seeking privacy often use pseudonyms to make appointments and reservations. Those writing to advice columns in newspapers and magazines may use pseudonyms. Steve Wozniak used a pseudonym when attending the University of California, Berkeley after co-founding Apple Computer, because "I knew I wouldn't have time enough to be an A+ student."
Stage names
When used by an actor, musician, radio disc jockey, model, or other performer or "show business" personality, a pseudonym is called a stage name or, occasionally, a professional name or screen name.
Film, theatre, and related activities
Members of a marginalized ethnic or religious group have often adopted stage names, typically changing their surname or entire name to mask their original background.
Stage names are also used to create a more marketable name, as in the case of Creighton Tull Chaney, who adopted the pseudonym Lon Chaney, Jr., a reference to his famous father Lon Chaney, Sr.
Chris Curtis of Deep Purple fame was christened as Christopher Crummey ("crumby" is UK slang for poor quality). In this and similar cases a stage name is adopted simply to avoid an unfortunate pun.
Pseudonyms are also used to comply with the rules of performing-arts guilds (Screen Actors Guild (SAG), Writers Guild of America, East (WGA), AFTRA, etc.), which do not allow performers to use an existing name, in order to avoid confusion. For example, these rules required film and television actor Michael Fox to add a middle initial and become Michael J. Fox, to avoid being confused with another actor named Michael Fox. This was also true of author and actress Fannie Flagg, who chose this pseudonym because her real name, Patricia Neal, was the name of another well-known actress, and of British actor Stewart Granger, whose real name was James Stewart. The film-making team of Joel and Ethan Coen, for instance, share credit for editing under the alias Roderick Jaynes.
Some stage names are used to conceal a person's identity, such as the pseudonym Alan Smithee, which was used by directors in the Directors Guild of America (DGA) to remove their name from a film they feel was edited or modified beyond their artistic satisfaction. In theatre, the pseudonyms George or Georgina Spelvin, and Walter Plinge are used to hide the identity of a performer, usually when he or she is "doubling" (playing more than one role in the same play).
David Agnew was a name used by the BBC to conceal the identity of a scriptwriter, such as for the Doctor Who serial City of Death, which had three writers, including Douglas Adams, who was at the time of writing the show's Script Editor. In another Doctor Who serial, The Brain of Morbius, writer Terrance Dicks demanded the removal of his name from the credits saying it could go out under a "bland pseudonym". This ended up as Robin Bland.
Music
Musicians and singers may use pseudonyms to collaborate with artists on other labels while avoiding the need to gain permission from their own labels; the artist Jerry Samuels, for example, recorded as Napoleon XIV. Rock singer-guitarist George Harrison played guitar on Cream's song "Badge" using a pseudonym. In classical music, some record companies issued recordings under a nom de disque in the 1950s and 1960s to avoid paying royalties. A number of popular budget LPs of piano music were released under the pseudonym Paul Procopolis. Another example is Paul McCartney's use of the fictional name "Bernard Webb" for Peter and Gordon's song "Woman".
Pseudonyms are used as stage names in heavy metal bands, such as Tracii Guns in LA Guns, Axl Rose and Slash in Guns N' Roses, Mick Mars in Mötley Crüe, Dimebag Darrell in Pantera, or C.C. Deville in Poison. Some such names have additional meanings, like that of Brian Hugh Warner, more commonly known as Marilyn Manson: Marilyn coming from Marilyn Monroe and Manson from convicted serial killer Charles Manson. Jacoby Shaddix of Papa Roach went under the name "Coby Dick" during the Infest era. He changed back to his birth name when lovehatetragedy was released.
David Johansen, front man for the hard rock band New York Dolls, recorded and performed pop and lounge music under the pseudonym Buster Poindexter in the late 1980s and early 1990s. The music video for Poindexter's debut single, Hot Hot Hot, opens with a monologue from Johansen where he notes his time with the New York Dolls and explains his desire to create more sophisticated music.
Ross Bagdasarian, Sr., creator of Alvin and the Chipmunks, wrote original songs, arranged, and produced the records under his real name, but performed on them as David Seville. He also wrote songs as Skipper Adams. Danish pop pianist Bent Fabric, whose full name is Bent Fabricius-Bjerre, wrote his biggest instrumental hit "Alley Cat" as Frank Bjorn.
For a time, the musician Prince used an unpronounceable "Love Symbol" as a pseudonym ("Prince" is his actual first name rather than a stage name). He wrote the song "Sugar Walls" for Sheena Easton as "Alexander Nevermind" and "Manic Monday" for The Bangles as "Christopher Tracy". (He also produced albums early in his career as "Jamie Starr").
Many Italian-American singers have used stage names, as their birth names were difficult to pronounce or considered too ethnic for American tastes. Singers changing their names included Dean Martin (born Dino Paul Crocetti), Connie Francis (born Concetta Franconero), Frankie Valli (born Francesco Castelluccio), Tony Bennett (born Anthony Benedetto), and Lady Gaga (born Stefani Germanotta).
In 2009, the British rock band Feeder briefly changed its name to Renegades so it could play a whole show featuring a set list in which 95 per cent of the songs played were from their forthcoming new album of the same name, with none of their singles included. Front man Grant Nicholas felt that if they played as Feeder, there would be uproar over him not playing any of the singles, so used the pseudonym as a hint. A series of small shows were played in 2010, at 250- to 1,000-capacity venues with the plan not to say who the band really are and just announce the shows as if they were a new band.
In many cases, hip-hop and rap artists prefer to use pseudonyms that represent some variation of their name, personality, or interests. Examples include Iggy Azalea (her stage name is a combination of her dog's name, Iggy, and her home street in Mullumbimby, Azalea Street), Ol' Dirty Bastard (known under at least six aliases), Diddy (previously known at various times as Puffy, P. Diddy, and Puff Daddy), Ludacris, Flo Rida (whose stage name is a tribute to his home state, Florida), British-Jamaican hip-hop artist Stefflon Don (real name Stephanie Victoria Allen), LL Cool J, and Chingy. Black metal artists also adopt pseudonyms, usually symbolizing dark values, such as Nocturno Culto, Gaahl, Abbath, and Silenoz. In punk and hardcore punk, singers and band members often replace real names with tougher-sounding stage names such as Sid Vicious (real name John Simon Ritchie) of the late 1970s band Sex Pistols and "Rat" of the early 1980s band The Varukers and the 2000s re-formation of Discharge. The punk rock band The Ramones had every member take the last name of Ramone.
Henry John Deutschendorf Jr., an American singer-songwriter, used the stage name John Denver. The Australian country musician born Robert Lane changed his name to Tex Morton. Reginald Kenneth Dwight legally changed his name in 1972 to Elton John.
See also
Alter ego
Anonymity
Anonymous post
Anonymous remailer
Bugō
Courtesy name
Code name
Confidentiality
Data haven
Digital signature
Friend-to-friend
Heteronym
Hypocorism
John Doe
List of Latinised names
List of placeholder names by language
List of pseudonyms
List of pseudonyms used in the American Constitutional debates
List of stage names
Mononymous person
Nickname
Nym server
Nymwars
Onion routing
Penet.fi
Placeholder name
Placeholder names in cryptography
Pseudepigrapha
Pseudonymization
Pseudonymous Bosch
Pseudonymous remailer
Public key encryption
Secret identity
Notes
Sources
Peschke, Michael. 2006. International Encyclopedia of Pseudonyms. Detroit: Gale.
Room, Adrian. 2010. Dictionary of Pseudonyms: 13,000 Assumed Names and Their Origins. 5th rev. ed. Jefferson, N.C.: McFarland & Co.
External links
A site with pseudonyms for celebrities and entertainers
Another list of pseudonyms
The U.S. copyright status of pseudonyms
Anonymity Bibliography - A bibliography on anonymity and pseudonymity, with hyperlinks.
Anonymity Network - Describes an architecture for anonymous Web browsing.
Electronic Frontier Foundation (EFF) Anonymity/Pseudonymity Archive
The Real Name Fallacy - "Not only would removing anonymity fail to consistently improve online community behavior – forcing real names in online communities could also increase discrimination and worsen harassment." with 30 references
Semantics
Word play
Applications of cryptography
Anonymity |
40642 | https://en.wikipedia.org/wiki/NeXTSTEP | NeXTSTEP | NeXTSTEP is a discontinued object-oriented, multitasking operating system based on the Mach kernel and the UNIX-derived BSD. It was developed by NeXT Computer in the late 1980s and early 1990s and was initially used for its range of proprietary workstation computers such as the NeXTcube. It was later ported to several other computer architectures.
Although relatively unsuccessful at the time, it attracted interest from computer scientists and researchers. It was used as the original platform for the development of the Electronic AppWrapper, the first commercial electronic software distribution catalog to collectively manage encryption and provide digital rights for application software and digital media, a forerunner of the modern "app store" concept. It was also the platform on which Tim Berners-Lee created the first web browser, and on which id Software developed the video games Doom and Quake.
In 1996, NeXT was acquired by Apple Computer. The NeXTSTEP platform and OpenStep later became components of the Unix-based architecture of Mac OS X (now macOS) — a successor to the classic Mac OS that leveraged a combination of Unix supplemented by NeXTSTEP components and Apple's own technologies. Unix derivatives incorporating NeXTSTEP would eventually power all of Apple's platforms, including iPhone.
Overview
NeXTSTEP (also stylized as NeXTstep, NeXTStep, and NEXTSTEP) is a combination of several parts:
a Unix operating system based on the Mach kernel, plus source code from BSD
Display PostScript and a proprietary windowing engine
the Objective-C language and runtime
an object-oriented (OO) application layer, including several "kits"
development tools for the OO layers.
NeXTSTEP is notable for having been a preeminent implementation of the latter three items. The toolkits offer considerable power, and are the canonical development system for all of the software on the machine.
It introduced the idea of the Dock (carried through OpenStep and into today's macOS) and the Shelf. NeXTSTEP also originated or innovated a large number of other GUI concepts which became common in other operating systems: 3D "chiseled" widgets, large full-color icons, system-wide drag and drop of a wide range of objects beyond file icons, system-wide piped services, real-time scrolling and window dragging, properties dialog boxes called "inspectors", and window modification notices (such as the saved status of a file). The system is among the first general-purpose user interfaces to handle publishing color standards, transparency, sophisticated sound and music processing (through a Motorola 56000 DSP), advanced graphics primitives, internationalization, and modern typography, in a consistent manner across all applications.
Additional kits were added to the product line to make the system more attractive. These include Portable Distributed Objects (PDO), which allow easy remote invocation, and Enterprise Objects Framework, a powerful object-relational database system. The kits made the system particularly interesting to custom application programmers, and NeXTSTEP had a long history in the financial programming community.
History
A preview release of NeXTSTEP (version 0.8) was shown with the launch of the NeXT Computer on October 12, 1988. The first full release, NeXTSTEP 1.0, shipped on September 18, 1989. The last version, 3.3, was released in early 1995, by which time it ran on not only the Motorola 68000 family processors used in NeXT computers, but also on Intel x86, Sun SPARC, and HP PA-RISC-based systems.
NeXTSTEP was later modified to separate the underlying operating system from the higher-level object libraries. The result was the OpenStep API, which ran on multiple underlying operating systems, including NeXT's own OPENSTEP, Windows NT and Solaris. NeXTSTEP's legacy stands today in the form of its direct descendants, Apple's macOS, iOS, watchOS, and tvOS operating systems.
Unix
From day one, the operating system of NeXTSTEP was built upon Mach/BSD. It was initially based on 4.3BSD-Tahoe, changed to 4.3BSD-Reno after the release of NeXTSTEP 3.0, and changed to 4.4BSD during the development of Rhapsody.
Legacy
The first web browser, WorldWideWeb, and the first-ever app store were both invented on the NeXTSTEP platform.
Some features and keyboard shortcuts now commonly found in web browsers can be traced back to NeXTSTEP conventions. The basic layout options of HTML 1.0 and 2.0 are attributable to those features available in NeXT's Text class.
Lighthouse Design Ltd. developed Diagram!, a drawing tool originally called BLT (for Box-and-Line Tool) in which objects (boxes) are connected together using "smart links" (lines) to construct diagrams such as flow charts. This basic design could be enhanced by the simple addition of new links and new documents located anywhere on the local area network, foreshadowing Tim Berners-Lee's initial web prototype, which was written in NeXTSTEP (October–December 1990).
In the 1990s, the pioneering PC games Doom (with its WAD level editor), Doom II, and Quake (with its respective level editor) were developed by id Software on NeXT machines. Other games based on the Doom engine such as Heretic and its sequel Hexen by Raven Software as well as Strife by Rogue Entertainment were also developed on NeXT hardware using id's tools.
Altsys made a NeXTSTEP application called Virtuoso, version 2 of which was ported to Mac OS and Windows to become Macromedia FreeHand version 4. The modern "Notebook" interface for Mathematica, and the advanced spreadsheet Lotus Improv, were developed using NeXTSTEP. The software that controlled MCI's Friends and Family calling plan program was developed using NeXTSTEP.
About the time of the release of NeXTSTEP 3.2, NeXT partnered with Sun Microsystems to develop OpenStep. It is the product of an effort to separate the underlying operating system from the higher-level object libraries to create a cross-platform object-oriented API standard derived from NeXTSTEP. The OpenStep API targets multiple underlying operating systems, including NeXT's own OPENSTEP. Implementations of that standard were released for Sun's Solaris, Windows NT, and NeXT's version of the Mach kernel. NeXT's implementation is called "OPENSTEP for Mach" and its first release (4.0) superseded NeXTSTEP 3.3 on NeXT, Sun, and Intel IA-32 systems.
Following an announcement on December 20, 1996, Apple Computer acquired NeXT on February 4, 1997, for $429 million. Based upon the "OPENSTEP for Mach" operating system, and developing the OPENSTEP API to become Cocoa, Apple created the basis of Mac OS X, and eventually, in turn, of iOS/iPadOS, watchOS, and tvOS.
A free software implementation of the OpenStep standard, GNUstep, also exists.
Release history
Versions up to 4.1 are general releases. OPENSTEP 4.2 pre-release 2 is a bug-fix release published by Apple and supported for five years after its September 1997 release.
See also
OpenStep, the object-oriented application programming interface derived from NeXTSTEP
GNUstep, an open-source implementation of the Cocoa and OpenStep APIs
Window Maker, a window manager designed to emulate the NeXT GUI for the X Window System
Bundle (macOS)
Miller Columns, the method of directory browsing that NeXTSTEP's File Viewer used
Multi-architecture binary
NeXT character set
Previous, an emulator for NeXT hardware capable of running some versions of NeXTSTEP
References
A complete guide to the confusing series of names applied to the system
External links
NeXTComputers.org
The Next Step BYTE Magazine 14-03, Object Oriented Programming with NextStep
A modern NeXTSTEP-inspired desktop environment.
1989 software
Berkeley Software Distribution
Discontinued operating systems
Mach (kernel)
NeXT
Object-oriented operating systems
Unix variants
Window-based operating systems |
40684 | https://en.wikipedia.org/wiki/Access%20control | Access control | In the fields of physical security and information security, access control (AC) is the selective restriction of access to a place or other resource, while access management describes the process. The act of accessing may mean consuming, entering, or using. Permission to access a resource is called authorization.
Locks and login credentials are two analogous mechanisms of access control.
Physical security
Geographical access control may be enforced by personnel (e.g. border guard, bouncer, ticket checker), or with a device such as a turnstile. There may be fences to avoid circumventing this access control. An alternative to access control in the strict sense (physically controlling access itself) is a system of checking authorized presence; see, e.g., Ticket controller (transportation). A variant is exit control, e.g. of a shop (checkout) or a country.
The term access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Physical access control can be achieved by a human (a guard, bouncer, or receptionist), through mechanical means such as locks and keys, or through technological means such as access control systems like the mantrap. Within these environments, physical key management may also be employed as a means of further managing and monitoring access to mechanically keyed areas or access to certain small assets.
Physical access control is a matter of who, where, and when. An access control system determines who is allowed to enter or exit, where they are allowed to exit or enter, and when they are allowed to enter or exit. Historically, this was partially accomplished through keys and locks. When a door is locked, only someone with a key can enter through the door, depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates. Mechanical locks and keys do not provide records of the key used on any specific door, and the keys can be easily copied or transferred to an unauthorized person. When a mechanical key is lost or the key holder is no longer authorized to use the protected area, the locks must be re-keyed.
Electronic access control
Electronic access control (EAC) uses computers to solve the limitations of mechanical locks and keys. A wide range of credentials can be used to replace mechanical keys. The electronic access control system grants access based on the credential presented. When access is granted, the door is unlocked for a predetermined time and the transaction is recorded. When access is refused, the door remains locked and the attempted access is recorded. The system will also monitor the door and alarm if the door is forced open or held open too long after being unlocked.
When a credential is presented to a reader, the reader sends the credential's information, usually a number, to a control panel, a highly reliable processor. The control panel compares the credential's number to an access control list, grants or denies the presented request, and sends a transaction log to a database. When access is denied based on the access control list, the door remains locked. If there is a match between the credential and the access control list, the control panel operates a relay that in turn unlocks the door. The control panel also ignores a door open signal to prevent an alarm. Often the reader provides feedback, such as a flashing red LED when access is denied and a flashing green LED when access is granted.
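The decision the panel makes can be sketched as a lookup against an access control list, as below; the door names, schedule, and log format are illustrative assumptions rather than any vendor's implementation.

```python
# Sketch of a control panel's grant/deny decision against an ACL.
from datetime import datetime

ACCESS_CONTROL_LIST = {
    # credential number -> doors it may open and permitted hours (24h clock)
    41375: {"doors": {"server_room", "lobby"}, "hours": range(7, 19)},
    52210: {"doors": {"lobby"}, "hours": range(0, 24)},
}

audit_log = []

def present_credential(credential: int, door: str, now: datetime) -> bool:
    entry = ACCESS_CONTROL_LIST.get(credential)
    granted = (
        entry is not None
        and door in entry["doors"]
        and now.hour in entry["hours"]
    )
    audit_log.append((now.isoformat(), credential, door, granted))
    return granted  # True: the panel energizes the relay and unlocks the door

print(present_credential(41375, "server_room", datetime(2024, 5, 3, 9, 30)))  # True
print(present_credential(52210, "server_room", datetime(2024, 5, 3, 9, 30)))  # False
```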
The above description illustrates a single factor transaction. Credentials can be passed around, thus subverting the access control list. For example, Alice has access rights to the server room, but Bob does not. Alice either gives Bob her credential, or Bob takes it; he now has access to the server room. To prevent this, two-factor authentication can be used. In a two factor transaction, the presented credential and a second factor are needed for access to be granted; another factor can be a PIN, a second credential, operator intervention, or a biometric input.
There are three types (factors) of authenticating information:
something the user knows, e.g. a password, pass-phrase or PIN
something the user has, such as smart card or a key fob
something the user is, such as fingerprint, verified by biometric measurement
Passwords are a common means of verifying a user's identity before access is given to information systems. In addition, a fourth factor of authentication is now recognized: someone you know, whereby another person who knows you can provide a human element of authentication in situations where systems have been set up to allow for such scenarios. For example, a user may have their password, but have forgotten their smart card. In such a scenario, if the user is known to designated cohorts, the cohorts may provide their smart card and password, in combination with the extant factor of the user in question, and thus provide two factors for the user with the missing credential, giving three factors overall to allow access.
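Combining two of these factors, the sketch below pairs a card number ("something the user has") with a PIN ("something the user knows"); the card numbers and PIN handling are invented and simplified (a production system would at least salt the stored PIN hashes).

```python
# Sketch of a two-factor check: card number plus PIN.
import hashlib

def pin_hash(pin: str) -> str:
    return hashlib.sha256(pin.encode("utf-8")).hexdigest()

ENROLLED = {
    41375: pin_hash("4821"),   # card number -> hash of the holder's PIN
}

def two_factor_check(card_number: int, pin: str) -> bool:
    expected = ENROLLED.get(card_number)
    # Both factors must match; a borrowed or stolen card alone is useless.
    return expected is not None and expected == pin_hash(pin)

print(two_factor_check(41375, "4821"))  # True: card plus correct PIN
print(two_factor_check(41375, "0000"))  # False: the card alone is not enough
```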
Credential
A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being that enables an individual access to a given physical facility or computer-based information system. Typically, credentials can be something a person knows (such as a number or PIN), something they have (such as an access badge), something they are (such as a biometric feature), something they do (measurable behavioural patterns), or some combination of these items. This is known as multi-factor authentication. The typical credential is an access card or key-fob, and newer software can also turn users' smartphones into access devices.
There are many card technologies including magnetic stripe, bar code, Wiegand, 125 kHz proximity, 26-bit card-swipe, contact smart cards, and contactless smart cards. Also available are key-fobs, which are more compact than ID cards, and attach to a key ring. Biometric technologies include fingerprint, facial recognition, iris recognition, retinal scan, voice, and hand geometry. The built-in biometric technologies found on newer smartphones can also be used as credentials in conjunction with access software running on mobile devices. In addition to older more traditional card access technologies, newer technologies such as Near field communication (NFC), Bluetooth low energy or Ultra-wideband (UWB) can also communicate user credentials to readers for system or building access.
Access control system components
Components of an access control system include:
An access control panel (also known as a controller)
An access-controlled entry, such as a door, turnstile, parking gate, elevator, or other physical barrier
A reader installed near the entry. (In cases where the exit is also controlled, a second reader is used on the opposite side of the entry.)
Locking hardware, such as electric door strikes and electromagnetic locks
A magnetic door switch for monitoring door position
Request-to-exit (RTE) devices for allowing egress. When a RTE button is pushed, or the motion detector detects motion at the door, the door alarm is temporarily ignored while the door is opened. Exiting a door without having to electrically unlock the door is called mechanical free egress. This is an important safety feature. In cases where the lock must be electrically unlocked on exit, the request-to-exit device also unlocks the door.
Access control topology
Access control decisions are made by comparing the credentials to an access control list. This look-up can be done by a host or server, by an access control panel, or by a reader. The development of access control systems has seen a steady push of the look-up out from a central host to the edge of the system, or the reader. The predominant topology circa 2009 is hub and spoke, with a control panel as the hub and the readers as the spokes. The look-up and control functions are performed by the control panel. The spokes communicate through a serial connection, usually RS-485. Some manufacturers are pushing the decision making to the edge by placing a controller at the door. The controllers are IP-enabled, and connect to a host and database using standard networks.
Types of readers
Access control readers may be classified by the functions they are able to perform:
Basic (non-intelligent) readers: simply read card number or PIN, and forward it to a control panel. In case of biometric identification, such readers output the ID number of a user. Typically, the Wiegand protocol is used for transmitting data to the control panel (a decoding sketch of the common 26-bit frame follows this list), but other options such as RS-232, RS-485 and Clock/Data are not uncommon. This is the most popular type of access control reader. Examples of such readers are RF Tiny by RFLOGICS, ProxPoint by HID, and P300 by Farpointe Data.
Semi-intelligent readers: have all inputs and outputs necessary to control door hardware (lock, door contact, exit button), but do not make any access decisions. When a user presents a card or enters a PIN, the reader sends information to the main controller, and waits for its response. If the connection to the main controller is interrupted, such readers stop working, or function in a degraded mode. Usually semi-intelligent readers are connected to a control panel via an RS-485 bus. Examples of such readers are InfoProx Lite IPL200 by CEM Systems, and AP-510 by Apollo.
Intelligent readers: have all inputs and outputs necessary to control door hardware; they also have memory and processing power necessary to make access decisions independently. Like semi-intelligent readers, they are connected to a control panel via an RS-485 bus. The control panel sends configuration updates, and retrieves events from the readers. Examples of such readers could be InfoProx IPO200 by CEM Systems, and AP-500 by Apollo. There is also a new generation of intelligent readers referred to as "IP readers". Systems with IP readers usually do not have traditional control panels, and readers communicate directly to a PC that acts as a host.
Some readers may have additional features such as an LCD and function buttons for data collection purposes (i.e. clock-in/clock-out events for attendance reports), camera/speaker/microphone for intercom, and smart card read/write support.
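As mentioned for basic readers above, the widely documented "standard 26-bit" Wiegand frame consists of an even-parity bit, an 8-bit facility code, a 16-bit card number, and an odd-parity bit. The sketch below encodes and decodes that layout; it is illustrative only and ignores the many proprietary card formats found in the field.

```python
# Sketch: encode/decode the standard 26-bit Wiegand frame.
def encode_wiegand26(facility_code: int, card_number: int) -> str:
    body = format(facility_code, "08b") + format(card_number, "016b")
    even = str(sum(map(int, body[:12])) % 2)        # parity over bits 2-13
    odd = str((sum(map(int, body[12:])) + 1) % 2)   # parity over bits 14-25
    return even + body + odd

def decode_wiegand26(bits: str) -> tuple[int, int]:
    if len(bits) != 26 or set(bits) - {"0", "1"}:
        raise ValueError("expected 26 binary digits")
    even_parity, body, odd_parity = int(bits[0]), bits[1:25], int(bits[25])
    if even_parity != sum(map(int, body[:12])) % 2:
        raise ValueError("even parity check failed")
    if odd_parity != (sum(map(int, body[12:])) + 1) % 2:
        raise ValueError("odd parity check failed")
    return int(body[:8], 2), int(body[8:], 2)   # (facility code, card number)

frame = encode_wiegand26(18, 24691)
print(frame, decode_wiegand26(frame))  # 26 bits, then (18, 24691)
```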
Access control system topologies
1. Serial controllers. Controllers are connected to a host PC via a serial RS-485 communication line (or via 20mA current loop in some older systems). External RS-232/485 converters or internal RS-485 cards have to be installed, as standard PCs do not have RS-485 communication ports.
Advantages:
RS-485 standard allows long cable runs, up to 4000 feet (1200 m)
Relatively short response time. The maximum number of devices on an RS-485 line is limited to 32, which means that the host can frequently request status updates from each device, and display events almost in real time.
High reliability and security as the communication line is not shared with any other systems.
Disadvantages:
RS-485 does not allow Star-type wiring unless splitters are used
RS-485 is not well suited for transferring large amounts of data (i.e. configuration and users). The highest possible throughput is 115.2 kbit/sec, but in most systems it is downgraded to 56.2 kbit/sec, or less, to increase reliability.
RS-485 does not allow the host PC to communicate with several controllers connected to the same port simultaneously. Therefore, in large systems, transfers of configuration, and users to controllers may take a very long time, interfering with normal operations.
Controllers cannot initiate communication in case of an alarm. The host PC acts as a master on the RS-485 communication line, and controllers have to wait until they are polled.
Special serial switches are required, in order to build a redundant host PC setup.
Separate RS-485 lines have to be installed, instead of using an already existing network infrastructure.
Cable that meets RS-485 standards is significantly more expensive than regular Category 5 UTP network cable.
Operation of the system is highly dependent on the host PC. In the case that the host PC fails, events from controllers are not retrieved, and functions that require interaction between controllers (i.e. anti-passback) stop working.
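A host-side polling loop of the kind described above might look like the following sketch; it assumes the third-party pyserial package, a converter at /dev/ttyUSB0, and an entirely hypothetical request frame, since real panels speak vendor-specific protocols.

```python
# Sketch of an RS-485 master polling up to 32 controllers in turn.
import serial  # pip install pyserial

def poll_controllers(port_name: str = "/dev/ttyUSB0", cycles: int = 3) -> None:
    # One shared line, master/slave polling: controllers answer only
    # when addressed, so the host must ask each one in turn.
    with serial.Serial(port_name, baudrate=9600, timeout=0.2) as line:
        for _ in range(cycles):
            for address in range(1, 33):                  # at most 32 devices per line
                line.write(bytes([0x02, address, 0x03]))  # hypothetical "status?" frame
                reply = line.read(16)                     # empty if the device is silent
                if reply:
                    print(f"controller {address}: {reply.hex()}")

# poll_controllers()  # requires an actual RS-485 adapter to run
```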
2. Serial main and sub-controllers. All door hardware is connected to sub-controllers (a.k.a. door controllers or door interfaces). Sub-controllers usually do not make access decisions, and instead forward all requests to the main controllers. Main controllers usually support from 16 to 32 sub-controllers.
Advantages:
Work load on the host PC is significantly reduced, because it only needs to communicate with a few main controllers.
The overall cost of the system is lower, as sub-controllers are usually simple and inexpensive devices.
All other advantages listed in the first paragraph apply.
Disadvantages:
Operation of the system is highly dependent on main controllers. In case one of the main controllers fails, events from its sub-controllers are not retrieved, and functions that require interaction between sub-controllers (i.e. anti-passback) stop working.
Some models of sub-controllers (usually lower cost) do not have the memory or processing power to make access decisions independently. If the main controller fails, sub-controllers change to degraded mode in which doors are either completely locked or unlocked, and no events are recorded. Such sub-controllers should be avoided, or used only in areas that do not require high security.
Main controllers tend to be expensive, therefore such a topology is not very well suited for systems with multiple remote locations that have only a few doors.
All other RS-485-related disadvantages listed in the first paragraph apply.
3. Serial main controllers & intelligent readers. All door hardware is connected directly to intelligent or semi-intelligent readers. Readers usually do not make access decisions, and forward all requests to the main controller. Only if the connection to the main controller is unavailable will the readers use their internal database to make access decisions and record events. Semi-intelligent readers that have no database and cannot function without the main controller should be used only in areas that do not require high security. Main controllers usually support from 16 to 64 readers. All advantages and disadvantages are the same as the ones listed in the second paragraph.
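The offline fallback just described can be sketched as follows; the controller interface and the cached credential list are invented for illustration.

```python
# Sketch of an intelligent reader falling back to its local database.
CACHED_CREDENTIALS = {41375, 52210}   # periodically pushed by the main controller

def controller_decision(credential: int) -> bool:
    raise ConnectionError("main controller offline")  # simulate an outage

def reader_decision(credential: int) -> bool:
    try:
        return controller_decision(credential)    # normal online path
    except ConnectionError:
        # Degraded mode: decide from the local cache and buffer the event
        # for upload once the controller is reachable again.
        return credential in CACHED_CREDENTIALS

print(reader_decision(41375))  # True, granted from the local cache
print(reader_decision(99999))  # False
```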
4. Serial controllers with terminal servers. In spite of the rapid development and increasing use of computer networks, access control manufacturers remained conservative, and did not rush to introduce network-enabled products. When pressed for solutions with network connectivity, many chose the option requiring less effort: the addition of a terminal server, a device that converts serial data for transmission via LAN or WAN.
Advantages:
Allows utilizing the existing network infrastructure for connecting separate segments of the system.
Provides a convenient solution in cases when the installation of an RS-485 line would be difficult or impossible.
Disadvantages:
Increases complexity of the system.
Creates additional work for installers: usually terminal servers have to be configured independently, and not through the interface of the access control software.
The serial communication link between the controller and the terminal server acts as a bottleneck: even though the data between the host PC and the terminal server travels at 10/100/1000 Mbit/sec network speed, it must slow down to the serial speed of 115.2 kbit/sec or less. There are also additional delays introduced in the process of conversion between serial and network data.
All the RS-485-related advantages and disadvantages also apply.
5. Network-enabled main controllers. The topology is nearly the same as described in the second and third paragraphs. The same advantages and disadvantages apply, but the on-board network interface offers a couple of valuable improvements. Transmission of configuration and user data to the main controllers is faster, and may be done in parallel. This makes the system more responsive, and does not interrupt normal operations. No special hardware is required in order to achieve redundant host PC setup: in the case that the primary host PC fails, the secondary host PC may start polling network controllers. The disadvantages introduced by terminal servers (listed in the fourth paragraph) are also eliminated.
6. IP controllers. Controllers are connected to a host PC via Ethernet LAN or WAN.
Advantages:
An existing network infrastructure is fully utilized, and there is no need to install new communication lines.
There are no limitations regarding the number of controllers (unlike the 32-per-line limit of RS-485).
Special RS-485 installation, termination, grounding and troubleshooting knowledge is not required.
Communication with the controllers may be done at the full network speed, which is important if transferring a lot of data (databases with thousands of users, possibly including biometric records).
In case of an alarm, controllers may initiate connection to the host PC. This ability is important in large systems, because it serves to reduce network traffic caused by unnecessary polling.
Simplifies installation of systems consisting of multiple sites that are separated by large distances. A basic Internet link is sufficient to establish connections to the remote locations.
Wide selection of standard network equipment is available to provide connectivity in various situations (fiber, wireless, VPN, dual path, PoE)
Disadvantages:
The system becomes susceptible to network related problems, such as delays in case of heavy traffic and network equipment failures.
Access controllers and workstations may become accessible to hackers if the network of the organization is not well protected. This threat may be eliminated by physically separating the access control network from the network of the organization. Most IP controllers utilize either Linux platform or proprietary operating systems, which makes them more difficult to hack. Industry standard data encryption is also used.
Maximum distance from a hub or a switch to the controller (if using a copper cable) is 100 meters (330 ft).
Operation of the system is dependent on the host PC. In case the host PC fails, events from controllers are not retrieved and functions that require interaction between controllers (i.e. anti-passback) stop working. Some controllers, however, have a peer-to-peer communication option in order to reduce dependency on the host PC.
7. IP readers. Readers are connected to a host PC via Ethernet LAN or WAN.
Advantages:
Most IP readers are PoE capable. This feature makes it very easy to provide battery backed power to the entire system, including the locks and various types of detectors (if used).
IP readers eliminate the need for controller enclosures.
There is no wasted capacity when using IP readers (e.g. a 4-door controller would have 25% of unused capacity if it was controlling only 3 doors).
IP reader systems scale easily: there is no need to install new main or sub-controllers.
Failure of one IP reader does not affect any other readers in the system.
Disadvantages:
In order to be used in high-security areas, IP readers require special input/output modules to eliminate the possibility of intrusion by accessing lock and/or exit button wiring. Not all IP reader manufacturers have such modules available.
Being more sophisticated than basic readers, IP readers are also more expensive and sensitive, therefore they should not be installed outdoors in areas with harsh weather conditions, or high probability of vandalism, unless specifically designed for exterior installation. A few manufacturers make such models.
The advantages and disadvantages of IP controllers apply to the IP readers as well.
Security risks
The most common security risk of intrusion through an access control system is by simply following a legitimate user through a door, and this is referred to as tailgating. Often the legitimate user will hold the door for the intruder. This risk can be minimized through security awareness training of the user population or more active means such as turnstiles. In very high-security applications this risk is minimized by using a sally port, sometimes called a security vestibule or mantrap, where operator intervention is required presumably to assure valid identification.
The second most common risk is from levering a door open. This is relatively difficult on properly secured doors with strikes or high holding force magnetic locks. Fully implemented access control systems include forced door monitoring alarms. These vary in effectiveness, usually failing from high false positive alarms, poor database configuration, or lack of active intrusion monitoring. Most newer access control systems incorporate some type of door prop alarm to inform system administrators of a door left open longer than a specified length of time.
The third most common security risk is natural disasters. In order to mitigate risk from natural disasters, the structure of the building, down to the quality of the network and computer equipment, is vital. From an organizational perspective, the leadership will need to adopt and implement an All Hazards Plan, or Incident Response Plan. The highlights of any incident plan determined by the National Incident Management System must include pre-incident planning, during-incident actions, disaster recovery, and after-action review.
Similar to levering is crashing through cheap partition walls. In shared tenant spaces, the divisional wall is a vulnerability. A vulnerability along the same lines is the breaking of sidelights.
Spoofing locking hardware is fairly simple and more elegant than levering. A strong magnet can operate the solenoid controlling bolts in electric locking hardware. Motor locks, more prevalent in Europe than in the US, are also susceptible to this attack using a doughnut-shaped magnet. It is also possible to manipulate the power to the lock either by removing or adding current, although most Access Control systems incorporate battery back-up systems and the locks are almost always located on the secure side of the door.
Access cards themselves have proven vulnerable to sophisticated attacks. Enterprising hackers have built portable readers that capture the card number from a user's proximity card. The hacker simply walks by the user, reads the card, and then presents the number to a reader securing the door. This is possible because card numbers are sent in the clear, with no encryption being used. To counter this, dual authentication methods, such as a card plus a PIN, should always be used.
Many access control credentials' unique serial numbers are programmed in sequential order during manufacturing. In what is known as a sequential attack, an intruder who has obtained a credential once used in the system can simply increment or decrement the serial number until they find a credential that is currently authorized in the system. Ordering credentials with random unique serial numbers is recommended to counter this threat.
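A minimal sketch of the recommended countermeasure, assuming only Python's standard secrets module: serial numbers are drawn at random from a large space, so a captured credential number gives an attacker no usable neighbouring values to increment or decrement.

    import secrets

    def issue_credential_ids(count: int, id_bits: int = 64) -> list[int]:
        """Issue credential serial numbers drawn at random from a 2**id_bits space,
        so knowledge of one valid number reveals nothing about the others
        (unlike sequential assignment, which enables increment/decrement guessing)."""
        issued: set[int] = set()
        while len(issued) < count:
            issued.add(secrets.randbits(id_bits))   # cryptographically strong randomness
        return sorted(issued)

    print(issue_credential_ids(3))                  # three non-sequential credential numbers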
Finally, most electric locking hardware still has mechanical keys as a fail-over. Mechanical key locks are vulnerable to bumping.
The need-to-know principle
The need-to-know principle can be enforced with user access controls and authorization procedures, and its objective is to ensure that only authorized individuals gain access to information or systems necessary to undertake their duties.
Computer security
In computer security, general access control includes authentication, authorization, and audit. A more narrow definition of access control would cover only access approval, whereby the system makes a decision to grant or reject an access request from an already authenticated subject, based on what the subject is authorized to access. Authentication and access control are often combined into a single operation, so that access is approved based on successful authentication, or based on an anonymous access token. Authentication methods and tokens include passwords, biometric analysis, physical keys, electronic keys and devices, hidden paths, social barriers, and monitoring by humans and automated systems.
In any access-control model, the entities that can perform actions on the system are called subjects, and the entities representing resources to which access may need to be controlled are called objects (see also Access Control Matrix). Subjects and objects should both be considered as software entities, rather than as human users: any human users can only have an effect on the system via the software entities that they control.
Although some systems equate subjects with user IDs, so that all processes started by a user by default have the same authority, this level of control is not fine-grained enough to satisfy the principle of least privilege, and arguably is responsible for the prevalence of malware in such systems (see computer insecurity).
In some models, for example the object-capability model, any software entity can potentially act as both subject and object.
Access-control models tend to fall into one of two classes: those based on capabilities and those based on access control lists (ACLs).
In a capability-based model, holding an unforgeable reference or capability to an object provides access to the object (roughly analogous to how possession of one's house key grants one access to one's house); access is conveyed to another party by transmitting such a capability over a secure channel.
In an ACL-based model, a subject's access to an object depends on whether its identity appears on a list associated with the object (roughly analogous to how a bouncer at a private party would check an ID to see if a name appears on the guest list); access is conveyed by editing the list. (Different ACL systems have a variety of different conventions regarding who or what is responsible for editing the list and how it is edited.)
Both capability-based and ACL-based models have mechanisms to allow access rights to be granted to all members of a group of subjects (often the group is itself modeled as a subject).
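The difference between the two classes can be sketched in a few lines of Python. This is an illustrative toy rather than a real access-control implementation: the ACL check consults a list attached to the object, while the capability check only asks whether the presenter holds a valid token for it.

    import secrets

    # ACL-style: the object carries a list of subjects allowed to use it.
    acl = {"payroll.db": {"alice", "bob"}}

    def acl_allows(subject: str, obj: str) -> bool:
        return subject in acl.get(obj, set())

    # Capability-style: possession of an unforgeable token conveys access.
    capabilities = {}                            # token -> object it grants access to

    def mint_capability(obj: str) -> str:
        token = secrets.token_hex(16)            # stand-in for an unforgeable reference
        capabilities[token] = obj
        return token

    def capability_allows(token: str, obj: str) -> bool:
        return capabilities.get(token) == obj

    assert acl_allows("alice", "payroll.db") and not acl_allows("mallory", "payroll.db")
    token = mint_capability("payroll.db")
    assert capability_allows(token, "payroll.db")    # access follows the token, not the identity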
Access control systems provide the essential services of authorization, identification and authentication (I&A), access approval, and accountability where:
authorization specifies what a subject can do
identification and authentication ensure that only legitimate subjects can log on to a system
access approval grants access during operations, by association of users with the resources that they are allowed to access, based on the authorization policy
accountability identifies what a subject (or all subjects associated with a user) did
Access control models
Access to accounts can be enforced through many types of controls.
Attribute-based Access Control (ABAC): An access control paradigm whereby access rights are granted to users through the use of policies which evaluate attributes (user attributes, resource attributes and environment conditions).
Discretionary Access Control (DAC): In DAC, the data owner determines who can access specific resources. For example, a system administrator may create a hierarchy of files to be accessed based on certain permissions.
Graph-based Access Control (GBAC): Compared to other approaches like RBAC or ABAC, the main difference is that in GBAC access rights are defined using an organizational query language instead of total enumeration.
History-Based Access Control (HBAC): Access is granted or declined based on the real-time evaluation of a history of activities of the inquiring party, e.g. behavior, time between requests, content of requests. For example, access to a certain service or data source can be granted or declined based on personal behavior, e.g. if the request interval exceeds one query per second.
History-of-Presence Based Access Control (HPBAC): Access control to resources is defined in terms of presence policies that need to be satisfied by presence records stored by the requestor. Policies are usually written in terms of frequency, spread and regularity. An example policy would be "The requestor has made k separate visitations, all within last week, and no two consecutive visitations are apart by more than T hours."
Identity-Based Access Control (IBAC): Using this, network administrators can more effectively manage activity and access based on individual needs.
Lattice-Based Access Control (LBAC): A lattice is used to define the levels of security that an object may have and that a subject may have access to. The subject is only allowed to access an object if the security level of the subject is greater than or equal to that of the object.
Mandatory Access Control (MAC): In MAC, users do not have much freedom to determine who has access to their files. For example, security clearance of users and classification of data (as confidential, secret or top secret) are used as security labels to define the level of trust.
Organization-Based Access Control (OrBAC): The OrBAC model allows the policy designer to define a security policy independently of the implementation.
Role-Based Access Control (RBAC): RBAC allows access based on the job title. RBAC largely eliminates discretion when providing access to objects. For example, a human resources specialist should not have permissions to create network accounts; this should be a role reserved for network administrators (see the sketch after this list).
Rule-Based Access Control (RAC): The RAC method, also referred to as Rule-Based Role-Based Access Control (RB-RBAC), is largely context based. An example of this would be allowing students to use labs only during a certain time of day; it is the combination of students' RBAC-based information system access control with the time-based lab access rules.
Responsibility Based Access Control: Information is accessed based on the responsibilities assigned to an actor or a business role.
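As referenced in the RBAC entry above, the following is a minimal, hypothetical Python sketch of a role-based check; the role and permission names are invented for illustration only.

    ROLE_PERMISSIONS = {
        "network_admin": {"create_account", "reset_password"},
        "hr_specialist": {"view_personnel_record", "update_personnel_record"},
    }
    USER_ROLES = {"dana": {"hr_specialist"}}

    def is_authorized(user: str, permission: str) -> bool:
        # RBAC: access follows from the user's roles, not from per-user grants.
        return any(permission in ROLE_PERMISSIONS.get(role, set())
                   for role in USER_ROLES.get(user, set()))

    assert is_authorized("dana", "view_personnel_record")
    assert not is_authorized("dana", "create_account")   # HR cannot create network accounts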
Telecommunication
In telecommunication, the term access control is defined in U.S. Federal Standard 1037C with the following meanings:
A service feature or technique used to permit or deny use of the components of a communication system.
A technique used to define or restrict the rights of individuals or application programs to obtain data from, or place data onto, a storage device.
The definition or restriction of the rights of individuals or application programs to obtain data from, or place data into, a storage device.
The process of limiting access to the resources of an AIS (Automated Information System) to authorized users, programs, processes, or other systems.
That function performed by the resource controller that allocates system resources to satisfy user requests.
This definition depends on several other technical terms from Federal Standard 1037C.
Attribute accessors
Special public member methods, known as accessors (getters) and mutators (setters), are used to control changes to class variables in order to prevent unauthorized access and data corruption.
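A minimal Python sketch of the idea, using a property as the accessor and a validating setter as the mutator; the Account class and its non-negative-balance invariant are invented for illustration.

    class Account:
        def __init__(self, balance: float = 0.0):
            self._balance = balance              # "private" by convention

        @property
        def balance(self) -> float:              # accessor (getter)
            return self._balance

        @balance.setter
        def balance(self, value: float) -> None: # mutator (setter) guards the invariant
            if value < 0:
                raise ValueError("balance cannot be negative")
            self._balance = value

    acct = Account(10.0)
    acct.balance = 25.0                          # goes through the mutator's validation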
Public policy
In public policy, access control to restrict access to systems ("authorization") or to track or monitor behavior within systems ("accountability") is an implementation feature of using trusted systems for security or social control.
See also
Alarm device, Alarm management, Security alarm
Card reader, Common Access Card, Magnetic stripe card, Proximity card, Smart card, Optical turnstile, Access badge
Castle, Fortification
Computer security, Logical security, .htaccess, Wiegand effect, XACML, Credential
Door security, Lock picking, Lock (security device), Electronic lock, Safe, Safe-cracking, Bank vault
Fingerprint scanner, Photo identification, Biometrics
Identity management, Identity document, OpenID, IP Controller, IP reader
Key management, Key cards
Lock screen
Physical security information management
Physical Security Professional
Prison, Barbed tape, Mantrap
Security, Security engineering, Security lighting, Security management, Security policy
References
U.S. Federal 1037C
U.S. MIL-188
U.S. National Information Systems Security Glossary
Harris, Shon, All-in-one CISSP Exam Guide, 6th Edition, McGraw Hill Osborne, Emeryville, California, 2012.
"Integrated Security Systems Design" – Butterworth/Heinenmann – 2007 – Thomas L. Norman, CPP/PSP/CSC Author
NIST.gov – Computer Security Division – Computer Security Resource Center – ATTRIBUTE BASED ACCESS CONTROL (ABAC) – OVERVIEW
External links
Access Control Markup Language. An OASIS standard language/model for access control. Also XACML.
Identity management
Perimeter security
Physical security |
40922 | https://en.wikipedia.org/wiki/Communications%20security | Communications security | Communications security is the discipline of preventing unauthorized interceptors from accessing telecommunications in an intelligible form, while still delivering content to the intended recipients.
In the North Atlantic Treaty Organization culture, including United States Department of Defense culture, it is often referred to by the abbreviation COMSEC. The field includes cryptographic security, transmission security, emissions security and physical security of COMSEC equipment and associated keying material.
COMSEC is used to protect both classified and unclassified traffic on military communications networks, including voice, video, and data. It is used for both analog and digital applications, and both wired and wireless links.
Voice over secure internet protocol (VOSIP) has become the de facto standard for securing voice communication, replacing the need for Secure Terminal Equipment (STE) in much of NATO, including the U.S.A. USCENTCOM moved entirely to VOSIP in 2008.
Specialties
Cryptographic security: The component of communications security that results from the provision of technically sound cryptosystems and their proper use. This includes ensuring message confidentiality and authenticity.
Emission security (EMSEC): The protection resulting from all measures taken to deny unauthorized persons information of value that might be derived from communications systems and cryptographic equipment intercepts and the interception and analysis of compromising emanations from cryptographic equipment, information systems, and telecommunications systems.
Transmission security (TRANSEC): The component of communications security that results from the application of measures designed to protect transmissions from interception and exploitation by means other than cryptanalysis (e.g. frequency hopping and spread spectrum).
Physical security: The component of communications security that results from all physical measures necessary to safeguard classified equipment, material, and documents from access thereto or observation thereof by unauthorized persons.
Related terms
AKMS = the Army Key Management System
AEK = Algorithmic Encryption Key
CT3 = Common Tier 3
CCI = Controlled Cryptographic Item - equipment which contains COMSEC embedded devices
ACES = Automated Communications Engineering Software
DTD = Data Transfer Device
ICOM = Integrated COMSEC, e.g. a radio with built-in encryption
TEK = Traffic Encryption Key
TED = Trunk Encryption Device such as the WALBURN/KG family
KEK = Key Encryption Key
KPK = Key production key
OWK = Over the Wire Key
OTAR = Over the Air Rekeying
LCMS = Local COMSEC Management Software
KYK-13 = Electronic Transfer Device
KOI-18 = Tape Reader General Purpose
KYX-15 = Electronic Transfer Device
KG-30 = family of COMSEC equipment
TSEC = Telecommunications Security (sometimes erroneously referred to as transmission security or TRANSEC)
SOI = Signal operating instructions
SKL = Simple Key Loader
TPI = Two person integrity
STU-III (obsolete secure phone, replaced by STE)
STE - Secure Terminal Equipment (secure phone)
Types of COMSEC equipment:
Crypto equipment: Any equipment that embodies cryptographic logic or performs one or more cryptographic functions (key generation, encryption, and authentication).
Crypto-ancillary equipment: Equipment designed specifically to facilitate efficient or reliable operation of crypto-equipment, without performing cryptographic functions itself.
Crypto-production equipment: Equipment used to produce or load keying material
Authentication equipment:
DoD Electronic Key Management System
The Electronic Key Management System (EKMS) is a United States Department of Defense (DoD) key management, COMSEC material distribution, and logistics support system. The National Security Agency (NSA) established the EKMS program to supply electronic keys to COMSEC devices in a secure and timely manner, and to provide COMSEC managers with an automated system capable of ordering, generation, production, distribution, storage, security accounting, and access control.
The Army's platform in the four-tiered EKMS, AKMS, automates frequency management and COMSEC management operations. It eliminates paper keying material, hardcopy SOI, and the associated time- and resource-intensive courier distribution. It has four components:
LCMS provides automation for the detailed accounting required for every COMSEC account, and electronic key generation and distribution capability.
ACES is the frequency management portion of AKMS. ACES has been designated by the Military Communications Electronics Board as the joint standard for use by all services in development of frequency management and cryptonet planning.
CT3 with DTD software is a fielded, ruggedized hand-held device that handles, views, stores, and loads SOI, Key, and electronic protection data. DTD provides an improved net-control device to automate crypto-net control operations for communications networks employing electronically keyed COMSEC equipment.
SKL is a hand-held PDA that handles, views, stores, and loads SOI, Key, and electronic protection data.
Key Management Infrastructure (KMI) Program
KMI is intended to replace the legacy Electronic Key Management System to provide a means for securely ordering, generating, producing, distributing, managing, and auditing cryptographic products (e.g., asymmetric keys, symmetric keys, manual cryptographic systems, and cryptographic applications). This system is currently being fielded by Major Commands and variants will be required for non-DoD Agencies with a COMSEC Mission.
See also
Dynamic secrets
Electronics technician (United States Navy)
Information security
Information warfare
List of telecommunications encryption terms
NSA encryption systems
NSA product types
Operations security
Secure communication
Signals intelligence
Traffic analysis
References
National Information Systems Security Glossary
https://web.archive.org/web/20121002192433/http://www.dtic.mil/whs/directives/corres/pdf/466002p.pdf
Cryptography machines
Cryptography
Military communications
Military radio systems
Encryption devices |
40967 | https://en.wikipedia.org/wiki/Core | Core | Core or cores may refer to:
Science and technology
Core (anatomy), everything except the appendages
Core (manufacturing), used in casting and molding
Core (optical fiber), the signal-carrying portion of an optical fiber
Core, the central part of a fruit
Hydrophobic core, the interior zone of a protein
Nuclear reactor core, a portion containing the fuel components
Pit (nuclear weapon) or core, the fissile material in a nuclear weapon
Semiconductor intellectual property core (IP core), a unit of design in ASIC/FPGA electronics and IC manufacturing
Atomic core, an atom with no valence electrons
Geology and astrophysics
Core sample, in Earth science, a sample obtained by coring
Ice core
Core, the central part of a galaxy; see Mass deficit
Core (anticline), the central part of an anticline or syncline
Planetary core, the center of a planet
Earth's inner core
Earth's outer core
Stellar core, the region of a star where nuclear fusion takes place
Solar core
Computing
Core Animation, a data visualization API used in macOS
Core dump, the recorded state of a running program
Intel Core, a family of single-core and multi-core 32-bit and 64-bit CPUs released by Intel
Magnetic core, in electricity and electronics, ferromagnetic material around which wires are wound
Magnetic-core memory, the primary memory technology used before semiconductor memory
Central processing unit (CPU), often called a core when part of a multi-core processor
Multi-core processor, a microprocessor with multiple CPUs on one integrated circuit chip
Server Core, a minimalist Microsoft Windows Server installation option
Mathematics
Core (game theory), the collection of stable allocations that no coalition can improve upon
Core (graph theory), the homomorphically minimal subgraph of a graph
Core (group theory), an object in group theory
Core of a triangulated category
Core, an essential domain of a closed operator; see Unbounded operator
Core, a radial kernel of a subset of a vector space; see Algebraic interior
Arts, entertainment and media
Core (novel), a 1993 science fiction novel by Paul Preuss
Core (radio station), a defunct digital radio station in the United Kingdom
90.3 The Core RLC-WVPH, a radio station in Piscataway, New Jersey, US
C.O.R.E. (video game), a 2009 NDS game
Core (video game), a video game with integrated game creation system
"CORE", an area in the Underground in the video game Undertale
"The Core", an episode of The Transformers cartoon
Film and television
Cores (film), a 2012 film
The Core, a 2003 science fiction film
The Core, the 2006–2007 name for the programming block on Five currently known as Shake!
Music
Core (band), a stoner rock band
Core (Stone Temple Pilots album), 1992
Core (Persefone album), 2006
"Core", a song by Susumu Hirasawa from Paranoia Agent Original Soundtrack
"The Core", a song from Eric Clapton's 1977 album Slowhand
"CORE", a track from the soundtrack of the 2015 video game Undertale by Toby Fox
Organizations
Core International, a defunct American computer and technology corporation
Core Design, a videogame developer best known for the Tomb Raider series
Coordenadoria de Recursos Especiais, Brazilian state police SWAT team
Digestive Disorders Foundation, working name Core
Center for Operations Research and Econometrics at the Université catholique de Louvain in Belgium
Central Organisation for Railway Electrification, an organization in India
China Open Resources for Education, an OpenCourseWare organization in China
Congress of Racial Equality, United States civil rights organization
CORE (research service), a UK-based aggregator of open access content
C.O.R.E., a computer animation studio
CORE System Trust, see CORE-OM
Places
United States
Core, San Diego, a neighborhood in California
Core, West Virginia
Core Banks, North Carolina
Core Sound, North Carolina
Other places
Corés, a parish in Spain
The Core Shopping Centre (Calgary), Alberta, Canada
The Core, a shopping centre in Leeds, England, on the site of Schofields
People
Earl Lemley Core (1902–1984), West Virginia botanist
Ericson Core, American director and cinematographer
Other uses
Core (architecture)
Co-ordinated On-line Record of Electors, central database in the United Kingdom
Coree or Cores, a Native American tribe
Korah, a biblical figure
Leadership core, concept in Chinese politics
Persephone, a Greek goddess also known as Kore or Cora (Greek κόρη = daughter)
Core countries, in dependency theory, an industrialized country on which peripheral countries depend
Core curriculum, in education, an essential part of the curriculum
Lithic core, in archaeology, a stone artifact left over from toolmaking
CORE (Clinical Outcomes in Routine Use) System, see CORE-OM
See also
CORE (disambiguation)
Corre (disambiguation)
Nucleus (disambiguation)
Corium (disambiguation) |
41229 | https://en.wikipedia.org/wiki/Handshaking | Handshaking | In computing, a handshake is a signal between two devices or programs, used to, e.g., authenticate, coordinate. An example is the handshaking between a hypervisor and an application in a guest virtual machine.
In telecommunications, a handshake is an automated process of negotiation between two participants (example "Alice and Bob") through the exchange of information that establishes the protocols of a communication link at the start of the communication, before full communication begins. The handshaking process usually takes place in order to establish rules for communication when a computer attempts to communicate with another device. Signals are usually exchanged between two devices to establish a communication link. For example, when a computer communicates with another device such as a modem, the two devices will signal each other that they are switched on and ready to work, as well as to agree to which protocols are being used.
Handshaking can negotiate parameters that are acceptable to equipment and systems at both ends of the communication channel, including information transfer rate, coding alphabet, parity, interrupt procedure, and other protocol or hardware features.
Handshaking is a technique of communication between two entities. However, within TCP/IP RFCs, the term "handshake" is most commonly used to reference the TCP three-way handshake. For example, the term "handshake" is not present in RFCs covering FTP or SMTP. One exception is Transport Layer Security (TLS) setup, as in FTP RFC 4217. In place of the term "handshake", FTP RFC 3659 substitutes the term "conversation" for the passing of commands.
A simple handshaking protocol might only involve the receiver sending a message meaning "I received your last message and I am ready for you to send me another one." A more complex handshaking protocol might allow the sender to ask the receiver if it is ready to receive or for the receiver to reply with a negative acknowledgement meaning "I did not receive your last message correctly, please resend it" (e.g., if the data was corrupted en route).
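The following toy Python sketch illustrates such a stop-and-wait exchange; the lossy channel is simulated with random drops, and a missing acknowledgement simply triggers a resend. It is a simplification for illustration, not a real protocol implementation.

    import random

    def send_with_ack(messages, max_retries=3, loss_rate=0.3):
        """Toy stop-and-wait handshake: resend each message until it is acknowledged."""
        received = []
        for msg in messages:
            for attempt in range(max_retries):
                delivered = random.random() > loss_rate   # simulate a lossy channel
                if delivered:
                    received.append(msg)                  # receiver got it and ACKs it
                    break                                 # ACK received: move to the next message
                # no ACK (or a NAK): fall through and resend the same message
            else:
                raise TimeoutError(f"no acknowledgement for {msg!r}")
        return received

    print(send_with_ack(["hello", "world"]))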
Handshaking facilitates connecting relatively heterogeneous systems or equipment over a communication channel without the need for human intervention to set parameters.
Example
TCP three-way handshake
Establishing a normal TCP connection requires three separate steps:
The first host (Alice) sends the second host (Bob) a "synchronize" (SYN) message with its own sequence number x, which Bob receives.
Bob replies with a synchronize-acknowledgment (SYN-ACK) message with its own sequence number y and acknowledgement number x + 1, which Alice receives.
Alice replies with an acknowledgment (ACK) message with acknowledgement number y + 1, which Bob receives and to which he doesn't need to reply.
In this setup, the synchronize messages act as service requests from one server to the other, while the acknowledgement messages return to the requesting server to let it know the message was received.
The reason for the client and server not using a default sequence number such as 0 for establishing the connection is to protect against two incarnations of the same connection reusing the same sequence number too soon, which means a segment from an earlier incarnation of a connection might interfere with a later incarnation of the connection.
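In ordinary application code the three-way handshake is carried out by the operating system's TCP stack. The sketch below, using Python's standard socket module, simply opens a connection; the SYN, SYN-ACK and ACK exchange happens inside connect() before the call returns. The host example.com and the HTTP request are illustrative only.

    import socket

    # connect() returns only after SYN, SYN-ACK and ACK have been exchanged.
    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(sock.recv(200))                    # server reply over the established connection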
SMTP
The Simple Mail Transfer Protocol (SMTP) is the key Internet standard for email transmission. It includes handshaking to negotiate authentication, encryption and maximum message size.
TLS handshake
When a Transport Layer Security (SSL or TLS) connection starts, the record encapsulates a "control" protocol—the handshake messaging protocol (content type 22). This protocol is used to exchange all the information required by both sides for the exchange of the actual application data by TLS. It defines the format of the messages containing this information and the order of their exchange. These may vary according to the demands of the client and server—i.e., there are several possible procedures to set up the connection. This initial exchange results in a successful TLS connection (both parties ready to transfer application data with TLS) or an alert message (as specified below).
The protocol is used to negotiate the secure attributes of a session. (RFC 5246, p. 37)
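With Python's standard ssl module, the handshake messaging described above runs when the TCP socket is wrapped; the sketch below is illustrative only, with example.com standing in for any TLS server.

    import socket
    import ssl

    context = ssl.create_default_context()       # chooses protocol versions, ciphers, CA certs

    with socket.create_connection(("example.com", 443), timeout=5) as tcp_sock:
        # wrap_socket() performs the TLS handshake (hello messages, certificate
        # exchange and key agreement) before returning a usable secure socket.
        with context.wrap_socket(tcp_sock, server_hostname="example.com") as tls_sock:
            print(tls_sock.version())            # e.g. 'TLSv1.3' once the handshake completes
            print(tls_sock.cipher())             # the negotiated cipher suite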
WPA2 wireless
The WPA2 standard for wireless uses a four-way handshake defined in IEEE 802.11i-2004.
Dial-up access modems
One classic example of handshaking is that of dial-up modems, which typically negotiate communication parameters for a brief period when a connection is first established, and thereafter use those parameters to provide optimal information transfer over the channel as a function of its quality and capacity. The "squealing" (which is actually a sound that changes in pitch 100 times every second) noises made by some modems with speaker output immediately after a connection is established are in fact the sounds of modems at both ends engaging in a handshaking procedure; once the procedure is completed, the speaker might be silenced, depending on the settings of the operating system or the application controlling the modem.
Serial "Hardware Handshaking"
This frequently used term describes the use of RTS and CTS signals over a serial interconnection. It is, however, not quite correct; it's not a true form of handshaking, and is better described as flow control.
References
Data transmission
Network architecture
Network protocols
de:Datenflusssteuerung |
41243 | https://en.wikipedia.org/wiki/Hotline | Hotline | A hotline is a point-to-point communications link in which a call is automatically directed to the preselected destination without any additional action by the user when the end instrument goes off-hook. An example would be a phone that automatically connects to emergency services on picking up the receiver. Therefore, dedicated hotline phones do not need a rotary dial or keypad. A hotline can also be called an automatic signaling, ringdown, or off-hook service.
For crises and service
True hotlines cannot be used to originate calls other than to preselected destinations. However, in common or colloquial usage, a "hotline" often refers to a call center reachable by dialing a standard telephone number, or sometimes the phone numbers themselves.
This is especially the case with 24-hour, noncommercial numbers, such as police tip hotlines or suicide crisis hotlines, which are staffed around the clock and thereby give the appearance of real hotlines. Increasingly, however, the term is found being applied to any customer service telephone number.
Between states
Russia–United States
The most famous hotline between states is the Moscow–Washington hotline, which is also known as the "red telephone", although telephones have never been used in this capacity. This direct communications link was established on 20 June 1963, in the wake of the Cuban Missile Crisis, and utilized teletypewriter technology, later replaced by telecopier and then by electronic mail.
United Kingdom–United States
Already during World War II—two decades before the Washington–Moscow hotline was established—there was a hotline between London (No. 10 Downing Street and the Cabinet War Room bunker under the Treasury, Whitehall) and the White House in Washington, D.C. From 1943 to 1946, this link was made secure by using the very first voice encryption machine, called SIGSALY.
China–Russia
A hotline connection between Beijing and Moscow was used during the 1969 frontier confrontation between the two countries. The Chinese, however, refused the Soviet peace attempts and ended the communications link. After a reconciliation between the former enemies, the hotline between China and Russia was revived in 1996.
France–Russia
On his visit to the Soviet Union in 1966, French President Charles de Gaulle announced that a hotline would be established between Paris and Moscow. The line was upgraded from a telex to a high-speed fax machine in 1989.
Russia–United Kingdom
A London–Moscow hotline was not formally established until a treaty of friendship between the two countries in 1992. An upgrade was announced when Foreign Secretary William Hague visited Moscow in 2011.
India–Pakistan
On 20 June 2004, both India and Pakistan agreed to extend a nuclear testing ban and to set up an Islamabad–New Delhi hotline between their foreign secretaries aimed at preventing misunderstandings that might lead to nuclear war. The hotline was set up with the assistance of United States military officers.
China–United States
The United States and China set up a defense hotline in 2008, but it has rarely been used in crises.
China–India
India and China announced a hotline for the foreign ministers of both countries while reiterating their commitment to strengthening ties and building "mutual political trust". As of August 2015 the hotline was yet to be made operational.
China–Japan
In February 2013, the Senkaku Islands dispute gave renewed impetus to a China–Japan hotline, which had been agreed to but due to rising tensions had not been established.
North and South Korea
Between North and South Korea there are over 40 direct phone lines, the first of which was opened in September 1971. Most of these hotlines run through the Panmunjeom Joint Security Area (JSA) and are maintained by the Red Cross. Since 1971, North Korea has deactivated the hotlines seven times, the last time in February 2016. After Kim Jong-un's New Year's address, the border hotline was reopened on January 3, 2018.
India–United States
In August 2015 the hotline between the White House and New Delhi became operational. The decision of establishing this hotline was taken during Obama's visit to India in January 2015. This is the first hotline connecting an Indian Prime Minister to a head of state.
See also
Bat phone
Complaint system
References
External links
Top Level Telecommunications: Bilateral Hotlines Worldwide
Telecommunication services
Bilateral relations
Hotline between countries
de:Heißer Draht |
41333 | https://en.wikipedia.org/wiki/Long-haul%20communications | Long-haul communications | In telecommunication, the term long-haul communications has the following meanings:
1. In public switched networks, pertaining to circuits that span large distances, such as the circuits in inter-LATA, interstate, and international communications. See also Long line (telecommunications)
2. In the military community, communications among users on a national or worldwide basis.
Note 1: Compared to tactical communications, long-haul communications are characterized by (a) higher levels of users, such as the US National Command Authority, (b) more stringent performance requirements, such as higher quality circuits, (c) longer distances between users, including worldwide distances, (d) higher traffic volumes and densities, (e) larger switches and trunk cross sections, and (f) fixed and recoverable assets.
Note 2: "Long-haul communications" usually pertains to the U.S. Defense Communications System.
Note 3: "Long-haul telecommunications technicians" can be translated into many fields of IT work within the corporate industry (Information Technology, Network Technician, Telecommunication Specialist, It Support, and so on). While the term is used in military most career fields that are in communications such as 3D1X2 - Cyber Transport Systems (the career field has been renamed so many times over the course of many years but essentially it is the same job (Network Infrastructure Tech., Systems Control Technician, and Cyber Transport Systems)) or may work in areas that require the "in between" (cloud networking) for networks (MSPP, ATM, Routers, Switches), phones (VOIP, DS0 - DS4 or higher, and so on), encryption (configuring encryption devices or monitoring), and video support data transfers. The "bulk data transfer" or aggregation networking.
The Long-haul telecommunication technicians is considered a "jack of all" but it is much in the technician's interest to gather greater education with certifications to qualify for certain jobs outside the military. The Military provides an avenue but does not make the individual a master of the career field. The technician will find that the job out look outside of military requires many things that aren't required of them within the career field while in the military. So it is best to find the job that is similar to the AFSC and also view the companies description of the qualification to fit that job. Also at least get an associate degree, over 5 years experience, and all of the required "certs" (Network +, Security +, CCNA, CCNP and so on) to acquire the job or at least an interview. The best time to apply or get a guaranteed job is the last three months before you leave the military. Military personnel that are within the career field 3D1X2 require a Secret, TS, or TS with SCI clearance in order to do the job.
See also
Long-distance calling
Meteor burst communications
Communication circuits |
41680 | https://en.wikipedia.org/wiki/Scrambler | Scrambler | In telecommunications, a scrambler is a device that transposes or inverts signals or otherwise encodes a message at the sender's side to make the message unintelligible at a receiver not equipped with an appropriately set descrambling device. Whereas encryption usually refers to operations carried out in the digital domain, scrambling usually refers to operations carried out in the analog domain. Scrambling is accomplished by the addition of components to the original signal or the changing of some important component of the original signal in order to make extraction of the original signal difficult. Examples of the latter might include removing or changing vertical or horizontal sync pulses in television signals; televisions will not be able to display a picture from such a signal. Some modern scramblers are actually encryption devices, the name remaining due to the similarities in use, as opposed to internal operation.
In telecommunications and recording, a scrambler (also referred to as a randomizer) is a device that manipulates a data stream before transmitting. The manipulations are reversed by a descrambler at the receiving side. Scrambling is widely used in satellite, radio relay communications and PSTN modems. A scrambler can be placed just before a FEC coder, or it can be placed after the FEC, just before the modulation or line code. A scrambler in this context has nothing to do with encrypting, as the intent is not to render the message unintelligible, but to give the transmitted data useful engineering properties.
A scrambler replaces sequences (referred to as whitening sequences) with other sequences without removing undesirable sequences, and as a result it changes the probability of occurrence of vexatious sequences. Clearly it is not foolproof as there are input sequences that yield all-zeros, all-ones, or other undesirable periodic output sequences. A scrambler is therefore not a good substitute for a line code, which, through a coding step, removes unwanted sequences.
Purposes of scrambling
A scrambler (or randomizer) can be either:
An algorithm that converts an input string into a seemingly random output string of the same length (e.g., by pseudo-randomly selecting bits to invert), thus avoiding long sequences of bits of the same value; in this context, a randomizer is also referred to as a scrambler.
An analog or digital source of unpredictable (i.e., high entropy), unbiased, and usually independent (i.e., random) output bits. A "truly" random generator may be used to feed a (more practical) deterministic pseudo-random number generator, which extends the random seed value.
There are two main reasons why scrambling is used:
To enable accurate timing recovery on receiver equipment without resorting to redundant line coding. It facilitates the work of a timing recovery circuit (see also clock recovery), an automatic gain control and other adaptive circuits of the receiver (eliminating long sequences consisting of '0' or '1' only).
For energy dispersal on the carrier, reducing inter-carrier signal interference. It eliminates the dependence of a signal's power spectrum upon the actual transmitted data, making it more dispersed to meet maximum power spectral density requirements (because if the power is concentrated in a narrow frequency band, it can interfere with adjacent channels due to the intermodulation (also known as cross-modulation) caused by non-linearities of the receiving tract).
Scramblers are essential components of physical layer system standards besides interleaved coding and modulation. They are usually defined based on linear-feedback shift registers (LFSRs) due to their good statistical properties and ease of implementation in hardware.
It is common for physical layer standards bodies to refer to lower-layer (physical layer and link layer) encryption as scrambling as well. This may well be because (traditional) mechanisms employed are based on feedback shift registers as well.
Some standards for digital television, such as DVB-CA and MPE, refer to encryption at the link layer as scrambling.
Types of scramblers
Additive (synchronous) scramblers
Additive scramblers (they are also referred to as synchronous) transform the input data stream by applying a pseudo-random binary sequence (PRBS) (by modulo-two addition). Sometimes a pre-calculated PRBS stored in the read-only memory is used, but more often it is generated by a linear-feedback shift register (LFSR).
In order to assure a synchronous operation of the transmitting and receiving LFSR (that is, scrambler and descrambler), a sync-word must be used.
A sync-word is a pattern that is placed in the data stream through equal intervals (that is, in each frame). A receiver searches for a few sync-words in adjacent frames and hence determines the place when its LFSR must be reloaded with a pre-defined initial state.
The additive descrambler is just the same device as the additive scrambler.
An additive scrambler/descrambler is defined by the polynomial of its LFSR and by its initial state.
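A minimal Python sketch of an additive scrambler follows. Since the article's figure is not reproduced here, the tap positions and seed are assumptions (the 1 + x^14 + x^15 polynomial commonly used in DVB framing); the property being shown is that rerunning the same LFSR with the same seed over the scrambled stream restores the original data.

    def additive_scramble(bits, taps=(14, 15), width=15, seed=0b100101010000000):
        """Additive (synchronous) scrambler: XOR the data with a free-running LFSR
        sequence. The same function with the same seed also descrambles."""
        state = seed
        out = []
        for b in bits:
            fb = 0
            for t in taps:                        # feedback = XOR of the tapped stages
                fb ^= (state >> (width - t)) & 1
            out.append(b ^ fb)                    # PRBS bit is added (mod 2) to the data
            state = ((state << 1) | fb) & ((1 << width) - 1)
        return out

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    scrambled = additive_scramble(data)
    assert additive_scramble(scrambled) == data   # same LFSR + same seed descrambles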
Multiplicative (self-synchronizing) scramblers
Multiplicative scramblers (also known as feed-through) are called so because they perform a multiplication of the input signal by the scrambler's transfer function in Z-space. They are discrete linear time-invariant systems.
A multiplicative scrambler is recursive, and a multiplicative descrambler is non-recursive. Unlike additive scramblers, multiplicative scramblers do not need frame synchronization, which is why they are also called self-synchronizing. A multiplicative scrambler/descrambler is similarly defined by a polynomial, which is also the transfer function of the descrambler.
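A companion Python sketch of a multiplicative scrambler and its descrambler, again assuming a 1 + x^14 + x^15 polynomial purely for illustration. Both shift registers are fed by the scrambled stream itself, which is why no frame synchronization is needed.

    def multiplicative_scramble(bits, taps=(14, 15), width=15):
        """Self-synchronizing scrambler: output = input XOR taps of previous outputs."""
        state, out = 0, []
        for b in bits:
            fb = 0
            for t in taps:
                fb ^= (state >> (t - 1)) & 1      # bit transmitted t steps earlier
            y = b ^ fb
            out.append(y)
            state = ((state << 1) | y) & ((1 << width) - 1)
        return out

    def multiplicative_descramble(bits, taps=(14, 15), width=15):
        """Non-recursive descrambler fed by the received bits, so it self-synchronizes."""
        state, out = 0, []
        for y in bits:
            fb = 0
            for t in taps:
                fb ^= (state >> (t - 1)) & 1
            out.append(y ^ fb)
            state = ((state << 1) | y) & ((1 << width) - 1)
        return out

    data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
    assert multiplicative_descramble(multiplicative_scramble(data)) == data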
Comparison of scramblers
Scramblers have certain drawbacks:
Both types may fail to generate random sequences under worst-case input conditions.
Multiplicative scramblers lead to error multiplication during descrambling (i.e. a single-bit error at the descrambler's input will result in w errors at its output, where w equals the number of the scrambler's feedback taps).
Additive scramblers must be reset by the frame sync; if this fails, massive error propagation will result, as a complete frame cannot be descrambled.
The effective length of the random sequence of an additive scrambler is limited by the frame length, which is normally much shorter than the period of the PRBS. By adding frame numbers to the frame sync, it is possible to extend the length of the random sequence, by varying the random sequence in accordance with the frame number.
Noise
The first voice scramblers were invented at Bell Labs in the period just before World War II. These sets consisted of electronics that could mix two signals or alternatively "subtract" one signal back out again. The two signals were provided by a telephone and a record player. A matching pair of records was produced, each containing the same recording of noise. The recording was played into the telephone, and the mixed signal was sent over the wire. The noise was then subtracted out at the far end using the matching record, leaving the original voice signal intact. Eavesdroppers would hear only the noisy signal, unable to understand the voice.
One of those, used (among other duties) for telephone conversations between Winston Churchill and Franklin D. Roosevelt, was intercepted and unscrambled by the Germans. At least one German engineer had worked at Bell Labs before the war and came up with a way to break them. Later versions were sufficiently different that the German team was unable to unscramble them. Early versions were known as "A-3" (from AT&T Corporation). An unrelated device called SIGSALY was used for higher-level voice communications.
The noise was provided on large shellac phonograph records made in pairs, shipped as needed, and destroyed after use. This worked, but was enormously awkward. Just achieving synchronization of the two records proved difficult. Post-war electronics made such systems much easier to work with by creating pseudo-random noise based on a short input tone. In use, the caller would play a tone into the phone, and both scrambler units would then listen to the signal and synchronize to it. This provided limited security, however, as any listener with a basic knowledge of the electronic circuitry could often produce a machine of similar-enough settings to break into the communications.
Cryptographic
It was the need to synchronize the scramblers that suggested to James H. Ellis the idea for non-secret encryption, which ultimately led to the invention of both the RSA encryption algorithm and Diffie–Hellman key exchange well before either was reinvented publicly by Rivest, Shamir, and Adleman, or by Diffie and Hellman.
The latest scramblers are not scramblers in the truest sense of the word, but rather digitizers combined with encryption machines. In these systems the original signal is first converted into digital form, and then the digital data is encrypted and sent. Using modern public-key systems, these "scramblers" are much more secure than their earlier analog counterparts. Only these types of systems are considered secure enough for sensitive data.
Voice inversion scrambling can range from a simple inversion of the frequency bands around a static point to more complex methods that change the inversion point randomly and in real time and use multiple bands.
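The simplest form, inversion about a fixed point, can be sketched with NumPy (assumed to be available): multiplying the sampled audio by (-1)^n shifts its spectrum by half the sampling rate, which mirrors the occupied band, and applying the same operation again restores the original.

    import numpy as np

    def invert_band(samples: np.ndarray) -> np.ndarray:
        """Spectral inversion about fs/4: multiply by (-1)^n; applying it twice undoes it."""
        signs = np.where(np.arange(samples.size) % 2 == 0, 1.0, -1.0)
        return samples * signs

    fs = 8000
    t = np.arange(fs) / fs
    voice = np.sin(2 * np.pi * 700 * t)           # stand-in for speech content
    scrambled = invert_band(voice)                # the 700 Hz tone now sits at 3300 Hz
    assert np.allclose(invert_band(scrambled), voice)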
The "scramblers" used in cable television are designed to prevent casual signal theft, not to provide any real security. Early versions of these devices simply "inverted" one important component of the TV signal, re-inverting it at the client end for display. Later devices were only slightly more complex, filtering out that component entirely and then adding it by examining other portions of the signal. In both cases the circuitry could be easily built by any reasonably knowledgeable hobbyist. (See Television encryption.)
Electronic kits for scrambling and descrambling are available from hobbyist suppliers. Scanner enthusiasts often use them to listen in to scrambled communications at car races and some public-service transmissions. It is also common in FRS radios. This is an easy way to learn about scrambling.
The term "scrambling" is sometimes incorrectly used when jamming is meant.
Descramble
In the cable television context, descrambling is the act of taking a scrambled or encrypted video signal that has been provided by a cable television company for premium television services, processed by a scrambler, supplied over a coaxial cable, and delivered to the household, where a set-top box reprocesses the signal, descrambling it and making it available for viewing on the television set. A descrambler is a device that restores the picture and sound of a scrambled channel. A descrambler must be used with a cable converter box to be able to decrypt all of the premium and pay-per-view channels of a cable television system.
See also
Ciphony
Cryptography
Cryptochannel
One-time pad
Secure voice
Secure telephone
Satellite modem
SIGSALY
Voice inversion
References
External links and references
DVB framing structure, channel coding and modulation for 11/12 GHz satellite services (EN 300 421)
V.34 ITU-T recommendation
INTELSAT Earth Station Standard IESS-308
Cryptography
Line codes
Applications of randomness
Satellite broadcasting
Telecommunications equipment
Television terminology |
41710 | https://en.wikipedia.org/wiki/Simple%20Network%20Management%20Protocol | Simple Network Management Protocol | Simple Network Management Protocol (SNMP) is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behaviour. Devices that typically support SNMP include cable modems, routers, switches, servers, workstations, printers, and more.
SNMP is widely used in network management for network monitoring. SNMP exposes management data in the form of variables on the managed systems organized in a management information base (MIB) which describe the system status and configuration. These variables can then be remotely queried (and, in some circumstances, manipulated) by managing applications.
Three significant versions of SNMP have been developed and deployed. SNMPv1 is the original version of the protocol. More recent versions, SNMPv2c and SNMPv3, feature improvements in performance, flexibility and security.
SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.
Overview and basic concepts
In typical uses of SNMP, one or more administrative computers called managers have the task of monitoring or managing a group of hosts or devices on a computer network. Each managed system executes a software component called an agent which reports information via SNMP to the manager.
An SNMP-managed network consists of three key components:
Managed devices
Agent – software which runs on managed devices
Network management station (NMS) – software which runs on the manager
A managed device is a network node that implements an SNMP interface that allows unidirectional (read-only) or bidirectional (read and write) access to node-specific information. Managed devices exchange node-specific information with the NMSs. Sometimes called network elements, the managed devices can be any type of device, including, but not limited to, routers, access servers, switches, cable modems, bridges, hubs, IP telephones, IP video cameras, computer hosts, and printers.
An agent is a network-management software module that resides on a managed device. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form.
A network management station executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network.
Management information base
SNMP agents expose management data on the managed systems as variables. The protocol also permits active management tasks, such as configuration changes, through remote modification of these variables. The variables accessible via SNMP are organized in hierarchies. SNMP itself does not define which variables a managed system should offer. Rather, SNMP uses an extensible design that allows applications to define their own hierarchies. These hierarchies are described as a management information base (MIB). MIBs describe the structure of the management data of a device subsystem; they use a hierarchical namespace containing object identifiers (OID). Each OID identifies a variable that can be read or set via SNMP. MIBs use the notation defined by Structure of Management Information Version 2.0 (SMIv2), a subset of ASN.1.
Protocol details
SNMP operates in the application layer of the Internet protocol suite. All SNMP messages are transported via User Datagram Protocol (UDP). The SNMP agent receives requests on UDP port 161. The manager may send requests from any available source port to port 161 in the agent. The agent response is sent back to the source port on the manager. The manager receives notifications (Traps and InformRequests) on port 162. The agent may generate notifications from any available port. When used with Transport Layer Security or Datagram Transport Layer Security, requests are received on port 10161 and notifications are sent to port 10162.
SNMPv1 specifies five core protocol data units (PDUs). Two other PDUs, GetBulkRequest and InformRequest, were added in SNMPv2, and the Report PDU was added in SNMPv3. All SNMP PDUs are constructed from an IP header, a UDP header, the SNMP version, the community string, the PDU type, a request ID, an error status, an error index, and the variable bindings.
The seven SNMP PDU types as identified by the PDU-type field are as follows:
GetRequest A manager-to-agent request to retrieve the value of a variable or list of variables. Desired variables are specified in variable bindings (the value field is not used). Retrieval of the specified variable values is to be done as an atomic operation by the agent. A Response with current values is returned.
SetRequest A manager-to-agent request to change the value of a variable or list of variables. Variable bindings are specified in the body of the request. Changes to all specified variables are to be made as an atomic operation by the agent. A Response with (current) new values for the variables is returned.
GetNextRequest A manager-to-agent request to discover available variables and their values. Returns a Response with variable binding for the lexicographically next variable in the MIB. The entire MIB of an agent can be walked by iterative application of GetNextRequest starting at OID 0. Rows of a table can be read by specifying column OIDs in the variable bindings of the request.
GetBulkRequest A manager-to-agent request for multiple iterations of GetNextRequest. An optimized version of GetNextRequest. Returns a Response with multiple variable bindings walked from the variable binding or bindings in the request. PDU specific non-repeaters and max-repetitions fields are used to control response behavior. GetBulkRequest was introduced in SNMPv2.
Response Returns variable bindings and acknowledgement from agent to manager for GetRequest, SetRequest, GetNextRequest, GetBulkRequest and InformRequest. Error reporting is provided by error-status and error-index fields. Although it was used as a response to both gets and sets, this PDU was called GetResponse in SNMPv1.
Trap Asynchronous notification from agent to manager. While in other SNMP communication, the manager actively requests information from the agent, these are PDUs that are sent from the agent to the manager without being explicitly requested. SNMP traps enable an agent to notify the management station of significant events by way of an unsolicited SNMP message. Trap PDUs include the current sysUpTime value, an OID identifying the type of trap and optional variable bindings. Destination addressing for traps is determined in an application-specific manner, typically through trap configuration variables in the MIB. The format of the trap message was changed in SNMPv2 and the PDU was renamed SNMPv2-Trap.
InformRequest Acknowledged asynchronous notification. This PDU was introduced in SNMPv2 and was originally defined as manager-to-manager communication. Later implementations have loosened the original definition to allow agent-to-manager communications. Manager-to-manager notifications were already possible in SNMPv1 using a Trap, but as SNMP commonly runs over UDP where delivery is not assured and dropped packets are not reported, delivery of a Trap was not guaranteed. InformRequest fixes this as an acknowledgement is returned on receipt.
The SNMP specification requires that an implementation accept a message of at least 484 bytes in length. In practice, SNMP implementations accept longer messages. If implemented correctly, an SNMP message is discarded if the decoding of the message fails and thus malformed SNMP requests are ignored. A successfully decoded SNMP request is then authenticated using the community string. If the authentication fails, a trap is generated indicating an authentication failure and the message is dropped.
SNMPv1 and SNMPv2 use communities to establish trust between managers and agents. Most agents support three community names, one each for read-only, read-write and trap. These three community strings control different types of activities. The read-only community applies to get requests. The read-write community string applies to set requests. The trap community string applies to receipt of traps. SNMPv3 also uses community strings, but allows for secure authentication and communication between SNMP manager and agent.
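As a concrete illustration of a community-authenticated read, the sketch below assumes the third-party pysnmp library (its 4.x high-level API); the agent address 192.0.2.10 and the "public" community string are placeholders. It issues a GetRequest for sysDescr.0 and prints the returned variable binding.

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    # Query sysDescr.0 (OID 1.3.6.1.2.1.1.1.0) using the read-only community "public".
    iterator = getCmd(SnmpEngine(),
                      CommunityData('public', mpModel=1),        # mpModel=1 selects SNMPv2c
                      UdpTransportTarget(('192.0.2.10', 161)),   # placeholder agent address
                      ContextData(),
                      ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')))

    errorIndication, errorStatus, errorIndex, varBinds = next(iterator)
    if errorIndication:
        print(errorIndication)                    # e.g. request timed out
    elif errorStatus:
        print(errorStatus.prettyPrint())          # agent-reported error
    else:
        for varBind in varBinds:
            print(varBind)                        # e.g. SNMPv2-MIB::sysDescr.0 = <description>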
Protocol versions
In practice, SNMP implementations often support multiple versions: typically SNMPv1, SNMPv2c, and SNMPv3.
Version 1
SNMP version 1 (SNMPv1) is the initial implementation of the SNMP protocol. The design of SNMPv1 was done in the 1980s by a group of collaborators who viewed the officially sponsored OSI/IETF/NSF (National Science Foundation) effort (HEMS/CMIS/CMIP) as both unimplementable on the computing platforms of the time and potentially unworkable. SNMP was approved based on a belief that it was an interim protocol needed for taking steps towards large-scale deployment of the Internet and its commercialization.
The first Requests for Comments (RFCs) for SNMP, now known as SNMPv1, appeared in 1988:
— Structure and identification of management information for TCP/IP-based internets
— Management information base for network management of TCP/IP-based internets
— A simple network management protocol
In 1990, these documents were superseded by:
— Structure and identification of management information for TCP/IP-based internets
— Management information base for network management of TCP/IP-based internets
— A simple network management protocol
In 1991, the original MIB (MIB-1) was replaced by the more often used:
— Version 2 of management information base (MIB-2) for network management of TCP/IP-based internets
SNMPv1 is widely used and is the de facto network management protocol in the Internet community.
SNMPv1 may be carried by transport protocols such as User Datagram Protocol (UDP), Internet Protocol (IP), OSI Connectionless-mode Network Service (CLNS), AppleTalk Datagram Delivery Protocol (DDP), and Novell Internetwork Packet Exchange (IPX).
Version 1 has been criticized for its poor security. The specification does, in fact, allow room for custom authentication to be used, but widely used implementations "support only a trivial authentication service that identifies all SNMP messages as authentic SNMP messages". The security of the messages, therefore, becomes dependent on the security of the channels over which the messages are sent. For example, an organization may consider its internal network to be sufficiently secure that no encryption is necessary for its SNMP messages. In such cases, the "community name", which is transmitted in cleartext, tends to be viewed as a de facto password, in spite of the original specification.
Version 2
SNMPv2 revises version 1 and includes improvements in the areas of performance, security and manager-to-manager communications. It introduced GetBulkRequest, an alternative to iterative GetNextRequests for retrieving large amounts of management data in a single request. The new party-based security system introduced in SNMPv2, viewed by many as overly complex, was not widely adopted. This version of SNMP reached the Proposed Standard level of maturity, but was deemed obsolete by later versions.
Community-Based Simple Network Management Protocol version 2, or SNMPv2c, is defined in –. SNMPv2c comprises SNMPv2 without the controversial new SNMP v2 security model, using instead the simple community-based security scheme of SNMPv1. This version is one of relatively few standards to meet the IETF's Draft Standard maturity level, and was widely considered the de facto SNMPv2 standard. It was later restated as part of SNMPv3.
User-Based Simple Network Management Protocol version 2, or SNMPv2u, is defined in –. This is a compromise that attempts to offer greater security than SNMPv1, but without incurring the high complexity of SNMPv2. A variant of this was commercialized as SNMP v2*, and the mechanism was eventually adopted as one of two security frameworks in SNMP v3.
64-bit counters
SNMP version 2 introduces the option for 64-bit data counters. Version 1 was designed only with 32-bit counters, which can store integer values from zero to 4.29 billion (precisely 4,294,967,295). A 32-bit version 1 counter cannot store the maximum speed of a 10 gigabit or larger interface, expressed in bits per second. Similarly, a 32-bit counter tracking statistics for a 10 gigabit or larger interface can roll over back to zero in less than one minute, which may be shorter than the interval at which the counter is polled. This would result in lost or invalid data due to the undetected rollover, and corruption of trend-tracking data.
The 64-bit version 2 counter can store values from zero to 18.4 quintillion (precisely 18,446,744,073,709,551,615) and so is currently unlikely to experience a counter rollover between polling events. For example, 1.6 terabit Ethernet is predicted to become available by 2025. A 64-bit counter incrementing at a rate of 1.6 trillion bits per second would be able to retain information for such an interface without rolling over for 133 days.
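The rollover figures above follow from simple arithmetic. The sketch below adopts the paragraph's simplification of a counter incremented once per bit transferred; actual interface counters such as ifInOctets count octets, which changes the numbers by a factor of eight but not the conclusion.

```python
# Time until an N-bit counter wraps when incremented `rate` times per second.
def rollover_seconds(counter_bits, rate_per_second):
    return 2 ** counter_bits / rate_per_second

GBIT = 10 ** 9
for label, bits, rate in [
    ("32-bit counter at 10 Gbit/s",  32, 10 * GBIT),
    ("64-bit counter at 10 Gbit/s",  64, 10 * GBIT),
    ("64-bit counter at 1.6 Tbit/s", 64, 1.6 * 10 ** 12),
]:
    seconds = rollover_seconds(bits, rate)
    if seconds < 3600:
        print(f"{label}: wraps after {seconds:.1f} seconds")
    else:
        print(f"{label}: wraps after {seconds / 86400:.0f} days")   # ~133 days for the last case
```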
SNMPv1 & SNMPv2c interoperability
SNMPv2c is incompatible with SNMPv1 in two key areas: message formats and protocol operations. SNMPv2c messages use different header and protocol data unit (PDU) formats than SNMPv1 messages. SNMPv2c also uses two protocol operations that are not specified in SNMPv1. To overcome this incompatibility, two SNMPv1/v2c coexistence strategies are defined: proxy agents and bilingual network-management systems.
Proxy agents
An SNMPv2 agent can act as a proxy agent on behalf of SNMPv1 managed devices. When an SNMPv2 NMS issues a command intended for an SNMPv1 agent it sends it to the SNMPv2 proxy agent instead. The proxy agent forwards Get, GetNext, and Set messages to the SNMPv1 agent unchanged. GetBulk messages are converted by the proxy agent to GetNext messages and then are forwarded to the SNMPv1 agent. Additionally, the proxy agent receives and maps SNMPv1 trap messages to SNMPv2 trap messages and then forwards them to the NMS.
Bilingual network-management system
Bilingual SNMPv2 network-management systems support both SNMPv1 and SNMPv2. To support this dual-management environment, a management application examines information stored in a local database to determine whether the agent supports SNMPv1 or SNMPv2. Based on the information in the database, the NMS communicates with the agent using the appropriate version of SNMP.
Version 3
Although SNMPv3 makes no changes to the protocol aside from the addition of cryptographic security, it looks very different due to new textual conventions, concepts, and terminology. The most visible change was to define a secure version of SNMP, by adding security and remote configuration enhancements to SNMP. The security aspect is addressed by offering both strong authentication and data encryption for privacy. For the administration aspect, SNMPv3 focuses on two parts, namely notification originators and proxy forwarders. The changes also facilitate remote configuration and administration of the SNMP entities, as well as addressing issues related to the large-scale deployment, accounting, and fault management.
Features and enhancements included:
Identification of SNMP entities to facilitate communication only between known SNMP entities – Each SNMP entity has an identifier called the SNMPEngineID, and SNMP communication is possible only if an SNMP entity knows the identity of its peer. Traps and Notifications are exceptions to this rule.
Support for security models – A security model may define the security policy within an administrative domain or an intranet. SNMPv3 contains the specifications for a user-based security model (USM).
Definition of security goals where the goals of message authentication service include protection against the following:
Modification of Information – Protection against some unauthorized SNMP entity altering in-transit messages generated by an authorized principal.
Masquerade – Protection against attempting management operations not authorized for some principal by assuming the identity of another principal that has the appropriate authorizations.
Message stream modification – Protection against messages getting maliciously re-ordered, delayed, or replayed to affect unauthorized management operations.
Disclosure – Protection against eavesdropping on the exchanges between SNMP engines.
Specification for USM – USM consists of the general definition of the following communication mechanisms available:
Communication without authentication and privacy (NoAuthNoPriv).
Communication with authentication and without privacy (AuthNoPriv).
Communication with authentication and privacy (AuthPriv).
Definition of different authentication and privacy protocols – MD5, SHA and HMAC-SHA-2 authentication protocols and the CBC_DES and CFB_AES_128 privacy protocols are supported in the USM.
Definition of a discovery procedure – To find the SNMPEngineID of an SNMP entity for a given transport address and transport endpoint address.
Definition of the time synchronization procedure – To facilitate authenticated communication between the SNMP entities.
Definition of the SNMP framework MIB – To facilitate remote configuration and administration of the SNMP entity.
Definition of the USM MIBs – To facilitate remote configuration and administration of the security module.
Definition of the view-based access control model (VACM) MIBs – To facilitate remote configuration and administration of the access control module.
Security was one of the biggest weaknesses of SNMP until v3. Authentication in SNMP versions 1 and 2 amounts to nothing more than a password (community string) sent in clear text between a manager and agent. Each SNMPv3 message contains security parameters, which are encoded as an octet string. The meaning of these security parameters depends on the security model being used. The security approach in v3 targets:
Confidentiality – Encryption of packets to prevent snooping by an unauthorized source.
Integrity – Message integrity to ensure that a packet has not been tampered with while in transit, including an optional packet replay protection mechanism.
Authentication – Verification that the message is from a valid source.
v3 also defines the USM and VACM, which were later followed by a transport security model (TSM) that provided support for SNMPv3 over SSH and SNMPv3 over TLS and DTLS.
USM (User-based Security Model) provides authentication and privacy (encryption) functions and operates at the message level.
VACM (View-based Access Control Model) determines whether a given principal is allowed access to a particular MIB object to perform specific functions and operates at the PDU level.
TSM (Transport Security Model) provides a method for authenticating and encrypting messages over external security channels. Two transports, SSH and TLS/DTLS, have been defined that make use of the TSM specification.
The IETF recognizes Simple Network Management Protocol version 3, defined by the STD 62 set of RFCs, as the current standard version of SNMP. The IETF has designated SNMPv3 a full Internet Standard, the highest maturity level for an RFC. It considers earlier versions to be obsolete (designating them variously "Historic" or "Obsolete").
Implementation issues
SNMP's powerful write capabilities, which would allow the configuration of network devices, are not being fully utilized by many vendors, partly because of a lack of security in SNMP versions before SNMPv3, and partly because many devices simply are not capable of being configured via individual MIB object changes.
Some SNMP values (especially tabular values) require specific knowledge of table indexing schemes, and these index values are not necessarily consistent across platforms. This can cause correlation issues when fetching information from multiple devices that may not employ the same table indexing scheme (for example fetching disk utilization metrics, where a specific disk identifier is different across platforms.)
Some major equipment vendors tend to over-extend their proprietary command line interface (CLI) centric configuration and control systems.
In February 2002 the Carnegie Mellon Software Engineering Institute (CM-SEI) Computer Emergency Response Team Coordination Center (CERT-CC) issued an Advisory on SNMPv1, after the Oulu University Secure Programming Group conducted a thorough analysis of SNMP message handling. Most SNMP implementations, regardless of which version of the protocol they support, use the same program code for decoding protocol data units (PDU) and problems were identified in this code. Other problems were found with decoding SNMP trap messages received by the SNMP management station or requests received by the SNMP agent on the network device. Many vendors had to issue patches for their SNMP implementations.
Security implications
Using SNMP to attack a network
Because SNMP is designed to allow administrators to monitor and configure network devices remotely, it can also be used to penetrate a network. A significant number of software tools can scan an entire network using SNMP; therefore, mistakes in the configuration of the read-write mode can make a network susceptible to attacks.
In 2001, Cisco released information that indicated that, even in read-only mode, the SNMP implementation of Cisco IOS is vulnerable to certain denial of service attacks. These security issues can be fixed through an IOS upgrade.
If SNMP is not used in a network, it should be disabled in network devices. When configuring SNMP read-only mode, close attention should be paid to the configuration of access control and to the IP addresses from which SNMP messages are accepted. If the SNMP servers are identified by their IP addresses, SNMP is only allowed to respond to these addresses, and SNMP messages from other IP addresses are denied. However, IP address spoofing remains a security concern.
Authentication
SNMP is available in different versions, each of which has its own security issues. SNMP v1 sends passwords in clear text over the network. Therefore, passwords can be read with packet sniffing. SNMP v2 allows password hashing with MD5, but this has to be configured. Virtually all network management software supports SNMP v1, but not necessarily SNMP v2 or v3. SNMP v2 was specifically developed to provide data security, that is authentication, privacy and authorization, but only SNMP version 2c gained the endorsement of the Internet Engineering Task Force (IETF), while versions 2u and 2* failed to gain IETF approval due to security issues. SNMP v3 uses MD5, Secure Hash Algorithm (SHA) and keyed algorithms to offer protection against unauthorized data modification and spoofing attacks. If a higher level of security is needed, the Data Encryption Standard (DES) can be optionally used in the cipher block chaining mode. SNMP v3 has been implemented on Cisco IOS since release 12.0(3)T.
SNMPv3 may be subject to brute force and dictionary attacks for guessing the authentication keys, or encryption keys, if these keys are generated from short (weak) passwords or passwords that can be found in a dictionary. SNMPv3 allows both providing random uniformly distributed cryptographic keys and generating cryptographic keys from a password supplied by the user. The risk of guessing authentication strings from hash values transmitted over the network depends on the cryptographic hash function used and the length of the hash value. SNMPv3 uses the HMAC-SHA-2 authentication protocol for the User-based Security Model (USM). SNMP does not use a more secure challenge-handshake authentication protocol. SNMPv3 (like other SNMP protocol versions) is a stateless protocol, and it has been designed with a minimal amount of interactions between the agent and the manager. Thus introducing a challenge-response handshake for each command would impose a burden on the agent (and possibly on the network itself) that the protocol designers deemed excessive and unacceptable.
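The role of the password is easiest to see in the USM password-to-key procedure (the construction specified for USM in RFC 3414, shown here only as an illustration): the password is stretched to one mebibyte by repetition, hashed, and the result is localized to a particular agent by hashing it together with that agent's engine ID. A short or guessable password therefore bounds the effective key space. The engine ID below is a made-up example value.

```python
import hashlib

def usm_password_to_key(password: str, engine_id: bytes, hash_name: str = "sha1") -> bytes:
    """Derive a localized USM key from a password (illustrative sketch)."""
    pw = password.encode()
    # Stretch the password to 2**20 bytes (1 MiB) by repetition, then hash it.
    reps, rem = divmod(2 ** 20, len(pw))
    ku = hashlib.new(hash_name, pw * reps + pw[:rem]).digest()   # user key
    # Localize the key to one agent by mixing in its authoritative engine ID.
    return hashlib.new(hash_name, ku + engine_id + ku).digest()

# Example call with a weak password and a made-up engine ID.
print(usm_password_to_key("maplesyrup", bytes.fromhex("800000090300aabbccddeeff")).hex())
```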
The security deficiencies of all SNMP versions can be mitigated by IPsec authentication and confidentiality mechanisms. SNMP also may be carried securely over Datagram Transport Layer Security (DTLS).
Many SNMP implementations include a type of automatic discovery where a new network component, such as a switch or router, is discovered and polled automatically. In SNMPv1 and SNMPv2c this is done through a community string that is transmitted in clear-text to other devices. Clear-text passwords are a significant security risk. Once the community string is known outside the organization it could become the target for an attack. To alert administrators of other attempts to glean community strings, SNMP can be configured to pass community-name authentication failure traps. If SNMPv2 is used, the issue can be avoided by enabling password encryption on the SNMP agents of network devices.
The common default configuration for community strings is "public" for read-only access and "private" for read-write. Because of these well-known defaults, SNMP topped the list of the SANS Institute's Common Default Configuration Issues and was number ten on the SANS Top 10 Most Critical Internet Security Threats for the year 2000. System and network administrators frequently do not change these defaults.
Whether they run over TCP or UDP, SNMPv1 and SNMPv2 are vulnerable to IP spoofing attacks. With spoofing, attackers may bypass device access lists in agents that are implemented to restrict SNMP access. SNMPv3 security mechanisms such as USM or TSM prevent a successful spoofing attack.
RFC references
(STD 16) — Structure and Identification of Management Information for the TCP/IP-based Internets
(Historic) — Management Information Base for Network Management of TCP/IP-based internets
(Historic) — A Simple Network Management Protocol (SNMP)
(STD 17) — Management Information Base for Network Management of TCP/IP-based internets: MIB-II
(Informational) — Coexistence between version 1 and version 2 of the Internet-standard Network Management Framework (Obsoleted by )
(Experimental) — Introduction to Community-based SNMPv2
(Draft Standard) — Structure of Management Information for SNMPv2 (Obsoleted by )
(Standards Track) — Coexistence between Version 1 and Version 2 of the Internet-standard Network Management Framework
(Informational) — Introduction to Version 3 of the Internet-standard Network Management Framework (Obsoleted by )
(STD 58) — Structure of Management Information Version 2 (SMIv2)
(Informational) — Introduction and Applicability Statements for Internet Standard Management Framework
STD 62 contains the following RFCs:
— An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks
— Message Processing and Dispatching for the Simple Network Management Protocol (SNMP)
— Simple Network Management Protocol (SNMP) Applications
— User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)
— View-based Access Control Model (VACM) for the Simple Network Management Protocol (SNMP)
— Version 2 of the Protocol Operations for the Simple Network Management Protocol (SNMP)
— Transport Mappings for the Simple Network Management Protocol (SNMP)
— Management Information Base (MIB) for the Simple Network Management Protocol (SNMP)
(Experimental) — Simple Network Management Protocol (SNMP) over Transmission Control Protocol (TCP) Transport Mapping
(BCP 74) — Coexistence between Version 1, Version 2, and Version 3 of the Internet-standard Network Management Framework
(Proposed) — The Advanced Encryption Standard (AES) Cipher Algorithm in the SNMP User-based Security Model
(Proposed) — Simple Network Management Protocol (SNMP) over IEEE 802 Networks
(STD 78) — Simple Network Management Protocol (SNMP) Context EngineID Discovery
(STD 78) — Transport Subsystem for the Simple Network Management Protocol (SNMP)
(STD 78) — Transport Security Model for the Simple Network Management Protocol (SNMP)
(Proposed) — Secure Shell Transport Model for the Simple Network Management Protocol (SNMP)
(Proposed) — Remote Authentication Dial-In User Service (RADIUS) Usage for Simple Network Management Protocol (SNMP) Transport Models.
(STD 78) — Transport Layer Security (TLS) Transport Model for the Simple Network Management Protocol (SNMP)
(Proposed) — HMAC-SHA-2 Authentication Protocols in the User-based Security Model (USM) for SNMPv3
See also
Agent Extensibility Protocol (AgentX) – Subagent protocol for SNMP
Common Management Information Protocol (CMIP) – Management protocol by ISO/OSI used by telecommunications devices
Common Management Information Service (CMIS)
Comparison of network monitoring systems
Net-SNMP – Open source reference implementation of SNMP
NETCONF – Protocol which is an XML-based configuration protocol for network equipment
Remote Network Monitoring (RMON)
Simple Gateway Monitoring Protocol (SGMP) – Obsolete protocol replaced by SNMP
References
Further reading
External links
Application layer protocols
Internet protocols
Internet Standards
Multi-agent systems
Network management
System administration |
41775 | https://en.wikipedia.org/wiki/Tactical%20communications | Tactical communications | Tactical communications are military communications in which information of any kind, especially orders and military intelligence, are conveyed from one command, person, or place to another upon a battlefield, particularly during the conduct of combat. It includes any kind of delivery of information, whether verbal, written, visual or auditory, and can be sent in a variety of ways. In modern times, this is usually done by electronic means. Tactical communications do not include communications provided to tactical forces by the Defense Communications System to non-tactical military commands, to tactical forces by civil organizations, nor does it include strategic communication.
Early means
The earliest way of communicating with others in a battle was by the commander's voice or by human messenger. A runner would carry reports or orders from one officer to another. Once the horse was domesticated, messages could travel much faster. A very fast way to send information was to use either drums, trumpets or flags. Each sound or banner would have a pre-determined significance for the soldier, who would respond accordingly. Auditory signals were only as effective, though, as the receiver's ability to hear them. The din of battle or long distances could make using noise less effective. They were also limited in the amount of information they could convey; the information had to be simple, such as attack or retreat.
Visual cues, such as flags or smoke signals, required the receiver to have a clear line of sight to the signal and to know when and where to look for them. Intricate warning systems, such as scouting towers with signal fires warning of incoming threats, have nevertheless always been used, at the tactical as well as the strategic level. The armies of the 19th century used two flags in combinations that replicated the alphabet. This allowed commanders to send any order they needed, but still relied on line of sight. During the Siege of Paris (1870–71), the defending French effectively used carrier pigeons to relay information between tactical units.
The wireless revolution
Although visual communication flew at the speed of light, it relied on a direct line of sight between the sender and the receiver. Telegraphs helped theater commanders to move large armies about, but one certainly could not count on using immobile telegraph lines on a changing battlefield.
At the end of the 19th century the disparate units across any field were instantaneously joined to their commanders by the invention and mass production of the radio. At first the radio could only broadcast tones, so messages were sent via Morse code. The first field radios used by the United States Army saw action in the Spanish–American War (1898) and the Philippine Insurrection (1899–1902). At the same time as radios were deployed the field telephone was developed and made commercially viable. This caused a new signal occupation specialty to be developed: lineman.
During the interwar period, the German army developed Blitzkrieg, in which air, armor, and infantry forces acted swiftly and precisely, coordinated by constant radio communication. These tactics succeeded until their defeated opponents equipped themselves to communicate and coordinate similarly.
The digital battlefield
Security was a problem: plans broadcast over radio waves could be heard by anyone with a receiver listening on the same frequency. Advances in electronics, particularly during World War II, allowed for electronic scrambling of radio broadcasts, which permitted messages to be encrypted with ciphers too complex for humans to crack without the assistance of a similar machine, such as the German Enigma machine. Once computer science advanced, large amounts of data could be sent over the airwaves in quick bursts of signals, and more complex encryption became possible.
Communication between armies was of course much more difficult before the electronic age and could only be achieved with messengers on horseback or on foot, with time delays according to the distance the messenger needed to travel. Advances in long-range communications aided the commander on the battlefield, who could then receive news of any outside force or factor that could affect the conduct of a battle.
See also
Air Defense Control Center
Combat Information Center
History of communication
Network Simulator for simulation of Tactical Communication Systems
Joint Tactical Information Distribution System
Mission Control Center
Naval Tactical Data System
Electronics technician
References
Sources
History of Communications. Bakersfield, CA: William Penn School, 2011. http://library.thinkquest.org/5729/
Raines, Rebecca Robbins. "Getting the Message Through: A Branch History of the U.S. Army Signal Corps" (Washington: Center of Military History, US Army, 1999).
Rienzi, Thomas Matthew. "Vietnam Studies: Communications-Electronics 1962–1970." Washington: Department of the Army, 1985.
"Signal Corps History." Augusta, GA: United States Army Signal Center, 2012. https://web.archive.org/web/20130403050141/http://www.signal.army.mil/ocos/rdiv/histarch/schist.asp
Military communications
Command and control |
41829 | https://en.wikipedia.org/wiki/NSA%20product%20types | NSA product types | The U.S. National Security Agency (NSA) ranks cryptographic products or algorithms by a certification called product types. Product types are defined in the National Information Assurance Glossary (CNSSI No. 4009) which defines Type 1, 2, 3, and 4 products.
Type 1 product
A Type 1 product is a device or system certified by NSA for use in cryptographically securing classified U.S. Government information. A Type 1 product is defined as:
Cryptographic equipment, assembly or component classified or certified by NSA for encrypting and decrypting classified and sensitive national security information when appropriately keyed. Developed using established NSA business processes and containing NSA approved algorithms. Used to protect systems requiring the most stringent protection mechanisms.
They are available to U.S. Government users, their contractors, and federally sponsored non-U.S. Government activities subject to export restrictions in accordance with International Traffic in Arms Regulations.
Type 1 certification is a rigorous process that includes testing and formal analysis of (among other things) cryptographic security, functional security, tamper resistance, emissions security (EMSEC/TEMPEST), and security of the product manufacturing and distribution process.
Type 2 product
A Type 2 product is unclassified cryptographic equipment, assemblies, or components, endorsed by the NSA, for use in telecommunications and automated information systems for the protection of national security information, as defined as:
Cryptographic equipment, assembly, or component certified by NSA for encrypting or decrypting sensitive national security information when appropriately keyed. Developed using established NSA business processes and containing NSA approved algorithms. Used to protect systems requiring protection mechanisms exceeding best commercial practices including systems used for the protection of unclassified national security information.
Type 3 product
A Type 3 product is a device for use with Sensitive, But Unclassified (SBU) information on non-national security systems, defined as:
Unclassified cryptographic equipment, assembly, or component used, when appropriately keyed, for encrypting or decrypting unclassified sensitive U.S. Government or commercial information, and to protect systems requiring protection mechanisms consistent with standard commercial practices. Developed using established commercial standards and containing NIST approved cryptographic algorithms/modules or successfully evaluated by the National Information Assurance Partnership (NIAP).
Approved encryption algorithms include three-key Triple DES, and AES (although AES can also be used in NSA-certified Type 1 products). Approvals for DES, two-key Triple DES and Skipjack have been withdrawn as of 2015.
Type 4 product
A Type 4 product is an encryption algorithm that has been registered with NIST but is not a Federal Information Processing Standard (FIPS), defined as:
Unevaluated commercial cryptographic equipment, assemblies, or components that neither NSA nor NIST certify for any Government usage. These products are typically delivered as part of commercial offerings and are commensurate with the vendor’s commercial practices. These products may contain either vendor proprietary algorithms, algorithms registered by NIST, or algorithms registered by NIST and published in a FIPS.
See also
NSA encryption systems, for a historically oriented list of NSA encryption products (most of them Type 1).
NSA cryptography for algorithms that NSA has participated in the development of.
NSA Suite B Cryptography
NSA Suite A Cryptography
References
Parts of this article have been derived from Federal Standard 1037C, the National Information Systems Security Glossary, and 40 USC 1452.
Cryptographic algorithms
National Security Agency encryption devices |
41890 | https://en.wikipedia.org/wiki/Group%20theory | Group theory | In mathematics and abstract algebra, group theory studies the algebraic structures known as groups.
The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.
Various physical systems, such as crystals and the hydrogen atom, and three of the four known fundamental forces in the universe, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
The early history of group theory dates from the 19th century. One of the most important mathematical achievements of the 20th century was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups.
Main classes of groups
The range of groups being considered has gradually expanded from finite permutation groups and special examples of matrix groups to abstract groups that may be specified through a presentation by generators and relations.
Permutation groups
The first class of groups to undergo a systematic study was permutation groups. Given any set X and a collection G of bijections of X into itself (known as permutations) that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself (X = G) by means of the left regular representation.
In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5, the alternating group An is simple, i.e. does not admit any proper normal subgroups. This fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals.
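The permutation groups described above are easy to experiment with directly. The short, self-contained sketch below builds the symmetric group S3 as the set of all permutations of three points and checks closure, the identity, inverses, and the fact that the group is non-abelian.

```python
from itertools import permutations

n = 3
elements = list(permutations(range(n)))          # the 3! = 6 elements of S3

def compose(p, q):
    """Return the permutation 'first apply q, then p' (p after q)."""
    return tuple(p[q[i]] for i in range(n))

identity = tuple(range(n))

# Closure: composing any two permutations gives another permutation in S3.
assert all(compose(p, q) in elements for p in elements for q in elements)

def inverse(p):
    inv = [0] * n
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

# Inverses: every permutation composed with its inverse gives the identity.
assert all(compose(p, inverse(p)) == identity for p in elements)

# S3 is non-abelian: composition order matters for these two transpositions.
a, b = (1, 0, 2), (0, 2, 1)
print(compose(a, b), compose(b, a))   # (1, 2, 0) and (2, 0, 1) differ
```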
Matrix groups
The next important class of groups is given by matrix groups, or linear groups. Here G is a set consisting of invertible matrices of given order n over a field K that is closed under the products and inverses. Such a group acts on the n-dimensional vector space Kn by linear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the group G.
Transformation groups
Permutation groups and matrix groups are special cases of transformation groups: groups that act on a certain space X preserving its inherent structure. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related with the concept of a symmetry group: transformation groups frequently consist of all transformations that preserve a certain structure.
The theory of transformation groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds by homeomorphisms or diffeomorphisms. The groups themselves may be discrete or continuous.
Abstract groups
Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group as a set with operations satisfying a certain system of axioms began to take hold. A typical way of specifying an abstract group is through a presentation by generators and relations.
A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory. If a group G is a permutation group on a set X, the factor group G/H is no longer acting on X; but the idea of an abstract group permits one not to worry about this discrepancy.
The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant under isomorphism, as well as the classes of group with a given such property: finite groups, periodic groups, simple groups, solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation of abstract algebra in the works of Hilbert, Emil Artin, Emmy Noether, and mathematicians of their school.
Groups with additional structure
An important elaboration of the concept of a group occurs if G is endowed with additional structure, notably, of a topological space, differentiable manifold, or algebraic variety. If the group operations m (multiplication) and i (inversion),
m : G × G → G, (g, h) ↦ gh, and i : G → G, g ↦ g⁻¹,
are compatible with this structure, that is, they are continuous, smooth or regular (in the sense of algebraic geometry) maps, then G is a topological group, a Lie group, or an algebraic group.
The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain for abstract harmonic analysis, whereas Lie groups (frequently realized as transformation groups) are the mainstays of differential geometry and unitary representation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus, compact connected Lie groups have been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a group Γ can be realized as a lattice in a topological group G, the geometry and analysis pertaining to G yield important results about Γ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a single p-adic analytic group G has a family of quotients which are finite p-groups of various orders, and properties of G translate into the properties of its finite quotients.
Branches of group theory
Finite group theory
During the twentieth century, mathematicians investigated some aspects of the theory of finite groups in great depth, especially the local theory of finite groups and the theory of solvable and nilpotent groups. As a consequence, the complete classification of finite simple groups was achieved, meaning that all those simple groups from which all finite groups can be built are now known.
During the second half of the twentieth century, mathematicians such as Chevalley and Steinberg also increased our understanding of finite analogs of classical groups, and other related groups. One such family of groups is the family of general linear groups over finite fields.
Finite groups often occur when considering symmetry of mathematical or physical objects, when those objects admit just a finite number of structure-preserving transformations. The theory of Lie groups, which may be viewed as dealing with "continuous symmetry", is strongly influenced by the associated Weyl groups. These are finite groups generated by reflections which act on a finite-dimensional Euclidean space. The properties of finite groups can thus play a role in subjects such as theoretical physics and chemistry.
Representation of groups
Saying that a group G acts on a set X means that every element of G defines a bijective map on the set X in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism
ρ : G → GL(V),
where GL(V) consists of the invertible linear transformations of V. In other words, to every group element g is assigned an automorphism ρ(g) such that ρ(g) ∘ ρ(h) = ρ(gh) for any g and h in G.
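A minimal worked example of such a representation: the cyclic group Z/4 can be represented on the plane by sending the class of k to rotation by k·90°. The helper functions below are purely illustrative; the check verifies the defining property ρ(g + h) = ρ(g)ρ(h) on 2×2 matrices.

```python
import math

def rho(k):
    """Represent k in Z/4 as rotation of the plane by k*90 degrees (a 2x2 matrix)."""
    t = math.pi / 2 * (k % 4)
    c, s = round(math.cos(t)), round(math.sin(t))   # entries are exactly 0, 1 or -1
    return ((c, -s), (s, c))

def matmul(a, b):
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

# Homomorphism property: rho(g + h) = rho(g) * rho(h) for all g, h in Z/4.
assert all(rho((g + h) % 4) == matmul(rho(g), rho(h))
           for g in range(4) for h in range(4))
print("rho(1) =", rho(1))   # rotation by 90 degrees: ((0, -1), (1, 0))
```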
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics. On the one hand, it may yield new information about the group G: often, the group operation in G is abstractly given, but via ρ, it corresponds to the multiplication of matrices, which is very explicit. On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, if G is finite, it is known that V above decomposes into irreducible parts (see Maschke's theorem). These parts, in turn, are much more easily manageable than the whole V (via Schur's lemma).
Given a group G, representation theory then asks what representations of G exist. There are several settings, and the employed methods and obtained results are rather different in every case: representation theory of finite groups and representations of Lie groups are two main subdomains of the theory. The totality of representations is governed by the group's characters. For example, Fourier polynomials can be interpreted as the characters of U(1), the group of complex numbers of absolute value 1, acting on the L2-space of periodic functions.
Lie theory
A Lie group is a group that is also a differentiable manifold, with the property that the group operations are compatible with the smooth structure. Lie groups are named after Sophus Lie, who laid the foundations of the theory of continuous transformation groups. The term groupes de Lie first appeared in French in 1893 in the thesis of Lie's student Arthur Tresse, page 3.
Lie groups represent the best-developed theory of continuous symmetry of mathematical objects and structures, which makes them indispensable tools for many parts of contemporary mathematics, as well as for modern theoretical physics. They provide a natural framework for analysing the continuous symmetries of differential equations (differential Galois theory), in much the same way as permutation groups are used in Galois theory for analysing the discrete symmetries of algebraic equations. An extension of Galois theory to the case of continuous symmetry groups was one of Lie's principal motivations.
Combinatorial and geometric group theory
Groups can be described in different ways. Finite groups can be described by writing down the group table consisting of all possible multiplications g · h. A more compact way of defining a group is by generators and relations, also called the presentation of a group. Given any set F of generators, the free group generated by F surjects onto the group G. The kernel of this map is called the subgroup of relations, generated by some subset D. The presentation is usually denoted by ⟨F ∣ D⟩. For example, the group presentation ⟨a, b ∣ aba⁻¹b⁻¹⟩ describes a group which is isomorphic to the direct product Z × Z. A string consisting of generator symbols and their inverses is called a word.
Combinatorial group theory studies groups from the perspective of generators and relations. It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection of graphs via their fundamental groups. For example, one can show that every subgroup of a free group is free.
There are several natural questions arising from giving a group by its presentation. The word problem asks whether two words are effectively the same group element. By relating the problem to Turing machines, one can show that there is in general no algorithm solving this task. Another, generally harder, algorithmically insoluble problem is the group isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example, the group with presentation ⟨x, y ∣ xyxyx = e⟩ is isomorphic to the additive group Z of integers, although this may not be immediately apparent.
Geometric group theory attacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on. The first idea is made precise by means of the Cayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs the word metric given by the length of the minimal path between the elements. A theorem of Milnor and Svarc then says that given a group G acting in a reasonable manner on a metric space X, for example a compact manifold, then G is quasi-isometric (i.e. looks similar from a distance) to the space X.
Connection of groups and symmetry
Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example
If X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups.
If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (an isometry). The corresponding group is called isometry group of X.
If instead angles are preserved, one speaks of conformal maps. Conformal maps give rise to Kleinian groups, for example.
Symmetries are not restricted to geometrical objects, but include algebraic objects as well. For instance, the equation x² − 3 = 0 has the two solutions √3 and −√3. In this case, the group that exchanges the two roots is the Galois group belonging to the equation. Every polynomial equation in one variable has a Galois group, that is a certain permutation group on its roots.
The axioms of a group formalize the essential aspects of symmetry. Symmetries form a group: they are closed because if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry and the associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative.
Frucht's theorem says that every group is the symmetry group of some graph. So every abstract group is actually the symmetries of some explicit object.
The saying of "preserving the structure" of an object can be made precise by working in a category. Maps preserving the structure are then the morphisms, and the symmetry group is the automorphism group of the object in question.
Applications of group theory
Applications of group theory abound. Almost all structures in abstract algebra are special cases of groups. Rings, for example, can be viewed as abelian groups (corresponding to addition) together with a second operation (corresponding to multiplication). Therefore, group theoretic arguments underlie large parts of the theory of those entities.
Galois theory
Galois theory uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The fundamental theorem of Galois theory provides a link between algebraic field extensions and group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the corresponding Galois group. For example, S5, the symmetric group on 5 elements, is not solvable, which implies that the general quintic equation cannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such as class field theory.
Algebraic topology
Algebraic topology is another domain which prominently associates groups to the objects the theory is interested in. There, groups are used to describe certain invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. For example, the fundamental group "counts" how many paths in the space are essentially different. The Poincaré conjecture, proved in 2002/2003 by Grigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use of Eilenberg–MacLane spaces which are spaces with prescribed homotopy groups. Similarly algebraic K-theory relies in a way on classifying spaces of groups. Finally, the name of the torsion subgroup of an infinite group shows the legacy of topology in group theory.
Algebraic geometry
Algebraic geometry likewise uses group theory in many ways. Abelian varieties have been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures. The one-dimensional case, namely elliptic curves is studied in particular detail. They are both theoretically and practically intriguing. In another direction, toric varieties are algebraic varieties acted on by a torus. Toroidal embeddings have recently led to advances in algebraic geometry, in particular resolution of singularities.
Algebraic number theory
Algebraic number theory makes use of groups for some important applications. For example, Euler's product formula,
∑_{n ≥ 1} 1/n^s = ∏_{p prime} 1/(1 − p^{−s}) (valid for s > 1),
captures the fact that any integer decomposes in a unique way into primes. The failure of this statement for more general rings gives rise to class groups and regular primes, which feature in Kummer's treatment of Fermat's Last Theorem.
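The identity can be checked numerically; in the sketch below the left-hand sum and the right-hand product are truncated at arbitrary cutoffs and compared with π²/6, the common value of both sides at s = 2.

```python
import math

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

s = 2
partial_sum = sum(1 / n ** s for n in range(1, 100_000))
partial_product = 1.0
for p in primes_up_to(1000):
    partial_product *= 1 / (1 - p ** -s)

print(partial_sum, partial_product, math.pi ** 2 / 6)  # all close to 1.6449...
```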
Harmonic analysis
Analysis on Lie groups and certain other groups is called harmonic analysis. Haar measures, that is, integrals invariant under the translation in a Lie group, are used for pattern recognition and other image processing techniques.
Combinatorics
In combinatorics, the notion of permutation group and the concept of group action are often used to simplify the counting of a set of objects; see in particular Burnside's lemma.
Music
The presence of the 12-periodicity in the circle of fifths yields applications of elementary group theory in musical set theory. Transformational theory models musical transformations as elements of a mathematical group.
Physics
In physics, groups are important because they describe the symmetries which the laws of physics seem to obey. According to Noether's theorem, every continuous symmetry of a physical system corresponds to a conservation law of the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group.
Group theory can be used to resolve the incompleteness of the statistical interpretations of mechanics developed by Willard Gibbs, relating to the summing of an infinite number of probabilities to yield a meaningful solution.
Chemistry and materials science
In chemistry and materials science, point groups are used to classify regular polyhedra, and the symmetries of molecules, and space groups to classify crystal structures. The assigned groups can then be used to determine physical properties (such as chemical polarity and chirality), spectroscopic properties (particularly useful for Raman spectroscopy, infrared spectroscopy, circular dichroism spectroscopy, magnetic circular dichroism spectroscopy, UV/Vis spectroscopy, and fluorescence spectroscopy), and to construct molecular orbitals.
Molecular symmetry is responsible for many physical and spectroscopic properties of compounds and provides relevant information about how chemical reactions occur. In order to assign a point group for any given molecule, it is necessary to find the set of symmetry operations present on it. The symmetry operation is an action, such as a rotation around an axis or a reflection through a mirror plane. In other words, it is an operation that moves the molecule such that it is indistinguishable from the original configuration. In group theory, the rotation axes and mirror planes are called "symmetry elements". These elements can be a point, line or plane with respect to which the symmetry operation is carried out. The symmetry operations of a molecule determine the specific point group for this molecule.
In chemistry, there are five important symmetry operations. They are identity operation (E), rotation operation or proper rotation (Cn), reflection operation (σ), inversion (i) and rotation reflection operation or improper rotation (Sn). The identity operation (E) consists of leaving the molecule as it is. This is equivalent to any number of full rotations around any axis. This is a symmetry of all molecules, whereas the symmetry group of a chiral molecule consists of only the identity operation. An identity operation is a characteristic of every molecule even if it has no symmetry. Rotation around an axis (Cn) consists of rotating the molecule around a specific axis by a specific angle. It is rotation through the angle 360°/n, where n is an integer, about a rotation axis. For example, if a water molecule rotates 180° around the axis that passes through the oxygen atom and between the hydrogen atoms, it is in the same configuration as it started. In this case, n = 2, since applying it twice produces the identity operation. In molecules with more than one rotation axis, the Cn axis having the largest value of n is the highest order rotation axis or principal axis. For example, in boron trifluoride (BF3), the highest order of rotation axis is C3, so the principal axis of rotation is C3.
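The C2 operation on water can be made concrete with approximate coordinates (in ångströms; the numbers are rounded, illustrative values, with the oxygen at the origin and the C2 axis taken along y). Rotating every atom by 180° about that axis swaps the two hydrogens and leaves the set of atomic positions unchanged, which is exactly what makes C2 a symmetry operation.

```python
# Approximate water geometry: O at the origin, H atoms symmetric about the y axis.
atoms = [
    ("O", ( 0.000, 0.000, 0.0)),
    ("H", ( 0.757, 0.587, 0.0)),
    ("H", (-0.757, 0.587, 0.0)),
]

def c2_about_y(point):
    """Proper rotation by 180 degrees about the y axis: (x, y, z) -> (-x, y, -z)."""
    x, y, z = point
    return (-x, y, -z)

rotated = {(name, c2_about_y(pos)) for name, pos in atoms}
original = {(name, pos) for name, pos in atoms}

# The rotated molecule is indistinguishable from the starting configuration,
# so C2 is a symmetry operation of water (and applying C2 twice gives E).
print(rotated == original)   # True
```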
In the reflection operation (σ) many molecules have mirror planes, although they may not be obvious. The reflection operation exchanges left and right, as if each point had moved perpendicularly through the plane to a position exactly as far from the plane as when it started. When the plane is perpendicular to the principal axis of rotation, it is called σh (horizontal). Other planes, which contain the principal axis of rotation, are labeled vertical (σv) or dihedral (σd).
Inversion (i) is a more complex operation. Each point moves through the center of the molecule to a position opposite the original position and as far from the central point as where it started. Many molecules that seem at first glance to have an inversion center do not; for example, methane and other tetrahedral molecules lack inversion symmetry. To see this, hold a methane model with two hydrogen atoms in the vertical plane on the right and two hydrogen atoms in the horizontal plane on the left. Inversion results in two hydrogen atoms in the horizontal plane on the right and two hydrogen atoms in the vertical plane on the left. Inversion is therefore not a symmetry operation of methane, because the orientation of the molecule following the inversion operation differs from the original orientation. The last operation, improper rotation or rotation-reflection (Sn), requires rotation of 360°/n followed by reflection through a plane perpendicular to the axis of rotation.
Cryptography
Very large groups of prime order constructed in elliptic curve cryptography serve for public-key cryptography. Cryptographical methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which make the discrete logarithm very hard to calculate. One of the earliest encryption protocols, Caesar's cipher, may also be interpreted as a (very easy) group operation. Most cryptographic schemes use groups in some way. In particular Diffie–Hellman key exchange uses finite cyclic groups. So the term group-based cryptography refers mostly to cryptographic protocols that use infinite nonabelian groups such as a braid group.
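As a small illustration of how a finite cyclic group is used, the toy Diffie–Hellman exchange below works in the multiplicative group modulo a small prime. The parameters are deliberately tiny and not secure; real deployments use groups of cryptographic size or elliptic-curve groups, and all names here are illustrative.

```python
import secrets

# Toy parameters: arithmetic in the multiplicative group mod p, with base g.
# These are far too small for real use; they only illustrate the group operation.
p = 0xFFFFFFFB          # a small prime (4294967291)
g = 2

# Each party picks a secret exponent and publishes g**secret mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)
B = pow(g, b, p)

# Both parties arrive at the same group element: (g**b)**a = (g**a)**b = g**(a*b).
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print(hex(shared_alice))
```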
History
Group theory has three main historical sources: number theory, the theory of algebraic equations, and geometry. The number-theoretic strand was begun by Leonhard Euler, and developed by Gauss's work on modular arithmetic and additive and multiplicative groups related to quadratic fields. Early results about permutation groups were obtained by Lagrange, Ruffini, and Abel in their quest for general solutions of polynomial equations of high degree. Évariste Galois coined the term "group" and established a connection, now known as Galois theory, between the nascent theory of groups and field theory. In geometry, groups first became important in projective geometry and, later, non-Euclidean geometry. Felix Klein's Erlangen program proclaimed group theory to be the organizing principle of geometry.
Galois, in the 1830s, was the first to employ groups to determine the solvability of polynomial equations. Arthur Cayley and Augustin Louis Cauchy pushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems from geometrical situations. In an attempt to come to grips with possible geometries (such as Euclidean, hyperbolic or projective geometry) using group theory, Felix Klein initiated the Erlangen programme. Sophus Lie, in 1884, started using groups (now called Lie groups) attached to analytic problems. Thirdly, groups were, at first implicitly and later explicitly, used in algebraic number theory.
The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, representation theory, and many more influential spin-off domains. The classification of finite simple groups is a vast body of work from the mid 20th century, classifying all the finite simple groups.
See also
List of group theory topics
Examples of groups
Notes
References
Ronan M., 2006. Symmetry and the Monster. Oxford University Press. . For lay readers. Describes the quest to find the basic building blocks for finite groups.
A standard contemporary reference.
Inexpensive and fairly readable, but somewhat dated in emphasis, style, and notation.
External links
History of the abstract group concept
Higher dimensional group theory – This presents a view of group theory as level one of a theory that extends in all dimensions, and has applications in homotopy theory and to higher dimensional nonabelian methods for local-to-global problems.
Plus teacher and student package: Group Theory – This package brings together all the articles on group theory from Plus, the online mathematics magazine produced by the Millennium Mathematics Project at the University of Cambridge, exploring applications and recent breakthroughs, and giving explicit definitions and examples of groups.
This is a detailed exposition of contemporaneous understanding of Group Theory by an early researcher in the field.
42416 | https://en.wikipedia.org/wiki/X10%20%28industry%20standard%29 | X10 (industry standard) | X10 is a protocol for communication among electronic devices used for home automation (domotics). It primarily uses power line wiring for signaling and control, where the signals involve brief radio frequency bursts representing digital information. A wireless radio-based protocol transport is also defined.
X10 was developed in 1975 by Pico Electronics of Glenrothes, Scotland, in order to allow remote control of home devices and appliances. It was the first general purpose domotic network technology and remains the most widely available.
Although a number of higher-bandwidth alternatives exist, X10 remains popular in the home environment with millions of units in use worldwide, and inexpensive availability of new components.
History
In 1970, a group of engineers started a company in Glenrothes, Scotland called Pico Electronics. The company developed the first single chip calculator. When calculator integrated circuit prices started to fall, Pico refocused on commercial products rather than plain ICs.
In 1974, the Pico engineers jointly developed an LP record turntable, the ADC Accutrac 4000, with Birmingham Sound Reproducers, at the time the largest manufacturer of record changers in the world. It could be programmed to play selected tracks, and could be operated by a remote control using ultrasound signals, which sparked the idea of remote control for lights and appliances. By 1975, the X10 project was conceived, so named because it was the tenth project. In 1978, X10 products started to appear in RadioShack and Sears stores. A partnership was formed with BSR under the name X10 Ltd. At that time the system consisted of a 16-channel command console, a lamp module, and an appliance module. Soon after came the wall switch module and the first X10 timer.
In the 1980s, the CP-290 computer interface was released. Software for the interface runs on the Commodore 64, Apple II, Macintosh, MS-DOS, and MS-Windows.
In 1985, BSR went out of business, and X10 (USA) Inc. was formed. In the early 1990s, the consumer market was divided into two main categories: the ultra-high-end, with budgets of around US$100,000, and the mass market, with budgets of US$2,000 to US$35,000. CEBus (1984) and LonWorks (1991) were attempts to improve reliability and replace X10.
Brands
X10 components are sold under a variety of brand names:
X10 Powerhouse
X10 Pro
X10 Activehome
Radio Shack Plug 'n Power
Leviton Central Control System (CCS)
Leviton Decora Electronic Controls
Sears Home Control System
Stanley LightMaker
Stanley Homelink
Black & Decker Freewire
IBM Home Director
RCA Home Control
GE Homeminder
Advanced Control Technologies (ACT)
Magnavox Home Security
NuTone
Smarthome
Power line carrier control overview
Household electrical wiring, which powers lights and appliances, is used to send digital data between X10 devices. This data is encoded onto a 120 kHz carrier which is transmitted as bursts during the relatively quiet zero crossings of the 50 or 60 Hz alternating current (AC) waveform. One bit is transmitted at each zero crossing.
The digital data consists of an address and a command sent from a controller to a controlled device. More advanced controllers can also query equally advanced devices to respond with their status. This status may be as simple as "off" or "on", or the current dimming level, or even the temperature or other sensor reading. Devices usually plug into the wall where a lamp, television, or other household appliance plugs in; however some built-in controllers are also available for wall switches and ceiling fixtures.
The relatively high-frequency carrier wave carrying the signal cannot pass through a power transformer or across the phases of a multiphase system. For split-phase systems, the signal can be passively coupled from leg to leg using a capacitor, but for three-phase systems or where the capacitor provides insufficient coupling, an active X10 repeater can be used. To allow signals to be coupled across phases and still match each phase's zero crossing point, each bit is transmitted three times in each half cycle, offset by 1/6 cycle.
It may also be desirable to block X10 signals from leaving the local area so, for example, the X10 controls in one house do not interfere with the X10 controls in a neighboring house. In this situation, inductive filters can be used to attenuate the X10 signals coming into or going out of the local area.
Protocol
Whether using power line or radio communications, packets transmitted using the X10 control protocol consist of a four bit house code followed by one or more four bit unit codes, finally followed by a four bit command. For the convenience of users configuring a system, the four bit house code is selected as a letter from A through P while the four bit unit code is a number 1 through 16.
When the system is installed, each controlled device is configured to respond to one of the 256 possible addresses (16 house codes × 16 unit codes); each device reacts to commands specifically addressed to it, or possibly to several broadcast commands.
The protocol may transmit a message that says "select code A3", followed by "turn on", which commands unit "A3" to turn on its device. Several units can be addressed before giving the command, allowing a command to affect several units simultaneously. For example, "select A3", "select A15", "select A4", and finally, "turn on", causes units A3, A4, and A15 to all turn on.
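The addressing scheme just described can be sketched in a few lines of Python; the function and the message strings below are purely illustrative and are not part of any real X10 library.

    def x10_sequence(house, units, command):
        # Select each unit of the given house code first, then send one command
        # that applies to all of the selected units.
        assert house in "ABCDEFGHIJKLMNOP"
        assert all(1 <= unit <= 16 for unit in units)
        return [f"select {house}{unit}" for unit in units] + [command]

    # "select A3", "select A15", "select A4", then "turn on" A3, A4 and A15 together.
    print(x10_sequence("A", [3, 15, 4], "turn on"))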
Note that there is no restriction that prevents using more than one house code within a single house. The "all lights on" command and "all units off" commands will only affect a single house code, so an installation using multiple house codes effectively has the devices divided into separate zones.
One-way vs two-way
Inexpensive X10 devices only receive commands and do not acknowledge their status to the rest of the network. Two-way controller devices allow for a more robust network but cost two to four times more and require two-way X10 devices.
List of X10 commands
List of X10 house and unit code encodings
Note that the binary values for the house and unit codes correspond, but they are not a straight binary sequence. A unit code is followed by one additional "0" bit to distinguish from a command code (detailed above).
Physical layer details
On a 60 Hz AC supply, each bit transmitted requires two zero crossings. A "1" bit is represented by an active zero crossing followed by an inactive zero crossing. A "0" bit is represented by an inactive zero crossing followed by an active zero crossing. An active zero crossing is represented by a 1 millisecond burst of 120 kHz at the zero crossing point (nominally 0°, but within 200 microseconds of the zero crossing point). An inactive zero crossing will not have a pulse of 120 kHz signal.
In order to provide a predictable start point, every data frame transmitted always begins with a start code of three active zero crossings followed by an inactive crossing. Since all data bits are sent as one active and one inactive (or one inactive and one active) zero crossing, the start code, possessing three active crossings in a row, can be uniquely detected. Many X10 protocol charts represent this start code as "1110", but it is important to realize that is in terms of zero crossings, not data bits.
Immediately after the start code, a 4-bit house code (normally represented by the letters A to P on interface units) appears, and after the house code comes a 5-bit function code. Function codes may specify a unit number code (1–16) or a command code. The unit number or command code occupies the first 4 of the 5 bits. The final bit is a 0 for a unit code and a 1 for a command code. Multiple unit codes may be transmitted in sequence before a command code is finally sent. The command will be applied to all unit codes sent. It is also possible to send a message with no unit codes, just a house code and a command code. This will apply the command to the last group of unit codes previously sent.
One start code, one house code, and one function code is known as an X10 frame and represent the minimum components of a valid X10 data packet.
Each frame is sent twice in succession for redundancy and reliability in the presence of power line noise, and to accommodate line repeaters. After allowing for retransmission, line control, etc., data rates are around 20 bit/s, making X10 data transmission so slow that the technology is confined to turning devices on and off or other very simple operations.
Whenever the data changes from one address to another address, from an address to a command, or from one command to another command, the data frames must be separated by at least 6 clear zero crossings (or "000000"). The sequence of six zeros resets the device decoder hardware.
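The framing described above can be sketched as follows. The house and function codes passed in are assumed to be already-encoded bit strings (the real X10 code tables are not reproduced here); the output is the resulting pattern of active (1) and inactive (0) zero crossings.

    def frame_crossings(house_bits, function_bits):
        assert len(house_bits) == 4 and len(function_bits) == 5
        # Start code: three active zero crossings followed by one inactive crossing.
        start = "1110"
        # Each data bit occupies two zero crossings: "10" for a 1, "01" for a 0.
        data = "".join("10" if bit == "1" else "01"
                       for bit in house_bits + function_bits)
        return start + data   # 4 + 2*(4+5) = 22 crossings, i.e. 11 AC cycles per frame

    # Hypothetical example bit strings, not taken from the real X10 code tables.
    print(frame_crossings("0110", "00101"))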
Later hardware developments (1997) improved on the original X10 hardware, and versions for the European 230 V AC, 50 Hz market followed in 2001. All improved products use the same X10 protocol and remain compatible.
RF protocol
To allow for wireless keypads, remote switches, motion sensors, et cetera, an RF protocol is also defined. X10 wireless devices send data packets that are nearly identical to the NEC IR protocol used by many IR remotes, and a radio receiver then provides a bridge which translates these radio packets to ordinary X10 power line control packets. The wireless protocol operates at a frequency of 310 MHz in the U.S. and 433.92 MHz in European systems.
The devices available using the radio protocol include:
Keypad controllers ("clickers")
Keychain controllers that can control one to four X10 devices
Burglar alarm modules that can transmit sensor data
Passive infrared switches to control lighting and X-10 chimes
Non-passive information bursts
Hardware support
Device modules
Depending on the load that is to be controlled, different modules must be used. For incandescent lamp loads, a lamp module or wall switch module can be used. These modules switch the power using a TRIAC solid state switch and are also capable of dimming the lamp load. Lamp modules are almost silent in operation, and generally rated to control loads ranging from approximately 40 to 500 watts.
For loads other than incandescent lamps, such as fluorescent lamps, high-intensity discharge lamps, and electrical home appliances, the triac-based electronic switching in the lamp module is unsuitable and an appliance module must be used instead. These modules switch the power using an impulse relay. In the U.S., these modules are generally rated to control loads up to 15 amperes (1800 watts at 120 V).
Many device modules offer a feature called local control. If the module is switched off, operating the power switch on the lamp or appliance will cause the module to turn on. In this way, a lamp can still be lit or a coffee pot turned on without the need to use an X10 controller. Wall switch modules may not offer this feature. Because local control relies on sensing a small current through the attached load, older appliance modules may fail to work with a very low load such as a 5 W LED table lamp.
Some wall switch modules offer a feature called local dimming. Ordinarily, the local push button of a wall switch module simply offers on/off control with no possibility of locally dimming the controlled lamp. If local dimming is offered, holding down the push button will cause the lamp to cycle through its brightness range.
Higher end modules have more advanced features such as programmable on levels, customizable fade rates, the ability to transmit commands when used (referred to as 2-way devices), and scene support.
There are sensor modules that sense and report temperature, light, infrared, motion, or contact openings and closures. Device modules include thermostats, audible alarms and controllers for low voltage switches.
Controllers
X10 controllers range from extremely simple to very sophisticated.
The simplest controllers are arranged to control four X10 devices at four sequential addresses (1–4 or 5–8). The controllers typically contain the following buttons:
Unit 1 on/off
Unit 2 on/off
Unit 3 on/off
Unit 4 on/off
Brighten/dim (last selected unit)
All lights on/all units off
More sophisticated controllers can control more units and/or incorporate timers that perform preprogrammed functions at specific times each day. Units are also available that use passive infrared motion detectors or photocells to turn lights on and off based on external conditions.
Finally, very sophisticated units are available that can be fully programmed or, like the X10 Firecracker, use a program running in an external computer. These systems can execute many different timed events, respond to external sensors, and execute, with the press of a single button, an entire scene, turning lights on, establishing brightness levels, and so on. Control programs are available for computers running Microsoft Windows, Apple's Macintosh, Linux and FreeBSD operating systems.
Burglar alarm systems are also available. These systems contain door/window sensors, as well as motion sensors that use a coded radio frequency (RF) signal to identify when they are tripped or just to routinely check-in and give a heart-beat signal to show that the system is still active. Users can arm and disarm their system via several different remote controls that also use a coded RF signal to ensure security. When an alarm is triggered the console will make an outbound telephone call with a recorded message. The console will also use X10 protocols to flash lights when an alarm has been triggered while the security console sounds an external siren. Using X10 protocols, signals will also be sent to remote sirens for additional security.
Bridges
There are bridges to translate X10 to other domotic standards (e.g., KNX). ioBridge can be used to translate the X10 protocol to a web service API via the X10 PSC04 Powerline Interface Module. The magDomus home controller from magnocomp allows interconnection and inter-operation between most home automation technologies.
Limitations
Compatibility
Solid-state switches used in X10 controls pass a very small leakage current. Compact fluorescent lamps may display nuisance blinking when switched off; CFL manufacturers recommend against controlling lamps with solid-state timers or remote controls.
Some X10 controllers with triac solid-state outputs may not work well with low power devices (below 50 watts) or devices like fluorescent bulbs due to the leakage current of the device. An appliance module, using a relay with metallic contacts may resolve this problem. Many older appliance units have a 'local control' feature whereby the relay is intentionally bypassed with a high value resistor; the module can then sense the appliance's own switch and turn on the relay when the local switch is operated. This sense current may be incompatible with LED or CFL lamps.
Not all devices can be used on a dimmer. Fluorescent lamps are not dimmable with incandescent lamp dimmers; certain models of compact fluorescent lamps are dimmable but cost more. Motorized appliances such as fans, etc. generally will not operate as expected on a dimmer.
Wiring and interfering sources
One problem with X10 is excessive attenuation of signals between the two live conductors in the 3-wire 120/240 volt system used in typical North American residential construction. Signals from a transmitter on one live conductor may not propagate through the high impedance of the distribution transformer winding to the other live conductor. Often, there's simply no reliable path to allow the X10 signals to propagate from one transformer leg wire to the other; this failure may come and go as large 240 volt devices such as stoves or dryers are turned on and off. (When turned on, such devices provide a low-impedance bridge for the X10 signals between the two leg wires.) This problem can be permanently overcome by installing a capacitor between the leg wires as a path for the X10 signals; manufacturers commonly sell signal couplers that plug into 240 volt sockets that perform this function. More sophisticated installations install an active repeater device between the legs, while others combine signal amplifiers with a coupling device. A repeater is also needed for inter-phase communication in homes with three-phase electric power. In many countries outside North America, entire houses are typically wired from a single 240 volt single-phase wire, so this problem does not occur.
Television receivers or household wireless devices may cause spurious "off" or "on" signals. Noise filtering (as installed on computers as well as many modern appliances) may help keep external noise out of X10 signals, but noise filters not designed for X10 may also attenuate X10 signals traveling on the branch circuit to which the appliance is connected.
Certain types of power supplies used in modern electronic equipment, such as computers, television receivers and satellite receivers, attenuate passing X10 signals by providing a low impedance path to high frequency signals. Typically, the capacitors used on the inputs to these power supplies short the X10 signal from line to neutral, suppressing any hope of X10 control on the circuit near that device. Filters are available that will block the X10 signals from ever reaching such devices; plugging offending devices into such filters can cure mysterious X10 intermittent failures.
Having a backup power supply or standby power supply, such as those used with computers or other electronic devices, can severely attenuate X10 signals on that leg of a household installation because of the filtering used in the power supply.
Commands getting lost
X10 signals can only be transmitted one command at a time, first by addressing the device to control, and then sending an operation for that device to perform. If two X10 signals are transmitted at the same time they may collide or interleave, leading to commands that either cannot be decoded or that trigger incorrect operations. The CM15A and RR501 Transceiver can avoid these signal collisions that can sometimes occur with other models.
Lack of speed
The X10 protocol is slow. It takes roughly three quarters of a second to transmit a device address and a command. While generally not noticeable when using a tabletop controller, it becomes a noticeable problem when using 2-way switches or when utilizing some sort of computerized controller. The apparent delay can be lessened somewhat by using slower device dim rates. With more advanced modules another option is to use group control (lighting scene) extended commands. These allow adjusting several modules at once by a single command.
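A rough estimate, assuming a 60 Hz line and the frame structure given under "Physical layer details" (11 AC cycles per frame, each frame sent twice, and six clear zero crossings between the address and the command), is consistent with the three-quarters-of-a-second figure:

    frame_cycles = 11                     # start code (2 cycles) + 4 house bits + 5 function bits
    gap_cycles = 3                        # six clear zero crossings between address and command
    total_cycles = 2 * frame_cycles + gap_cycles + 2 * frame_cycles
    print(total_cycles / 60)              # 47 / 60 of a second, about 0.78 s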
Limited functionality
X10 protocol does support more advanced control over the dimming speed, direct dim level setting and group control (scene settings). This is done via extended message set, which is an official part of X10 standard. However support for all extended messages is not mandatory, and many cheaper modules implement only the basic message set. These require adjusting each lighting circuit one after the other, which can be visually unappealing and also very slow.
Interference and lack of encryption
The standard X10 power line and RF protocols lack support for encryption, and can only address 256 devices. Unfiltered power line signals from close neighbors using the same X10 device addresses may interfere with each other. Interfering RF wireless signals may similarly be received, making it easy for anyone nearby with an X10 RF remote to intentionally or unintentionally disrupt devices on a premises that uses an RF-to-power-line transceiver.
See also
Insteon
Home automation
Power-line communication
References
External links
X10 Knowledge Base
X10 Schematics and Modifications
Digital X-10, Which One Should I Use?.
Home automation
Remote control
Architectural lighting design
Lighting |
43285 | https://en.wikipedia.org/wiki/Common%20Object%20Request%20Broker%20Architecture | Common Object Request Broker Architecture | The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) designed to facilitate the communication of systems that are deployed on diverse platforms. CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware. CORBA uses an object-oriented model although the systems that use CORBA do not have to be object-oriented. CORBA is an example of the distributed object paradigm.
Overview
CORBA enables communication between software written in different languages and running on different computers. Implementation details from specific operating systems, programming languages, and hardware platforms are all removed from the responsibility of developers who use CORBA. CORBA normalizes the method-call semantics between application objects residing either in the same address-space (application) or in remote address-spaces (same host, or remote host on a network). Version 1.0 was released in October 1991.
CORBA uses an interface definition language (IDL) to specify the interfaces that objects present to the outer world. CORBA then specifies a mapping from IDL to a specific implementation language like C++ or Java. Standard mappings exist for Ada, C, C++, C++11, COBOL, Java, Lisp, PL/I, Object Pascal, Python, Ruby and Smalltalk. Non-standard mappings exist for C#, Erlang, Perl, Tcl and Visual Basic implemented by object request brokers (ORBs) written for those languages.
The CORBA specification dictates there shall be an ORB through which an application would interact with other objects. This is how it is implemented in practice:
The application initializes the ORB, and accesses an internal Object Adapter, which maintains things like reference counting, object (and reference) instantiation policies, and object lifetime policies.
The Object Adapter is used to register instances of the generated code classes. Generated code classes are the result of compiling the user IDL code, which translates the high-level interface definition into an OS- and language-specific class base for use by the user application. This step is necessary in order to enforce CORBA semantics and provide a clean user process for interfacing with the CORBA infrastructure.
Some IDL mappings are more difficult to use than others. For example, due to the nature of Java, the IDL-Java mapping is rather straightforward and makes usage of CORBA very simple in a Java application. This is also true of the IDL to Python mapping. The C++ mapping requires the programmer to learn datatypes that predate the C++ Standard Template Library (STL). By contrast, the C++11 mapping is easier to use, but requires heavy use of the STL. Since the C language is not object-oriented, the IDL to C mapping requires a C programmer to manually emulate object-oriented features.
In order to build a system that uses or implements a CORBA-based distributed object interface, a developer must either obtain or write the IDL code that defines the object-oriented interface to the logic the system will use or implement. Typically, an ORB implementation includes a tool called an IDL compiler that translates the IDL interface into the target language for use in that part of the system. A traditional compiler then compiles the generated code to create the linkable-object files for use in the application. This diagram illustrates how the generated code is used within the CORBA infrastructure:
This figure illustrates the high-level paradigm for remote interprocess communications using CORBA. The CORBA specification further addresses data typing, exceptions, network protocols, communication timeouts, etc. For example: Normally the server side has the Portable Object Adapter (POA) that redirects calls either to the local servants or (to balance the load) to the other servers. The CORBA specification (and thus this figure) leaves various aspects of distributed system to the application to define including object lifetimes (although reference counting semantics are available to applications), redundancy/fail-over, memory management, dynamic load balancing, and application-oriented models such as the separation between display/data/control semantics (e.g. see Model–view–controller), etc.
In addition to providing users with a language and a platform-neutral remote procedure call (RPC) specification, CORBA defines commonly needed services such as transactions and security, events, time, and other domain-specific interface models.
Versions history
This table presents the history of CORBA standard versions.
Servants
A servant is the invocation target containing methods for handling the remote method invocations. In the newer CORBA versions, the remote object (on the server side) is split into the object (that is exposed to remote invocations) and the servant (to which the former part forwards the method calls). There can be one servant per remote object, or the same servant can support several (possibly all) objects associated with the given Portable Object Adapter. The servant for each object can be set or found "once and forever" (servant activation) or dynamically chosen each time the method on that object is invoked (servant location). Both the servant locator and the servant activator can forward the calls to another server. In total, this system provides a very powerful means to balance the load, distributing requests between several machines. In object-oriented languages, both the remote object and its servant are objects from the viewpoint of object-oriented programming.
Incarnation is the act of associating a servant with a CORBA object so that it may service requests. Incarnation provides a concrete servant form for the virtual CORBA object. Activation and deactivation refer only to CORBA objects, while the terms incarnation and etherealization refer to servants. However, the lifetimes of objects and servants are independent. One always incarnates a servant before calling activate_object(), but the reverse is also possible: create_reference() activates an object without incarnating a servant, and servant incarnation is later done on demand with a Servant Manager.
The Portable Object Adapter (POA) is the CORBA object responsible for splitting the server-side remote invocation handler into the remote object and its servant. The object is exposed for the remote invocations, while the servant contains the methods that actually handle the requests. The servant for each object can be chosen either statically (once) or dynamically (for each remote invocation), in both cases allowing call forwarding to another server.
On the server side, the POAs form a tree-like structure, where each POA is responsible for one or more objects being served. The branches of this tree can be independently activated/deactivated, have the different code for the servant location or activation and the different request handling policies.
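A minimal server-side sketch of these ideas in Python, assuming the omniORBpy ORB and skeletons generated from a hypothetical Example.idl defining an interface Echo with a single echoString operation; module names such as Example__POA follow the standard OMG Python mapping, and the whole fragment is illustrative rather than a complete application.

    import sys
    from omniORB import CORBA, PortableServer
    import Example, Example__POA   # hypothetical IDL-generated stub and skeleton modules

    class Echo_i(Example__POA.Echo):        # the servant holds the method implementations
        def echoString(self, message):
            return message

    orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
    poa = orb.resolve_initial_references("RootPOA")
    servant = Echo_i()
    obj_ref = servant._this()               # implicitly activates the object in the Root POA
    print(orb.object_to_string(obj_ref))    # stringified IOR that clients can use
    poa._get_the_POAManager().activate()    # let the POA start dispatching requests
    orb.run()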
Features
The following describes some of the most significant ways that CORBA can be used to facilitate communication among distributed objects.
Objects By Reference
This reference is either acquired through a stringified Uniform Resource Locator (URL), NameService lookup (similar to Domain Name System (DNS)), or passed-in as a method parameter during a call.
Object references are lightweight objects matching the interface of the real object (remote or local). Method calls on the reference result in subsequent calls to the ORB and blocking on the thread while waiting for a reply, success or failure. The parameters, return data (if any), and exception data are marshaled internally by the ORB according to the local language and OS mapping.
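A matching client sketch under the same assumptions (omniORBpy and stubs generated from the hypothetical Example.idl); the IOR string is a placeholder for a reference obtained from a file, a name service lookup, or a corbaloc URL.

    import sys
    from omniORB import CORBA
    import Example   # hypothetical IDL-generated stub module

    orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
    obj = orb.string_to_object("IOR:...")   # placeholder stringified object reference
    echo = obj._narrow(Example.Echo)        # narrow the generic reference to the typed interface
    print(echo.echoString("Hello"))         # the ORB marshals the call and blocks until the reply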
Data By Value
The CORBA Interface Definition Language provides the language- and OS-neutral inter-object communication definition. CORBA Objects are passed by reference, while data (integers, doubles, structs, enums, etc.) are passed by value. The combination of Objects-by-reference and data-by-value provides the means to enforce great data typing while compiling clients and servers, yet preserve the flexibility inherent in the CORBA problem-space.
Objects By Value (OBV)
Apart from remote objects, the CORBA and RMI-IIOP standards define the concept of the OBV and Valuetypes. The code inside the methods of Valuetype objects is executed locally by default. If the OBV has been received from the remote side, the needed code must be either a priori known for both sides or dynamically downloaded from the sender. To make this possible, the record defining the OBV contains the Code Base, which is a space-separated list of URLs from which this code should be downloaded. The OBV can also have remote methods.
CORBA Component Model (CCM)
The CORBA Component Model (CCM) is an addition to the family of CORBA definitions. It was introduced with CORBA 3 and describes a standard application framework for CORBA components. Though not dependent on the language-dependent Enterprise JavaBeans (EJB), it is a more general form of EJB, providing four component types instead of the two that EJB defines. It provides an abstraction of entities that can provide and accept services through well-defined named interfaces called ports.
The CCM has a component container, where software components can be deployed. The container offers a set of services that the components can use. These services include (but are not limited to) notification, authentication, persistence and transaction processing. These are the most-used services any distributed system requires, and, by moving the implementation of these services from the software components to the component container, the complexity of the components is dramatically reduced.
Portable interceptors
Portable interceptors are the "hooks", used by CORBA and RMI-IIOP to mediate the most important functions of the CORBA system. The CORBA standard defines the following types of interceptors:
IOR interceptors mediate the creation of the new references to the remote objects, presented by the current server.
Client interceptors usually mediate the remote method calls on the client (caller) side. If the object Servant exists on the same server where the method is invoked, they also mediate the local calls.
Server interceptors mediate the handling of the remote method calls on the server (handler) side.
The interceptors can attach specific information to the messages being sent and the IORs being created. This information can later be read by the corresponding interceptor on the remote side. Interceptors can also throw forwarding exceptions, redirecting the request to another target.
General InterORB Protocol (GIOP)
The GIOP is an abstract protocol by which Object request brokers (ORBs) communicate. Standards associated with the protocol are maintained by the Object Management Group (OMG). The GIOP architecture provides several concrete protocols, including:
Internet InterORB Protocol (IIOP) – The Internet Inter-Orb Protocol is an implementation of the GIOP for use over the Internet, and provides a mapping between GIOP messages and the TCP/IP layer.
SSL InterORB Protocol (SSLIOP) – SSLIOP is IIOP over SSL, providing encryption and authentication.
HyperText InterORB Protocol (HTIOP) – HTIOP is IIOP over HTTP, providing transparent proxy bypassing.
Zipped IOP (ZIOP) – A zipped version of GIOP that reduces the bandwidth usage.
VMCID (Vendor Minor Codeset ID)
Each standard CORBA exception includes a minor code to designate the subcategory of the exception. Minor exception codes are of type unsigned long and consist of a 20-bit "Vendor Minor Codeset ID" (VMCID), which occupies the high order 20 bits, and the minor code proper which occupies the low order 12 bits.
Minor codes for the standard exceptions are prefaced by the VMCID assigned to OMG, defined as the unsigned long constant CORBA::OMGVMCID, which has the VMCID allocated to OMG occupying the high order 20 bits. The minor exception codes associated with the standard exceptions that are found in Table 3–13 on page 3-58 are or-ed with OMGVMCID to get the minor code value that is returned in the ex_body structure (see Section 3.17.1, "Standard Exception Definitions", on page 3-52 and Section 3.17.2, "Standard Minor Exception Codes", on page 3-58).
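A small sketch of this layout in Python; the bit manipulation follows directly from the description above, while the numeric value used for OMGVMCID is the commonly published one and should be treated as an assumption.

    OMGVMCID = 0x4F4D0 << 12   # assumed OMG-assigned VMCID, placed in the high-order 20 bits

    def make_minor_code(vmcid, minor):
        # Combine a 20-bit VMCID and a 12-bit minor code into one 32-bit value.
        return ((vmcid & 0xFFFFF) << 12) | (minor & 0xFFF)

    def split_minor_code(value):
        # Recover the VMCID and the minor code proper from a received exception.
        return value >> 12, value & 0xFFF

    assert split_minor_code(make_minor_code(0x4F4D0, 2)) == (0x4F4D0, 2)
    standard_minor = OMGVMCID | 2   # a standard exception minor code or-ed with OMGVMCID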
Within a vendor assigned space, the assignment of values to minor codes is left to the vendor. Vendors may request allocation of VMCIDs by sending email to tagrequest@omg.org. A list of currently assigned VMCIDs can be found on the OMG website at: http://www.omg.org/cgi-bin/doc?vendor-tags
The VMCIDs 0 and 0xfffff are reserved for experimental use. The VMCID OMGVMCID (Section 3.17.1, "Standard Exception Definitions", on page 3-52) and 1 through 0xf are reserved for OMG use.
The Common Object Request Broker: Architecture and Specification (CORBA 2.3)
Corba Location (CorbaLoc)
Corba Location (CorbaLoc) refers to a stringified object reference for a CORBA object that looks similar to a URL.
All CORBA products must support two OMG-defined URL formats: "corbaloc:" and "corbaname:". The purpose of these is to provide a human-readable and editable way to specify a location where an IOR can be obtained.
An example of a corbaloc URI is shown below; the host, port, and object key here are illustrative placeholders:
corbaloc::example.com:2809/NameService
A CORBA product may optionally support the "http:", "ftp:" and "file:" formats. The semantics of these is that they provide details of how to download a stringified IOR (or, recursively, download another URL that will eventually provide a stringified IOR). Some ORBs do deliver additional formats which are proprietary for that ORB.
Benefits
CORBA's benefits include language- and OS-independence, freedom from technology-linked implementations, strong data-typing, high level of tunability, and freedom from the details of distributed data transfers.
Language independence – CORBA was designed to free engineers from limitations of coupling their designs to a particular software language. Currently there are many languages supported by various CORBA providers, the most popular being Java and C++. There are also C++11, C-only, Smalltalk, Perl, Ada, Ruby, and Python implementations, just to mention a few.
OS-independence – CORBA's design is meant to be OS-independent. CORBA is available in Java (OS-independent), as well as natively for Linux/Unix, Windows, Solaris, OS X, OpenVMS, HPUX, Android, LynxOS, VxWorks, ThreadX, INTEGRITY, and others.
Freedom from technologies – One of the main implicit benefits is that CORBA provides a neutral playing field for engineers to be able to normalize the interfaces between various new and legacy systems. When integrating C, C++, Object Pascal, Java, Fortran, Python, and any other language or OS into a single cohesive system design model, CORBA provides the means to level the field and allow disparate teams to develop systems and unit tests that can later be joined together into a whole system. This does not rule out the need for basic system engineering decisions, such as threading, timing, object lifetime, etc. These issues are part of any system regardless of technology. CORBA allows system elements to be normalized into a single cohesive system model. For example, the design of a multitier architecture is made simple using Java Servlets in the web server and various CORBA servers containing the business logic and wrapping the database accesses. This allows the implementations of the business logic to change, while the interface changes would need to be handled as in any other technology. For example, a database wrapped by a server can have its database schema change for the sake of improved disk usage or performance (or even whole-scale database vendor change), without affecting the external interfaces. At the same time, C++ legacy code can talk to C/Fortran legacy code and Java database code, and can provide data to a web interface.
Data-typing – CORBA provides flexible data typing, for example an "ANY" datatype. CORBA also enforces tightly coupled datatyping, reducing human errors. In a situation where Name-Value pairs are passed around, it is conceivable that a server provides a number where a string was expected. CORBA Interface Definition Language provides the mechanism to ensure that user-code conforms to method-names, return-, parameter-types, and exceptions.
High tunability – Many implementations (e.g. ORBexpress (Ada, C++, and Java implementation) and OmniORB (open source C++ and Python implementation)) have options for tuning the threading and connection management features. Not all ORB implementations provide the same features.
Freedom from data-transfer details – When handling low-level connection and threading, CORBA provides a high level of detail in error conditions. This is defined in the CORBA-defined standard exception set and the implementation-specific extended exception set. Through the exceptions, the application can determine if a call failed for reasons such as "Small problem, so try again", "The server is dead" or "The reference does not make sense." The general rule is: Not receiving an exception means that the method call completed successfully. This is a very powerful design feature.
Compression – CORBA marshals its data in a binary form and supports compression. IONA, Remedy IT, and Telefónica have worked on an extension to the CORBA standard that delivers compression. This extension is called ZIOP and this is now a formal OMG standard.
Problems and criticism
While CORBA delivered much in the way code was written and software constructed, it has been the subject of criticism.
Much of the criticism of CORBA stems from poor implementations of the standard and not deficiencies of the standard itself. Some of the failures of the standard itself were due to the process by which the CORBA specification was created and the compromises inherent in the politics and business of writing a common standard sourced by many competing implementors.
Initial implementation incompatibilities
The initial specifications of CORBA defined only the IDL, not the on-the-wire format. This meant that source-code compatibility was the best that was available for several years. With CORBA 2 and later this issue was resolved.
Location transparency
CORBA's notion of location transparency has been criticized; that is, that objects residing in the same address space and accessible with a simple function call are treated the same as objects residing elsewhere (different processes on the same machine, or different machines). This is a fundamental design flaw, as it makes all object access as complex as the most complex case (i.e., remote network call with a wide class of failures that are not possible in local calls). It also hides the inescapable differences between the two classes, making it impossible for applications to select an appropriate use strategy (that is, a call with 1µs latency and guaranteed return will be used very differently from a call with 1s latency with possible transport failure, in which the delivery status is potentially unknown and might take 30s to time out).
Design and process deficiencies
The creation of the CORBA standard is also often cited for its process of design by committee. There was no process to arbitrate between conflicting proposals or to decide on the hierarchy of problems to tackle. Thus the standard was created by taking a union of the features in all proposals with no regard to their coherence. This made the specification complex, expensive to implement entirely, and often ambiguous.
A design committee composed of a mixture of implementation vendors and customers created a diverse set of interests. This diversity made a cohesive standard difficult to achieve. Standards and interoperability increased competition and eased customers' movement between alternative implementations. This led to much political fighting within the committee and frequent releases of revisions of the CORBA standard that some ORB implementors ensured were difficult to use without proprietary extensions. Less ethical CORBA vendors encouraged customer lock-in and achieved strong short-term results. Over time, the ORB vendors that encouraged portability took over market share.
Problems with implementations
Through its history, CORBA has been plagued by shortcomings in poor ORB implementations. Unfortunately many of the papers criticizing CORBA as a standard are simply criticisms of a particularly bad CORBA ORB implementation.
CORBA is a comprehensive standard with many features. Few implementations attempt to implement all of the specifications, and initial implementations were incomplete or inadequate. As there were no requirements to provide a reference implementation, members were free to propose features which were never tested for usefulness or implementability. Implementations were further hindered by the general tendency of the standard to be verbose, and the common practice of compromising by adopting the sum of all submitted proposals, which often created APIs that were incoherent and difficult to use, even if the individual proposals were perfectly reasonable.
Robust implementations of CORBA have been very difficult to acquire in the past, but are now much easier to find. The SUN Java SDK comes with CORBA built-in. Some poorly designed implementations have been found to be complex, slow, incompatible and incomplete. Robust commercial versions began to appear but for significant cost. As good quality free implementations became available the bad commercial implementations died quickly.
Firewalls
CORBA (more precisely, GIOP) is not tied to any particular communications transport. A specialization of GIOP is the Internet Inter-ORB Protocol or IIOP. IIOP uses raw TCP/IP connections in order to transmit data.
If the client is behind a very restrictive firewall or transparent proxy server environment that only allows HTTP connections to the outside through port 80, communication may be impossible, unless the proxy server in question allows the HTTP CONNECT method or SOCKS connections as well. At one time, it was difficult even to force implementations to use a single standard port – they tended to pick multiple random ports instead. As of today, current ORBs do have these deficiencies. Due to such difficulties, some users have made increasing use of web services instead of CORBA. These communicate using XML/SOAP via port 80, which is normally left open or filtered through an HTTP proxy inside the organization, for web browsing via HTTP. Recent CORBA implementations, though, support SSL and can be easily configured to work on a single port. Some ORBs, such as TAO, omniORB and JacORB, also support bidirectional GIOP, which gives CORBA the advantage of being able to use callback communication rather than the polling approach characteristic of web service implementations. Also, most modern firewalls support GIOP & IIOP and are thus CORBA-friendly firewalls.
See also
Software engineering
Component-based software engineering
Distributed computing
Portable object
Service-oriented architecture (SOA)
Component-based software technologies
Freedesktop.org D-Bus – current open cross-language cross-platform object model
GNOME Bonobo – deprecated GNOME cross-language object model
KDE DCOP – deprecated KDE interprocess and software componentry communication system
KDE KParts – KDE component framework
Component Object Model (COM) – Microsoft Windows-only cross-language object model
DCOM (Distributed COM) – extension making COM able to work in networks
Common Language Infrastructure – Current .NET cross-language cross-platform object model
XPCOM (Cross Platform Component Object Model) – developed by Mozilla for applications based on it (e.g. Mozilla Application Suite, SeaMonkey 1.x)
IBM System Object Model SOM and DSOM – component systems from IBM used in OS/2 and AIX
Internet Communications Engine (ICE)
Java remote method invocation (Java RMI)
Java Platform, Enterprise Edition (Java EE)
JavaBean
OpenAIR
Remote procedure call (RPC)
Windows Communication Foundation (WCF)
Software Communications Architecture (SCA) – components for embedded systems, cross-language, cross-transport, cross-platform
Language bindings
Language binding
Foreign function interface
Calling convention
Dynamic Invocation Interface
Name mangling
Application programming interface – API
Application binary interface – ABI
Comparison of application virtual machines
SWIG – open-source automatic interface bindings generator from many languages to many languages
References
Further reading
External links
Official OMG CORBA Components page
Unofficial CORBA Component Model page
Comparing IDL to C++ with IDL to C++11
Corba: Gone But (Hopefully) Not Forgotten
OMG XMI Specification
Component-based software engineering
GNOME
Inter-process communication
ISO standards
Object-oriented programming |
43308 | https://en.wikipedia.org/wiki/John%20Gilmore%20%28activist%29 | John Gilmore (activist) | John Gilmore (born 1955) is one of the founders of the Electronic Frontier Foundation, the Cypherpunks mailing list, and Cygnus Solutions. He created the alt.* hierarchy in Usenet and is a major contributor to the GNU Project.
An outspoken civil libertarian, Gilmore has sued the Federal Aviation Administration, the United States Department of Justice, and others. He was the plaintiff in the prominent case Gilmore v. Gonzales, challenging secret travel-restriction laws, which he lost. He is an advocate for drug policy reform.
He co-authored the Bootstrap Protocol in 1985, which evolved into Dynamic Host Configuration Protocol (DHCP), the primary way local networks assign an IP address to devices.
Life and career
As the fifth employee of Sun Microsystems and founder of Cygnus Support, he became wealthy enough to retire early and pursue other interests.
He is a frequent contributor to free software, and worked on several GNU projects, including maintaining the GNU Debugger in the early 1990s, initiating GNU Radio in 1998, starting Gnash media player in December 2005 to create a free software player for Flash movies, and writing the pdtar program which became GNU tar. Outside of the GNU project he founded the FreeS/WAN project, an implementation of IPsec, to promote the encryption of Internet traffic. He sponsored the EFF's Deep Crack DES cracker, sponsored the Micropolis city building game based on SimCity, and is a proponent of opportunistic encryption.
Gilmore co-authored the Bootstrap Protocol (RFC 951) with Bill Croft in 1985. The Bootstrap Protocol evolved into DHCP, the method by which Ethernet and wireless networks typically assign devices an IP address.
On October 22, 2021, the EFF announced that it had removed Gilmore from its board.
Activism
Gilmore owns the domain name toad.com, which is one of the 100 oldest active .com domains. It was registered on August 18, 1987. He runs the mail server at toad.com as an open mail relay. In October 2002, Gilmore's ISP, Verio, cut off his Internet access for running an open relay, a violation of Verio's terms of service. Many people contend that open relays make it too easy to send spam. Gilmore protests that his mail server was programmed to be essentially useless to spammers and other senders of mass email and he argues that Verio's actions constitute censorship. He also notes that his configuration makes it easier for friends who travel to send email, although his critics counter that there are other mechanisms to accommodate people wanting to send email while traveling. The measures Gilmore took to make his server useless to spammers may or may not have helped, considering that in 2002, at least one mass-mailing worm that propagated through open relays — W32.Yaha — had been hard-coded to relay through the toad.com mail server.
Gilmore famously stated of Internet censorship that "The Net interprets censorship as damage and routes around it".
He unsuccessfully challenged the constitutionality of secret regulations regarding travel security policies in Gilmore v. Gonzales.
Gilmore is also an advocate for the relaxing of drug laws and has given financial support to Students for Sensible Drug Policy, the Marijuana Policy Project, Erowid, MAPS, Flex Your Rights, and various other organizations seeking to end the war on drugs. He is a member of the boards of directors of MAPS and the Marijuana Policy Project. Until October 2021, he was also a board member of the EFF.
Affiliations
Following the sale of AMPRNet address space in mid-2019, Gilmore, under amateur radio call sign W0GNU, was listed as a board member of Amateur Radio Digital Communications, Inc., a non-profit involved in the management of the resources on behalf of the amateur radio community.
Honours
Gilmore has received the Free Software Foundation's Advancement of Free Software 2009 award.
References
External links
Gilmore v. Gonzales information
Verio censored John Gilmore's email under pressure from anti-spammers.
John Gilmore on inflight activism, spam and sarongs; interview by Mikael Pawlo, August 18, 2004.
Gilmore on Secret Laws / Gonzales case; audio interview, 13 November 2006.
Living people
Drug policy reform activists
Free software programmers
Web developers
Computer programmers
Cypherpunks
GNU people
Internet activists
1955 births
Psychedelic drug advocates
American technology company founders
American philanthropists
People from York, Pennsylvania
Amateur radio people
Internet pioneers |
43314 | https://en.wikipedia.org/wiki/List%20of%20free%20and%20open-source%20software%20packages | List of free and open-source software packages | This is a list of free and open-source software packages, computer software licensed under free software licenses and open-source licenses. Software that fits the Free Software Definition may be more appropriately called free software; the GNU project in particular objects to their works being referred to as open-source. For more information about the philosophical background for open-source software, see free software movement and Open Source Initiative. However, nearly all software meeting the Free Software Definition also meets the Open Source Definition and vice versa. A small fraction of the software that meets either definition is listed here.
Some of the open-source applications are also the basis of commercial products, shown in the List of commercial open-source applications and services.
Artificial intelligence
General AI
OpenCog – A project that aims to build an artificial general intelligence (AGI) framework. OpenCog Prime is a specific set of interacting components designed to give rise to human-equivalent artificial general intelligence.
Computer Vision
AForge.NET – computer vision, artificial intelligence and robotics library for the .NET Framework
OpenCV – computer vision library in C++
Machine Learning
See List of open-source machine learning software
See Data Mining below
See R programming language – packages of statistical learning and analysis tools
Planning
TREX – Reactive planning
Robotics
ROS – Robot Operating System
Webots – Robot Simulator
YARP – Yet Another Robot Platform
Assistive technology
Speech (synthesis and recognition)
CMU Sphinx – Speech recognition software from Carnegie Mellon University
Emacspeak – Audio desktop
ESpeak – Compact software speech synthesizer for English and other languages
Festival Speech Synthesis System – General multilingual speech synthesis
Modular Audio Recognition Framework – Voice, audio, speech NLP processing
NonVisual Desktop Access – (NVDA) Screen reader, for Windows
Text2Speech – Lightweight, easy-to-use Text-To-Speech (TTS) Software
Other assistive technology
Dasher – Unique text input software
Gnopernicus – AT suite for GNOME 2
Virtual Magnifying Glass – A multi-platform screen magnification tool
CAD
FreeCAD – Parametric 3D CAD modeler with a focus on mechanical engineering, BIM, and product design
LibreCAD – 2D CAD software using AutoCAD-like interface and file format
SolveSpace – 2D and 3D CAD, constraint-based parametric modeler with simple mechanical simulation capabilities
Electronic design automation (EDA)
Fritzing
KiCad
Computer simulation
Blender – 3D computer graphics software toolset used for creating animated films, visual effects, art, 3D printed models, and motion graphics.
FlightGear – Atmospheric and orbital flight simulator with a flight dynamics engine (JSBSim) that is used in a 2015 NASA benchmark to judge new simulation code to space industry standards.
SimPy – Queue-theoretic event-based simulator written in Python
Cybersecurity
Antivirus
ClamAV
ClamWin
Gateway Anti-Virus
Lynis
Data loss prevention
MyDLP
Data recovery
dvdisaster
Foremost
PhotoRec
TestDisk
Forensics
The Coroner's Toolkit
The Sleuth Kit
Anti-forensics
USBKill
Disk erasing
DBAN
srm
Encryption
AES
Bouncy Castle
GnuPG
GnuTLS
KGPG
NaCl
OpenSSL
Seahorse
Signal
stunnel
TextSecure
wolfCrypt
Disk encryption
dm-crypt
CrossCrypt
FreeOTFE and FreeOTFE Explorer
eCryptfs
Firewall
Uncomplicated Firewall (ufw)
Firestarter
IPFilter
ipfw
iptables
M0n0wall
PeerGuardian
PF
pfSense
Rope
Shorewall
SmoothWall
Vyatta
Network and security monitoring
Snort – Network intrusion detection system (IDS) and intrusion prevention system (IPS)
OpenVAS – software framework of several services and tools offering vulnerability scanning and vulnerability management
Secure Shell (SSH)
Cyberduck – macOS and Windows client (since version 4.0)
Lsh – Server and client, with support for SRP and Kerberos authentication
OpenSSH – Client and server
PuTTY – Client-only
Password management
Bitwarden
KeePass
KeePassXC (multiplatform fork able to open KeePass databases)
Password Safe
Mitro
Pass
Other cybersecurity programs
Data storage and management
Backup software
Database management systems (including administration)
Data mining
Environment for DeveLoping KDD-Applications Supported by Index-Structures (ELKI) – Data mining software framework written in Java with a focus on clustering and outlier detection methods
FrontlineSMS – Information distribution and collection via text messaging (SMS)
Konstanz Information Miner (KNIME)
OpenNN – Open-source neural networks software library written in C++
Orange (software) – Data visualization and data mining for novice and experts, through visual programming or Python scripting. Extensions for bioinformatics and text mining
RapidMiner – Data mining software written in Java, fully integrating Weka, featuring 350+ operators for preprocessing, machine learning, visualization, etc. – the previous version is available as open-source
Scriptella ETL – ETL (Extract-Transform-Load) and script execution tool. Supports integration with J2EE and Spring. Provides connectors to CSV, LDAP, XML, JDBC/ODBC, and other data sources
Weka – Data mining software written in Java featuring machine learning operators for classification, regression, and clustering
JasperSoft – Data mining with programmable abstraction layer
Data Visualization Components
ParaView – Plotting and visualization functions developed by Sandia National Laboratory; capable of massively parallel flow visualization utilizing multiple computer processors
VTK – Toolkit for 3D computer graphics, image processing, and visualisation.
Digital Asset Management software system
Disk partitioning software
Enterprise search engines
ApexKB, formerly known as Jumper
Lucene
Nutch
Solr
Xapian
ETLs (Extract Transform Load)
Konstanz Information Miner (KNIME)
Pentaho
File archivers
File systems
OpenAFS – Distributed file system supporting a very wide variety of operating systems
Tahoe-LAFS – Distributed file system/Cloud storage system with integrated privacy and security features
CephFS – Distributed file system included in the Ceph storage platform.
Desktop publishing
Collabora Online Draw and Writer – Enterprise-ready edition of LibreOffice accessible from a web browser. The Draw application is for flyers, newsletters, brochures and more; Writer offers most of the same functionality.
Scribus – Designed for layout, typesetting, and preparation of files for professional-quality image-setting equipment. It can also create animated and interactive PDF presentations and forms.
E-book management and editing
Calibre – Cross-platform suite of e-book software
Collabora Online Writer – Enterprise-ready edition of LibreOffice accessible from a web browser. Allows exporting in the EPUB format.
Sigil – Editing software for e-books in the EPUB format
Educational
Educational suites
ATutor – Web-based Learning Content Management System (LCMS)
Chamilo – Web-based e-learning and content management system
Claroline – Collaborative Learning Management System
DoceboLMS – SAAS/cloud platform for learning
eFront – Icon-based learning management system
FlightPath – Academic advising software for universities
GCompris – Educational entertainment, aimed at children aged 2–10
Gnaural – Brainwave entrainment software
H5P – Framework for creating and sharing interactive HTML5 content
IUP Portfolio – Educational platform for Swedish schools
ILIAS – Web-based learning management system (LMS)
Moodle – Free and open-source learning management system
OLAT – Web-based Learning Content Management System
Omeka – Content management system for online digital collections
openSIS – Web-based Student Information and School Management system
Sakai Project – Web-based learning management system
SWAD – Web-based learning management system
Tux Paint – Painting application for 3–12 year olds
UberStudent – Linux-based operating system and software suite for academic studies
Geography
KGeography – Educational game teaching geography
Learning support
Language
Kiten
Typing
KTouch – Touch typing lessons with a variety of keyboard layouts
Tux Typing – Typing tutor for children, featuring two games to improve typing speed
File managers
Finance
Accounting
GnuCash – Double-entry book-keeping
HomeBank – Personal accounting software
KMyMoney – Double-entry book-keeping
LedgerSMB – Double-entry book-keeping
RCA open-source application – management accounting application
SQL Ledger – Double-entry book-keeping
TurboCASH – Double-entry book-keeping for Windows
Wave Accounting – Double-entry book-keeping
Cryptocurrency
Bitcoin – Blockchain platform, peer-to-peer decentralised digital currency
Ethereum – Blockchain platform with smart contract functionality
CRM
CiviCRM – Constituent Relationship Management software aimed at NGOs
iDempiere – Business Suite, ERP and CRM
SuiteCRM – Web-based CRM
ERP
Adempiere – Enterprise resource planning (ERP) business suite
Compiere – ERP solution automates accounting, supply chain, inventory, and sales orders
Dolibarr – Web-based ERP system
ERPNext – Web-based open-source ERP system for managing accounting and finance
Ino erp – Dynamic pull-based ERP system
JFire – An ERP business suite written with Java and JDO
metasfresh – ERP Software
Odoo – Open-source ERP, CRM and CMS
Openbravo – Web-based ERP
Tryton – Open-source ERP
Human resources
OrangeHRM – Commercial human resource management
Microfinance
Cyclos – Software for microfinance institutions, complementary currency systems and timebanks
Mifos – Microfinance Institution management software
Process management
Bonita Open Solution – Business Process Management
Trading
jFin – Java-based trade-processing program
QuickFIX – FIX protocol engine written in C++ with additional C#, Ruby, and Python wrappers
QuickFIX/J – FIX protocol engine written in Java
Games
Action
Xonotic – First-person shooter that runs on a heavily modified version of the Quake engine known as the DarkPlaces engine
Warsow – Fast-paced arena first-person shooter that runs on the Qfusion engine
Application layer
WINE – Allows Windows applications to be run on Unix-like operating systems
Emulation
MAME – Multi-platform emulator designed to recreate the hardware of arcade game systems
RetroArch – Cross-platform front-end for emulators, game engines and video games
Puzzle
Pingus – Lemmings clone with penguins instead of lemmings
Sandbox
Minetest – An open source voxel game engine.
Simulation
OpenTTD – Business simulation game in which players try to earn money via transporting passengers and freight by road, rail, water and air.
SuperTuxKart – Kart racing game that features mascots of various open-source projects.
Strategy
0 A.D. – Real-time strategy video game
Freeciv – Turn-based strategy game inspired by the proprietary Sid Meier's Civilization series.
The Battle for Wesnoth – Turn-based strategy video game with a fantasy setting
Genealogy
Gramps (software) – Free and open-source genealogy software
Geographic information systems
QGIS – cross-platform desktop geographic information system (GIS) application that supports viewing, editing, and analysis of geospatial data.
Graphical user interface
Desktop environments
Window managers
Windowing system
Groupware
Content management systems
Wiki software
Healthcare software
Integrated library management software
Evergreen – Integrated Library System initially developed for the Georgia Public Library Service's PINES catalog
Koha – SQL-based library management
NewGenLib
OpenBiblio
PMB
refbase – Web-based institutional repository and reference management software
Image editor
Darktable – Digital image workflow management, including RAW photo processing
digiKam – Integrated photography toolkit including editing capabilities
GIMP – Raster graphics editor aimed at image retouching/editing
Inkscape – Vector graphics editor
Karbon – Scalable vector drawing application in KDE
Krita – Digital painting, sketching and 2D animation application, with a variety of brush engines
LazPaint – Lightweight raster and vector graphics editor, aimed at being simpler to use than GIMP
LightZone – Free and open-source digital photo editor
RawTherapee – Digital image workflow management aimed at RAW photo processing
Mathematics
Statistics
R Statistics Software
Numerical Analysis
Octave – Numerical analysis software
Geometry
GeoGebra – Geometry and algebra
Spreadsheet
LibreOffice Calc – Algebraic operations on table cells for descriptive data analysis
Media
Audio editors, audio management
CD/USB-writing software
Flash animation
Pencil2D – For animations
SWFTools – For scripting
Game Engines
Godot – Application for the design of cross-platform video games
Stockfish – Universal Chess Interface chess engine
Leela Chess Zero – Universal Chess Interface chess engine
Armory – 3D engine focused on portability, minimal footprint and performance. Armory provides a full Blender integration add-on, turning it into a complete game development tool.
Stride – (prev. Xenko) 2D and 3D cross-platform game engine originally developed by Silicon Studio.
Graphics
2D
Pencil2D – Simple 2D graphics and animation program
Synfig – 2D vector graphics and timeline based animation
TupiTube (formerly KTooN) – Application for the design and creation of animation
OpenToonz – Part of a family of 2D animation software
Krita – Digital painting, sketching and 2D animation application, with a variety of brush engines
Blender – Computer graphics software, Blender's Grease Pencil tools allow for 2D animation within a full 3D pipeline.
mtPaint – raster graphics editor for creating icons, pixel art
3D
Blender – Computer graphics software featuring modeling, sculpting, texturing, rigging, simulation, rendering, camera tracking, video editing, and compositing
OpenFX – Modeling and animation software with a variety of built-in post processing effects
Seamless3d – Node-driven 3D modeling software
Wings 3D – Subdivision modeler inspired by Nendo and Mirai from Izware.
Image galleries
Image viewers
Eye of GNOME
F-spot
feh
Geeqie
Gthumb
Gwenview
KPhotoAlbum
Opticks
Multimedia codecs, containers, splitters
Television
Video converters
Dr. DivX
FFmpeg
MEncoder
OggConvert
Video editing
Avidemux
AviSynth
Blender
Cinelerra
DVD Flick
Flowblade
Kdenlive
Kino
LiVES
LosslessCut
Natron
Olive
OpenShot
Pitivi
Shotcut
VirtualDub
VirtualDubMod
VideoLAN Movie Creator
Video encoders
Avidemux
HandBrake
FFmpeg
Video players
Media Player Classic
VLC media player
mpv
Other media packages
Celtx – Media pre-production software
Open Broadcaster Software (OBS) – Cross-platform streaming and recording program
Networking and Internet
Advertising
Revive Adserver
Communication-related
Asterisk – Telephony and VoIP server
Ekiga – Video conferencing application for GNOME and Microsoft Windows
ConferenceXP – video conferencing application for Windows XP or later
FreePBX – Front-end and advanced PBX configuration for Asterisk
FreeSWITCH – Telephony platform
Jami – Cross-platform, peer to peer instant-messaging and video-calling protocol that offers end-to-end encryption and SIP client
Jitsi – Java VoIP and Instant Messaging client
QuteCom – Voice, video, and IM client application
Enterprise Communications System sipXecs – SIP Communications Server
Slrn – Newsreader
Twinkle – VoIP softphone
Tox – Cross-platform, peer-to-peer instant-messaging and video-calling protocol that offers end-to-end encryption
E-mail
Geary – Email client based on WebKitGTK+
Mozilla Thunderbird – Email, news, RSS, and chat client
File transfer
Grid and distributed processing
GNU Queue
HTCondor
OpenLava
pexec
Instant messaging
IRC Clients
Middleware
Apache Axis2 – Web service framework (implementations are available in both Java & C)
Apache Geronimo – Application server
Bonita Open Solution – a J2EE web application and Java BPMN2-compliant engine
GlassFish – Application server
Jakarta Tomcat – Servlet container and standalone webserver
JBoss Application Server – Application server
ObjectWeb JOnAS – Java Open Application Server, a J2EE application server
OpenRemote – IoT Middleware
TAO (software) – C++ implementation of the OMG's CORBA standard
Enduro/X – C/C++ middleware platform based on X/Open group's XATMI and XA standards
RSS/Atom readers/aggregators
Akregator – Platforms running KDE
Liferea – Platforms running GNOME
NetNewsWire – macOS, iOS
RSS Bandit – Windows, using .NET Framework
RSSOwl – Windows, Mac OS X, Solaris, Linux using Java SWT Eclipse
Sage (Mozilla Firefox extension)
Peer-to-peer file sharing
Popcorn Time – Multi-platform, free, and open-source media player
qBittorrent – Alternative to popular clients such as μTorrent
Transmission – BitTorrent client
Portal Server
Drupal
Liferay
Sun Java System Portal Server
uPortal
Remote access and management
FreeNX
OpenVPN
rdesktop
Synergy
VNC (RealVNC, TightVNC, UltraVNC)
Remmina (based on FreeRDP)
Routing software
Web browsers
Brave – web browser based on the Blink engine
Chromium – Minimalist web browser from which Google Chrome draws its source code
Falkon – web browser based on the Blink engine
Firefox – Mozilla-developed web browser using the Gecko layout engine
GNOME Web – web browser using the WebKit layout engine
Midori – Lightweight web browser using the WebKit layout engine
qutebrowser – web browser based on the WebKit layout engine
Tor Browser – Modified Mozilla Firefox ESR web browser
Waterfox – Alternative to Firefox (64-bit only)
SeaMonkey – Internet suite
Webcam
Cheese – GNOME webcam application
Guvcview – Linux webcam application
Webgrabber
cURL
HTTrack
Wget
Web-related
Apache Cocoon – A web application framework
Apache HTTP Server – One of the most widely used web servers
AWStats – Log file parser and analyzer
BookmarkSync – Tool for browsers
Cherokee – Fast, feature-rich HTTP server
curl-loader – Powerful HTTP/HTTPS/FTP/FTPS loading and testing tool
FileZilla – FTP
Hiawatha – Secure, high performance, and easy-to-configure HTTP server
HTTP File Server – User-friendly file server software, with a drag-and-drop interface
lighttpd – Resource-sparing, but also fast and full-featured, HTTP Server
Lucee – CFML application server
Nginx – Lightweight, high performance web server/reverse proxy and e-mail (IMAP/POP3) proxy
NetKernel – Internet application server
Qcodo – PHP5 framework
Squid – Web proxy cache
Vaadin – Fast, Java-based framework for creating web applications
Varnish – High-performance web application accelerator/reverse proxy and load balancer/HTTP router
XAMPP – Package of web applications including Apache and MariaDB
Zope – Web application server
Web search engines
Searx – Self-hostable metasearch engine
YaCy – P2P-based search engine
Other networking programs
JXplorer – LDAP client
Nextcloud – A fork of ownCloud
OpenLDAP – LDAP server
ownCloud – File share and sync server
Wireshark – Network monitor
Office suites
Apache OpenOffice (formerly known as OpenOffice.org)
Calligra Suite – The continuation of KOffice under a new name
Collabora Online – Enterprise-ready edition of LibreOffice available as a web application and on mobile phones, tablets, Chromebooks and desktop (Windows, macOS, Linux)
LibreOffice – Independent fork of OpenOffice.org with a number of enhancements
ONLYOFFICE Desktop Editors – An open-source offline edition of the ONLYOFFICE cloud office suite
Operating systems
Be advised that available distributions of these systems can contain, or offer to build and install, added software that is neither free software nor open-source.
Emulation and Virtualisation
DOSBox – DOS programs emulator (including PC games)
VirtualBox – hosted hypervisor for x86 virtualization
Personal information managers
Chandler – Developed by the OSAF
KAddressBook
Kontact
KOrganizer
Mozilla Calendar – Mozilla-based, multi-platform calendar program
Novell Evolution
Perkeep – Personal data store for pictures
Project.net – Commercial Project Management
TeamLab – Platform for project management and collaboration
Programming language support
Bug trackers
Bugzilla
Mantis
Mindquarry
Redmine
Trac
Code generators
Bison
CodeSynthesis XSD – XML Data Binding compiler for C++
CodeSynthesis XSD/e – Validating XML parser/serializer and C++ XML Data Binding generator for mobile and embedded systems
Flex lexical analyser – Generates lexical analyzers
Open Scene Graph – 3D graphics application programming interface
OpenSCDP – Open Smart Card Development Platform
phpCodeGenie
SableCC – Parser generator for Java and .NET
SWIG – Simplified Wrapper and Interface Generator for several languages
^txt2regex$
xmlbeansxx – XML Data Binding code generator for C++
YAKINDU Statechart Tools – Statechart code generator for C++ and Java
Documentation generators
Doxygen – Tool for writing software reference documentation. The documentation is written within code.
Mkd – The software documentation is extracted from the source files, from pseudocode or comments.
Natural Docs – Claims to use a more natural language as input from the comments, hence its name.
Configuration software
Autoconf
Automake
BuildAMation
CMake
Debuggers (for testing and trouble-shooting)
GNU Debugger – A portable debugger that runs on many Unix-like systems
Memtest86 – Stress-tests RAM on x86 machines
Xnee – Record and replay tests
Integrated development environments
Version control systems
Reference management software
Risk Management
Active Agenda – Operational risk management and Rapid application development platform
Science
Bioinformatics
Cheminformatics
Chemistry Development Kit
JOELib
OpenBabel
Electronic Lab Notebooks
ELOG
Jupyter
Geographic Information Systems
Geoscience
Grid computing
P-GRADE Portal – Grid portal software enabling the creation, execution and monitoring of workflows through high-level Web interfaces
Microscope image processing
CellProfiler – Automatic microscopic analysis, aimed at individuals lacking training in computer vision
Endrov – Java-based plugin architecture designed to analyse complex spatio-temporal image data
Fiji – ImageJ-based image processing
Ilastik – Image-classification and segmentation software
ImageJ – Image processing application developed at the National Institutes of Health
IMOD – 2D and 3D analysis of electron microscopy data
ITK – Development framework used for creation of image segmentation and registration programs
KNIME – Data analytics, reporting, and integration platform
VTK – C++ toolkit for 3D computer graphics, image processing, and visualisation
3DSlicer – Medical image analysis and visualisation
Molecular dynamics
GROMACS – Protein, lipid, and nucleic acid simulation
LAMMPS – Molecular dynamics software
MDynaMix – General-purpose molecular dynamics, simulating mixtures of molecules
ms2 – Molecular dynamics and Monte Carlo simulation package for the prediction of thermophysical properties of fluids
NWChem – Quantum chemical and molecular dynamics software
Molecule viewer
Avogadro – Plugin-extensible molecule visualisation
BALLView – Molecular modeling and visualisation
Jmol – 3D representation of molecules in a variety of formats, for use as a teaching tool
Molekel – Molecule viewing software
MeshLab – Able to import PDB dataset and build up surfaces from them
PyMOL – High-quality representations of small molecules as well as biological macromolecules
QuteMol – Interactive molecule representations offering an array of innovative OpenGL visual effects
RasMol – Visualisation of biological macromolecules
Nanotechnology
Ninithi – Visualise and analyse carbon allotropes, such as carbon nanotubes, fullerenes, and graphene nanoribbons
Plotting
Veusz
Quantum chemistry
CP2K – Atomistic and molecular simulation of solid-state, liquid, molecular, and biological systems
Screensavers
BOINC
Electric Sheep
XScreenSaver
Statistics
R Statistics Software
LimeSurvey – Online survey system
Theology
Bible study tools
Go Bible – A free Bible viewer application for Java mobile phones
Marcion – Coptic–English/Czech dictionary
OpenLP – A worship presentation program licensed under the GNU General Public License
The SWORD Project – The CrossWire Bible Society's free software project
Typesetting
See also
GNOME Core Applications
List of GNU packages
List of KDE applications
List of formerly proprietary software
List of Unix commands
General directories
AlternativeTo
CodePlex
Free Software Directory
Freecode
Open Hub
SourceForge
References
External links
Open Source Software Directory (OSSD), a collection of FOSS organized by target audience.
Open Source Living, a community-driven archive of open-source software (OSS).
OpenDisc, a pre-assembled ISO image of open-source software for Microsoft Windows
List of open-source programs (LOOP) for Windows, maintained by the Ubuntu Documentation Project.
The OSSwin Project, a list of free and open-source software for Windows
See also
Open-source software
Open source license
Free software lists and comparisons
Lists of software |
43342 | https://en.wikipedia.org/wiki/IPsec | IPsec | In computing, Internet Protocol Security (IPsec) is a secure network protocol suite that authenticates and encrypts the packets of data to provide secure encrypted communication between two computers over an Internet Protocol network. It is used in virtual private networks (VPNs).
IPsec includes protocols for establishing mutual authentication between agents at the beginning of a session and negotiation of cryptographic keys to use during the session. IPsec can protect data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host (network-to-host).
IPsec uses cryptographic security services to protect communications over Internet Protocol (IP) networks. It supports network-level peer authentication, data origin authentication, data integrity, data confidentiality (encryption), and replay protection.
The initial IPv4 suite was developed with few security provisions. As a part of the IPv4 enhancement, IPsec is an end-to-end security scheme operating at the internet layer (layer 3 of the OSI model). In contrast, some other widely used Internet security systems operate above the network layer, such as Transport Layer Security (TLS), which operates above the transport layer, and Secure Shell (SSH), which operates at the application layer; IPsec can automatically secure applications at the internet layer.
History
Starting in the early 1970s, the Advanced Research Projects Agency sponsored a series of experimental ARPANET encryption devices, at first for native ARPANET packet encryption and subsequently for TCP/IP packet encryption; some of these were certified and fielded. From 1986 to 1991, the NSA sponsored the development of security protocols for the Internet under its Secure Data Network Systems (SDNS) program. This brought together various vendors including Motorola who produced a network encryption device in 1988. The work was openly published from about 1988 by NIST and, of these, Security Protocol at Layer 3 (SP3) would eventually morph into the ISO standard Network Layer Security Protocol (NLSP).
From 1992 to 1995, various groups conducted research into IP-layer encryption.
1. In 1992, the US Naval Research Laboratory (NRL) began the Simple Internet Protocol Plus (SIPP) project to research and implement IP encryption.
2. In 1993, at Columbia University and AT&T Bell Labs, John Ioannidis and others researched the software experimental Software IP Encryption Protocol (swIPe) on SunOS.
3. In 1993, sponsored by the White House internet service project, Wei Xu at Trusted Information Systems (TIS) further researched the software IP security protocols and developed hardware support for the triple DES Data Encryption Standard, which was coded in the BSD 4.1 kernel and supported both x86 and SunOS architectures. By December 1994, TIS released its DARPA-sponsored open-source Gauntlet Firewall product with integrated 3DES hardware encryption at over T1 speeds. This was the first use of IPsec VPN connections between the east and west coasts of the United States, and it is regarded as the first commercial IPsec VPN product.
4. Under NRL's DARPA-funded research effort, NRL developed the IETF standards-track specifications (RFC 1825 through RFC 1827) for IPsec, which was coded in the BSD 4.4 kernel and supported both x86 and SPARC CPU architectures. NRL's IPsec implementation was described in their paper in the 1996 USENIX Conference Proceedings. NRL's open-source IPsec implementation was made available online by MIT and became the basis for most initial commercial implementations.
The Internet Engineering Task Force (IETF) formed the IP Security Working Group in 1992 to standardize openly specified security extensions to IP, called IPsec. In 1995, the working group organized a few workshops with members from five companies (TIS, Cisco, FTP, Checkpoint, etc.). During these workshops, NRL's standards and the Cisco and TIS software were standardized as the public references, published as RFC 1825 through RFC 1827.
Security architecture
IPsec is an open standard that forms a part of the IPv4 suite. It uses the following protocols to perform various functions:
Authentication Headers (AH) provides connectionless data integrity and data origin authentication for IP datagrams and provides protection against replay attacks.
Encapsulating Security Payloads (ESP) provides confidentiality, connectionless data integrity, data origin authentication, an anti-replay service (a form of partial sequence integrity), and limited traffic-flow confidentiality.
Internet Security Association and Key Management Protocol (ISAKMP) provides a framework for authentication and key exchange, with actual authenticated keying material provided either by manual configuration with pre-shared keys, Internet Key Exchange (IKE and IKEv2), Kerberized Internet Negotiation of Keys (KINK), or IPSECKEY DNS records. The purpose is to generate the Security Associations (SA) with the bundle of algorithms and parameters necessary for AH and/or ESP operations.
Authentication Header
The Security Authentication Header (AH) was developed at the US Naval Research Laboratory in the early 1990s and is derived in part from previous IETF standards' work for authentication of the Simple Network Management Protocol (SNMP) version 2. Authentication Header (AH) is a member of the IPsec protocol suite. AH ensures connectionless integrity by using a hash function and a secret shared key in the AH algorithm. AH also guarantees the data origin by authenticating IP packets. Optionally a sequence number can protect the IPsec packet's contents against replay attacks, using the sliding window technique and discarding old packets.
In IPv4, AH prevents option-insertion attacks. In IPv6, AH protects both against header insertion attacks and option insertion attacks.
In IPv4, the AH protects the IP payload and all header fields of an IP datagram except for mutable fields (i.e. those that might be altered in transit), and also IP options such as the IP Security Option (RFC 1108). Mutable (and therefore unauthenticated) IPv4 header fields are DSCP/ToS, ECN, Flags, Fragment Offset, TTL and Header Checksum.
In IPv6, the AH protects most of the IPv6 base header, AH itself, non-mutable extension headers after the AH, and the IP payload. Protection for the IPv6 header excludes the mutable fields: DSCP, ECN, Flow Label, and Hop Limit.
AH operates directly on top of IP, using IP protocol number 51.
The following AH packet diagram shows how an AH packet is constructed and interpreted:
Next Header (8 bits) Type of the next header, indicating what upper-layer protocol was protected. The value is taken from the list of IP protocol numbers.
Payload Len (8 bits) The length of this Authentication Header in 4-octet units, minus 2. For example, with the three fixed 32-bit AH words plus a three-word (96-bit) Integrity Check Value, the header is six 32-bit words long, so the Payload Len value is 6 − 2 = 4, which corresponds to 24 octets. Although the size is measured in 4-octet units, the length of this header needs to be a multiple of 8 octets if carried in an IPv6 packet. This restriction does not apply to an Authentication Header carried in an IPv4 packet.
Reserved (16 bits) Reserved for future use (all zeroes until then).
Security Parameters Index (32 bits) Arbitrary value which is used (together with the destination IP address) to identify the security association of the receiving party.
Sequence Number (32 bits) A strictly increasing sequence number (incremented by 1 for every packet sent) to prevent replay attacks. When replay detection is enabled, sequence numbers are never reused, because a new security association must be renegotiated before an attempt to increment the sequence number beyond its maximum value.
Integrity Check Value (multiple of 32 bits) Variable length check value. It may contain padding to align the field to an 8-octet boundary for IPv6, or a 4-octet boundary for IPv4.
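To make the layout above concrete, the following Python sketch parses the fixed 12-octet part of an AH and derives the total header length from the Payload Len field. It is illustrative only: the function name and packet bytes are hypothetical, and real IPsec processing happens in the kernel or in dedicated hardware rather than in application code.

    import struct

    def parse_ah_fixed_header(packet: bytes) -> dict:
        # Fixed layout: Next Header (1 octet), Payload Len (1 octet),
        # Reserved (2 octets), SPI (4 octets), Sequence Number (4 octets).
        next_header, payload_len, _reserved, spi, seq = struct.unpack("!BBHII", packet[:12])
        total_len = (payload_len + 2) * 4           # whole AH length, in octets
        icv = packet[12:total_len]                  # variable-length Integrity Check Value
        return {"next_header": next_header, "spi": spi, "sequence": seq,
                "ah_octets": total_len, "icv": icv}

    # Example: Payload Len = 4 gives (4 + 2) * 4 = 24 octets, i.e. the 12 fixed
    # octets plus the 12-octet (96-bit) ICV that HMAC-SHA1-96 produces.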
Encapsulating Security Payload
The IP Encapsulating Security Payload (ESP) was developed at the Naval Research Laboratory starting in 1992 as part of a DARPA-sponsored research project, and was openly published by IETF SIPP Working Group drafted in December 1993 as a security extension for SIPP. This ESP was originally derived from the US Department of Defense SP3D protocol, rather than being derived from the ISO Network-Layer Security Protocol (NLSP). The SP3D protocol specification was published by NIST in the late 1980s, but designed by the Secure Data Network System project of the US Department of Defense.
Encapsulating Security Payload (ESP) is a member of the IPsec protocol suite. It provides origin authenticity through source authentication, data integrity through hash functions and confidentiality through encryption protection for IP packets. ESP also supports encryption-only and authentication-only configurations, but using encryption without authentication is strongly discouraged because it is insecure.
Unlike Authentication Header (AH), ESP in transport mode does not provide integrity and authentication for the entire IP packet. However, in Tunnel Mode, where the entire original IP packet is encapsulated with a new packet header added, ESP protection is afforded to the whole inner IP packet (including the inner header) while the outer header (including any outer IPv4 options or IPv6 extension headers) remains unprotected. ESP operates directly on top of IP, using IP protocol number 50.
The following ESP packet diagram shows how an ESP packet is constructed and interpreted:
Security Parameters Index (32 bits) Arbitrary value used (together with the destination IP address) to identify the security association of the receiving party.
Sequence Number (32 bits) A monotonically increasing sequence number (incremented by 1 for every packet sent) to protect against replay attacks. There is a separate counter kept for every security association.
Payload data (variable) The protected contents of the original IP packet, including any data used to protect the contents (e.g. an Initialisation Vector for the cryptographic algorithm). The type of content that was protected is indicated by the Next Header field.
Padding (0-255 octets) Padding for encryption, to extend the payload data to a size that fits the encryption's cipher block size, and to align the next field.
Pad Length (8 bits) Size of the padding (in octets).
Next Header (8 bits) Type of the next header. The value is taken from the list of IP protocol numbers.
Integrity Check Value (multiple of 32 bits) Variable length check value. It may contain padding to align the field to an 8-octet boundary for IPv6, or a 4-octet boundary for IPv4.
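As a small illustration of the Padding and Pad Length fields, the sketch below (a hypothetical helper, not part of any IPsec implementation) computes how many padding octets are needed so that the payload, padding, Pad Length and Next Header together fill whole cipher blocks.

    def esp_padding_octets(payload_len: int, block_size: int = 16) -> int:
        # Pad so that payload + padding + Pad Length (1 octet) + Next Header (1 octet)
        # is a multiple of the cipher block size; RFC 4303 allows 0-255 padding octets.
        return (-(payload_len + 2)) % block_size

    # Example: a 37-octet payload with a 16-octet block cipher such as AES-CBC
    # needs (-(37 + 2)) % 16 = 9 padding octets, since 37 + 9 + 1 + 1 = 48 = 3 * 16.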
Security association
The IPsec protocols use a security association, where the communicating parties establish shared security attributes such as algorithms and keys. As such IPsec provides a range of options once it has been determined whether AH or ESP is used. Before exchanging data the two hosts agree on which symmetric encryption algorithm is used to encrypt the IP packet, for example AES or ChaCha20, and which hash function is used to ensure the integrity of the data, such as BLAKE2 or SHA256. These parameters are agreed for the particular session, for which a lifetime must be agreed and a session key.
The algorithm for authentication is also agreed before the data transfer takes place and IPsec supports a range of methods. Authentication is possible through pre-shared key, where a symmetric key is already in the possession of both hosts, and the hosts send each other hashes of the shared key to prove that they are in possession of the same key. IPsec also supports public key encryption, where each host has a public and a private key, they exchange their public keys and each host sends the other a nonce encrypted with the other host's public key. Alternatively if both hosts hold a public key certificate from a certificate authority, this can be used for IPsec authentication.
The security associations of IPsec are established using the Internet Security Association and Key Management Protocol (ISAKMP). ISAKMP is implemented by manual configuration with pre-shared secrets, Internet Key Exchange (IKE and IKEv2), Kerberized Internet Negotiation of Keys (KINK), and the use of IPSECKEY DNS records. RFC 5386 defines Better-Than-Nothing Security (BTNS) as an unauthenticated mode of IPsec using an extended IKE protocol. C. Meadows, C. Cremers, and others have used Formal Methods to identify various anomalies which exist in IKEv1 and also in IKEv2.
In order to decide what protection is to be provided for an outgoing packet, IPsec uses the Security Parameter Index (SPI), an index to the security association database (SADB), along with the destination address in a packet header, which together uniquely identifies a security association for that packet. A similar procedure is performed for an incoming packet, where IPsec gathers decryption and verification keys from the security association database.
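A minimal sketch of that selection step, assuming a toy in-memory security association database keyed the way the text describes (the SPI, addresses and parameters below are made up):

    # Hypothetical, simplified security association database (SADB):
    # (SPI, destination address, protocol) -> negotiated parameters.
    sadb = {
        (0x00001011, "198.51.100.7", "esp"): {
            "enc_alg": "aes-gcm", "enc_key": b"\x00" * 16, "replay_window": 64,
        },
    }

    def lookup_sa(spi: int, dst_addr: str, proto: str):
        # Return the security association for a packet, or None if none is known.
        return sadb.get((spi, dst_addr, proto))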
For IP multicast a security association is provided for the group, and is duplicated across all authorized receivers of the group. There may be more than one security association for a group, using different SPIs, thereby allowing multiple levels and sets of security within a group. Indeed, each sender can have multiple security associations, allowing authentication, since a receiver can only know that someone knowing the keys sent the data. Note that the relevant standard does not describe how the association is chosen and duplicated across the group; it is assumed that a responsible party will have made the choice.
Modes of operation
The IPsec protocols AH and ESP can be implemented in a host-to-host transport mode, as well as in a network tunneling mode.
Transport mode
In transport mode, only the payload of the IP packet is usually encrypted or authenticated. The routing is intact, since the IP header is neither modified nor encrypted; however, when the authentication header is used, the IP addresses cannot be modified by network address translation, as this always invalidates the hash value. The transport and application layers are always secured by a hash, so they cannot be modified in any way, for example by translating the port numbers.
A means to encapsulate IPsec messages for NAT traversal has been defined by RFC documents describing the NAT-T mechanism.
Tunnel mode
In tunnel mode, the entire IP packet is encrypted and authenticated. It is then encapsulated into a new IP packet with a new IP header. Tunnel mode is used to create virtual private networks for network-to-network communications (e.g. between routers to link sites), host-to-network communications (e.g. remote user access) and host-to-host communications (e.g. private chat).
Tunnel mode supports NAT traversal.
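The difference between the two modes can also be pictured as a difference in byte-level nesting. The sketch below is purely illustrative: no real headers are built, and the ESP header and trailer bytes are assumed to come from elsewhere.

    # Transport mode: [ original IP header | ESP header | transport payload | ESP trailer/ICV ]
    # Tunnel mode:    [ new IP header | ESP header | entire original IP packet | ESP trailer/ICV ]

    def tunnel_encapsulate(original_packet: bytes, outer_ip_header: bytes,
                           esp_header: bytes, esp_trailer: bytes) -> bytes:
        # Tunnel mode wraps the whole original packet, inner header included.
        return outer_ip_header + esp_header + original_packet + esp_trailer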
Algorithms
Symmetric encryption algorithms
Cryptographic algorithms defined for use with IPsec include:
HMAC-SHA1/SHA2 for integrity protection and authenticity.
TripleDES-CBC for confidentiality
AES-CBC and AES-CTR for confidentiality.
AES-GCM and ChaCha20-Poly1305 providing confidentiality and authentication together efficiently.
Refer to RFC 8221 for details.
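The following sketch, which assumes the third-party Python "cryptography" package is installed, shows the combined confidentiality-and-authentication property that AEAD algorithms such as AES-GCM provide. It is not the actual ESP packet construction; the key, nonce and payload are placeholders.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)                  # 96-bit nonce, must be unique per packet
    payload = b"inner packet bytes"         # hypothetical plaintext
    header = b"ESP header bytes"            # authenticated but not encrypted (associated data)

    ciphertext = AESGCM(key).encrypt(nonce, payload, header)
    assert AESGCM(key).decrypt(nonce, ciphertext, header) == payload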
Key exchange algorithms
Diffie–Hellman (RFC 3526)
ECDH (RFC 4753)
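A toy finite-field Diffie–Hellman exchange, using a textbook-sized prime rather than the large MODP groups of RFC 3526, illustrates how both peers arrive at the same shared secret (all numbers here are illustrative and far too small to be secure):

    p, g = 23, 5                 # tiny prime and generator, for illustration only
    a, b = 6, 15                 # each peer's private exponent (normally large and random)

    A = pow(g, a, p)             # peer 1 sends A to peer 2
    B = pow(g, b, p)             # peer 2 sends B to peer 1

    assert pow(B, a, p) == pow(A, b, p)   # both derive the same shared secret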
Authentication algorithms
RSA
ECDSA (RFC 4754)
PSK (RFC 6617)
Implementations
IPsec can be implemented in the IP stack of an operating system, which requires modification of the source code. This method of implementation is done for hosts and security gateways. Various IPsec-capable IP stacks are available from companies such as HP or IBM. An alternative is the so-called bump-in-the-stack (BITS) implementation, where the operating system source code does not have to be modified; IPsec is installed between the IP stack and the network drivers. This way operating systems can be retrofitted with IPsec. This method of implementation is also used for both hosts and gateways. However, when retrofitting IPsec, the encapsulation of IP packets may cause problems for automatic path MTU discovery, where the maximum transmission unit (MTU) size on the network path between two IP hosts is established. If a host or gateway has a separate cryptoprocessor, which is common in the military and can also be found in commercial systems, a so-called bump-in-the-wire (BITW) implementation of IPsec is possible.
When IPsec is implemented in the kernel, the key management and ISAKMP/IKE negotiation is carried out from user space. The NRL-developed and openly specified "PF_KEY Key Management API, Version 2" is often used to enable the application-space key management application to update the IPsec Security Associations stored within the kernel-space IPsec implementation. Existing IPsec implementations usually include ESP, AH, and IKE version 2. Existing IPsec implementations on UNIX-like operating systems, for example, Solaris or Linux, usually include PF_KEY version 2.
Embedded IPsec can be used to secure communication among applications running on resource-constrained systems with a small overhead.
Standards status
IPsec was developed in conjunction with IPv6 and was originally required to be supported by all standards-compliant implementations of IPv6 before RFC 6434 made it only a recommendation. IPsec is also optional for IPv4 implementations. IPsec is most commonly used to secure IPv4 traffic.
IPsec protocols were originally defined in RFC 1825 through RFC 1829, which were published in 1995. In 1998, these documents were superseded by RFC 2401 and RFC 2412 with a few incompatible engineering details, although they were conceptually identical. In addition, a mutual authentication and key exchange protocol Internet Key Exchange (IKE) was defined to create and manage security associations. In December 2005, new standards were defined in RFC 4301 and RFC 4309 which are largely a superset of the previous editions with a second version of the Internet Key Exchange standard IKEv2. These third-generation documents standardized the abbreviation of IPsec to uppercase “IP” and lowercase “sec”. “ESP” generally refers to RFC 4303, which is the most recent version of the specification.
Since mid-2008, an IPsec Maintenance and Extensions (ipsecme) working group has been active at the IETF.
Alleged NSA interference
In 2013, as part of Snowden leaks, it was revealed that the US National Security Agency had been actively working to "Insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets" as part of the Bullrun program. There are allegations that IPsec was a targeted encryption system.
The OpenBSD IPsec stack came later on and also was widely copied. In a letter which OpenBSD lead developer Theo de Raadt received on 11 Dec 2010 from Gregory Perry, it is alleged that Jason Wright and others, working for the FBI, inserted "a number of backdoors and side channel key leaking mechanisms" into the OpenBSD crypto code. In the forwarded email from 2010, Theo de Raadt did not at first express an official position on the validity of the claims, apart from the implicit endorsement from forwarding the email. Jason Wright's response to the allegations: "Every urban legend is made more real by the inclusion of real names, dates, and times. Gregory Perry's email falls into this category. … I will state clearly that I did not add backdoors to the OpenBSD operating system or the OpenBSD crypto framework (OCF)." Some days later, de Raadt commented that "I believe that NETSEC was probably contracted to write backdoors as alleged. … If those were written, I don't believe they made it into our tree." This was published before the Snowden leaks.
An alternative explanation put forward by the authors of the Logjam attack suggests that the NSA compromised IPsec VPNs by undermining the Diffie-Hellman algorithm used in the key exchange. In their paper, they allege the NSA specially built a computing cluster to precompute multiplicative subgroups for specific primes and generators, such as for the second Oakley group defined in RFC 2409. As of May 2015, 90% of addressable IPsec VPNs supported the second Oakley group as part of IKE. If an organization were to precompute this group, they could derive the keys being exchanged and decrypt traffic without inserting any software backdoors.
A second alternative explanation that was put forward was that the Equation Group used zero-day exploits against several manufacturers' VPN equipment which were validated by Kaspersky Lab as being tied to the Equation Group and validated by those manufacturers as being real exploits, some of which were zero-day exploits at the time of their exposure. The Cisco PIX and ASA firewalls had vulnerabilities that were used for wiretapping by the NSA.
Furthermore, IPsec VPNs using "Aggressive Mode" settings send a hash of the PSK in the clear. This can be and apparently is targeted by the NSA using offline dictionary attacks.
IETF documentation
Standards track
: The ESP DES-CBC Transform
: The Use of HMAC-MD5-96 within ESP and AH
: The Use of HMAC-SHA-1-96 within ESP and AH
: The ESP DES-CBC Cipher Algorithm With Explicit IV
: The NULL Encryption Algorithm and Its Use With IPsec
: The ESP CBC-Mode Cipher Algorithms
: The Use of HMAC-RIPEMD-160-96 within ESP and AH
: More Modular Exponential (MODP) Diffie-Hellman groups for Internet Key Exchange (IKE)
: The AES-CBC Cipher Algorithm and Its Use with IPsec
: Using Advanced Encryption Standard (AES) Counter Mode With IPsec Encapsulating Security Payload (ESP)
: Negotiation of NAT-Traversal in the IKE
: UDP Encapsulation of IPsec ESP Packets
: The Use of Galois/Counter Mode (GCM) in IPsec Encapsulating Security Payload (ESP)
: Security Architecture for the Internet Protocol
: IP Authentication Header
: IP Encapsulating Security Payload
: Extended Sequence Number (ESN) Addendum to IPsec Domain of Interpretation (DOI) for Internet Security Association and Key Management Protocol (ISAKMP)
: Cryptographic Algorithms for Use in the Internet Key Exchange Version 2 (IKEv2)
: Cryptographic Suites for IPsec
: Using Advanced Encryption Standard (AES) CCM Mode with IPsec Encapsulating Security Payload (ESP)
: The Use of Galois Message Authentication Code (GMAC) in IPsec ESP and AH
: IKEv2 Mobility and Multihoming Protocol (MOBIKE)
: Online Certificate Status Protocol (OCSP) Extensions to IKEv2
: Using HMAC-SHA-256, HMAC-SHA-384, and HMAC-SHA-512 with IPsec
: The Internet IP Security PKI Profile of IKEv1/ISAKMP, IKEv2, and PKIX
: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile
: Using Authenticated Encryption Algorithms with the Encrypted Payload of the Internet Key Exchange version 2 (IKEv2) Protocol
: Better-Than-Nothing Security: An Unauthenticated Mode of IPsec
: Modes of Operation for Camellia for Use with IPsec
: Redirect Mechanism for the Internet Key Exchange Protocol Version 2 (IKEv2)
: Internet Key Exchange Protocol Version 2 (IKEv2) Session Resumption
: IKEv2 Extensions to Support Robust Header Compression over IPsec
: IPsec Extensions to Support Robust Header Compression over IPsec
: Internet Key Exchange Protocol Version 2 (IKEv2)
: Cryptographic Algorithm Implementation Requirements and Usage Guidance for Encapsulating Security Payload (ESP) and Authentication Header (AH)
: Internet Key Exchange Protocol Version 2 (IKEv2) Message Fragmentation
: Signature Authentication in the Internet Key Exchange Version 2 (IKEv2)
: ChaCha20, Poly1305, and Their Use in the Internet Key Exchange Protocol (IKE) and IPsec
Experimental RFCs
: Repeated Authentication in Internet Key Exchange (IKEv2) Protocol
Informational RFCs
: PF_KEY Interface
: The OAKLEY Key Determination Protocol
: A Traffic-Based Method of Detecting Dead Internet Key Exchange (IKE) Peers
: IPsec-Network Address Translation (NAT) Compatibility Requirements
: Design of the IKEv2 Mobility and Multihoming (MOBIKE) Protocol
: Requirements for an IPsec Certificate Management Profile
: Problem and Applicability Statement for Better-Than-Nothing Security (BTNS)
: Integration of Robust Header Compression over IPsec Security Associations
: Using Advanced Encryption Standard Counter Mode (AES-CTR) with the Internet Key Exchange version 02 (IKEv2) Protocol
: IPsec Cluster Problem Statement
: IPsec and IKE Document Roadmap
: Suite B Cryptographic Suites for IPsec
: Suite B Profile for Internet Protocol Security (IPsec)
: Secure Password Framework for Internet Key Exchange Version 2 (IKEv2)
Best current practice RFCs
: Guidelines for Specifying the Use of IPsec Version 2
Obsolete/historic RFCs
: Security Architecture for the Internet Protocol (obsoleted by RFC 2401)
: IP Authentication Header (obsoleted by RFC 2402)
: IP Encapsulating Security Payload (ESP) (obsoleted by RFC 2406)
: IP Authentication using Keyed MD5 (historic)
: Security Architecture for the Internet Protocol (IPsec overview) (obsoleted by RFC 4301)
: IP Encapsulating Security Payload (ESP) (obsoleted by RFC 4303 and RFC 4305)
: The Internet IP Security Domain of Interpretation for ISAKMP (obsoleted by RFC 4306)
: The Internet Key Exchange (obsoleted by RFC 4306)
: Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH) (obsoleted by RFC 4835)
: Internet Key Exchange (IKEv2) Protocol (obsoleted by RFC 5996)
: IKEv2 Clarifications and Implementation Guidelines (obsoleted by RFC 7296)
: Cryptographic Algorithm Implementation Requirements for Encapsulating Security Payload (ESP) and Authentication Header (AH) (obsoleted by RFC 7321)
: Internet Key Exchange Protocol Version 2 (IKEv2) (obsoleted by RFC 7296)
See also
Dynamic Multipoint Virtual Private Network
Information security
NAT traversal
Opportunistic encryption
tcpcrypt
References
External links
All IETF active security WGs
IETF ipsecme WG ("IP Security Maintenance and Extensions" Working Group)
IETF btns WG ("Better-Than-Nothing Security" Working Group) (chartered to work on unauthenticated IPsec, IPsec APIs, connection latching)
Securing Data in Transit with IPsec WindowsSecurity.com article by Deb Shinder
IPsec on Microsoft TechNet
Microsoft IPsec Diagnostic Tool on Microsoft Download Center
An Illustrated Guide to IPsec by Steve Friedl
Security Architecture for IP (IPsec) Data Communication Lectures by Manfred Lindner Part IPsec
Creating VPNs with IPsec and SSL/TLS Linux Journal article by Rami Rosen
Cryptographic protocols
Internet protocols
Network layer protocols
Tunneling protocols |
43478 | https://en.wikipedia.org/wiki/Email%20client | Email client | An email client, email reader or, more formally, message user agent (MUA) or mail user agent is a computer program used to access and manage a user's email.
A web application which provides message management, composition, and reception functions may act as a web email client, and a piece of computer hardware or software whose primary or most visible role is to work as an email client may also use the term.
Retrieving messages from a mailbox
Like most client programs, an email client is only active when a user runs it. The common arrangement is for an email user (the client) to make an agreement with a remote Mail Transfer Agent (MTA) server for the receipt and storage of the client's emails. The MTA, using a suitable mail delivery agent (MDA), adds email messages to a client's storage as they arrive. The remote mail storage is referred to as the user's mailbox. The default setting on many Unix systems is for the mail server to store formatted messages in mbox, within the user's home directory. Of course, users of the system can log in and run a mail client on the same computer that hosts their mailboxes; in which case, the server is not actually remote, other than in a generic sense.
Emails are stored in the user's mailbox on the remote server until the user's email client requests them to be downloaded to the user's computer, or can otherwise access the user's mailbox on the possibly remote server. The email client can be set up to connect to multiple mailboxes at the same time and to request the download of emails either automatically, such as at pre-set intervals, or the request can be manually initiated by the user.
A user's mailbox can be accessed in two dedicated ways. The Post Office Protocol (POP) allows the user to download messages one at a time and only deletes them from the server after they have been successfully saved on local storage. It is possible to leave messages on the server to permit another client to access them. However, there is no provision for flagging a specific message as seen, answered, or forwarded, thus POP is not convenient for users who access the same mail from different machines.
Alternatively, the Internet Message Access Protocol (IMAP) allows users to keep messages on the server, flagging them as appropriate. IMAP provides folders and sub-folders, which can be shared among different users with possibly different access rights. Typically, the Sent, Drafts, and Trash folders are created by default. IMAP features an idle extension for real-time updates, providing faster notification than polling, where long-lasting connections are feasible. See also the remote messages section below.
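A minimal retrieval sketch using Python's standard imaplib module; the server name, credentials and mailbox are placeholders, and error handling is omitted:

    import imaplib

    with imaplib.IMAP4_SSL("imap.example.org") as conn:
        conn.login("alice", "app-password")
        conn.select("INBOX", readonly=True)          # messages stay on the server
        status, data = conn.search(None, "UNSEEN")   # flags such as Seen are kept server-side
        for num in data[0].split():
            status, msg_data = conn.fetch(num, "(RFC822)")
            raw_message = msg_data[0][1]             # raw bytes of one full message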
The JSON Meta Application Protocol (JMAP) is implemented using JSON APIs over HTTP and has been developed as an alternative to IMAP/SMTP.
In addition, the mailbox storage can be accessed directly by programs running on the server or via shared disks. Direct access can be more efficient but is less portable as it depends on the mailbox format; it is used by some email clients, including some webmail applications.
Message composition
Email clients usually contain user interfaces to display and edit text. Some applications permit the use of a program-external editor.
The email clients will perform formatting according to RFC 5322 for headers and body, and MIME for non-textual content and attachments. Headers include the destination fields, To, Cc (short for Carbon copy), and Bcc (Blind carbon copy), and the originator fields From which is the message's author(s), Sender in case there are more authors, and Reply-To in case responses should be addressed to a different mailbox. To better assist the user with destination fields, many clients maintain one or more address books and/or are able to connect to an LDAP directory server. For originator fields, clients may support different identities.
Client settings require the user's real name and email address for each user's identity, and possibly a list of LDAP servers.
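A short composition sketch with Python's standard email package, showing the originator and destination header fields discussed above together with a MIME attachment (all addresses and the attachment bytes are placeholders):

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "Alice Example <alice@example.org>"
    msg["To"] = "bob@example.net"
    msg["Cc"] = "carol@example.com"
    msg["Reply-To"] = "alice+replies@example.org"
    msg["Subject"] = "Meeting notes"
    msg.set_content("Plain-text body of the message.")
    msg.add_attachment(b"...binary data...", maintype="application",
                       subtype="pdf", filename="notes.pdf")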
Submitting messages to a server
When a user wishes to create and send an email, the email client will handle the task. The email client is usually set up automatically to connect to the user's mail server, which is typically either an MSA or an MTA, two variations of the SMTP protocol. An email client that uses the SMTP protocol typically authenticates itself with an SMTP authentication extension, which the mail server uses to verify the sender. This method eases modularity and nomadic computing. The older method was for the mail server to recognize the client's IP address, e.g. because the client is on the same machine and uses internal address 127.0.0.1, or because the client's IP address is controlled by the same Internet service provider that provides both Internet access and mail services.
Client settings require the name or IP address of the preferred outgoing mail server, the port number (25 for MTA, 587 for MSA), and the user name and password for the authentication, if any. There is a non-standard port 465 for SSL encrypted SMTP sessions, that many clients and servers support for backward compatibility.
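A submission sketch using Python's standard smtplib module on the MSA port with the STARTTLS upgrade; the host and credentials are placeholders, and msg is an EmailMessage such as the one composed earlier:

    import smtplib

    with smtplib.SMTP("mail.example.org", 587) as smtp:   # MSA submission port
        smtp.starttls()                                    # upgrade the session to TLS
        smtp.login("alice", "app-password")                # SMTP authentication extension
        smtp.send_message(msg)                             # msg is an email.message.EmailMessage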
Encryption
With no encryption, much like for postcards, email activity is plainly visible to any occasional eavesdropper. Email encryption enables privacy to be safeguarded by encrypting the mail sessions, the body of the message, or both. Without it, anyone with network access and the right tools can monitor email and obtain login passwords. Examples of concern include government censorship and surveillance and fellow wireless network users such as at an Internet cafe.
All relevant email protocols have an option to encrypt the whole session, to prevent a user's name and password from being sniffed. They are strongly suggested for nomadic users and whenever the Internet access provider is not trusted. When sending mail, users can only control encryption at the first hop from a client to its configured outgoing mail server. At any further hop, messages may be transmitted with or without encryption, depending solely on the general configuration of the transmitting server and the capabilities of the receiving one.
Encrypted mail sessions deliver messages in their original format, i.e. plain text or encrypted body, on a user's local mailbox and on the destination server's. The latter server is operated by an email hosting service provider, possibly a different entity than the Internet access provider currently at hand.
Encrypting an email retrieval session with, e.g., SSL, can protect both parts (authentication, and message transfer) of the session.
Alternatively, if the user has SSH access to their mail server, they can use SSH port forwarding to create an encrypted tunnel over which to retrieve their emails.
Encryption of the message body
There are two main models for managing cryptographic keys. S/MIME employs a model based on a trusted certificate authority (CA) that signs users' public keys. OpenPGP employs a somewhat more flexible web of trust mechanism that allows users to sign one another's public keys. OpenPGP is also more flexible in the format of the messages, in that it still supports plain message encryption and signing as they used to work before MIME standardization.
In both cases, only the message body is encrypted. Header fields, including originator, recipients, and often subject, remain in plain text.
Webmail
In addition to email clients running on a desktop computer, there are those hosted remotely, either as part of a remote UNIX installation accessible by telnet (i.e. a shell account), or hosted on the Web. Both of these approaches have several advantages: they share an ability to send and receive email away from the user's normal base using a web browser or telnet client, thus eliminating the need to install a dedicated email client on the user's device.
Some websites are dedicated to providing email services, and many Internet service providers provide webmail services as part of their Internet service package. The main limitations of webmail are that user interactions are subject to the website's operating system and the general inability to download email messages and compose or work on the messages offline, although there are software packages that can integrate parts of the webmail functionality into the OS (e.g. creating messages directly from third party applications via MAPI).
Like IMAP and MAPI, webmail provides for email messages to remain on the mail server. See next section.
Remote messages
POP3 has an option to leave messages on the server. By contrast, both IMAP and webmail keep messages on the server as their method of operating, albeit users can make local copies as they like. Keeping messages on the server has advantages and disadvantages.
Advantages
Messages can be accessed from various computers or mobile devices at different locations, using different clients.
Some kind of backup is usually provided by the server.
Disadvantages
With limited bandwidth, access to long messages can be lengthy, unless the email client caches a local copy.
There may be privacy concerns since messages that stay on the server at all times have more chances to be casually accessed by IT personnel, unless end-to-end encryption is used.
Protocols
Popular protocols for retrieving mail include POP3 and IMAP4. Sending mail is usually done using the SMTP protocol.
Another important standard supported by most email clients is MIME, which is used to send binary file email attachments. Attachments are files that are not part of the email proper but are sent with the email.
Most email clients use a User-Agent header field to identify the software used to send the message. According to RFC 2076, this is a common but non-standard header field.
RFC 6409, Message Submission for Mail, details the role of the Mail submission agent.
RFC 5068, Email Submission Operations: Access and Accountability Requirements, provides a survey of the concepts of MTA, MSA, MDA, and MUA. It mentions that "Access Providers MUST NOT block users from accessing the external Internet using the SUBMISSION port 587" and that "MUAs SHOULD use the SUBMISSION port for message submission."
Port numbers
Email servers and clients by convention use the TCP port numbers in the following table. For MSA, IMAP and POP3, the table reports also the labels that a client can use to query the SRV records and discover both the host name and the port number of the corresponding service.
Note that while webmail obeys the earlier HTTP practice of having separate ports for encrypted and plain-text sessions, mail protocols use the STARTTLS technique, thereby allowing encryption to start on an already established TCP connection. While RFC 2595 used to discourage the use of the previously established ports 995 and 993, RFC 8314 promotes the use of implicit TLS when available.
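As an illustration of the SRV-record discovery mentioned above (RFC 6186), the sketch below assumes the third-party dnspython package (version 2 or later) and a hypothetical domain; a client would query labels such as _imaps._tcp or _submission._tcp to learn which host and port to connect to:

    import dns.resolver   # third-party "dnspython" package

    answers = dns.resolver.resolve("_imaps._tcp.example.org", "SRV")
    for record in answers:
        print(record.target, record.port)   # host name and port advertised for IMAP over TLS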
Proprietary client protocols
Microsoft mail systems use the proprietary Messaging Application Programming Interface (MAPI) in client applications, such as Microsoft Outlook, to access Microsoft Exchange electronic mail servers.
See also
Comparison of email clients
Mail submission agent (MSA)
Mailto
Message delivery agent (MDA)
Message transfer agent (MTA)
Simple Mail Transfer Protocol
Text-based email client
References
Bibliography |
45200 | https://en.wikipedia.org/wiki/Universal%20algebra | Universal algebra | Universal algebra (sometimes called general algebra) is the field of mathematics that studies algebraic structures themselves, not examples ("models") of algebraic structures.
For instance, rather than take particular groups as the object of study, in universal algebra one takes the class of groups as an object of study.
Basic idea
In universal algebra, an algebra (or algebraic structure) is a set A together with a collection of operations on A. An n-ary operation on A is a function that takes n elements of A and returns a single element of A. Thus, a 0-ary operation (or nullary operation) can be represented simply as an element of A, or a constant, often denoted by a letter like a. A 1-ary operation (or unary operation) is simply a function from A to A, often denoted by a symbol placed in front of its argument, like ~x. A 2-ary operation (or binary operation) is often denoted by a symbol placed between its arguments, like x ∗ y. Operations of higher or unspecified arity are usually denoted by function symbols, with the arguments placed in parentheses and separated by commas, like f(x,y,z) or f(x1,...,xn). Some researchers allow infinitary operations, such as ⋀j∈J xj where J is an infinite index set, thus leading into the algebraic theory of complete lattices. One way of talking about an algebra, then, is by referring to it as an algebra of a certain type Ω, where Ω is an ordered sequence of natural numbers representing the arity of the operations of the algebra.
Equations
After the operations have been specified, the nature of the algebra is further defined by axioms, which in universal algebra often take the form of identities, or equational laws. An example is the associative axiom for a binary operation, which is given by the equation x ∗ (y ∗ z) = (x ∗ y) ∗ z. The axiom is intended to hold for all elements x, y, and z of the set A.
Varieties
A collection of algebraic structures defined by identities is called a variety or equational class.
Restricting one's study to varieties rules out:
quantification, including universal quantification (∀) except before an equation, and existential quantification (∃)
logical connectives other than conjunction (∧)
relations other than equality, in particular inequalities, both a ≠ b and order relations
The study of equational classes can be seen as a special branch of model theory, typically dealing with structures having operations only (i.e. the type can have symbols for functions but not for relations other than equality), and in which the language used to talk about these structures uses equations only.
Not all algebraic structures in a wider sense fall into this scope. For example, ordered groups involve an ordering relation, so would not fall within this scope.
The class of fields is not an equational class because there is no type (or "signature") in which all field laws can be written as equations (inverses are defined only for the non-zero elements of a field, so inversion cannot be added to the type as a total operation).
One advantage of this restriction is that the structures studied in universal algebra can be defined in any category that has finite products. For example, a topological group is just a group in the category of topological spaces.
Examples
Most of the usual algebraic systems of mathematics are examples of varieties, but not always in an obvious way, since the usual definitions often involve quantification or inequalities.
Groups
As an example, consider the definition of a group. Usually a group is defined in terms of a single binary operation ∗, subject to the axioms:
Associativity (as in the previous section): x ∗ (y ∗ z) = (x ∗ y) ∗ z; formally: ∀x,y,z. x∗(y∗z)=(x∗y)∗z.
Identity element: There exists an element e such that for each element x, one has e ∗ x = x = x ∗ e; formally: ∃e ∀x. e∗x=x=x∗e.
Inverse element: The identity element is easily seen to be unique, and is usually denoted by e. Then for each x, there exists an element i such that x ∗ i = e = i ∗ x; formally: ∀x ∃i. x∗i=e=i∗x.
(Some authors also use the "closure" axiom that x ∗ y belongs to A whenever x and y do, but here this is already implied by calling ∗ a binary operation.)
This definition of a group does not immediately fit the point of view of universal algebra, because the axioms of the identity element and inversion are not stated purely in terms of equational laws which hold universally "for all ..." elements, but also involve the existential quantifier "there exists ...". The group axioms can be phrased as universally quantified equations by specifying, in addition to the binary operation ∗, a nullary operation e and a unary operation ~, with ~x usually written as x−1. The axioms become:
Associativity: x ∗ (y ∗ z) = (x ∗ y) ∗ z.
Identity element: e ∗ x = x = x ∗ e; formally: ∀x. e∗x=x=x∗e.
Inverse element: x ∗ ~x = e = ~x ∗ x; formally: ∀x. x∗~x=e=~x∗x.
To summarize, the usual definition has:
a single binary operation (signature (2))
1 equational law (associativity)
2 quantified laws (identity and inverse)
while the universal algebra definition has:
3 operations: one binary, one unary, and one nullary (signature (2,1,0))
3 equational laws (associativity, identity, and inverse)
no quantified laws (except outermost universal quantifiers, which are allowed in varieties)
A key point is that the extra operations do not add information, but follow uniquely from the usual definition of a group. Although the usual definition did not uniquely specify the identity element e, an easy exercise shows it is unique, as is each inverse element.
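As a concrete illustration (not part of the standard exposition), the following sketch checks by brute force that the integers mod 3, with addition, negation, and the constant 0 as the operations of signature (2,1,0), satisfy the three equational laws above.
  # Z_3 as an algebra of signature (2,1,0): one binary, one unary,
  # and one nullary operation, checked against the three identities.
  A = range(3)

  def mul(x, y):        # the binary operation *
      return (x + y) % 3

  def inv(x):           # the unary operation ~
      return (-x) % 3

  e = 0                 # the nullary operation (the constant e)

  assert all(mul(x, mul(y, z)) == mul(mul(x, y), z)
             for x in A for y in A for z in A)                  # associativity
  assert all(mul(e, x) == x == mul(x, e) for x in A)            # identity
  assert all(mul(x, inv(x)) == e == mul(inv(x), x) for x in A)  # inverse
  print("(Z_3, +, ~, 0) satisfies the three group identities")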
The universal algebra point of view is well adapted to category theory. For example, when defining a group object in category theory, where the object in question may not be a set, one must use equational laws (which make sense in general categories), rather than quantified laws (which refer to individual elements). Further, the inverse and identity are specified as morphisms in the category. For example, in a topological group, the inverse must not only exist element-wise, but must give a continuous mapping (a morphism). Some authors also require the identity map to be a closed inclusion (a cofibration).
Other examples
Most algebraic structures are examples of universal algebras.
Rings, semigroups, quasigroups, groupoids, magmas, loops, and others.
Vector spaces over a fixed field and modules over a fixed ring are universal algebras. These have a binary addition and a family of unary scalar multiplication operators, one for each element of the field or ring.
Examples of relational algebras include semilattices, lattices, and Boolean algebras.
Basic constructions
We assume that the type, Ω, has been fixed. Then there are three basic constructions in universal algebra: homomorphic image, subalgebra, and product.
A homomorphism between two algebras A and B is a function h: A → B from the set A to the set B such that, for every operation fA of A and corresponding fB of B (of arity, say, n), h(fA(x1,...,xn)) = fB(h(x1),...,h(xn)). (Sometimes the subscripts on f are taken off when it is clear from context which algebra the function is from.) For example, if e is a constant (nullary operation), then h(eA) = eB. If ~ is a unary operation, then h(~x) = ~h(x). If ∗ is a binary operation, then h(x ∗ y) = h(x) ∗ h(y). And so on. A few of the things that can be done with homomorphisms, as well as definitions of certain special kinds of homomorphisms, are listed under the entry Homomorphism. In particular, we can take the homomorphic image of an algebra, h(A).
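For a concrete example of these compatibility conditions, the following sketch checks numerically that the map h(n) = n mod 2 respects the binary, unary, and nullary operations when the integers and the integers mod 2 are both viewed as algebras of signature (2,1,0); the sample range is illustrative, since the conditions in fact hold for all integers.
  # h(n) = n mod 2, from (Z, +, -, 0) to (Z_2, + mod 2, - mod 2, 0).
  def h(n):
      return n % 2

  samples = range(-5, 6)
  assert all(h(x + y) == (h(x) + h(y)) % 2           # binary operation
             for x in samples for y in samples)
  assert all(h(-x) == (-h(x)) % 2 for x in samples)  # unary operation
  assert h(0) == 0                                   # nullary operation
  print("h respects the binary, unary and nullary operations")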
A subalgebra of A is a subset of A that is closed under all the operations of A. A product of some set of algebraic structures is the cartesian product of the sets with the operations defined coordinatewise.
Some basic theorems
The isomorphism theorems, which encompass the isomorphism theorems of groups, rings, modules, etc.
Birkhoff's HSP Theorem, which states that a class of algebras is a variety if and only if it is closed under homomorphic images, subalgebras, and arbitrary direct products.
Motivations and applications
In addition to its unifying approach, universal algebra also gives deep theorems and important examples and counterexamples. It provides a useful framework for those who intend to start the study of new classes of algebras.
It can enable the use of methods invented for some particular classes of algebras to other classes of algebras, by recasting the methods in terms of universal algebra (if possible), and then interpreting these as applied to other classes. It has also provided conceptual clarification; as J.D.H. Smith puts it, "What looks messy and complicated in a particular framework may turn out to be simple and obvious in the proper general one."
In particular, universal algebra can be applied to the study of monoids, rings, and lattices. Before universal algebra came along, many theorems (most notably the isomorphism theorems) were proved separately in all of these classes, but with universal algebra, they can be proven once and for all for every kind of algebraic system.
The 1956 paper by Higgins referenced below has been well followed up for its framework for a range of particular algebraic systems, while his 1963 paper is notable for its discussion of algebras with operations which are only partially defined, typical examples for this being categories and groupoids. This leads on to the subject of higher-dimensional algebra which can be defined as the study of algebraic theories with partial operations whose domains are defined under geometric conditions. Notable examples of these are various forms of higher-dimensional categories and groupoids.
Constraint satisfaction problem
Universal algebra provides a natural language for the constraint satisfaction problem (CSP). CSP refers to an important class of computational problems where, given a relational algebra A and an existential sentence φ over this algebra, the question is to find out whether φ can be satisfied in A. The algebra A is often fixed, so that CSP(A) refers to the problem whose instance is only the existential sentence φ.
It is proved that every computational problem can be formulated as CSP(A) for some algebra A.
For example, the n-coloring problem can be stated as the CSP of the algebra ({0, 1, ..., n−1}, ≠), i.e. an algebra with n elements and a single relation, inequality.
The dichotomy conjecture (proved in April 2017) states that if A is a finite algebra, then CSP(A) is either in P or NP-complete.
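As an illustration of the n-coloring example, the following brute-force sketch treats a coloring instance as an existential sentence over the algebra ({0, ..., n−1}, ≠): it searches for an assignment of colors to vertices that satisfies an inequality constraint for every edge. The graph used is an arbitrary 5-cycle chosen for illustration; real CSP solvers are of course far more sophisticated.
  # Brute-force n-coloring as a CSP over ({0, ..., n-1}, !=): each edge is
  # read as the constraint "its endpoints receive unequal colors".
  from itertools import product

  def csp_coloring(vertices, edges, n):
      for assignment in product(range(n), repeat=len(vertices)):
          color = dict(zip(vertices, assignment))
          if all(color[u] != color[v] for u, v in edges):
              return color          # a satisfying assignment (a witness)
      return None                   # the existential sentence is unsatisfiable

  cycle5 = ["a", "b", "c", "d", "e"]
  edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "a")]
  print(csp_coloring(cycle5, edges, 2))  # None: a 5-cycle is not 2-colorable
  print(csp_coloring(cycle5, edges, 3))  # a valid 3-coloring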
Generalizations
Universal algebra has also been studied using the techniques of category theory. In this approach, instead of writing a list of operations and equations obeyed by those operations, one can describe an algebraic structure using categories of a special sort, known as Lawvere theories or more generally algebraic theories. Alternatively, one can describe algebraic structures using monads. The two approaches are closely related, with each having their own advantages.
In particular, every Lawvere theory gives a monad on the category of sets, while any "finitary" monad on the category of sets arises from a Lawvere theory. However, a monad describes algebraic structures within one particular category (for example the category of sets), while algebraic theories describe structure within any of a large class of categories (namely those having finite products).
A more recent development in category theory is operad theory – an operad is a set of operations, similar to a universal algebra, but restricted in that equations are only allowed between expressions with the same variables, with no duplication or omission of variables allowed. Thus, rings can be described as the so-called "algebras" of some operad, but not groups, since the law g ∗ g−1 = e duplicates the variable g on the left side and omits it on the right side. At first this may seem to be a troublesome restriction, but the payoff is that operads have certain advantages: for example, one can hybridize the concepts of ring and vector space to obtain the concept of associative algebra, but one cannot form a similar hybrid of the concepts of group and vector space.
Another development is partial algebra where the operators can be partial functions. Certain partial functions can also be handled by a generalization of Lawvere theories known as essentially algebraic theories.
Another generalization of universal algebra is model theory, which is sometimes described as "universal algebra + logic".
History
In Alfred North Whitehead's book A Treatise on Universal Algebra, published in 1898, the term universal algebra had essentially the same meaning that it has today. Whitehead credits William Rowan Hamilton and Augustus De Morgan as originators of the subject matter, and James Joseph Sylvester with coining the term itself.
At the time structures such as Lie algebras and hyperbolic quaternions drew attention to the need to expand algebraic structures beyond the associatively multiplicative class. In a review Alexander Macfarlane wrote: "The main idea of the work is not unification of the several methods, nor generalization of ordinary algebra so as to include them, but rather the comparative study of their several structures." At the time George Boole's algebra of logic made a strong counterpoint to ordinary number algebra, so the term "universal" served to calm strained sensibilities.
Whitehead's early work sought to unify quaternions (due to Hamilton), Grassmann's Ausdehnungslehre, and Boole's algebra of logic. Whitehead wrote in his book:
"Such algebras have an intrinsic value for separate detailed study; also they are worthy of comparative study, for the sake of the light thereby thrown on the general theory of symbolic reasoning, and on algebraic symbolism in particular. The comparative study necessarily presupposes some previous separate study, comparison being impossible without knowledge."
Whitehead, however, had no results of a general nature. Work on the subject was minimal until the early 1930s, when Garrett Birkhoff and Øystein Ore began publishing on universal algebras. Developments in metamathematics and category theory in the 1940s and 1950s furthered the field, particularly the work of Abraham Robinson, Alfred Tarski, Andrzej Mostowski, and their students.
In the period between 1935 and 1950, most papers were written along the lines suggested by Birkhoff's papers, dealing with free algebras, congruence and subalgebra lattices, and homomorphism theorems. Although the development of mathematical logic had made applications to algebra possible, they came about slowly; results published by Anatoly Maltsev in the 1940s went unnoticed because of the war. Tarski's lecture at the 1950 International Congress of Mathematicians in Cambridge ushered in a new period in which model-theoretic aspects were developed, mainly by Tarski himself, as well as C.C. Chang, Leon Henkin, Bjarni Jónsson, Roger Lyndon, and others.
In the late 1950s, Edward Marczewski emphasized the importance of free algebras, leading to the publication of more than 50 papers on the algebraic theory of free algebras by Marczewski himself, together with Jan Mycielski, Władysław Narkiewicz, Witold Nitka, J. Płonka, S. Świerczkowski, K. Urbanik, and others.
Starting with William Lawvere's thesis in 1963, techniques from category theory have become important in universal algebra.
See also
Graph algebra
Term algebra
Clone
Universal algebraic geometry
Simple universal algebra
Footnotes
References
Bergman, George M., 1998. An Invitation to General Algebra and Universal Constructions (pub. Henry Helson, 15 the Crescent, Berkeley CA, 94708) 398 pp.
Birkhoff, Garrett, 1946. Universal algebra. Comptes Rendus du Premier Congrès Canadien de Mathématiques, University of Toronto Press, Toronto, pp. 310–326.
Burris, Stanley N., and H.P. Sankappanavar, 1981. A Course in Universal Algebra Springer-Verlag. Free online edition.
Cohn, Paul Moritz, 1981. Universal Algebra. Dordrecht, Netherlands: D.Reidel Publishing. (First published in 1965 by Harper & Row)
Freese, Ralph, and Ralph McKenzie, 1987. Commutator Theory for Congruence Modular Varieties, 1st ed. London Mathematical Society Lecture Note Series, 125. Cambridge Univ. Press. Free online second edition.
Grätzer, George, 1968. Universal Algebra D. Van Nostrand Company, Inc.
Higgins, P. J. Groups with multiple operators. Proc. London Math. Soc. (3) 6 (1956), 366–416.
Higgins, P.J., Algebras with a scheme of operators. Mathematische Nachrichten (27) (1963) 115–132.
Hobby, David, and Ralph McKenzie, 1988. The Structure of Finite Algebras. American Mathematical Society. Free online edition.
Jipsen, Peter, and Henry Rose, 1992. Varieties of Lattices, Lecture Notes in Mathematics 1533. Springer Verlag. Free online edition.
Pigozzi, Don. General Theory of Algebras. Free online edition.
Smith, J.D.H., 1976. Mal'cev Varieties, Springer-Verlag.
Whitehead, Alfred North, 1898. A Treatise on Universal Algebra, Cambridge. (Mainly of historical interest.)
External links
Algebra Universalis—a journal dedicated to Universal Algebra. |
45845 | https://en.wikipedia.org/wiki/Voynich%20manuscript | Voynich manuscript | The Voynich manuscript is an illustrated codex hand-written in an otherwise unknown writing system, referred to as 'Voynichese'. The vellum on which it is written has been carbon-dated to the early 15th century (1404–1438), and stylistic analysis indicates it may have been composed in Italy during the Italian Renaissance. The origins, authorship, and purpose of the manuscript are debated. Various hypotheses have been suggested, including that it is an otherwise unrecorded script for a natural language or constructed language; an unread code, cypher, or other form of cryptography; or simply a meaningless hoax.
The manuscript currently consists of around 240 pages, but there is evidence that additional pages are missing. Some pages are foldable sheets of varying size. Most of the pages have fantastical illustrations or diagrams, some crudely coloured, with sections of the manuscript showing people, fictitious plants, astrological symbols, etc. The text is written from left to right. The manuscript is named after Wilfrid Voynich, a Polish book dealer who purchased it in 1912. Since 1969, it has been held in Yale University's Beinecke Rare Book and Manuscript Library.
The Voynich manuscript has been studied by many professional and amateur cryptographers, including American and British codebreakers from both World War I and World War II. The manuscript has never been demonstrably deciphered, and none of the many hypotheses proposed over the last hundred years have been independently verified. The mystery of its meaning and origin has excited the popular imagination, making it the subject of study and speculation.
Description
Codicology
The codicology, or physical characteristics of the manuscript, has been studied by researchers. The manuscript measures 23.5 by 16.2 by 5 cm (9.3 by 6.4 by 2.0 in), with hundreds of vellum pages collected into 18 quires. The total number of pages is around 240, but the exact number depends on how the manuscript's unusual foldouts are counted. The quires have been numbered from 1 to 20 in various locations, using numerals consistent with the 1400s, and the top righthand corner of each recto (righthand) page has been numbered from 1 to 116, using numerals of a later date. From the various numbering gaps in the quires and pages, it seems likely that, in the past, the manuscript had at least 272 pages in 20 quires, some of which were already missing when Wilfrid Voynich acquired the manuscript in 1912. There is strong evidence that many of the book's bifolios were reordered at various points in its history, and that the original page order may well have been quite different from what it is today.
Parchment, covers, and binding
Samples from various parts of the manuscript were radiocarbon dated at the University of Arizona in 2009. The results were consistent for all samples tested and indicated a date for the parchment between 1404 and 1438. Protein testing in 2014 revealed that the parchment was made from calf skin, and multispectral analysis showed that it had not been written on before the manuscript was created (i.e., it is not a palimpsest). The parchment was created with care, but deficiencies exist and the quality is assessed as average, at best. The parchment is prepared from "at least fourteen or fifteen entire calfskins".
Some folios (e.g., 42 and 47) are thicker than the usual parchment.
The goat skin binding and covers are not original to the book, but date to its possession by the Collegio Romano. Insect holes are present on the first and last folios of the manuscript in the current order and suggest that a wooden cover was present before the later covers. Discolouring on the edges points to a tanned-leather inside cover.
Ink
Many pages contain substantial drawings or charts which are colored with paint. Based on modern analysis using polarized light microscopy (PLM), it has been determined that a quill pen and iron gall ink were used for the text and figure outlines. The ink of the drawings, text, and page and quire numbers have similar microscopic characteristics. In 2009, Energy-dispersive X-ray spectroscopy (EDS) revealed that the inks contained major amounts of carbon, iron, sulfur, potassium and calcium with trace amounts of copper and occasionally zinc. EDS did not show the presence of lead, while X-ray diffraction (XRD) identified potassium lead oxide, potassium hydrogen sulphate, and syngenite in one of the samples tested. The similarity between the drawing inks and text inks suggested a contemporaneous origin.
Paint
Colored paint was applied (somewhat crudely) to the ink outlined figures, possibly at a later date. The blue, white, red-brown, and green paints of the manuscript have been analyzed using PLM, XRD, EDS, and scanning electron microscopy (SEM).
The blue paint proved to be ground azurite with minor traces of the copper oxide cuprite.
The white paint is likely a mixture of eggwhite and calcium carbonate.
The green paint is tentatively characterized by copper and copper-chlorine resinate; the crystalline material might be atacamite or some other copper-chlorine compound.
Analysis of the red-brown paint indicated a red ochre with the crystal phases hematite and iron sulfide. Minor amounts of lead sulfide and palmierite are possibly present in the red-brown paint.
The pigments used were deemed inexpensive.
Retouching
Computer scientist Jorge Stolfi of the University of Campinas highlighted that parts of the text and drawings have been modified, using darker ink over a fainter, earlier script. Evidence for this is visible in various folios, for example f1r, f3v, f26v, f57v, f67r2, f71r, f72v1, f72v3 and f73r.
Text
Every page in the manuscript contains text, mostly in an unidentified language, but some have extraneous writing in Latin script. The bulk of the text in the 240-page manuscript is written in an unknown script, running left to right. Most of the characters are composed of one or two simple pen strokes. There exists some dispute as to whether certain characters are distinct, but a script of 20–25 characters would account for virtually all of the text; the exceptions are a few dozen rarer characters that occur only once or twice each. There is no obvious punctuation.
Much of the text is written in a single column in the body of a page, with a slightly ragged right margin and paragraph divisions and sometimes with stars in the left margin. Other text occurs in charts or as labels associated with illustrations. There are no indications of any errors or corrections made at any place in the document. The ductus flows smoothly, giving the impression that the symbols were not enciphered; there is no delay between characters, as would normally be expected in written encoded text.
Extraneous writing
Only a few of the words in the manuscript are thought to have not been written in the unknown script:
f1r: A sequence of Latin letters in the right margin parallel with characters from the unknown script, also the now-unreadable signature of "Jacobj à Tepenece" is found in the bottom margin.
f17r: A line of writing in the Latin script in the top margin.
f70v–f73v: The astrological series of diagrams in the astronomical section has the names of 10 of the months (from March to December) written in Latin script, with spelling suggestive of the medieval languages of France, northwest Italy, or the Iberian Peninsula.
f66r: A small number of words in the bottom left corner near a drawing of a nude man have been read as "der Mussteil", a High German phrase for "a widow's share".
f116v: Four lines written in rather distorted Latin script, except for two words in the unknown script. The words in Latin script appear to be distorted with characteristics of the unknown language. The lettering resembles European alphabets of the late 14th and 15th centuries, but the words do not seem to make sense in any language. Whether these bits of Latin script were part of the original text or were added later is not known.
Transcription
Various transcription alphabets have been created to equate Voynich characters with Latin characters to help with cryptanalysis, such as the Extensible (originally: European) Voynich Alphabet (EVA). The first major one was created by the "First Study Group", led by cryptographer William F. Friedman in the 1940s, where each line of the manuscript was transcribed to an IBM punch card to make it machine readable.
Statistical patterns
The text consists of over 170,000 characters, with spaces dividing the text into about 35,000 groups of varying length, usually referred to as "words" or "word tokens" (37,919); 8,114 of those words are considered unique "word types." The structure of these words seems to follow phonological or orthographic laws of some sort; for example, certain characters must appear in each word (like English vowels), some characters never follow others, or some may be doubled or tripled, but others may not. The distribution of letters within words is also rather peculiar: Some characters occur only at the beginning of a word, some only at the end (like Greek ς), and some always in the middle section.
Many researchers have commented upon the highly regular structure of the words. Professor Gonzalo Rubio, an expert in ancient languages at Pennsylvania State University, stated:
Stephan Vonfelt studied statistical properties of the distribution of letters and their correlations (properties which can be vaguely characterized as rhythmic resonance, alliteration, or assonance) and found that under that respect Voynichese is more similar to the Mandarin Chinese pinyin text of the Records of the Grand Historian than to the text of works from European languages, although the numerical differences between Voynichese and Mandarin Chinese pinyin look larger than those between Mandarin Chinese pinyin and European languages.
Practically no words have fewer than two letters or more than ten. Some words occur in only certain sections, or in only a few pages; others occur throughout the manuscript. Few repetitions occur among the thousand or so labels attached to the illustrations. There are instances where the same common word appears up to three times in a row (see Zipf's law). Words that differ by only one letter also repeat with unusual frequency, causing single-substitution alphabet decipherings to yield babble-like text. In 1962, cryptanalyst Elizebeth Friedman described such statistical analyses as "doomed to utter frustration".
In 2014, a team led by Diego Amancio of the University of São Paulo published a study using statistical methods to analyse the relationships of the words in the text. Instead of trying to find the meaning, Amancio's team looked for connections and clusters of words. By measuring the frequency and intermittence of words, Amancio claimed to identify the text's keywords and produced three-dimensional models of the text's structure and word frequencies. The team concluded that, in 90% of cases, the Voynich systems are similar to those of other known books, indicating that the text is in an actual language, not random gibberish.
Linguists Claire Bowern and Luke Lindemann have applied statistical methods to the Voynich manuscript, comparing it to other languages and encodings of languages, and have found both similarities and differences in statistical properties. Character sequences in languages are measured using a metric called h2, or second-order conditional entropy. Natural languages tend to have an h2 between 3 and 4, but Voynichese has much more predictable character sequences, and an h2 around 2. However, at higher levels of organization, the Voynich manuscript displays properties similar to those of natural languages. Based on this, Bowern dismisses theories that the manuscript is gibberish. It is likely to be an encoded natural language or a constructed language. Bowern also concludes that the statistical properties of the Voynich manuscript are not consistent with the use of a substitution cipher or polyalphabetic cipher.
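The h2 metric can be illustrated with a short sketch that estimates the conditional entropy of a character given the preceding character from bigram counts. This shows the general idea only; it is not the exact procedure, transcription, or corpus used by Bowern and Lindemann.
  # h2 = H(next character | previous character), estimated from bigram counts.
  import math
  from collections import Counter

  def h2(text):
      bigrams = Counter(zip(text, text[1:]))
      firsts = Counter(text[:-1])
      total = sum(bigrams.values())
      entropy = 0.0
      for (a, b), n in bigrams.items():
          p_ab = n / total             # joint probability of the pair
          p_b_given_a = n / firsts[a]  # probability of b given a
          entropy -= p_ab * math.log2(p_b_given_a)
      return entropy

  print(h2("daiin daiin qokeedy qokeedy shedy"))  # toy input only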
As noted in Bowern's review, multiple scribes or "hands" may have written the manuscript, possibly using two methods of encoding at least one natural language. The "language" Voynich A appears in the herbal and pharmaceutical parts of the manuscript. The "language" known as Voynich B appears in the balneological section, some parts of the medicinal and herbal sections, and the astrological section. The most common vocabulary items of Voynich A and Voynich B are substantially different. Topic modeling of the manuscript suggests that pages identified as written by a particular scribe may relate to a different topic.
In terms of morphology, if visual spaces in the manuscript are assumed to indicate word breaks, there are consistent patterns that suggest a three-part word structure of prefix, root or midfix, and suffix. Certain characters and character combinations are more likely to appear in particular fields. There are minor variations between Voynich A and Voynich B. The predictability of certain letters in a relatively small number of combinations in certain parts of words appears to explain the low entropy (h2) of Voynichese. In the absence of obvious punctuation, some variants of the same word appear to be specific to typographical positions, such as the beginning of a paragraph, line, or sentence.
The Voynich word frequencies of both variants appear to conform to a Zipfian distribution, supporting the idea that the text has linguistic meaning. This has implications for the encoding methods most likely to have been used, since some forms of encoding interfere with the Zipfian distribution. Measures of the proportional frequency of the ten most common words are similar to those of the Semitic, Iranian, and Germanic languages. Another measure of morphological complexity, the Moving-Average Type–Token Ratio (MATTR) index, is similar to Iranian, Germanic, and Romance languages.
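A minimal sketch of the MATTR index, as it is commonly defined (the average type-token ratio over a sliding window of fixed length), is given below; the window length and the sample tokens are illustrative choices, not those used in the cited study.
  # MATTR: average the type-token ratio over a sliding window of fixed length.
  def mattr(words, window=50):
      if len(words) < window:
          return len(set(words)) / len(words)
      ratios = [len(set(words[i:i + window])) / window
                for i in range(len(words) - window + 1)]
      return sum(ratios) / len(ratios)

  sample = "okaiin shedy qokeedy daiin shedy okaiin chedy qokaiin".split()
  print(mattr(sample, window=4))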
Illustrations
The illustrations are conventionally used to divide most of the manuscript into six different sections, since the text cannot be read. Each section is typified by illustrations with different styles and supposed subject matter except for the last section, in which the only drawings are small stars in the margin. The following are the sections and their conventional names:
Herbal, 112 folios: Each page displays one or two plants and a few paragraphs of text, a format typical of European herbals of the time. Some parts of these drawings are larger and cleaner copies of sketches seen in the "pharmaceutical" section. None of the plants depicted are unambiguously identifiable.
Astronomical, 21 folios: Contains circular diagrams suggestive of astronomy or astrology, some of them with suns, moons, and stars. One series of 12 diagrams depicts conventional symbols for the zodiacal constellations (two fish for Pisces, a bull for Taurus, a hunter with crossbow for Sagittarius, etc.). Each of these has 30 female figures arranged in two or more concentric bands. Most of the females are at least partly nude, and each holds what appears to be a labeled star or is shown with the star attached to either arm by what could be a tether or cord of some kind. The last two pages of this section were lost (Aquarius and Capricornus, roughly January and February), while Aries and Taurus are split into four paired diagrams with 15 women and 15 stars each. Some of these diagrams are on fold-out pages.
Balneological, 20 folios: A dense, continuous text interspersed with drawings, mostly showing small nude women, some wearing crowns, bathing in pools or tubs connected by an elaborate network of pipes. The bifolio consists of folios 78 (verso) and 81 (recto); it forms an integrated design, with water flowing from one folio to the other.
Cosmological, 13 folios: More circular diagrams, but they are of an obscure nature. This section also has foldouts; one of them spans six pages, commonly called the Rosettes folio, and contains a map or diagram with nine "islands" or "rosettes" connected by "causeways" and containing castles, as well as what might be a volcano.
Pharmaceutical, 34 folios: Many labeled drawings of isolated plant parts (roots, leaves, etc.), objects resembling apothecary jars, ranging in style from the mundane to the fantastical, and a few text paragraphs.
Recipes, 22 folios: Full pages of text broken into many short paragraphs, each marked with a star in the left margin.
Five folios contain only text, and at least 28 folios are missing from the manuscript.
Purpose
The overall impression given by the surviving leaves of the manuscript is that it was meant to serve as a pharmacopoeia or to address topics in medieval or early modern medicine. However, the puzzling details of the illustrations have fueled many theories about the book's origin, the contents of its text, and the purpose for which it was intended.
The first section of the book is almost certainly herbal, but attempts have failed to identify the plants, either with actual specimens or with the stylized drawings of contemporaneous herbals. Only a few of the plant drawings can be identified with reasonable certainty, such as a wild pansy and the maidenhair fern. The herbal pictures that match pharmacological sketches appear to be clean copies of them, except that missing parts were completed with improbable-looking details. In fact, many of the plant drawings in the herbal section seem to be composite: the roots of one species have been fastened to the leaves of another, with flowers from a third.
The basins and tubes in the balneological section are sometimes interpreted as implying a connection to alchemy, yet they bear little obvious resemblance to the alchemical equipment of the period.
Astrological considerations frequently played a prominent role in herb gathering, bloodletting, and other medical procedures common during the likeliest dates of the manuscript. However, interpretation remains speculative, apart from the obvious Zodiac symbols and one diagram possibly showing the classical planets.
History
Much of the early history of the book is unknown, though the text and illustrations are all characteristically European. In 2009, University of Arizona researchers radiocarbon dated the manuscript's vellum to between 1404 and 1438. In addition, McCrone Associates in Westmont, Illinois found that the paints in the manuscript were of materials to be expected from that period of European history. There have been erroneous reports that McCrone Associates indicated that much of the ink was added not long after the creation of the parchment, but their official report contains no statement of this.
The first confirmed owner was Georg Baresch, a 17th-century alchemist from Prague. Baresch was apparently puzzled about this "Sphynx" that had been "taking up space uselessly in his library" for many years. He learned that Jesuit scholar Athanasius Kircher from the Collegio Romano had published a Coptic (Egyptian) dictionary and claimed to have deciphered the Egyptian hieroglyphs; Baresch twice sent a sample copy of the script to Kircher in Rome, asking for clues. The 1639 letter from Baresch to Kircher is the earliest known mention of the manuscript to have been confirmed.
Whether Kircher answered the request or not is not known, but he was apparently interested enough to try to acquire the book, which Baresch refused to yield. Upon Baresch's death, the manuscript passed to his friend Jan Marek Marci (also known as Johannes Marcus Marci), then rector of Charles University in Prague. A few years later, Marci sent the book to Kircher, his longtime friend and correspondent.
Marci also sent Kircher a cover letter (in Latin, dated 19 August 1665 or 1666) that was still attached to the book when Voynich acquired it:
The "Dr. Raphael" is believed to be Raphael Sobiehrd-Mnishovsky, and the sum would be about 2 kg of gold.
While Wilfrid Voynich took Raphael's claim at face value, the Bacon authorship theory has been largely discredited. However, a piece of evidence supporting Rudolph's ownership is the now almost invisible name or signature, on the first page of the book, of Jacobus Horcicky de Tepenecz, the head of Rudolph's botanical gardens in Prague. Rudolph died still owing money to de Tepenecz, and it is possible that de Tepenecz may have been given the book (or simply taken it) in partial payment of that debt.
No records of the book for the next 200 years have been found, but in all likelihood, it was stored with the rest of Kircher's correspondence in the library of the Collegio Romano (now the Pontifical Gregorian University). It probably remained there until the troops of Victor Emmanuel II of Italy captured the city in 1870 and annexed the Papal States. The new Italian government decided to confiscate many properties of the Church, including the library of the Collegio. Many books of the university's library were hastily transferred to the personal libraries of its faculty just before this happened, according to investigations by Xavier Ceccaldi and others, and those books were exempt from confiscation. Kircher's correspondence was among those books, and so, apparently, was the Voynich manuscript, as it still bears the ex libris of Petrus Beckx, head of the Jesuit order and the university's rector at the time.
Beckx's private library was moved to the Villa Mondragone, Frascati, a large country palace near Rome that had been bought by the Society of Jesus in 1866 and housed the headquarters of the Jesuits' Ghislieri College.
In 1903, the Society of Jesus (Collegio Romano) was short of money and decided to sell some of its holdings discreetly to the Vatican Library. The sale took place in 1912, but not all of the manuscripts listed for sale ended up going to the Vatican. Wilfrid Voynich acquired 30 of these manuscripts, among them the one which now bears his name. He spent the next seven years attempting to interest scholars in deciphering the script, while he worked to determine the origins of the manuscript.
In 1930, the manuscript was inherited after Wilfrid's death by his widow Ethel Voynich, author of the novel The Gadfly and daughter of mathematician George Boole. She died in 1960 and left the manuscript to her close friend Anne Nill. In 1961, Nill sold the book to antique book dealer Hans P. Kraus. Kraus was unable to find a buyer and donated the manuscript to Yale University in 1969, where it was catalogued as "MS 408", sometimes also referred to as "Beinecke MS 408".
Timeline of ownership
The timeline of ownership of the Voynich manuscript is given below. The time when it was possibly created is shown in green (early 1400s), based on carbon dating of the vellum. Periods of unknown ownership are indicated in white. The commonly accepted owners of the 17th century are shown in orange; the long period of storage in the Collegio Romano is yellow. The location where Wilfrid Voynich allegedly acquired the manuscript (Frascati) is shown in green (late 1800s); Voynich's ownership is shown in red, and modern owners are highlighted blue.
Authorship hypotheses
Many people have been proposed as possible authors of the Voynich manuscript, among them Roger Bacon, John Dee or Edward Kelley, Giovanni Fontana, and Voynich.
Early history
Marci's 1665/1666 cover letter to Kircher says that, according to his friend the late Raphael Mnishovsky, the book had once been bought by Rudolf II, Holy Roman Emperor and King of Bohemia for 600 ducats (66.42 troy ounces actual gold weight, or 2.07 kg). (Mnishovsky had died in 1644, more than 20 years earlier, and the deal must have occurred before Rudolf's abdication in 1611, at least 55 years before Marci's letter. However, Karl Widemann sold books to Rudolf II in March 1599.)
According to the letter, Mnishovsky (but not necessarily Rudolf) speculated that the author was 13th-century Franciscan friar and polymath Roger Bacon. Marci said that he was suspending judgment about this claim, but it was taken quite seriously by Wilfrid Voynich, who did his best to confirm it. Voynich contemplated the possibility that the author was Albertus Magnus if not Roger Bacon.
The assumption that Bacon was the author led Voynich to conclude that John Dee sold the manuscript to Rudolf. Dee was a mathematician and astrologer at the court of Queen Elizabeth I of England who was known to have owned a large collection of Bacon's manuscripts.
Dee and his scrier (spirit medium) Edward Kelley lived in Bohemia for several years, where they had hoped to sell their services to the emperor. However, this sale seems quite unlikely, according to John Schuster, because Dee's meticulously kept diaries do not mention it.
If Bacon did not create the Voynich manuscript, a supposed connection to Dee is much weakened. It was thought possible, prior to the carbon dating of the manuscript, that Dee or Kelley might have written it and spread the rumor that it was originally a work of Bacon's in the hopes of later selling it.
Fabrication by Voynich
Some suspect Voynich of having fabricated the manuscript himself. As an antique book dealer, he probably had the necessary knowledge and means, and a lost book by Roger Bacon would have been worth a fortune. Furthermore, Baresch's letter and Marci's letter only establish the existence of a manuscript, not that the Voynich manuscript is the same one mentioned. These letters could possibly have been the motivation for Voynich to fabricate the manuscript, assuming that he was aware of them. However, many consider the expert internal dating of the manuscript and the June 1999 discovery of Baresch's letter to Kircher as having eliminated this possibility.
Eamon Duffy says that the radiocarbon dating of the parchment (or, more accurately, vellum) "effectively rules out any possibility that the manuscript is a post-medieval forgery", as the consistency of the pages indicates origin from a single source, and "it is inconceivable" that a quantity of unused parchment comprising "at least fourteen or fifteen entire calfskins" could have survived from the early 15th century.
Giovanni Fontana
It has been suggested that some illustrations in the books of an Italian engineer, Giovanni Fontana, slightly resemble Voynich illustrations. Fontana was familiar with cryptography and used it in his books, although he did not use the Voynich script but a simple substitution cipher. In the book Secretum de thesauro experimentorum ymaginationis hominum (Secret of the treasure-room of experiments in man's imagination), written c. 1430, Fontana described mnemonic machines, written in his cypher. That book and his Bellicorum instrumentorum liber both used a cryptographic system, described as a simple, rational cipher, based on signs without letters or numbers.
Other theories
Sometime before 1921, Voynich was able to read a name faintly written at the foot of the manuscript's first page: "Jacobj à Tepenece". This is taken to be a reference to Jakub Hořčický of Tepenec, also known by his Latin name Jacobus Sinapius. Rudolph II had ennobled him in 1607, had appointed him his Imperial Distiller, and had made him curator of his botanical gardens as well as one of his personal physicians. Voynich (and many other people after him) concluded that Jacobus owned the Voynich manuscript prior to Baresch, and he drew a link from that to Rudolf's court, in confirmation of Mnishovsky's story.
Jacobus's name has faded further since Voynich saw it, but is still legible under ultraviolet light. It does not match the copy of his signature in a document located by Jan Hurych in 2003. As a result, it has been suggested that the signature was added later, possibly even fraudulently by Voynich himself.
Baresch's letter bears some resemblance to a hoax that orientalist Andreas Mueller once played on Athanasius Kircher. Mueller sent some unintelligible text to Kircher with a note explaining that it had come from Egypt, and asking him for a translation. Kircher reportedly solved it. It has been speculated that these were both cryptographic tricks played on Kircher to make him look foolish.
Raphael Mnishovsky, the friend of Marci who was the reputed source of the Bacon story, was himself a cryptographer and apparently invented a cipher which he claimed was uncrackable (c. 1618). This has led to the speculation that Mnishovsky might have produced the Voynich manuscript as a practical demonstration of his cipher and made Baresch his unwitting test subject. Indeed, the disclaimer in the Voynich manuscript cover letter could mean that Marci suspected some kind of deception.
In his 2006 book, Nick Pelling proposed that the Voynich manuscript was written by 15th-century North Italian architect Antonio Averlino (also known as "Filarete"), a theory broadly consistent with the radiocarbon dating.
Language hypotheses
Many hypotheses have been developed about the Voynich manuscript's "language", called Voynichese:
Ciphers
According to the "letter-based cipher" theory, the Voynich manuscript contains a meaningful text in some European language that was intentionally rendered obscure by mapping it to the Voynich manuscript "alphabet" through a cipher of some sort—an algorithm that operated on individual letters. This was the working hypothesis for most 20th-century deciphering attempts, including an informal team of NSA cryptographers led by William F. Friedman in the early 1950s.
The main argument for this theory is that it is difficult to explain a European author using a strange alphabet, except as an attempt to hide information. Indeed, even Roger Bacon knew about ciphers, and the estimated date for the manuscript roughly coincides with the birth of cryptography in Europe as a relatively systematic discipline.
The counterargument is that almost all cipher systems consistent with that era fail to match what is seen in the Voynich manuscript. For example, simple substitution ciphers would be excluded because the distribution of letter frequencies does not resemble that of any known language, while the small number of different letter shapes used implies that nomenclator and homophonic ciphers should be ruled out, because these typically employ larger cipher alphabets. Polyalphabetic ciphers were invented by Alberti in the 1460s and included the later Vigenère cipher, but they usually yield ciphertexts where all cipher shapes occur with roughly equal probability, quite unlike the language-like letter distribution which the Voynich manuscript appears to have.
However, the presence of many tightly grouped shapes in the Voynich manuscript (such as "or", "ar", "ol", "al", "an", "ain", "aiin", "air", "aiir", "am", "ee", "eee", among others) does suggest that its cipher system may make use of a "verbose cipher", where single letters in a plaintext get enciphered into groups of fake letters. For example, the first two lines of page f15v contain closely repeated groups of this kind, which strongly resemble how Roman numerals such as "CCC" or "XXXX" would look if verbosely enciphered.
It is possible that the text was encrypted by starting from a fundamentally simple cipher, then augmenting it by adding nulls (meaningless symbols), homophones (duplicate symbols), a transposition cipher (letter rearrangement), false word breaks, etc.
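To illustrate what a verbose cipher of the kind described above would look like, the following sketch maps each plaintext letter to a group of symbols, which inflates the text while reducing the variety of shapes per unit of meaning. The mapping table is invented purely for illustration and is not a proposed solution to the manuscript.
  # Each plaintext letter becomes a group of symbols; repeated plaintext
  # letters produce the tightly repeated groups discussed above.
  TABLE = {"a": "or", "b": "ar", "c": "ol", "d": "aiin",
           "e": "ee", "i": "ain", "l": "al", "n": "am"}

  def verbose_encipher(plaintext):
      return " ".join(TABLE[ch] for ch in plaintext if ch in TABLE)

  print(verbose_encipher("cabal"))  # -> "ol or ar or al"
  print(verbose_encipher("ccc"))    # -> "ol ol ol", like a Roman numeral "CCC"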
Codes
According to the "codebook cipher" theory, the Voynich manuscript "words" would actually be codes to be looked up in a "dictionary" or codebook. The main evidence for this theory is that the internal structure and length distribution of many words are similar to those of Roman numerals, which at the time would be a natural choice for the codes. However, book-based ciphers would be viable for only short messages, because they are very cumbersome to write and to read.
Shorthand
In 1943, Joseph Martin Feely claimed that the manuscript was a scientific diary written in shorthand. According to D'Imperio, this was "Latin, but in a system of abbreviated forms not considered acceptable by other scholars, who unanimously rejected his readings of the text".
Steganography
This theory holds that the text of the Voynich manuscript is mostly meaningless, but contains meaningful information hidden in inconspicuous details—e.g., the second letter of every word, or the number of letters in each line. This technique, called steganography, is very old and was described by Johannes Trithemius in 1499. Though the plain text was speculated to have been extracted by a Cardan grille (an overlay with cut-outs for the meaningful text) of some sort, this seems somewhat unlikely because the words and letters are not arranged on anything like a regular grid. Still, steganographic claims are hard to prove or disprove, because stegotexts can be arbitrarily hard to find.
It has been suggested that the meaningful text could be encoded in the length or shape of certain pen strokes. There are indeed examples of steganography from about that time that use letter shape (italic vs. upright) to hide information. However, when examined at high magnification, the Voynich manuscript pen strokes seem quite natural, and substantially affected by the uneven surface of the vellum.
Natural language
Statistical analysis of the text reveals patterns similar to those of natural languages. For instance, the word entropy (about 10 bits per word) is similar to that of English or Latin texts. Amancio et al. (2013) argued that the Voynich manuscript "is mostly compatible with natural languages and incompatible with random texts."
The linguist Jacques Guy once suggested that the Voynich manuscript text could be some little-known natural language, written plaintext with an invented alphabet. He suggested Chinese in jest, but later comparison of word length statistics with Vietnamese and Chinese made him view that hypothesis seriously. In many language families of East and Central Asia, mainly Sino-Tibetan (Chinese, Tibetan, and Burmese), Austroasiatic (Vietnamese, Khmer, etc.) and possibly Tai (Thai, Lao, etc.), morphemes generally have only one syllable; and syllables have a rather rich structure, including tonal patterns.
Child (1976), a linguist of Indo-European languages for the U.S. National Security Agency, proposed that the manuscript was written in a "hitherto unknown North Germanic dialect". He identified in the manuscript a "skeletal syntax several elements of which are reminiscent of certain Germanic languages", while the content is expressed using "a great deal of obscurity."
In February 2014, Professor Stephen Bax of the University of Bedfordshire made public his research into using "bottom up" methodology to understand the manuscript. His method involved looking for and translating proper nouns, in association with relevant illustrations, in the context of other languages of the same time period. A paper he posted online offers tentative translation of 14 characters and 10 words.
He suggested the text is a treatise on nature written in a natural language, rather than a code.
Tucker & Talbert (2014) published a paper claiming a positive identification of 37 plants, 6 animals, and one mineral referenced in the manuscript to plant drawings in the Libellus de Medicinalibus Indorum Herbis or Badianus manuscript, a fifteenth-century Aztec herbal. Together with the presence of atacamite in the paint, they argue that the plants were from colonial New Spain and the text represented Nahuatl, the language of the Aztecs. They date the manuscript to between 1521 (the date of the Spanish conquest of the Aztec Empire) and circa 1576. These dates contradict the earlier radiocarbon date of the vellum and other elements of the manuscript. However, they argued that the vellum could have been stored and used at a later date. The analysis has been criticized by other Voynich manuscript researchers, who argued that a skilled forger could construct plants that coincidentally have a passing resemblance to theretofore undiscovered existing plants.
Constructed language
The peculiar internal structure of Voynich manuscript words led William F. Friedman to conjecture that the text could be a constructed language. In 1950, Friedman asked the British army officer John Tiltman to analyze a few pages of the text, but Tiltman did not share this conclusion. In a paper in 1967, Brigadier Tiltman said:
The concept of a constructed language is quite old, as attested by John Wilkins's Philosophical Language (1668), but still postdates the generally accepted origin of the Voynich manuscript by two centuries. In most known examples, categories are subdivided by adding suffixes (fusional languages); as a consequence, a text in a particular subject would have many words with similar prefixes—for example, all plant names would begin with similar letters, and likewise for all diseases, etc. This feature could then explain the repetitious nature of the Voynich text. However, no one has been able yet to assign a plausible meaning to any prefix or suffix in the Voynich manuscript.
Hoax
The fact that the manuscript has defied decipherment thus far has led various scholars to propose that the text does not contain meaningful content in the first place, implying that it may be a hoax.
In 2003, computer scientist Gordon Rugg showed that text with characteristics similar to the Voynich manuscript could have been produced using a table of word prefixes, stems, and suffixes, which would have been selected and combined by means of a perforated paper overlay. The latter device, known as a Cardan grille, was invented around 1550 as an encryption tool, more than 100 years after the estimated creation date of the Voynich manuscript. Some maintain that the similarity between the pseudo-texts generated in Gordon Rugg's experiments and the Voynich manuscript is superficial, and the grille method could be used to emulate any language to a certain degree.
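The general idea behind such table-based generation can be sketched as follows: Voynich-like "words" are mass-produced by combining entries from prefix, stem, and suffix tables. The tables and the random selection below are invented for illustration and do not reproduce Rugg's actual grille or tables.
  # Combine randomly chosen table entries into Voynich-like "words".
  import random

  PREFIXES = ["qo", "o", "ch", "sh", ""]
  STEMS = ["ked", "ted", "ol", "ain", "eed"]
  SUFFIXES = ["y", "dy", "aiin", ""]

  def pseudo_text(n_words, seed=0):
      rng = random.Random(seed)
      return " ".join(rng.choice(PREFIXES) + rng.choice(STEMS) + rng.choice(SUFFIXES)
                      for _ in range(n_words))

  print(pseudo_text(12))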
In April 2007, a study by Austrian researcher Andreas Schinner published in Cryptologia supported the hoax hypothesis. Schinner showed that the statistical properties of the manuscript's text were more consistent with meaningless gibberish produced using a quasi-stochastic method, such as the one described by Rugg, than with Latin and medieval German texts.
Some scholars have claimed that the manuscript's text appears too sophisticated to be a hoax. In 2013, Marcelo Montemurro, a theoretical physicist from the University of Manchester, published findings claiming that semantic networks exist in the text of the manuscript, such as content-bearing words occurring in a clustered pattern, or new words being used when there was a shift in topic. With this evidence, he believes it unlikely that these features were intentionally "incorporated" into the text to make a hoax more realistic, as most of the required academic knowledge of these structures did not exist at the time the Voynich manuscript would have been written.
In September 2016, Gordon Rugg and Gavin Taylor addressed these objections in another article in Cryptologia, and illustrated a simple hoax method that they claim could have caused the mathematical properties of the text.
In 2019, Torsten Timm and Andreas Schinner published an algorithm that matches the statistical characteristics of the Voynich manuscript, and could have been used by a Medieval author to generate meaningless text.
Glossolalia
In their 2004 book, Gerry Kennedy and Rob Churchill suggest the possibility that the Voynich manuscript may be a case of glossolalia (speaking-in-tongues), channeling, or outsider art. If so, the author felt compelled to write large amounts of text in a manner which resembles stream of consciousness, either because of voices heard or because of an urge. This often takes place in an invented language in glossolalia, usually made up of fragments of the author's own language, although invented scripts for this purpose are rare.
Kennedy and Churchill use Hildegard von Bingen's works to point out similarities between the Voynich manuscript and the illustrations that she drew when she was suffering from severe bouts of migraine, which can induce a trance-like state prone to glossolalia. Prominent features found in both are abundant "streams of stars", and the repetitive nature of the "nymphs" in the balneological section. This theory has been found unlikely by other researchers.
The theory is virtually impossible to prove or disprove, short of deciphering the text. Kennedy and Churchill are themselves not convinced of the hypothesis, but consider it plausible. In the culminating chapter of their work, Kennedy states his belief that it is a hoax or forgery. Churchill acknowledges that the preeminent theories are that the manuscript is either a synthetic forgotten language (as advanced by Friedman) or a forgery. However, he concludes that, if the manuscript is a genuine creation, mental illness or delusion seems to have affected the author.
Decipherment claims
Since the manuscript's modern rediscovery in 1912, there have been a number of claimed decipherings.
William Romaine Newbold
One of the earliest efforts to unlock the book's secrets (and the first of many premature claims of decipherment) was made in 1921 by William Romaine Newbold of the University of Pennsylvania. His singular hypothesis held that the visible text is meaningless, but that each apparent "letter" is in fact constructed of a series of tiny markings discernible only under magnification. These markings were supposed to be based on ancient Greek shorthand, forming a second level of script that held the real content of the writing. Newbold claimed to have used this knowledge to work out entire paragraphs proving the authorship of Bacon and recording his use of a compound microscope four hundred years before van Leeuwenhoek. A circular drawing in the astronomical section depicts an irregularly shaped object with four curved arms, which Newbold interpreted as a picture of a galaxy, which could be obtained only with a telescope. Similarly, he interpreted other drawings as cells seen through a microscope.
However, Newbold's analysis has since been dismissed as overly speculative after John Matthews Manly of the University of Chicago pointed out serious flaws in his theory. Each shorthand character was assumed to have multiple interpretations, with no reliable way to determine which was intended for any given case. Newbold's method also required rearranging letters at will until intelligible Latin was produced. These factors alone give the system enough flexibility that nearly anything at all could be discerned from the microscopic markings. Although evidence of micrography using the Hebrew language can be traced as far back as the ninth century, it is nowhere near as compact or complex as the shapes Newbold made out. Close study of the manuscript revealed the markings to be artefacts caused by the way ink cracks as it dries on rough vellum. Perceiving significance in these artefacts can be attributed to pareidolia. Thanks to Manly's thorough refutation, the micrography theory is now generally disregarded.
Joseph Martin Feely
In 1943, Joseph Martin Feely published Roger Bacon's Cipher: The Right Key Found, in which he claimed that the book was a scientific diary written by Roger Bacon. Feely's method posited that the text was a highly abbreviated medieval Latin written in a simple substitution cipher.
Leonell C. Strong
Leonell C. Strong, a cancer research scientist and amateur cryptographer, believed that the solution to the Voynich manuscript was a "peculiar double system of arithmetical progressions of a multiple alphabet". Strong claimed that the plaintext revealed the Voynich manuscript to be written by the 16th-century English author Anthony Ascham, whose works include A Little Herbal, published in 1550. Notes released after his death reveal that the last stages of his analysis, in which he selected words to combine into phrases, were questionably subjective.
Robert S. Brumbaugh
In 1978, Robert Brumbaugh, a professor of classical and medieval philosophy at Yale University, claimed that the manuscript was a forgery intended to fool Emperor Rudolf II into purchasing it, and that the text is Latin enciphered with a complex, two-step method.
John Stojko
In 1978, John Stojko published Letters to God's Eye, in which he claimed that the Voynich Manuscript was a series of letters written in vowelless Ukrainian. The theory caused some sensation among the Ukrainian diaspora at the time, and then in independent Ukraine after 1991. However, the date Stojko gives for the letters, the lack of relation between the text and the images, and the general looseness in the method of decryption have all been criticised.
Stephen Bax
In 2014, applied linguistics Professor Stephen Bax self-published a paper claiming to have translated ten words from the manuscript using techniques similar to those used to successfully translate Egyptian hieroglyphs. He claimed the manuscript to be a treatise on nature, in a Near Eastern or Asian language, but no full translation was made before Bax's death in 2017.
Nicholas Gibbs
In September 2017, television writer Nicholas Gibbs claimed to have decoded the manuscript as idiosyncratically abbreviated Latin. He declared the manuscript to be a mostly plagiarized guide to women's health.
Scholars judged Gibbs' hypothesis to be trite. His work was criticized as patching together already-existing scholarship with a highly speculative and incorrect translation; Lisa Fagin Davis, director of the Medieval Academy of America, stated that Gibbs' decipherment "doesn't result in Latin that makes sense."
Greg Kondrak
Greg Kondrak, a professor of natural language processing at the University of Alberta, and his graduate student Bradley Hauer used computational linguistics in an attempt to decode the manuscript. Their findings were presented at the Annual Meeting of the Association for Computational Linguistics in 2017, in the form of an article suggesting that the language of the manuscript is most likely Hebrew, but encoded using alphagrams, i.e. alphabetically ordered anagrams. However, the team admitted that experts in medieval manuscripts who reviewed the work were not convinced.
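An alphagram in this sense is a word with its letters rearranged into alphabetical order, so that all anagrams of a word collapse to the same form. A minimal Python sketch (the function name is just for illustration):

def alphagram(word: str) -> str:
    # Sort the letters alphabetically, ignoring case.
    return "".join(sorted(word.lower()))

print(alphagram("Voynich"))  # -> chinovy

Under the hypothesis described above, each word of the manuscript would be the alphagram of a Hebrew word, and decoding would amount to recovering the original letter order.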
Ahmet Ardıç
In 2018, Ahmet Ardıç, an electrical engineer with an interest in Turkic languages, claimed in a YouTube video that the Voynich script is a kind of Old Turkic written in a 'poetic' style. The text would then be written using 'phonemic orthography', meaning the author spelled out words as they heard them. Ardıç claimed to have deciphered and translated over 30% of the manuscript. His submission to the journal Digital Philology was rejected in 2019.
Gerard Cheshire
In 2019, Cheshire, a biology research assistant at the University of Bristol, made headlines for his theory that the manuscript was written in a "calligraphic proto-Romance" language. He claimed to have deciphered the manuscript in two weeks using a combination of "lateral thinking and ingenuity." Cheshire has suggested that the manuscript is "a compendium of information on herbal remedies, therapeutic bathing, and astrological readings", that it contains numerous descriptions of medicinal plants and passages that focus on female physical and mental health, reproduction, and parenting; and that the manuscript is the only known text written in proto-Romance. He further claimed: "The manuscript was compiled by Dominican nuns as a source of reference for Maria of Castile, Queen of Aragon."
Cheshire claims that the fold-out illustration on page 158 depicts a volcano, and theorizes that it places the manuscript's creators near the island of Vulcano which was an active volcano during the 15th century.
However, experts in medieval documents disputed this interpretation vigorously, with the executive director of the Medieval Academy of America, Lisa Fagin Davis, denouncing the paper as "just more aspirational, circular, self-fulfilling nonsense" and giving a fuller explanation of her objections when approached for comment by Ars Technica.
The University of Bristol subsequently removed a reference to Cheshire's claims from its website, referring, in a statement, to concerns about the validity of the research and stating: "This research was entirely the author's own work and is not affiliated with the University of Bristol, the School of Arts nor the Centre for Medieval Studies".
Facsimiles
Many books and articles have been written about the manuscript. Copies of the manuscript pages were made by alchemist Georgius Barschius (the Latinized form of the name of Georg Baresch; cf. the second paragraph under "History" above) in 1637 and sent to Athanasius Kircher, and later by Wilfrid Voynich.
In 2004, the Beinecke Rare Book and Manuscript Library made high-resolution digital scans publicly available online, and several printed facsimiles appeared. In 2016, the Beinecke Library and Yale University Press co-published a facsimile, The Voynich Manuscript, with scholarly essays.
The Beinecke Library also authorized the production of a print run of 898 replicas by the Spanish publisher Siloé in 2017.
Cultural influence
The manuscript has also inspired several works of fiction, including the following:
Between 1976 and 1978, Italian artist Luigi Serafini created the Codex Seraphinianus containing false writing and pictures of imaginary plants in a style reminiscent of the Voynich manuscript.
Contemporary classical composer Hanspeter Kyburz's 1995 chamber work The Voynich Cipher Manuscript, for chorus & ensemble is inspired by the manuscript.
The xkcd comic published on 5 June 2009 references the manuscript, and implies that it was a 15th-century role-playing game manual.
In 2015, the New Haven Symphony Orchestra commissioned Hannah Lash to compose a symphony inspired by the manuscript.
The novel Solenoid (2015), by Romanian writer Mircea Cartarescu uses the manuscript as literary device in one of its important themes.
For the 500th strip of the webcomic Sandra and Woo, entitled The Book of Woo and published on 29 July 2013, writer Oliver Knörzer and artist Puri Andini created four illustrated pages inspired by the Voynich manuscript. All four pages show strange illustrations next to a cipher text. The strip was mentioned in MTV Geek and discussed in the Cipher Mysteries blog of cryptology expert Nick Pelling as well as Klausis Krypto Kolumne of cryptology expert Klaus Schmeh. The Book of Woo was also discussed on several pages of Craig P. Bauer’s book Unsolved! about the history of famous ciphers. As part of the lead-up to the 1000th strip, Knörzer posted the original English text on 28 June 2018. The crucial obfuscation step was the translation of the English plain text into the constructed language Toki Pona by Matthew Martin.
See also
References
Citations
Bibliography
; (Books Express Publishing, 2011, )
Further reading
External links
— navigating through high-resolution scans
Analyst websites
News and documentaries
news – summary of Gordon Rugg's paper directed towards a more general audience
History of cryptography
Scientific illuminated manuscripts
Undeciphered historical codes and ciphers
Manuscripts written in undeciphered writing systems
15th-century manuscripts
Works of unknown authorship
Yale University Library |
46129 | https://en.wikipedia.org/wiki/ITV%20Digital | ITV Digital | ITV Digital was a British digital terrestrial television broadcaster which launched a pay-TV service on the world's first digital terrestrial television network. Its main shareholders were Carlton Communications plc and Granada plc, owners of two franchises of the ITV network. Starting as ONdigital in 1998, the service was re-branded as ITV Digital in July 2001.
Low audience figures, piracy issues and an ultimately unaffordable multi-million pound deal with the Football League led to the broadcaster suffering massive losses, forcing it to enter administration in March 2002. Pay television services ceased permanently on 1 May that year, and the remaining free-to-air channels such as BBC One and Channel 4 ceased when the company was liquidated in October. The terrestrial multiplexes were subsequently taken over by Crown Castle and the BBC to create Freeview later that month.
History
On 31 January 1997, Carlton Television, Granada Television and satellite company British Sky Broadcasting (BSkyB), together created British Digital Broadcasting (BDB) as a joint venture and applied to operate three digital terrestrial television (DTT) licences. They faced competition from a rival, Digital Television Network (DTN), a company created by cable operator CableTel (later known as NTL). On 25 June 1997, BDB won the auction and the Independent Television Commission (ITC) awarded the sole broadcast licence for DTT to the consortium. Then on 20 December 1997, the ITC awarded three pay-TV digital multiplex licences to BDB.
That same year, however, the ITC forced BSkyB out of the consortium on competition grounds; this effectively placed Sky in direct competition with the new service as Sky would also launch its digital satellite service in 1998, although Sky was still required to provide key channels such as Sky Movies and Sky Sports to ONdigital. With Sky part of the consortium, ONdigital would have paid discounted rates to carry Sky's television channels. Instead, with its positioning as a competitor, Sky charged the full market rates for the channels, at an extra cost of around £60 million a year to ONdigital. On 28 July 1998, BDB announced the service would be called ONdigital, and claimed it would be the biggest television brand launch in history. The company would be based in Marco Polo House, now demolished, in Battersea, south London, which was previously the home of BSkyB's earlier rival, British Satellite Broadcasting (BSB).
Six multiplexes were set up, with three of them allocated to the existing analogue broadcasters. The other three multiplexes were auctioned off. ONdigital was given one year from the award of the licence to launch the first DTT service. In addition to launching audio and video services, it also led the specification of an industry-wide advanced interactive engine, based on MHEG-5. This was an open standard that was used by all broadcasters on DTT.
The launch
ONdigital was officially launched on 15 November 1998 amid a large public ceremony featuring celebrity Ulrika Jonsson and fireworks around the Crystal Palace transmitting station. Its competitor Sky Digital had already debuted on 1 October. The service launched with 12 primary channels, which included the new BBC Choice and ITV2 channels; a subscription package featuring channels such as Sky One, Cartoon Network, E4, UKTV channels and many developed in-house by Carlton and Granada such as Carlton World; premium channels including Sky Sports 1, 2, 3, Sky Premier and Sky MovieMax; and the newly launched FilmFour.
From the beginning, however, the service was quickly losing money. Supply problems with set-top boxes meant that the company missed Christmas sales. Meanwhile, aggressive marketing by BSkyB for Sky Digital made the ONdigital offer look unattractive. The new digital satellite service provided a dish, digibox, installation and around 200 channels for £159, a lower price than ONdigital at £199. ONdigital's subscription pricing had been set to compare with the older Sky analogue service of 20 channels. In 1999, digital cable services were launched by NTL, Telewest and Cable & Wireless.
In February 1999, ITV secured the rights for UEFA Champions League football matches for four years, which would partly be broadcast through ONdigital and two new sports channels on the platform, Champions ON 28 and Champions ON 99 (later renamed ONsport 1 and ONsport 2 when it secured the rights to ATP tennis games), the latter of which timeshared with Carlton Cinema. Throughout 1999, channels including MTV and British Eurosport launched on the platform. The exclusive Carlton Kids and Carlton World channels closed in 2000 to make way for two Discovery channels.
ONdigital reported in April 1999 that it had 110,000 subscribers. Sky Digital, however, had over 350,000 by this time. By March 2000, there were 673,000 ONdigital customers.
The first interactive digital service was launched in mid-1999, called ONgames. On 7 March 2000, ONmail was launched which provided an interactive e-mail service. A deal with multiplex operator SDN led to the launch of pay-per-view service ONrequest on 1 May 2000. In June 2000, ONoffer was launched. On 18 September 2000, the internet TV service ONnet was launched.
On 17 June 2000, ONdigital agreed to a major £315 million three-year deal with the Football League to broadcast 88 live Nationwide League and Worthington Cup matches from the 2001–02 season.
Problems
ONdigital's growth slowed throughout 2000, and by the start of 2001 the number of subscribers stopped increasing; meanwhile, its competitor Sky Digital was still growing. The ONdigital management team responded with a series of free set top box promotions, initially at retailers such as Currys and Dixons, when ONdigital receiving equipment was purchased at the same time as a television set or similarly priced piece of equipment. These offers eventually became permanent, with the set-top box loaned to the customer at no charge for as long as they continued to subscribe to ONdigital, an offer that was matched by Sky. ONdigital's churn rate, a measure of the number of subscribers leaving the service, reached 28% during 2001.
Additional problems for ONdigital were the choice of 64QAM broadcast mode, which when coupled with far weaker than expected broadcast power, meant that the signal was weak in many areas; a complex pricing structure with many options; a poor-quality subscriber management system (adapted from Canal+); a paper magazine TV guide whereas BSkyB had an electronic programme guide (EPG); insufficient technical customer services; and much signal piracy. While there was a limited return path provided via an in-built 2400 baud modem, there was no requirement, as there was with BSkyB, to connect the set-top box's modem to a phone line.
Loaned equipment
Later problems occurred when ONdigital began to sell prepaid set-top boxes (under the name ONprepaid) from November 1999. This bundle sold in high street stores and supermarkets at a price that included – in theory – the set-top box on loan and the first year's subscription package. These prepaid boxes amounted to 50% of sales in December 1999. Thousands of these packages were also sold at well below retail price on auction sites such as the then-popular QXL. As the call to activate the viewing card did not require any bank details, many ONdigital boxes which were technically on loan were at unverifiable addresses. This was later changed so a customer could not walk away with a box without ONdigital verifying their address. Many customers did not activate the viewing card at all, although where the viewer's address was known, ONdigital would write informing them that they must activate before a certain deadline.
Piracy
The ONdigital pay-per-view channels were encrypted using a system – SECA MediaGuard – which was subsequently cracked. ONdigital did not update this system, so it was possible to produce and sell counterfeit subscription cards that gave access to all the channels. About 100,000 pirate cards were in circulation by 2002, and these played a role in the demise of the broadcaster that year.
Rebranding
In April 2001 it was said that ONdigital would be 'relaunched' to bring it closer to the ITV network and to better compete with Sky. On 11 July 2001 Carlton and Granada rebranded ONdigital as ITV Digital.
Other services were also rebranded, such as ONnet to ITV Active. A re-branding campaign was launched, with customers being sent ITV Digital stickers to place over the ONdigital logos on their remote controls and set top boxes. The software running on the receivers was not changed, however, and continued to display 'ON' on nearly every screen.
The rebrand was not without controversy, as SMG plc (owner of Scottish Television and Grampian Television), UTV and Channel Television all pointed out that the ITV brand did not belong solely to Carlton and Granada. SMG and UTV initially refused to carry the advertising campaign for ITV Digital and did not allow the ITV Sport Channel space on their multiplex, thus it was not available at launch in most of Scotland and Northern Ireland. The case was resolved in Scotland and the Channel Islands and later still in Northern Ireland, allowing ITV Sport to launch in the non-Carlton and Granada regions, although it was never made available in the Channel Islands, where there was no DTT or cable, and it never appeared on Sky Digital.
Later in 2001, ITV Sport Channel was announced. This would be a premium sport channel, and would broadcast English football games as per the company's deal with the Football League in 2000, as well as ATP tennis games and Champions League games previously covered by ONsport 1 and ONsport 2. The channel launched on 11 August of that year.
Downfall
The service reached 1 million subscribers by January 2001, whereas Sky Digital had 5.7 million. Granada reported £69 million in losses in the first six months of 2001, leading some investors to urge it to close or sell ONdigital/ITV Digital. ITV Digital was unable to make a deal to put the ITV Sport Channel on Sky, which could have given the channel access to millions of Sky customers and generated income; the channel was only licensed to cable company NTL. Subscriptions for ONnet/ITV Active, its internet service, peaked at around 100,000 customers. ITV Digital had a 12% share of digital subscribers as of December 2001. ITV Digital and Granada cut jobs that month. By 2002, the company was thought to be losing up to £1 million per day.
In February 2002, Carlton and Granada said that ITV Digital needed an urgent "fundamental restructuring". The biggest cost the company faced was its three-year deal with the Football League, which was already deemed too expensive by critics when agreed, as it was inferior to the top-flight Premiership coverage from Sky Sports. It was reported on 21 March 2002 that ITV Digital had proposed paying only £50 million for its remaining two years in the Football League deal, a reduction of £129m. Chiefs from the League said that any reduction in the payment could threaten the existence of many football clubs, which had budgeted for large incomes from the television contract.
Administration
On 27 March 2002, ITV Digital was placed in administration as it was unable to pay the full amount due to the Football League. Later, as chances of its survival remained bleak, the Football League sued Carlton and Granada, claiming that the firms had breached their contract in failing to deliver the guaranteed income. However, on 1 August the league lost the case, with the judge ruling that it had "failed to extract sufficient written guarantees". The league then filed a negligence claim against its own lawyers for failing to press for a written guarantee at the time of the deal with ITV Digital. From this, in June 2006, it was awarded a paltry £4 in damages out of the £150m it was seeking. The collapse put in doubt the government's ambition to switch off analogue terrestrial TV signals by 2010.
Despite several interested parties, the administrators were unable to find a buyer for the company and effectively put it into liquidation on 26 April 2002. Most subscription channels stopped broadcasting on ITV Digital on 1 May 2002 at 7 am, with only free-to-air services continuing. The next day, ITV chief executive Stuart Prebble quit. In all, 1,500 jobs were lost by ITV Digital's collapse. ITV Digital was eventually placed into liquidation on 18 October, with debts of £1.25 billion.
Post-collapse
By 30 April 2002, the Independent Television Commission (ITC) had revoked ITV Digital's broadcasting licence and started looking for a buyer. A consortium made up of the BBC and Crown Castle submitted an application on 13 June, later joined by BSkyB, and were awarded the licence on 4 July. They launched the Freeview service on 30 October 2002, offering 30 free-to-air TV channels and 20 free-to-air radio channels including several interactive channels such as BBC Red Button and Teletext, but no subscription or premium services. Those followed on 31 March 2004 when Top Up TV began broadcasting 11 pay TV channels in timeshared broadcast slots.
From 10 December 2002, ITV Digital's liquidators started to ask customers to return their set top boxes or pay a £39.99 fee. Had this been successful, it could have threatened to undermine the fledgling Freeview service, since at the time most digital terrestrial receivers in households were ONdigital and ITV Digital legacy hardware. In January 2003, Carlton and Granada stepped in and paid £2.8m to the liquidators to allow the boxes to stay with their customers, because at the time the ITV companies received a discount on their broadcasting licence payments based on the number of homes they had converted to digital television. It was also likely done to avoid further negativity towards the two companies.
During the time under administration, Carlton and Granada were in talks regarding a merger, which was eventually cleared in 2004.
Effect on football clubs
ITV Digital's collapse had a large effect on many football clubs. Bradford City F.C. was one of the affected, and its debt forced it into administration in May 2002.
Barnsley F.C. also entered administration in October 2002, despite the club making a profit for the twelve years prior to the collapse of ITV Digital. Barnsley had budgeted on the basis that the money from the ITV Digital deal would be received, leaving a £2.5 million shortfall in their accounts when the broadcaster collapsed.
Clubs were forced to cut staff and to sell players whose wages they could no longer afford. Some clubs increased ticket prices for fans to offset the losses.
The rights to show Football League matches were resold to Sky Sports for £95 million for the next four years compared to £315 million over three years from ITV Digital, leading to a reduction from £2 million per season to £700,000 in broadcasting revenue for First Division clubs.
In total, fourteen Football League clubs were placed in administration within four years of the collapse of ITV Digital, compared to four in the four years before.
News Corporation hacking allegations
On 31 March 2002, French cable company Canal+ accused Rupert Murdoch's News Corporation in the United States of extracting the UserROM code from its MediaGuard encryption cards and leaking it onto the internet. Canal+ brought a lawsuit against News Corporation alleging that it, through its subsidiary NDS (which provides encryption technology for Sky and other TV services from Murdoch), had been working on breaking the MediaGuard smartcards used by Canal+, ITV Digital and other non-Murdoch-owned TV companies throughout Europe. The action was later partially dropped after News Corporation agreed to buy Canal+'s struggling Italian operation Telepiu, a direct rival to a Murdoch-owned company in that country.
Other legal action by EchoStar/NagraStar was being pursued as late as August 2005, accusing NDS of the same wrongdoing. In 2008, NDS was found to have broken piracy laws by hacking EchoStar Communications' smart card system; however, only $1,500 in statutory damages was awarded.
On 26 March 2012, an investigation by BBC's Panorama found evidence that one of News Corporation's subsidiaries sabotaged ITV Digital. It found that NDS hacked ONdigital/ITV Digital smartcard data and leaked them through a pirate website under Murdoch's control – actions which enabled pirated cards to flood the market. The accusations arose from emails obtained by the BBC, and an interview with Lee Gibling, the operator of a hacking website, who claimed he was paid up to £60,000 per year by Ray Adams, NDS's head of security. This would mean that Murdoch used computer hacking to directly undermine rival ITV Digital. Lawyers for News Corporation claimed that these accusations of illegal activities against a rival business are "false and libellous". In June 2013 the Metropolitan Police decided to look into these allegations following a request by Labour MP Tom Watson.
Marketing
ITV Digital ran an advertising campaign involving the comedian Johnny Vegas as Al and a knitted monkey simply called Monkey, voiced by Ben Miller. A knitted replica of Monkey could be obtained by signing up to ITV Digital. Because the monkey could not be obtained without signing up to the service, a market for second-hand monkeys developed. At one time, original ITV Digital Monkeys were fetching several hundred pounds on eBay, and knitting patterns delivered by email were sold for several pounds. The campaign was created by the advertising agency Mother. In August 2002, following ITV Digital's collapse, Vegas claimed that he was owed money for the advertisements. In early 2007, Monkey and Al reappeared in an advert for PG Tips tea, which at first included a reference to ITV Digital's downfall.
Set top boxes
This is a list of ITV Digital and ONdigital set-top boxes. All boxes used similar software, with the design of the user interface common to all models. Top Up TV provided a small update in 2004 which made minor changes to the encryption services.
Nokia Mediamaster 9850T
Pace Micro Technology DTR-730, DTR-735
Philips DTX 6370, DTX 6371, DTX 6372
Pioneer DBR-T200, DBR-T210
Sony VTX-D500U
Toshiba DTB2000
All these set top boxes (and some ONdigital-branded integrated TVs) became obsolete after the digital switchover, completed in 2012, as post-switchover broadcasts utilised a newer 8k modulation scheme with which this earlier equipment was not compatible.
iDTVs
ONdigital and ITV Digital could also be received with an Integrated Digital Television (iDTV) receiver. They used a conditional-access module (CAM) with a smart card, plugged into a DVB Common Interface slot in the back of the set.
Purchasers of iDTVs were given a substantially discounted price on using the ONdigital service, as there was no cost for a set-top box.
Some of the original iDTVs needed firmware upgrades to work with the CAM. For example, Sony sent technicians out to homes to make the necessary updates free of charge.
Carlton/Granada digital television channels
Carlton and Granada (later ITV Digital Channels Ltd) created a selection of channels which formed some of the core content of channels available via the service, which were:
See also
Sky UK
Top Up TV
Freeview
NDS Group
Notes
External links
ONdigital in liquidation Information for subscribers
ONdigital history site
ITV Digital / PG Tips Monkey Mavis, 8 March 2009 – Knitting kit for Monkey
ITV Digital goes broke BBC News, 27 March 2002
Set-top box offers low-cost digital BBC News, 29 March 2002
British companies established in 1998
British companies disestablished in 2002
Mass media companies established in 1998
Mass media companies disestablished in 2002
Companies that have entered administration in the United Kingdom
Digital television in the United Kingdom
ITV (TV network)
Pay television
History of ITV |
46265 | https://en.wikipedia.org/wiki/Atbash | Atbash | Atbash (; also transliterated Atbaš) is a monoalphabetic substitution cipher originally used to encrypt the Hebrew alphabet. It can be modified for use with any known writing system with a standard collating order.
Encryption
The Atbash cipher is a particular type of monoalphabetic cipher formed by taking the alphabet (or abjad, syllabary, etc.) and mapping it to its reverse, so that the first letter becomes the last letter, the second letter becomes the second to last letter, and so on. For example, the Latin alphabet would work like this:
Plain:  ABCDEFGHIJKLMNOPQRSTUVWXYZ
Cipher: ZYXWVUTSRQPONMLKJIHGFEDCBA
Because there is only one way to perform this reversal, the Atbash cipher provides no communications security, as it lacks any sort of key. If multiple collating orders are available, the choice of collating order can serve as a key, but this does not provide significantly more security, since only a few letters can give away which order was used.
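A minimal sketch of the cipher in Python (the constant and function names here are illustrative, not from any standard library):

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def atbash(text: str) -> str:
    # Map each letter to its mirror in the alphabet; leave other characters alone.
    table = str.maketrans(ALPHABET + ALPHABET.lower(),
                          ALPHABET[::-1] + ALPHABET[::-1].lower())
    return text.translate(table)

print(atbash("ATBASH"))  # -> ZGYZHS

Because encryption and decryption are the same mapping, applying the function twice returns the original text.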
History
The name derives from the first, last, second, and second to last Hebrew letters (Aleph–Taw–Bet–Shin).
The Atbash cipher for the modern Hebrew alphabet pairs the letters as follows (in transliteration): Aleph ↔ Tav, Bet ↔ Shin, Gimel ↔ Resh, Dalet ↔ Qof, He ↔ Tsadi, Vav ↔ Pe, Zayin ↔ Ayin, Het ↔ Samekh, Tet ↔ Nun, Yod ↔ Mem, and Kaf ↔ Lamed.
In the Bible
Several biblical words are described by commentators as being examples of Atbash:
Jeremiah 25:26 – "The king of Sheshach shall drink after them" – Sheshach meaning Babylon in Atbash (bbl → ššk).
Jeremiah 51:1 – "Behold, I will raise up against Babylon, and against the inhabitants of Lev-kamai, a destroying wind." – Lev-kamai meaning Chaldeans (kšdym → lbqmy).
Jeremiah 51:41 – "How has Sheshach been captured! and the praise of the whole earth taken! How has Babylon become a curse among the nations!" – Sheshach meaning Babylon (bbl → ššk).
Regarding a potential Atbash switch of a single letter:
- "Any place I will mention My name" () → "Any place you will mention My name" () (a → t), according to Yom Tov Asevilli
Relationship to the affine cipher
The Atbash cipher can be seen as a special case of the affine cipher.
Under the standard affine convention, an alphabet of m letters is mapped to the numbers 0, 1, ..., m − 1 (the Hebrew alphabet has m = 22, and the standard Latin alphabet has m = 26). The Atbash cipher may then be enciphered and deciphered using the encryption function for an affine cipher by setting a = b = m − 1:
E(x) = D(x) = ((m − 1)x + (m − 1)) mod m
This may be simplified to
E(x) = D(x) = (m − 1 − x) mod m,
since m − 1 ≡ −1 (mod m).
If, instead, the m letters of the alphabet are mapped to 1, 2, ..., m, then the encryption and decryption function for the Atbash cipher becomes
E(x) = D(x) = (m + 1) − x.
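A short Python check (a sketch; the variable names are arbitrary) that the affine form with a = b = m − 1 reproduces the simple mirror mapping for the Latin alphabet (m = 26):

m = 26
affine = [((m - 1) * x + (m - 1)) % m for x in range(m)]  # E(x) = ((m-1)x + (m-1)) mod m
mirror = [m - 1 - x for x in range(m)]                    # E(x) = m - 1 - x
assert affine == mirror
print(affine[:3])  # [25, 24, 23]: A -> Z, B -> Y, C -> X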
See also
Temurah (Kabbalah)
Gematria
Hebrew language
ROT13
Notes
References
External links
Online Atbash decoder
Classical ciphers
Jewish mysticism
Hebrew-language names |
46608 | https://en.wikipedia.org/wiki/Caesar%20%28disambiguation%29 | Caesar (disambiguation) | Julius Caesar (100–44 BC) was a Roman general and dictator.
Caesar or Cæsar may also refer to:
Places
Caesar, Zimbabwe
Caesar Creek State Park, in southwestern Ohio
People
Caesar (given name)
Caesar (surname)
Caesar (title), a title used by Roman and Byzantine emperors, and also at times by Ottoman emperors, derived from the dictator's name
Augustus (63 BC – 14 AD), adoptive son of the dictator and first Roman emperor
Other members of the Julii Caesares, the family from which the dictator came
Gaius Julius Caesar (proconsul) (140–85 BC), father of the dictator
Claudius, fourth Roman emperor, first bearer of the name Claudius Caesar
Nero, fifth Roman emperor, second bearer of the name Claudius Caesar
Caesar of Dyrrhachium, 1st-century bishop
Bernhard Caesar Einstein (1930–2008), Swiss-American physicist and grandson of Albert Einstein
Caesar the Geezer (born 1958), British radio personality
Art and entertainment
Fictional characters
Caesar (Planet of the Apes)
Caesar (Xena), a character in Xena: Warrior Princess loosely based on Julius Caesar
Caesar, the leader of Caesar's Legion in Fallout
King Caesar, a monster in the Godzilla series
Malik Caesar, a character in the video game Tales of Graces
Caesar Flickerman, a character in The Hunger Games
Caesar Salazar, a supporting character in Cartoon Network's nanopunk show Generator Rex and the eccentric older brother of the titular character
Caesar Anthonio Zeppeli, a character in the manga Jojo's Bizarre Adventure: Battle Tendency
Literature
Caesar (McCullough novel), a 1998 novel by Colleen McCullough
Caesar (Massie novel), a 1993 novel by Allan Massie
Caesar, Life of a Colossus, a 2006 biography of Julius Caesar by Adrian Goldsworthy
Music
Caesar (band), a Dutch indie rock band
Caesars (band), a Swedish garage rock band
"Caesar" (song), a 2010 song by I Blame Coco
"Caesar", a 1993 song by Iggy Pop from American Caesar
Other uses in art and entertainment
Caesar (Mercury Theatre), 1937 stage production of Orson Welles's Mercury Theatre
Caesar (video game), a 1992 city-building computer game
Caesar!, a British series of radio plays by Mike Walker
The Caesars (TV series), a 1968 British television series
Brands and enterprises
Caesar Film, an Italian film company of the silent era
Caesars Entertainment (2020), a hotel and casino operator, whose properties include:
Caesars Atlantic City, New Jersey, US
Caesars Palace, Las Vegas, Nevada, US
Caesars Southern Indiana, Elizabeth, Indiana, US
Caesars Tahoe, now MontBleu, Stateline, Nevada, US
Caesars Windsor, Ontario, Canada
Food and drinks
Caesar (cocktail), a Canadian cocktail
Caesar salad
Caesar's, Tijuana restaurant and birthplace of the eponymous salad
Little Caesars, a pizza chain
Military
CAESAR self-propelled howitzer, a French artillery gun
HMS Caesar, several ships of the Royal Navy
Operation Caesar, a German World War II mission
Science and technology
CAESAR (spacecraft), a proposed NASA sample-return mission to comet 67P/Churyumov-Gerasimenko
Caesar cipher, an encryption technique
Caesarean section (often simply called "a caesar"), surgically assisted birth procedure
Center of Advanced European Studies and Research
Clean And Environmentally Safe Advanced Reactor, a nuclear reactor design
Committee for the Scientific Examination of Religion
Ctgf/hcs24 CAESAR, a cis-acting RNA element
Euroradar CAPTOR, a radar system
Other uses
Caesar (dog), a fox terrier owned by King Edward VII
Caesar cut, a hairstyle
Nottingham Caesars, an American football team
See also
Given name
Cesar (disambiguation)
Cesare (disambiguation)
Qaisar
Title
Kaiser (disambiguation)
Kayser (disambiguation)
Keiser (disambiguation)
Czar (disambiguation)
Tsar (disambiguation)
Caesarea (disambiguation)
Julius Caesar (disambiguation)
Giulio Cesare (disambiguation)
Julio Cesar (disambiguation)
Little Caesar (disambiguation)
Seasar, open source application framework |
46628 | https://en.wikipedia.org/wiki/Automated%20teller%20machine | Automated teller machine | An automated teller machine (ATM) or cash machine (in British English) is an electronic telecommunications device that enables customers of financial institutions to perform financial transactions, such as cash withdrawals, deposits, funds transfers, balance inquiries or account information inquiries, at any time and without the need for direct interaction with bank staff.
ATMs are known by a variety of names, including automatic teller machine (ATM) in the United States (sometimes redundantly as "ATM machine"). In Canada, the term automated banking machine (ABM) is also used, although ATM is also very commonly used in Canada, with many Canadian organizations using ATM over ABM. In British English, the terms cashpoint, cash machine and hole in the wall are most widely used. Other terms include any time money, cashline, tyme machine, cash dispenser, cash corner, bankomat, or bancomat. ATMs that are not operated by a financial institution are known as "white-label" ATMs.
Using an ATM, customers can access their bank deposit or credit accounts in order to make a variety of financial transactions, most notably cash withdrawals and balance checking, as well as transferring credit to and from mobile phones. ATMs can also be used to withdraw cash in a foreign country. If the currency being withdrawn from the ATM is different from that in which the bank account is denominated, the money will be converted at the financial institution's exchange rate. Customers are typically identified by inserting a plastic ATM card (or some other acceptable payment card) into the ATM, with authentication being by the customer entering a personal identification number (PIN), which must match the PIN stored in the chip on the card (if the card is so equipped), or in the issuing financial institution's database.
According to the ATM Industry Association (ATMIA), there were close to 3.5 million ATMs installed worldwide. However, the use of ATMs is gradually declining with the increase in cashless payment systems.
History
The idea of out-of-hours cash distribution developed from bankers' needs in Japan, Sweden, and the United Kingdom.
In 1960 Luther George Simjian invented an automated deposit machine (accepting coins, cash and cheques) although it did not have cash dispensing features. His US patent was first filed on 30 June 1960 and granted on 26 February 1963. The roll-out of this machine, called Bankograph, was delayed by a couple of years, due in part to Simjian's Reflectone Electronics Inc. being acquired by Universal Match Corporation. An experimental Bankograph was installed in New York City in 1961 by the City Bank of New York, but removed after six months due to the lack of customer acceptance.
In 1962 Adrian Ashfield invented the idea of a card system to securely identify a user and control and monitor the dispensing of goods or services. This was granted UK Patent 959,713 in June 1964 and assigned to Kins Developments Limited.
A Japanese device called the "Computer Loan Machine" supplied cash as a three-month loan at 5% p.a. after inserting a credit card. The device was operational in 1966. However, little is known about the device.
A cash machine was put into use by Barclays Bank in its Enfield Town branch in North London, United Kingdom, on 27 June 1967. This machine was inaugurated by English comedy actor Reg Varney. This instance of the invention is credited to the engineering team led by John Shepherd-Barron of printing firm De La Rue, who was awarded an OBE in the 2005 New Year Honours. Transactions were initiated by inserting paper cheques issued by a teller or cashier, marked with carbon-14 for machine readability and security, which in a later model were matched with a six-digit personal identification number (PIN). Shepherd-Barron stated "It struck me there must be a way I could get my own money, anywhere in the world or the UK. I hit upon the idea of a chocolate bar dispenser, but replacing chocolate with cash."
The Barclays–De La Rue machine (called De La Rue Automatic Cash System or DACS) beat the Swedish saving banks' and a company called Metior's machine (a device called Bankomat) by a mere nine days and Westminster Bank's–Smith Industries–Chubb system (called Chubb MD2) by a month. The online version of the Swedish machine is listed to have been operational on 6 May 1968, while claiming to be the first online ATM in the world, ahead of similar claims by IBM and Lloyds Bank in 1971, and Oki in 1970. The collaboration of a small start-up called Speytec and Midland Bank developed a fourth machine which was marketed after 1969 in Europe and the US by the Burroughs Corporation. The patent for this device (GB1329964) was filed in September 1969 (and granted in 1973) by John David Edwards, Leonard Perkins, John Henry Donald, Peter Lee Chappell, Sean Benjamin Newcombe, and Malcom David Roe.
Both the DACS and MD2 accepted only a single-use token or voucher which was retained by the machine, while the Speytec worked with a card with a magnetic stripe at the back. They used principles including Carbon-14 and low-coercivity magnetism in order to make fraud more difficult.
The idea of a PIN stored on the card was developed by a group of engineers working at Smiths Group on the Chubb MD2 in 1965 and has been credited to James Goodfellow (patent GB1197183 filed on 2 May 1966 with Anthony Davies). The essence of this system was that it enabled the verification of the customer with the debited account without human intervention. This patent is also the earliest instance of a complete "currency dispenser system" in the patent record. This patent was filed on 5 March 1968 in the US (US 3543904) and granted on 1 December 1970. It had a profound influence on the industry as a whole. Not only did future entrants into the cash dispenser market such as NCR Corporation and IBM licence Goodfellow's PIN system, but a number of later patents reference this patent as "Prior Art Device".
Propagation
Devices designed by British (i.e. Chubb, De La Rue) and Swedish (i.e. Asea Meteor) quickly spread out. For example, given its link with Barclays, Bank of Scotland deployed a DACS in 1968 under the 'Scotcash' brand. Customers were given personal code numbers to activate the machines, similar to the modern PIN. They were also supplied with £10 vouchers. These were fed into the machine, and the corresponding amount debited from the customer's account.
A Chubb-made ATM appeared in Sydney in 1969. This was the first ATM installed in Australia. The machine only dispensed $25 at a time and the bank card itself would be mailed to the user after the bank had processed the withdrawal.
Asea Metior's Bancomat was the first ATM installed in Spain on 9 January 1969, in central Madrid by Banesto. This device dispensed 1,000 peseta bills (1 to 5 max). Each user had to enter a personal security code using a combination of the ten numeric buttons. In March of the same year an ad with the instructions to use the Bancomat was published in the same newspaper.
Docutel in the United States
After looking firsthand at the experiences in Europe, in 1968 the ATM was pioneered in the U.S. by Donald Wetzel, who was a department head at a company called Docutel. Docutel was a subsidiary of Recognition Equipment Inc of Dallas, Texas, which was producing optical scanning equipment and had instructed Docutel to explore automated baggage handling and automated gasoline pumps.
On 2 September 1969, Chemical Bank installed a prototype ATM in the U.S. at its branch in Rockville Centre, New York. The first ATMs were designed to dispense a fixed amount of cash when a user inserted a specially coded card. A Chemical Bank advertisement boasted "On Sept. 2 our bank will open at 9:00 and never close again." Chemical's ATM, initially known as a Docuteller, was designed by Donald Wetzel and his company Docutel. Chemical executives were initially hesitant about the electronic banking transition given the high cost of the early machines. Additionally, executives were concerned that customers would resist having machines handling their money. In 1995, the Smithsonian National Museum of American History recognised Docutel and Wetzel as the inventors of the networked ATM. To show confidence in Docutel, Chemical installed the first four production machines in a marketing test that proved they worked reliably, customers would use them and even pay a fee for usage. Based on this, banks around the country began to experiment with ATM installations.
By 1974, Docutel had acquired 70 percent of the U.S. market; but as a result of the early 1970s worldwide recession and its reliance on a single product line, Docutel lost its independence and was forced to merge with the U.S. subsidiary of Olivetti.
In 1973, Wetzel was granted U.S. Patent # 3,761,682; the application had been filed in October 1971. However, the U.S. patent record cites at least three previous applications from Docutel, all relevant to the development of the ATM and where Wetzel does not figure, namely US Patent # 3,662,343, U.S. Patent # 3651976 and U.S. Patent # 3,68,569. These patents are all credited to Kenneth S. Goldstein, MR Karecki, TR Barnes, GR Chastian and John D. White.
Further advances
In April 1971, Busicom began to manufacture ATMs based on the first commercial microprocessor, the Intel 4004. Busicom manufactured these microprocessor-based automated teller machines for several buyers, with NCR Corporation as the main customer.
Mohamed Atalla invented the first hardware security module (HSM), dubbed the "Atalla Box", a security system which encrypted PIN and ATM messages, and protected offline devices with an un-guessable PIN-generating key. In March 1972, Atalla filed for his PIN verification system, which included an encoded card reader and described a system that utilized encryption techniques to assure telephone link security while entering personal ID information that was transmitted to a remote location for verification.
He founded Atalla Corporation (now Utimaco Atalla) in 1972, and commercially launched the "Atalla Box" in 1973. The product was released as the Identikey. It was a card reader and customer identification system, providing a terminal with plastic card and PIN capabilities. The Identikey system consisted of a card reader console, two customer PIN pads, intelligent controller and built-in electronic interface package. The device consisted of two keypads, one for the customer and one for the teller. It allowed the customer to type in a secret code, which is transformed by the device, using a microprocessor, into another code for the teller. During a transaction, the customer's account number was read by the card reader. This process replaced manual entry and avoided possible key stroke errors. It allowed users to replace traditional customer verification methods such as signature verification and test questions with a secure PIN system. The success of the "Atalla Box" led to the wide adoption of hardware security modules in ATMs. Its PIN verification process was similar to the later IBM 3624. Atalla's HSM products protect 250 million card transactions every day as of 2013, and secure the majority of the world's ATM transactions as of 2014.
The IBM 2984 was a modern ATM and came into use at Lloyds Bank, High Street, Brentwood, Essex, UK, in December 1972. The IBM 2984 was designed at the request of Lloyds Bank. The 2984 Cash Issuing Terminal was a true ATM, similar in function to today's machines and named by Lloyds Bank: Cashpoint. Cashpoint is still a registered trademark of Lloyds Banking Group in the UK but is often used as a generic trademark to refer to ATMs of all UK banks. All were online and issued a variable amount which was immediately deducted from the account. A small number of 2984s were supplied to a U.S. bank. A few well-known historical models of ATMs include the Atalla Box, IBM 3614, IBM 3624 and 473x series, Diebold 10xx and TABS 9000 series, NCR 1780 and earlier NCR 770 series.
The first switching system to enable shared automated teller machines between banks went into production operation on 3 February 1979, in Denver, Colorado, in an effort by Colorado National Bank of Denver and Kranzley and Company of Cherry Hill, New Jersey.
In 2012, a new ATM at Royal Bank of Scotland allowed customers to withdraw cash up to £130 without a card by inputting a six-digit code requested through their smartphones.
Location
ATMs can be placed at any location but are most often placed near or inside banks, shopping centers/malls, airports, railway stations, metro stations, grocery stores, petrol/gas stations, restaurants, and other locations. ATMs are also found on cruise ships and on some US Navy ships, where sailors can draw out their pay.
ATMs may be on- and off-premises. On-premises ATMs are typically more advanced, multi-function machines that complement a bank branch's capabilities, and are thus more expensive. Off-premises machines are deployed by financial institutions and independent sales organisations (ISOs) where there is a simple need for cash, so they are generally cheaper single-function devices.
In the US, Canada and some Gulf countries, banks may have drive-thru lanes providing access to ATMs using an automobile.
In recent times, countries like India and some countries in Africa are installing solar-powered ATMs in rural areas.
The world's highest ATM is located at the Khunjerab Pass in Pakistan. Installed at an elevation of by the National Bank of Pakistan, it is designed to work in temperatures as low as −40 degrees Celsius.
Financial networks
Most ATMs are connected to interbank networks, enabling people to withdraw and deposit money from machines not belonging to the bank where they have their accounts or in the countries where their accounts are held (enabling cash withdrawals in local currency). Some examples of interbank networks include NYCE, PULSE, PLUS, Cirrus, AFFN, Interac, Interswitch, STAR, LINK, MegaLink, and BancNet.
ATMs rely on the authorization of a financial transaction by the card issuer or other authorizing institution on a communications network. This is often performed through an ISO 8583 messaging system.
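The exact message layout is defined by the ISO 8583 standard and by each network's implementation; the Python fragment below is only a rough sketch of the idea of a message type indicator plus numbered data elements. The field numbers shown (2, 3, 4, 11, 49) are commonly used ones, but the packing here is deliberately simplified and not wire-compatible with any real network:

# Simplified illustration of an ISO 8583-style authorization request (not wire-compatible).
request = {
    "MTI": "0200",           # message type indicator: financial transaction request
    2:  "4111111111111111",  # primary account number (a common test number)
    3:  "010000",            # processing code (01 = cash withdrawal, illustrative)
    4:  "000000005000",      # transaction amount in minor units (e.g. 50.00)
    11: "123456",            # system trace audit number
    49: "840",               # currency code (US dollar)
}

def pack(message):
    # Real messages carry a binary bitmap and fixed/variable-length fields;
    # here the numbered fields are simply concatenated for readability.
    fields = "".join(f"{num:03d}={val};" for num, val in message.items() if num != "MTI")
    return message["MTI"] + fields

print(pack(request))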
Many banks charge ATM usage fees. In some cases, these fees are charged solely to users who are not customers of the bank that operates the ATM; in other cases, they apply to all users.
In order to allow a more diverse range of devices to attach to their networks, some interbank networks have passed rules expanding the definition of an ATM to be a terminal that either has the vault within its footprint or utilises the vault or cash drawer within the merchant establishment, which allows for the use of a scrip cash dispenser.
ATMs typically connect directly to their host or ATM Controller on either ADSL or dial-up modem over a telephone line or directly on a leased line. Leased lines are preferable to plain old telephone service (POTS) lines because they require less time to establish a connection. Less-trafficked machines will usually rely on a dial-up modem on a POTS line rather than using a leased line, since a leased line may be comparatively more expensive to operate compared to a POTS line. That dilemma may be solved as high-speed Internet VPN connections become more ubiquitous. Common lower-level layer communication protocols used by ATMs to communicate back to the bank include SNA over SDLC, TC500 over Async, X.25, and TCP/IP over Ethernet.
In addition to methods employed for transaction security and secrecy, all communications traffic between the ATM and the Transaction Processor may also be encrypted using methods such as SSL.
Global use
There are no hard international or government-compiled numbers totaling the complete number of ATMs in use worldwide. Estimates developed by ATMIA place the number of ATMs currently in use at 3 million units, or approximately 1 ATM per 3,000 people in the world.
To simplify the analysis of ATM usage around the world, financial institutions generally divide the world into seven regions, based on the penetration rates, usage statistics, and features deployed. Four regions (USA, Canada, Europe, and Japan) have high numbers of ATMs per million people. Despite the large number of ATMs, there is additional demand for machines in the Asia/Pacific area as well as in Latin America. Macau may have the highest density of ATMs at 254 ATMs per 100,000 adults. ATMs have yet to reach high numbers in the Near East and Africa.
Hardware
An ATM is typically made up of the following devices:
CPU (to control the user interface and transaction devices)
Magnetic or chip card reader (to identify the customer)
a PIN pad for accepting and encrypting personal identification number EPP4 (similar in layout to a touch tone or calculator keypad), manufactured as part of a secure enclosure
Secure cryptoprocessor, generally within a secure enclosure
Display (used by the customer for performing the transaction)
Function key buttons (usually close to the display) or a touchscreen (used to select the various aspects of the transaction)
Record printer (to provide the customer with a record of the transaction)
Vault (to store the parts of the machinery requiring restricted access)
Housing (for aesthetics and to attach signage to)
Sensors and indicators
Due to heavier computing demands and the falling price of personal computer–like architectures, ATMs have moved away from custom hardware architectures using microcontrollers or application-specific integrated circuits and have adopted the hardware architecture of a personal computer, such as USB connections for peripherals, Ethernet and IP communications, and use personal computer operating systems.
Business owners often lease ATMs from service providers. However, based on the economies of scale, the price of equipment has dropped to the point where many business owners are simply paying for ATMs using a credit card.
New ADA voice and text-to-speech guidelines, imposed in 2010 but required by March 2012, have forced many ATM owners to either upgrade non-compliant machines or dispose of them if they are not upgradable, and purchase new compliant equipment. This has created an avenue for hackers and thieves to obtain ATM hardware at junkyards from improperly disposed decommissioned machines.
The vault of an ATM is within the footprint of the device itself and is where items of value are kept. Scrip cash dispensers do not incorporate a vault.
Mechanisms found inside the vault may include:
Dispensing mechanism (to provide cash or other items of value)
Deposit mechanism including a cheque processing module and bulk note acceptor (to allow the customer to make deposits)
Security sensors (magnetic, thermal, seismic, gas)
Locks (to control access to the contents of the vault)
Journaling systems; many are electronic (a sealed flash memory device based on in-house standards) or a solid-state device (an actual printer) which accrues all records of activity including access timestamps, number of notes dispensed, etc. This is considered sensitive data and is secured in similar fashion to the cash as it is a similar liability.
ATM vaults are supplied by manufacturers in several grades. Factors influencing vault grade selection include cost, weight, regulatory requirements, ATM type, operator risk avoidance practices and internal volume requirements. Industry standard vault configurations include Underwriters Laboratories UL-291 "Business Hours" and Level 1 Safes, RAL TL-30 derivatives, and CEN EN 1143-1 - CEN III and CEN IV.
ATM manufacturers recommend that a vault be attached to the floor to prevent theft, though there is a record of a theft conducted by tunnelling into an ATM floor.
Software
With the migration to commodity Personal Computer hardware, standard commercial "off-the-shelf" operating systems and programming environments can be used inside of ATMs. Typical platforms previously used in ATM development include RMX or OS/2.
Today, the vast majority of ATMs worldwide use Microsoft Windows. In early 2014, 95% of ATMs were running Windows XP. A small number of deployments may still be running older versions of the Windows OS, such as Windows NT, Windows CE, or Windows 2000, even though Microsoft still supports only Windows 8.1, Windows 10, and Windows 11.
A computer-industry security view holds that general-purpose desktop operating systems pose greater risks as operating systems for cash-dispensing machines than other types of operating systems, such as (secure) real-time operating systems (RTOS). RISKS Digest has many articles about ATM operating system vulnerabilities.
Linux is also finding some reception in the ATM marketplace. An example of this is Banrisul, the largest bank in the south of Brazil, which has replaced the MS-DOS operating systems in its ATMs with Linux. Banco do Brasil is also migrating ATMs to Linux. Indian-based Vortex Engineering is manufacturing ATMs that operate only with Linux.
Common application layer transaction protocols, such as Diebold 91x (911 or 912) and NCR NDC or NDC+ provide emulation of older generations of hardware on newer platforms with incremental extensions made over time to address new capabilities, although companies like NCR continuously improve these protocols issuing newer versions (e.g. NCR's AANDC v3.x.y, where x.y are subversions). Most major ATM manufacturers provide software packages that implement these protocols. Newer protocols such as IFX have yet to find wide acceptance by transaction processors.
With the move to a more standardised software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. WOSA/XFS, now known as CEN XFS (or simply XFS), provides a common API for accessing and manipulating the various devices of an ATM. J/XFS is a Java implementation of the CEN XFS API.
While the perceived benefit of XFS is similar to Java's "write once, run anywhere" mantra, different ATM hardware vendors often interpret the XFS standard differently. As a result, ATM applications typically use middleware to even out the differences among the various platforms.
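As a rough, purely hypothetical sketch of what such a middleware layer does (the class and method names below are invented for illustration and are not taken from CEN XFS or any vendor's API), the application is written against one common interface while thin per-vendor adapters absorb the platform differences:

from abc import ABC, abstractmethod

class CashDispenser(ABC):
    # Common interface the ATM application is written against
    @abstractmethod
    def dispense(self, amount_cents: int) -> None: ...

class VendorADispenser(CashDispenser):
    def dispense(self, amount_cents: int) -> None:
        # Hypothetical vendor A expects the amount as a decimal string
        print(f"vendorA: DISPENSE {amount_cents / 100:.2f}")

class VendorBDispenser(CashDispenser):
    def dispense(self, amount_cents: int) -> None:
        # Hypothetical vendor B expects the amount in cents
        print(f"vendorB: dispense_cash({amount_cents})")

def withdraw(dispenser: CashDispenser, amount_cents: int) -> None:
    dispenser.dispense(amount_cents)   # application code stays vendor-agnostic

withdraw(VendorADispenser(), 5000)
withdraw(VendorBDispenser(), 5000)

In practice this role is filled by commercial multivendor middleware built on top of XFS rather than by application-level code like this.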
With the adoption of Windows operating systems and XFS on ATMs, software applications have been able to become more intelligent. This has created a new breed of ATM applications, commonly referred to as programmable applications, in which the ATM terminal can do more than simply communicate with the ATM switch: it can also connect to other content servers and video banking systems.
Notable ATM software that operates on XFS platforms include Triton PRISM, Diebold Agilis EmPower, NCR APTRA Edge, Absolute Systems AbsoluteINTERACT, KAL Kalignite Software Platform, Phoenix Interactive VISTAatm, Wincor Nixdorf ProTopas, Euronet EFTS and Intertech inter-ATM.
With the move of ATMs to industry-standard computing environments, concern has risen about the integrity of the ATM's software stack.
Impact on labor
The number of human bank tellers in the United States increased from approximately 300,000 in 1970 to approximately 600,000 in 2010. Counter-intuitively, a contributing factor may be the introduction of automated teller machines. ATMs let a branch operate with fewer tellers, making it cheaper for banks to open more branches. This likely resulted in more tellers being hired to handle non-automated tasks, but further automation and online banking may reverse this increase.
Security
Security, as it relates to ATMs, has several dimensions. ATMs also provide a practical demonstration of a number of security systems and concepts operating together and how various security concerns are addressed.
Physical
Early ATM security focused on making the terminals invulnerable to physical attack; they were effectively safes with dispenser mechanisms. A number of attacks resulted, with thieves attempting to steal entire machines by ram-raiding. Since the late 1990s, criminal groups operating in Japan improved ram-raiding by stealing and using a truck loaded with heavy construction machinery to effectively demolish or uproot an entire ATM and any housing to steal its cash.
Another attack method, plofkraak, is to seal all openings of the ATM with silicone and fill the vault with a combustible gas, or to place an explosive inside, attached to, or near the machine. The gas or explosive is ignited and the vault is opened or distorted by the force of the resulting explosion, allowing the criminals to break in. This type of theft has occurred in the Netherlands, Belgium, France, Denmark, Germany, Australia, and the United Kingdom. Such attacks can be prevented by gas explosion prevention devices, also known as gas suppression systems, which use explosive-gas detection sensors to detect the gas and neutralise it by releasing a suppression chemical that changes the composition of the explosive gas and renders it ineffective.
Several attacks in the UK (at least one of which was successful) have involved digging a concealed tunnel under the ATM and cutting through the reinforced base to remove the money.
Modern ATM physical security, per other modern money-handling security, concentrates on denying the use of the money inside the machine to a thief, by using different types of Intelligent Banknote Neutralisation Systems.
A common method is simply to rob the staff filling the machine with money. To avoid this, the schedules for filling the machines are kept secret, varied, and random. The money is often kept in cassettes, which dye the notes if opened incorrectly.
Transactional secrecy and integrity
The security of ATM transactions relies mostly on the integrity of the secure cryptoprocessor: the ATM often uses general commodity components that sometimes are not considered to be "trusted systems".
Encryption of personal information, required by law in many jurisdictions, is used to prevent fraud. Sensitive data in ATM transactions are usually encrypted with DES, but transaction processors now usually require the use of Triple DES. Remote Key Loading techniques may be used to ensure the secrecy of the initialisation of the encryption keys in the ATM. Message Authentication Code (MAC) or Partial MAC may also be used to ensure messages have not been tampered with while in transit between the ATM and the financial network.
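As a hedged illustration of the MAC idea only — real ATM networks use DES- or Triple DES-based MAC schemes (for example ISO 9797-style MACs), not the HMAC-SHA-256 shown here — the following Python sketch shows how a keyed MAC lets the host detect a message altered in transit:

import hmac, hashlib

shared_key = b"terminal-and-host-shared-secret"     # illustrative; real keys are injected securely

def mac(message: bytes) -> bytes:
    # Keyed tag computed over the transaction message
    return hmac.new(shared_key, message, hashlib.sha256).digest()

request = b"WITHDRAW;account=123456;amount=100.00"
tag = mac(request)                                   # sent alongside the message

# Host side: recompute and compare in constant time
assert hmac.compare_digest(tag, mac(request))                  # unmodified message passes
tampered = b"WITHDRAW;account=123456;amount=900.00"
assert not hmac.compare_digest(tag, mac(tampered))             # altered message is rejected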
Customer identity integrity
There have also been a number of incidents of fraud by Man-in-the-middle attacks, where criminals have attached fake keypads or card readers to existing machines. These have then been used to record customers' PINs and bank card information in order to gain unauthorised access to their accounts. Various ATM manufacturers have put in place countermeasures to protect the equipment they manufacture from these threats.
Alternative methods of verifying cardholder identity, such as finger and palm vein patterns, iris recognition, and facial recognition, have been tested and deployed in some countries. Cheaper mass-produced equipment that detects the presence of foreign objects on the front of ATMs has been developed and is being installed in machines globally; current tests have shown 99% detection success for all types of skimming devices.
Device operation integrity
Openings on the customer side of ATMs are often covered by mechanical shutters to prevent tampering with the mechanisms when they are not in use. Alarm sensors are placed inside ATMs and their servicing areas to alert their operators when doors have been opened by unauthorised personnel.
To protect against hackers, ATMs have a built-in firewall. Once the firewall has detected malicious attempts to break into the machine remotely, the firewall locks down the machine.
Rules dictating what happens when integrity systems fail are usually set by the government or the ATM operating body. Depending on the jurisdiction, a bank may or may not be liable when an attempt is made to dispense a customer's money from an ATM and the money either leaves the ATM's vault, is exposed in a non-secure fashion, or its state cannot be determined after a failed transaction. Customers have often commented that it is difficult to recover money lost in this way, and recovery is often complicated by banks' policies regarding suspicious activity typical of the criminal element.
Customer security
In some countries, multiple security cameras and security guards are a common feature. In the United States, The New York State Comptroller's Office has advised the New York State Department of Banking to have more thorough safety inspections of ATMs in high crime areas.
Consultants of ATM operators assert that the issue of customer security should have more focus by the banking industry; it has been suggested that efforts are now more concentrated on the preventive measure of deterrent legislation than on the problem of ongoing forced withdrawals.
At least as far back as 30 July 1986, industry consultants have advocated the adoption of an emergency PIN system for ATMs, with which the user could send a silent alarm in response to a threat. Legislative efforts to require an emergency PIN system have appeared in Illinois, Kansas and Georgia, but none has succeeded so far. In January 2009, Senate Bill 1355, revisiting the issue of the reverse emergency PIN system, was proposed in the Illinois Senate. The bill is again supported by the police and opposed by the banking lobby.
In 1998, three towns outside Cleveland, Ohio, in response to an ATM crime wave, adopted legislation requiring that an emergency telephone number switch be installed at all outdoor ATMs within their jurisdiction. In the wake of a homicide in Sharon Hill, Pennsylvania, the city council passed an ATM security bill as well.
In China and elsewhere, many efforts to promote security have been made. On-premises ATMs are often located inside the bank's lobby, which may be accessible 24 hours a day. These lobbies have extensive security camera coverage, a courtesy telephone for consulting with the bank staff, and a security guard on the premises. Bank lobbies that are not guarded 24 hours a day may also have secure doors that can only be opened from outside by swiping the bank card against a wall-mounted scanner, allowing the bank to identify which card enters the building. Most ATMs will also display on-screen safety warnings and may also be fitted with convex mirrors above the display allowing the user to see what is happening behind them.
As of 2013, the only claim available about the extent of ATM-connected homicides is that they range from 500 to 1,000 per year in the US, covering only cases where the victim had an ATM card and the card was used by the killer after the known time of death.
Jackpotting
Jackpotting refers to one method criminals use to steal money from an ATM. The thieves gain physical access through a small hole drilled in the machine. Using an industrial endoscope, they disconnect the existing hard drive and connect an external drive. They then depress an internal button that reboots the device so that it comes under the control of the external drive, and they can then make the ATM dispense all of its cash.
Encryption
In recent years, many ATMs also encrypt the hard disk. This means that actually creating the software for jackpotting is more difficult, and provides more security for the ATM.
Uses
ATMs were originally developed as cash dispensers, and have evolved to provide many other bank-related functions:
Paying routine bills, fees, and taxes (utilities, phone bills, social security, legal fees, income taxes, etc.)
Printing or ordering bank statements
Updating passbooks
Cash advances
Cheque Processing Module
Paying (in full or partially) the credit balance on a card linked to a specific current account.
Transferring money between linked accounts
Deposit currency recognition, acceptance, and recycling
In some countries, especially those which benefit from a fully integrated cross-bank network (e.g.: Multibanco in Portugal), ATMs include many functions that are not directly related to the management of one's own bank account, such as:
Loading monetary value into stored-value cards
Adding pre-paid cell phone / mobile phone credit.
Purchasing
Concert tickets
Gold
Lottery tickets
Movie tickets
Postage stamps.
Train tickets
Shopping mall gift certificates.
Donating to charities
Increasingly, banks are seeking to use the ATM as a sales device to deliver pre-approved loans and targeted advertising, using products such as ITM (the Intelligent Teller Machine) from NCR's Aptra Relate. ATMs can also act as an advertising channel for other companies.
However, several different ATM technologies have not yet reached worldwide acceptance, such as:
Videoconferencing with human tellers, known as video tellers
Biometrics, where authorization of transactions is based on the scanning of a customer's fingerprint, iris, face, etc.
Cheque/cash acceptance, where the machine accepts and recognises cheques and/or currency without using envelopes; expected to grow in importance in the US through Check 21 legislation
Bar code scanning
On-demand printing of "items of value" (such as movie tickets, traveler's cheques, etc.)
Dispensing additional media (such as phone cards)
Co-ordination of ATMs with mobile phones
Integration with non-banking equipment
Games and promotional features
CRM through the ATM
Videoconferencing teller machines are currently referred to as Interactive Teller Machines. Benton Smith, in the Idaho Business Review writes "The software that allows interactive teller machines to function was created by a Salt Lake City-based company called uGenius, a producer of video banking software. NCR, a leading manufacturer of ATMs, acquired uGenius in 2013 and married its own ATM hardware with uGenius' video software."
Pharmacy dispensing units
Reliability
Before an ATM is placed in a public place, it typically has undergone extensive testing with both test money and the backend computer systems that allow it to perform transactions. Banking customers also have come to expect high reliability in their ATMs, which provides incentives to ATM providers to minimise machine and network failures. Financial consequences of incorrect machine operation also provide high degrees of incentive to minimise malfunctions.
ATMs and the supporting electronic financial networks are generally very reliable, with industry benchmarks typically producing 98.25% customer availability for ATMs and up to 99.999% availability for the host systems that manage the networks of ATMs. If an ATM network does go out of service, customers could be left unable to make transactions until their bank next opens.
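As a rough back-of-the-envelope reading of those figures (a calculation for orientation, not an industry statistic), 98.25% availability corresponds to (1 − 0.9825) × 8,760 h ≈ 153 hours of downtime per machine per year, while 99.999% availability corresponds to (1 − 0.99999) × 8,760 × 60 ≈ 5 minutes of downtime per year.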
That said, not all errors are to the detriment of customers; there have been cases of machines giving out money without debiting the account, or giving out higher-value notes as a result of banknotes of the wrong denomination being loaded into the money cassettes. The result of receiving too much money may be influenced by the card holder agreement in place between the customer and the bank.
Errors that can occur may be mechanical (such as card transport mechanisms; keypads; hard disk failures; envelope deposit mechanisms); software (such as operating system; device driver; application); communications; or purely down to operator error.
To aid in reliability, some ATMs print each transaction to a roll-paper journal that is stored inside the ATM, which allows its users and the related financial institutions to settle things based on the records in the journal in case there is a dispute. In some cases, transactions are posted to an electronic journal to remove the cost of supplying journal paper to the ATM and for more convenient searching of data.
Improper money checking can result in a customer receiving counterfeit banknotes from an ATM. While bank personnel are generally better trained at spotting and removing counterfeit cash, the money supplies banks load into ATMs provide no guarantee of genuine banknotes; the Federal Criminal Police Office of Germany has confirmed that counterfeit banknotes are regularly dispensed through ATMs. Some ATMs may be stocked and wholly owned by outside companies, which can further complicate this problem.
Bill validation technology can be used by ATM providers to help ensure the authenticity of the cash before it is stocked in the machine; machines with cash recycling capabilities include this technology.
In India, whenever an ATM transaction fails due to network or technical issues and the amount is not dispensed even though the account has been debited, the bank is required to return the debited amount to the customer within seven working days of receiving a complaint. Banks are also liable to pay a late fee if repayment is delayed beyond seven days.
Fraud
As with any device containing objects of value, ATMs and the systems they depend on to function are the targets of fraud. Fraud against ATMs and people's attempts to use them takes several forms.
The first known instance of a fake ATM was installed at a shopping mall in Manchester, Connecticut, in 1993. By modifying the inner workings of a Fujitsu model 7020 ATM, a criminal gang known as the Bucklands Boys stole information from cards inserted into the machine by customers.
WAVY-TV reported an incident in Virginia Beach in September 2006 where a hacker, who had probably obtained a factory-default administrator password for a filling station's white-label ATM, caused the unit to assume it was loaded with US$5 bills instead of $20s, enabling himself—and many subsequent customers—to walk away with four times the money withdrawn from their accounts. This type of scam was featured on the TV series The Real Hustle.
ATM behaviour can change during what is called "stand-in" time, where the bank's cash dispensing network is unable to access databases that contain account information (possibly for database maintenance). In order to give customers access to cash, customers may be allowed to withdraw cash up to a certain amount that may be less than their usual daily withdrawal limit, but may still exceed the amount of available money in their accounts, which could result in fraud if the customers intentionally withdraw more money than they had in their accounts.
Card fraud
In an attempt to prevent criminals from shoulder surfing the customer's personal identification number (PIN), some banks draw privacy areas on the floor.
The easiest low-tech form of fraud is simply to steal a customer's card along with its PIN. A later variant of this approach is to trap the card inside the ATM's card reader with a device often referred to as a Lebanese loop. When the customer gets frustrated by not getting the card back and walks away from the machine, the criminal is able to remove the card and withdraw cash from the customer's account, using the card and its PIN.
This type of fraud has spread globally. Although somewhat replaced in terms of volume by skimming incidents, a re-emergence of card trapping has been noticed in regions such as Europe, where EMV chip and PIN cards have increased in circulation.
Another simple form of fraud involves attempting to get the customer's bank to issue a new card and its PIN and stealing them from their mail.
By contrast, a newer high-tech method of operating, sometimes called card skimming or card cloning, involves the installation of a magnetic card reader over the real ATM's card slot and the use of a wireless surveillance camera or a modified digital camera or a false PIN keypad to observe the user's PIN. Card data is then cloned into a duplicate card and the criminal attempts a standard cash withdrawal. The availability of low-cost commodity wireless cameras, keypads, card readers, and card writers has made it a relatively simple form of fraud, with comparatively low risk to the fraudsters.
In an attempt to stop these practices, countermeasures against card cloning have been developed by the banking industry, in particular by the use of smart cards which cannot easily be copied or spoofed by unauthenticated devices, and by attempting to make the outside of their ATMs tamper evident. Older chip-card security systems include the French Carte Bleue, Visa Cash, Mondex, Blue from American Express and EMV '96 or EMV 3.11. The most actively developed form of smart card security in the industry today is known as EMV 2000 or EMV 4.x.
EMV is widely used in the UK (Chip and PIN) and other parts of Europe, but when it is not available in a specific area, ATMs must fall back to using the easy–to–copy magnetic stripe to perform transactions. This fallback behaviour can be exploited. However, the fallback option has been removed on the ATMs of some UK banks, meaning if the chip is not read, the transaction will be declined.
Card cloning and skimming can be detected by the implementation of magnetic card reader heads and firmware that can read a signature embedded in all magnetic stripes during the card production process. This signature, known as a "MagnePrint" or "BluPrint", can be used in conjunction with common two-factor authentication schemes used in ATM, debit/retail point-of-sale and prepaid card applications.
The concept and various methods of copying the contents of an ATM card's magnetic stripe onto a duplicate card to access other people's financial information were well known in the hacking communities by late 1990.
In 1996, Andrew Stone, a computer security consultant from Hampshire in the UK, was convicted of stealing more than £1 million by pointing high-definition video cameras at ATMs from a considerable distance and recording the card numbers, expiry dates, etc. from the embossed detail on the ATM cards along with video footage of the PINs being entered. After getting all the information from the videotapes, he was able to produce clone cards which not only allowed him to withdraw the full daily limit for each account, but also allowed him to sidestep withdrawal limits by using multiple copied cards. In court, it was shown that he could withdraw as much as £10,000 per hour by using this method. Stone was sentenced to five years and six months in prison.
Related devices
A talking ATM is a type of ATM that provides audible instructions so that people who cannot read a screen can use the machine independently, effectively eliminating the need for assistance from an external, potentially malevolent source. All audible information is delivered privately through a standard headphone jack on the face of the machine. Alternatively, some banks such as Nordea and Swedbank use a built-in external speaker which may be invoked by pressing a talk button on the keypad. Information is delivered to the customer either through pre-recorded sound files or via text-to-speech speech synthesis.
A postal interactive kiosk may share many components of an ATM (including a vault), but it only dispenses items related to postage.
A scrip cash dispenser may have many components in common with an ATM, but it lacks the ability to dispense physical cash and consequently requires no vault. Instead, the customer requests a withdrawal transaction from the machine, which prints a receipt or scrip. The customer then takes this receipt to a nearby sales clerk, who then exchanges it for cash from the till.
A teller assist unit (TAU) is distinct in that it is designed to be operated solely by trained personnel and not by the general public, does integrate directly into interbank networks, and usually is controlled by a computer that is not directly integrated into the overall construction of the unit.
A Web ATM is an online interface for ATM card banking that uses a smart card reader. All the usual ATM functions are available, except for withdrawing cash. Most banks in Taiwan provide these online services.
See also
ATM Industry Association (ATMIA)
Automated cash handling
Banknote counter
Cash register
EFTPOS
Electronic funds transfer
Financial cryptography
Key management
Payroll
Phantom withdrawal
RAS syndrome
Security of Automated Teller Machines
Self service
Teller system
Verification and validation
References
Further reading
Ali, Peter Ifeanyichukwu. "Impact of automated teller machine on banking services delivery in Nigeria: a stakeholder analysis." Brazilian Journal of Education, Technology and Society 9.1 (2016): 64–72. online
Bátiz-Lazo, Bernardo. Cash and Dash: How ATMs and Computers Changed Banking (Oxford University Press, 2018). online review
Batiz-Lazo, Bernardo. "Emergence and evolution of ATM networks in the UK, 1967–2000." Business History 51.1 (2009): 1-27. online
Batiz-Lazo, Bernardo, and Gustavo del Angel. The Dawn of the Plastic Jungle: The Introduction of the Credit Card in Europe and North America, 1950-1975 (Hoover Institution, 2016), abstract
Bessen, J. Learning by Doing: The Real Connection between Innovation, Wages, and Wealth (Yale UP, 2015)
Hota, Jyotiranjan, Saboohi Nasim, and Sasmita Mishra. "Drivers and Barriers to Adoption of Multivendor ATM Technology in India: Synthesis of Three Empirical Studies." Journal of Technology Management for Growing Economies 9.1 (2018): 89-102. online
McDysan, David E., and Darren L. Spohn. ATM theory and applications (McGraw-Hill Professional, 1998).
Mkpojiogu, Emmanuel OC, and A. Asuquo. "The user experience of ATM users in Nigeria: a systematic review of empirical papers." Journal of Research in National Development (2018). online
Primary sources
"Interview with Mr. Don Wetzel, Co-Patente of the Automatic Teller Machine" (1995) online
External links
The Money Machines: An account of US cash machine history; By Ellen Florian, Fortune.com
World Map and Chart of Automated Teller Machines per 100,000 Adults by Lebanese-economy-forum, World Bank data
Computer-related introductions in 1967
Automation
Banking equipment
Banking technology
Embedded systems
American inventions
English inventions
Payment systems
Articles containing video clips
1967 in economics
20th-century inventions |
46956 | https://en.wikipedia.org/wiki/Cepstrum | Cepstrum | In Fourier analysis, the cepstrum (plural cepstra, adjective cepstral) is the result of computing the inverse Fourier transform (IFT) of the logarithm of the estimated signal spectrum. The method is a tool for investigating periodic structures in frequency spectra. The power cepstrum has applications in the analysis of human speech.
The term cepstrum was derived by reversing the first four letters of spectrum. Operations on cepstra are labelled quefrency analysis (or quefrency alanysis), liftering, or cepstral analysis. The word may be pronounced with either a hard or a soft c, the latter having the advantage of avoiding confusion with kepstrum.
Origin
The concept of the cepstrum was introduced in 1963 by B. P. Bogert, M. J. Healy, and J. W. Tukey. It serves as a tool to investigate periodic structures in frequency spectra. Such effects are related to noticeable echoes or reflections in the signal, or to the occurrence of harmonic frequencies (partials, overtones). Mathematically, it deals with the problem of deconvolution of signals in the frequency space.
References to the Bogert paper, in a bibliography, are often edited incorrectly. The terms "quefrency", "alanysis", "cepstrum" and "saphe" were invented by the authors by rearranging the letters in frequency, analysis, spectrum, and phase. The invented terms are defined in analogy to the older terms.
General definition
The cepstrum is the result of the following sequence of mathematical operations (a short numerical sketch follows the list):
transformation of a signal from the time domain to the frequency domain
computation of the logarithm of the spectral amplitude
transformation to quefrency domain, where the final independent variable, the quefrency, has a time scale.
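A minimal numerical sketch of these three steps using NumPy's FFT (illustrative only; it computes the real cepstrum of a short harmonic test signal and is not an optimized or windowed implementation):

import numpy as np

fs = 8000                                          # sampling rate in Hz (chosen for illustration)
t = np.arange(0, 0.05, 1 / fs)
x = sum(np.sin(2 * np.pi * 200 * k * t) / k for k in range(1, 6))   # harmonics of 200 Hz

spectrum = np.fft.fft(x)                           # 1) time domain -> frequency domain
log_magnitude = np.log(np.abs(spectrum) + 1e-12)   # 2) logarithm of the spectral amplitude
cepstrum = np.fft.ifft(log_magnitude).real         # 3) transform back -> quefrency domain

quefrency = np.arange(len(x)) / fs                 # the quefrency axis has units of time (seconds)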
Types
The cepstrum is used in many variants. Most important are:
power cepstrum: The logarithm is taken from the "power spectrum"
complex cepstrum: The logarithm is taken from the spectrum, which is calculated via Fourier analysis
The following abbreviations are used in the formulas to explain the cepstrum:
$f(t)$: the signal, as a function of time
$\mathcal{F}$: the Fourier transform (with $\mathcal{F}^{-1}$ its inverse)
$|\cdot|$: absolute value (magnitude)
$\varphi$: phase angle of the spectrum
$C_p$: power cepstrum
$c_r$: real cepstrum
$c_c$: complex cepstrum
Power cepstrum
The "cepstrum" was originally defined as power cepstrum by the following relationship:
The power cepstrum has main applications in analysis of sound and vibration signals. It is a complementary tool to spectral analysis.
Sometimes it is also defined as:
$$C_p = \left| \mathcal{F}\!\left\{ \log\!\left( \left| \mathcal{F}\{ f(t) \} \right|^2 \right) \right\} \right|^2$$
Due to this formula, the cepstrum is also sometimes called the spectrum of a spectrum. It can be shown that both formulas are consistent with each other as the frequency spectral distribution remains the same, the only difference being a scaling factor which can be applied afterwards. Some articles prefer the second formula.
Other notations are possible because the logarithm of the power spectrum equals the logarithm of the magnitude spectrum multiplied by a scaling factor of 2:
$$\log\!\left( \left| \mathcal{F}\{ f(t) \} \right|^2 \right) = 2 \log\!\left( \left| \mathcal{F}\{ f(t) \} \right| \right)$$
and therefore:
$$C_p = 4 \cdot \left| \mathcal{F}^{-1}\!\left\{ \log\!\left( \left| \mathcal{F}\{ f(t) \} \right| \right) \right\} \right|^2 = 4 \cdot c_r^2,$$
which provides a relationship to the real cepstrum (see below).
Further, it should be noted that the final squaring operation in the formula for the power cepstrum is sometimes considered unnecessary and is therefore sometimes omitted.
The real cepstrum is directly related to the power cepstrum:
$$4 \cdot c_r^2 = C_p$$
It is derived from the complex cepstrum (defined below) by discarding the phase information (contained in the imaginary part of the complex logarithm). It has a focus on periodic effects in the amplitudes of the spectrum:
$$c_r = \mathcal{F}^{-1}\!\left\{ \log\!\left( \left| \mathcal{F}\{ f(t) \} \right| \right) \right\}$$
Complex cepstrum
The complex cepstrum was defined by Oppenheim in his development of homomorphic system theory. The formula is also provided in other literature:
$$c_c = \mathcal{F}^{-1}\!\left\{ \log\!\left( \mathcal{F}\{ f(t) \} \right) \right\}$$
As $\mathcal{F}\{ f(t) \}$ is complex, the log term can also be written with $\mathcal{F}\{ f(t) \}$ expressed as a product of magnitude and phase, and subsequently as a sum. Further simplification is obvious if log is a natural logarithm with base $e$:
$$\log\!\left( \mathcal{F}\{ f(t) \} \right) = \log\!\left( \left| \mathcal{F}\{ f(t) \} \right| e^{i\varphi(f)} \right) = \log\!\left( \left| \mathcal{F}\{ f(t) \} \right| \right) + i\,\varphi(f)$$
Therefore, the complex cepstrum can also be written as:
$$c_c = \mathcal{F}^{-1}\!\left\{ \log\!\left( \left| \mathcal{F}\{ f(t) \} \right| \right) + i\,\varphi(f) \right\}$$
The complex cepstrum retains the information about the phase. Thus it is always possible to return from the quefrency domain to the time domain by the inverse operation:
$$f(t) = \mathcal{F}^{-1}\!\left\{ b^{\,\mathcal{F}\{ c_c \}} \right\}$$
where $b$ is the base of the used logarithm.
The main application is the modification of the signal in the quefrency domain (liftering) as an operation analogous to filtering in the spectral frequency domain. An example is the suppression of echo effects by suppression of certain quefrencies.
The phase cepstrum (after phase spectrum) is related to the complex cepstrum as
phase spectrum = (complex cepstrum − time reversal of complex cepstrum)².
Related concepts
The independent variable of a cepstral graph is called the quefrency. The quefrency is a measure of time, though not in the sense of a signal in the time domain. For example, if the sampling rate of an audio signal is 44100 Hz and there is a large peak in the cepstrum whose quefrency is 100 samples, the peak indicates the presence of a fundamental frequency that is 44100/100 = 441 Hz. This peak occurs in the cepstrum because the harmonics in the spectrum are periodic and the period corresponds to the fundamental frequency, since harmonics are integer multiples of the fundamental frequency.
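A minimal sketch of this kind of pitch reading in NumPy (illustrative only; it assumes a clean harmonic signal at the sampling rate used in the example above, and a practical pitch tracker would add windowing and more careful peak-picking):

import numpy as np

fs = 44100                                    # sampling rate from the example above
t = np.arange(0, 4096) / fs
x = sum(np.sin(2 * np.pi * 441 * k * t) for k in range(1, 8))     # harmonics of 441 Hz

c = np.fft.ifft(np.log(np.abs(np.fft.fft(x)) + 1e-12)).real       # real cepstrum

low, high = fs // 1000, fs // 50              # search roughly 50 Hz to 1000 Hz
peak = low + np.argmax(c[low:high])           # quefrency (in samples) of the largest peak
print(fs / peak)                              # approximately 441 Hz (peak near 100 samples)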
The kepstrum, which stands for "Kolmogorov-equation power-series time response", is similar to the cepstrum and has the same relation to it as expected value has to statistical average, i.e. cepstrum is the empirically measured quantity, while kepstrum is the theoretical quantity. It was in use before the cepstrum.
The autocepstrum is defined as the cepstrum of the autocorrelation. The autocepstrum is more accurate than the cepstrum in the analysis of data with echoes.
Playing further on the anagram theme, a filter that operates on a cepstrum might be called a lifter. A low-pass lifter is similar to a low-pass filter in the frequency domain. It can be implemented by multiplying by a window in the quefrency domain and then converting back to the frequency domain, resulting in a modified signal, i.e. with signal echo being reduced.
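A rough sketch of such a low-pass lifter (illustrative; the cutoff of 30 quefrency samples is an arbitrary choice, and a practical implementation would typically work on the complex cepstrum so that the signal can be reconstructed exactly):

import numpy as np

def low_pass_lifter(x, cutoff=30):
    # Keep only low-quefrency content (the smooth spectral envelope) and
    # discard high-quefrency detail such as pitch harmonics or echo ripple.
    log_spectrum = np.log(np.abs(np.fft.fft(x)) + 1e-12)
    c = np.fft.ifft(log_spectrum).real        # real cepstrum
    window = np.zeros_like(c)
    window[:cutoff] = 1.0                      # low quefrencies
    window[-cutoff + 1:] = 1.0                 # ...and their mirrored counterparts
    return np.fft.fft(c * window).real         # smoothed log-magnitude spectrum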
Interpretation
The cepstrum can be seen as information about the rate of change in the different spectrum bands. It was originally invented for characterizing the seismic echoes resulting from earthquakes and bomb explosions. It has also been used to determine the fundamental frequency of human speech and to analyze radar signal returns. Cepstrum pitch determination is particularly effective because the effects of the vocal excitation (pitch) and vocal tract (formants) are additive in the logarithm of the power spectrum and thus clearly separate.
The cepstrum is a representation used in homomorphic signal processing, to convert signals combined by convolution (such as a source and filter) into sums of their cepstra, for linear separation. In particular, the power cepstrum is often used as a feature vector for representing the human voice and musical signals. For these applications, the spectrum is usually first transformed using the mel scale. The result is called the mel-frequency cepstrum or MFC (its coefficients are called mel-frequency cepstral coefficients, or MFCCs). It is used for voice identification, pitch detection and much more. The cepstrum is useful in these applications because the low-frequency periodic excitation from the vocal cords and the formant filtering of the vocal tract, which convolve in the time domain and multiply in the frequency domain, are additive and in different regions in the quefrency domain.
Note that a pure sine wave cannot be used to test the cepstrum for its pitch determination from quefrency, as a pure sine wave does not contain any harmonics and does not lead to quefrency peaks. Rather, a test signal containing harmonics should be used (such as the sum of at least two sines where the second sine is some harmonic (multiple) of the first sine, or better, a signal with a square or triangle waveform, as such signals provide many overtones in the spectrum).
An important property of the cepstral domain is that the convolution of two signals can be expressed as the addition of their complex cepstra:
$$x_1 * x_2 \;\rightarrow\; \hat{x}_1 + \hat{x}_2$$
Applications
The concept of the cepstrum has led to numerous applications:
dealing with reflection inference (radar, sonar applications, earth seismology)
estimation of speaker fundamental frequency (pitch)
speech analysis and recognition
medical applications in analysis of electroencephalogram (EEG) and brain waves
machine vibration analysis based on harmonic patterns (gearbox faults, turbine blade failures, ...)
Recently, cepstrum-based deconvolution was used to remove the effect of the stochastic impulse trains, which give rise to the sEMG signal, from the power spectrum of the sEMG signal itself. In this way, only information on motor unit action potential (MUAP) shape and amplitude was retained, and then used to estimate the parameters of a time-domain model of the MUAP itself.
A short-time cepstrum analysis was proposed by Schroeder and Noll for application to pitch determination of human speech.
References
Further reading
"Speech Signal Analysis"
"Speech analysis: Cepstral analysis vs. LPC", www.advsolned.com
"A tutorial on Cepstrum and LPCCs"
Frequency-domain analysis
Signal processing |
47086 | https://en.wikipedia.org/wiki/MacOS%20Server | MacOS Server | macOS Server, formerly Mac OS X Server and OS X Server, is a series of Unix-like server operating systems developed by Apple Inc., based on macOS and later add-on software packages for the latter. macOS Server adds server functionality and system administration tools to macOS and provides tools to manage both macOS-based computers and iOS-based devices.
Versions of Mac OS X Server prior to version 10.7 “Lion” were sold as complete, standalone server operating systems; starting with Mac OS X 10.7 “Lion,” Mac OS X Server (and its successors OS X Server and macOS Server) has been offered as an add-on software package, sold through the Mac App Store, that is installed on top of a corresponding macOS installation.
macOS Server at one point provided network services such as a mail transfer agent, AFP and SMB servers, an LDAP server, and a domain name server, as well as server applications including a Web server, database, and calendar server. The latest version of macOS server only includes functionality related to user and group management, Xsan, and mobile device management through profiles.
Overview
Mac OS X Server was provided as the operating system for Xserve computers, rack mounted server computers designed by Apple. Also, it was optionally pre-installed on the Mac Mini and Mac Pro and was sold separately for use on any Macintosh computer meeting its minimum requirements.
macOS Server versions prior to Lion are based on an open source foundation called Darwin and use open industry standards and protocols.
Versions
Mac OS X Server 1.0 (Rhapsody)
The first version of Mac OS X was Mac OS X Server 1.0. Mac OS X Server 1.0 was based on Rhapsody, a hybrid of OPENSTEP from NeXT Computer and Mac OS 8.5.1. The GUI looked like a mixture of Mac OS 8's Platinum appearance with OPENSTEP's NeXT-based interface. It included a runtime layer called Blue Box for running legacy Mac OS-based applications within a separate window. There was discussion of implementing a 'transparent blue box' which would intermix Mac OS applications with those written for Rhapsody's Yellow Box environment, but this would not happen until Mac OS X's Classic environment. Apple File Services, Macintosh Manager, QuickTime Streaming Server, WebObjects, and NetBoot were included with Mac OS X Server 1.0. It could not use FireWire devices.
The last release is Mac OS X Server 1.2v3.
Mac OS X Server 10.0 (Cheetah)
Released: May 21, 2001
Mac OS X Server 10.0 included the new Aqua user interface, Apache, PHP, MySQL, Tomcat, WebDAV support, Macintosh Manager, and NetBoot.
Mac OS X Server 10.1 (Puma)
Released: September 25, 2001
Mac OS X Server 10.1 featured improved performance, increased system stability, and decreased file transfer times compared to Mac OS X Server 10.0. Support was added for RAID 0 and RAID 1 storage configurations, and Mac OS 9.2.1 in NetBoot.
Mac OS X Server 10.2 (Jaguar)
Released: August 23, 2002
The 10.2 Mac OS X Server release includes updated Open Directory user and file management, which with this release is based on LDAP, beginning the deprecation of the NeXT-originated NetInfo architecture. The new Workgroup Manager interface improved configuration significantly. The release also saw major updates to NetBoot and NetInstall. Many common network services are provided such as NTP, SNMP, web server (Apache), mail server (Postfix and Cyrus), LDAP (OpenLDAP), AFP, and print server. The inclusion of Samba version 3 allows tight integration with Windows clients and servers. MySQL v4.0.16 and PHP v4.3.7 are also included.
Mac OS X Server 10.3 (Panther)
Released: October 24, 2003
The 10.3 Mac OS X Server release includes updated Open Directory user and file management, which with this release is based on LDAP, beginning the deprecation of the NeXT-originated NetInfo architecture. The new Workgroup Manager interface improved configuration significantly. Many common network services are provided such as NTP, SNMP, web server (Apache), mail server (Postfix and Cyrus), LDAP (OpenLDAP), AFP, and print server. The inclusion of Samba version 3 allows tight integration with Windows clients and servers. MySQL v4.0.16 and PHP v4.3.7 are also included.
Mac OS X Server 10.4 (Tiger)
Released: April 29, 2005
The 10.4 release adds 64-bit application support, Access Control Lists, Xgrid, link aggregation, e-mail spam filtering (SpamAssassin), virus detection (ClamAV), Gateway Setup Assistant, and servers for Software Update, iChat Server using XMPP, Boot Camp Assistant, Dashboard, and weblogs.
On August 10, 2006, Apple announced the first Universal Binary release of Mac OS X Server, version 10.4.7, supporting both PowerPC and Intel processors. At the same time Apple announced the release of the Intel-based Mac Pro and Xserve systems.
Mac OS X Server 10.5 (Leopard Server)
Released: October 26, 2007.
Leopard Server sold for $999 for an unlimited-client license. Mac OS X Server version 10.5.x ‘Leopard’ was the last major version of Mac OS X Server to support PowerPC-based servers and workstations such as the Apple Xserve G5 and Power Mac G5.
Features
RADIUS Server. Leopard Server includes FreeRADIUS for network authentication. It ships with support for wireless access stations; however, it can be modified into a fully functioning FreeRADIUS server.
Ruby on Rails. Mac OS X Server version 10.5 ‘Leopard’ was the first version to ship with Ruby on Rails, the server-side Web application framework used by sites such as GitHub.
Mac OS X Server 10.6 (Snow Leopard Server)
Released: August 28, 2009
Snow Leopard Server sold for $499 and included unlimited client licenses.
New Features:
Full 64-bit operating system. On appropriate systems with 4 GB of RAM or more, Snow Leopard Server uses a 64-bit kernel to address up to a theoretical 16 TB of RAM.
iCal Server 2 with improved CalDAV support, a new web calendaring application, push notifications and the ability to send email invitations to non-iCal users.
Address Book Server provides a central location for users to store and access personal contacts across multiple Macs and synchronized iPhones. Based on the CardDAV protocol standard.
Wiki Server 2, with server side Quick Look and the ability to view wiki content on iPhone.
A new Mail server engine that supports push email so users receive immediate access to new messages. However, Apple's implementation of push email is not supported for Apple's iPhone.
Podcast Producer 2 with dual-source video support. Also includes a new Podcast Composer application to automate the production process, making it simple to create podcasts with a customized, consistent look and feel. Podcast Composer creates a workflow to add titles, transitions and effects, save to a desired format and share to wikis, blogs, iTunes, iTunes U, Final Cut Server or Podcast Library.
Mobile Access Server enables iPhone and Mac users to access secured network services, including corporate websites, online business applications, email, calendars and contacts. Without requiring additional software, Mobile Access Server acts as a reverse proxy server and provides SSL encryption and authentication between the user's iPhone or Mac and a private network.
Mac OS X 10.7 (Lion Server)
Released: July 20, 2011
In releasing the developer preview of Mac OS X Lion in February 2011, Apple indicated that beginning with Lion, Mac OS X Server would be bundled with the operating system and would not be marketed as a separate product. However, a few months later, the company said it would instead sell the server components as a US$49.99 add-on to Lion, distributed through the Mac App Store (as well as Lion itself). The combined cost of an upgrade to Lion and the purchase of the OS X Server add-on, which costs approximately US$50, was nonetheless significantly lower than the retail cost of Snow Leopard Server (US$499).
Lion Server came with unlimited client licenses as did Snow Leopard Server.
Lion Server includes new versions of iCal Server, Wiki Server, and Mail Server. More significantly, Lion Server can be used for iOS mobile device management.
Starting with Apple Mac OS X Server Version 10.7 “Lion,” PostgreSQL replaces MySQL as the database provided with Mac OS X Server, coinciding with Oracle Corporation’s acquisition of Sun Microsystems and Oracle’s subsequent attempts to tighten MySQL’s licensing restrictions and to exert influence on MySQL’s previously open and independent development model.
OS X 10.8 (Mountain Lion Server)
Released: July 25, 2012.
Like Lion, Mountain Lion had no separate server edition. An OS X Server package was available for Mountain Lion from the Mac App Store for US$19.99, which included a server management application called Server, as well as other additional administrative tools to manage client profiles and Xsan.
Mountain Lion Server, like Lion Server, was provided with unlimited client licenses, and once purchased could be run on an unlimited number of systems.
OS X 10.9 (Mavericks Server)
Released: October 22, 2013.
There is no separate server edition of Mavericks, just as there was no separate server edition of Mountain Lion. There is a package, available from the Mac App Store for $19.99, that includes a server management app called Server, as well as other additional administrative tools to manage client profiles and Xsan, and once purchased can be run on an unlimited number of machines. Those enrolled in the Mac or iOS developer programs are given a code to download OS X Server for free.
OS X 10.10 (Yosemite Server 4.0)
Released: October 16, 2014.
There is no separate server edition of Yosemite, just as there was no separate server edition of Mavericks. There is a package, available from the Mac App Store for $19.99, that includes a server management app called Server, as well as other additional administrative tools to manage client profiles and Xsan, and once purchased can be run on an unlimited number of machines. Those enrolled in the Mac or iOS developer programs are given a code to download OS X Server for free.
OS X 10.11 (Server 5.0)
Released: September 16, 2015.
Version 5.0.3 of OS X Server operates with either OS X Yosemite 10.10.5 and OS X El Capitan 10.11.
OS X 10.11 (Server 5.1)
Released: March 21, 2016.
OS X Server 5.1 requires 10.11.4 El Capitan, as previous versions of OS X Server won't work on 10.11.4 El Capitan.
macOS 10.12 (Server 5.2)
Released: September 20, 2016.
Version 5.2 of macOS Server operates with either OS X El Capitan 10.11 or macOS Sierra 10.12.
macOS 10.12 (Server 5.3)
Released: March 17, 2017.
Version 5.3 of macOS Server only operates on macOS Sierra (10.12.4) and later.
For macOS Server 5.3.1:
macOS 10.13 (Server 5.4)
Released: September 25, 2017.
Version 5.4 of macOS Server only operates on macOS High Sierra (10.13) and later.
macOS 10.13.3 (Server 5.5)
Released: January 23, 2018.
Version 5.5 of macOS Server only operates on macOS High Sierra (10.13.3) and later.
macOS 10.13.5 (Server 5.6)
Released: April 24, 2018.
Version 5.6 of macOS Server only operates on macOS High Sierra (10.13.5) and later.
macOS 10.14 (Server 5.7)
Released: September 28, 2018.
Version 5.7 of macOS Server only operates on macOS Mojave (10.14) and later.
With this version Apple stopped bundling open source services such as Calendar Server, Contacts Server, the Mail Server, DNS, DHCP, VPN Server, and Websites with macOS Server. Included services are now limited to Profile Manager, Open Directory and Xsan.
macOS 10.14 (Server 5.8)
Released: March 25, 2019.
Version 5.8 of macOS Server only operates on macOS Mojave (10.14.4) and later. Profile Manager supports new restrictions, payloads, and commands.
macOS 10.15 (Server 5.9)
Released: October 8, 2019.
Version 5.9 of macOS Server only operates on macOS Catalina (10.15) and later.
macOS 10.15 (Server 5.10)
Released: April 1, 2020.
Version 5.10 of macOS Server only operates on macOS Catalina (10.15) and later.
macOS 11 (Server 5.11)
Released: December 15, 2020.
Version 5.11 of macOS Server only operates on macOS Big Sur (11) and later.
macOS 11 (Server 5.11.1)
Released: May ??, 2021.
Version 5.11.1 of macOS Server only operates on macOS Big Sur (11) and later.
macOS 12 (Server 5.12)
Released: December 8, 2021.
Version 5.12 of macOS Server only operates on macOS Monterey (12) and later.
Server administrator tools
Beginning with the release of OS X 10.8 – Mountain Lion – there is only one Administrative tool – "Server.app". This application is purchased and downloaded via the Mac App Store. This application is updated independently of macOS, also via the Mac App Store.
This Server tool is used to configure, maintain and monitor one or more macOS Server installations.
One purchase allows it to be installed on any licensed macOS installation.
The following information applies only to versions of Mac OS X Server prior to Mountain Lion (10.8)
Mac OS X Server comes with a variety of configuration tools that can be installed on non-server Macs as well:
Server Admin
Server Preferences (application)
Server Assistant
Server Monitor
System Image Utility
Workgroup Manager
Xgrid Admin
System requirements
Technical specifications
File and print services
Mac (AFP, AppleTalk PAP, IPP)
Windows (SMB/CIFS: Apple SMBX in Lion Server — previously Samba 2, IPP)
Unix-like systems (NFS, LPR/LPD, IPP)
Internet (FTP, WebDAV)
Directory services and authentication
Open Directory (OpenLDAP, Kerberos, SASL)
Windows NT Domain Services (removed in Lion Server, previously Samba 2)
Backup Domain Controller (BDC)
LDAP directory connector
Active Directory connector
BSD configuration files (/etc)
RADIUS
Mail services
SMTP (Postfix)
POP and IMAP (Dovecot)
SSL/TLS encryption (OpenSSL)
Mailing lists (Mailman)
Webmail (RoundCube)
Junk mail filtering (SpamAssassin)
Virus detection (ClamAV)
Calendaring
iCal Server (CalDAV, iTIP, iMIP)
Web hosting
Apache Web server (2.2 and 1.3)
SSL/TLS (OpenSSL)
WebDAV
Perl (5.8.8), PHP (5.2), Ruby (1.8.6), Rails (1.2.3)
MySQL 5 (replaced by PostgreSQL in Lion Server)
Capistrano, Mongrel
Collaboration services
Wiki Server (RSS)
iChat Server 3 (XMPP)
Application servers
Apache Tomcat (6)
Java SE virtual machine
WebObjects deployment (5.4)
Apache Axis (SOAP)
Media streaming
QuickTime Streaming Server 6 (removed in Lion Server)
QuickTime Broadcaster 1.5
Client management
Managed Preferences
NetBoot
NetInstall
Software Update Server
Portable home directories
Profile Manager (new in Lion Server)
Networking and VPN
DNS server (BIND 9)
DHCP server
NAT server
VPN server (L2TP/IPSec, PPTP)
Firewall (IPFW2)
NTP
Distributed computing
Xgrid 2
High-availability features
Automatic recovery
File system journaling
IP failover (dropped in OS X 10.7 and later)
Software RAID
Disk space monitor
File systems
HFS+ (journaled, case sensitive and case insensitive)
FAT
NTFS (write support only available on Mac OS X Snow Leopard Server)
UFS (read-only)
Management features
Server Assistant
Server Admin
Server Preferences
Server Status widget
Workgroup Manager
System Image Utility
Secure Shell (SSH2)
Server Monitor
RAID Utility
SNMPv3 (Net-SNMP)
References
External links
Apple – macOS Server
Official feedback page
Apple Introduces Mac OS X Server – Apple press release
Major Mac OS X Server v10.1 Update Now Available – Apple press release
Apple Announces Mac OS X Server “Jaguar”, World’s Easiest-to-Manage UNIX-Based Server Software – Apple press release
Apple Announces Mac OS X Server “Panther” – Apple press release
Apple Announces Mac OS X Server “Tiger” – Apple press release
Apple Announces New Mac OS X Server "Leopard" Features – Apple press release
Apple Introduces Mac OS X Server Snow Leopard – Apple press release
Server
Software version histories
Mobile device management |
47289 | https://en.wikipedia.org/wiki/FastTrack | FastTrack | FastTrack is a peer-to-peer (P2P) protocol that was used by the Kazaa, Grokster, iMesh and Morpheus file sharing programs. FastTrack was the most popular file sharing network in 2003, and was used mainly for the exchange of MP3 music files. The network had approximately 2.4 million concurrent users in 2003. It is estimated that the total number of users was greater than that of Napster at its peak.
History
The FastTrack protocol and Kazaa were created and developed by Estonian programmers of BlueMoon Interactive headed by Jaan Tallinn, the same team that later created Skype. After it was sold to Niklas Zennström from Sweden and Janus Friis from Denmark, it was introduced in March 2001 by their Dutch company Consumer Empowerment. It appeared toward the end of the first generation of P2P networks – Napster shut down in July of that year. There are three FastTrack-based networks, and they use mutually incompatible versions of the protocol. The most popular clients on each are Kazaa (and its variations), Grokster, and iMesh. For more information about the various lawsuits surrounding Kazaa and Sharman Networks, see Kazaa.
Technology
FastTrack uses supernodes to improve scalability.
To allow downloading from multiple sources, FastTrack employs the UUHash hashing algorithm. While UUHash allows very large files to be checksummed in a short time, even on slow computers, it also allows massive corruption of a file to go unnoticed. Many people, as well as the RIAA, have exploited this vulnerability to spread corrupt and fake files on the network.
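The actual UUHash specification is not reproduced here; the toy sketch below merely illustrates the general idea of hashing only sparse samples of a file, which is why such schemes are fast but let corruption between the sampled regions go undetected (the chunk size and spacing are invented for illustration):

import hashlib

def sparse_hash(path, chunk_size=300 * 1024, samples=8):
    # Hash the first chunk plus a few widely spaced chunks instead of the whole file.
    # Any corruption that falls entirely between sampled chunks leaves the digest unchanged.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        f.seek(0, 2)
        size = f.tell()
        offset = 0
        for _ in range(samples):
            f.seek(offset)
            digest.update(f.read(chunk_size))
            if offset + chunk_size >= size:
                break
            offset = max(offset * 2, chunk_size)   # exponentially spaced sample points
    return digest.hexdigest()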
The FastTrack protocol uses encryption and was not documented by its creators. The first clients were all closed source software. However, initialization data for the encryption algorithms is sent in the clear and no public key encryption is used, so reverse engineering was made comparatively easy. In 2003, open source programmers succeeded in reverse-engineering the portion of the protocol dealing with client-supernode communication, but the supernode-supernode communication protocol remains largely unknown.
Clients
The following programs are or have been FastTrack clients:
Kazaa and variants
KCeasy (requires the gIFT-fasttrack plugin)
Grokster
iMesh
Morpheus, until 2002
Apollon - KDE-Based
giFT-FastTrack – a giFT plugin
MLDonkey, a free multi-platform multi-network file sharing client
See also
Kad network
Overnet
Open Music Model
Comparison of file sharing applications
References
External links
giFT-FastTrack home page
Documentation of the known parts of the FastTrack protocol, from giFT-FastTrack
Boardwatch interview with Niklas Zennstrom, July 17, 2003
FTWall - A firewalling technique for blocking the fast-track protocol.
Advanced Peer-Based Technology Business Models. Ghosemajumder, Shuman. MIT Sloan School of Management, 2002.
Music Downloads: Pirates- or Customers?. Silverthorne, Sean. Harvard Business School Working Knowledge, 2004.
File sharing networks
File transfer protocols |
48356 | https://en.wikipedia.org/wiki/Substitution%20cipher | Substitution cipher | In cryptography, a substitution cipher is a method of encrypting in which units of plaintext are replaced with ciphertext, in a defined manner, with the help of a key; the "units" may be single letters (the most common), pairs of letters, triplets of letters, mixtures of the above, and so forth. The receiver deciphers the text by performing the inverse substitution process to extract the original message.
Substitution ciphers can be compared with transposition ciphers. In a transposition cipher, the units of the plaintext are rearranged in a different and usually quite complex order, but the units themselves are left unchanged. By contrast, in a substitution cipher, the units of the plaintext are retained in the same sequence in the ciphertext, but the units themselves are altered.
There are a number of different types of substitution cipher. If the cipher operates on single letters, it is termed a simple substitution cipher; a cipher that operates on larger groups of letters is termed polygraphic. A monoalphabetic cipher uses fixed substitution over the entire message, whereas a polyalphabetic cipher uses a number of substitutions at different positions in the message, where a unit from the plaintext is mapped to one of several possibilities in the ciphertext and vice versa.
Simple substitution
Substitution of single letters separately—simple substitution—can be demonstrated by writing out the alphabet in some order to represent the substitution. This is termed a substitution alphabet. The cipher alphabet may be shifted or reversed (creating the Caesar and Atbash ciphers, respectively) or scrambled in a more complex fashion, in which case it is called a mixed alphabet or deranged alphabet. Traditionally, mixed alphabets may be created by first writing out a keyword, removing repeated letters in it, then writing all the remaining letters in the alphabet in the usual order.
Using this system, the keyword "zebras" gives us the following alphabets:
Plaintext alphabet: ABCDEFGHIJKLMNOPQRSTUVWXYZ
Ciphertext alphabet: ZEBRASCDFGHIJKLMNOPQTUVWXY
A message
flee at once. we are discovered!
enciphers to
SIAA ZQ LKBA. VA ZOA RFPBLUAOAR!
Usually the ciphertext is written out in blocks of fixed length, omitting punctuation and spaces; this is done to disguise word boundaries from the plaintext and to help avoid transmission errors. These blocks are called "groups", and sometimes a "group count" (i.e. the number of groups) is given as an additional check. Five-letter groups are often used, dating from when messages used to be transmitted by telegraph:
SIAAZ QLKBA VAZOA RFPBL UAOAR
If the length of the message happens not to be divisible by five, it may be padded at the end with "nulls". These can be any characters that decrypt to obvious nonsense, so that the receiver can easily spot them and discard them.
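A minimal Python sketch of this keyword-mixed simple substitution (illustrative only; the function names are not from any library, and the keyword "zebras" is the one used in the example above) reproduces the ciphertext shown:
from string import ascii_lowercase
def mixed_alphabet(keyword):
    # keyword first (repeated letters removed), then the remaining letters in order
    seen = []
    for ch in keyword.lower() + ascii_lowercase:
        if ch.isalpha() and ch not in seen:
            seen.append(ch)
    return "".join(seen).upper()
def encipher(plaintext, keyword, group=5):
    cipher = mixed_alphabet(keyword)          # e.g. ZEBRASCDFGHIJKLMNOPQTUVWXY
    table = str.maketrans(ascii_lowercase, cipher)
    letters = "".join(c for c in plaintext.lower() if c.isalpha())
    ct = letters.translate(table)
    # write the ciphertext out in fixed-length groups
    return " ".join(ct[i:i + group] for i in range(0, len(ct), group))
print(encipher("flee at once. we are discovered!", "zebras"))
# SIAAZ QLKBA VAZOA RFPBL UAOAR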
The ciphertext alphabet is sometimes different from the plaintext alphabet; for example, in the pigpen cipher, the ciphertext consists of a set of symbols derived from a grid.
Such features make little difference to the security of a scheme, however – at the very least, any set of strange symbols can be transcribed back into an A-Z alphabet and dealt with as normal.
In lists and catalogues for salespeople, a very simple encryption is sometimes used to replace numeric digits by letters, for instance by assigning the digits 1234567890 to the letters of a ten-letter codeword such as MAKEPROFIT.
Example: MAT would be used to represent 120.
Security for simple substitution ciphers
Although the traditional keyword method for creating a mixed substitution alphabet is simple, a serious disadvantage is that the last letters of the alphabet (which are mostly low frequency) tend to stay at the end. A stronger way of constructing a mixed alphabet is to generate the substitution alphabet completely randomly.
Although the number of possible substitution alphabets is very large (26! ≈ 2^88.4, or about 88 bits), this cipher is not very strong, and is easily broken. Provided the message is of reasonable length (see below), the cryptanalyst can deduce the probable meaning of the most common symbols by analyzing the frequency distribution of the ciphertext. This allows formation of partial words, which can be tentatively filled in, progressively expanding the (partial) solution (see frequency analysis for a demonstration of this). In some cases, underlying words can also be determined from the pattern of their letters; for example, attract, osseous, and words with those two as the root are the only common English words with the pattern ABBCADB. Many people solve such ciphers for recreation, as with cryptogram puzzles in the newspaper.
According to the unicity distance of English, 27.6 letters of ciphertext are required to crack a mixed alphabet simple substitution. In practice, typically about 50 letters are needed, although some messages can be broken with fewer if unusual patterns are found. In other cases, the plaintext can be contrived to have a nearly flat frequency distribution, and much longer plaintexts will then be required by the cryptanalyst.
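The first step of such an attack, tallying symbol frequencies, is easy to automate; the short Python sketch below (illustrative only) counts the letters of a ciphertext so that the most common symbols can be matched against typical English letter frequencies such as E, T and A:
from collections import Counter
def letter_frequencies(ciphertext):
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    # most common ciphertext symbols first, as candidates for E, T, A, ...
    return [(ch, round(100 * n / total, 1)) for ch, n in counts.most_common()]
print(letter_frequencies("SIAAZ QLKBA VAZOA RFPBL UAOAR")[:3])
# 'A' accounts for 28% of the letters, matching plaintext 'e' in the example above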
Nomenclator
One once-common variant of the substitution cipher is the nomenclator. Named after the public official who announced the titles of visiting dignitaries, this cipher uses a small code sheet containing letter, syllable and word substitution tables, sometimes homophonic, that typically converted symbols into numbers. Originally the code portion was restricted to the names of important people, hence the name of the cipher; in later years, it covered many common words and place names as well. The symbols for whole words (codewords in modern parlance) and letters (cipher in modern parlance) were not distinguished in the ciphertext. The Rossignols' Great Cipher used by Louis XIV of France was one.
Nomenclators were the standard fare of diplomatic correspondence, espionage, and advanced political conspiracy from the early fifteenth century to the late eighteenth century; most conspirators were and have remained less cryptographically sophisticated. Although government intelligence cryptanalysts were systematically breaking nomenclators by the mid-sixteenth century, and superior systems had been available since 1467, the usual response to cryptanalysis was simply to make the tables larger. By the late eighteenth century, when the system was beginning to die out, some nomenclators had 50,000 symbols.
Nevertheless, not all nomenclators were broken; today, cryptanalysis of archived ciphertexts remains a fruitful area of historical research.
Homophonic substitution
An early attempt to increase the difficulty of frequency analysis attacks on substitution ciphers was to disguise plaintext letter frequencies by homophony. In these ciphers, plaintext letters map to more than one ciphertext symbol. Usually, the highest-frequency plaintext symbols are given more equivalents than lower frequency letters. In this way, the frequency distribution is flattened, making analysis more difficult.
Since more than 26 characters will be required in the ciphertext alphabet, various solutions are employed to invent larger alphabets. Perhaps the simplest is to use a numeric substitution 'alphabet'. Another method consists of simple variations on the existing alphabet; uppercase, lowercase, upside down, etc. More artistically, though not necessarily more securely, some homophonic ciphers employed wholly invented alphabets of fanciful symbols.
The Beale ciphers are another example of a homophonic cipher. This is a story of buried treasure that was described in 1819–21 by use of a ciphered text that was keyed to the Declaration of Independence. Here each ciphertext character was represented by a number. The number was determined by taking the plaintext character and finding a word in the Declaration of Independence that started with that character and using the numerical position of that word in the Declaration of Independence as the encrypted form of that letter. Since many words in the Declaration of Independence start with the same letter, the encryption of that character could be any of the numbers associated with the words in the Declaration of Independence that start with that letter. Deciphering the encrypted text character X (which is a number) is as simple as looking up the Xth word of the Declaration of Independence and using the first letter of that word as the decrypted character.
Another homophonic cipher was described by Stahl and was one of the first attempts to provide for computer security of data systems in computers through encryption. Stahl constructed the cipher in such a way that the number of homophones for a given character was in proportion to the frequency of the character, thus making frequency analysis much more difficult.
The book cipher and straddling checkerboard are types of homophonic cipher.
Francesco I Gonzaga, Duke of Mantua, used the earliest known example of a homophonic substitution cipher in 1401 for correspondence with one Simone de Crema.
Polyalphabetic substitution
The work of Al-Qalqashandi (1355-1418), based on the earlier work of Ibn al-Durayhim (1312–1359), contained the first published discussion of the substitution and transposition of ciphers, as well as the first description of a polyalphabetic cipher, in which each plaintext letter is assigned more than one substitute. Polyalphabetic substitution ciphers were later described in 1467 by Leone Battista Alberti in the form of disks. Johannes Trithemius, in his book Steganographia (Ancient Greek for "hidden writing") introduced the now more standard form of a tableau (see below; ca. 1500 but not published until much later). A more sophisticated version using mixed alphabets was described in 1563 by Giovanni Battista della Porta in his book, De Furtivis Literarum Notis (Latin for "On concealed characters in writing").
In a polyalphabetic cipher, multiple cipher alphabets are used. To facilitate encryption, all the alphabets are usually written out in a large table, traditionally called a tableau. The tableau is usually 26×26, so that 26 full ciphertext alphabets are available. The method of filling the tableau, and of choosing which alphabet to use next, defines the particular polyalphabetic cipher. All such ciphers are easier to break than once believed, as substitution alphabets are repeated for sufficiently large plaintexts.
One of the most popular was that of Blaise de Vigenère. First published in 1585, it was considered unbreakable until 1863, and indeed was commonly called le chiffre indéchiffrable (French for "indecipherable cipher").
In the Vigenère cipher, the first row of the tableau is filled out with a copy of the plaintext alphabet, and successive rows are simply shifted one place to the left. (Such a simple tableau is called a tabula recta, and mathematically corresponds to adding the plaintext and key letters, modulo 26.) A keyword is then used to choose which ciphertext alphabet to use. Each letter of the keyword is used in turn, and then they are repeated again from the beginning. So if the keyword is 'CAT', the first letter of plaintext is enciphered under alphabet 'C', the second under 'A', the third under 'T', the fourth under 'C' again, and so on. In practice, Vigenère keys were often phrases several words long.
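Because the tabula recta lookup is equivalent to adding plaintext and key letters modulo 26, the cipher can be sketched in a few lines of Python (illustrative only; the keyword 'CAT' is the one above, while the plaintext is just an example):
from itertools import cycle
A = ord('A')
def vigenere(text, key, decrypt=False):
    sign = -1 if decrypt else 1
    keystream = cycle(key.upper())
    out = []
    for ch in text.upper():
        if ch.isalpha():
            k = ord(next(keystream)) - A
            out.append(chr((ord(ch) - A + sign * k) % 26 + A))
        else:
            out.append(ch)   # non-letters pass through without consuming key letters
    return "".join(out)
ct = vigenere("ATTACK AT DAWN", "CAT")
print(ct)                          # CTMCCD CT WCWG
print(vigenere(ct, "CAT", True))   # ATTACK AT DAWN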
In 1863, Friedrich Kasiski published a method (probably discovered secretly and independently before the Crimean War by Charles Babbage) which enabled the calculation of the length of the keyword in a Vigenère ciphered message. Once this was done, ciphertext letters that had been enciphered under the same alphabet could be picked out and attacked separately as a number of semi-independent simple substitutions - complicated by the fact that within one alphabet letters were separated and did not form complete words, but simplified by the fact that usually a tabula recta had been employed.
As such, even today a Vigenère type cipher should theoretically be difficult to break if mixed alphabets are used in the tableau, if the keyword is random, and if the total length of ciphertext is less than 27.67 times the length of the keyword. These requirements are rarely understood in practice, and so Vigenère enciphered message security is usually less than it might have been.
Other notable polyalphabetics include:
The Gronsfeld cipher. This is identical to the Vigenère except that only 10 alphabets are used, and so the "keyword" is numerical.
The Beaufort cipher. This is practically the same as the Vigenère, except the tabula recta is replaced by a backwards one, mathematically equivalent to ciphertext = key - plaintext. This operation is self-inverse, whereby the same table is used for both encryption and decryption.
The autokey cipher, which mixes plaintext with a key to avoid periodicity.
The running key cipher, where the key is made very long by using a passage from a book or similar text.
Modern stream ciphers can also be seen, from a sufficiently abstract perspective, to be a form of polyalphabetic cipher in which all the effort has gone into making the keystream as long and unpredictable as possible.
Polygraphic substitution
In a polygraphic substitution cipher, plaintext letters are substituted in larger groups, instead of substituting letters individually. The first advantage is that the frequency distribution is much flatter than that of individual letters (though not actually flat in real languages; for example, 'TH' is much more common than 'XQ' in English). Second, the larger number of symbols requires correspondingly more ciphertext to productively analyze letter frequencies.
To substitute pairs of letters would take a substitution alphabet 676 symbols long (26 × 26 = 676). In the same De Furtivis Literarum Notis mentioned above, della Porta actually proposed such a system, with a 20 x 20 tableau (for the 20 letters of the Italian/Latin alphabet he was using) filled with 400 unique glyphs. However the system was impractical and probably never actually used.
The earliest practical digraphic cipher (pairwise substitution), was the so-called Playfair cipher, invented by Sir Charles Wheatstone in 1854. In this cipher, a 5 x 5 grid is filled with the letters of a mixed alphabet (two letters, usually I and J, are combined). A digraphic substitution is then simulated by taking pairs of letters as two corners of a rectangle, and using the other two corners as the ciphertext (see the Playfair cipher main article for a diagram). Special rules handle double letters and pairs falling in the same row or column. Playfair was in military use from the Boer War through World War II.
Several other practical polygraphics were introduced in 1901 by Felix Delastelle, including the bifid and four-square ciphers (both digraphic) and the trifid cipher (probably the first practical trigraphic).
The Hill cipher, invented in 1929 by Lester S. Hill, is a polygraphic substitution which can combine much larger groups of letters simultaneously using linear algebra. Each letter is treated as a digit in base 26: A = 0, B = 1, and so on. (In a variation, 3 extra symbols are added to make the basis prime.) A block of n letters is then considered as a vector of n dimensions, and multiplied by an n × n matrix, modulo 26. The components of the matrix are the key, and should be random provided that the matrix is invertible modulo 26 (to ensure decryption is possible). A mechanical version of the Hill cipher of dimension 6 was patented in 1929.
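A small illustrative Python sketch of the Hill idea, using a 2 × 2 key matrix (the matrix below is only a textbook-style example, not taken from this article; a usable key must be invertible modulo 26):
A_ORD = ord('A')
def hill_encrypt(plaintext, key):
    # encrypt pairs of letters with a 2x2 key matrix, working modulo 26
    nums = [ord(c) - A_ORD for c in plaintext.upper() if c.isalpha()]
    if len(nums) % 2:
        nums.append(ord('X') - A_ORD)          # pad an incomplete final block
    out = []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out.append((key[0][0] * x + key[0][1] * y) % 26)
        out.append((key[1][0] * x + key[1][1] * y) % 26)
    return "".join(chr(n + A_ORD) for n in out)
key = [[3, 3],
       [2, 5]]   # determinant 9 is coprime with 26, so the matrix is invertible mod 26
print(hill_encrypt("HELP", key))   # HIAT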
The Hill cipher is vulnerable to a known-plaintext attack because it is completely linear, so it must be combined with some non-linear step to defeat this attack. The combination of wider and wider weak, linear diffusive steps like a Hill cipher, with non-linear substitution steps, ultimately leads to a substitution–permutation network (e.g. a Feistel cipher), so it is possible – from this extreme perspective – to consider modern block ciphers as a type of polygraphic substitution.
Mechanical substitution ciphers
Between around World War I and the widespread availability of computers (for some governments this was approximately the 1950s or 1960s; for other organizations it was a decade or more later; for individuals it was no earlier than 1975), mechanical implementations of polyalphabetic substitution ciphers were widely used. Several inventors had similar ideas about the same time, and rotor cipher machines were patented four times in 1919. The most important of the resulting machines was the Enigma, especially in the versions used by the German military from approximately 1930. The Allies also developed and used rotor machines (e.g., SIGABA and Typex).
All of these were similar in that the substituted letter was chosen electrically from amongst the huge number of possible combinations resulting from the rotation of several letter disks. Since one or more of the disks rotated mechanically with each plaintext letter enciphered, the number of alphabets used was astronomical. Early versions of these machines were, nevertheless, breakable. William F. Friedman of the US Army's SIS found vulnerabilities in Hebern's rotor machine early on, and GC&CS's Dillwyn Knox solved versions of the Enigma machine (those without the "plugboard") well before WWII began. Traffic protected by essentially all of the German military Enigmas was broken by Allied cryptanalysts, most notably those at Bletchley Park, beginning with the German Army variant used in the early 1930s. This version was broken by inspired mathematical insight by Marian Rejewski in Poland.
As far as is publicly known, no messages protected by the SIGABA and Typex machines were ever broken during or near the time when these systems were in service.
The one-time pad
One type of substitution cipher, the one-time pad, is quite special. It was invented near the end of World War I by Gilbert Vernam and Joseph Mauborgne in the US. It was mathematically proven unbreakable by Claude Shannon, probably during World War II; his work was first published in the late 1940s. In its most common implementation, the one-time pad can be called a substitution cipher only from an unusual perspective; typically, the plaintext letter is combined (not substituted) in some manner (e.g., XOR) with the key material character at that position.
The one-time pad is, in most cases, impractical as it requires that the key material be as long as the plaintext, actually random, used once and only once, and kept entirely secret from all except the sender and intended receiver. When these conditions are violated, even marginally, the one-time pad is no longer unbreakable. Soviet one-time pad messages sent from the US for a brief time during World War II used non-random key material. US cryptanalysts, beginning in the late 40s, were able to, entirely or partially, break a few thousand messages out of several hundred thousand. (See Venona project)
In a mechanical implementation, rather like the Rockex equipment, the one-time pad was used for messages sent on the Moscow-Washington hot line established after the Cuban Missile Crisis.
Substitution in modern cryptography
Substitution ciphers as discussed above, especially the older pencil-and-paper hand ciphers, are no longer in serious use. However, the cryptographic concept of substitution carries on even today. From a sufficiently abstract perspective, modern bit-oriented block ciphers (e.g., DES, or AES) can be viewed as substitution ciphers on an enormously large binary alphabet. In addition, block ciphers often include smaller substitution tables called S-boxes. See also substitution–permutation network.
Substitution ciphers in popular culture
Sherlock Holmes breaks a substitution cipher in "The Adventure of the Dancing Men". There, the cipher remained undeciphered for years if not decades; not due to its difficulty, but because no one suspected it to be a code, instead considering it childish scribblings.
The Al Bhed language in Final Fantasy X is actually a substitution cipher, although it is pronounced phonetically (i.e. "you" in English is translated to "oui" in Al Bhed, but is pronounced the same way that "oui" is pronounced in French).
The Minbari's alphabet from the Babylon 5 series is a substitution cipher from English.
The language in Starfox Adventures: Dinosaur Planet spoken by native Saurians and Krystal is also a substitution cipher of the English alphabet.
The television program Futurama contained a substitution cipher in which all 26 letters were replaced by symbols and called "Alien Language". This was deciphered rather quickly by die-hard viewers after a "Slurm" ad showed the word "Drink" in both plain English and the alien language, thus giving away the key. Later, the producers created a second alien language that used a combination of replacement and mathematical ciphers. Once the English letter corresponding to a symbol of the second alien language is deciphered, the numerical value of that letter (0 for "A" through 25 for "Z") is added (modulo 26) to the value of the previous letter to reveal the actual intended letter. These messages can be seen throughout every episode of the series and the subsequent movies.
At the end of every season 1 episode of the cartoon series Gravity Falls, during the credit roll, there is one of three simple substitution ciphers: A -3 Caesar cipher (hinted by "3 letters back" at the end of the opening sequence), an Atbash cipher, or a letter-to-number simple substitution cipher. The season 1 finale encodes a message with all three. In the second season, Vigenère ciphers are used in place of the various monoalphabetic ciphers, each using a key hidden within its episode.
In the Artemis Fowl series by Eoin Colfer there are three substitution ciphers; Gnommish, Centaurean and Eternean, which run along the bottom of the pages or are somewhere else within the books.
In Bitterblue, the third novel by Kristin Cashore, substitution ciphers serve as an important form of coded communication.
In the 2013 video game BioShock Infinite, there are substitution ciphers hidden throughout the game in which the player must find code books to help decipher them and gain access to a surplus of supplies.
In the anime adaptation of The Devil Is a Part-Timer!, the language of Ente Isla, called Entean, uses a substitution cipher in which only A, E, I, O, U, L, N, and Q keep their original positions.
See also
Ban (unit) with Centiban Table
Copiale cipher
Leet
Vigenère cipher
Topics in cryptography
References
External links
quipqiup An automated tool for solving simple substitution ciphers both with and without known word boundaries.
CrypTool Exhaustive free and open-source e-learning tool to perform and break substitution ciphers and many more.
Substitution Cipher Toolkit Application that can - amongst other things - decrypt texts encrypted with substitution cipher automatically
SCB Cipher Solver A monoalphabetic cipher cracker.
Monoalphabetic Cipher Implementation for Encrypting File (C Language).
Substitution cipher implementation with Caesar and Atbash ciphers (Java)
Online simple substitution implementation (Flash)
Online simple substitution implementation for MAKEPROFIT code (CGI script: Set input in URL, read output in web page)
Monoalphabetic Substitution Breaking A Monoalphabetic Encryption System Using a Known Plaintext Attack
http://cryptoclub.math.uic.edu/substitutioncipher/sub2.htm
Classical ciphers
Cryptography
History of cryptography |
48358 | https://en.wikipedia.org/wiki/Transposition%20cipher | Transposition cipher | In cryptography, a transposition cipher is a method of encryption by which the positions held by units of plaintext (which are commonly characters or groups of characters) are shifted according to a regular system, so that the ciphertext constitutes a permutation of the plaintext. That is, the order of the units is changed (the plaintext is reordered). Mathematically a bijective function is used on the characters' positions to encrypt and an inverse function to decrypt.
Following are some implementations.
Rail Fence cipher
The Rail Fence cipher is a form of transposition cipher that gets its name from the way in which it is encoded. In the rail fence cipher, the plaintext is written downwards and diagonally on successive "rails" of an imaginary fence, then moving up when we get to the bottom. The message is then read off in rows. For example, using three "rails" and a message of 'WE ARE DISCOVERED FLEE AT ONCE', the cipherer writes out:
W . . . E . . . C . . . R . . . L . . . T . . . E
. E . R . D . S . O . E . E . F . E . A . O . C .
. . A . . . I . . . V . . . D . . . E . . . N . .
Then reads off:
WECRL TEERD SOEEF EAOCA IVDEN
(The ciphertext has been broken up into blocks of five to help avoid errors. This is a common technique used to make the ciphertext more easily readable. The spacing is not related to spaces in the plaintext and so does not carry any information about the plaintext.)
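The zigzag write-out is easy to reproduce in a short Python sketch (illustrative only); it regenerates the three-rail example above:
def rail_fence_encrypt(plaintext, rails):
    letters = [c for c in plaintext.upper() if c.isalpha()]
    fence = [[] for _ in range(rails)]
    rail, step = 0, 1
    for ch in letters:
        fence[rail].append(ch)
        if rail == 0:
            step = 1             # start moving down again at the top rail
        elif rail == rails - 1:
            step = -1            # bounce back up at the bottom rail
        rail += step
    return "".join("".join(row) for row in fence)
ct = rail_fence_encrypt("WE ARE DISCOVERED FLEE AT ONCE", 3)
print(" ".join(ct[i:i + 5] for i in range(0, len(ct), 5)))
# WECRL TEERD SOEEF EAOCA IVDEN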
Scytale
The rail fence cipher follows a pattern similar to that of the scytale, a mechanical system of producing a transposition cipher used by the ancient Greeks. The system consisted of a cylinder and a ribbon that was wrapped around the cylinder. The message to be encrypted was written on the coiled ribbon. The letters of the original message would be rearranged when the ribbon was uncoiled from the cylinder. However, the message was easily decrypted when the ribbon was recoiled on a cylinder of the same diameter as the encrypting cylinder. Using the same example as before, if the cylinder has a radius such that only three letters can fit around its circumference, the cipherer writes out:
W . . E . . A . . R . . E . . D . . I . . S . . C
. O . . V . . E . . R . . E . . D . . F . . L . .
. . E . . E . . A . . T . . O . . N . . C . . E .
In this example, the cylinder is running horizontally and the ribbon is wrapped around vertically. Hence, the cipherer then reads off:
WOEEV EAEAR RTEEO DDNIF CSLEC
Route cipher
In a route cipher, the plaintext is first written out in a grid of given dimensions, then read off in a pattern given in the key. For example, using the same plaintext that we used for rail fence:
W R I O R F E O E
E E S V E L A N J
A D C E D E T C X
The key might specify "spiral inwards, clockwise, starting from the top right". That would give a cipher text of:
EJXCTEDEC DAEWRIORF EONALEVSE
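This particular route can be sketched in Python as a clockwise inward spiral that starts at the top-right corner (illustrative only; the grid is the one from the example above):
def spiral_from_top_right(grid):
    # read a grid in a clockwise inward spiral, starting at the top-right corner
    top, bottom = 0, len(grid) - 1
    left, right = 0, len(grid[0]) - 1
    out = []
    while top <= bottom and left <= right:
        for r in range(top, bottom + 1):            # down the right edge
            out.append(grid[r][right])
        right -= 1
        if top <= bottom:
            for c in range(right, left - 1, -1):    # left along the bottom edge
                out.append(grid[bottom][c])
            bottom -= 1
        if left <= right:
            for r in range(bottom, top - 1, -1):    # up the left edge
                out.append(grid[r][left])
            left += 1
        if top <= bottom:
            for c in range(left, right + 1):        # right along the top edge
                out.append(grid[top][c])
            top += 1
    return "".join(out)
grid = ["WRIORFEOE",
        "EESVELANJ",
        "ADCEDETCX"]
print(spiral_from_top_right(grid))   # EJXCTEDECDAEWRIORFEONALEVSE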
Route ciphers have many more keys than a rail fence. In fact, for messages of reasonable length, the number of possible keys is potentially too great to be enumerated even by modern machinery. However, not all keys are equally good. Badly chosen routes will leave excessive chunks of plaintext, or text simply reversed, and this will give cryptanalysts a clue as to the routes.
A variation of the route cipher was the Union Route Cipher, used by Union forces during the American Civil War. This worked much like an ordinary route cipher, but transposed whole words instead of individual letters. Because this would leave certain highly sensitive words exposed, such words would first be concealed by code. The cipher clerk may also add entire null words, which were often chosen to make the ciphertext humorous.
Columnar transposition
In a columnar transposition, the message is written out in rows of a fixed length, and then read out again column by column, and the columns are chosen in some scrambled order. Both the width of the rows and the permutation of the columns are usually defined by a keyword. For example, the keyword ZEBRAS is of length 6 (so the rows are of length 6), and the permutation is defined by the alphabetical order of the letters in the keyword. In this case, the order would be "6 3 2 4 1 5".
In a regular columnar transposition cipher, any spare spaces are filled with nulls; in an irregular columnar transposition cipher, the spaces are left blank. Finally, the message is read off in columns, in the order specified by the keyword. For example, suppose we use the keyword ZEBRAS and the message WE ARE DISCOVERED FLEE AT ONCE. In a regular columnar transposition, we write this into the grid as follows:
6 3 2 4 1 5
W E A R E D
I S C O V E
R E D F L E
E A T O N C
E Q K J E U
providing five nulls (QKJEU); these letters can be randomly selected as they just fill out the incomplete columns and are not part of the message. The ciphertext is then read off as:
EVLNE ACDTK ESEAQ ROFOJ DEECU WIREE
In the irregular case, the columns are not completed by nulls:
6 3 2 4 1 5
W E A R E D
I S C O V E
R E D F L E
E A T O N C
E
This results in the following ciphertext:
EVLNA CDTES EAROF ODEEC WIREE
To decipher it, the recipient has to work out the column lengths by dividing the message length by the key length. Then they can write the message out in columns again, then re-order the columns by reforming the key word.
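A compact Python sketch of the regular columnar transposition above (illustrative only; the keyword ZEBRAS and the nulls QKJEU are those of the example):
def columnar_encrypt(plaintext, keyword):
    letters = [c for c in plaintext.upper() if c.isalpha()]
    cols = len(keyword)
    # rank the keyword letters alphabetically, breaking ties left to right
    order = sorted(range(cols), key=lambda i: (keyword[i], i))
    rows = [letters[i:i + cols] for i in range(0, len(letters), cols)]
    out = []
    for col in order:                       # read the columns in keyword order
        for row in rows:
            if col < len(row):
                out.append(row[col])
    return "".join(out)
ct = columnar_encrypt("WE ARE DISCOVERED FLEE AT ONCE" + "QKJEU", "ZEBRAS")
print(" ".join(ct[i:i + 5] for i in range(0, len(ct), 5)))
# EVLNE ACDTK ESEAQ ROFOJ DEECU WIREE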
In a variation, the message is blocked into segments that are the key length long and to each segment the same permutation (given by the key) is applied. This is equivalent to a columnar transposition where the read-out is by rows instead of columns.
Columnar transposition continued to be used for serious purposes as a component of more complex ciphers at least into the 1950s.
Double transposition
A single columnar transposition could be attacked by guessing possible column lengths, writing the message out in its columns (but in the wrong order, as the key is not yet known), and then looking for possible anagrams. Thus to make it stronger, a double transposition was often used. This is simply a columnar transposition applied twice. The same key can be used for both transpositions, or two different keys can be used.
As an example, we can take the result of the irregular columnar transposition in the previous section, and perform a second encryption with a different keyword, STRIPE, which gives the permutation "564231":
5 6 4 2 3 1
E V L N A C
D T E S E A
R O F O D E
E C W I R E
E
As before, this is read off columnwise to give the ciphertext:
CAEEN SOIAE DRLEF WEDRE EVTOC
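The same idea can be sketched in Python with an irregular columnar routine (no padding; incomplete final rows are simply read short), applied once with ZEBRAS and then again with STRIPE (illustrative only):
def columnar_encrypt(text, keyword):
    letters = [c for c in text.upper() if c.isalpha()]
    cols = len(keyword)
    order = sorted(range(cols), key=lambda i: (keyword[i], i))
    rows = [letters[i:i + cols] for i in range(0, len(letters), cols)]
    # irregular read-out: short final rows contribute nothing to missing columns
    return "".join(row[col] for col in order for row in rows if col < len(row))
once = columnar_encrypt("WE ARE DISCOVERED FLEE AT ONCE", "ZEBRAS")
print(once)                                    # EVLNACDTESEAROFODEECWIREE
twice = columnar_encrypt(once, "STRIPE")
print(" ".join(twice[i:i + 5] for i in range(0, len(twice), 5)))
# CAEEN SOIAE DRLEF WEDRE EVTOC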
If multiple messages of exactly the same length are encrypted using the same keys, they can be anagrammed simultaneously. This can lead to both recovery of the messages, and to recovery of the keys (so that every other message sent with those keys can be read).
During World War I, the German military used a double columnar transposition cipher, changing the keys infrequently. The system was regularly solved by the French, who named it Übchi and were typically able to quickly find the keys once they had intercepted a number of messages of the same length, which generally took only a few days. However, the French success became widely known and, after a publication in Le Matin, the Germans changed to a new system on 18 November 1914.
During World War II, the double transposition cipher was used by Dutch Resistance groups, the French Maquis and the British Special Operations Executive (SOE), which was in charge of managing underground activities in Europe. It was also used by agents of the American Office of Strategic Services and as an emergency cipher for the German Army and Navy.
Until the invention of the VIC cipher, double transposition was generally regarded as the most complicated cipher that an agent could operate reliably under difficult field conditions.
Cryptanalysis
The double transposition cipher can be treated as a single transposition with a key as long as the product of the lengths of the two keys.
In late 2013, a double transposition challenge, regarded by its author as undecipherable, was solved by George Lasry using a divide-and-conquer approach where each transposition was attacked individually.
Myszkowski transposition
A variant form of columnar transposition, proposed by Émile Victor Théodore Myszkowski in 1902, requires a keyword with recurrent letters. In usual practice, subsequent occurrences of a keyword letter are treated as the next letter in alphabetical order, e.g., the keyword TOMATO yields a numeric keystring of "532164."
In Myszkowski transposition, recurrent keyword letters are numbered identically, TOMATO yielding a keystring of "432143."
4 3 2 1 4 3
W E A R E D
I S C O V E
R E D F L E
E A T O N C
E
Plaintext columns with unique numbers are transcribed downward;
those with recurring numbers are transcribed left to right:
ROFOA CDTED SEEEA CWEIV RLENE
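A short Python sketch of Myszkowski's read-out rule (columns with a unique number are read downward, while identically numbered columns are read together, row by row, left to right); it reproduces the TOMATO example above (illustrative only):
def myszkowski_encrypt(plaintext, keyword):
    letters = [c for c in plaintext.upper() if c.isalpha()]
    cols = len(keyword)
    rows = [letters[i:i + cols] for i in range(0, len(letters), cols)]
    out = []
    for letter in sorted(set(keyword)):             # key letters in alphabetical order
        group = [i for i, k in enumerate(keyword) if k == letter]
        for row in rows:                            # tied columns are read row by row, left to right
            for col in group:
                if col < len(row):
                    out.append(row[col])
    return "".join(out)
ct = myszkowski_encrypt("WE ARE DISCOVERED FLEE AT ONCE", "TOMATO")
print(" ".join(ct[i:i + 5] for i in range(0, len(ct), 5)))
# ROFOA CDTED SEEEA CWEIV RLENE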
Disrupted transposition
A disrupted transposition cipher further complicates the transposition pattern with irregular filling of the rows of the matrix, i.e. with some spaces intentionally left blank (or blackened out like in the Rasterschlüssel 44), or filled later with either another part of the plaintext or random letters. One possible algorithm is to start a new row whenever the plaintext reaches a password character. Another simple option would be to use a password that places blanks according to its number sequence. E.g. "SECRET" would be converted to the sequence "5,2,1,4,3,6": the 5th field of the matrix is crossed out, the count starts again and the second field is crossed out, and so on. The following example would be a matrix set up for columnar transposition with the columnar key "CRYPTO" and filled with crossed out fields according to the disruption key "SECRET" (marked with an asterisk), whereafter the message "we are discovered, flee at once" is placed in the leftover spaces. The resulting ciphertext (the columns read according to the transposition key) is "WCEEO ERET RIVFC EODN SELE ADA".
C R Y P T O
1 4 6 3 5 2
W E A R * E
* * D I S *
C O * V E R
E D * F L E
E * A * * T
O N * C E *
Grilles
Another form of transposition cipher uses grilles, or physical masks with cut-outs. This can produce a highly irregular transposition over the period specified by the size of the grille, but requires the correspondents to keep a physical key secret. Grilles were first proposed in 1550, and were still in military use for the first few months of World War One.
Detection and cryptanalysis
Since transposition does not affect the frequency of individual symbols, simple transposition can be easily detected by the cryptanalyst by doing a frequency count. If the ciphertext exhibits a frequency distribution very similar to plaintext, it is most likely a transposition. This can then often be attacked by anagramming—sliding pieces of ciphertext around, then looking for sections that look like anagrams of English words, and solving the anagrams. Once such anagrams have been found, they reveal information about the transposition pattern, and can consequently be extended.
Simpler transpositions also often suffer from the property that keys very close to the correct key will reveal long sections of legible plaintext interspersed by gibberish. Consequently, such ciphers may be vulnerable to optimum seeking algorithms such as genetic algorithms.
A detailed description of the cryptanalysis of a German transposition cipher can be found in chapter 7 of Herbert Yardley's "The American Black Chamber."
A cipher used by the Zodiac Killer, called "Z-340", organized into triangular sections with substitution of 63 different symbols for the letters and diagonal "knight move" transposition, remained unsolved for over 51 years, until an international team of private citizens cracked it on December 5, 2020, using specialized software.
Combinations
Transposition is often combined with other techniques such as substitution. For example, a simple substitution cipher combined with a columnar transposition avoids the weakness of both. Replacing high frequency ciphertext symbols with high frequency plaintext letters does not reveal chunks of plaintext because of the transposition. Anagramming the transposition does not work because of the substitution. The technique is particularly powerful if combined with fractionation (see below). A disadvantage is that such ciphers are considerably more laborious and error prone than simpler ciphers.
Fractionation
Transposition is particularly effective when employed with fractionation – that is, a preliminary stage that divides each plaintext symbol into two or more ciphertext symbols. For example, the plaintext alphabet could be written out in a grid, and every letter in the message replaced by its co-ordinates (see Polybius square and Straddling checkerboard).
Another method of fractionation is to simply convert the message to Morse code, with a symbol for spaces as well as dots and dashes.
When such a fractionated message is transposed, the components of individual letters become widely separated in the message, thus achieving Claude E. Shannon's diffusion. Examples of ciphers that combine fractionation and transposition include the bifid cipher, the trifid cipher, the ADFGVX cipher and the VIC cipher.
Another choice would be to replace each letter with its binary representation, transpose that, and then convert the new binary string into the corresponding ASCII characters. Looping the scrambling process on the binary string multiple times before changing it into ASCII characters would likely make it harder to break. Many modern block ciphers use more complex forms of transposition related to this simple idea.
See also
Substitution cipher
Ban (unit)
Topics in cryptography
Notes
References
Kahn, David. The Codebreakers: The Story of Secret Writing. Rev Sub. Scribner, 1996.
Yardley, Herbert. The American Black Chamber. Bobbs-Merrill, 1931.
Classical ciphers
Permutations |
48362 | https://en.wikipedia.org/wiki/ROT13 | ROT13 | ROT13 ("rotate by 13 places", sometimes hyphenated ROT-13) is a simple letter substitution cipher that replaces a letter with the 13th letter after it in the alphabet. ROT13 is a special case of the Caesar cipher which was developed in ancient Rome.
Because there are 26 letters (2×13) in the basic Latin alphabet, ROT13 is its own inverse; that is, to undo ROT13, the same algorithm is applied, so the same action can be used for encoding and decoding. The algorithm provides virtually no cryptographic security, and is often cited as a canonical example of weak encryption.
ROT13 is used in online forums as a means of hiding spoilers, punchlines, puzzle solutions, and offensive materials from the casual glance. ROT13 has inspired a variety of letter and word games online, and is frequently mentioned in newsgroup conversations.
Description
Applying ROT13 to a piece of text merely requires examining its alphabetic characters and replacing each one by the letter 13 places further along in the alphabet, wrapping back to the beginning if necessary.
A becomes N, B becomes O, and so on up to M, which becomes Z, then the sequence continues at the beginning of the alphabet: N becomes A, O becomes B, and so on to Z, which becomes M. Only those letters which occur in the English alphabet are affected; numbers, symbols, whitespace, and all other characters are left unchanged. Because there are 26 letters in the English alphabet and 26 = 2 × 13, the ROT13 function is its own inverse:
ROT13(ROT13(x)) = x for any basic Latin-alphabet text x.
In other words, two successive applications of ROT13 restore the original text (in mathematics, this is sometimes called an involution; in cryptography, a reciprocal cipher).
The transformation can be done using a lookup table, such as the following:
Input:  ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz
Output: NOPQRSTUVWXYZABCDEFGHIJKLM nopqrstuvwxyzabcdefghijklm
For example, in the following joke, the punchline has been obscured by ROT13:
Why did the chicken cross the road?
Gb trg gb gur bgure fvqr!
Transforming the entire text via ROT13, the answer to the joke is revealed:
Jul qvq gur puvpxra pebff gur ebnq?
To get to the other side!
A second application of ROT13 would restore the original.
Usage
ROT13 is a special case of the encryption algorithm known as a Caesar cipher, used by Julius Caesar in the 1st century BC.
Johann Ernst Elias Bessler, an 18th century clockmaker and constructor of perpetual motion machines, pointed out that ROT13 encodes his surname as Orffyre. He used its latinised form, Orffyreus, as his pseudonym.
ROT13 was in use in the net.jokes newsgroup by the early 1980s. It is used to hide potentially offensive jokes, or to obscure an answer to a puzzle or other spoiler. A shift of thirteen was chosen over other values, such as three as in the original Caesar cipher, because thirteen is the value for which encoding and decoding are equivalent, thereby allowing the convenience of a single command for both. ROT13 is typically supported as a built-in feature to newsreading software. Email addresses are also sometimes encoded with ROT13 to hide them from less sophisticated spam bots. It is also used to circumvent email screening and spam filtering. By obscuring an email's content, the screening algorithm is unable to identify the email as, for instance, a security risk, and allows it into the recipient's in-box.
In encrypted, normal, English-language text of any significant size, ROT13 is recognizable from some letter/word patterns. The words "n", "V" (capitalized only), and "gur" (ROT13 for "a", "I", and "the"), and words ending in "yl" ("ly") are examples.
ROT13 is not intended to be used where secrecy is of any concern—the use of a constant shift means that the encryption effectively has no key, and decryption requires no more knowledge than the fact that ROT13 is in use. Even without this knowledge, the algorithm is easily broken through frequency analysis. Because of its utter unsuitability for real secrecy, ROT13 has become a catchphrase to refer to any conspicuously weak encryption scheme; a critic might claim that "56-bit DES is little better than ROT13 these days". Also, in a play on real terms like "double DES", the terms "double ROT13", "ROT26", or "2ROT13" crop up with humorous intent (due to the fact that, since applying ROT13 to an already ROT13-encrypted text restores the original plaintext, ROT26 is equivalent to no encryption at all), including a spoof academic paper entitled "On the 2ROT13 Encryption Algorithm". By extension, triple-ROT13 (used in joking analogy with 3DES) is equivalent to regular ROT13.
In December 1999, it was found that Netscape Communicator used ROT13 as part of an insecure scheme to store email passwords. In 2001, Russian programmer Dimitry Sklyarov demonstrated that an eBook vendor, New Paradigm Research Group (NPRG), used ROT13 to encrypt their documents; it has been speculated that NPRG may have mistaken the ROT13 toy example—provided with the Adobe eBook software development kit—for a serious encryption scheme. Windows XP uses ROT13 on some of its registry keys. ROT13 is also used in the Unix fortune program to conceal potentially offensive dicta.
Letter games and net culture
ROT13 provides an opportunity for letter games. Some words will, when transformed with ROT13, produce another word. Examples of 7-letter pairs in the English language are abjurer and nowhere, and Chechen and purpura. Other examples of words like these are shown in the table. The pair gnat and tang is an example of words that are both ROT13 reciprocals and reversals.
The 1989 International Obfuscated C Code Contest (IOCCC) included an entry by Brian Westley. Westley's computer program can be encoded in ROT13 or reversed and still compiles correctly. Its operation, when executed, is either to perform ROT13 encoding on, or to reverse its input.
The newsgroup alt.folklore.urban coined a word—furrfu—that was the ROT13 encoding of the frequently encoded utterance "sheesh". "Furrfu" evolved in mid-1992 as a response to postings repeating urban myths on alt.folklore.urban, after some posters complained that "Sheesh!" as a response to newcomers was being overused.
Variants
ROT5 is a practice similar to ROT13 that applies to numeric digits (0 to 9). ROT13 and ROT5 can be used together in the same message, sometimes called ROT18 (18 = 13 + 5) or ROT13.5.
ROT47 is a derivative of ROT13 which, in addition to scrambling the basic letters, treats numbers and common symbols. Instead of using the sequence A–Z as the alphabet, ROT47 uses a larger set of characters from the common character encoding known as ASCII. Specifically, the 7-bit printable characters, excluding space, from decimal 33 '!' through 126 '~', 94 in total, taken in the order of the numerical values of their ASCII codes, are rotated by 47 positions, without special consideration of case. For example, the character A is mapped to p, while a is mapped to 2. The use of a larger alphabet produces a more thorough obfuscation than that of ROT13; for example, a telephone number such as +1-415-839-6885 is not obvious at first sight from the scrambled result Z'\c`d\gbh\eggd. On the other hand, because ROT47 introduces numbers and symbols into the mix without discrimination, it is more immediately obvious that the text has been enciphered.
Example:
The Quick Brown Fox Jumps Over The Lazy Dog.
enciphers to
%96 "F:4< qC@H? u@I yF>AD ~G6C %96 {2KJ s@8]
The GNU C library, a set of standard routines available for use in computer programming, contains a function—memfrob()—which has a similar purpose to ROT13, although it is intended for use with arbitrary binary data. The function operates by combining each byte with the binary pattern 00101010 (42) using the exclusive or (XOR) operation. This effects a simple XOR cipher. Like ROT13, XOR (and therefore memfrob()) is self-reciprocal, and provides a similar, virtually absent, level of security.
Implementation
tr
ROT13 and ROT47 are fairly easy to implement using the Unix command-line utility tr; to encrypt the string "The Quick Brown Fox Jumps Over The Lazy Dog" in ROT13:
$ # Map upper case A-Z to N-ZA-M and lower case a-z to n-za-m
$ tr 'A-Za-z' 'N-ZA-Mn-za-m' <<< "The Quick Brown Fox Jumps Over The Lazy Dog"
Gur Dhvpx Oebja Sbk Whzcf Bire Gur Ynml Qbt
and the same string for ROT47:
$ echo "The Quick Brown Fox Jumps Over The Lazy Dog" | tr '\!-~' 'P-~\!-O'
%96 "F:4< qC@H? u@I yF>AD ~G6C %96 {2KJ s@8
Emacs and Vim
In Emacs, one can ROT13 the buffer or a selection with the following commands:
M-x toggle-rot13-mode
M-x rot13-other-window
M-x rot13-region
and in the Vim text editor, one can ROT13 a buffer with the command:
ggg?G
Python
In Python, the this module ("The Zen of Python") is implemented using ROT13:
>>> import this
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
>>> with open(this.__file__) as f:
... print(f.read())
s = """Gur Mra bs Clguba, ol Gvz Crgref
Ornhgvshy vf orggre guna htyl.
Rkcyvpvg vf orggre guna vzcyvpvg.
Fvzcyr vf orggre guna pbzcyrk.
Pbzcyrk vf orggre guna pbzcyvpngrq.
Syng vf orggre guna arfgrq.
Fcnefr vf orggre guna qrafr.
Ernqnovyvgl pbhagf.
Fcrpvny pnfrf nera'g fcrpvny rabhtu gb oernx gur ehyrf.
Nygubhtu cenpgvpnyvgl orngf chevgl.
Reebef fubhyq arire cnff fvyragyl.
Hayrff rkcyvpvgyl fvyraprq.
Va gur snpr bs nzovthvgl, ershfr gur grzcgngvba gb thrff.
Gurer fubhyq or bar-- naq cersrenoyl bayl bar --boivbhf jnl gb qb vg.
Nygubhtu gung jnl znl abg or boivbhf ng svefg hayrff lbh'er Qhgpu.
Abj vf orggre guna arire.
Nygubhtu arire vf bsgra orggre guna *evtug* abj.
Vs gur vzcyrzragngvba vf uneq gb rkcynva, vg'f n onq vqrn.
Vs gur vzcyrzragngvba vf rnfl gb rkcynva, vg znl or n tbbq vqrn.
Anzrfcnprf ner bar ubaxvat terng vqrn -- yrg'f qb zber bs gubfr!"""
d = {}
for c in (65, 97):
    for i in range(26):
        d[chr(i+c)] = chr((i+13) % 26 + c)
print("".join([d.get(c, c) for c in s]))
The codecs module provides a 'rot13' text transform.
>>> import codecs
>>> print(codecs.encode(this.s, 'rot13'))
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
Ruby
A simple Ruby implementation (written for Ruby 2.5.5): given an array of single-character strings, it returns the encoded message as an array of characters.
def rot13(secret_messages)
  alpha = 'abcdefghijklmnopqrstuvwxyz'.split('')
  nstrr = []
  secret_messages.each do |char|
    # shift lower-case letters 13 places along the alphabet
    alpha.each_with_index do |c, i|
      nstrr << alpha[(i + 13) % 26] if char == c
    end
    # pass spaces through unchanged
    nstrr << ' ' if char == ' '
  end
  nstrr
end
puts ['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd'].join
puts rot13(['h', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd']).join
# prints "hello world" then "uryyb jbeyq"
Go
Parse each byte and perform ROT13 if it is an ASCII alphabetic character:
func rot13(b byte) byte {
    // ASCII 65 is 'A' and 90 is 'Z'
    if b > 64 && b < 91 {
        b = ((b - 65 + 13) % 26) + 65
    }
    // ASCII 97 is 'a' and 122 is 'z'
    if b > 96 && b < 123 {
        b = ((b - 97 + 13) % 26) + 97
    }
    return b
}
See also
Cryptanalysis
Atbash
References
External links
Online converter for ROT13, ROT5, ROT18, ROT47, Atbash and Caesar cipher.
ROT13 to Text on PureTables.com
Classical ciphers
Internet culture |
48375 | https://en.wikipedia.org/wiki/Triple%20DES | Triple DES | In cryptography, Triple DES (3DES or TDES), officially the Triple Data Encryption Algorithm (TDEA or Triple DEA), is a symmetric-key block cipher, which applies the DES cipher algorithm three times to each data block. The Data Encryption Standard's (DES) 56-bit key is no longer considered adequate in the face of modern cryptanalytic techniques and supercomputing power. A CVE released in 2016, CVE-2016-2183, disclosed a major security vulnerability in the DES and 3DES encryption algorithms. Because of this CVE and the inadequate key size of DES and 3DES, NIST deprecated DES and 3DES for new applications in 2017, and for all applications by 2023. They have been replaced with the more secure, more robust AES.
While the government and industry standards abbreviate the algorithm's name as TDES (Triple DES) and TDEA (Triple Data Encryption Algorithm), RFC 1851 referred to it as 3DES from the time it first promulgated the idea, and this name has since come into wide use by most vendors, users, and cryptographers.
History
In 1978, a triple encryption method using DES with two 56-bit keys was proposed by Walter Tuchman; in 1981 Merkle and Hellman proposed a more secure triple key version of 3DES with 112 bits of security.
Standards
The Triple Data Encryption Algorithm is variously defined in several standards documents:
RFC 1851, The ESP Triple DES Transform (approved in 1995)
ANSI ANS X9.52-1998 Triple Data Encryption Algorithm Modes of Operation (approved in 1998, withdrawn in 2008)
FIPS PUB 46-3 Data Encryption Standard (DES) (approved in 1999, withdrawn in 2005)
NIST Special Publication 800-67 Revision 2 Recommendation for the Triple Data Encryption Algorithm (TDEA) Block Cipher (approved in 2017)
ISO/IEC 18033-3:2010: Part 3: Block ciphers (approved in 2005)
Algorithm
The original DES cipher's key size of 56 bits was generally sufficient when that algorithm was designed, but the availability of increasing computational power made brute-force attacks feasible. Triple DES provides a relatively simple method of increasing the key size of DES to protect against such attacks, without the need to design a completely new block cipher algorithm.
A naive approach to increase the strength of a block encryption algorithm with short key length (like DES) would be to use two keys instead of one, and encrypt each block twice: C = E_K2(E_K1(P)). If the original key length is k bits, one would hope this scheme provides security equivalent to using a key 2k bits long. Unfortunately, this approach is vulnerable to the meet-in-the-middle attack: given a known plaintext pair (P, C) such that C = E_K2(E_K1(P)), one can recover the key pair (K1, K2) in about 2^(k+1) steps, instead of the 2^(2k) steps one would expect from an ideally secure algorithm with 2k bits of key.
Therefore, Triple DES uses a "key bundle" that comprises three DES keys, K1, K2 and K3, each of 56 bits (excluding parity bits). The encryption algorithm is:
ciphertext = E_K3(D_K2(E_K1(plaintext)))
That is, DES encrypt with K1, DES decrypt with K2, then DES encrypt with K3.
Decryption is the reverse:
plaintext = D_K1(E_K2(D_K3(ciphertext)))
That is, decrypt with K3, encrypt with K2, then decrypt with K1.
Each triple encryption encrypts one block of 64 bits of data.
In each case the middle operation is the reverse of the first and last. This improves the strength of the algorithm when using keying option 2 and provides backward compatibility with DES with keying option 3.
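The EDE composition itself is easy to express in code. In the Python sketch below, des_encrypt_block and des_decrypt_block are deliberately trivial stand-ins (a keyed byte-wise XOR), not real DES; they exist only so that the composition and the keying-option-3 degeneration can be run end to end.
def des_encrypt_block(block, key):
    # stand-in for single-block DES encryption (NOT real DES): keyed byte-wise XOR
    return bytes(b ^ k for b, k in zip(block, key))
def des_decrypt_block(block, key):
    # stand-in for single-block DES decryption (XOR is its own inverse)
    return bytes(b ^ k for b, k in zip(block, key))
def tdea_encrypt_block(block, k1, k2, k3):
    # EDE: DES encrypt with K1, DES decrypt with K2, DES encrypt with K3
    return des_encrypt_block(des_decrypt_block(des_encrypt_block(block, k1), k2), k3)
def tdea_decrypt_block(block, k1, k2, k3):
    # the reverse: decrypt with K3, encrypt with K2, decrypt with K1
    return des_decrypt_block(des_encrypt_block(des_decrypt_block(block, k3), k2), k1)
k1, k2, k3 = b"A" * 8, b"B" * 8, b"C" * 8   # keying option 1: three independent 8-byte keys
block = b"8bytes!!"
assert tdea_decrypt_block(tdea_encrypt_block(block, k1, k2, k3), k1, k2, k3) == block
# keying option 3 (K1 = K2 = K3) collapses to a single DES operation
assert tdea_encrypt_block(block, k1, k1, k1) == des_encrypt_block(block, k1)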
Keying options
The standards define three keying options:
Keying option 1
All three keys are independent. Sometimes known as 3TDEA or triple-length keys.
This is the strongest, with 3 × 56 = 168 independent key bits. It is still vulnerable to the meet-in-the-middle attack, but the attack requires about 2^(2×56) steps.
Keying option 2
K1 and K2 are independent, and K3 = K1. Sometimes known as 2TDEA or double-length keys.
This provides a shorter key length of 112 bits and a reasonable compromise between DES and Keying option 1, with the same caveat as above. This is an improvement over "double DES" which only requires 2^56 steps to attack. NIST has deprecated this option.
Keying option 3
All three keys are identical, i.e. K1 = K2 = K3.
This is backward compatible with DES, since two operations cancel out. ISO/IEC 18033-3 never allowed this option, and NIST no longer allows K1 = K2 or K2 = K3.
Each DES key is 8 odd-parity bytes, with 56 bits of key and 8 bits of error-detection. A key bundle requires 24 bytes for option 1, 16 for option 2, or 8 for option 3.
NIST (and the current TCG specifications version 2.0 of approved algorithms for Trusted Platform Module) also disallows using any one of the 64 following 64-bit values in any keys (note that 32 of them are the binary complement of the 32 others; and that 32 of these keys are also the reverse permutation of bytes of the 32 others), listed here in hexadecimal (in each byte, the least significant bit is an odd-parity generated bit, it is discarded when forming the effective 56-bit keys):
01.01.01.01.01.01.01.01, FE.FE.FE.FE.FE.FE.FE.FE, E0.FE.FE.E0.F1.FE.FE.F1, 1F.01.01.1F.0E.01.01.0E,
01.01.FE.FE.01.01.FE.FE, FE.FE.01.01.FE.FE.01.01, E0.FE.01.1F.F1.FE.01.0E, 1F.01.FE.E0.0E.01.FE.F1,
01.01.E0.E0.01.01.F1.F1, FE.FE.1F.1F.FE.FE.0E.0E, E0.FE.1F.01.F1.FE.0E.01, 1F.01.E0.FE.0E.01.F1.FE,
01.01.1F.1F.01.01.0E.0E, FE.FE.E0.E0.FE.FE.F1.F1, E0.FE.E0.FE.F1.FE.F1.FE, 1F.01.1F.01.0E.01.0E.01,
01.FE.01.FE.01.FE.01.FE, FE.01.FE.01.FE.01.FE.01, E0.01.FE.1F.F1.01.FE.0E, 1F.FE.01.E0.0E.FE.01.F1,
01.FE.FE.01.01.FE.FE.01, FE.01.01.FE.FE.01.01.FE, E0.01.01.E0.F1.01.01.F1, 1F.FE.FE.1F.0E.FE.FE.0E,
01.FE.E0.1F.01.FE.F1.0E, FE.01.1F.E0.FE.01.0E.F1, E0.01.1F.FE.F1.01.0E.FE, 1F.FE.E0.01.0E.FE.F1.01,
01.FE.1F.E0.01.FE.0E.F1, FE.01.E0.1F.FE.01.F1.0E, E0.01.E0.01.F1.01.F1.01, 1F.FE.1F.FE.0E.FE.0E.FE,
01.E0.01.E0.01.F1.01.F1, FE.1F.FE.1F.FE.0E.FE.0E, E0.1F.FE.01.F1.0E.FE.01, 1F.E0.01.FE.0E.F1.01.FE,
01.E0.FE.1F.01.F1.FE.0E, FE.1F.01.E0.FE.0E.01.F1, E0.1F.01.FE.F1.0E.01.FE, 1F.E0.FE.01.0E.F1.FE.01,
01.E0.E0.01.01.F1.F1.01, FE.1F.1F.FE.FE.0E.0E.FE, E0.1F.1F.E0.F1.0E.0E.F1, 1F.E0.E0.1F.0E.F1.F1.0E,
01.E0.1F.FE.01.F1.0E.FE, FE.1F.E0.01.FE.0E.F1.01, E0.1F.E0.1F.F1.0E.F1.0E, 1F.E0.1F.E0.0E.F1.0E.F1,
01.1F.01.1F.01.0E.01.0E, FE.E0.FE.E0.FE.F1.FE.F1, E0.E0.FE.FE.F1.F1.FE.FE, 1F.1F.01.01.0E.0E.01.01,
01.1F.FE.E0.01.0E.FE.F1, FE.E0.01.1F.FE.F1.01.0E, E0.E0.01.01.F1.F1.01.01, 1F.1F.FE.FE.0E.0E.FE.FE,
01.1F.E0.FE.01.0E.F1.FE, FE.E0.1F.01.FE.F1.0E.01, E0.E0.1F.1F.F1.F1.0E.0E, 1F.1F.E0.E0.0E.0E.F1.F1,
01.1F.1F.01.01.0E.0E.01, FE.E0.E0.FE.FE.F1.F1.FE, E0.E0.E0.E0.F1.F1.F1.F1, 1F.1F.1F.1F.0E.0E.0E.0E,
With these restrictions on allowed keys, Triple DES has been reapproved with keying options 1 and 2 only. Generally the three keys are generated by taking 24 bytes from a strong random generator and only keying option 1 should be used (option 2 needs only 16 random bytes, but strong random generators are hard to assert and it's considered best practice to use only option 1).
Encryption of more than one block
As with all block ciphers, encryption and decryption of multiple blocks of data may be performed using a variety of modes of operation, which can generally be defined independently of the block cipher algorithm. However, ANS X9.52 specifies directly, and NIST SP 800-67 specifies via SP 800-38A that some modes shall only be used with certain constraints on them that do not necessarily apply to general specifications of those modes. For example, ANS X9.52 specifies that for cipher block chaining, the initialization vector shall be different each time, whereas ISO/IEC 10116 does not. FIPS PUB 46-3 and ISO/IEC 18033-3 define only the single block algorithm, and do not place any restrictions on the modes of operation for multiple blocks.
Security
In general, Triple DES with three independent keys (keying option 1) has a key length of 168 bits (three 56-bit DES keys), but due to the meet-in-the-middle attack, the effective security it provides is only 112 bits. Keying option 2 reduces the effective key size to 112 bits (because the third key is the same as the first). However, this option is susceptible to certain chosen-plaintext or known-plaintext attacks, and thus it is designated by NIST to have only 80 bits of security. This can be considered insecure and, as a consequence, Triple DES was deprecated by NIST in 2017.
The short block size of 64 bits makes 3DES vulnerable to block collision attacks if it is used to encrypt large amounts of data with the same key. The Sweet32 attack shows how this can be exploited in TLS and OpenVPN. A practical Sweet32 attack on 3DES-based cipher suites in TLS required about 2^36.6 blocks (785 GB) for a full attack, but the researchers were lucky to get a collision after far fewer blocks, which took only 25 minutes.
OpenSSL does not include 3DES by default since version 1.1.0 (August 2016) and considers it a "weak cipher".
Usage
The electronic payment industry uses Triple DES and continues to develop and promulgate standards based upon it, such as EMV.
Earlier versions of Microsoft OneNote, Microsoft Outlook 2007 and Microsoft System Center Configuration Manager 2012 use Triple DES to password-protect user content and system data. However, in December 2018, Microsoft announced the retirement of 3DES throughout their Office 365 service.
Firefox and Mozilla Thunderbird use Triple DES in CBC mode to encrypt website authentication login credentials when using a master password.
Implementations
Below is a list of cryptography libraries that support Triple DES:
Botan
Bouncy Castle
cryptlib
Crypto++
Libgcrypt
Nettle
OpenSSL
wolfSSL
Trusted Platform Module (alias TPM, hardware implementation)
Some implementations above may not include 3DES in the default build, in later or more recent versions.
See also
DES-X
Advanced Encryption Standard (AES)
Feistel cipher
Walter Tuchman
References and notes
Broken block ciphers
Data Encryption Standard
de:Data Encryption Standard#Triple-DES |
48405 | https://en.wikipedia.org/wiki/Caesar%20cipher | Caesar cipher | In cryptography, a Caesar cipher, also known as Caesar's cipher, the shift cipher, Caesar's code or Caesar shift, is one of the simplest and most widely known encryption techniques. It is a type of substitution cipher in which each letter in the plaintext is replaced by a letter some fixed number of positions down the alphabet. For example, with a left shift of 3, D would be replaced by A, E would become B, and so on. The method is named after Julius Caesar, who used it in his private correspondence.
The encryption step performed by a Caesar cipher is often incorporated as part of more complex schemes, such as the Vigenère cipher, and still has modern application in the ROT13 system. As with all single-alphabet substitution ciphers, the Caesar cipher is easily broken and in modern practice offers essentially no communications security.
Example
The transformation can be represented by aligning two alphabets; the cipher alphabet is the plain alphabet rotated left or right by some number of positions. For instance, here is a Caesar cipher using a left rotation of three places, equivalent to a right shift of 23 (the shift parameter is used as the key):
Plain:  ABCDEFGHIJKLMNOPQRSTUVWXYZ
Cipher: XYZABCDEFGHIJKLMNOPQRSTUVW
When encrypting, a person looks up each letter of the message in the "plain" line and writes down the corresponding letter in the "cipher" line.
Plaintext: THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG
Ciphertext: QEB NRFZH YOLTK CLU GRJMP LSBO QEB IXWV ALD
Deciphering is done in reverse, with a right shift of 3.
The encryption can also be represented using modular arithmetic by first transforming the letters into numbers, according to the scheme A → 0, B → 1, ..., Z → 25. Encryption of a letter x by a shift n can be described mathematically as
E_n(x) = (x + n) mod 26.
Decryption is performed similarly:
D_n(x) = (x − n) mod 26.
(There are different definitions for the modulo operation. In the above, the result is in the range 0 to 25; i.e., if x + n or x − n is not in the range 0 to 25, we have to subtract or add 26.)
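A minimal Python sketch of the modular-arithmetic formulation above (illustrative only; the function name caesar is not standard):

```python
# Illustrative sketch of the formulas above (A -> 0, ..., Z -> 25).
def caesar(text: str, shift: int) -> str:
    """Apply a right shift of `shift`; use a negative shift (or 26 - shift) to decrypt."""
    out = []
    for ch in text.upper():
        if "A" <= ch <= "Z":
            x = ord(ch) - ord("A")
            out.append(chr((x + shift) % 26 + ord("A")))
        else:
            out.append(ch)        # spaces and punctuation pass through unchanged
    return "".join(out)

# The worked example above: a left shift of 3 is a right shift of 23.
print(caesar("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG", 23))
# -> QEB NRFZH YOLTK CLU GRJMP LSBO QEB IXWV ALD
print(caesar("QEB NRFZH YOLTK CLU GRJMP LSBO QEB IXWV ALD", 3))
# -> THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG
```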
The replacement remains the same throughout the message, so the cipher is classed as a type of monoalphabetic substitution, as opposed to polyalphabetic substitution.
History and usage
The Caesar cipher is named after Julius Caesar, who, according to Suetonius, used it with a shift of three (A becoming D when encrypting, and D becoming A when decrypting) to protect messages of military significance. While Caesar's was the first recorded use of this scheme, other substitution ciphers are known to have been used earlier.
His nephew, Augustus, also used the cipher, but with a right shift of one, and it did not wrap around to the beginning of the alphabet.
Evidence exists that Julius Caesar also used more complicated systems, and one writer, Aulus Gellius, refers to a (now lost) treatise on his ciphers.
It is unknown how effective the Caesar cipher was at the time, but it is likely to have been reasonably secure, not least because most of Caesar's enemies would have been illiterate and others would have assumed that the messages were written in an unknown foreign language. There is no record at that time of any techniques for the solution of simple substitution ciphers. The earliest surviving records date to the 9th-century works of Al-Kindi in the Arab world with the discovery of frequency analysis.
A Caesar cipher with a shift of one is used on the back of the mezuzah to encrypt the names of God. This may be a holdover from an earlier time when Jewish people were not allowed to have mezuzot. The letters of the cryptogram themselves comprise a religiously significant "divine name" which Orthodox belief holds keeps the forces of evil in check.
In the 19th century, the personal advertisements section in newspapers would sometimes be used to exchange messages encrypted using simple cipher schemes. Kahn (1967) describes instances of lovers engaging in secret communications enciphered using the Caesar cipher in The Times. Even as late as 1915, the Caesar cipher was in use: the Russian army employed it as a replacement for more complicated ciphers which had proved to be too difficult for their troops to master; German and Austrian cryptanalysts had little difficulty in decrypting their messages.
Caesar ciphers can be found today in children's toys such as secret decoder rings. A Caesar shift of thirteen is also performed in the ROT13 algorithm, a simple method of obfuscating text widely found on Usenet and used to obscure text (such as joke punchlines and story spoilers), but not seriously used as a method of encryption.
The Vigenère cipher uses a Caesar cipher with a different shift at each position in the text; the value of the shift is defined using a repeating keyword. If the keyword is as long as the message, is chosen at random, never becomes known to anyone else, and is never reused, this is the one-time pad cipher, proven unbreakable. The conditions are so difficult they are, in practical effect, never achieved. Keywords shorter than the message (e.g., "Complete Victory" used by the Confederacy during the American Civil War), introduce a cyclic pattern that might be detected with a statistically advanced version of frequency analysis.
In April 2006, fugitive Mafia boss Bernardo Provenzano was captured in Sicily partly because some of his messages, clumsily written in a variation of the Caesar cipher, were broken. Provenzano's cipher used numbers, so that "A" would be written as "4", "B" as "5", and so on.
In 2011, Rajib Karim was convicted in the United Kingdom of "terrorism offences" after using the Caesar cipher to communicate with Bangladeshi Islamic activists discussing plots to blow up British Airways planes or disrupt their IT networks. Although the parties had access to far better encryption techniques (Karim himself used PGP for data storage on computer disks), they chose to use their own scheme (implemented in Microsoft Excel), rejecting a more sophisticated code program called Mujahedeen Secrets "because 'kaffirs', or non-believers, know about it, so it must be less secure". This constituted an application of security through obscurity.
Breaking the cipher
The Caesar cipher can be easily broken even in a ciphertext-only scenario. Two situations can be considered:
an attacker knows (or guesses) that some sort of simple substitution cipher has been used, but not specifically that it is a Caesar scheme;
an attacker knows that a Caesar cipher is in use, but does not know the shift value.
In the first case, the cipher can be broken using the same techniques as for a general simple substitution cipher, such as frequency analysis or pattern words. While solving, it is likely that an attacker will quickly notice the regularity in the solution and deduce that a Caesar cipher is the specific algorithm employed.
In the second instance, breaking the scheme is even more straightforward. Since there are only a limited number of possible shifts (25 in English), they can each be tested in turn in a brute force attack. One way to do this is to write out a snippet of the ciphertext in a table of all possible shifts – a technique sometimes known as "completing the plain component". In the example given, the plaintext is instantly recognisable by eye at a shift of four. Another way of viewing this method is that, under each letter of the ciphertext, the entire alphabet is written out in reverse starting at that letter. This attack can be accelerated using a set of strips prepared with the alphabet written down in reverse order. The strips are then aligned to form the ciphertext along one row, and the plaintext should appear in one of the other rows.
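A brute-force search of all 25 shifts can be written in a few lines. The sketch below is illustrative only and reuses a compact version of the caesar helper from the earlier sketch; it prints every candidate so the readable one can be picked out by eye:

```python
# Illustrative brute-force attack: try every shift and print the candidates.
def caesar(t: str, s: int) -> str:  # compact version of the earlier helper
    return "".join(chr((ord(c) - 65 + s) % 26 + 65) if "A" <= c <= "Z" else c
                   for c in t.upper())

def brute_force(ciphertext: str) -> None:
    for shift in range(1, 26):
        # undo a right shift of `shift`
        print(f"key {shift:2d}: {caesar(ciphertext, -shift)}")

brute_force("QEB NRFZH YOLTK CLU GRJMP LSBO QEB IXWV ALD")
# the row for key 23 reads as English
```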
Another brute force approach is to match up the frequency distribution of the letters. By graphing the frequencies of letters in the ciphertext, and by knowing the expected distribution of those letters in the original language of the plaintext, a human can easily spot the value of the shift by looking at the displacement of particular features of the graph. This is known as frequency analysis. For example, in the English language the plaintext frequencies of the letters E and T (usually most frequent), and Q and Z (typically least frequent), are particularly distinctive. Computers can also do this by measuring how well the actual frequency distribution matches up with the expected distribution; for example, the chi-squared statistic can be used.
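The chi-squared test can automate that choice: decrypt under each candidate shift, compare the letter counts with typical English frequencies, and keep the shift with the lowest score. The following sketch is illustrative only; the frequency table holds approximate published percentages for English, and the helper names are not standard:

```python
# Illustrative chi-squared recovery of the shift
# (approximate English letter frequencies, in percent).
ENGLISH_FREQ = {
    "E": 12.7, "T": 9.1, "A": 8.2, "O": 7.5, "I": 7.0, "N": 6.7, "S": 6.3,
    "H": 6.1, "R": 6.0, "D": 4.3, "L": 4.0, "C": 2.8, "U": 2.8, "M": 2.4,
    "W": 2.4, "F": 2.2, "G": 2.0, "Y": 2.0, "P": 1.9, "B": 1.5, "V": 1.0,
    "K": 0.8, "J": 0.15, "X": 0.15, "Q": 0.1, "Z": 0.07,
}

def caesar(t: str, s: int) -> str:  # compact version of the earlier helper
    return "".join(chr((ord(c) - 65 + s) % 26 + 65) if "A" <= c <= "Z" else c
                   for c in t.upper())

def chi_squared(text: str) -> float:
    letters = [c for c in text.upper() if "A" <= c <= "Z"]
    n = len(letters)
    return sum((letters.count(letter) - n * pct / 100) ** 2 / (n * pct / 100)
               for letter, pct in ENGLISH_FREQ.items())

def crack(ciphertext: str) -> int:
    # Return the shift whose decryption looks most like English.
    return min(range(26), key=lambda s: chi_squared(caesar(ciphertext, -s)))

print(crack("QEB NRFZH YOLTK CLU GRJMP LSBO QEB IXWV ALD"))  # -> 23
```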
For natural language plaintext, there will typically be only one plausible decryption, although for extremely short plaintexts, multiple candidates are possible. For example, a ciphertext of only four or five letters may decrypt plausibly to two different English words under two different shifts (see also unicity distance).
With the Caesar cipher, encrypting a text multiple times provides no additional security. This is because two encryptions of, say, shift A and shift B will be equivalent to a single encryption with shift A + B. In mathematical terms, the set of encryption operations under each possible key forms a group under composition.
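This composition property is easy to check empirically; a tiny illustrative sketch (again with a compact caesar helper):

```python
# Two Caesar encryptions compose into one: shift a then shift b equals shift a + b (mod 26).
def caesar(t: str, s: int) -> str:  # compact version of the earlier helper
    return "".join(chr((ord(c) - 65 + s) % 26 + 65) if "A" <= c <= "Z" else c
                   for c in t.upper())

msg = "DOUBLE ENCRYPTION ADDS NOTHING"
a, b = 7, 11
assert caesar(caesar(msg, a), b) == caesar(msg, (a + b) % 26)
```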
See also
Scytale
Notes
Bibliography
F. L. Bauer, Decrypted Secrets, 2nd edition, Springer, 2000.
David Kahn, The Codebreakers: The Story of Secret Writing, revised edition, 1996.
Chris Savarese and Brian Hart, The Caesar Cipher, 1999
External links
Classical ciphers
Group theory |
48551 | https://en.wikipedia.org/wiki/Television%20licence | Television licence | A television licence or broadcast receiving licence is a payment required in many countries for the reception of television broadcasts, or the possession of a television set where some broadcasts are funded in full or in part by the licence fee paid. The fee is sometimes also required to own a radio or receive radio broadcasts. A TV licence is therefore effectively a hypothecated tax for the purpose of funding public broadcasting, thus allowing public broadcasters to transmit television programmes without, or with only supplemental funding from radio and television advertisements. However, in some cases, the balance between public funding and advertisements is the opposite – the Polish broadcaster TVP receives more funds from advertisements than from its TV tax.
History
The early days of broadcasting presented broadcasters with the problem of how to raise funding for their services. Some countries adopted the advertising model, but many others adopted a compulsory public subscription model, with the subscription coming in the form of a broadcast licence paid by households owning a radio set (and later, a TV set).
The UK was the first country to adopt the compulsory public subscription model with the licence fee money going to the BBC, which was formed on 1 January 1927 by royal charter to produce publicly funded programming, yet remaining independent from the government, both managerially and financially. The licence was originally known as a wireless licence.
With the arrival of television, some countries created a separate additional television licence, while others simply increased the radio licence fee to cover the additional cost of TV broadcasting, changing the licence's name from "radio licence" to "TV licence" or "receiver licence". Today, most countries fund public radio broadcasting from the same licence fee that is used for television, although a few still have separate radio licences, or apply a lower or no fee at all for consumers who only have a radio. Some countries, such as the United Kingdom and Japan, also have different fees for users with colour or monochrome TV sets. In most cases, the fee for colour TV owners is much higher than the fee for monochrome TV owners. Many give discounts, or charge no fee, for elderly and/or disabled consumers.
Faced with the problem of licence fee "evasion", some countries choose to fund public broadcasters directly from taxation or via other less avoidable methods such as a co-payment with electricity billing. In some countries, national public broadcasters also carry supplemental advertising.
In 1989, the Council of Europe created the European Convention on Transfrontier Television, which regulates, among other things, advertising standards and the timing and format of breaks, and which also has an indirect effect on the usage of licensing. In 1993, this treaty entered into force when it achieved seven ratifications, including five EU member states. It has since been acceded to by 34 countries.
Television licences around the world
The Museum of Broadcast Communications in Chicago notes that two-thirds of the countries in Europe and half of the countries in Asia and Africa use television licences to fund public television. TV licensing is rare in the Americas, largely being confined to French overseas departments and British Overseas Territories.
In some countries, radio channels and broadcasters' websites are also funded by a radio receiver licence, giving access to radio and web services free of commercial advertising.
The actual cost and implementation of the television licence varies greatly from country to country. Below is a listing of the licence fee in various countries around the world.
Albania
The Albanian licence fee is 100 lekë (€0.8) per month, paid in the electricity bill, equivalent to 1200 lekë (€9.6) annually. However, the licence fee makes up only a small part of RTSH's funding. RTSH is mainly funded directly from the government through taxes, which makes up 58% of RTSH's funding. The remaining 42% comes from commercials and the licence fee.
Austria
Under the Austrian TV and Radio Licence Law (Fernseh- und Hörfunklizenzrecht), all broadcasting reception equipment in use or operational at a given location must be registered. The location of the equipment is taken to be places of residence or any other premises with a uniform purpose of use.
The agency responsible for licence administration in Austria is the Gebühren Info Service, a fully-owned subsidiary of the Austrian public broadcaster, (ORF). As well as being an agency of the Federal Ministry of Finance, it is charged with performing functions concerning national interests. The transaction volume in 2007 amounted to €682 million, 66% of which are allocated to the ORF for financing the organization and its programs. The remaining 34% are allocated to the federal government and the local governments, including the taxation and funding of local cultural activities. GIS employs some 191 people and approximately 125 freelancers in field service. 3.4 million Austrian households are registered with the GIS. The percentage of licence evaders in Austria amounts to 2.5%.
The main principle of the GIS's communication strategy is to inform instead of control. To achieve this goal, the GIS uses a four-channel communication strategy:
Above-the-line activities (advertising campaigns in printed media, radio and TV).
Direct Mail.
Distribution channels – outlets where people can acquire the necessary forms for registering (post offices, banks, tobacconists & five GIS Service Centers throughout Austria).
Field service – customer consultants visiting households not yet registered.
The annual television & radio licence varies in price depending on which state one lives in. As of 2017, Styria has the highest annual television licence cost, at €320.76, and Salzburg has the highest annual radio licence cost, at €90.00.
Annual fees have been set at these state-specific rates since April 2017.
Bosnia and Herzegovina
The licence fee in Bosnia and Herzegovina is around €46 per year. The war and the associated collapse of infrastructure caused very high evasion rates. This has, in part, been resolved by collecting the licence fee as part of a household's monthly telephone bill. The licence fee is divided between three broadcasters:
50% for BHRT (Radio and Television of Bosnia and Herzegovina) – serving as the main radio and television broadcaster in Bosnia at state level. It is Bosnia's only member in the EBU.
25% for RTVFBiH (Radio-Television of the Federation of Bosnia and Herzegovina) – a radio and television broadcaster that primarily serves the population in the Federation of BiH.
25% for RTRS (Radio-Television of the Republika Srpska) – a radio and television broadcaster which primarily serves the population of the Republika Srpska.
A public broadcasting corporation, intended to comprise all public broadcasters in Bosnia and Herzegovina, is in the process of being established.
Croatia
The licence fee in Croatia is regulated by the Croatian Radiotelevision Act; the most recent version of the act dates from 2003.
The licence fee is charged to all owners of equipment, capable of receiving TV and radio broadcasts. The total amount of the fee is set each year as a percentage of the average net salary in the previous year, currently equal to 1.5%. This works out at about €137 per year per household with at least one radio or TV receiver.
The fee is the main source of revenue for the national broadcaster Hrvatska Radiotelevizija (HRT), and a secondary source of income for other national and local broadcasters, which receive a minority share of this money. The Statute of HRT further divides their majority share to 66% for television and 34% for the radio, and sets out further financial rules.
By law, advertisements and several other sources of income are allowed to HRT. However, the percentage of air time which may be devoted to advertising is limited by law to 9% per hour and is lower than the one that applies to commercial broadcasters. In addition, other rules govern advertising on HRT, including a limit on a single commercial during short breaks and no breaks during films.
Croatian television law was formed in compliance with the European Convention on Transfrontier Television that Croatia had joined between 1999 and 2002.
Czech Republic
The licence fee in the Czech Republic is 135 Kč (€4.992 [using the 27 July 2015 exchange rate]) per month as from 1 January 2008, amounting to €59.904 per year. This increase is to compensate for the abolition of paid advertisements except in narrowly defined circumstances during a transitional period. Each household that owns at least one TV set pays for one TV licence or radio licence, regardless of how many televisions and radios they own. Corporations and the self-employed must pay for a licence for each television and radio.
Denmark
The Danish media licence fee is 1,353 kr, or €182, per year. The fee applies to all TVs, computers with Internet access or TV tuners, and other devices that can receive broadcast TV, which means that a customer with a relatively new mobile phone also has to pay the TV licence. The monochrome TV rate has not been offered since 1 January 2007. The majority of the licence fee is used to fund the national radio and TV broadcaster DR, while a proportion is used to fund TV2's regional services. TV2 itself used to receive money from the licence fee but is now funded exclusively through advertising revenue. Though economically independent of the licence fee, TV2 still has public service obligations and requirements, laid down in a so-called "public service contract" between the government and all public service providers. Commercials on TV2 may only be broadcast between programmes, including films. TV2 receives indirect subsidies through favourable loans from the Danish state, and it also gets a smaller part of the licence for its 8 regional TV stations, which have 30 minutes of the channel's daily prime time (which is commercial-free) and may request additional time on a special new regional TV channel (on which the regional channel sits alongside several other non-commercial broadcasters apart from the TV2 regional programmes).
In 2018, the government of Denmark decided to phase out the fee from 2019, and the media licence was fully abolished in 2022. It has been replaced by general taxation, in most cases as an addition to the Danish income tax.
France
In 2020, the television licence fee in mainland France (including Corsica) was €138; in the overseas departments and collectivities it was €88. The licence funds services provided by Radio France, France Télévisions and Radio France Internationale. Metropolitan France receives France 2, France 3, France 5, Arte and France 4 (as well as France Ô before its closure in August 2021), while the overseas departments also receive Outre-Mer 1ère in addition to the metropolitan channels, now available through the expansion of the TNT service. Public broadcasters in France supplement their licence fee income with revenue from advertising, but changes in the law in 2009, designed to stop public television chasing ratings, have stopped public broadcasters from airing advertising after 8 pm. Between 1998 and 2004, the proportion of France Télévisions' income that came from advertising declined from around 40% to 30%. To keep the cost of collection low, the licence fee in France is collected as part of local taxes.
People who don't own a TV set can opt out in their tax declaration form.
Germany
The licence fee in Germany is a blanket contribution of €18.36 per month (about €220 per annum) for all apartments, secondary residences, holiday homes and summer houses, payable regardless of equipment or actual television and radio usage. Businesses and institutions must also contribute, and the amount is based on several factors, including the number of employees, vehicles and, for hotels, beds. The fee is billed monthly but typically paid quarterly, and yearly advance payments are possible. It is collected by a public collection agency called Beitragsservice von ARD, ZDF und Deutschlandradio, which is sometimes criticized for its methods. Since 2013, only recipients of certain social benefits, such as Arbeitslosengeld II or student loans and grants, are exempt from the licence fee, and those with certain disabilities can apply to pay a reduced contribution of €5.83. Low incomes in general, such as those of freelancers and trainees, and the receipt of full unemployment benefit (Arbeitslosengeld I) are no longer grounds for an exemption. Since the fee is billed to a person and not to a dwelling, empty dwellings remain exempt, for instance those being renovated or for which a tenant is being sought after the previous tenant moved away. The same is true for a house or flat that is for sale and from which all residents, including the owner, have moved out, since those previous residents and the owner are charged at their new addresses.
Prior to 2013, only households and businesses with at least one television were required to pay. Households with no televisions but with a radio or an Internet-capable device were subject to a reduced radio-only fee.
The licence fee is used to fund the public broadcasters ZDF and Deutschlandradio, as well as the nine regional broadcasters of the ARD network, who, altogether, run 22 television channels (10 regional, 10 national, 2 international: Arte and 3sat) and 61 radio stations (58 regional, 3 national). Two national television stations and 32 regional radio stations carry limited advertising. The 14 regional regulatory authorities for the private broadcasters are also funded by the licence fee (and not by government grants), and in some states, non-profit community radio stations also get small amounts of the licence fee. In contrast to ARD, ZDF and Deutschlandradio, Germany's international broadcaster, Deutsche Welle, is fully funded by the German federal government, though much of its new content is provided by the ARD.
Germany currently has one of the largest total public broadcast budgets in the world. The per capita budget is close to the European average. Annual income from licence fees reached more than €7.9 billion in 2016.
The board of public broadcasters sued the German states for interference with their budgeting process, and on 11 September 2007, the Supreme Court decided in their favour. This effectively rendered the public broadcasters independent and self-governing.
Public broadcasters have announced that they are determined to use all available means to reach their "customers" and have accordingly built a very broad Internet presence with media portals, news and TV programmes. National broadcasters have abandoned an earlier pledge to restrict their online activities. This resulted in newspapers taking court action against the ARD, claiming that the ARD's Tagesschau smartphone app, which provides news stories at no cost to the user, was unfairly subsidised by the licence fee, to the detriment of free-market providers of news content apps. The case was dismissed, with the court advising the two sides to agree on a compromise.
Ghana
The licence fee in Ghana is used to fund the Ghana Broadcasting Corporation (GBC) and was reintroduced in 2015. TV users have to pay between GH¢36 and GH¢60 per year for using one or more TVs at home.
Greece
The licence fee in Greece is indirect but obligatory and paid through electricity bills. The amount to be paid is €51.60 (2013) for every separate account of the electrical company (including residence, offices, shops and other places provided with electricity). Its beneficiary is the state broadcaster Ellinikí Radiofonía Tileórasi (ERT). The predicted 2006 annual revenue of ERT from the licence fee, officially called the retributive fee, is €262.6 million (from €214.3 million in 2005).
There has been some discussion about imposing a direct licence fee, after complaints from people who do not own a television set and yet are still forced to fund ERT. An often-quoted joke is that even the dead pay the licence fee, since graveyards pay electricity bills.
In June 2013, ERT was closed down to save money for the Greek government. The government decree announcing the closure stated that licence fees would be temporarily suspended during that time.
In June 2015, ERT was reopened, and licence holders are currently paying €36 per year.
Ireland
As of 2020, the cost of a television licence in Ireland is €160. However, the licence is free to anyone over the age of 70 (regardless of means or circumstances), to some people over 66, and to the blind (although these licences are in fact paid for by the state). The Irish post office, An Post, is responsible for collecting the licence fee and for commencing prosecution proceedings in cases of non-payment, although it has signalled its intention to withdraw from the licence fee collection business. The Irish TV licence makes up 50% of the revenue of RTÉ, the national broadcaster; the rest comes from RTÉ broadcasting advertisements on its radio and TV stations. Some RTÉ services, such as RTÉ 2fm, RTÉ Aertel, RTÉ.ie and the transmission network, have in the past not relied on the licence as part of their income and operate on an entirely commercial basis, although since 2012 RTÉ 2fm has received some financial support from the licence.
The licence fee does not go entirely to RTÉ. After collection costs, 5% is used for the Broadcasting Authority of Ireland's "Sound and Vision Scheme", which provides a fund for programme production and the restoration of archive material and is open to applications from any quarter. From 2011 until 2018, 5% of what RTÉ then received was granted to TG4, and RTÉ is required to provide TG4 with programming. The remainder of TG4's funding comes from direct state grants and commercial income.
The licence applies to particular premises, so a separate licence is required for holiday homes or motor vehicles which contain a television. The licence must be paid for premises that have any equipment that can potentially decode TV signals, even those that are not RTÉ's.
Italy
In 2014, the licence fee in Italy was €113.50 per household with a TV set, independent of use.
There is also a special licence fee paid by owners of one or more TV or radio sets on public premises or otherwise outside the household context, independent of use. In 2016, the government opted to lower the licence fee to €100 per household and to work it into the electricity bill, in an attempt to eliminate evasion. As of 2018, the fee is €90.00.
66% of RAI's income comes from the licence fee (going up from about half of total income about seven years ago). The remainder came from advertising and other income sources, contributing to about 34% of RAI's income in 2014, in which advertising alone contributed to 25% of total income.
Japan
In Japan, the annual licence fee (jushin-ryō, "receiving fee") for terrestrial television broadcasts is ¥14,205. The fee is slightly less if paid by direct debit, and it is ¥24,740 for those receiving satellite broadcasts. There is a separate licence for monochrome TV, and fees are slightly lower in Okinawa. The Japanese licence fee pays for the national broadcaster, Nippon Hōsō Kyōkai (NHK).
While every household in Japan with a television set is required to have a licence, it was reported in 2006 that "non-payment [had] become an epidemic" because of a series of scandals involving NHK. In 2005, it was reported that, "there is no fine or any other form of sanction for non-payment".
Mauritius
The licence fee in Mauritius is Rs 1,200 per year (around €29). It is collected as part of the electricity bill. The proceeds of the licence fee are used to fund the Mauritius Broadcasting Corporation. The licence fee makes up 60% of MBC's funding. Most of the remaining 40% comes from television and radio commercials. However, the introduction of private broadcasting in 2002 has put pressure on MBC's decreasing commercial revenues. Furthermore, MBC is affecting the profitability of the private stations that want the government to make MBC commercial free.
Montenegro
Under the Broadcasting Law of December 2002, every household and legal entity, situated in Montenegro, where technical conditions for reception of at least one radio or television programme have been provided, is obliged to pay a monthly broadcasting subscription fee. The monthly fee is €3.50, or €42.00 per annum.
The Broadcasting Agency of Montenegro is in charge of collecting the fee (currently through telephone bills, although after the privatization of the state-owned Telekom, the new owners, T-Com, announced that they would not administer the collection of the fee from July 2007).
The funds from the subscription received by the broadcasting agency belong to:
the republic's public broadcasting services (radio and television) – 75%
the agency's fund for the support of the local public broadcasting services (radio and television) – 10%
the agency's fund for the support of the commercial broadcasting services (radio and television) – 10%
the agency – 5%
Namibia
The licence fee in Namibia was N$204 (about €23) in 2001. The fee is used to fund the Namibian Broadcasting Corporation.
Pakistan
The television licence in Pakistan is Rs 420 per year. It is collected as a Rs 35 per month charge to all consumers of electricity. The proceeds of the fee and advertising are used to fund PTV.
Poland
As of 2020, the licence fee in Poland for a television set is 22.70 zł per month or 245.15 zł per year. The licence may be paid monthly, bi-monthly, quarterly, half-yearly or annually, but the total cost when paying for less than a year in advance is higher (up to 10%). Those that have no TV but have a radio must pay the radio-only licence which costs 7.00 zł per month or 84.00 zł per year. The licence is collected and maintained by the Polish post office, Poczta Polska.
Around 60% of the fee goes to Telewizja Polska with the rest going to Polskie Radio. In return, public television is not permitted to interrupt its programmes with advertisements (advertisements are only allowed between programmes). The TV licence is waived for those over 75. Only one licence is required for a single household irrespective of the number of sets, but in case of commercial premises one licence for each set must be paid (this includes radios and TVs in company vehicles). However, public health institutions, all nurseries, educational institutions, hospices and retirement homes need to pay only single licence per building or building complex they occupy.
There is a major problem with licence evasion in Poland, for two main reasons. Firstly, licence collection relies on an honesty-based opt-in system, rather than the opt-out systems used in other countries: a person liable to pay the licence has to register on their own, so there is no effective means of compelling people to register or of prosecuting those who fail to do so. In addition, the licensing inspectors, who are usually postmen, do not have a right of entry to inspect premises and must obtain the owner's or main occupier's permission to enter. Secondly, the public media are frequently accused of being pro-government propaganda mouthpieces rather than independent public broadcasters. As a result, it was estimated that in 2012 around 65% of households evaded the licence fee, compared with an average of 10% in the European Union. In 2020, it was revealed that only 8% of Polish households paid the licence fee, and as a result the government gave the public media a grant of 2 billion złoty.
Portugal
RTP was financed through government grants and advertising. Since 1 September 2003, the public TV and radio has been also financed through a fee. The "Taxa de Contribuição Audiovisual" (Portuguese for Broadcasting Contribution Tax) is charged monthly through the electricity bills and its value is updated at the annual rate of inflation.
Due to the economic crisis that the country has suffered, RTP's financing model has changed. In 2014, government grants ended, with RTP being financed only through the "Taxa de Contribuição Audiovisual" and advertising. Since July 2016, the fee is €2.85 + VAT, with a final cost of €3.02 (€36.24 per year).
RTP1 can broadcast only 6 minutes of commercial advertising per hour (while commercial channels can broadcast 12 minutes per hour). RTP2 is free of commercial advertising, as are the radio stations. RTP3 and RTP Memória can broadcast commercial advertising only on cable, satellite and IPTV platforms; on DTT they are free of commercial advertising.
Serbia
Licence fees in Serbia are bundled together with electricity bills and collected monthly. There have been increasing indications that the Government of Serbia is considering the temporary cessation of the licence fee until a more effective financing solution is found. However, as of 28 August 2013 this step has yet to be realized.
Slovakia
The licence fee in Slovakia is €4.64 per month (€55.68 per year). In addition to the licence fee RTVS also receives state subsidies and money from advertising.
Slovenia
Since June 2013, the annual licence fee in Slovenia stands at €153.00 (€12.75 per month) for receiving both television and radio services, or €45.24 (€3.77 per month) for radio services only, paid by the month. This amount is payable once per household, regardless of the number of televisions or radios (or other devices capable of receiving TV or radio broadcasts). Businesses and the self-employed pay this amount for every set, and pay higher rates where they are intended for public viewing rather than the private use of its employees.
The licence fee is used to fund national broadcaster RTV Slovenija. In the calendar year 2007, the licence fee raised €78.1 million, or approximately 68% of total operating revenue. The broadcaster then supplements this income with advertising, which by comparison provided revenues of €21.6 million in 2007, or about 19% of operating revenue.
South Africa
The licence fee in South Africa is R265 (about €23) per annum (R312 per year if paid on a monthly basis) for TV. A concessionary rate of R70 is available for those over 70, and disabled persons or war veterans who are on social welfare. The licence fee partially funds the public broadcaster, the South African Broadcasting Corporation. The SABC, unlike some other public broadcasters, derives much of its income from advertising. Proposals to abolish licensing have been circulating since October 2009. The national carrier hopes to receive funding entirely via state subsidies and commercials.
According to IOL.co.za: "Television licence collections for the 2008/09 financial year (April 1, 2008, to March 31, 2009) amounted to R972m." (almost €90m)
South Korea
In South Korea, the television licence fee is collected for the Korean Broadcasting System and the Educational Broadcasting System. The fee is ₩30,000 per year (about €20.67) and is bundled with electricity bills. It has stood at this level since 1981 and now makes up less than 40% of KBS's income and less than 8% of EBS's income. Its purpose is to maintain public broadcasting in South Korea and to give public broadcasters the resources to produce and broadcast public interest programmes.
Switzerland
Under Swiss law, any person who receives radio or television programmes from the national public broadcaster SRG SSR must be registered and is subject to household licence fees. The licence fee is a flat rate of CHF 355 per year for TV and radio for single households, and CHF 670 for collective households, e.g. old people's homes. In private households the licence fee can also be paid quarterly. Households that are unable to receive broadcasting transmissions are exempt from the current fees up until 2023 if the resident applies for an opt-out. The collection of household licence fees is managed by the company Serafe AG, a wholly owned subsidiary of the insurance collections agency Secon. For businesses, the fee is based on annual turnover, on a sliding scale between nothing for businesses with an annual turnover of less than CHF 500,000 and CHF 35,590 per year for businesses with an annual turnover of over a billion francs. The business fee is collected by the Swiss Federal Tax Administration.
A large majority of the fee, which totals CHF 1.2 billion, goes to SRG SSR, while the rest goes to a collection of small regional radio and television broadcasters. Non-payment of licence fees can result in fines of up to CHF 100,000.
On 4 March 2018, a popular initiative was voted on over whether TV licensing should be scrapped, under the slogan "No Billag", a reference to the previous collector of the TV licence fees. Parliament had advocated a no vote, and voters overwhelmingly rejected the proposal, by 71.6% to 28.4% and in all cantons. The fee was, however, significantly reduced afterwards.
Turkey
According to the law, a licence fee at a rate of 8% or 16%, depending on equipment type (2% for computer equipment, 10% for cellular phones, 0.4% for automobiles), is paid to the state broadcaster TRT by the producer or importer of TV receiving equipment. Consumers pay this fee indirectly, and only once, at the initial purchase of the equipment. In addition, a 2% levy used to be deducted monthly from each household, commercial and industrial electricity bill, under a law that was recently abolished. TRT also receives funding via advertisements.
No registration is required for purchasing TV receiver equipment, except for cellular phones, which is mandated by a separate law.
United Kingdom
A television licence is required for each household where television programmes are watched or recorded as they are broadcast, irrespective of the signal method (terrestrial, satellite, cable or the Internet). As of September 2016, users of BBC iPlayer must also have a television licence to watch on-demand television content from the service. As of 1 April 2017, after the end of a freeze that began in 2010, the price of a licence may now increase to account for inflation. The licence fee in 2018 was £150.50 for a colour and £50.50 for a black and white TV Licence. As of April 2019, the licence fee is £154.50 for a colour and £52.00 for a black and white TV Licence. As it is classified in law as a tax, evasion of licence fees is a criminal offence. 204,018 people were prosecuted or fined in 2014 for TV licence offences: 173,044 in England, 12,536 in Wales, 4,905 people in Northern Ireland and 15 in the Isle of Man.
The licence fee is used almost entirely to fund BBC domestic radio, television and internet services. The money received from the fee represents approximately 75% of the cost of these services, with most of the remainder coming from the profits of BBC Studios, a commercial arm of the corporation which markets and distributes its content outside of the United Kingdom, and operates or licences BBC-branded television services and brands. The BBC also receives some funding from the Scottish Government via MG Alba to finance the BBC Alba Gaelic-language television service in Scotland. The BBC used to receive a direct government grant from the Foreign and Commonwealth Office to fund television and radio services broadcast to other countries, such as the BBC World Service radio and BBC Arabic Television. These services run on a non-profit, non-commercial basis. The grant was abolished on 1 April 2014, leaving these services to be funded by the UK licence fee, a move which has caused some controversy.
The BBC is not the only public service broadcaster. Channel 4 is also a public television service, but it is funded through advertising. The Welsh language S4C in Wales is funded through a combination of a direct grant from the Department for Culture, Media and Sport, advertising and receives some of its programming free of charge by the BBC (see above). These other broadcasters are all much smaller than the BBC. In addition to the public broadcasters, the UK has a wide range of commercial television funded by a mixture of advertising and subscription. A television licence is still required of viewers who solely watch such commercial channels, although 74.9% of the population watches BBC One in any given week, making it the most popular channel in the country. A similar licence, mandated by the Wireless Telegraphy Act 1904, existed for radio, but was abolished in 1971.
Countries where the TV licence has been abolished
The following countries have had television licences, but subsequently abolished them:
Australia
Radio licence fees were introduced in Australia in the 1920s to fund the first privately owned broadcasters, which were not permitted to sell advertising. With the formation of the government-owned Australian Broadcasting Commission in 1932, the licence fees were used to fund ABC broadcasts while the privately-owned stations were permitted to seek revenue from advertising and sponsorship. Television licence fees were also introduced in 1956 when the ABC began TV transmissions. In 1964 a television licence, issued on a punch card, cost £6 (A$12); the fine for not having a licence was £100 (A$200).
All licence fees were abolished in 1974 by the Australian Labor Party government led by Gough Whitlam on the basis that the near-universality of television and radio services meant that public funding was a fairer method of providing revenue for government-owned radio and television broadcasters. The ABC has since then been funded by government grants, now totalling around A$1.13 billion a year, and its own commercial activities (merchandising, overseas sale of programmes, etc.).
Belgium
Flemish region and Brussels
The Flemish region of Belgium and Brussels abolished its television licence in 2001. The Flemish broadcaster VRT is now funded from general taxation.
Walloon region
As of 1 January 2018, the licence fee in the Walloon region has been abolished. All licences still in effect at this point will remain in effect and payable until the period is up, but will not be renewed after that period (for example, a licence starting 1 April 2017 will still need to be paid until 31 May 2018. After this point, payment for a licence will not be required).
The licence fee in Belgium's Walloon region (encompassing the French and German-speaking communities) was €100.00 for a TV and €0.00 for a radio in a vehicle, listed in Article 1 of the Government of Wallonia decree of 1 December 2008. Only one licence was needed for each household with a functional TV receiver regardless of the number, but each car with a radio had to have a separate car radio licence. Household radios did not require a licence. The money raised by the fee was used to fund Belgium's French and German public broadcasters (RTBF and BRF respectively). The TV licence fee was paid by people with surnames beginning with a letter between A and J between 1 April and 31 May inclusive; those with surnames beginning with a letter between K to Z paid between 1 October and 30 November inclusive. People with certain disabilities were exempt from paying the television licence fee. Hotels and similar lodging establishments paid an additional fee of €50.00 for each additional functional TV receiver and paid between 1 January and 1 March inclusive.
Bulgaria
Currently, the public broadcasters Bulgarian National Television (BNT) and Bulgarian National Radio (BNR) are almost entirely financed by the national budget of Bulgaria. After the fall of Communism and the introduction of democracy in the 1990s, the topic of financing public television and radio broadcasting was widely discussed. One of the methods to raise funding was by collecting a user's fee from every citizen of Bulgaria. Parliament approved and included the fee in the Radio and Television Law; however, the president imposed a veto on the law and a public discussion on the fairness of the decision started. Critics said that this is unacceptable, as many people will be paying for a service they may not be using. The parliament decided to keep the resolution in the law but imposed a temporary regime of financing the broadcasters through the national budget. The law has not been changed to this day; however, the temporary regime is still in effect and has been constantly prolonged. As a result, there is no fee to pay and the funding comes from the national budget.
Canada
Canada eliminated its broadcasting receiver licence in 1953, replacing it with TV equipment excise taxes, shortly after the introduction of a television service. The Radiotelegraph Act of 6 June 1913 established the initial Canadian policies for radio communication. Similar to the law in force in Britain, this act required that operation of "any radiotelegraph apparatus" required a licence, issued by the Minister of the Naval Service. This included members of the general public who only possessed a radio receiver and were not making transmissions, who were required to hold an "Amateur Experimental Station" licence, as well as pass the exam needed to receive an "Amateur Experimental Certificate of Proficiency", which required the ability to send and receive Morse code at five words a minute. In January 1922 the government lowered the barrier for individuals merely interested in receiving broadcasts, by introducing a new licence category, Private Receiving Station, that removed the need to qualify for an amateur radio licence. The receiving station licences initially cost $1 and had to be renewed yearly.
The licence fee eventually rose to $2.50 per year to provide revenue for both radio and television broadcasts by the Canadian Broadcasting Corporation, however, it was eliminated effective 1 April 1953. This action exempted broadcast-only receivers from licensing, and the Department of Transport (DOT) was given authority to exempt other receiver types from licensing as it saw fit. DOT exempted all "home-type" receivers capable of receiving any radio communications other than "public correspondence" (defined as "radio transmissions not intended to be received by just anyone but rather by a member of the public who has paid for the message" – examples include ship-to-shore radiotelephone calls or car-phone transmissions). After 1952, licences were required in Canada only for general coverage shortwave receivers with single-sideband capability, and VHF/UHF scanners which could tune to the maritime or land mobile radiotelephone bands.
Beginning in 1982, in response to a Canadian court's finding that all unscrambled radio signals imply a forfeiture of the right to privacy, the DOC (Department of Communications) required receiver licensing only in cases where it was necessary to ensure technical compatibility with the transmitter.
Regulation SOR-89-253 (published in the 4 February 1989 issue of the Canada Gazette, pages 498–502) removed the licence requirement for all radio and TV receivers.
The responsibility for regulating radio spectrum affairs in Canada has devolved to a number of federal departments and agencies. It was under the oversight of the Department of Naval Service from 1913 until 1 July 1922, when it was transferred to civilian control under the Department of Marine and Fisheries, followed by the Department of Transport (from 1935 to 1969), Department of Communications (1969 to 1996) and most recently to Industry Canada (since 1995).
Cyprus
Cyprus used to have an indirect but obligatory tax for CyBC, its state-run public broadcasting service. The tax was added to electricity bills, and the amount paid depended on the size of the home. By the late 1990s, it was abolished due to pressure from private commercial radio and TV broadcasters. CyBC is currently funded by advertising and government grants.
Finland
On 1 January 2013, Finland replaced its television licence with a direct public broadcasting tax (the Yle tax) levied on individual taxpayers. Unlike the previous fee, this tax is progressive, meaning that people pay up to €163 depending on income. The lowest income earners, persons under the age of eighteen, and residents of autonomous Åland are exempt from the tax.
Before the introduction of the Yle tax, the television fee in Finland used to be between €244.90 and €253.80 (depending on the interval of payments) per annum for a household with TV (as of 2011). It was the primary source of funding for Yleisradio (Yle), a role which has now been taken over by the Yle tax.
Gibraltar
It was announced in Gibraltar's budget speech of 23 June 2006 that Gibraltar would abolish its TV licence. The 7,452 TV licence fees were previously used to part fund the Gibraltar Broadcasting Corporation (GBC). However, the majority of the GBC's funding came in the form of a grant from the government.
Hungary
In Hungary, the licence fees nominally exist, but since 2002 the government has decided to pay them out of the state budget. Effectively, this means that funding for Magyar Televízió and Duna TV now comes from the government through taxation.
As of spring 2007, commercial units (hotels, bars etc.) have to pay television licence fees again, on a per-TV set basis.
Since the parliament decides on the amount of public broadcasters' income, during the 2009 financial crisis it was possible for it to decide to cut their funding by more than 30%. This move was publicly condemned by the EBU.
The television licensing scheme has been a problem for Hungarian public broadcasters ever since the initial privatisation changes in 1995, and the public broadcaster MTV has been stuck in a permanent financial crisis for years.
Hong Kong
Hong Kong had a radio and television licence fee imposed by Radio Hong Kong (RHK) and Rediffusion Television. The licence cost HK$36 per year. Over-the-air terrestrial radio and television broadcasts have been free of charge since 1967, whether analogue or digital.
There were public television programmes produced by Radio Television Hong Kong (RTHK). RTHK is funded by the Hong Kong Government, before having its TV channel, it used commercial television channels to broadcast its programmes, and each of the traditional four terrestrial commercial TV channels in Hong Kong (TVB Jade and ATV Home, which carried Cantonese-language broadcasts, and TVB Pearl and ATV World, which carried English-language broadcasts), were required to broadcast 2.5 hours of public television per week. However, there is no such requirement for newer digital channels.
As of 2017, RTHK has three digital television channels RTHK 31, RTHK 32 and RTHK 33. RTHK's own programmes will return to RTHK's channels in the future.
Iceland
The TV licence fee for Iceland's state broadcaster RÚV was abolished in 2007. Instead a poll tax of 17,200 kr. is collected from all people who pay income tax, regardless of whether they use television and radio.
India
India introduced a radio receiver licence system in 1928 for All India Radio (Akashvani). With the advent of television broadcasting in 1956–57, television was also licensed. With the spurt in television stations beginning in 1971–72, a separate broadcasting company, Doordarshan, was formed. The radio and television licences in question needed to be renewed at post offices yearly. The annual premium for radio was Rs 15 in the 1970s and 1980s, and radio licence stamps were issued for this purpose. The licence fee for TV was Rs 50. The wireless licence inspector from the post office was authorised to check every house or shop for a WLB (Wireless Licence Book) and to penalise offenders and even seize the radio or TV. In 1984, the licensing system was withdrawn, with both of the Indian national public broadcasters, AIR and Doordarshan, funded instead by the Government of India and by advertising.
Indonesia
The radio tax to supplement RRI's funding was introduced in 1947, about two years after the broadcaster's foundation and at the height of the Indonesian National Revolution, initially set at Rp5 per month. The tax was abolished sometime in the 1980s.
The television fee was introduced possibly shortly after TVRI began broadcasting in 1962. Originally the TVRI Foundation (Yayasan TVRI) was assigned to collect the fee, but in 1990 President Suharto enacted a presidential statement giving fee-collecting authority, in the name of the foundation, to Mekatama Raya, a private company run by his son Sigit Harjojudanto and Suharto's cronies, starting in 1991. Problems surrounding the fee collection and public protests led the company to stop collecting the fee a year later. The television fee then slowly disappeared, though in some places it still existed, such as in Bandung as of 1998 and Surabaya as of 2001.
According to Act No. 32 of 2002 on Broadcasting, funding for the newly transformed RRI and TVRI comes from several sources, one of them being a so-called "broadcasting fee". However, as of today the fee has yet to be implemented. Currently, their funding comes primarily from the annual state budget and "non-tax state revenue", either from advertising or from other sources regulated in government regulations.
Israel
Israel's television tax was abolished in September 2015, retroactively to January 2015. The television licence for 2014 in Israel was ₪345 (€73) per household, and the radio licence (for car owners only) was ₪136 (€29). The licence fee was the primary source of revenue for the Israel Broadcasting Authority, the state broadcaster, which was closed down and replaced by the Israeli Broadcasting Corporation in May 2017. The corporation's radio stations carry full advertising, some TV programmes are sponsored by commercial entities, and the radio licence (for car owners only) remains in place, set at ₪164 (€41) for 2020.
Liechtenstein
To help fund a national cable broadcasting network between 1978 and 1998 under the Law on Radio and Television, Liechtenstein required an annual broadcasting licence for households that had broadcast receiving equipment. The annual fee, last levied in 1998, came to CHF 180. In total, this provided an income of 2.7 million francs, of which 1.1 million went to the PTT and CHF 250,000 to the Swiss national broadcaster SRG. Since then, the government has replaced this with an annual public media grant of CHF 1.5 million, administered under the supervision of the Mediakommission.
The sole radio station of the principality Radio Liechtenstein, was founded as a private commercial music station in 1995. In 2004, it was nationalised by the government under the ownership of Liechtensteinischer Rundfunk, to create a domestic public broadcasting station broadcasting news and music. The station is funded by commercials and the public broadcasting grant. A commercial television station, 1FLTV, was launched in August 2008.
There have been suggestions of reintroducing a public broadcasting fee in Liechtenstein, and the 2014–2017 government budget outlined such a proposal. However, it was rejected in 2015. One possible reason is that two-thirds of the listenership of Radio Liechtenstein is Swiss and would not pay such a fee.
Malaysia
Until it was discontinued in April 2000, the television licence in Malaysia cost MYR 24 per year (MYR 2 per month), one of the lowest television licence fees in the world. RTM is now funded through government taxation and advertising, while Media Prima owns four private broadcasting channels: TV3, NTV7, 8TV and TV9. The other two, TV Alhijrah and WBC, are smaller broadcasters. Astro is a pay-television service funded by its customers' monthly subscription fees, as are HyppTV and ABNXcess.
Malta
The licence fee in Malta was €34.90. It was used to fund the television (TVM) and radio channels (Radio Malta and Radju Parliament) run by Public Broadcasting Services. Approximately two-thirds of TVM's funding came from the licence fee, with much of the remainder coming from commercials. Malta's television licence was abolished in 2011 when the free-to-air system was discontinued.
Netherlands
Advertising was introduced on public television and radio in 1967, but it was only allowed in short segments before and after news broadcasts. It was not until the late 1980s that so-called "floating commercial breaks" were introduced; these breaks are usually blocks of several commercials with a total duration of one to three minutes, placed between programmes so that the programmes themselves run uninterrupted. At the time, advertising on Sundays was still not allowed, largely due to the heavy influence of the churches; advertising on Sundays slowly began to appear in 1991.
With the plan to abolish the licence fee in 2000, owing to its excessive collection costs, and to pay for public television from government funds instead, income tax was increased in the late 1990s and the maximum running time of commercial breaks was extended to between 5 and 7 minutes. The Netherlands Public Broadcasting system is now funded by government subsidy and advertising. Commercial breaks may not exceed 15% of the daily available broadcasting time and 10% of the total yearly available time.
New Zealand
Licence fees were first used in New Zealand to fund the radio services of what was to become the New Zealand Broadcasting Corporation. Television was introduced in 1960 and with it the television licence fee, later known as the public broadcasting fee. This was capped at NZ$100 a year in the 1970s, and the country's two television channels, while still publicly owned, became increasingly reliant on advertising. From 1989, it was collected and disbursed by the Broadcasting Commission (NZ On Air) on a contestable basis to support local content production. The public broadcasting fee was abolished in July 1999. NZ On Air was then funded by a direct appropriation from the Ministry for Culture and Heritage.
North Macedonia
As of 19 January 2017, the licence fee was abolished.
The licence fee in the Republic of North Macedonia was around €26 per year. Until 2005 it was collected monthly as part of the electricity bill. The Law on Broadcasting Activity, which was adopted in November 2005, reads that the Public Broadcasting Service – Macedonian Radio and Television (MRT) shall collect the broadcast fee.
The funds collected from the broadcasting fee are allocated in the following manner:
72% for MRT for covering costs for creating and broadcasting programmes;
4.5% for MRT for technical and technological development;
16% for MRD (Makedonska Radiodifuzija – Public operator of the transmission networks of the Public Broadcasting Service) for maintenance and use of the public broadcasting network;
3.5% for MRD for public broadcasting network development and
4% for the Broadcasting Council for regulating and development of the broadcasting activity in the Republic of North Macedonia.
The MRT shall keep 0.5% of the collected funds from the broadcasting fee as a commission fee.
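As a simple arithmetic illustration of the allocation listed above, the Python sketch below splits a hypothetical pool of collected fees according to the stated percentages. It assumes, purely as an interpretation for illustration, that MRT's 0.5% commission is deducted first and the shares are then applied to the remainder; the amounts and recipient labels are illustrative only.

# Illustrative sketch of the broadcasting-fee allocation listed above.
# Assumption (for illustration only): the 0.5% MRT commission is deducted first,
# and the listed percentages are then applied to the remaining funds.

ALLOCATION = {
    "MRT programming": 0.72,
    "MRT technical and technological development": 0.045,
    "MRD network maintenance and use": 0.16,
    "MRD network development": 0.035,
    "Broadcasting Council": 0.04,
}

def allocate(collected: float) -> dict:
    """Split a hypothetical amount of collected fees into the shares above."""
    commission = collected * 0.005
    remainder = collected - commission
    shares = {name: round(remainder * share, 2) for name, share in ALLOCATION.items()}
    shares["MRT commission"] = round(commission, 2)
    return shares

# Hypothetical pool of 1,000,000 denars, purely for illustration.
for recipient, amount in allocate(1_000_000).items():
    print(f"{recipient}: {amount}")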
However, MRT never found an effective mechanism for collecting the broadcast fee and suffered severe underfunding as a result.
The Macedonian Government later decided to update the Law on Broadcasting, authorizing the Public Revenue Office to take charge of collecting the broadcast fee.
In addition to broadcast fee funding, Macedonian Radio-Television (MRT) also took advertising and sponsorship.
The broadcasting fee was also paid by businesses: hotels and motels were charged one broadcasting fee for every five rooms; legal persons and office space owners were obliged to pay one broadcasting fee for every 20 employees or other persons using the office space; and owners of catering and other public facilities possessing a radio receiver or TV set had to pay one broadcasting fee for each receiver or set.
The Government of the Republic of North Macedonia, upon a proposal of the Broadcasting Council, shall determine which broadcasting fee payers in populated areas that are not covered by the broadcasting signal shall be exempt from payment of the broadcasting fee.
Households including a blind person with a visual impairment of over 90%, or a person with a hearing impairment of more than 60 decibels, as determined in compliance with the regulations on disability insurance, were exempt from paying the broadcasting fee for the household in which the person's family lived.
Since the licence fee's abolition on 19 January 2017, citizens have been exempt from paying it, and Macedonian Radio and Television, Macedonian Broadcasting and the Agency for Audio and Audiovisual Media Services are financed directly from the Budget of the Republic of North Macedonia.
Norway
The licence fee in Norway was abolished in January 2020. Before that, there was a mandatory fee for every household with a TV, set at 3000 kr (c. €305) per annum in 2019. The fee was mandatory for any owner of a TV set and was the primary source of income for Norsk Rikskringkasting (NRK). It was charged per household, so addresses with more than one television receiver generally required only a single licence. An exception was made for household members who were no longer provided for by their parents, such as students living at home: if such persons owned a separate television, they had to pay the normal fee. Since 2020, NRK has been funded through taxation of each individual liable for income tax in Norway.
Romania
The licence fee in Romania was abolished in 2017.
In the past, the licence fee in Romania for a household was 48 RON (€10.857) per year. Small businesses paid about €45 and large businesses about €150. The licence fee was collected as part of the electricity bill and therefore could not be avoided. It made up part of Televiziunea Română's (TVR's) funding, with the rest coming from advertising and government grants. Some people alleged that the fee was effectively paid twice, both through the electricity bill and indirectly through cable or satellite operators, although those providers claimed otherwise. People had to prove that they did not own a TV receiver in order not to pay the licence fee, but owning a computer meant paying, as TVR content could be watched online. This was criticised because TVR had lost many of its viewers in recent years and because, even after the analogue switch-off on 17 June 2015, the broadcaster was still not widely available on digital terrestrial television and was encrypted on satellite (requiring a decryption card and a satellite receiver with a card reader). In addition, TVR's planned shift to DVB-T2, at a time when many sets were sold with only DVB-T tuners, threatened to make it unavailable to some users without a separate digital terrestrial receiver.
In 2016, the Parliament of Romania decided to abolish the fee on 1 January 2017. Since then, TVR's funding mainly comes from government grants and advertising.
Singapore
Residents of Singapore with TVs in their households or TVs and radios in their vehicles were required to acquire the appropriate licences from 1963 to 2010. The cost of the TV licence for a household in Singapore was S$110. Additional licences were required for radios and TVs in vehicles (S$27 and S$110 respectively).
The licence fee for television and radio was removed with immediate effect from 1 January 2011. This was announced during Finance Minister Tharman Shanmugaratnam's budget statement on 18 February 2011. Mr Shanmugaratnam chose to abolish the fees as they were "losing their relevance".
Soviet Union
In the Soviet Union, until 1961, all radio and TV receivers had to be registered at local telecommunications offices and a subscription fee paid monthly. Compulsory registration and subscription fees were abolished on 18 August 1961, and prices of radio and TV receivers were raised to compensate for the lost fees. The fee was not reintroduced in the Russian Federation after the Soviet Union collapsed in 1991.
Sweden
On 1 January 2019, the television licence in Sweden was scrapped and replaced by a "general public service fee", a flat income-based public broadcasting tax of 1% per person, capped at a maximum amount per year. The fee is administered by the Swedish Tax Agency on behalf of the country's three public broadcasters, Sveriges Television (SVT), Sveriges Radio (SR) and Sveriges Utbildningsradio (UR). It pays for 5 TV channels, 45 radio channels, and TV and radio services on the internet.
Previously the television licence was a household-based flat fee, last charged in 2018. It was payable in monthly, bimonthly, quarterly or annual instalments to the agency Radiotjänst i Kiruna, which was jointly owned by SVT, SR and UR. The fee was collected from every household or company possessing a TV set, and possession of such a device had to be reported to Radiotjänst as required by law. One fee was collected per household regardless of the number of TV sets, whether in the home or at other locations owned by the household, such as summer houses. Although the fee also paid for radio broadcasting, there was no separate fee for radios, the individual radio licence having been scrapped in 1978. Television licence evasion was suspected to be around 11 to 15%. Originally the charge was referred to as the "television licence"; in the 2000s the term was replaced by "television fee".
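As a rough illustration of the current income-based model described above, the following Python sketch computes 1% of a person's annual income up to a yearly cap. The cap value and function name are placeholders chosen for illustration, since the exact statutory amount is not given here.

# Minimal sketch of Sweden's income-based public service fee: 1% of an
# individual's annual income, capped at a yearly maximum.
# ANNUAL_CAP_SEK is a placeholder value, not the statutory figure.

ANNUAL_CAP_SEK = 1300  # hypothetical cap, for illustration only

def public_service_fee(annual_income_sek: float, cap: float = ANNUAL_CAP_SEK) -> float:
    """Return the yearly fee: 1% of income, but never more than the cap."""
    return min(0.01 * annual_income_sek, cap)

print(public_service_fee(100_000))  # 1000.0, below the cap
print(public_service_fee(500_000))  # limited to the placeholder cap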
Republic of China (Taiwan)
Between 1959 and the 1970s, all radio and TV receivers in Taiwan were required to be licensed, with an annual fee of NT$60. The practice was intended to prevent influence from mainland China (the People's Republic of China) by discouraging listeners from tuning in to its channels.
Countries that have never had a television or broadcasting licence
Andorra
Ràdio i Televisió d'Andorra, the public broadcaster, is funded by both advertising and government grants; there is no licence fee.
Brazil
In Brazil, there is no TV licence fee. The Padre Anchieta Foundation, which manages TV Cultura and the Cultura FM and Cultura Brasil radio stations, is financed through funding from the State Government of São Paulo as well as advertising and cultural fundraising from the private sector. In December 1997, an "Education and Culture Tax" was created, a state tax in São Paulo that financed the programming of the TV Cultura and Rádio Cultura stations maintained by the Padre Anchieta Foundation. The tax was charged monthly through electricity bills and varied according to consumers' energy consumption; however, its collection was declared unconstitutional by the Court of Justice of the State of São Paulo. The public resources dedicated to TV Cultura (that is, the gross budget of the Foundation) amounted to R$74.7 million in 2006, of which R$36.2 million was donated by private industry partners and sponsors.
The federal company Empresa Brasil de Comunicação, which manages TV Brasil and public radio stations (Rádio MEC and Rádio Nacional), is financed from the Federal Budget, besides profit from licensing and production of programs, institutional advertisement, and service rendering to public and private institutions.
China
China has never had a television licence fee to pay for the state broadcaster. The current state broadcaster, China Central Television (CCTV), established in 1958, is funded almost entirely through the sale of commercial advertising time, although this is supplemented by government funding and a tax of ¥2 per month from all cable television subscribers in the country.
Estonia
In Estonia there are three public TV channels: Eesti Televisioon (ETV), ETV2, and ETV+ (launched on 27 September 2015 and mostly targeting Russian-speaking viewers). Funding comes from a government grant-in-aid, around 15% of which was, until 2008, provided by fees paid by Estonian commercial broadcasters in return for their exclusive right to screen television advertising. Commercials on public television were stopped in 2002 (after a previous unsuccessful attempt in 1998–1999); one argument was that ETV's low-cost advertising rates were damaging the ability of commercial broadcasters to operate. The introduction of a licence fee system was considered but ultimately rejected in the face of public opposition.
ETV is currently one of only a few public television broadcasters in the European Union with neither advertising nor a licence fee, funded solely by national government grants. RTVE of Spain has a similar model, the Finnish broadcaster Yle followed suit from 2013, and the Latvian broadcaster Latvijas Televīzija has used the same model since 1 January 2021, funded from the national budget after previously combining government grant-in-aid with advertising.
Iran
Iran has never levied television licence fees. After the 1979 Islamic Revolution, National Iranian Radio and Television was renamed Islamic Republic of Iran Broadcasting, and it became the state broadcaster. In Iran, private broadcasting is illegal.
Latvia
The Public Broadcasting of Latvia is a consortium of the public radio broadcaster Latvijas Radio and the public TV broadcaster Latvijas Televīzija, which operates the LTV1 and LTV7 channels. After years of debate, the public broadcasters ceased airing commercial advertising on January 1, 2021, and became fully funded from the national budget. The introduction of a television licence had previously been debated but was opposed by the government.
Luxembourg
Luxembourg has never had a television licence requirement, largely because the country did not have its own national public broadcaster until 1993. The country's first and main broadcaster, RTL Télé Lëtzebuerg, is a commercial network financed by advertising, and the only other national broadcaster is the public radio station Radio 100,7, a small station funded by the country's Ministry of Culture and by sponsorships. The majority of television channels based in Luxembourg are owned by the RTL Group and include both channels serving Luxembourg itself and channels serving nearby countries such as Belgium, France, and the Netherlands, which nominally operate out of and are available in Luxembourg.
Monaco
Monaco has never had any kind of listener or viewer broadcasting licence fee. Since the establishment of both Radio Monte-Carlo in 1943 and Télévision Monte-Carlo in 1954, there has never been a charge to pay for receiving the stations as both were entirely funded on a commercial basis.
Nigeria
Television licences are not used in Nigeria, except in the sense of broadcasting licences granted to private networks. The federal government's television station, NTA (Nigerian Television Authority), has two broadcast networks – NTA 1 and NTA 2. NTA 1 is partly funded by the central government and partly by advertising revenue, while NTA 2 is wholly funded by advertisements. Almost all of the thirty-six states have their own television stations funded wholly or substantially by their respective governments.
Spain
RTVE, the public broadcaster, was funded by government grants and advertising income from the launch of its radio services in 1937 and its television services in 1956. Although the state-owned national radio stations removed all advertising in 1986, the public nationwide TV channels continued broadcasting commercial breaks until 2009. Since 2010, the public broadcaster has been funded by government grants and by taxes paid by private nationwide TV broadcasters and telecommunications companies.
United States
In the United States, privately owned commercial radio stations selling advertising quickly proved to be commercially viable enterprises during the first half of the 20th century. Although a few non-commercial radio stations were owned by governments (such as WNYC, owned by New York City from 1922 to 1997), most were owned by charitable organizations and supported by donations. The pattern repeated itself with television in the second half of the century, except that some governments, mostly states, also established educational television stations alongside the privately owned stations.
The United States created the Corporation for Public Broadcasting (CPB) in 1967, which eventually led to the Public Broadcasting Service (PBS) and National Public Radio (NPR). These, however, are loose networks of non-commercial educational (NCE) stations owned by state and local governments, educational institutions, or non-profit organizations, structured more like US commercial networks (though with some differences) than European public broadcasters. The CPB and virtually all government-owned stations are funded through general taxes and through donations from individuals (usually in the form of "memberships") and charitable organizations. Individual programs on public broadcasters may also be supported by underwriting spots paid for by sponsors, typically presented at the beginning and end of the program. Because between 53 and 60 percent of public television's revenues come from private membership donations and grants, most stations solicit individual donations through fundraising, pledge drives or telethons, which can disrupt regularly scheduled programming; normal programming can be replaced with specials aimed at a wider audience to solicit new members and donations.
The annual funding for public television in the United States was US$445.5 million in 2014 (including interest revenue).
In some rural portions of the United States, broadcast translator districts exist, funded by an ad valorem property tax on all property within the district or by a parcel tax on each dwelling unit within it. Failure to pay the TV translator tax has the same repercussions as failing to pay any other property tax, including a lien placed on the property and eventual seizure; in addition, fines can be levied on viewers who watch TV via the translator's signal without paying the fee. As the Federal Communications Commission has exclusive jurisdiction over broadcast stations, it is questionable whether a local authority can legally impose a fee merely to watch an over-the-air broadcast station. Depending on the jurisdiction, the tax may be charged regardless of whether the resident watches TV from the translator or instead via cable TV or satellite, or the property owner may certify that they do not use the translator district's services and obtain a waiver.
Another substitute for TV licences comes through cable television franchise fee agreements. The itemized fee on customers' bills is included or added to the cable TV operator's gross income to fund public, educational, and government access (PEG) television for the municipality that granted the franchise agreement. State governments also may add their own taxes. These taxes generate controversy since these taxes sometimes go into the general fund of governmental entities or there is double taxation (e.g., tax funds public-access television, but the cable TV operator must pay for the equipment or facilities out of its own pocket anyway, or the cable TV operator must pay for earmark projects of the local municipality that are not related to television).
Vietnam
Vietnam has never had a television licence fee. Advertising was introduced in the early 1990s as a way to generate revenue for television stations. The current state broadcaster, Vietnam Television, receives the majority of its funds through advertising and partly from government subsidies. Local television stations in Vietnam are also operated in a similar way.
Detection of evasion of television licences
In many jurisdictions, television licences are enforced. Besides claims of sophisticated technological methods for detecting operating televisions, detection of unlicensed television sets can be as simple as observing the lights and sounds of a television in use in an unlicensed home at night.
United Kingdom
Detection is a fairly simple matter because nearly all homes are licensed, so only those homes that do not have a licence need to be checked.
The BBC claims that "television detector vans" are employed by TV Licensing in the UK, although these claims are unverified by any independent source.
An effort to compel the BBC to release key information about the television detection vans (and possible handheld equivalents) based on the Freedom of Information Act 2000 was rejected. The BBC has stated on record "... detection equipment is complex to deploy as its use is strictly governed by the Regulation of Investigatory Powers Act 2000 (RIPA) and the Regulation of Investigatory Powers (British Broadcasting Corporation) Order 2001. RIPA and the Order outline how the relevant investigatory powers are to be used by the BBC and ensure compliance with human rights." The BBC has also resisted Freedom of Information Act 2000 requests seeking data on the estimated evasion rate for each of the nations of the UK.
Opinions of television licensing systems
Advocates argue that one of the main advantages of television fully funded by a licence fee is that programming can be enjoyed without interruptions for advertisements. They also argue that advertising-funded television is not truly free of cost to the viewer: the advertising mostly sells mass-market items, the cost of TV advertising is built into the price of those goods, and viewers therefore effectively pay for TV when they purchase those products. Viewers also pay in time lost watching advertising.
Europeans tend to watch one hour less television per day than North Americans, but in practice may be enjoying the same amount of programming while gaining extra leisure time by not watching advertisements. Even those European channels that do carry advertising carry about 25% less advertising per hour than their North American counterparts.
Critics of receiver licensing point out that a licence is a regressive form of taxation, because poor people pay more for the service in relation to income. In contrast, the advertisement model implies that costs are covered in proportion to the consumption of mass-market goods, particularly luxury goods, so the poorer the viewer, the greater the subsidy. The experience with broadcast deregulation in Europe suggests that demand for commercial-free content is not as high as once thought.
The third option, voluntary funding of public television via subscriptions, would require a subscription price higher than the licence fee if quality and output volume are not to decline, because not everyone who currently pays the licence would voluntarily subscribe. These higher prices would deter even more people from subscribing, leading to further price rises. In time, if public subscription television were encrypted to deny access to non-subscribers, the poorest in society would be denied access to the well-funded programmes that public broadcasters produce today in exchange for the relatively low cost of the licence.
In 2004, the UK government's Department for Culture, Media and Sport, as part of its BBC Charter review, asked the public what it thought of various funding alternatives. Fifty-nine per cent of respondents agreed with the statement "Advertising would interfere with my enjoyment of programmes", while 31 per cent disagreed; 71 per cent agreed with the statement "subscription funding would be unfair to those that could not pay", while 16 per cent disagreed. An independent study showed that more than two-thirds of people polled thought that, given the availability of TV subscriptions such as satellite television, the licence fee should be dropped. Regardless of this, however, the Department concluded that the licence fee was "the least worse option".
Another criticism is that governments use tax money to pay for content which, critics argue, should therefore enter the public domain; instead, public companies such as the British Broadcasting Corporation are given a copyright monopoly over that content, so the public cannot resell, remix, or reuse material funded by their taxes.
In 2005, the British government described the licence fee system as "the best (and most widely supported) funding model, even though it is not perfect"; that is, it believed the disadvantages of a licence fee to be smaller than those of all other methods. Indeed, the disadvantages of other methods have led some countries, especially in the former Eastern Bloc, to consider introducing a TV licence.
Both Bulgaria and Serbia have attempted to legislate to introduce a television licence. In Bulgaria, a fee is specified in the broadcasting law, but it has never been implemented in practice. Lithuania and Latvia have also long debated the introduction of a licence fee but have so far made little progress on legislating for one. In the case of Latvia, some analysts believe this is partly because the government is unwilling to relinquish the control of Latvijas Televīzija that funding from general taxation gives it.
The Czech Republic has increased the proportion of funding that the public broadcaster gets from licence fees, justifying the move with the argument that the existing public service broadcasters cannot compete with commercial broadcasters for advertising revenues.
Internet-based broadcast access
The development of the global Internet has created the ability for television and radio programming to be easily accessed outside of its country of origin, with little technological investment needed to implement the capability. Before the development of the Internet, this would have required specially-acquired satellite relaying and/or local terrestrial rebroadcasting of the international content, at considerable cost to the international viewer. This access can now instead be readily facilitated using off-the-shelf video encoding and streaming equipment, using broadband services within the country of origin.
In some cases, no additional technology is needed for international program access via the Internet, if the national broadcaster already has a broadband streaming service established for citizens of their own country. However, countries with TV licensing systems often do not have a way to accommodate international access via the Internet, and instead work to actively block and prevent access because their national licensing rules have not evolved fast enough to adapt to the ever-expanding potential global audience for their material.
For example, it is not possible for a resident of the United States to pay for a British TV Licence to watch all of the BBC's programming, streamed live over the Internet in its original format.
See also
Broadcast licence
City of license
Public broadcasting
Public radio
Public television
References
External links
TV licensing authorities
Broadcasting Fee Association – international organisation for television licence fee collecting organisations
A list of TV licence authorities by the European Audiovisual Observatory
Billag (Switzerland)
PEMRA (Pakistan)
Beitragsservice (Germany)
ORF-GIS (Austria)
Radiotjänst (Sweden)
TV Licences (South Africa)
TV Licensing (United Kingdom)
TV-maksuhallinto (Finland)
Licenses
Television terminology
Broadcast law |
48877 | https://en.wikipedia.org/wiki/Code%20talker | Code talker | A code talker was a person employed by the military during wartime to use a little-known language as a means of secret communication. The term is now usually associated with United States service members during the world wars who used their knowledge of Native American languages as a basis to transmit coded messages. In particular, there were approximately 400 to 500 Native Americans in the United States Marine Corps whose primary job was to transmit secret tactical messages. Code talkers transmitted messages over military telephone or radio communications nets using formally or informally developed codes built upon their native languages. The code talkers improved the speed of encryption and decryption of communications in front line operations during World War II.
There were two code types used during World War II. Type one codes were formally developed based on the languages of the Comanche, Hopi, Meskwaki, and Navajo peoples. They used words from their languages for each letter of the English alphabet. Messages could be encoded and decoded by using a simple substitution cipher where the ciphertext was the native language word. Type two code was informal and directly translated from English into the native language. If there was no word in the native language to describe a military word, code talkers used descriptive words. For example, the Navajo did not have a word for submarine, so they translated it as iron fish.
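As a rough illustration of how letter-by-letter substitution and descriptive term code words could be combined, the Python sketch below maps a couple of military terms to whole code words and spells everything else out one letter at a time. The term entries use the English glosses mentioned in the surrounding text, while the letter code words and the function name are invented placeholders, not real Navajo or Comanche vocabulary.

# Illustrative sketch of a code-talker-style code: common military terms map
# to whole code words, and any other word is spelled out with one code word
# per letter. The letter code words below are invented placeholders.

TERM_CODE = {
    "submarine": "iron-fish",  # descriptive term-level code word, as described above
    "tank": "turtle",
}

# One invented placeholder code word per letter of the English alphabet.
LETTER_CODE = {chr(ord("a") + i): f"word{i:02d}" for i in range(26)}

def encode(message: str) -> list[str]:
    """Encode word by word: use a term code word if one exists,
    otherwise spell the word out letter by letter."""
    out = []
    for word in message.lower().split():
        if word in TERM_CODE:
            out.append(TERM_CODE[word])
        else:
            out.extend(LETTER_CODE[c] for c in word if c in LETTER_CODE)
    return out

print(encode("submarine near reef"))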
The name code talkers is strongly associated with bilingual Navajo speakers specially recruited during World War II by the US Marine Corps to serve in their standard communications units of the Pacific theater. Code talking was pioneered by the Cherokee and Choctaw peoples during World War I.
Other Native American code talkers were deployed by the United States Army during World War II, including Lakota, Meskwaki, Mohawk, Comanche, Tlingit, Hopi, Cree, and Crow soldiers; they served in the Pacific, North African, and European theaters.
Languages
Assiniboine
Native speakers of the Assiniboine language served as code talkers during World War II to encrypt communications. One of these code talkers was Gilbert Horn Sr., who grew up in the Fort Belknap Indian Reservation of Montana and became a tribal judge and politician.
Basque
In November 1952, Euzko Deya magazine reported that in May of that year, upon meeting a large number of US Marines of Basque ancestry in a San Francisco camp, Captain Frank D. Carranza had thought of using the Basque language for codes. His superiors were concerned about risk, as there were known settlements of Basque people in the Pacific region, including: 35 Basque Jesuits in Hiroshima, led by Pedro Arrupe; a colony of Basque jai alai players in China and the Philippines; and Basque supporters of Falange in Asia. Consequently, the US Basque code talkers were not deployed in these theaters, instead being used initially in tests and in transmitting logistics information for Hawaii and Australia.
According to Euzko Deya, on August 1, 1942, Lieutenants Nemesio Aguirre, Fernández Bakaicoa, and Juanana received a Basque-coded message from San Diego for Admiral Chester Nimitz. The message warned Nimitz of Operation Apple to remove the Japanese from the Solomon Islands. They also translated the start date, August 7, for the attack on Guadalcanal. As the war extended over the Pacific, there was a shortage of Basque speakers, and the US military came to prefer the parallel program based on the use of Navajo speakers.
In 2017, Pedro Oiarzabal and Guillermo Tabernilla published a paper refuting Euzko Deya's article. According to Oiarzabal and Tabernilla, they could not find Carranza, Aguirre, Fernández Bakaicoa, or Juanana in the National Archives and Records Administration or in US Army archives. They did find a small number of US Marines with Basque surnames, but none of them worked in transmissions. They suggest that Carranza's story was an Office of Strategic Services operation to raise sympathy for US intelligence among Basque nationalists.
Cherokee
The first known use of code talkers in the US military was during World War I. Cherokee soldiers of the US 30th Infantry Division fluent in the Cherokee language were assigned to transmit messages while under fire during the Second Battle of the Somme. According to the Division Signal Officer, this took place in September 1918 when their unit was under British command.
Choctaw
During World War I, company commander Captain Lawrence of the US Army overheard Solomon Louis and Mitchell Bobb having a conversation in Choctaw. Upon further investigation, he found that eight Choctaw men served in the battalion. The Choctaw men in the Army's 36th Infantry Division were trained to use their language in code and helped the American Expeditionary Forces in several battles of the Meuse-Argonne Offensive. On October 26, 1918, the code talkers were pressed into service and the "tide of battle turned within 24 hours ... and within 72 hours the Allies were on full attack."
Comanche
German authorities knew about the use of code talkers during World War I. Joseph Goebbels declared that Native Americans were fellow Aryans. In addition, the Germans sent a team of thirty anthropologists to the United States to learn Native American languages before the outbreak of World War II. However, the task proved too difficult because of the large array of native languages and dialects. Nonetheless, after learning of the Nazi effort, the US Army opted not to implement a large-scale code talker program in the European theater.
Initially, 17 code talkers were enlisted but three were unable to make the trip across the Atlantic when the unit was finally deployed. A total of 14 code talkers using the Comanche language took part in the Invasion of Normandy and served in the 4th Infantry Division in Europe. Comanche soldiers of the 4th Signal Company compiled a vocabulary of 250 code terms using words and phrases in their own language. Using a substitution method similar to that of the Navajo, the code talkers used descriptive words from the Comanche language for things that did not have translations. For example, the Comanche language code term for tank was turtle, bomber was pregnant bird, machine gun was sewing machine, and Adolf Hitler was crazy white man.
Two Comanche code talkers were assigned to each regiment, and the remainder were assigned to the 4th Infantry Division headquarters. Shortly after landing on Utah Beach on June 6, 1944, the Comanche began transmitting messages. Some were wounded but none killed.
In 1989, the French government awarded the Comanche code talkers the Chevalier of the National Order of Merit. On November 30, 1999, the United States Department of Defense presented Charles Chibitty with the Knowlton Award, in recognition of his outstanding intelligence work.
Cree
In World War II, the Canadian Armed Forces employed First Nations soldiers who spoke the Cree language as code talkers. Owing to oaths of secrecy and official classification until 1963, the role of the Cree code talkers was less well known than that of their US counterparts and went unacknowledged by the Canadian government. A 2016 documentary, Cree Code Talkers, tells the story of one such Métis individual, Charles "Checker" Tomkins. Tomkins died in 2003, but was interviewed shortly before his death by the Smithsonian National Museum of the American Indian. While he identified some other Cree code talkers, "Tomkins may have been the last of his comrades to know anything of this secret operation."
Meskwaki
A group of 27 Meskwaki enlisted in the US Army together in January 1941; they comprised 16 percent of Iowa's Meskwaki population. During World War II, the US Army trained eight Meskwaki men to use their native Fox language as code talkers. They were assigned to North Africa. The eight were posthumously awarded the Congressional Gold Medal in 2013; the government gave the awards to representatives of the Meskwaki community.
Mohawk
Mohawk language code talkers were used during World War II by the United States Army in the Pacific theater. Levi Oakes, a Mohawk code talker born in Canada, was deployed to protect messages being sent by Allied Forces using Kanien'kéha, a Mohawk sub-set language. Oakes died in May 2019, the last of the Mohawk code talkers.
Muscogee (Seminole and Creek)
The Muscogee language was used as type two code (informal) during World War II by enlisted Seminole and Creek people in the US Army. Tony Palmer, Leslie Richard, Edmund Harjo, and Thomas MacIntosh from the Seminole Nation of Oklahoma and Muscogee (Creek) Nation were recognized under the Code Talkers Recognition Act of 2008. The last survivor of these code talkers, Edmond Harjo of the Seminole Nation of Oklahoma, died on March 31, 2014, at the age of 96. His biography was recounted at the Congressional Gold Medal ceremony honoring Harjo and other code talkers at the US Capitol on November 20, 2013.
Navajo
Philip Johnston, a civil engineer for the city of Los Angeles, proposed the use of the Navajo language to the United States Marine Corps at the beginning of World War II. Johnston, a World War I veteran, was raised on the Navajo reservation as the son of missionaries to the Navajo. He was among the few non-Navajo who spoke the language fluently. Many Navajo men enlisted shortly after the attack on Pearl Harbor and eagerly contributed to the war effort.
Because Navajo has a complex grammar, it is not mutually intelligible enough with even its closest relatives within the Na-Dene family to provide meaningful information to an eavesdropper. At the time, it was still an unwritten language, and Johnston believed Navajo could satisfy the military requirement for an undecipherable code. Its complex syntax and phonology, not to mention its numerous dialects, made it unintelligible to anyone without extensive exposure and training. One estimate indicates that at the outbreak of World War II, fewer than 30 non-Navajo could understand the language.
In early 1942, Johnston met with the commanding general of the Amphibious Corps, Major General Clayton B. Vogel, and his staff. Johnston staged simulated combat conditions which demonstrated that Navajo men could transmit and decode a three-line message in 20 seconds, compared to the 30 minutes required by the machines of the time. The idea of using Navajo speakers as code talkers was accepted, and Vogel recommended that the Marines recruit 200 Navajo. The first 29 Navajo recruits attended boot camp in May 1942; this first group created the Navajo code at Camp Pendleton.
The Navajo code was formally developed and modeled on the Joint Army/Navy Phonetic Alphabet that uses agreed-upon English words to represent letters. Since it was determined that phonetically spelling out all military terms letter by letter into words while in combat would be too time-consuming, some terms, concepts, tactics, and instruments of modern warfare were given uniquely formal descriptive nomenclatures in Navajo. For example, the word for shark referred to a destroyer, while silver oak leaf indicated the rank of lieutenant colonel.
A codebook was developed to teach the many relevant words and concepts to new initiates. The text was for classroom purposes only and was never to be taken into the field. The code talkers memorized all these variations and practiced their rapid use under stressful conditions during training. Navajo speakers who had not been trained in the code work would have no idea what the code talkers' messages meant; they would hear only truncated and disjointed strings of individual, unrelated nouns and verbs.
The Navajo code talkers were commended for the skill, speed, and accuracy they demonstrated throughout the war. At the Battle of Iwo Jima, Major Howard Connor, 5th Marine Division signal officer, had six Navajo code talkers working around the clock during the first two days of the battle. These six sent and received over 800 messages, all without error. Connor later said, "Were it not for the Navajos, the Marines would never have taken Iwo Jima."
After incidents when Navajo code talkers were mistaken for ethnic Japanese and were captured by other American soldiers, several were assigned a personal bodyguard whose principal duty was to protect them from their own side. According to Bill Toledo, one of the second group after the original 29, they had a secret secondary duty: if their charge was at risk of being captured, they were to shoot him to protect the code. Fortunately, none was ever called upon to do so.
To ensure consistent use of code terminologies throughout the Pacific theater, representative code talkers from each of the US Marine divisions met in Hawaii to discuss shortcomings in the code, incorporate new terms into the system, and update their codebooks. These representatives, in turn, trained other code talkers who could not attend the meeting. As the war progressed, additional code words were added and incorporated program-wide. In other instances, informal shortcut code words were devised for a particular campaign and not disseminated beyond the area of operation. Examples of code words include the Navajo word for buzzard, which was used for bomber, and the code word used for submarine, which meant iron fish in Navajo. The last of the original 29 Navajo code talkers who developed the code, Chester Nez, died on June 4, 2014.
Four of the last nine Navajo code talkers used in the military died in 2019: Alfred K. Newman died on January 13, 2019, at the age of 94. On May 10, 2019, Fleming Begaye Sr. died at the age of 97. New Mexico State Senator John Pinto, elected in 1977, died in office on May 24, 2019. William Tully Brown died in June 2019 aged 96. Joe Vandever Sr. died at 96 on January 31, 2020.
The deployment of the Navajo code talkers continued through the Korean War and after, until it was ended early in the Vietnam War. The Navajo code is the only spoken military code never to have been deciphered.
Nubian
In the 1973 Arab–Israeli War, Egypt employed Nubian-speaking Nubian people as code talkers.
Tlingit
During World War II, Tlingit-speaking American soldiers used their native language as a code against Japanese forces. Their actions remained unknown even after the declassification of the code talker program and the publication of the Navajo code talkers' story. The memory of five deceased Tlingit code talkers was honored by the Alaska legislature in March 2019.
Welsh
A system employing the Welsh language was used by British forces during World War II, but not to any great extent. In 1942, the Royal Air Force developed a plan to use Welsh for secret communications, but it was never implemented. Welsh was used more recently in the Yugoslav Wars for non-vital messages.
Wenzhounese
China used Wenzhounese-speaking people as code talkers during the 1979 Sino-Vietnamese War.
Post-war recognition
The Navajo code talkers received no recognition until 1968 when their operation was declassified. In 1982, the code talkers were given a Certificate of Recognition by US President Ronald Reagan, who also named August 14, 1982 as Navajo Code Talkers Day.
On December 21, 2000, President Bill Clinton signed Public Law 106-554, 114 Statute 2763, which awarded the Congressional Gold Medal to the original 29 World War II Navajo code talkers and Silver Medals to each person who qualified as a Navajo code talker (approximately 300). In July 2001, President George W. Bush honored the code talkers by presenting the medals to four surviving original code talkers (the fifth living original code talker was unable to attend) at a ceremony held in the Capitol Rotunda in Washington, DC. Gold medals were presented to the families of the deceased 24 original code talkers.
Journalist Patty Talahongva directed and produced a documentary, The Power of Words: Native Languages as Weapons of War, for the Smithsonian National Museum of the American Indian in 2006, bringing to light the story of the Hopi code talkers. In 2011, Arizona established April 23 as an annual recognition day for the Hopi code talkers. The Texas Medal of Valor was awarded posthumously to 18 Choctaw code talkers for their World War II service on September 17, 2007, by the Adjutant General of the State of Texas.
The Code Talkers Recognition Act of 2008 (Public Law 110-420) was signed into law by President George W. Bush on November 15, 2008. The act recognized every Native American code talker who served in the United States military during WWI or WWII (with the exception of the already-awarded Navajo) with a Congressional Gold Medal. The medal was designed to be distinct for each tribe, with silver duplicates awarded to the individual code talkers or their next of kin. As of 2013, 33 tribes had been identified and honored at a ceremony at Emancipation Hall in the US Capitol Visitor Center. One surviving code talker, Edmond Harjo, was present.
On November 27, 2017, three Navajo code talkers, joined by the President of the Navajo Nation, Russell Begaye, appeared with President Donald Trump in the Oval Office in an official White House ceremony. They were there to "pay tribute to the contributions of the young Native Americans recruited by the United States military to create top-secret coded messages used to communicate during World War II battles." The executive director of the National Congress of American Indians, Jacqueline Pata, noted that Native Americans have "a very high level of participation in the military and veterans' service." A statement by a Navajo Nation Council Delegate and comments by Pata and Begaye, among others, objected to Trump's remarks during the event, including his use "once again ... [of] the word Pocahontas in a negative way towards a political adversary Elizabeth Warren who claims 'Native American heritage'." The National Congress of American Indians objected to Trump's use of the name Pocahontas, a historical Native American figure, as a derogatory term.
See also
Native Americans and World War II
United States Army Indian Scouts
Windtalkers, a 2002 American war film on Navajo radio operators in World War II
Notes
Further reading
Aaseng, Nathan. Navajo Code Talkers: America's Secret Weapon in World War II. New York: Walker & Company, 1992.
Connole, Joseph. "A Nation Whose Language You Will Not Understand: The Comanche Code Talkers of WWII". Whispering Wind Magazine, March, 2012, Vol. 40, No. 5, Issue #279. pp. 21–26
Durrett, Deanne. Unsung Heroes of World War II: The Story of the Navajo Code Talkers. Library of American Indian History, Facts on File, Inc., 1998.
Gawne, Jonathan. Spearheading D-Day. Paris: Histoire et Collections, 1999.
Holm, Tom. Code Talkers and Warriors – Native Americans and World War II. New York: Infobase Publishing, 2007.
Kahn, David. The Codebreakers: The Story of Secret Writing. 1967.
McClain, Salley. Navajo Weapon: The Navajo Code Talkers. Tucson, Arizona: Rio Nuevo Publishers, 2001.
Meadows, William C. The Comanche Code Talkers of World War II. Austin: University of Texas Press, 2002.
Singh, Simon, The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography. 2000.
External links
United States. Code Talkers Recognition Act of 2008
National Museum of the American Indian exhibition on Code Talkers, entitled Native Words/Native Wisdom
Northern Arizona University Cline Library Special Collections Code Talkers exhibition
Encyclopedia of Oklahoma History and Culture – Code Talkers
Official website of the Navajo Code talkers
Congressional Gold Medal recipients
History of cryptography
Native American military history
Native American United States military personnel
Pacific theatre of World War II
United States Marine Corps in World War II
World War II espionage |